Maximizing AI Proliferation
Reliability and efficiency are the two fundamental properties that will enable the maximum proliferation of AI.
Reliability
Prediction 1: All else being equal, the proliferation of AI will be directly proportional to its reliability.
Those who have an interest in maximizing the benefits of AI should therefore also have an interest in maximizing its reliability.
The following technologies will be necessary to maximize the reliability of AI:
The ability to construct AI entirely from source code and data that we can understand.
The source code and data must be written in one or more target formal languages (such as programming languages) that are capable of rich specifications.
A document system for the creation and management of formal specifications used in the design of AI.
New programming languages and data serialization formats that are capable of rich specifications.
Verified operating systems that are purpose-built for AI, including AGI.
Verified digital infrastructure.
Specialized hardware with multiple redundancies that is hardened against interference.
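To make the idea of a "rich specification" in the list above concrete, here is a minimal sketch in Python. A true specification language would prove these properties statically (as in verified systems); the runtime assertions here are only an illustration of what a specification states about a component, and the function name and contract are invented for this example.

```python
# Illustrative sketch only: a specification attached to code as a
# machine-checkable contract. A rich specification language would verify
# these properties at compile time rather than checking them at runtime.

def sorted_merge(a: list, b: list) -> list:
    """Merge two sorted lists into one sorted list."""
    # Precondition: both inputs are sorted.
    assert all(a[i] <= a[i + 1] for i in range(len(a) - 1))
    assert all(b[i] <= b[i + 1] for i in range(len(b) - 1))

    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])

    # Postconditions: output is sorted and is a permutation of the inputs.
    assert all(out[k] <= out[k + 1] for k in range(len(out) - 1))
    assert sorted(a + b) == out
    return out
```

The specification (the pre- and postconditions) is what a reader or a verifier can rely on, independent of how the body is implemented.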
Prediction 2: Certain reliability improvements will accelerate AI research and development.
Having AI in the form of human-readable source code will make it highly modular, which leads to the next prediction.
Prediction 3: There will be websites dedicated to hosting AI defined entirely by human-readable source code and data.
This will turn AI development into software development. It will then be possible to create AI capabilities once and reuse them indefinitely.
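As a hedged sketch of "create once, reuse indefinitely": if AI capabilities are ordinary human-readable modules, they can be published under stable names and composed like any other software. Every name below (`register`, `tokenize`, `count_words`) is hypothetical and chosen only for illustration.

```python
# Hypothetical sketch: AI capabilities published as ordinary, human-readable
# software components that later capabilities can reuse instead of
# re-implementing.

CAPABILITIES = {}  # registry mapping a stable name to a capability

def register(name):
    """Decorator that publishes a capability under a stable name."""
    def wrap(fn):
        CAPABILITIES[name] = fn
        return fn
    return wrap

@register("tokenize")
def tokenize(text: str) -> list:
    return text.split()

@register("count_words")
def count_words(text: str) -> int:
    # Reuses the already-published capability rather than rewriting it.
    return len(CAPABILITIES["tokenize"](text))
```

The design choice to illustrate is that a capability, once written and reviewed, becomes a shared building block rather than something each AI system must relearn.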
Prediction 4: AI defined by human-readable source code and data will enhance AI safety.
Being able to break apart AI systems into smaller components will improve the safety and security of AI. It will not be required to have one massive AI system that can do everything; we can lower risks by limiting capabilities to exactly what is needed for any given application.
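A minimal sketch of that capability-limiting idea, assuming a hypothetical catalog of capabilities: a deployment is assembled from an explicit allowlist, so everything outside the allowlist is simply absent from the shipped system.

```python
# Hedged sketch: assemble an application-specific AI from an explicit
# allowlist of capabilities. All capability names here are hypothetical.

ALL_CAPABILITIES = {
    "translate": lambda text: ...,
    "summarize": lambda text: ...,
    "generate_code": lambda spec: ...,
}

def build_system(allowed: set) -> dict:
    """Return a system containing only the allowed capabilities."""
    unknown = allowed - ALL_CAPABILITIES.keys()
    if unknown:
        raise ValueError(f"unknown capabilities: {unknown}")
    return {name: ALL_CAPABILITIES[name] for name in allowed}

# A translation product ships with exactly one capability; the unneeded
# (and potentially riskier) capabilities are not present at all.
translator = build_system({"translate"})
```

The safety property is structural: a capability that was never included cannot be invoked, misused, or exploited.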
The elegance of this approach is that it improves AI safety without limiting the proliferation of AI; in fact, it accelerates it.
Societal Reliability
The preceding discussion is purely technical, which is insufficient on its own. Societal levels of reliability must also be investigated, but these are outside the scope of this document.
Efficiency
I define AI efficiency in terms of computation and storage: the most efficient AI does the most work while using the least computation and storage. Both dimensions matter for an AI to be considered efficient.
Energy is implicitly tied to computation: lowering the amount of necessary computation will also lower the energy demand, so energy efficiency follows from operational and algorithmic efficiency. This framing is superior to reducing everything to energy efficiency because it addresses the efficiency of each AI implementation directly. Energy efficiency also depends on factors external to the AI, such as the hardware it runs on and the infrastructure it uses; the computational efficiency of the AI itself is more precise and focused, which brings clarity.
Storage efficiency must be broken down into the memory used by the implementation of the AI and the distribution size of the AI itself. Smaller AI distributions are more efficient, and implementations that use less memory are more efficient. It must be noted that computer science recognizes a fundamental time-space trade-off between computation and storage; the optimal balance between the two will have to be decided case by case.
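The computation-storage trade-off can be sketched with a toy stand-in for any reusable computation. The example below is illustrative only: one implementation spends storage (a cache) to avoid repeated computation, while the other spends computation on every call but uses constant storage.

```python
from functools import lru_cache

# Toy illustration of the time-space trade-off: two implementations of the
# same function that make opposite choices about computation vs. storage.

@lru_cache(maxsize=None)  # spend storage (a cache) to save computation
def fib_cached(n: int) -> int:
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

def fib_constant_space(n: int) -> int:
    # Spend computation on every call, but use O(1) storage.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Neither choice is universally optimal, which is why the balance has to be decided per application.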
Prediction 5: A correctly designed AGI will be able to run locally on resource-constrained embedded devices.
This prediction rests on the projection that AI will eventually be implemented using human-readable source code and data. That will enable AI features and capabilities to be expressed as efficiently as possible, using theoretically optimal algorithms and data structures, and will make it possible to realize that logic in custom hardware. Together, these advances will enable a new wave of efficiency gains for AI, which AGI will also exploit.
Prediction 6: It will be possible to distribute AGI in a highly compact, minimal form and allow it to learn over time while it operates.
This is a direct result of storage efficiency. An analogy is a network install of an open-source operating system distribution: the AGI distribution will be very small and easily shared over the Internet. It will then learn from both online and offline sources of information without having to be shut down or pre-trained.
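The "distribute small, learn while operating" idea can be sketched with a deliberately tiny online learner: its entire distributed form is a few lines of source code, and its knowledge accumulates one observation at a time with no offline training phase. This is an analogy only, not a claim about how an AGI would actually learn.

```python
# Illustrative sketch: a learner whose distribution is just its source code
# and whose state is built up incrementally while it operates.

class OnlineMean:
    """Running mean, updated one observation at a time."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def observe(self, x: float) -> None:
        # Incremental update: no batch of training data is ever stored.
        self.n += 1
        self.mean += (x - self.mean) / self.n

learner = OnlineMean()
for x in [2.0, 4.0, 6.0]:  # data arriving while the system is running
    learner.observe(x)
```

The point of the analogy is that nothing had to be shipped except the update rule; everything the learner "knows" was acquired in operation.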
Effects on Proliferation
In general, the more efficient AI becomes, the more it will spread. This creates an incentive to make AI distributions more compact and their implementations more efficient. The optimal way to do this is to represent AI entirely in the form of source code and data that we can understand.
If we can turn AI development into software and hardware development, then we will be able to reach the highest levels of efficiency. This is because the functionality of the AI will not be tied to any one specific model of computation. This is difficult to explain and requires a new perspective on machine learning.
Every machine learning model that runs on a computer is, in fact, a computer program. The way in which that program is implemented affects its efficiency. Even with hardware acceleration for AI, better ways to implement its logic still exist.
Instead of forcing our hardware to match a specific abstract model of computation, I suggest that we build AI in a way that represents its logic using any kind of data structure or algorithm. That is not currently possible with the way that we build AI today. The ability to represent AI entirely in the form of human-readable source code and data is precisely what will give us this capability.
Once we can implement the capability and logic of AI using any algorithm or data structure, we will gain the ability to maximize its efficiency. We will be able to represent the logic of AI directly in custom hardware designs, and it will become possible to design entirely new methods of computation purpose-built for task-specific AI. This will lead to efficiency gains that are orders of magnitude beyond anything we have today. And I believe this will be the critical turning point for AI in a new economy.
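The argument above can be illustrated with one logical function implemented two ways. The first mimics a tiny neural network (hand-chosen weights, not learned) and needs several multiply-adds and thresholds; the second expresses the same logic as a single bitwise operation. Both compute XOR, but the implementation determines the cost.

```python
# Toy illustration: the same function realized under two models of
# computation. Weights in the "network" form are hand-chosen for clarity.

def step(x: float) -> int:
    """Threshold activation."""
    return 1 if x > 0 else 0

def xor_network(a: int, b: int) -> int:
    # Two hidden units plus an output unit: several multiply-adds.
    h_or  = step(1.0 * a + 1.0 * b - 0.5)   # hidden unit computing OR
    h_and = step(1.0 * a + 1.0 * b - 1.5)   # hidden unit computing AND
    return step(1.0 * h_or - 1.0 * h_and - 0.5)  # OR and not AND = XOR

def xor_direct(a: int, b: int) -> int:
    # The same logic as one bitwise operation, typically one instruction.
    return a ^ b
```

Freeing AI logic from one fixed model of computation is what allows the second, radically cheaper form to be chosen, or to be realized directly in hardware.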