AI could be humanity's last invention. Developing a type of AI that is so sophisticated it can create other, more intelligent AI entities could change the face of invention forever. Such entities would surpass human intelligence and reach superhuman achievements.
How close are we to creating an artificial superintelligence that surpasses its maker? The short answer: not very close, but the pace is quickening.
Beginnings of intelligent machines
The idea of machines that would compete with human intelligence or replace us has percolated through human history since antiquity, surfacing in Greek myths and manifesting in the animatronic statues built by Egyptian engineers.
Modern research into such machines began in the 1950s. In the decades that followed, computer scientists, mathematicians and experts in other fields strove to advance the field, either by improving the algorithms or by improving the hardware. Despite early pioneers' assertions that a thinking machine comparable to the human brain was imminent, the goal has proved elusive.
Starting in the 1970s, support for AI research went through several ups and downs until it surged again in 2012, propelled by the deep learning revolution. It's an open question how close the latest efforts will get us to AI superintelligence -- or when.
Here is a brief look at the types of AI that are precursors to superintelligence, from reactive AI to limited memory machines, theory of mind and self-aware AI.
Reactive AI
Early AI algorithms had one thing in common: they lacked memory and were purely reactive. Given a specific input, the output would always be the same.
That is the case with many machine learning models. Stemming from statistical math, these models were able to consider huge chunks of data, then produce a seemingly intelligent output. For instance, it is extremely difficult (if not impossible) to write a math formula for movie recommendations. But machine learning models were able to yield great results by looking at the purchase history of other customers. Solving that problem became one of the factors behind Netflix's success.
The same mechanism works for spam filters, which can statistically determine if the presence and density of certain words should raise a red flag.
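The word-counting idea behind such filters can be sketched in a few lines. The word list, weights and threshold below are hypothetical, chosen purely for illustration -- a real filter would learn them statistically from labeled mail -- but the sketch shows the reactive property: the same message always produces the same score.

```python
# Hypothetical word weights for illustration; a real spam filter would
# learn these from large corpora of labeled messages.
SPAM_WEIGHTS = {"free": 2.0, "winner": 3.0, "prize": 2.5, "meeting": -1.0}
THRESHOLD = 3.0  # illustrative cutoff

def spam_score(message: str) -> float:
    """Sum the weights of flagged words; purely reactive -- no memory,
    so identical input always yields an identical score."""
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in message.lower().split())

def is_spam(message: str) -> bool:
    return spam_score(message) >= THRESHOLD

print(is_spam("You are a winner claim your free prize"))  # flagged as spam
print(is_spam("Team meeting moved to 3pm"))               # passes the filter
```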
This kind of AI is known as "reactive AI," and it works well -- even performing beyond human capacity in certain domains. Most notably, IBM's Deep Blue, a reactive system, defeated chess grandmaster Garry Kasparov in 1997. However, reactive AI is also extremely limited.
In real life, many of our actions are not reactive -- we may not have all the information at hand to react to in the first place. Yet we are masters of anticipation and can prepare for the unexpected, even with imperfect information. This "imperfect information" scenario has been one of the target milestones in the evolution of AI and is necessary for a range of use cases, from language understanding to self-driving cars.
For that reason, researchers worked to develop the next level of AI, which had the ability to remember and learn.
Limited memory machines
As mentioned earlier, in 2012 we witnessed the deep learning revolution. Inspired by our understanding of the brain's inner workings, researchers developed algorithms that imitate the way neurons connect. One characteristic of deep learning is that it gets smarter the more data it is trained on.
Deep learning dramatically improved AI's image recognition capabilities, and soon other kinds of AI algorithms were born, such as deep reinforcement learning.
These AI models were much better at absorbing the characteristics of their training data, but more importantly, they were able to improve over time.
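The building block that deep learning stacks into many layers is a single artificial "neuron": a weighted sum of inputs passed through a nonlinearity, with the weights nudged toward the training examples. The toy sketch below trains one such neuron on the logical OR function -- an illustrative stand-in only; real deep learning composes millions of these units -- and shows the "more data, better fit" property via repeated passes over the examples:

```python
import math

def sigmoid(x: float) -> float:
    """Squash the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# One neuron: two input weights plus a bias, all starting at zero.
weights, bias, lr = [0.0, 0.0], 0.0, 1.0

# Toy training data: the logical OR of two binary inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(1000):  # more passes over the data -> a better fit
    for x, target in data:
        out = sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias)
        err = target - out  # nudge weights toward the example
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
        bias += lr * err

for x, target in data:
    out = sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias)
    print(x, round(out))  # rounded output matches the OR of the inputs
```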
One notable example is DeepMind's AlphaStar project, which managed to defeat top professional players at the real-time strategy game StarCraft II. The models were developed to work with imperfect information, and the AI repeatedly played against itself to learn new strategies and perfect its decisions.
In the StarCraft game, the decision a player makes early in the game may have decisive effects later. As such, the AI had to be able to predict the outcome of its actions well in advance. We witness the same concept in self-driving cars, where the AI must predict the trajectory of nearby cars in order to avoid collisions. In these systems, the AI is basing its actions on historical data. Needless to say, reactive machines were incapable of dealing with situations like these.
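The idea that an early decision must be valued by its distant payoff is the core of reinforcement learning. The sketch below is not AlphaStar's actual method (which used large-scale deep networks and league self-play); it is a minimal tabular Q-learning toy, with a made-up five-cell corridor where only the far-right cell gives reward, so the value of the very first move has to be learned by propagating future reward backward:

```python
import random

N_STATES = 5                 # a tiny corridor of 5 cells
ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic corridor: reward only on reaching the far-right cell."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):  # training episodes, each starting at the left end
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the table, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        # Pull Q toward immediate reward plus discounted future reward,
        # propagating the distant payoff back to early states.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The first "right" move earns nothing immediately, yet ends up valued
# above "left" because of its delayed payoff.
print(Q[(0, "right")] > Q[(0, "left")])
```

The same discounting of delayed reward, scaled up enormously, is what lets game-playing and driving systems weigh an early action by its consequences many steps later.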
Despite all these advancements, AI still lags behind human intelligence. Most notably, it requires huge amounts of data to learn simple tasks. While the models can be retrained to advance and improve, changes to the environment an AI was trained on force full retraining from scratch. Consider languages: once we learn a second language, learning a third and fourth becomes progressively easier. For AI, it makes no difference.
That is the limitation of the "narrow AI" we are dealing with -- it can become perfect at doing a specific task but fails miserably with the slightest alterations.
Theory of mind, artificial general intelligence
Many believe the next step after narrow AI is artificial general intelligence: an AI that would be nearly as smart as humans and, crucially, could learn from just a few samples, much as humans do.
Another type of AI in this category of machine intelligence is "theory of mind" -- the ability to attribute mental states to other beings. The concept is borrowed from psychology and would require the AI to understand the motives and intents of other entities.
Artificial emotional intelligence, which involves not only detecting human emotions but also empathizing with people, is already under development. Still, that is far from theory of mind AI, which would not only treat each human differently but also genuinely understand them.
Indeed, "understanding," as it is generally defined, is one of AI's huge barriers. The type of AI that can generate a masterpiece portrait still has no clue what it has painted. It can generate long essays without understanding a word of what it has said. An AI that has reached the theory of mind state would have overcome this limitation.
ASI, self-aware AI
The types of AI discussed above are precursors to conscious or self-aware machines, i.e., systems that are not only aware of the mental state of other entities but also aware of their own. This essentially means an AI that is on par with human intelligence and can mimic the same emotions, desires or needs.
Tech parlance prefers the term artificial superintelligence (ASI), and the two notions of the ultimate AI differ slightly: ASI describes intelligence capabilities that supersede human intelligence, while self-aware AI aims at human-like intelligence with its own thoughts, feelings and purposes.
This is a long-shot goal, for which we possess neither the algorithms nor the hardware. Yet current AI systems still do a remarkable -- and sometimes superhuman -- job at certain tasks, which can lead us to believe they are far more intelligent than they really are.
Whether ASI and self-aware AI will turn out to be one and the same remains to be seen. We still know too little about the human brain to build an artificial one that is nearly as intelligent.
Types of AI: 3 takeaways
- AI does not have to be super-intelligent or even on par with humans to yield remarkable results.
- The current AI algorithms still lag behind human intelligence as they require huge amounts of data to learn even the simplest tasks.
- "Understanding" is one of AI's huge barriers. AI can generate a masterpiece portrait but has no clue what it has painted.