Artificial general intelligence (AGI) is the holy grail of AI research; an AGI system can think for itself, has common sense, matches human-level intelligence and could even pass for a human in conversation.
AGI raises big questions about ethics and human employment, but the most fundamental questions about AGI and how close we are to it have yet to be answered.
In late 2018, in a book titled Architects of Intelligence, futurist Martin Ford interviewed AI professionals who estimated, on average, a 50% chance that common sense AI would be achieved by 2099. Google's Ray Kurzweil put it at 2029. Rodney Brooks, co-founder of iRobot, was at the far end of the spectrum, predicting the year 2200.
Samir Hans, AI expert at Deloitte Risk and Financial Advisory, predicted that we're going to see tangible results in two to three years. AI can already learn from its mistakes, which means there's a feedback loop that improves the AI over time.
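That feedback loop can be illustrated with one of the oldest learning algorithms: a perceptron updates its weights only when it misclassifies an example, so every mistake feeds back into the model. The toy classifier below is a minimal sketch (not any vendor's system) that learns the logical OR function this way:

```python
def predict(w, b, x):
    """Linear threshold unit: fire if the weighted sum crosses zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(data, epochs=20, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in data:
            err = y - predict(w, b, x)  # nonzero only on a mistake
            if err:
                mistakes += 1
                # the error itself drives the correction
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                b += lr * err
        if mistakes == 0:  # nothing left to learn from
            break
    return w, b

# Toy task: learn logical OR from its four input/output pairs
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
```

After a few passes the loop stops making mistakes and the weights stop changing, a minimal version of the self-correcting behavior Hans described.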
So, why the wide range of predicted dates? Part of the problem is the very definition of AGI and how we measure it.
What is intelligence?
There was a time when a machine's ability to do mathematics or make logical inferences was a sign of intelligence. As soon as calculators were invented, the goalposts moved. Where is the goal for AGI? Maybe it's the ability to play chess. To parse human speech. To extract meaning from text. To translate from one language to another. To play Jeopardy. To pass a Turing test. As AI hits each of those milestones, it becomes clear that we still aren't seeing true AI in its final form.
"My definition of general AI is where a machine is fully autonomous, performing human tasks, without human involvement," said Josh Elliot, director of AI at Booz Allen Hamilton Inc. "And you need the ability to perceive, and learn, and act and the emotional side of what humans actually bring to the table."
Today, AI consists primarily of special-purpose machine learning systems and algorithms that can do one thing well, Elliot said. This is narrow AI, and while it's getting very good at individual tasks, AGI requires the ability to work across multiple silos.
"We can get very good results in specific domains, but there is a huge gap," said Raj Minhas, head of the AI research lab at the Palo Alto Research Center.
Minhas said that he wouldn't even hazard a guess about whether technology would ever achieve AGI.
"We will not get there using the techniques we have today," he added. "The techniques we have are the ladders that allow us to climb skyscrapers, but they won't get us to the moon."
With each new advance in AI, the definition of AGI gets more nebulous. As computers advance in calculation, analysis and prediction, the criteria for real intelligence become more amorphous, expanding to include feelings, self-awareness, empathy and ethics.
"You can have a machine that's very adept at learning, but does it have the ability to be sentient?" said Matt Jackson, VP of digital innovation services at Insight, a consulting and system integration firm.
With the increase in available computational power, the emergence of quantum computers and the improvements in AI algorithms, the building blocks for eventual common sense AI are falling into place.
"It will happen in a reasonable lifetime, 20 to 50 years, probably on the latter end," Jackson added.
But a machine that can be useful and capable enough to pass a Turing test is a lot closer, he said.
"If you take Siri or Alexa and think about how it can expand on the abilities [we] have today, then you're effectively simulating general AI with multiple types of narrow AI," he said. "I think we will have that in a decade or two."
Or maybe even sooner, according to some experts.
"When I studied AI at university, I was taught that we would have completed AI if it could beat a human being -- a grandmaster -- at the ancient Chinese game of Go," said Rob Clyde, chair of the ISACA board of directors.
"Software could not brute-force it, like it can [in] tic-tac-toe, checkers or chess. Two years ago, Google bought an AI that beat a grandmaster. The holy grail has been reached. Since then, they have built self-learning AIs that learn by playing themselves," he continued.
According to Clyde, with platforms like AlphaGo and Watson, the same AIs can do many different things -- achieving some experts' definitions of AGI.
"I would argue that the tipping point has been reached. We've reached the point where the growth is exponential. The pace is going to be incredible over the next few years," Clyde added.
Along with no clear definition of common sense AI, the AI industry also lacks clear metrics for progress.
One common approach is to measure the success of AI algorithms at particular tasks, such as image recognition or natural language processing. Here, AI systems are quickly approaching -- or already exceeding -- human levels of performance, and the rate of progress is accelerating.
In 2017, AI programs matched or exceeded human performance at identifying skin cancer, recognizing speech, and playing poker and arcade games. In 2018, AI matched humans at tasks including translating Chinese to English and grading prostate cancer. AI systems keep getting better at communicating with humans. Last May, a new language benchmark test, General Language Understanding Evaluation, was released. AIs scored at under 70% -- compared to around 90% for humans. By October, AIs had already improved, with scores crossing the 80% mark.
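Benchmarks like GLUE reduce performance across many tasks to a single headline score. The sketch below shows the general recipe, score each task and then average, using plain accuracy and invented toy data; the real benchmark mixes accuracy, F1 and correlation metrics across its tasks:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def benchmark_score(results_by_task):
    """Map task name -> (predictions, gold labels) to per-task and overall scores."""
    per_task = {task: accuracy(p, y) for task, (p, y) in results_by_task.items()}
    overall = sum(per_task.values()) / len(per_task)  # simple macro-average
    return per_task, overall

# Toy results for three hypothetical language-understanding tasks
results = {
    "sentiment":  ([1, 0, 1, 1], [1, 0, 0, 1]),   # 3 of 4 correct
    "entailment": ([0, 1, 1, 0], [0, 1, 1, 0]),   # 4 of 4 correct
    "similarity": ([1, 1, 0, 0], [1, 0, 0, 0]),   # 3 of 4 correct
}
per_task, overall = benchmark_score(results)
```

A single number like the 70% and 80% figures above hides a lot: a model can climb the average by mastering easy tasks while still trailing humans on the hard ones.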
If AI can continuously improve, how do we measure progress vs. achievement? Maybe AGI depends on common sense -- being able to explain what's going on in a situation: looking at a picture, for example, and answering questions about what's happening and why. But who sets the limit, the goal, and when is it achieved?
What comes after common sense?
AGI will enable companies to move on from AI technology that currently exists as narrow, special-purpose machine learning systems that are difficult to train and calibrate.
Banks, for example, will be able to gauge customer emotions, identify special needs cases, make more accurate predictions and better detect fraud, said Raghav Nyapati, digital automation product strategist at Bank of America.
"In order for the machines to reach that state of general artificial intelligence, it might take another 10 years. We are already seeing some of this, where systems are able to discern a person's emotional state based on their voice or facial recognition," Nyapati said.
With common sense AI, the Anderson Center for Autism could get answers to questions it didn't know it needed to ask, said CIO Gregg Paulk. His center currently uses HR tools from Ultimate Software to predict which of its most valuable employees are most likely to leave the company early enough for the organization to take steps to improve their job satisfaction.
"If [AI] had common sense, it could identify areas where we can improve, such as with tasks that we're doing on a daily basis," he said. "I think that would have a huge reward. A lot of times, you don't know what you don't know."