Part one of this story on the future of AI explained how technology developments have led to a resurgence in a field that has progressed in fits and starts since the 1950s. Today's cheap storage and inexpensive, powerful compute, combined with an explosion in data, have revived interest in deep learning and "neural nets," which use multiple layers of data processing that proponents sometimes liken to how the brain takes in information.
The field is red hot today, with Google, Facebook and other technology giants racing to apply the technology to consumer products. In the second part of this story, SearchCIO senior news writer Nicole Laskowski reports on where two AI luminaries -- Facebook's Yann LeCun and Microsoft's Eric Horvitz -- see the trend going.
Like Microsoft, IBM and Google, Facebook Inc. is placing serious bets on deep learning, neural networks and natural language processing. The social media maven recently signaled its commitment to advancing these types of machine learning by hiring Yann LeCun, a well-regarded authority on deep learning and neural nets, to head up its new artificial intelligence (AI) lab. A tangible byproduct of this renewed focus on neural nets is Facebook's digital personal assistant, M, which rolled out to select users a few months back.
Today, M's AI technology is backed by human assistants, who oversee how M is responding to queries (such as placing a take-out order or making a reservation) and can step in when needed. According to a Wired article, the AI-plus-human system is helping Facebook build a model: When human assistants intervene to perform a task, their process is recorded, creating valuable data for the AI system.
Once enough of the "right data" is collected, M will be built on neural nets, which is where LeCun's team comes in. But even as the AI behind M advances to neural nets, humans will need to be in the loop to continue training the technology, according to the article.
Still work to be done
That's because M, like most contemporary AI systems, is a supervised learning system, which means the system can't teach itself. Instead, if LeCun wants an algorithm to recognize dogs, he has to feed it examples of what dogs look like -- and not just a handful of examples. "You have to do it a million times," LeCun said at EmTech, an emerging technology conference hosted by the MIT Technology Review. "But, of course, humans don't learn that way. We learn by observing the world. We figure out that the world is three-dimensional, that objects move independently. ... We'd like machines to be able to do this, but we don't have good techniques for this."
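LeCun's point -- that a supervised system only knows what it has been shown, labeled example by labeled example -- can be sketched in a few lines. The toy "classifier" below is illustrative only: the two-number feature vectors and the nearest-centroid rule are invented for this sketch and are far simpler than the neural nets LeCun's team uses, but the dependency is the same: no labeled examples, no recognition.

```python
# A minimal sketch of supervised learning: the model's entire notion of
# "dog" comes from labeled examples we feed it. The 2-d feature vectors
# below are hypothetical; real systems learn from millions of images.
from statistics import mean

# Labeled training data: (feature vector, label)
training = [
    ((0.9, 0.8), "dog"), ((0.8, 0.9), "dog"),
    ((0.1, 0.2), "cat"), ((0.2, 0.1), "cat"),
]

def centroid(points):
    # Average the vectors for one class, axis by axis.
    return tuple(mean(axis) for axis in zip(*points))

# "Training" here is just summarizing the labeled examples per class.
centroids = {
    label: centroid([x for x, y in training if y == label])
    for label in {y for _, y in training}
}

def predict(x):
    # Classify a new input by its nearest class centroid
    # (smallest squared Euclidean distance).
    return min(
        centroids,
        key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])),
    )

print(predict((0.85, 0.95)))  # → dog
```

A system like this never generalizes beyond its labels, which is exactly the limitation LeCun describes: it cannot work out on its own what a dog is, the way a person learns by observing the world.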
Building machines that have a kind of artificial common sense, according to LeCun, will be the big challenge for the future of AI. "It's done by solving a problem we don't really have good solutions for yet, which is unsupervised learning," he said.
One of the ways Facebook (among others) is trying to insert rudimentary reasoning into AI systems is with vector embedding, where unstructured data is mapped to a sequence of numbers that describe the text or object in detail. LeCun said the process brings together perception, reason, perspective and language capabilities so that if the algorithm encounters an unfamiliar word or image, it can make an educated guess by comparing and contrasting the rich mathematical descriptions of the unknown against the known. One of his clearest explanations about how vector embedding works had to do with language translation: "We can take two pieces of text, one in French and one in English, and figure out if they mean the same thing," he said.
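The comparing-and-contrasting step LeCun describes can be sketched with a toy embedding table and cosine similarity, a standard way to measure how close two vectors point. The three-dimensional vectors below are invented for illustration (real embeddings have hundreds of learned dimensions), but the mechanism is the same: an unfamiliar item is mapped to numbers, then matched against the numeric descriptions of known items.

```python
# Minimal sketch of vector embeddings: words as number sequences,
# compared by cosine similarity. These 3-d vectors are made up for
# illustration; real embeddings are learned from large corpora.
import math

embeddings = {
    "dog":   (0.90, 0.10, 0.30),
    "puppy": (0.85, 0.15, 0.25),
    "car":   (0.10, 0.90, 0.20),
}

def cosine(u, v):
    # Cosine of the angle between two vectors: 1.0 = same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(query_vec):
    # An "educated guess": the known word whose embedding best
    # matches an unfamiliar vector.
    return max(embeddings, key=lambda w: cosine(query_vec, embeddings[w]))

print(nearest((0.10, 0.95, 0.15)))  # → car
```

LeCun's translation example works the same way at sentence scale: if a French sentence and an English sentence map to nearby vectors, the system can judge that they mean the same thing.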
Facebook is not alone in taking this approach to improving AI. A recent article in ExtremeTech described the "thought vector" work Google is doing as a way of training computers to comprehend language, something they cannot currently do.
The future of AI
But language comprehension is a far cry from machines that can perform the same intellectual tasks that humans perform. Developing a common-sense program, or "artificial general intelligence," as it's called, is still a ways off, said LeCun, who shared the EmTech stage with Eric Horvitz, director of the Microsoft Research laboratory in Redmond, Wash. "If Eric were to grab his water bottle and walk off stage, you could close your eyes, be told that, and picture all of the actions he'd have to take to do that." AI machines, on the other hand, can't.
"The best way we can think of to train computers to be able to do that is to have them watch a lot of videos. Prediction is the essence of intelligence, and that's what we're trying to do," LeCun said.
Sci-fi films such as Her and Ex Machina may give the impression that the future of AI is conscious machines, but LeCun and Horvitz described generalized intelligence as a really hard problem to solve. "We're nowhere near being able to think that through," Horvitz said. "I do think that with the great work on the long-term road toward intelligence, we'll have competencies, new kinds of machines, and it may well be that deep competency is perceived as consciousness."
One of the basic obstacles Horvitz is interested in solving is a classic IT problem: AI technologies were essentially built in silos. For systems to become more powerful, they'll likely need to be knitted together. "When I say we're trying to build systems where the whole is greater than the sum of its parts, we sometimes see surprising increases in competency when we combine, for example, language and vision together," said Horvitz, who recently launched (and, along with his wife, is funding) the One Hundred Year Study on Artificial Intelligence at Stanford University, an interdisciplinary research effort on the effects of AI.
Exactly how AI systems should be integrated together is still up for debate, "but I'm pretty sure that the next big leaps in AI will come from these kinds of integrative solutions," Horvitz said. Silo busting may not be as sexy as creating conscious machines, but it could lead to what Horvitz called a "symphony of intelligence."
"With every advance, and particularly with the advances in machine learning and deep learning more recently, we get more tools. And these tools are often really interesting components we can add to older components and see them light up in new ways," he said.
More on the future of AI:
Ten steps CIOs should take to prepare for AI
A computer program passes the famed AI Turing Test
Smart machines raise challenging questions
AI is necessary if you're hiring software developers