Robots are traditionally built to look like humans, but does the locus of their "brains" also have to mimic the human model? "The assumption is that, like us, the robot should have all of its intelligence onboard," Gill Pratt, CEO at Toyota Research Institute, said at the MIT Disruption Timeline Conference.
As it turns out, an anthropomorphized command center is an inefficient design, especially in light of advanced technologies like cloud computing. The transmission of information between humans, be it through speech or gestures, is remarkably slow. "We communicate only at around ten bits per second of information," Pratt said. "The last time you checked how fast your internet connection was at home, I hope it was faster than ten bits per second."
Rather than have the computing take place inside the robot, Pratt said a more efficient design is cloud-based robotics. The mechanical brain operates in the cloud so that it can take advantage of greater storage, communication and computational power. Doing so enables what the automotive industry refers to as fleet learning -- when one robot learns something, the cloud enables a network effect so that all robots learn the same thing, according to Pratt.
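The network effect Pratt describes can be sketched in a few lines. This is a toy illustration, not any real Toyota or cloud-robotics API -- the `CloudStore` and `Robot` classes and the skill names are hypothetical:

```python
# Toy sketch of fleet learning: each robot learns locally, pushes what it
# learned to a shared cloud store, and every robot in the fleet benefits.

class CloudStore:
    """Shared knowledge base: skill name -> best observed success rate."""
    def __init__(self):
        self.skills = {}

    def push(self, skill, success_rate):
        # Keep the best result the fleet has found for this skill so far.
        if success_rate > self.skills.get(skill, 0.0):
            self.skills[skill] = success_rate

    def pull(self, skill):
        return self.skills.get(skill, 0.0)


class Robot:
    def __init__(self, name, cloud):
        self.name = name
        self.cloud = cloud

    def learn(self, skill, success_rate):
        # Local experience is immediately shared with the whole fleet.
        self.cloud.push(skill, success_rate)

    def competence(self, skill):
        # Every robot sees the fleet's best result, not just its own.
        return self.cloud.pull(skill)


cloud = CloudStore()
r1, r2 = Robot("r1", cloud), Robot("r2", cloud)
r1.learn("open_door", 0.9)         # only r1 practiced this skill...
print(r2.competence("open_door"))  # ...but r2 now knows it too: 0.9
```

The key point is that the learned skill lives in the shared store, not in either robot, so one robot's experience raises the competence of the entire fleet.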
Artificial intelligence (AI) is already benefitting from internet-related technologies. Machine learning applications rely on enormous data sets created by mobile devices, internet connectivity and social media. Because of access to billions of images, for example, object recognition technology can now function at near human-level accuracy. "These algorithms have been around for decades," Pratt said. "But what's changed is the computing caught up, the data caught up and the internet caught up, and suddenly, it was possible to show how good they were."
The robotics field could reap similar advances with cloud-based robotics, which has the potential to spark "hyper-exponential growth of capability," Pratt said. Translation: When cloud-based robotics takes root, the robotics field will experience advances at a faster pace than Moore's Law, the observation that chip performance roughly doubles every 18 to 24 months. "It's going to be more like a snap," Pratt said. "Suddenly, the machines will be quite good at what they do."
Sensors vs. maps
The first panel discussion at the MIT Disruption Timeline Conference focused heavily on the difficulties of designing level five autonomous vehicles, defined as capable of driving anywhere without human assistance.
One element up for debate is the reliance on sensors versus maps. Google's Waymo places an emphasis on developing incredibly precise maps for its vehicles. A vehicle's sensors collect data in real time, and that data is layered over the detailed maps the company has already built. The sensor-map duo allows "the car to know its position on the road to within 10 centimeters of accuracy," according to a Waymo blog post.
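The idea behind map-relative localization can be shown with a deliberately simplified, one-dimensional sketch. None of this reflects Waymo's actual algorithms; the landmark positions and range values are made up for illustration:

```python
# Toy sketch of map-relative localization: given landmark positions surveyed
# in a prior map and live range measurements to those landmarks, estimate the
# vehicle's position (1-D for simplicity, via least squares).

def localize(landmarks, ranges):
    """Each landmark at map position p with measured range r implies the
    vehicle is at roughly p - r. Averaging the implied positions is the
    1-D least-squares estimate."""
    implied = [p - r for p, r in zip(landmarks, ranges)]
    return sum(implied) / len(implied)


landmarks = [10.0, 25.0, 40.0]   # positions from the pre-built map (meters)
ranges = [7.05, 22.02, 36.98]    # live, slightly noisy sensor ranges (meters)
x = localize(landmarks, ranges)
print(round(x, 2))               # ~2.98 m, within a few cm of the true 3.0 m
```

Averaging over many landmarks is what lets small per-measurement errors cancel out, which is how a precise prior map plus noisy live sensing can yield centimeter-level position estimates.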
But Mobileye, which Intel acquired last week for $15.3 billion and whose object recognition technology is used in vehicles to avoid or mitigate collisions, uses low-bit maps and relies heavily on sensors. "The main philosophy there is kind of the opposite of Google: It's not to have very precise, very high density maps, but to have more robust sensors," said Tomaso Poggio, director of the Center for Brains, Minds and Machines, a professor in the department of brain and cognitive sciences at MIT, and a director of Mobileye.
Poggio pointed to Mobileye's alliance with Volkswagen, BMW and General Motors as an example. Vehicles outfitted with Mobileye technology crowdsource real-time data on road conditions.
TRI's Pratt said that maps are a sensor. "Roads don't suddenly move from one place to the other very often," he said, making maps a valuable data source. But maps can be incorrect due to construction or flooding, for example. "We use the map as a sensor, and we consider that like all sensors, it may have some noise in it," he said.
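Treating the map as one noisy sensor among several suggests a standard fusion approach: weight each source by its confidence. The sketch below uses inverse-variance weighting (the one-dimensional Kalman update); the numbers and the lane-center scenario are hypothetical, not TRI's actual method:

```python
# Sketch of "the map is a sensor": fuse a map-derived estimate of the lane
# center with a live camera estimate, weighting each by its confidence
# (inverse variance), as in a 1-D Kalman measurement update.

def fuse(map_est, map_var, sensor_est, sensor_var):
    """Inverse-variance weighted fusion of two noisy estimates."""
    w_map = 1.0 / map_var
    w_sensor = 1.0 / sensor_var
    est = (w_map * map_est + w_sensor * sensor_est) / (w_map + w_sensor)
    var = 1.0 / (w_map + w_sensor)   # fused estimate is more certain than either
    return est, var


# The map says the lane center is at lateral offset 0.0 m, but construction
# may have shifted it; the camera sees it at 0.4 m with tighter confidence.
est, var = fuse(map_est=0.0, map_var=0.25, sensor_est=0.4, sensor_var=0.05)
print(round(est, 3))  # fused estimate leans toward the more confident sensor
```

Because the map is assigned a variance like any other sensor, a stale map (construction, flooding) simply gets outvoted by fresher, more confident live measurements rather than being trusted absolutely.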
Robots at home
TRI isn't just working on autonomous vehicles; it's also developing robots for the home. "The thing that drives us at the Toyota Research Institute (TRI), primarily, is aging society," Pratt said. In less than 15 years, 20% of Americans will be 65 years of age or older. In Japan, where Toyota is headquartered, 20% of the population is already 65 or older; that percentage is set to double in 20 years, he said. A similar dynamic is happening across the globe.
"A big question, an economic question, is not only who is going to take care of us, but, also, where are we going to live when we're over age 65," Pratt said. Robots that assist a growing population of older adults could help them remain in their homes longer.
But, as with autonomous vehicles, building robots that operate alongside humans is a multifaceted endeavor. And one of the nuances TRI is sensitive to is the cultural differences that exist from one country to the next.
In Japan, "housekeeping is a noble art," Pratt said. It's a point of pride to keep house into your 80s, 90s and beyond. "And it may be that what the robot should do is not help you with housekeeping because that's something you want to do," he said. But in the United States, Americans don't regard housekeeping in the same manner and, likely, would happily give over dusting, sweeping and vacuuming duties to a machine.
Pratt's hope is that the advances in machine learning and cloud-based robotics will enable machines to adapt quickly. "We will find out very quickly what it is that people want to do with these machines and, naturally, the software will be good at that," he said.
"In the second machine age, it's not our muscles being augmented by machines but our minds." -- Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, Schussel Family Professor at MIT Sloan School, research associate at the National Bureau of Economic Research
"When I see a mother and a child on the sidewalk, and I'm trying to judge, are they going to cross against the traffic light, I make a prediction quite different than [if it's] two teenagers with skateboards kind of milling around. How much AI does it take to recognize the difference and to understand when one is likely to cross the street and the other is not?" -- Gill Pratt, CEO, Toyota Research Institute
"There are things from a research point of view, from a scientific point of view, that are quite opaque like how does deep learning and reinforcement learning work. But there are other things, algorithms we've used, that have also been hidden in code -- hidden in the way we built the machines -- that can be much more visible." -- Manuela Veloso, Herbert A. Simon University Professor, School of Computer Science, Carnegie Mellon University
"Society takes it differently when machines make mistakes versus when humans make mistakes." -- John Leonard, Samuel C. Collins Professor of mechanical and ocean engineering, MIT Department of Mechanical Engineering