Car companies and internet giants are all betting big that self-driving cars will be the wave of the future -- and they're racing to own as big a share of that future as possible.
That's not just testing. The point of all those miles is to refine the machine learning algorithms that underlie the decisions the cars make. The more miles cars drive themselves, the more experience they earn. And just like a teenager with a learner's permit, experience makes the cars better drivers.
The algorithms collect data on everything, including weather conditions, human actions and the objects surrounding the cars. They use this data to predict what those objects will do and decide how to react. Just like any other deep-learning exercise, the more data the better.
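The sense-predict-decide loop described above can be sketched in miniature. This is purely illustrative -- every name here is hypothetical, and real autonomous vehicles use trained deep models over rich sensor data, not hand-written rules like these:

```python
# Toy sketch of the observe -> predict -> decide loop.
# Illustrative only: real systems replace these hand-written
# rules with learned perception, prediction and planning models.

from dataclasses import dataclass

@dataclass
class Observation:
    object_type: str          # e.g. "pedestrian", "cyclist", "vehicle"
    distance_m: float         # distance ahead of the car, in meters
    closing_speed_mps: float  # how fast the gap is shrinking (m/s)

def predict_time_to_collision(obs: Observation) -> float:
    """Crude stand-in for a learned motion-prediction model."""
    if obs.closing_speed_mps <= 0:
        return float("inf")  # the object is not getting closer
    return obs.distance_m / obs.closing_speed_mps

def decide(obs: Observation) -> str:
    """Map a prediction to an action, as a planner would."""
    ttc = predict_time_to_collision(obs)
    if ttc < 2.0:
        return "brake"
    if ttc < 5.0:
        return "slow_down"
    return "maintain_speed"

# A pedestrian 10 m ahead, closing at 8 m/s: 1.25 s to impact.
print(decide(Observation("pedestrian", 10.0, 8.0)))  # -> brake
```

The "more data the better" point maps onto the prediction step: the hard part is not the decision rule but estimating what each object will do next, and that estimate improves with every mile of recorded experience.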
And now we'll see if the general public believes these algorithms will make the right decisions. Ride-hailing service Uber has said that it will start offering trips in self-driving cars to passengers in Pittsburgh by the end of the month. These trips will still have humans in the driver's seats to take the wheel if need be, but that safety net won't exist in the fully autonomous future these companies envision.
Auto manufacturer Ford unveiled plans this month to start producing fully autonomous cars. These cars won't have steering wheels or gas and brake pedals. Every action will be directed by an algorithm, without the possibility of human intervention. Other manufacturers, including General Motors and Tesla, are heading in the same direction.
This basically amounts to a big bet on the public's trust in machine learning algorithms to make decisions. After all, there's no greater sign of trust than putting your life in someone's (or something's) hands. But there are reasons to be dubious on this point. In February, a Google self-driving car crashed into a bus in the company's hometown of Mountain View, Calif. And in May, a Tesla driving in autopilot mode crashed in Florida, killing its driver.
It's unclear if the machine learning algorithms made the wrong decisions in these cases, or if the crashes were something a human driver could have avoided. But placing blame isn't really the point. If people hear about autonomous car crashes, they're going to think autonomous cars are unsafe.
More than 37,000 people die in car crashes in the U.S. every year. But if you ask the average driver, they'd probably say driving is more or less safe. It's generally believed that self-driving cars will drastically reduce the number of traffic fatalities, but to the family who loses a loved one in a freak crash, self-driving cars will be unsafe. People think in anecdotes, not statistics.
So, it will be worth keeping an eye on whether news about the benefits of autonomous vehicles outweighs the stories about mishaps. This could play an important role in the public's willingness to accept self-driving cars and the machine learning-driven decisions they make.
These types of algorithms already play a large role in everyday life. Most of the general public is aware of things like Amazon's recommendation engines, which are powered by machine learning. Most of the time, people don't see the analytics, nor do they care. But when you're speeding down a highway at 75 mph with nothing more than a machine learning algorithm separating you from death, it's hard to imagine anything more interesting.