A self-driving car (sometimes called an autonomous car or driverless car) is a vehicle that uses a combination of sensors -- such as cameras and radar -- and artificial intelligence (AI) to travel between destinations without a human operator. To qualify as fully autonomous, a vehicle must be able to navigate without human intervention to a predetermined destination over roads that have not been adapted for its use.
Companies developing and/or testing autonomous cars include Audi, BMW, Ford, Google, General Motors, Tesla, Volkswagen and Volvo. Google's test involved a fleet of self-driving cars -- including modified Toyota Priuses and an Audi TT -- navigating over 140,000 miles of California streets and highways.
Levels of autonomy in self-driving cars
The U.S. National Highway Traffic Safety Administration (NHTSA) lays out six levels of automation, beginning with Level 0, where humans do all the driving, through driver assistance technologies up to fully autonomous cars. Here are the five levels that follow zero automation:
- Level 1: An advanced driver assistance system (ADAS) aids the human driver with either steering, braking or accelerating, though not simultaneously. ADAS features include rearview cameras and alerts such as a vibrating seat warning when the driver drifts out of the traveling lane.
- Level 2: An ADAS can simultaneously steer and either brake or accelerate while the human remains fully engaged behind the wheel and continues to act as the driver.
- Level 3: An automated driving system (ADS) can perform all driving tasks under certain circumstances, such as parking the car. In those circumstances, the human driver must be ready to retake control and is still considered the primary driver of the vehicle.
- Level 4: An ADS is able to perform all driving tasks and monitor the driving environment in certain circumstances. In those circumstances, the ADS is reliable enough that the human driver needn't pay attention.
- Level 5: The vehicle's ADS acts as a virtual chauffeur and does all the driving in all circumstances. The human occupants are passengers and are never expected to drive the vehicle.
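The levels above can be expressed as a small lookup table, which is handy when software needs to gate features by automation level. The code below is an illustrative sketch; the phrasing of each entry is a paraphrase, not official NHTSA or SAE wording, and the helper function is hypothetical.

```python
# The automation levels described above as a lookup table.
# Entry wording is a paraphrase for illustration, not official text.
AUTOMATION_LEVELS = {
    0: "human does all driving",
    1: "ADAS assists with steering OR braking/accelerating",
    2: "ADAS steers AND brakes/accelerates; human stays engaged",
    3: "ADS drives in limited conditions; human must stand by",
    4: "ADS drives and monitors in limited conditions; no attention needed there",
    5: "ADS drives everywhere; occupants are passengers",
}

def driver_attention_required(level):
    """True if a human must be ready to intervene at this level.

    Per the descriptions above, Levels 0-3 require an attentive human;
    at Levels 4 and 5 the ADS handles the driving task in its domain.
    """
    return level <= 3

print(driver_attention_required(2))  # True
print(driver_attention_required(4))  # False
```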
As of 2018, car makers have reached Level 3. Fully self-driving vehicles remain at least a few years off: according to the NHTSA, manufacturers must clear a variety of technological milestones, and a number of important issues must be addressed, before autonomous vehicles can be purchased and used on public roads in the United States.
How self-driving cars work
AI technologies power self-driving car systems. Developers of self-driving cars use vast amounts of data from image recognition systems, along with machine learning and neural networks, to build systems that can drive autonomously.
The neural networks identify patterns in the data fed to the machine learning algorithms. That data includes images from cameras on self-driving cars, from which the neural network learns to identify traffic lights, trees, curbs, pedestrians, street signs and other parts of any given driving environment.
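A heavily simplified sketch of this learning process: a single-neuron classifier (a perceptron) adjusting its weights until it separates two object classes. Real systems train deep neural networks on millions of labeled camera images; the feature values, labels and class names here are invented for illustration.

```python
# Toy sketch: a perceptron learning to tell two object classes apart
# from hand-made feature vectors. All numbers are invented.
# Each sample: (redness, height_ratio) -> label (1 = traffic light, 0 = tree)
samples = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.85, 0.7), 1),
    ((0.2, 0.3), 0), ((0.1, 0.4), 0), ((0.3, 0.2), 0),
]

w = [0.0, 0.0]   # weights, one per feature
b = 0.0          # bias
lr = 0.5         # learning rate

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Perceptron update rule: nudge the weights toward misclassified samples.
for _ in range(20):
    for x, y in samples:
        err = y - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print(predict((0.88, 0.75)))  # -> 1 (classified as "traffic light")
print(predict((0.15, 0.35)))  # -> 0 (classified as "tree")
```

The same idea -- compare predictions to labels, then adjust weights to reduce the error -- scales up to the deep networks used for real image recognition.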
For example, Google's self-driving car project, called Waymo, uses a mix of sensors -- including Lidar (light detection and ranging, a technology similar to radar) and cameras -- and combines all of the data those systems generate to identify everything around the vehicle and predict what those objects might do next. This happens in fractions of a second. The system learns more as it drives, so the maturity of these systems matters.
How Google Waymo vehicles work:
- The driver (or passenger) sets a destination. The car's software calculates a route.
- A rotating, roof-mounted Lidar sensor monitors a 60-meter range around the car and creates a dynamic 3D map of the car's current environment.
- A sensor on the left rear wheel monitors sideways movement to detect the car's position relative to the 3D map.
- Radar systems in the front and rear bumpers calculate distances to obstacles.
- AI software in the car is connected to all the sensors and collects input from Google Street View and video cameras inside the car.
- The AI simulates human perceptual and decision-making processes and controls actions in driver-control systems such as steering and brakes.
- The car's software consults Google Maps for advance notice of things like landmarks and traffic signs and lights.
- An override function is available to allow a human to take control of the vehicle.
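The steps above amount to a sense-decide-act loop, which can be sketched as a single control cycle. The function and argument names below are invented for illustration, and the distance thresholds are arbitrary; a production system fuses sensor data with far more sophistication.

```python
# Simplified sketch of one cycle of the control loop described above.
# Names and thresholds are invented for illustration only.
def plan_step(route, lidar_obstacles_m, radar_distances_m, override=False):
    """Return a control action for one cycle of the loop."""
    if override:
        # The override function lets a human take control at any time.
        return "human control"
    # Radar in the front and rear bumpers reports distances to obstacles.
    if min(radar_distances_m, default=float("inf")) < 5.0:
        return "brake"
    # The lidar's dynamic 3D map flags anything inside the monitored range.
    if any(d < 15.0 for d in lidar_obstacles_m):
        return "slow"
    # Otherwise keep following the software-calculated route.
    return "follow route"

print(plan_step(["turn left at Main St"], [40.0], [30.0]))  # -> follow route
print(plan_step(["turn left at Main St"], [40.0], [3.0]))   # -> brake
```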
The pros and cons of autonomous cars
The top benefit touted by autonomous vehicle proponents is safety. A U.S. Department of Transportation and NHTSA statistical projection of traffic fatalities for 2017 estimated that 37,150 people died in motor vehicle traffic crashes that year. The NHTSA estimates that 94% of serious crashes are due to human error or poor choices, such as drunk or distracted driving. Autonomous cars remove those risk factors from the equation -- though self-driving cars are still vulnerable to other factors, such as mechanical issues, that cause crashes.
If autonomous cars can significantly reduce the number of crashes, the economic benefits could be enormous. According to the NHTSA, crash injuries impose large economic costs, including $57.6 billion in lost workplace productivity and $594 billion from loss of life and decreased quality of life due to injuries.
In theory, if the roads were mostly occupied by autonomous cars, traffic would flow smoothly and there would be less traffic congestion. In cars that are fully automated, the occupants could do productive activities while commuting to work. People who aren't able to drive due to physical limitations could find new independence through autonomous vehicles and would have the opportunity to work in fields that require driving.
Autonomous trucks have been tested in the U.S. and Europe to allow drivers to use autopilot over long distances, freeing the driver to rest or complete tasks and improving driver safety and fuel efficiency. This initiative, called truck platooning, is powered by adaptive cruise control (ACC), collision avoidance systems and vehicle-to-vehicle communications for cooperative adaptive cruise control (CACC).
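At the heart of ACC and CACC is a gap-keeping rule: each following vehicle adjusts its speed to hold a target time gap behind the vehicle ahead. The sketch below shows a minimal version of that logic; the constant time-gap policy is a common textbook approach, but the function name, gains and gap values are assumptions for illustration.

```python
# Minimal sketch of gap-keeping logic for adaptive cruise control in
# a platoon. The time gap and gain below are illustrative, not tuned.
def cacc_speed(own_speed, lead_speed, gap_m, time_gap_s=1.5, k_gap=0.2):
    """Return an adjusted speed (m/s) for one control step.

    Constant time-gap policy: the desired spacing grows with speed,
    and the controller tracks the leader's speed corrected by the
    spacing error.
    """
    desired_gap = own_speed * time_gap_s   # e.g. 1.5 s behind the leader
    gap_error = gap_m - desired_gap        # > 0 means too far back
    return lead_speed + k_gap * gap_error

# Truck at 25 m/s, leader at 25 m/s, 45 m gap (desired gap 37.5 m):
print(cacc_speed(25.0, 25.0, 45.0))  # -> 26.5 (speed up to close the gap)
```

In CACC the lead vehicle's speed arrives over vehicle-to-vehicle communication rather than being estimated from radar alone, which lets the platoon react faster and hold tighter gaps.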
One downside of autonomous vehicle technology is that riding in a vehicle without a driver behind the steering wheel may be unnerving, at least at first. And as self-driving capabilities become commonplace, human drivers may grow overly reliant on the autopilot technology, leaving their safety in the hands of automation even when they should act as backup drivers in case of software failures or mechanical issues.
In one example from March 2018, a Tesla Model X SUV was on autopilot when it crashed into a highway lane divider. According to the company, the driver's hands were not on the wheel despite visual warnings and an audible warning to put them back on the steering wheel.
Autonomous car safety and challenges
Autonomous cars must learn to identify countless objects in the vehicle's path, from branches and litter to animals and people. Other challenges on the road include tunnels that interfere with the Global Positioning System (GPS), construction projects that cause lane changes, and complex decisions, like where to stop to allow emergency vehicles to pass.
The systems must make instantaneous decisions about when to slow down, swerve or continue accelerating normally. This remains a challenge for developers, and there are reports of self-driving cars hesitating and swerving unnecessarily when objects are detected in or near the roadway.
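One common way to frame such an instantaneous decision is time-to-collision: distance to the object divided by the rate at which the gap is closing. The sketch below applies simple thresholds to that quantity; the function name and threshold values are invented for illustration, and real planners weigh many more factors (object type, road geometry, traffic behind the vehicle).

```python
# Sketch of an instantaneous decision rule based on time-to-collision
# (TTC). Thresholds are illustrative, not production values.
def decide(distance_m, closing_speed_mps, brake_ttc_s=2.0, slow_ttc_s=4.0):
    """Choose an action for one detected object."""
    if closing_speed_mps <= 0:
        # Object is holding distance or moving away: no action needed.
        return "continue"
    ttc = distance_m / closing_speed_mps   # seconds until contact
    if ttc < brake_ttc_s:
        return "brake"
    if ttc < slow_ttc_s:
        return "slow"
    return "continue"

print(decide(30.0, 20.0))  # TTC = 1.5 s -> "brake"
print(decide(90.0, 20.0))  # TTC = 4.5 s -> "continue"
```

The hesitation and unnecessary swerving mentioned above correspond to thresholds set too conservatively, or to noisy detections briefly producing small TTC values.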
This problem was evident in a fatal accident in March 2018 involving an autonomous car operated by Uber. The company reported that the vehicle's software identified the pedestrian but dismissed the detection as a false positive and failed to swerve to avoid hitting her. The crash prompted Toyota to temporarily halt its testing of self-driving cars on public roads, though its testing will continue elsewhere. The Toyota Research Institute is constructing a test facility on a 60-acre site in Michigan to further develop automated vehicle technology.
With crashes also comes the question of liability, and lawmakers have yet to define who is liable when an autonomous car is involved in an accident. There are also serious concerns that the software used to operate autonomous vehicles can be hacked, and automotive companies are working to address cybersecurity risks.
Car makers are subject to Federal Motor Vehicle Safety Standards, and the NHTSA reports that more work must be done for vehicles to meet those standards.
The road to driverless cars
The path toward self-driving cars began with incremental automation features for safety and convenience before the year 2000, with cruise control and antilock brakes. After the turn of the millennium, advanced safety features including electronic stability control, blind spot detection, and collision and lane departure warnings became available in vehicles. Between 2010 and 2016, advanced driver assistance capabilities such as rearview video cameras, automatic emergency brakes and lane centering assistance emerged, according to the NHTSA.
Since 2016, automation has moved toward partial autonomy, with features that help drivers stay in their lane, along with adaptive cruise control technology, and the ability to self-park.
Fully automated vehicles are not publicly available yet and may not be for many years. In the U.S., the NHTSA provides federal guidance for introducing automated driving systems onto public roads and as autonomous car technologies advance, so will the department's guidance.
Self-driving cars are not yet legal on most roads. In June 2011, Nevada became the first jurisdiction in the world to allow driverless cars to be tested on public roadways; California, Florida, Ohio and Washington, D.C., have followed in the years since.
The history of driverless cars goes back much further than that. Leonardo da Vinci designed the first prototype around 1478. Leonardo's car was designed as a self-propelled robot powered by springs, with programmable steering and the ability to run preset courses.