People who own self-driving cars in the coming years will be able to check their emails and even watch films on built-in screens, according to updates to the Highway Code.
Automated vehicles (AVs) have seen a boom in popularity thanks to companies such as Tesla. The government's announcement signals that AVs will have an important role in tackling congestion and climate change and in improving public transport. But AV technology is still in its early stages, and many disagree on the next steps for implementation.
Two-thirds of Britons say that they are uncomfortable riding in, walking around, or driving near autonomous cars. Another survey, by Insure the Gap, found that half of Britain doesn’t trust driverless technology. More interestingly, there seems to be a gender gap in opinion, with men being the more trusting of this technology.
The key question is: just how safe are self-driving cars?
Safety of driverless cars
Circulating the internet is a mass of videos of self-driving cars with seemingly homicidal tendencies, often failing to recognise the difference between a road and a human being. Understandably, many people are nervous about letting artificial intelligence (AI) control the roads, and rightfully so.
The self-driving car accident rate is higher than that of human-driven vehicles, which crash at a rate of 4.1 per million miles driven. However, when looking at severity, the injuries caused by self-driving cars are generally less serious than those caused by normal cars.
According to the Scottish Law Commission, for a vehicle to be deemed safe, it must allow a driver to regain control easily. An individual who is not monitoring the vehicle can be expected to remain ‘receptive’ to a transition demand, provided three criteria are met:
(1) ‘Clear, multi-sensory signals’: The transition should not just rely on visual and audio warnings. It should also include haptic signals (such as vibrations) so that they can be received by a hearing-impaired driver who is not looking at the console.
(2) ‘Sufficient time to gain situational awareness’: Current reviews suggest that ten seconds is enough; however, companies must give sufficient reasons for the amount of time given.
(3) ‘Mitigation against the risk of injury or damage if the user fails to take over’: Again, what is sufficient will need to be assessed by the regulator. As a minimum, we would expect the automated driving system (ADS) to bring the vehicle to a controlled stop in the lane (see the sketch below).
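To make the three criteria concrete, here is a minimal sketch in Python of how a transition demand might play out. Everything here is an illustrative assumption, not any manufacturer's real interface: the vehicle object and its methods are hypothetical, and the ten-second window simply echoes the figure from the reviews above.

```python
import time

# Hypothetical sketch of a transition ("handover") demand, mirroring the
# Scottish Law Commission's three criteria. All names are illustrative.

TAKEOVER_WINDOW_SECONDS = 10  # reviews suggest ten seconds may be enough

def issue_transition_demand(vehicle):
    # Criterion 1: clear, multi-sensory signals (visual, audio, haptic)
    vehicle.display_warning("Please take control of the vehicle")
    vehicle.play_alert_tone()
    vehicle.vibrate_seat_and_wheel()  # reaches a hearing-impaired driver

    # Criterion 2: sufficient time to regain situational awareness
    deadline = time.monotonic() + TAKEOVER_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if vehicle.driver_has_taken_control():
            vehicle.log_event("takeover_complete")  # timestamp matters later
            return
        time.sleep(0.1)

    # Criterion 3: mitigate risk if the user fails to take over
    vehicle.log_event("takeover_failed")
    vehicle.controlled_stop_in_lane()
```

The point of the structure is that no single channel, timer, or fallback is optional: a compliant system needs all three working together.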
Human error accounts for about 88 percent of car accidents; by using a driver-assisted approach to AI, we should reduce the damage caused by accidents by a significant amount.
Driver-assisted approach vs self-driving
What we have currently can best be described as ‘driver assisted’ technology, rather than fully self-driving cars. In other words, a human being should still be at the forefront when it comes to driving, and should be able to take control of a vehicle at any point, especially if it can prevent an accident.
Self-driving cars are still a long way from being deemed safe on the road, so taxi drivers can feel safe knowing that robots are not coming for their jobs anytime soon. Even if companies do employ self-driving cars, they will still need a capable driver at the helm.
However, there is still debate over whether we should have full automation or the gradual approach the government is taking.
Fully automated cars by 2045?
The gradual approach might even slow down the government’s goal of having fully automated cars on the road by 2045; as hesitation grows, it is probably easier to pull off the auto-driving Band-Aid quickly.
Hod Lipson and Melba Kurman believe that the idea that autonomous driving will happen in stages is a myth, one that will delay innovation and perhaps cost more lives in the process. When humans share the wheel, they cease to pay attention, making them less able to grab the wheel if something goes wrong.
The distinction between driver assistance and self-driving is crucial. Yet many drivers are currently confused about where the boundary lies. This can be dangerous. This problem is aggravated if marketing gives drivers the misleading impression that they do not need to monitor the road while driving – even though the technology is not good enough to be self-driving.
So fully self-driving cars are a long way away, but this won’t stop companies from advertising their vehicles as such. To stop misleading adverts, the Law Commission suggests two new criminal offences, targeting anyone who:
(1) uses terms such as ‘self-drive’, ‘self-driving’, ‘drive itself’, ‘driverless’ and ‘automated vehicle’ when discussing driving automation technology that is:
(a) not authorised under our recommended AV authorisation scheme, and (b) designed for use on roads or in public places; or
(2) is likely to confuse drivers into thinking that an unauthorised vehicle does not need to be monitored when on a public road or place.
Remember the early days of self-driving Teslas, when videos circulated the internet of the cars developing a mind of their own and running over dogs? And then those videos just stopped.
Before we jump into a conspiracy theory regarding Tesla, let’s talk about the concept of deep learning. Deep learning is the catalyst behind many of the advances and safety improvements in automated vehicles. Software that has taken decades to develop, it is an integral part of AI technology: an artificial neural network with three or more layers, designed to emulate brain activity.
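As a rough illustration of what ‘three or more layers’ means, here is a minimal sketch in Python (using NumPy, not any carmaker's actual software); the layer sizes and the pedestrian example are assumptions chosen purely for illustration.

```python
import numpy as np

# A minimal three-layer neural network sketch (input -> hidden -> output).
# Purely illustrative; real automated-driving systems are vastly larger.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer weights
W2 = rng.normal(size=(8, 2))   # hidden layer -> output layer weights

def forward(x):
    hidden = np.tanh(x @ W1)     # hidden layer activation
    return np.tanh(hidden @ W2)  # output layer activation

# A single "observation" with four features (e.g. simplified sensor readings)
x = np.array([0.2, -0.5, 0.1, 0.9])
print(forward(x))  # two output scores, e.g. "pedestrian" vs "not pedestrian"
```

Training adjusts the weight matrices from labelled examples, which is why exposure to more data matters so much.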
As automated driving systems are exposed to more information and respond to more stimuli, their intelligence grows. Their ability to recognise images also improves; in a way, the more we use self-driving cars, the more they can technically ‘see’. This should explain why the videos of Tesla cars making almost fatal mistakes diminished as the cars were used more often.
When self-driving cars are connected, they can exchange this information and boost their ability to recognise features, for example the difference between a pedestrian and a stoplight. In theory, the more people use self-driving cars, the safer they will be.
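One way connected cars could pool what they learn is an approach along the lines of federated averaging, sketched below. This is an assumption about how such sharing might work, not a description of any vendor's system; the function name and the toy weights are hypothetical.

```python
import numpy as np

# Sketch of federated-style learning: each car trains locally, then the
# fleet averages the resulting model weights. Hypothetical illustration.

def average_models(car_weights):
    """Combine locally trained weight arrays by simple averaging."""
    return np.mean(np.stack(car_weights), axis=0)

# Three cars each hold slightly different weights after local driving
fleet = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.1, 0.9])]
shared_model = average_models(fleet)
print(shared_model)  # [1. 1.] -- every car now benefits from all three
```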
Who is at fault?
A world with autonomous vehicles is a world of far-reaching change in the way we move and the way we think about highway infrastructure. A legal framework is already being developed, and we should hope that AVs don’t mean a lack of accountability when the driver is at fault.
Assuming the government follows this advice, after a complete handover, whether voluntary or in response to a transition demand, the person in question will assume all responsibilities, as they will be the driver. If no handover has occurred, then, as the government has laid out, the insurance company will be liable. Technology can help us pinpoint the exact time a takeover occurred.
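For liability to turn on the moment of handover, the vehicle would need a trustworthy record of it. Below is a minimal sketch of such an event log; the hash-chaining scheme, which makes after-the-fact edits detectable, is an assumption for illustration rather than any mandated standard.

```python
import hashlib
import json
import time

# Sketch of a tamper-evident takeover log: each entry includes a hash of
# the previous entry, so altering history after an accident is detectable.
# Illustrative only; not a regulatory requirement.

log = []

def record_event(event_type):
    previous_hash = log[-1]["hash"] if log else "genesis"
    entry = {"event": event_type, "timestamp": time.time(), "prev": previous_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

record_event("transition_demand_issued")
record_event("driver_takeover_complete")  # liability shifts to the driver here
print(json.dumps(log, indent=2))
```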
Mad Max or Star Trek?
The future of self-driving seems uncertain, and the way our government is approaching driver-assisted vehicles may cause more harm than good. However, we won’t be seeing homicidal Hondas taking to the road, but rather a process of trial and error.
Looking at the data, self-driving vehicles pose less of a danger than normal cars: despite getting into more accidents, the incidents are relatively minor, so trial and error may not be such a catastrophic approach.
Impractical as it may sound, the more people who adopt self-driving vehicles, the safer our roads will be.