AMERICA’S National Highway Traffic Safety Administration (NHTSA) is investigating the fatal crash in May of a Tesla Model S electric car. Normally such an accident, tragic though it is for the friends and family of the victim, would not warrant a high-level inquiry of this sort. In the case in question, though, the car was operating on Autopilot. That is the name Tesla, an electric-vehicle-maker based in California, has chosen for its “autonomous-driving mode”, in which the vehicle itself, via sensors and computers, lifts from the person behind the wheel much of the burden of controlling the car. According to Tesla, neither the Model S’s driver nor the car’s own sensors noticed a large articulated lorry crossing the road ahead. The car therefore failed to brake, and it ended up careering under the lorry’s trailer. That ripped off its roof, killing the driver.
In the accident, which happened in Florida, the lorry, which was painted white, was set against a brightly lit sky, Tesla noted. One possibility is that the vehicle’s cameras, working in combination with its forward-facing radar, wrongly concluded that the lorry was an overhead sign with space beneath it. Some reports have suggested the driver might have been watching a video at the time. But whatever the NHTSA determines to be the cause, the accident makes plain that self-driving cars still have a long way to go before they are ready for routine use.
Tesla acknowledges this by describing Autopilot as an “assist” feature designed to relieve some of the workload of driving. When engaged, the system advises drivers: “Always keep your hands on the wheel. Be prepared to take over at any time.” Autopilot periodically checks pressure on the steering wheel to ensure that it is being held, and will slow the car if no pressure is detected. Yet plenty of videos have been posted on social media of drivers not touching the steering wheel and relying totally on their vehicle’s autonomous features. One of these was filmed by a driver from the back seat.
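The hands-on-wheel check described above can be sketched in a few lines. Everything here is illustrative: the torque threshold, the grace period and the idea of a per-sample update loop are assumptions for the sake of the sketch, not details of Tesla's actual system, which has not been published.

```python
class HandsOnWheelMonitor:
    """Illustrative watchdog that decides when a car should slow down
    because the driver's hands appear to be off the wheel.

    The threshold and grace-period values are invented for illustration.
    """

    def __init__(self, grace_period_s=15.0, torque_threshold_nm=0.5):
        self.grace_period_s = grace_period_s          # how long hands may be absent
        self.torque_threshold_nm = torque_threshold_nm  # minimum torque counted as "hands on"
        self.last_touch_s = 0.0

    def update(self, torque_nm, now_s):
        """Feed one steering-torque sample; return True if the car should slow."""
        if abs(torque_nm) >= self.torque_threshold_nm:
            self.last_touch_s = now_s     # driver is holding the wheel
            return False
        # no pressure detected: slow only once the grace period has elapsed
        return now_s - self.last_touch_s > self.grace_period_s
```

In this sketch the car tolerates brief hands-free moments but reacts once the grace period is exceeded, which matches the periodic-check behaviour the system advertises.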
The virtues of driving virtually
For Tesla and other firms developing autonomous vehicles (from information-technology companies such as Google and Uber to established carmakers), the systems now available are more akin to intelligent cruise control than robot chauffeurs. But the features they provide, such as lane-keeping, automatic braking, maintaining a safe distance from the vehicle in front and overtaking, are necessary steps towards fully self-driving cars that, backers say, will operate more safely than those driven by people. Most accidents are, after all, caused by human error.
To get to that happy state of affairs, though, much practical development work must take place. Doing this on the open road provides the most realistic data, but as the accident in Florida shows, this can be a risky business. A new facility, at the University of Warwick, in Britain, offers an alternative approach. It is a driving simulator specifically designed to test “intelligent” vehicles. It can thus interact with the sensors of an autonomous car and put that car through its paces without its needing to go on the road.
The car to be tested sits in the middle of the simulator, which projects a 360° high-definition image of the vehicle’s virtual surroundings, constructed from digital maps of 48km (30 miles) of roads in and around the nearby city of Coventry, together with adjacent buildings and scenery. The simulator comes complete with virtual traffic, cyclists, pedestrians and even dogs scampering into the road—all of which its operators can control. It also features surround-sound and actuators that move the vehicle as it would when accelerating, braking or cornering. Even the thump of a virtual pothole can be created.
Some car sensors will interact directly with the projected image. Camera-based systems on many vehicles typically use a form of artificial intelligence, called machine vision, to analyse the shapes of objects. But this can go wrong, says Paul Jennings, head of experimental engineering at Warwick, such as when cameras succumb to a condition known as “washout” caused by the glare of bright sunrises and sunsets. Unlike in the real world, hundreds of sunrises and sunsets can be created in the simulator every day. This will speed up the development of antiglare systems. Other visible hazards that might be hard for self-driving cars to manage—streets crowded with pedestrians, cars jumping red lights, joggers suddenly running into the road—can also be created endlessly in a simulator without endangering anyone.
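Part of what makes washout easy to reproduce synthetically is that, at its simplest, glare just pushes pixel values towards saturation, at which point shape information is gone. A minimal sketch, with an invented frame format (a grid of 8-bit grey levels) and a deliberately crude glare model:

```python
def apply_washout(frame, glare_strength):
    """Brighten every pixel towards saturation, clipping at 255.

    frame: 2-D list of 8-bit grey levels.
    glare_strength: 0.0 (no glare) to 1.0 (total washout).

    A pixel clipped to 255 carries no shape information, which is
    how glare can blind a camera-based machine-vision system.
    """
    boost = int(glare_strength * 255)
    return [[min(255, pixel + boost) for pixel in row] for row in frame]
```

A simulator can sweep `glare_strength` across hundreds of synthetic sunrises a day and check at what point the vision system stops recognising objects, which is exactly the kind of repetition that is impractical on real roads.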
Cameras are not, though, the only sensors fitted to autonomous vehicles. They also have devices that can detect how far away objects are. These may use ultrasound, radar or lidar (a system like radar but which substitutes laser light for radio waves). The researchers at Warwick can bypass these sensors and feed in simulated signals from the computer model. But they are also working on ways to test the sensors directly. One possibility is to generate radar or ultrasonic signals and send them to the test vehicle as if they had been reflected from cars and other objects in the projected scene.
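That last idea amounts to converting each virtual object's distance in the projected scene into the round-trip delay a real reflection would have produced. A toy sketch, with a hypothetical interface (for ultrasound, the speed of sound would replace the speed of light):

```python
SPEED_OF_LIGHT_M_S = 299_792_458  # propagation speed for radar signals

def simulated_echo_delays(object_distances_m):
    """Convert distances of objects in the virtual scene into the
    round-trip delays a radar would measure from real reflections."""
    return [2.0 * d / SPEED_OF_LIGHT_M_S for d in object_distances_m]

def distance_from_delay(delay_s):
    """What the car's radar would infer from an injected echo."""
    return SPEED_OF_LIGHT_M_S * delay_s / 2.0
```

Feeding such synthesised echoes to the test vehicle lets its real sensor electronics, not just its software, be exercised against the simulated scene.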
Besides testing a car’s hardware and software, Dr Jennings’s simulator will also test its “wetware”—ie, the humans who are being transported—for he plans to invite members of the public to become drivers and passengers. His idea is to use gaze-monitoring and cameras inside the vehicles to find out how those occupants respond to certain situations. In particular, he and his colleagues hope to see how quickly they realise that something might be going wrong and understand that they should therefore take back control of the car. This is important, for there is ample evidence that some people put too much trust in machines. For example, drivers have been known to follow instructions from satellite-navigation devices slavishly, even when the result is that they end up hundreds of kilometres from their intended destinations.
Hacked to death?
Autonomous vehicles also rely on navigation signals from satellites, and on other wireless transmissions. In the future, such connectivity will increase. Autonomous vehicles will probably communicate both with each other and with bits of transport infrastructure, such as traffic lights. The integrity of the signals involved will be paramount. So for safety’s sake, Dr Jennings’s machine can simulate what happens when contact is degraded or shut off—for example, when a vehicle enters a tunnel or a city canyon of tall buildings. A giant Faraday cage, formed from a mesh of materials that block electrical signals, surrounds the simulator. This both insulates it from outside interference and enables the signals that are required inside it to be created and controlled accurately, and terminated at will.
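Inside such a cage, degrading a radio link comes down to attenuating the controlled signal by a chosen amount. A sketch, assuming an invented sample-based interface (real radio-frequency test rigs work on waveforms in hardware, not Python lists):

```python
def attenuate(samples, attenuation_db):
    """Scale signal amplitudes down by a given attenuation in decibels,
    as when a vehicle drives into a tunnel.

    Amplitude attenuation follows the standard decibel rule:
    20 dB divides the amplitude by 10.
    """
    factor = 10.0 ** (-attenuation_db / 20.0)
    return [s * factor for s in samples]
```

Sweeping the attenuation from zero to total blackout lets researchers observe exactly when, and how gracefully, a vehicle copes with losing its satellite fix.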
On top of this testing of accidental interference with a car’s wireless traffic, the team will also try to hack deliberately into vehicles—something that it would be illegal as well as irresponsible to attempt on public roads. Such tests, nevertheless, need to be done. Carsten Maple, a cyber-security expert at Warwick, reckons criminals are only about five years away from being able to disable a car’s ignition remotely, holding it to ransom until the owner has made a payment. Indeed, in 2015 Fiat Chrysler recalled 1.4m vehicles in America after security researchers showed it was possible to take control of a Jeep Cherokee via its internet-connected entertainment system.
Despite the potential problems, though, Dr Jennings and his team are convinced that genuinely driverless vehicles have a big future. At first this future could be in controlled and specially designated areas, such as city centres. One vehicle that will be tested in the simulator has been designed with just such a purpose in mind. It is an electrically powered passenger-carrying pod produced by RDM, a firm in Coventry. The pods are already being tested in pedestrianised areas of Milton Keynes, a modernist British city. RDM says they are also intended for use in places such as airports, shopping centres, university campuses and theme parks.
On the open road, however, it may take longer before steering wheels become obsolete. Even after extensive testing in simulators, the performance of autonomous systems will still need to be verified in the real world. And no self-driving system will ever be completely foolproof. As the Florida crash showed, accidents will still happen—although, mercifully, there may be fewer of them.