Researchers fool a Tesla Model X with projectors

Ben Nassi, a PhD student at Ben-Gurion University of the Negev (BGU), carried out a series of experiments that fooled autonomous driving software using projected images.

Nassi and other BGU researchers conducted their work under the supervision of Professor Yuval Elovici. Their research project was titled "Phantom of the ADAS" (Phantom Attacks Against Advanced Driving Assistance Systems). The team used drones and projectors to beam images onto roads and nearby surfaces, fooling both the Mobileye 630 PRO system and the Tesla Model X's driver-assistance features.

In one of the tests, the team used a projector to beam an image of Tesla boss Elon Musk onto the road in front of an oncoming Tesla. The Model X responded to the depthless image by displaying a human obstacle on its digital dashboard, indicating that the spoof had worked. In another test, the team projected a car onto the road, which the Tesla perceived as an obstacle. Even though Tesla's radar sensors detected no physical obstruction ahead, the vehicle still responded to the projections the same way it would react to a real, three-dimensional obstacle.

The BGU researchers also fooled Intel's Mobileye 630 PRO advanced driver-assistance system. They projected a fake road sign onto a tree, which the system identified as a real sign and displayed as the corresponding digital sign inside the vehicle. Nassi and his team also demonstrated that these attacks could be carried out remotely using drones: a small projector carried by a drone flashed a fake road sign onto a surface for a mere 125 milliseconds. Most drivers wouldn't even notice the image, yet a vehicle equipped with the Mobileye 630 PRO braked suddenly to comply with the spontaneously appearing sign.
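A quick back-of-the-envelope calculation shows why such a brief flash is enough. The snippet below is purely illustrative; the 30 fps frame rate is my assumption about a typical ADAS camera, not a figure reported by the researchers.

```python
# Illustrative arithmetic (assumed 30 fps camera, not a figure from the research):
# how many camera frames could capture a 125 ms phantom projection?

FLASH_MS = 125   # duration the fake sign was projected
FPS = 30         # assumed camera frame rate

frame_interval_ms = 1000 / FPS                  # ~33 ms between frames
frames_exposed = FLASH_MS / frame_interval_ms   # ~3.75 frames

print(f"A {FLASH_MS} ms flash spans roughly {frames_exposed:.1f} frames at {FPS} fps")
# Several consecutive frames contain the sign, which is plenty for a detector
# to register it, yet the flash is brief enough for a human driver to miss.
```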

Finally, the tests also showcased a terrifying scenario: phantom projections caused a Tesla Model X to swerve out of its lane and toward oncoming traffic. All of these situations are alarming and raise serious questions for the tech and motoring communities at large.

“Those object detection models were not trained to distinguish between real and fake objects”

The Phantom tests show how attackers could use similar methods to interfere with a vehicle's systems or, worse, cause an accident. Ben Nassi was quick to point out that the vehicle's responses to the projections were not the result of bugs or poor programming. Rather, he noted, there is a broader flaw in the way some computer vision systems are trained to identify objects. He wrote that those object detection models “were not trained to distinguish between real and fake objects.”
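As a rough illustration of that flaw, consider a detection pipeline that acts on classifier confidence alone. This is a hypothetical sketch of the general idea, not Tesla's or Mobileye's actual code: nothing in it ever asks whether a confidently detected object is physically real.

```python
# Hypothetical sketch of the flaw Nassi describes (not any vendor's real pipeline):
# the model scores "does this look like a pedestrian/sign?", but no step downstream
# asks "is this a real, physical object or a projection?".

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "speed_limit_90"
    confidence: float   # classifier confidence in [0, 1]

def plan_response(detections: list[Detection], threshold: float = 0.7) -> list[str]:
    """React to every confident detection, with no real-vs-projected check."""
    actions = []
    for det in detections:
        if det.confidence < threshold:
            continue
        if det.label == "pedestrian":
            actions.append("brake")                     # a projected person triggers this too
        elif det.label.startswith("speed_limit"):
            actions.append(f"adjust speed: {det.label}")
    return actions

# A phantom projection the classifier scores highly is treated like a genuine obstacle:
print(plan_response([Detection("pedestrian", 0.92)]))   # -> ['brake']
```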

Additionally, while many other attacks require a skilled attacker to be physically present at the scene, Phantom attacks require little skill, knowledge or expensive equipment. That is what makes them so dangerous.

“The tests prove that human intelligence is still king on the roads”

Nassi also highlighted the fact that vehicle-to-vehicle communication technologies could help defend against attacks of this nature. With such systems in place, cars could cross-check detected objects with other vehicles for a more accurate reading.
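A minimal sketch of that cross-checking idea follows. The names and decision policy are my own illustration under simple assumptions, not a real V2V protocol.

```python
# Illustrative V2V cross-checking sketch (hypothetical, not a deployed protocol):
# before hard-braking for a detected obstacle, ask whether nearby vehicles see it too.

def obstacle_is_corroborated(peer_reports: dict[str, bool]) -> bool:
    """Return True if at least one nearby vehicle also reports the obstacle."""
    return any(peer_reports.values())

def decide_action(own_detection: bool, peer_reports: dict[str, bool]) -> str:
    # Simplified policy: a detection nobody else confirms is treated as suspect.
    # A real system would also weigh distance, sensor type and trust, not just a vote.
    if own_detection and obstacle_is_corroborated(peer_reports):
        return "emergency brake"
    if own_detection:
        return "flag as possible phantom; slow down and alert driver"
    return "continue"

# No nearby car sees the projected "pedestrian", so it is flagged rather than trusted:
print(decide_action(own_detection=True,
                    peer_reports={"car_behind": False, "oncoming_car": False}))
```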


The tests prove that human intelligence is still king on the roads. Most, if not all, of the projected images wouldn't fool a human driver, but the machines fell for the traps.

Tesla’s semi-autonomous driving system and Intel’s Mobileye 630 PRO are believed to be two of the safest in the world, but Nassi and his team exposed critical flaws in the way these systems recognise objects. On a more positive note, the researchers say they shared their findings with the respective companies back in 2019.