In the context of driverless cars, the concern is that objects detected by the car’s sensors, such as lidar or radar, can be made to appear closer or farther than they actually are, and the car’s software may need to take this possibility into account when determining the car’s course of action.
Researchers at Duke University have demonstrated the first attack strategy that can trick industry-standard autonomous car sensors into believing nearby objects are closer (or farther) than they actually are, without the attack being detected.
According to the research, fully defending autonomous cars against such attacks may require adding 3D visual capabilities or the ability to share data with nearby vehicles.
The results will be presented Aug. 10-12 at the 2022 USENIX Security Symposium.
Protecting against attacks is one of the main challenges facing researchers working on autonomous driving systems. A common safety practice is to check data from separate instruments against one another to ensure that their measurements agree.
The most common locating technology used by today’s autonomous car companies combines 2D data from cameras with 3D data from LiDAR, which is essentially laser-based radar. This combination has proven extremely resistant to a wide range of attacks that attempt to trick the visual system into perceiving the world inaccurately.
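As a rough illustration of how those two data streams fit together, the sketch below shows the standard pinhole projection that maps a 3D LiDAR return onto a 2D camera pixel. The focal length and principal point are made-up values for illustration, not parameters of any production system.

```python
# A minimal pinhole-projection sketch tying a 3D LiDAR return to a 2D pixel.
# The intrinsics below are illustrative assumptions.
f, cx, cy = 1000.0, 640.0, 360.0   # focal length (px), principal point (px)

def project(x, y, z):
    """Map a point (metres, camera frame, z pointing forward) to pixel (u, v)."""
    return f * x / z + cx, f * y / z + cy

# A LiDAR return 15 m ahead of the camera and 1 m to its left:
print(project(-1.0, 0.0, 15.0))    # (573.33..., 360.0)
```

Fusion pipelines generally rely on this mapping in both directions: LiDAR points that land inside a camera detection reinforce it, while detections with no supporting points raise suspicion.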
At least, until now.
“Our goal is to understand the limitations of existing systems so that we can protect against attacks,” said Miroslav Pajic, the Dickinson Family Associate Professor of Electrical and Computer Engineering at Duke. “This research shows how adding just a few data points in the 3D point cloud, ahead of or behind where an object actually is, can confuse these systems into making dangerous decisions.”
The new attack method adds erroneous data points to a car’s LiDAR sensor by firing a laser into it. Previous studies have shown that if those data points are drastically out of alignment with what the car’s camera is observing, the system can identify the attack.
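A minimal sketch of that kind of cross-check might look like the following: a LiDAR point is flagged unless it projects inside some bounding box the camera has detected. The box format, intrinsics, and pass/fail logic are simplifying assumptions, not the actual fusion pipelines studied in the paper.

```python
f, cx, cy = 1000.0, 640.0, 360.0   # same illustrative intrinsics as above

def project(x, y, z):
    return f * x / z + cx, f * y / z + cy

def is_consistent(point, camera_boxes):
    """Flag a LiDAR point unless it projects inside some camera bounding box."""
    u, v = project(*point)
    return any(u0 <= u <= u1 and v0 <= v <= v1
               for (u0, v0, u1, v1) in camera_boxes)

boxes = [(500, 300, 700, 450)]                  # one vehicle detected in the image
print(is_consistent((-1.0, 0.0, 15.0), boxes))  # True  -> passes the check
print(is_consistent((8.0, 0.0, 15.0), boxes))   # False -> flagged as suspicious
```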
But the new research from Pajic and his colleagues shows that 3D LiDAR data points carefully placed within a certain area of a camera’s 2D field of view can fool the system.
This exposed area extends out in front of the camera’s lens in the shape of a 3D pyramid with its tip cut off (a frustum). For a forward-facing camera mounted on a car, this means that a few data points placed in front of or behind another nearby vehicle can cause the system to perceive it as several meters farther away than it actually is.
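The geometry explains why such points slip past a 2D cross-check: every point along the ray through a given pixel projects back to that same pixel, so a spoofed return pushed along the ray stays inside the victim’s bounding box while its apparent depth changes freely. A toy demonstration, again with made-up camera values:

```python
f, cx, cy = 1000.0, 640.0, 360.0   # illustrative intrinsics, as before

def project(x, y, z):
    return f * x / z + cx, f * y / z + cy

true_pt = (-1.0, 0.0, 15.0)                  # genuine return on the car ahead
scale = 22.0 / 15.0                          # push the point 7 m deeper along its ray
spoofed = tuple(c * scale for c in true_pt)

print(project(*true_pt))   # (573.33..., 360.0)
print(project(*spoofed))   # the identical pixel, though the depth is now 22 m
```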
“This so-called frustum attack can fool adaptive cruise control into thinking a vehicle is slowing down or speeding up,” Pajic said. “And by the time the system can figure out there’s an issue, there will be no way to avoid hitting the car without aggressive maneuvers that could create even more problems.”
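To see how a shifted range estimate could cascade into dangerous behavior, consider a toy proportional gap controller standing in for adaptive cruise control. The gain and gap values are invented for illustration and bear no relation to any deployed system.

```python
def acc_command(perceived_gap_m, desired_gap_m=20.0, gain=0.4):
    """Toy proportional controller: acceleration in m/s^2 (positive = speed up)."""
    return gain * (perceived_gap_m - desired_gap_m)

print(acc_command(20.0))   #  0.0 -> hold speed (the true gap)
print(acc_command(27.0))   # +2.8 -> accelerate toward a car really only 20 m away
```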
According to Pajic, there is little risk of someone taking the time to set up lasers on a car or roadside object to trick individual vehicles passing by on the highway. The risk rises dramatically, however, in military settings where a single vehicle can be a very valuable target. And if hackers could find a way to create these false data points digitally instead of with physical lasers, many vehicles could be attacked at once.
The path to protecting against these attacks, Pajic says, is added redundancy. For example, if cars had “stereo cameras” with overlapping fields of view, they could better estimate distances and notice LiDAR data that does not match their perception.
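A minimal sketch of that redundancy idea uses the classic stereo relation depth = focal length × baseline / disparity to cross-check a LiDAR range; all of the numbers below are illustrative assumptions.

```python
def stereo_depth_m(disparity_px, focal_px=1000.0, baseline_m=0.5):
    """Classic pinhole stereo: depth = focal length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

def lidar_matches_stereo(lidar_range_m, disparity_px, tol_m=1.0):
    return abs(lidar_range_m - stereo_depth_m(disparity_px)) <= tol_m

print(stereo_depth_m(25.0))              # 20.0 m according to the cameras
print(lidar_matches_stereo(20.3, 25.0))  # True  -> honest return
print(lidar_matches_stereo(27.0, 25.0))  # False -> spoofed depth exposed
```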
“Stereo cameras are more likely to be a reliable consistency check, though no software has been sufficiently validated for how to determine if the LiDAR/stereo camera data are consistent or what to do if it is found they are inconsistent,” said Spencer Hallyburton, a PhD candidate in Pajic’s Cyber-Physical Systems Lab (CPSL@Duke) and the lead author of the study. “Also, perfectly securing the entire vehicle would require multiple sets of stereo cameras around its entire body to provide 100% coverage.”
Another option, Pajic suggests, is to develop systems in which cars in close proximity share some of their data. Just as a physical attack is unlikely to affect many cars at once, a cyberattack is unlikely to compromise them all with a single blow, since different models of cars run different software.
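One simple way to picture that kind of data sharing: each nearby car reports its own range estimate for the same object, and any report far from the group’s median is treated as suspect. The threshold and readings below are invented for illustration.

```python
from statistics import median

def suspect_reports(reported_ranges_m, tol_m=1.5):
    """Return the range reports that disagree with the group's median estimate."""
    centre = median(reported_ranges_m)
    return [r for r in reported_ranges_m if abs(r - centre) > tol_m]

# Three cars agree on roughly 20 m; the attacked car reports 27 m.
print(suspect_reports([19.8, 20.1, 20.3, 27.0]))   # [27.0]
```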
“With all of the work that is going on in this field, we will be able to build systems that you can trust your life with,” Pajic said. “It might take 10+ years, but I’m confident that we will get there.”
This work was supported by the Office of Naval Research (N00014-20-1-2745), the Air Force Office of Scientific Research (FA9550-19-1-0169) and the National Science Foundation (CNS-1652544, CNS-2112562).
CITATION: “Security Analysis of Camera-LiDAR Fusion Against Black-Box Attacks on Autonomous Vehicles,” R. Spencer Hallyburton, Yupei Liu, Yulong Cao, Z. Morley Mao, Miroslav Pajic. 31st USENIX Security Symposium, Aug. 10-12, 2022.