Adversarial techniques have been developed that can make objects “invisible” to image detection systems that use deep-learning algorithms. These techniques can also trick systems into thinking they see a different object or into perceiving objects in the wrong location.

Many of today's vehicles use object detection systems to help avoid collisions. Unique patterns can trick these systems into seeing something else, seeing objects in another location, or not seeing them at all. In this photo, the object detection system sees a person rather than a vehicle. (SwRI)

Deep-learning algorithms excel at using shape and color to distinguish, for example, humans from animals or cars from trucks. Because these systems reliably detect objects under a wide range of conditions, they are used across many industries, often in safety-critical applications. The automotive industry uses deep-learning object detection systems on roadways for lane-assist, lane-departure, and collision-avoidance technologies. Vehicles equipped with these systems rely on cameras to detect potentially hazardous objects around them. While such image-processing systems are vital for protecting lives and property, the algorithms can be deceived by parties intent on causing harm.
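
As a rough illustration only, and not the proprietary systems deployed in production vehicles, a camera-based detection step with an off-the-shelf deep-learning model might look like the following sketch. It assumes a recent torchvision installation (0.13 or later); the model choice, confidence threshold, and placeholder frame are all illustrative assumptions.

```python
# Minimal sketch: run a pretrained deep-learning detector on a single camera
# frame. The model, threshold, and frame source are illustrative assumptions,
# not the systems used in any particular vehicle.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)  # placeholder for an RGB camera frame, values in [0, 1]

with torch.no_grad():
    detections = model([frame])[0]  # dict with "boxes", "labels", "scores"

# Keep only confident detections; a driver-assistance stack would feed these
# boxes and class labels into its collision-avoidance logic.
keep = detections["scores"] > 0.8
print(detections["labels"][keep], detections["boxes"][keep])
```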

Security researchers working in “adversarial learning” are finding and documenting vulnerabilities in deep- and other machine-learning algorithms. Researchers created futuristic, Bohemian-style patterns that, when worn by a person or mounted on a vehicle, trick object detection cameras into thinking the objects aren't there, that they're something else, or that they're in another location. Malicious parties could place these patterns near roadways, potentially creating chaos for vehicles equipped with object detectors.

The patterns cause the algorithms in the camera to either misclassify or mislocate objects, creating a vulnerability. They are referred to as perception-invariant adversarial examples because they don't need to cover the entire object or be parallel to the camera to trick the algorithm. The detector can misclassify or miss the object as long as its camera senses some part of the pattern.
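
The specific technique is not detailed here, but published adversarial-patch attacks typically optimize a pattern against a detector's own gradients. The sketch below shows a generic “disappearance” attack against an off-the-shelf torchvision detector; it is not SwRI's method, and the patch size, placement, and optimization settings are illustrative assumptions.

```python
# Generic adversarial-patch sketch (illustrative, not SwRI's method): optimize
# a pattern so the detector's confidence scores collapse wherever the patch is
# visible, making the underlying object "disappear".
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode; gradients still flow back to the input pixels

frame = torch.rand(3, 480, 640)                       # placeholder camera frame
patch = torch.rand(3, 100, 100, requires_grad=True)   # the pattern being learned
optimizer = torch.optim.Adam([patch], lr=0.01)

for step in range(100):
    adv = frame.clone()
    adv[:, 50:150, 100:200] = patch.clamp(0, 1)  # paste the patch onto the frame

    # Driving the summed detection confidence toward zero suppresses detections.
    scores = model([adv])[0]["scores"]
    loss = scores.sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# In practice, such attacks optimize over many frames, viewing angles, and
# lighting conditions so the pattern keeps working from partial, oblique views,
# which is what makes the resulting examples perception-invariant.
```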

While they might look like unique and colorful works of art to the human eye, these patterns are designed so that object-detection camera systems interpret them as a specific, incorrect object. A pattern disguised as an advertisement on the back of a stopped bus could make a collision-avoidance system think it sees a harmless shopping bag instead of the bus. If the vehicle's camera fails to detect the true object, the vehicle could continue moving forward and hit the bus, causing a potentially serious collision.

The team created a framework that can repeatedly test these attacks against a variety of deep-learning detection programs, which will be extremely useful for testing solutions. They continue to evaluate how much or how little of a pattern is needed to misclassify or mislocate an object.
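
The framework itself has not been described in detail; a highly simplified harness for this kind of repeated testing, with hypothetical model and pattern collections standing in for the real ones, might be structured like this:

```python
# Hypothetical test-harness sketch (not SwRI's framework): replay a set of
# candidate adversarial patterns against several detectors and record how
# often the true object is still detected.
import torch
import torchvision

detectors = {
    "faster_rcnn": torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT"),
    "retinanet": torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT"),
}
for m in detectors.values():
    m.eval()

def evaluate(detector, frames, target_label, threshold=0.5):
    """Return the fraction of patched frames in which the target object is still detected."""
    hits = 0
    with torch.no_grad():
        for frame in frames:
            det = detector([frame])[0]
            keep = det["scores"] > threshold
            hits += int((det["labels"][keep] == target_label).any())
    return hits / max(len(frames), 1)

# `patched_frames` would be camera frames with a candidate pattern applied;
# random tensors stand in for them here.
patched_frames = [torch.rand(3, 480, 640) for _ in range(4)]
for name, detector in detectors.items():
    rate = evaluate(detector, patched_frames, target_label=3)  # 3 = "car" in COCO
    print(f"{name}: object still detected in {rate:.0%} of frames")
```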

For more information, contact Maria Stothoff at 210-522-3305.