Google software engineers have created an image-independent adversarial patch that can fool image classifiers regardless of scale or location.
In a paper released on 27 December 2017, Tom B. Brown et al. presented a method for creating adversarial image patches that work in the real world. While other researchers have fooled image recognition systems with small, imperceptible changes to an image, the Google Brain team created an image-independent patch that would-be malicious attackers could distribute widely across the Internet, print out, and use.
“We show that we can generate a universal, robust, targeted patch that fools classifiers regardless of the scale or location of the patch, and does not require knowledge of the other items in the scene that it is attacking,” Brown et al stated.
“Our attack works in the real world, and can be disguised as an innocuous sticker. These results demonstrate an attack that could be created offline, and then broadly shared.”
The patches can be printed, added to any scene, photographed, and presented to image classifiers. Even if the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class, according to the Google engineers.
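The optimisation behind such a patch can be sketched with a toy stand-in. The snippet below trains a small patch against a random linear softmax "classifier" by gradient ascent on the target class's log-probability, averaged over random scenes and patch placements, a crude version of the expectation-over-transformations training the paper describes (which also randomises scale and rotation, and attacks large ImageNet networks). The tiny model, image size, and all names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in classifier: linear softmax over 8x8 greyscale
# images with 4 classes (the paper attacks real ImageNet CNNs).
H = W_IMG = 8
NUM_CLASSES = 4
W = rng.normal(0.0, 1.0, size=(NUM_CLASSES, H * W_IMG))

def logits(x):
    return W @ x.reshape(-1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

PATCH = 3    # patch side length in pixels
TARGET = 2   # class the patch should force the classifier to report

def apply_patch(img, patch, top, left):
    out = img.copy()
    out[top:top + PATCH, left:left + PATCH] = patch
    return out

def avg_target_prob(patch, n=500):
    """Mean probability of TARGET over random scenes and placements."""
    total = 0.0
    for _ in range(n):
        img = rng.uniform(0, 1, size=(H, W_IMG))
        top = rng.integers(0, H - PATCH + 1)
        left = rng.integers(0, W_IMG - PATCH + 1)
        total += softmax(logits(apply_patch(img, patch, top, left)))[TARGET]
    return total / n

patch = rng.uniform(0, 1, size=(PATCH, PATCH))
init_patch = patch.copy()

# Gradient ascent on log p(TARGET), averaged over random scenes and
# random patch locations -- a crude expectation over transformations.
lr = 0.5
for step in range(200):
    grad = np.zeros_like(patch)
    for _ in range(16):
        img = rng.uniform(0, 1, size=(H, W_IMG))
        top = rng.integers(0, H - PATCH + 1)
        left = rng.integers(0, W_IMG - PATCH + 1)
        p = softmax(logits(apply_patch(img, patch, top, left)))
        # d(log p_target)/dx for a linear softmax model
        dx = (W[TARGET] - p @ W).reshape(H, W_IMG)
        grad += dx[top:top + PATCH, left:left + PATCH]
    patch = np.clip(patch + lr * grad / 16, 0, 1)  # keep valid pixel values

score_before = avg_target_prob(init_patch)
score_after = avg_target_prob(patch)
print(f"target prob before: {score_before:.3f}, after: {score_after:.3f}")
```

Because the gradient is averaged over random placements, the resulting patch raises the target class's score wherever it lands, which is the "regardless of location" property the authors claim; a real attack would do the same with a deep network's gradients and a far larger transformation set.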
Because this type of attack uses a large perturbation, existing defence techniques that focus on defending against small perturbations "may not be robust to larger perturbations such as these". Brown and his colleagues noted that recent work had demonstrated that state-of-the-art adversarially trained models on MNIST, the handwritten-digit database widely used to train image-processing systems, "are still vulnerable to larger perturbations than those used in training". This was shown either by searching for a nearby adversarial example under a different distance metric, or by applying large perturbations to the background of the image, the team stated.
The Google engineers concluded that focusing on defending against small perturbations “is insufficient”. Read the paper in full.
Author: Desi Corbett