Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks

Authors: Thilo Strauss, Markus Hanselmann, Andrej Junginger, Holger Ulmer

Deep learning has become the state-of-the-art approach in many machine learning problems such as classification. It has recently been shown that deep learning is highly vulnerable to adversarial perturbations. Taking the camera systems of self-driving cars as an example, small adversarial perturbations can cause the system to make errors in important tasks, such as classifying traffic signs or detecting pedestrians. Hence, in order to use deep learning without safety concerns, a proper defense strategy is required. We propose to use ensemble methods as a defense strategy against adversarial perturbations. We find that an attack leading one model to misclassify does not imply the same for other networks performing the same task. This makes ensemble methods an attractive defense strategy against adversarial attacks. We empirically show for the MNIST and the CIFAR-10 data sets that ensemble methods not only improve the accuracy of neural networks on test data but also increase their robustness against adversarial perturbations.
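As a rough illustration of the idea, and not the authors' exact models or training setup (which the abstract does not specify), the PyTorch sketch below averages the softmax outputs of several independently initialized classifiers before taking the argmax. The ensemble size of five and the small fully connected architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of an ensemble defense: several independently
# initialized networks vote by averaging their class probabilities,
# so a perturbation that fools one member need not fool the ensemble.
# Architecture and ensemble size are hypothetical, chosen for brevity.

def make_model(in_dim: int = 784, n_classes: int = 10) -> nn.Module:
    # Hypothetical small classifier; the paper's MNIST/CIFAR-10 models differ.
    return nn.Sequential(
        nn.Linear(in_dim, 128),
        nn.ReLU(),
        nn.Linear(128, n_classes),
    )

# Five members with independent random initializations; in practice each
# would also be trained independently on the task.
ensemble = [make_model() for _ in range(5)]

def ensemble_predict(models, x):
    # Average the softmax outputs of all members, then take the argmax.
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)

x = torch.randn(4, 784)               # stand-in batch; real inputs would be MNIST images
print(ensemble_predict(ensemble, x))  # tensor of 4 predicted class indices
```

Because each member is initialized (and trained) independently, a perturbation crafted against one member's decision boundary tends not to transfer perfectly to the others, which is the transferability gap the abstract points to as the basis of the defense.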
