Evolving robust neural architectures to defend from adversarial attacks

Research output: Contribution to journal › Conference article › peer-review

2 Citations (Scopus)

Abstract

Neural networks are prone to misclassify slightly modified input images. Recently, many defences have been proposed, but none have consistently improved the robustness of neural networks. Here, we propose to use adversarial attacks as a function evaluation to automatically search for neural architectures that can resist such attacks. Experiments on neural architecture search algorithms from the literature show that, although accurate, they are not able to find robust architectures. A significant reason for this lies in their limited search space. By creating a novel neural architecture search with options for dense layers to connect with convolution layers and vice versa, as well as the addition of concatenation layers in the search, we were able to evolve an architecture that is inherently accurate on adversarial samples. Interestingly, this inherent robustness of the evolved architecture rivals state-of-the-art defences such as adversarial training while being trained only on non-adversarial samples. Moreover, the evolved architecture makes use of some peculiar traits which might be useful for developing even more robust ones. Thus, the results here confirm that more robust architectures exist and open up a new realm of possibilities for the development and exploration of neural networks.
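To illustrate the idea summarised above, the sketch below shows what "using adversarial attacks as a function evaluation" could look like in practice: a candidate architecture is scored by its accuracy on adversarially perturbed inputs, and an evolutionary search would rank candidates by that score. This is a minimal illustration, not the authors' implementation; the choice of FGSM as the attack, the fgsm_perturb and robust_fitness helpers, and the build_and_train / mutate_and_select steps are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """Generate FGSM adversarial examples for a candidate model (stand-in attack)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def robust_fitness(model, loader, device="cpu"):
    """Fitness of one candidate architecture: accuracy on adversarial samples."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = fgsm_perturb(model, images, labels)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / max(total, 1)

# Inside a generic evolutionary loop, each candidate would be trained on clean
# data only (as in the paper) and then ranked by this robustness score:
#
# scores = [robust_fitness(build_and_train(genome), val_loader) for genome in population]
# population = mutate_and_select(population, scores)   # hypothetical helpers
```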

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2640
Publication status: Published - 2020
Event: 2020 Workshop on Artificial Intelligence Safety, AISafety 2020 - Yokohama, Japan
Duration: Jan 5 2021 - Jan 10 2021

All Science Journal Classification (ASJC) codes

  • General Computer Science
