Enhancing Adversarial Robustness via Test-time Transformation Ensembling

Juan C. Perez, Motasem Alfarra, Guillaume Jeanneret, Laura Rueda, Ali Thabet, Bernard Ghanem, Pablo Arbelaez

Research output: Contribution to journal › Conference article › peer-review

24 Scopus citations

Abstract

Deep learning models are prone to being fooled by imperceptible perturbations known as adversarial attacks. In this work, we study how equipping models with Test-time Transformation Ensembling (TTE) can work as a reliable defense against such attacks. While transforming the input data, both at train and test times, is known to enhance model performance, its effects on adversarial robustness have not been studied. Here, we present a comprehensive empirical study of the impact of TTE, in the form of widely-used image transforms, on adversarial robustness. We show that TTE consistently improves model robustness against a variety of powerful attacks without any need for re-training, and that this improvement comes at virtually no trade-off with accuracy on clean samples. Finally, we show that the benefits of TTE transfer even to the certified robustness domain, in which TTE provides sizable and consistent improvements.
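To make the idea concrete, the following is a minimal sketch of test-time transformation ensembling in PyTorch, not the authors' implementation: it assumes TTE means averaging softmax predictions over a small set of standard image transforms (here only the identity and a horizontal flip), applied at inference without any re-training. The function name `tte_predict` and the transform set are illustrative choices.

```python
import torch
import torchvision.transforms.functional as TF

def tte_predict(model, x, n_classes=10):
    """Average softmax predictions over a small set of test-time transforms.

    Illustrative sketch of test-time transformation ensembling (TTE);
    the paper studies richer transform sets, here we use only the
    identity and a horizontal flip.
    """
    model.eval()
    transforms = [
        lambda t: t,   # identity (original input)
        TF.hflip,      # horizontal flip
    ]
    with torch.no_grad():
        probs = torch.zeros(x.size(0), n_classes, device=x.device)
        for t in transforms:
            probs += torch.softmax(model(t(x)), dim=1)
    return probs / len(transforms)
```

Because the ensemble only wraps the forward pass, it can be dropped in front of an existing pretrained classifier at evaluation time, which is consistent with the abstract's claim that the robustness gains require no re-training.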

Original language: English
Pages (from-to): 81-91
Number of pages: 11
Journal: Proceedings of the IEEE International Conference on Computer Vision
DOIs
State: Published - 2021
Event: 18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021 - Virtual, Online, Canada
Duration: 11 Oct 2021 - 17 Oct 2021

Bibliographical note

Publisher Copyright:
© 2021 IEEE.

