Enhancing Adversarial Robustness via Test-time Transformation Ensembling

  • Juan C. Perez
  • Motasem Alfarra
  • Guillaume Jeanneret
  • Laura Rueda
  • Ali Thabet
  • Bernard Ghanem
  • Pablo Arbelaez

Research output: Contribution to a journal › Conference article › peer review

24 Citations (Scopus)

Abstract

Deep learning models are prone to being fooled by imperceptible perturbations known as adversarial attacks. In this work, we study how equipping models with Test-time Transformation Ensembling (TTE) can work as a reliable defense against such attacks. While transforming the input data, both at train and test times, is known to enhance model performance, its effects on adversarial robustness have not been studied. Here, we present a comprehensive empirical study of the impact of TTE, in the form of widely-used image transforms, on adversarial robustness. We show that TTE consistently improves model robustness against a variety of powerful attacks without any need for re-training, and that this improvement comes at virtually no trade-off with accuracy on clean samples. Finally, we show that the benefits of TTE transfer even to the certified robustness domain, in which TTE provides sizable and consistent improvements.
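The core idea described in the abstract, ensembling a model's predictions over several transformed copies of the input at test time, can be illustrated with a minimal sketch. Everything below is hypothetical: `toy_model` is a deterministic stand-in for a trained classifier, and the flip transforms are examples of the kind of widely-used image transforms the paper studies, not the paper's exact configuration.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def toy_model(image):
    """Hypothetical classifier: maps a (C, H, W) image to 10 class logits."""
    feats = image.mean(axis=(1, 2))               # per-channel means, shape (C,)
    W = np.linspace(-1.0, 1.0, 30).reshape(3, 10) # fixed "weights" for the sketch
    return feats @ W                              # logits, shape (10,)

# Example test-time transforms: identity plus horizontal and vertical flips.
transforms = [
    lambda x: x,
    lambda x: np.flip(x, axis=2),  # horizontal flip
    lambda x: np.flip(x, axis=1),  # vertical flip
]

def tte_predict(model, image, transforms):
    """Average the softmax outputs over all transformed copies of the input."""
    probs = np.stack([softmax(model(t(image))) for t in transforms])
    return probs.mean(axis=0)

image = np.linspace(0.0, 1.0, 3 * 32 * 32).reshape(3, 32, 32)
probs = tte_predict(toy_model, image, transforms)
```

Note that no re-training is involved: the same frozen model is queried once per transform, and only the output distributions are averaged, which matches the abstract's claim that the defense requires no changes to training.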

Original language: English
Pages (from-to): 81-91
Number of pages: 11
Publication: Proceedings of the IEEE International Conference on Computer Vision
DOI
State: Published - 2021
Event: 18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021 - Virtual, Online, Canada
Duration: 11 Oct 2021 - 17 Oct 2021

Bibliographical note

Publisher Copyright:
© 2021 IEEE.

