Revisiting model’s uncertainty and confidences for adversarial example detection

Abstract Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small, crafted perturbations that generate Adversarial Examples (AEs). These AEs are imperceptible to humans and cause DNNs to misclassify them. Many defense and detection techniques have been proposed...

Saved in:
Author(s):

Aldahdooh, Ahmed [author]

Hamidouche, Wassim

Déforges, Olivier

Format:

E-Article

Language:

English

Published:

2022

Keywords:

Adversarial examples

Adversarial attacks

Adversarial example detection

Deep learning robustness

Note:

© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022

Parent work:

Contained in: Applied Intelligence - Dordrecht [et al.]: Springer Science + Business Media B.V, 1991, 53(2022), 1, dated 19 Apr., pages 509-531

Parent work:

volume:53 ; year:2022 ; number:1 ; day:19 ; month:04 ; pages:509-531

Links:

Full text

DOI / URN:

10.1007/s10489-022-03373-y

Catalog ID:

SPR048958409
