Recent research has shown that models induced by machine learning, and in particular by deep learning, can easily be fooled by an adversary who carefully crafts modifications of the input data that are imperceptible, at least from the human perspective, or physically plausible. This discovery gave birth to a new field of research, adversarial machine learning, where new attack and defence methods are developed continuously, mimicking what has long been happening in cybersecurity. In this paper we show that the effort of inducing models that are less prone to being misled actually provides some benefits when it comes to assessing their generalisation abilities.
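To make the notion of a "carefully crafted, imperceptible modification" concrete, the sketch below applies a fast-gradient-sign-style perturbation to a toy logistic-regression model. The weights, input, and step size are illustrative assumptions for the sketch, not material from the paper.

```python
# Minimal sketch of an adversarial perturbation (FGSM-style) against a
# hand-rolled logistic-regression model. All numbers are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a correctly classified input.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # clean input, predicted class 1
y = 1.0                    # true label

def predict(x):
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss with respect to the input x.
grad_x = (predict(x) - y) * w

# FGSM step: move each input feature slightly in the direction
# that increases the loss, bounded by a small budget eps.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # confident, correct prediction (> 0.5)
print(predict(x_adv))  # same model, prediction flipped (< 0.5)
```

The same principle scales to deep networks: the gradient of the loss with respect to the input points at the direction of maximal damage, and a small step along its sign is often enough to change the prediction.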
|Title:||The Benefits of Adversarial Defence in Generalisation|
|Publication date:||2021|
|Appears in type:||04.01 - Contribution in conference proceedings|