Evaluation of Robustness Metrics for Defense of Machine Learning Systems
Roli F.; Ledda E.
2023-01-01
Abstract
In this paper we explore some of the potential applications of robustness criteria for machine learning (ML) systems by way of tangible 'demonstrator' scenarios. In each demonstrator, ML robustness metrics are applied to real-world scenarios with military relevance, indicating how they might be used to help detect and handle possible adversarial attacks on ML systems. We conclude by sketching promising future avenues of research in order to: (1) help establish useful verification methodologies to facilitate ML robustness compliance assessment; (2) support development of ML accountability mechanisms; and (3) reliably detect, repel, and mitigate adversarial attacks.
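To make the notion of an "ML robustness metric" concrete, the sketch below (not taken from the paper; the function, data, and parameter names are illustrative assumptions) computes one simple such metric for a linear binary classifier: the fraction of test points whose prediction is certifiably unchanged under any perturbation of bounded L-infinity norm. For a linear model f(x) = sign(w·x + b), the worst-case score shift under an eps-bounded L-infinity perturbation is eps·‖w‖₁, so a correct prediction is provably robust when |w·x + b| > eps·‖w‖₁.

```python
# Illustrative sketch only: a certified-robust-accuracy metric for a
# linear binary classifier f(x) = sign(w.x + b). All names and data
# are hypothetical, not drawn from the paper's demonstrators.

def robust_accuracy(points, labels, w, b, eps):
    """Fraction of points that are both correctly classified and
    certifiably robust: no perturbation with L-infinity norm <= eps
    can flip the prediction, since the score can shift by at most
    eps * ||w||_1 for a linear model."""
    l1_norm = sum(abs(wi) for wi in w)
    robust = 0
    for x, y in zip(points, labels):
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        correct = (score > 0) == (y > 0)
        if correct and abs(score) > eps * l1_norm:
            robust += 1
    return robust / len(points)

# Hypothetical toy data: two well-separated points and two near the boundary.
pts = [(2.0, 1.0), (-1.5, -0.5), (0.2, 0.1), (-2.0, 1.0)]
ys = [1, -1, 1, -1]
print(robust_accuracy(pts, ys, w=(1.0, 1.0), b=0.0, eps=0.5))  # → 0.5
```

In this toy run, all four points are classified correctly, but only the two with score magnitude above eps·‖w‖₁ = 1.0 are certifiably robust, so the metric reports 0.5 rather than the clean accuracy of 1.0 — exactly the gap such metrics are meant to expose when assessing vulnerability to adversarial attack.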