Evaluation of Robustness Metrics for Defense of Machine Learning Systems

Roli F.; Ledda E.
2023-01-01

Abstract

In this paper we explore some of the potential applications of robustness criteria for machine learning (ML) systems by way of tangible 'demonstrator' scenarios. In each demonstrator, ML robustness metrics are applied to real-world scenarios with military relevance, indicating how they might be used to help detect and handle possible adversarial attacks on ML systems. We conclude by sketching promising future avenues of research in order to: (1) help establish useful verification methodologies to facilitate ML robustness compliance assessment; (2) support the development of ML accountability mechanisms; and (3) reliably detect, repel, and mitigate adversarial attacks.
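
No code accompanies this record, so as a minimal sketch of the kind of robustness metric the abstract refers to, the snippet below estimates "robust accuracy": the fraction of inputs a classifier still labels correctly after a one-step FGSM perturbation. The function name, the perturbation budget eps, and the use of PyTorch are illustrative assumptions, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def robust_accuracy(model, x, y, eps=0.03):
        # Illustrative helper (assumed, not from the paper): fraction of
        # inputs still classified correctly after a one-step FGSM attack
        # with L-infinity budget eps.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Move each input in the direction that increases the loss,
        # then clip back to the valid pixel range [0, 1].
        x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        return (preds == y).float().mean().item()

A large gap between clean accuracy and this value would flag a model as fragile under small input perturbations, which is one way such a metric could support the attack-detection and compliance-assessment goals the abstract describes.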
ISBN: 979-8-3503-4385-4
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1158797
Warning: the data shown have not been validated by the university.

Citations
  • PubMed Central: n/a
  • Scopus: 1
  • Web of Science: 0