Towards algorithms and models that we can trust: A theoretical perspective
Oneto L.; Ridella S.; Anguita D.
2024-01-01
Abstract
Over the last decade, it has become increasingly apparent that technical metrics such as accuracy, sustainability, and non-regressiveness are unable to fully characterize the behavior of intelligent systems. Indeed, these systems are nowadays also required to meet ethical requirements, such as explainability, fairness, robustness, and privacy, which increase our trust in their use in the wild. Technical and ethical metrics are, of course, often in tension with each other, but the ultimate goal is to develop a new generation of more responsible and trustworthy machine learning. In this paper, we focus on machine learning algorithms and their associated predictive models, asking for the first time, from a theoretical perspective, whether it is possible to simultaneously guarantee their performance in terms of both technical and ethical metrics, moving towards machine learning algorithms that we can trust. In particular, we investigate, for the first time, both the theory and the practice of deterministic and randomized algorithms and their associated predictive models, showing the advantages and disadvantages of the different approaches. For this purpose, we leverage the most recent advances in statistical learning theory: Complexity-Based Methods, Distribution Stability, PAC-Bayes, and Differential Privacy. The results show that it is possible to develop consistent algorithms that generate predictive models with guarantees on multiple trustworthiness metrics.
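As a purely illustrative example of the kind of high-probability guarantee the abstract refers to (a standard textbook result, not a bound taken from the paper), the PAC-Bayes framework mentioned above states that, for a loss bounded in $[0,1]$, any prior distribution $P$ over hypotheses fixed before seeing the data, and any $\delta \in (0,1)$, with probability at least $1-\delta$ over an i.i.d. sample of size $n$, every posterior distribution $Q$ satisfies

$$
\mathbb{E}_{h \sim Q}\big[L(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\hat{L}(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
$$

where $L$ is the true risk, $\hat{L}$ the empirical risk on the sample, and $\mathrm{KL}$ the Kullback-Leibler divergence. Analogous high-probability bounds exist for complexity-based, stability-based, and differentially private learners; according to the abstract, the paper studies when guarantees of this type can be obtained simultaneously for technical and ethical metrics.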
| File | Type | Size | Format | Access |
|---|---|---|---|---|
| 1-s2.0-S0925231224005691-main.pdf | Post-print | 1 MB | Adobe PDF | Closed access (copy available on request) |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.