Towards Trustworthiness in Artificial Intelligence: Pushing for Explainable, Fair, Robust, and Private Supervised Machine Learning

FRANCO, DANILO
2024-05-31

Abstract

This doctoral thesis explores the concept of trustworthiness in Machine Learning (ML) systems, focusing on four key aspects: Explainability, Fairness, Privacy, and Robustness. The increasing ubiquity of ML in domains ranging from healthcare to finance requires models that not only perform well but are also interpretable, unbiased, private, and robust. In other words, they need to be worthy of human trust. Ensuring these attributes in an environment where ML algorithms are omnipresent has become crucial. Indeed, domain experts frequently highlight that fundamental rights, such as the right to non-discrimination, to the protection of private information, or to an explanation, are often either neglected or directly threatened. The first contribution of this work is a collection of the technical definitions and concepts associated with the four primary attributes of trustworthy ML, providing a broad review of the current literature and a comprehensive overview for a non-specialist reader. Subsequently, this work contributes to the field by developing algorithms and frameworks that horizontally deliver explainability, fairness, privacy, and robustness, demonstrating the feasibility of constructing supervised ML applications that simultaneously prioritize utility and guarantee trustworthiness. In conclusion, the ultimate aim of this thesis is to present an overview of trustworthy ML and to build upon the field's foundational works, demonstrating that reliable and ethically sound ML systems can be deployed in real-world applications.
Keywords: Trustworthy AI
Files in this record:

File: phdunige_3809721.pdf (open access)
Type: Doctoral thesis
Size: 8.9 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1174964