Trustworthy AI in Video Surveillance: The IMMAGINA Project

Ledda E.; Roli F.
2023-01-01

Abstract

The increasing adoption of machine learning and deep learning models in critical applications raises the issue of ensuring their trustworthiness, which can be addressed by quantifying the uncertainty of their predictions. However, the black-box nature of many such models means that uncertainty can be quantified only through ad hoc superstructures, which require the model to be developed and trained in an uncertainty-aware fashion. For applications where previously trained models are already in operation, it would therefore be desirable to develop uncertainty quantification approaches that act as lightweight “plug-ins”, applicable on top of such models without modifying or re-training them. In this contribution we present a research activity of the Pattern Recognition and Applications Lab of the University of Cagliari on a recently proposed post hoc uncertainty quantification method, named dropout injection, a variant of the well-known Monte Carlo dropout that requires neither re-training nor any further gradient descent-based optimization; this makes it a promising, lightweight solution for integrating uncertainty quantification into any already-trained neural network. We are investigating a theoretically grounded solution for making dropout injection as effective as Monte Carlo dropout through a suitable rescaling of its uncertainty measure; we are also evaluating its effectiveness on the computer vision tasks of crowd counting and density estimation for intelligent video surveillance, thanks to our participation in a project funded by the European Space Agency.
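
A minimal sketch of the idea, assuming PyTorch. The injection points (after every convolutional and linear layer), the dropout rate p, and the scalar rescaling factor alpha are illustrative assumptions of ours, not the exact recipe proposed in the paper: dropout is inserted into an already-trained network, and uncertainty is estimated from repeated stochastic forward passes at inference time, with no re-training.

import torch
import torch.nn as nn

def inject_dropout(model: nn.Module, p: float = 0.1) -> nn.Module:
    """Wrap each Conv2d/Linear layer of an already-trained network with Dropout."""
    for name, child in model.named_children():
        if isinstance(child, (nn.Conv2d, nn.Linear)):
            # Replace the layer with "layer + dropout"; trained weights are untouched.
            setattr(model, name, nn.Sequential(child, nn.Dropout(p)))
        else:
            inject_dropout(child, p)  # recurse into nested submodules
    return model

@torch.no_grad()
def mc_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Monte Carlo forward passes with dropout kept active at inference."""
    model.eval()  # freeze BatchNorm statistics, etc.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()  # keep only the dropout layers stochastic
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # prediction, raw uncertainty

# Hypothetical usage on a pretrained crowd-counting network, with a rescaling
# factor alpha fitted on a validation set (a scalar placeholder for the more
# principled rescaling the abstract refers to):
#   model = inject_dropout(pretrained_model, p=0.1)
#   density_map, raw_unc = mc_predict(model, batch)
#   calibrated_unc = alpha * raw_unc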

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1158792