Integrated Learning and Decision Making for Autonomous Agents through Energy based Bayesian Models
Alemaw, Abrham Shiferaw; Zontone, Pamela; Marcenaro, Lucio; Gomez, David Martin; Regazzoni, Carlo
2024-01-01
Abstract
Generalizability and interpretability are common terms in today’s machine learning algorithm design. Generalizability requires a clear understanding of one’s own actions (self-awareness) and robust interaction with the environment (situation awareness). Many current studies are devoted to developing algorithms that generalize more robustly to unseen situations while explaining their own actions. However, such algorithms are complex and not yet mature enough for production use. Intelligent transportation systems such as self-driving cars are among the emerging technologies that need generalizability and explainability under anomalous conditions. We propose to enhance the generalizability and interpretability of a self-driving car model by introducing a novel methodology that fuses multi-sensory data from the proprioceptive and exteroceptive sensors of an agent, coupled in a Hierarchical Dynamic Bayesian Network model, within an Active Inference framework. The developed model has three stages: 1) a lower-dimensional unsupervised learning stage, considering odometry and action modalities, carried out by first applying Null Force Filtering and then a modified GNG clustering algorithm; 2) a self-supervised higher-dimensional video-modality learning stage assisted by the learned odometry vocabularies; and 3) online model-based active learning over continuous and discrete state and action spaces, within the Active Inference framework. The developed system is tested in the CARLA simulator environment for localizing interacting agents and exhibits low error compared to state-of-the-art methods.
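The abstract does not specify how the authors modified GNG, but the baseline technique it builds on, Growing Neural Gas clustering, can be sketched as follows. This is a minimal, illustrative implementation of standard GNG (not the paper's modified variant); the function name `gng` and all hyperparameter values are assumptions chosen for the example.

```python
import numpy as np

def gng(data, max_nodes=20, steps=2000, eps_w=0.05, eps_n=0.006,
        age_max=50, lam=100, alpha=0.5, decay=0.995, seed=0):
    """Minimal standard Growing Neural Gas; returns learned node positions.

    Hyperparameters are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    # Start with two nodes at random data points.
    nodes = [data[rng.integers(len(data))].astype(float),
             data[rng.integers(len(data))].astype(float)]
    error = [0.0, 0.0]
    edges = {}  # frozenset({i, j}) -> age
    for t in range(steps):
        x = data[rng.integers(len(data))]
        dist = [np.linalg.norm(x - w) for w in nodes]
        s1, s2 = np.argsort(dist)[:2]          # winner and runner-up
        error[s1] += dist[s1] ** 2
        nodes[s1] += eps_w * (x - nodes[s1])   # move winner toward input
        for e in list(edges):                  # age winner's edges,
            if s1 in e:                        # drag its neighbors along
                edges[e] += 1
                j = next(iter(e - {s1}))
                nodes[j] += eps_n * (x - nodes[j])
                if edges[e] > age_max:
                    del edges[e]               # prune stale edges
        edges[frozenset({s1, s2})] = 0         # connect/refresh winner pair
        if (t + 1) % lam == 0 and len(nodes) < max_nodes:
            # Insert a new node between the highest-error node and its
            # highest-error topological neighbor.
            q = int(np.argmax(error))
            nbrs = [next(iter(e - {q})) for e in edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda i: error[i])
                nodes.append(0.5 * (nodes[q] + nodes[f]))
                error.append(error[q] * alpha)
                error[q] *= alpha
                error[f] *= alpha
                edges.pop(frozenset({q, f}), None)
                edges[frozenset({q, len(nodes) - 1})] = 0
                edges[frozenset({f, len(nodes) - 1})] = 0
        error = [e_ * decay for e_ in error]   # global error decay
    return np.array(nodes)
```

In the paper's pipeline, such a clustering step would quantize the Null-Force-Filtered odometry/action signals into the discrete "vocabularies" that the later video-learning and Active Inference stages consume.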
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.