Learning 3D LiDAR Perception Models for Self-Aware Autonomous Systems
Saleemullah, Saleemullah; Krayani, Ali; Zontone, Pamela; Marcenaro, Lucio; Gomez, David Martin; Regazzoni, Carlo
2024-01-01
Abstract
Intelligent transportation systems (ITSs) represent a paradigm shift in how transportation networks are perceived and interacted with, leading to enhanced levels of safety, sustainability, and efficiency. Vehicle-to-everything (V2X) communication is a core component of ITSs. Proprioceptive and exteroceptive sensors allow vehicles to be aware of the surrounding environment and to respond to emergencies, enabling a high level of self-awareness. In this paper, we propose a self-awareness approach that learns a generative dynamic Bayesian network (G-DBN) from real-time LiDAR perception. Without reducing the dimensionality, we perform offline training and online testing phases on the three-dimensional (3D) point clouds. In the offline training phase, the raw point clouds are first preprocessed using a joint probabilistic data association filter (JPDAF) to obtain the 3D tracks of multiple vehicles in space. Then, we perform unsupervised clustering on all the generalized states (GSs), each containing positions and velocities (a 6D vector), using the growing neural gas (GNG) technique, thus obtaining a model trained from the 3D LiDAR point clouds. In the online testing phase, the high-dimensional Markov jump particle filter (HD-MJPF) uses the G-DBN's probabilistic information to predict the positions of multiple vehicles and to detect abnormalities at the discrete and continuous levels in both normal and abnormal scenarios. Our proposed approach is useful for learning high-dimensional generative models and provides a way to address the curse-of-dimensionality challenges that current machine learning models face.
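The generalized states mentioned in the abstract stack each vehicle's 3D position with its 3D velocity into a single 6D vector. As a minimal sketch of that construction (not the paper's implementation; the function name, the finite-difference velocity estimate, and the sampling interval `dt` are illustrative assumptions), velocities can be approximated from consecutive tracked positions:

```python
# Sketch: building 6D generalized states (GSs) from a tracked 3D trajectory.
# A GS stacks position and velocity: (x, y, z, vx, vy, vz).
# Velocities are approximated by finite differences over the sampling
# interval `dt`; names here are illustrative, not taken from the paper.

def generalized_states(track, dt):
    """track: list of (x, y, z) positions sampled every `dt` seconds.
    Returns one 6D GS per consecutive pair of positions."""
    gs = []
    for prev, curr in zip(track, track[1:]):
        # finite-difference velocity estimate for each axis
        vel = tuple((c - p) / dt for c, p in zip(curr, prev))
        gs.append(curr + vel)  # concatenate position and velocity
    return gs

track = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.0, 1.0, 0.0)]
states = generalized_states(track, dt=1.0)
# each element of `states` is a 6-tuple (x, y, z, vx, vy, vz)
```

In the described pipeline, vectors of this form (one per vehicle per frame) are what the GNG clustering operates on during offline training.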