This paper proposes an algorithm to model and process streams of LiDAR data within an autonomous vehicle framework. LiDAR acts as an exteroceptive sensor that provides the vehicle with dynamic 3D perception of its surroundings. We employ an encoder-decoder architecture based on 3D convolutional layers, called the 3D Convolution Encoder-Decoder (3D-CED), together with a transfer learning strategy to extract from point clouds a set of features relevant to autonomous driving. The resulting features allow future point cloud data to be inferred and anomalies to be detected at multiple abstraction levels in controlled scenarios, using a probabilistic switching dynamic model called the High Dimensional Markov Jump Particle Filter (HD-MJPF). Moreover, a comparison is provided between piecewise linear, piecewise nonlinear, and nonlinear predictive models for anomaly detection at multiple abstraction levels. Our approach is evaluated on data collected from the LiDAR sensors of the autonomous vehicle while it performs specific tasks in a controlled environment.
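The abstract does not specify the 3D-CED's layer counts, kernel sizes, or training procedure, so the following is only a minimal toy sketch of the two core operations such an architecture relies on: a strided 3D convolution that compresses a voxelized LiDAR frame into a smaller latent feature map (encoder side), and an upsampling step that restores the spatial resolution (decoder side). All array sizes, kernel values, and function names here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def conv3d(x, w, stride=1):
    """Valid (no-padding) 3D convolution of a volume x (D,H,W)
    with a single cubic kernel w (k,k,k). Naive loop version,
    kept deliberately small for clarity, not speed."""
    k = w.shape[0]
    D, H, W = x.shape
    s = stride
    oD, oH, oW = (D - k) // s + 1, (H - k) // s + 1, (W - k) // s + 1
    out = np.zeros((oD, oH, oW))
    for i in range(oD):
        for j in range(oH):
            for l in range(oW):
                patch = x[i*s:i*s+k, j*s:j*s+k, l*s:l*s+k]
                out[i, j, l] = np.sum(patch * w)  # correlation over the patch
    return out

def upsample3d(x, factor=2):
    """Nearest-neighbour upsampling along all three axes,
    a simple stand-in for the decoder's resolution recovery."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

# Toy voxelized LiDAR frame (occupancy-like values), sizes are arbitrary.
rng = np.random.default_rng(0)
voxels = rng.random((8, 8, 8))
w_enc = rng.random((2, 2, 2))

latent = conv3d(voxels, w_enc, stride=2)  # encoder: (8,8,8) -> (4,4,4)
recon_grid = upsample3d(latent)           # decoder scaffold: (4,4,4) -> (8,8,8)
```

In a full encoder-decoder, several such convolutions (with learned kernels and nonlinearities) would be stacked on each side; here the single pass just shows how a 3D frame is compressed to a latent representation and expanded back.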
|Title:||Modeling Perception in Autonomous Vehicles via 3D Convolutional Representations on LiDAR|
|Publication date:||2021|
|Appears in types:||01.01 - Journal article|