Integration of data-fusion techniques for autonomous vehicle driving
Regazzoni, Carlo; Vernazza, Gianni
1990-01-01
Abstract
An autonomous vehicle must be able to interpret data provided by multiple sensors in order to cope with varying environmental conditions. To this end, different physical sensors (e.g., RGB or IR cameras, laser range finders) that provide image-type information can be used. Moreover, virtual sensors (i.e., processes that simulate new sensors by transforming the original images in different ways) can be obtained with Computer Vision techniques. In this paper, we present a knowledge-based data-fusion system with distributed control, which integrates data at both the physical- and the virtual-sensor levels while pursuing segmentation and interpretation goals. Outdoor road scenes, with and without obstacles, are considered as an application test set. © SPIE.
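To make the "virtual sensor" notion concrete, the following minimal Python/NumPy sketch (not code from the paper; the function name and frame size are illustrative assumptions) derives a gradient-magnitude edge map from a grayscale frame, i.e., a new image-like signal computed purely in software from an existing physical sensor's output.

    import numpy as np

    def gradient_magnitude_sensor(image: np.ndarray) -> np.ndarray:
        """Virtual sensor: derive an edge-strength map from a grayscale image.

        The gradient magnitude highlights road boundaries and obstacle
        contours, acting as a new 'sensor' obtained by transforming the
        original image rather than by adding hardware.
        """
        gy, gx = np.gradient(image.astype(float))  # row- and column-wise gradients
        return np.hypot(gx, gy)                    # per-pixel gradient magnitude

    # Hypothetical usage on a synthetic 8-bit grayscale frame.
    frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
    edge_map = gradient_magnitude_sensor(frame)
    print(edge_map.shape, edge_map.max())

In a fusion system of the kind described above, such derived maps could be treated on the same footing as the physical-sensor images when pursuing segmentation and interpretation goals.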