An autonomous vehicle must be capable of interpreting data provided by multiple sensors in order to cope with varying environmental conditions. To this end, different physical sensors (e.g., RGB or IR cameras, laser range finders) that provide image-type information can be used. Moreover, virtual sensors (i.e., processes that simulate new sensors by transforming the original images in different ways) can be obtained through computer vision techniques. In this paper, we present a knowledge-based data fusion system with distributed control, which integrates data at both the physical- and virtual-sensor levels while pursuing segmentation and interpretation goals. Outdoor road scenes, with and without obstacles, are considered as the application test set. © SPIE.
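As an illustration of the virtual-sensor idea described in the abstract, the sketch below derives a new image-type signal (an edge-magnitude map) from an original grayscale frame. The choice of a Sobel gradient is an assumption for illustration only; the paper does not specify which transformations its virtual sensors use.

```python
import numpy as np

def sobel_edge_virtual_sensor(gray: np.ndarray) -> np.ndarray:
    """Hypothetical virtual sensor: transform a grayscale frame into an
    edge-magnitude image via a Sobel gradient (illustrative sketch only;
    not the transformation used in the paper)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                 # vertical-gradient kernel
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Valid-mode 3x3 cross-correlation, unrolled over kernel positions
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)  # gradient magnitude as the new "image"

# A frame with a vertical intensity step yields a strong edge response
frame = np.zeros((5, 6))
frame[:, 3:] = 1.0
edges = sobel_edge_virtual_sensor(frame)
```

Several such virtual sensors (edges, texture measures, range-derived maps) could then be treated uniformly alongside physical sensors by a fusion system of the kind the paper describes.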
|Title:||Integration of data-fusion techniques for autonomous vehicle driving|
|Publication date:||1990|
|Appears in collections:||04.01 - Conference proceedings contribution|