
Deep Representation Hierarchies for 3D Active Vision

SABATINI, SILVIO PAOLO
2014-01-01

Abstract

Starting from the acknowledged properties of visual cortical neurons, we show how diversified and composite visual descriptors emerge from different hierarchical combinations of the harmonic content of the visual signal. The resulting deep hierarchy networks can specialize to solve different tasks and trigger different behaviors, without necessarily passing through an explicit measurement of the reconstructive visual attributes of the observed scene. Distinct specializations for stereopsis and for active control of the vergence movements of a binocular system are presented. In particular, we discuss the advantage of retaining distributed representations of multiple solutions, rather than prematurely constructing integrated descriptions of cognitive entities and committing the system to a particular behavior. Pilot CPU-GPU implementations of the proposed cortical-like architectures prove to be promising solutions for the next generation of robot vision systems, which should be capable of calibrating and adapting autonomously through interaction with the environment.
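The abstract only alludes to how stereopsis-related descriptors can be derived from the harmonic content of the binocular signal. A common building block in this family of cortical-like models is phase-based disparity estimation from complex (Gabor-like) filter responses: the left/right local-phase difference, divided by the filter's tuning frequency, approximates the horizontal shift between the two views. The 1-D sketch below is purely illustrative; the function names, parameter values, and test pattern are our own assumptions, not taken from the paper.

```python
import numpy as np

def gabor_phase(signal, omega, sigma):
    """Local phase of a 1-D signal, from convolution with a complex
    Gabor kernel tuned to radian frequency `omega`."""
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    return np.angle(np.convolve(signal, kernel, mode="same"))

def phase_disparity(left, right, omega=0.5, sigma=4.0):
    """Disparity estimate: left/right phase difference scaled by the
    tuning frequency (unambiguous only for |d| < pi / omega)."""
    dphi = gabor_phase(left, omega, sigma) - gabor_phase(right, omega, sigma)
    dphi = np.mod(dphi + np.pi, 2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return dphi / omega

# Toy check: the "right" view is the "left" view shifted by 3 pixels.
rng = np.random.default_rng(0)
left = rng.standard_normal(512)
right = np.roll(left, 3)
d = phase_disparity(left, right)
d_med = np.median(d[50:-50])   # discard border samples
```

A single channel like this recovers only small disparities around its tuning frequency; the hierarchical networks described in the abstract would combine many such frequency channels, keeping the distributed population response rather than collapsing it early into a single disparity value.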

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/777993