This paper deals with the problem of 3D stereo estimation and eye-hand calibration in humanoid robots. We first show how to implement a complete 3D stereo vision pipeline that enables online, real-time eye calibration. We then introduce a new formulation of the eye-hand coordination problem and develop a fully automated procedure that requires no human supervision: the end-effector of the humanoid robot is automatically detected in the stereo images, providing large amounts of training data for learning the vision-to-kinematics mapping. We report extensive experiments with different machine learning techniques and show that a mixture of linear transformations achieves the highest accuracy in the shortest amount of time, while guaranteeing real-time performance. We demonstrate the proposed system in two typical robotic scenarios: (1) object grasping and tool use; (2) 3D scene reconstruction. The platform of choice is the iCub humanoid robot.
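The abstract gives no implementation details, so the following Python sketch is only a rough illustration of the kind of components it describes: linear (DLT) triangulation of a stereo correspondence into a 3D point, followed by a gated mixture of linear (affine) transformations that maps camera-frame points into the robot's kinematic frame. All names, the radial-basis gating scheme, and the affine parameterization are assumptions made for this sketch, not the authors' implementation or the iCub software API.

```python
import numpy as np

def triangulate(uv_left, uv_right, P_left, P_right):
    """Linear (DLT) triangulation of a single stereo correspondence.
    P_left and P_right are the 3x4 projection matrices of the calibrated
    left and right cameras; uv_* are pixel coordinates (u, v)."""
    u_l, v_l = uv_left
    u_r, v_r = uv_right
    A = np.stack([
        u_l * P_left[2] - P_left[0],
        v_l * P_left[2] - P_left[1],
        u_r * P_right[2] - P_right[0],
        v_r * P_right[2] - P_right[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # 3D point in the reference camera frame


class MixtureOfLinearMaps:
    """Gated mixture of affine maps from camera-frame 3D points to
    kinematic-frame 3D points (illustrative placeholder for the learned
    vision-to-kinematics mapping)."""

    def __init__(self, affine_maps, gate_centers, gate_width):
        self.maps = np.asarray(affine_maps)      # (K, 3, 4) affine experts [A | b]
        self.centers = np.asarray(gate_centers)  # (K, 3) gate centers
        self.width = float(gate_width)           # shared RBF gate width

    def predict(self, x_cam):
        x_h = np.append(x_cam, 1.0)                       # homogeneous input
        d2 = np.sum((self.centers - x_cam) ** 2, axis=1)  # squared distances to gates
        gates = np.exp(-d2 / (2.0 * self.width ** 2))
        gates /= gates.sum()                              # normalized responsibilities
        experts = self.maps @ x_h                         # (K, 3) expert predictions
        return gates @ experts                            # blended kinematic-frame point
```

Whatever the actual model used in the paper, a mixture of local linear maps keeps prediction down to a few matrix-vector products per query, which is consistent with the real-time requirement stated in the abstract.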
Title: 3D stereo estimation and fully automated learning of eye-hand coordination in humanoid robots
Authors:
Publication date: 2014
Handle: http://hdl.handle.net/11567/810416
ISBN: 978-1-4799-7174-9
Item type: 04.01 - Conference proceedings contribution