Social intelligence is a fundamental trait of human intelligence, enabling natural and fruitful interaction from very early in infancy. The ability to collaborate is also a key challenge for today's robotics, which could benefit from computational models of social intelligence in the design of future human-robot interaction. Our research approaches these topics from the perspective of computational vision. In particular, we aim at understanding how social intelligence develops given the very limited sensory-motor skills and prior knowledge available to babies. As a starting point, we consider the natural predisposition of newborns to notice potential interacting partners in their surroundings, manifested by a preference for biological motion over other types of motion. To model this skill, we propose a video-based computational method for biological motion detection inspired by the Two-Thirds Power Law, a well-known invariant of human movement. In particular, we cast the problem in a machine learning framework, using binary classification to discriminate biological from non-biological stimuli based on rather coarse motion models extracted from video measurements. After evaluating the performance of the method and its generalization to complex scenarios in an offline test, we engineer the method to run online on a robot, the humanoid iCub. Integration with the robot's attentional module enables it to direct its gaze toward human activity in the scene. We posit that the ability of a robotic system to orient its attention toward potential interacting agents, as a human infant would, represents one of the first stages of social intelligence, on top of which more complex skills, such as action and intention understanding, could emerge.
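The Two-Thirds Power Law mentioned above states that, in human movement, tangential velocity and curvature are coupled as v(t) = K * κ(t)^(-1/3). The following is a minimal sketch (not the paper's actual method) of how the law can be checked on a 2D trajectory: it fits the exponent of a log-log regression of speed against curvature, using a synthetic elliptical trajectory, which is known to satisfy the law exactly. All function names, the sampling step, and the ellipse parameters are illustrative assumptions.

```python
import numpy as np

def power_law_exponent(x, y, dt):
    """Fit log(speed) = beta * log(curvature) + c over a 2D trajectory.

    For motion obeying the Two-Thirds Power Law, beta is close to -1/3,
    i.e. tangential velocity v ~ K * curvature^(-1/3).
    """
    # First and second derivatives by finite differences.
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    speed = np.hypot(dx, dy)
    curvature = np.abs(dx * ddy - dy * ddx) / speed**3
    # Trim the ends, where finite differences are least accurate.
    s, k = speed[5:-5], curvature[5:-5]
    beta, _intercept = np.polyfit(np.log(k), np.log(s), 1)
    return beta

dt = 0.001
t = np.arange(0.0, 2 * np.pi, dt)
# An ellipse traversed at constant angular rate satisfies the law exactly.
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)

beta = power_law_exponent(x, y, dt)
print(f"fitted exponent: {beta:.3f}")  # close to -1/3
```

The deviation of the fitted exponent from -1/3 is one example of a coarse, trajectory-level feature that a binary classifier could use to separate biological from non-biological motion, in the spirit of the approach described above.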
|Title:||Computational vision for social intelligence|
|Publication date:||2017|
|Appears in the types:||04.01 - Conference proceedings contribution|