Towards a Cognitive Framework for Multimodal Person Recognition in Multiparty HRI
Gonzalez J.; Belgiovine G.; Sciutti A.; Sandini G.
2021-01-01
Abstract
The ability to recognize human partners is an important social skill for building personalized, long-term Human-Robot Interaction (HRI). However, in HRI contexts, which unfold in ever-changing, realistic environments, the identification problem still presents significant challenges. Possible solutions rely on a multimodal approach and on making robots learn from their first-hand sensory data. To this end, we propose a framework that allows robots to autonomously organize their sensory experience into a structured dataset suitable for person recognition during a multiparty interaction. Our results demonstrate the effectiveness of our approach and show that it is a promising solution in the quest to make robots more autonomous in their learning process.
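The abstract does not describe the framework's internals, so the following is only a minimal illustrative sketch of the general idea of organizing multimodal observations into a structured, per-person dataset, not the authors' implementation. It assumes precomputed face and voice embedding vectors, naive concatenation-based fusion, and a nearest-neighbor distance threshold; the class name, method names, and parameter values are all hypothetical.

```python
# Minimal sketch (illustrative only, not the paper's method): a robot
# incrementally builds a per-person "gallery" of fused multimodal embeddings
# and uses it to recognize returning partners or enroll unseen ones.
import numpy as np

class MultimodalGallery:
    def __init__(self, threshold=0.6):
        self.threshold = threshold  # max distance to accept a known-person match (illustrative value)
        self.people = {}            # person_id -> list of stored fused embeddings
        self.next_id = 0

    @staticmethod
    def _fuse(face_emb, voice_emb):
        # Naive early fusion: concatenate the two modalities and L2-normalize.
        fused = np.concatenate([face_emb, voice_emb])
        return fused / np.linalg.norm(fused)

    def observe(self, face_emb, voice_emb):
        """Match an observation against known people; enroll a new person if no match."""
        fused = self._fuse(face_emb, voice_emb)
        best_id, best_dist = None, np.inf
        for pid, embs in self.people.items():
            dist = min(np.linalg.norm(fused - e) for e in embs)
            if dist < best_dist:
                best_id, best_dist = pid, dist
        if best_id is not None and best_dist <= self.threshold:
            # Known partner: grow that person's structured record.
            self.people[best_id].append(fused)
            return best_id
        # Unseen partner: autonomous enrollment of a new identity.
        new_id = self.next_id
        self.next_id += 1
        self.people[new_id] = [fused]
        return new_id
```

As a usage example under the same assumptions, each observation during a multiparty interaction would be a call such as `gallery.observe(face_emb, voice_emb)` with embeddings produced by upstream face and speaker models; the returned identifier either matches a previously enrolled partner or labels a newly created entry in the dataset.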