Grounding Conversational Robots on Vision Through Dense Captioning and Large Language Models

Grassi L.; Recchiuto C. T.; Sgorbissa A.
2024-01-01

Abstract

This work explores a novel approach to endowing robots with visual perception capabilities through textual descriptions. The approach integrates GPT-4 with dense captioning, enabling robots to perceive and interpret the visual world through detailed text-based descriptions of the scene. To assess both the user experience and the technical feasibility of this approach, experiments were conducted with human participants interacting with a Pepper robot equipped with these visual capabilities. The results confirm the viability of the proposed approach, which enables vision-based conversations to be carried out effectively despite processing-time limitations.
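
The abstract does not spell out the pipeline, but the architecture it describes (camera frame → dense captions → GPT-4 → grounded dialogue) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the OpenAI Python client for the GPT-4 call, and `dense_captions` is a hypothetical placeholder standing in for whatever dense captioning model the real system runs on the robot's camera frames.

```python
# Sketch of the vision-grounding idea: dense captions describing the current
# camera frame are injected into the GPT-4 prompt, so the robot's replies
# are grounded in what it "sees". Requires OPENAI_API_KEY in the environment.

from openai import OpenAI

client = OpenAI()


def dense_captions(image_path: str) -> list[str]:
    """Hypothetical stand-in for a dense captioning model.

    A real system would run a region-level captioner on the camera frame
    and return one short text description per detected region.
    """
    return [
        "a person wearing a red sweater standing in front of the robot",
        "a white table with a coffee cup on it",
        "a window with daylight coming through",
    ]


def grounded_reply(image_path: str, user_utterance: str) -> str:
    """Answer the user, grounding the reply in the captioned scene."""
    scene = "\n".join(f"- {c}" for c in dense_captions(image_path))
    messages = [
        {
            "role": "system",
            "content": (
                "You are a conversational robot. The following dense captions "
                "describe what your camera currently sees:\n"
                + scene
                + "\nGround your answers in this scene when relevant."
            ),
        },
        {"role": "user", "content": user_utterance},
    ]
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content


if __name__ == "__main__":
    print(grounded_reply("frame.jpg", "What can you see right now?"))
```

Because the captioner and the language model exchange only text, the two components stay loosely coupled: the captioning model can be swapped without touching the dialogue side, at the cost of the per-frame processing time the abstract mentions.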

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1214037

Citations
  • Scopus: 1