XAI in HRI: A Journey to the Centre of the Explainability

MATARESE, MARCO
2024-04-02

Abstract

We are witnessing the global spread of artificial intelligence (AI) technology in people’s everyday lives. With this surge, the demand for explainable AI (XAI) techniques arose due to the growing intricacy of AI models. Users sought to comprehend the reasoning behind the decisions made by these models, a necessity that became more pressing as an expanding number of AI-driven robots interacted with people in real-world scenarios. Recent years have witnessed the XAI community recognizing the imperative of leveraging the social dimensions of the explanation process. Drawing insights from psychology and cognitive sciences, researchers are increasingly reframing explainability as a social problem. This conceptual shift is exemplified by explainable robots capable of using the common ground established with human partners during the explanation generation process. Social autonomous robots can trigger neural and social mechanisms in people similar to those occurring during interactions between humans. Given the ease with which individuals attribute intentions, beliefs, and even second-order theory of mind capabilities to robots, the human-robot interaction (HRI) community is increasingly incorporating these mechanisms into explanatory exchanges.

This thesis introduces a theoretical framework for XAI in HRI that leverages the social-dialogical nature of explanations. The framework models the explanation process as a dialogue between robots and human partners. Unlike existing approaches, our framework emphasizes the influence of the human-robot common ground and interaction history on the generation of explanations.

Subsequently, the framework’s philosophy is implemented in an HRI collaborative decision-making scenario. The focus is on exploring how explanations based on shared human-robot experiences impact individuals’ decision-making and the role of their personality traits in this context. Results showed that a social robot that justifies its suggestions with explanations exploiting its common ground with the human partner is more persuasive than one using classical explanations, especially for less skilled participants. Moreover, participants’ personality traits significantly impacted their decision-making and their interaction with the robot.

Finally, to assess the effectiveness of such explanations compared to classical ones, an evaluation task is designed to measure the informativeness of XAI systems for non-expert users. This task is instantiated in various domains, namely human-computer interaction (HCI), HRI, and self-learning, to examine how different types of explanations and artificial explainable agents influence people’s learning of new tasks. Results showed that expert explainable agents influenced participants’ learning, preventing them from exploring the learning environment as thoroughly as participants who learned alone did.

Through this thesis, we advanced the existing literature on collaborative decision-making in both the HCI and HRI domains. Employing methodologies derived from HRI, we compared classical XAI techniques with explanation approaches that leverage the human-robot common ground. Our investigation highlighted how the latter approaches improve robots’ persuasiveness, particularly in social collaborative contexts. Additionally, we conceptualized and developed an assessment task to measure the quality of explanations. Our findings did not highlight differences between classical and partner-aware explanation methodologies in this respect. Nevertheless, the results brought to light the influence that both robotic and artificial agents have on people’s learning, limiting their exploration strategies.
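For illustration only, the short sketch below (in Python) gives one hypothetical reading of the partner-aware idea described above: the robot prefers a justification grounded in the shared interaction history and falls back to a classical, feature-based one when no common ground applies. All names used here (CommonGround, choose_explanation, relevant_episode) are assumptions made for this sketch, not the implementation developed in the thesis.

# Illustrative sketch only: a minimal, hypothetical partner-aware explanation
# selector in the spirit of the framework described above. Names are assumed
# for illustration and do not come from the thesis.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CommonGround:
    """Shared human-robot experiences accumulated over the interaction history."""
    shared_episodes: List[str] = field(default_factory=list)

    def update(self, episode: str) -> None:
        # Record a new shared episode after each interaction turn.
        self.shared_episodes.append(episode)

    def relevant_episode(self, decision: str) -> Optional[str]:
        # Naive relevance check: reuse the most recent episode mentioning the decision.
        for episode in reversed(self.shared_episodes):
            if decision in episode:
                return episode
        return None


def choose_explanation(decision: str, classical_reason: str, ground: CommonGround) -> str:
    """Prefer a partner-aware explanation grounded in shared history;
    fall back to a classical, feature-based justification otherwise."""
    episode = ground.relevant_episode(decision)
    if episode is not None:
        return f"I suggest '{decision}' because last time we {episode}."
    return f"I suggest '{decision}' because {classical_reason}."


# Example dialogue turn: the robot justifies a suggestion using common ground.
ground = CommonGround()
ground.update("picked option B together and it raised our score")
print(choose_explanation("option B", "it maximises the expected score", ground))

The point of this design is that the interaction history, and not only the model’s internals, shapes which justification the robot verbalises to its partner.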
Keywords: human-robot interaction; explainable artificial intelligence; social robotics; human-centred AI
Files in this item:

File: phdunige_4942214.pdf (open access)
Type: Doctoral thesis
Size: 2.58 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1167075