Natural Born Explainees: how users’ personality traits shape the human-robot interaction with explainable robots
Matarese, Marco; Cocchella, Francesca; Rea, Francesco; Sciutti, Alessandra
2023-01-01
Abstract
In this work, we performed a user study in which participants had to solve a human-robot teaming decision-making task (the Connect 4 game) with either an explainable or a non-explainable robot. During the task, the robot provided suggestions and, depending on the experimental condition, explanations to justify those suggestions. We compared participants’ behaviours when interacting with the two types of robot. In particular, we investigated how participants’ personality dimensions and previous experience with the iCub robot impacted their decision-making. We also studied how participants aligned with iCub’s playing style as the interaction continued. Our results show that participants’ negative agency and agreeableness substantially influenced whether they accepted the robot’s suggestions when it provided example-based counterfactual explanations. We also observed a learning effect: participants tended to align with the robot’s playing style during the interaction. However, the participants’ learning depended not only on the presence of the explanations, but also on the time spent with the robot. Moreover, the human-robot team’s victories were mainly attributable to the robot’s persuasiveness rather than to the participants’ skill in the game.