Human interaction often entails lies. Detecting when a partner is being deceitful is an important social skill, one that robots will also need in order to navigate social exchanges properly. In this work, we investigate how good human observers are at detecting false claims and which features they base their judgments on. Moreover, we compare their performance with that of a lie-detection algorithm developed for the robot iCub and based solely on pupillometry. We ran an online survey asking participants to classify as truthful or deceptive 20 videos of individuals describing complex drawings to iCub, either truthfully or untruthfully. Participants also had to rate their confidence and provide a written motivation for each classification. Respondents achieved an average accuracy of 53.9%, scoring higher on detecting lies (55.4%) than on detecting true statements (52.8%). They also performed better, and more confidently, on the videos iCub failed to classify than on those iCub classified correctly. Interestingly, the human observers listed a wide range of behavioral features as the basis for deciding whether a speaker was lying, whereas the robot's judgment was driven by pupil size alone. This suggests that one avenue for improving lie detection could be a joint effort between humans and robots, in which human sensitivity to subtle behavioral cues complements the quantitative assessment of physiological signals that the robot can perform. Finally, based on the reported motivations, we speculate on and offer suggestions for how the field of lie detection should evolve, with the aim of making it portable to real-world interactions.