
Social Engineering Defense Solutions Through Human-Robot Interaction

PASQUALI, DARIO
2022-07-29

Abstract

Social engineering is the science of using social interaction to influence others into taking computer-related actions in the attacker's interest. It is used to steal credentials, money, or people's identities. Long left unchecked, social engineering now raises increasing concern. Despite its social nature, state-of-the-art defense systems focus mainly on engineering factors: they detect technical features specific to the medium employed in the attack (e.g., phishing emails), or they train end users to detect such attacks. However, the crucial aspects of social engineering are humans, their vulnerabilities, and how attackers leverage them to gain victims' compliance. Recent solutions involve victims' explicit perception and judgment in technical defenses (the Humans-as-a-Security-Sensor paradigm). However, humans also communicate implicitly: gaze, heart rate, sweating, body posture, and voice prosody are physiological and behavioral cues that implicitly disclose a person's cognitive and emotional state. In the literature, expert social engineers report continuously monitoring such cues in their victims to adapt their strategy (e.g., in face-to-face attacks); they also stress the importance of controlling their own cues to avoid revealing their malicious intentions.

This thesis studies how to leverage such behavioral and physiological cues to defend against social engineering. Moreover, it investigates humanoid social robots, specifically the iCub and Furhat robotic platforms, as novel agents in the cybersecurity field. Humans' trust in robots, and the role robots should play, are still debated: attackers could hijack and control robots to perform face-to-face attacks from a safe distance. However, this thesis speculates that robots could instead be helpers, everyday companions able to warn users against social engineering attacks better than traditional notification vectors can. Finally, the thesis explores game-based, entertaining human-robot interactions as a means to collect more realistic, less biased data.

For this purpose, I performed four studies concerning different aspects of social engineering. First, I studied how the trust between attackers and victims evolves and can be exploited. In a Treasure Hunt game, players had to decide whether to trust the hints of iCub. The robot exhibited four mechanical failures designed to undermine its perceived reliability in the game and could provide transparent explanations for them. The study showed that players' trust in iCub decreased only if they perceived all the faults or the robot explained them, i.e., only when they perceived the risk of relying on a faulty robot.

Second, I researched novel physiology-based methods to unmask malicious social engineers. In a Magic Trick card game, led autonomously by the iCub robot, players lied or told the truth about the gaming cards they described. iCub leveraged an end-to-end deception detection architecture to identify lies from players' pupil dilation alone. The architecture enables iCub to learn customized deception patterns, improving classification over prolonged interactions (see the illustrative sketch below).

Third, I focused on victims' behavioral and physiological reactions during social engineering attacks, and on how to evaluate their awareness. Participants played an interactive storytelling game designed to challenge them with social engineering attacks from virtual agents and the humanoid robot iCub.
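As a concrete illustration of the deception-detection approach in the second study, here is a minimal sketch of an incrementally updated, pupil-dilation-based lie classifier. The feature set (baseline-corrected mean, peak, and trend of pupil diameter), the scikit-learn SGDClassifier, and all names below are illustrative assumptions, not the end-to-end architecture actually deployed on iCub.

# Minimal sketch: classifying lies from pupil dilation, with incremental
# per-player updates. Features and classifier choice are assumptions for
# illustration; they are not the thesis' deployed architecture.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

def pupil_features(diameter: np.ndarray, baseline: float) -> np.ndarray:
    """Summarize one answer window of pupil diameter samples (mm)."""
    corrected = diameter - baseline          # baseline-correct the window
    trend = np.polyfit(np.arange(len(corrected)), corrected, 1)[0]
    return np.array([corrected.mean(),       # mean dilation
                     corrected.max(),        # peak dilation
                     trend])                 # dilation trend over time

class OnlineLieDetector:
    """Binary truth/lie classifier updated after each labeled trial."""
    def __init__(self):
        self.scaler = StandardScaler()
        self.clf = SGDClassifier(loss="log_loss", random_state=0)
        self._fitted = False

    def update(self, diameter, baseline, is_lie: bool):
        x = pupil_features(diameter, baseline).reshape(1, -1)
        self.scaler.partial_fit(x)
        self.clf.partial_fit(self.scaler.transform(x),
                             [int(is_lie)], classes=[0, 1])
        self._fitted = True

    def predict(self, diameter, baseline) -> bool:
        if not self._fitted:
            return False                     # no evidence yet: assume truth
        x = pupil_features(diameter, baseline).reshape(1, -1)
        return bool(self.clf.predict(self.scaler.transform(x))[0])

The per-trial partial_fit updates mirror the idea of learning customized, per-player deception patterns over prolonged interactions, though the thesis' actual architecture may differ in features and model.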
Post hoc, I trained three Random Forest classifiers to detect whether participants perceived the risk and uncertainty of the social engineering attacks and to predict their decisions.

Fourth and finally, I explored how social humanoid robots should intervene to prevent victims' compliance with social engineering. In a refined version of the interactive storytelling game, the Furhat robot countered players' decisions with different strategies to change their minds. Preliminary results suggest the robot effectively affected participants' decisions, motivating further studies toward closing the social engineering defense loop in human-robot interaction.

Summing up, this thesis provides evidence that humans' implicit cues and social robots can help defend against social engineering; it offers practical defensive solutions and architectures that support further research in the field, and it discusses them with concrete applications in mind.
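The following is a minimal sketch of the three-classifier post-hoc analysis mentioned above, assuming tabular per-trial features; the placeholder data, target names, and cross-validation protocol are assumptions for illustration, not the thesis' actual pipeline.

# Minimal sketch: three independent Random Forests, one per target
# construct (risk perception, uncertainty, final decision). The data
# below are random placeholders standing in for real per-trial
# behavioral and physiological features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))            # placeholder feature matrix
targets = {                              # hypothetical binary labels
    "perceived_risk":        rng.integers(0, 2, 120),
    "perceived_uncertainty": rng.integers(0, 2, 120),
    "complied_with_attack":  rng.integers(0, 2, 120),
}

for name, y in targets.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")

With real gaze, heart-rate, and skin-conductance aggregates in X and questionnaire-derived labels in targets, the same loop yields one classifier per construct.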
Keywords: social engineering; human-robot interaction; physiology; machine learning; game; autonomy
Files in this item:

phdunige_461448.pdf
  Description: doctoral thesis 461448
  Type: doctoral thesis
  Size: 11.06 MB
  Format: Adobe PDF
  Open Access since 30/01/2023

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11567/1092333