Intuitive Human-Robot Interfaces Leveraging on Autonomy Features for the Control of Highly-redundant Robots

Torielli, Davide
2024-02-20

Abstract

Advancements in robotics have revealed the potential of complex robotic platforms, promising the widespread adoption of robotic technologies to help people in scenarios ranging from industrial settings to households. To harness the capabilities of modern robots, it is of paramount importance to develop human-robot interfaces that allow people to operate them seamlessly. To address this challenge, traditional interface methods, such as remote controllers and keyboards, are being replaced by more intuitive means of communication that permit, for example, commanding the robot through body gestures and receiving feedback beyond the visual domain, such as tactile cues. At the same time, the robot must be equipped with autonomous capabilities that relieve operators from considering every aspect of the task and of the robot's motions, thus reducing their workload, decreasing task execution time, and minimizing the possibility of failures.

This PhD thesis takes on these challenges by exploring and developing innovative human-robot interaction paradigms that focus on three key aspects: enabling intuitive human-robot communication, enhancing the user's situation awareness, and incorporating different levels of robot autonomy. With the TelePhysicalOperation interface, the user teleoperates the different capabilities of a robot (e.g., single/dual-arm manipulation, wheeled/legged locomotion) by applying virtual forces on selected robot body parts. This approach emulates the intuitiveness of physical human-robot interaction while permitting teleoperation from a safe distance, in a way that resembles a "Marionette" interface. The system is further enhanced with wearable haptic feedback to align better with the "Marionette" metaphor, and a user study has been conducted to validate its efficacy with and without the haptic channel enabled. Considering the importance of robot independence, the TelePhysicalOperation interface incorporates autonomy modules to address, for example, the teleoperation of dual-arm mobile robots in bimanual object grasping and transportation tasks.

With the laser-guided interface, the user indicates points of interest to the robot through a simple but effective laser emitter device. Using a neural network-based vision system, the robot tracks the laser projection in real time, allowing the user to indicate not only fixed goals, such as objects, but also paths to follow. Through the implemented autonomous behavior, a mobile manipulator employs its locomanipulation abilities to reach the indicated goals. The behavior is modeled with Behavior Trees, exploiting their reactivity to promptly respond to changes in goal positions, and their modularity to adapt the motion planning to the needs of the task. The proposed laser interface has also been employed in an assistive scenario, in which users with upper limb impairments control an assistive manipulator by directing a head-worn laser emitter at points of interest, to collaboratively address activities of daily living.

In summary, this research contributes to effectively exploiting the extensive capabilities of modern robotic systems through user-friendly human-robot interfaces, further reducing the gap that still prevents the wide adoption of robotic systems.
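To make the "Marionette" idea concrete, the following is a minimal sketch of how a virtual-force teleoperation law of this kind could look. The gains, the displacement-to-force mapping, and the toy Jacobian are assumptions made for illustration, not the thesis' actual control law.

```python
# Minimal sketch of the "virtual force" idea behind TelePhysicalOperation,
# under assumed gains and a toy Jacobian (NOT the thesis' actual control law):
# the operator's arm displacement becomes a virtual force at a chosen robot
# body point, an admittance law turns that force into a velocity command,
# and the point's Jacobian resolves it to joint velocities.
import numpy as np

K_STIFF = 40.0     # N/m, assumed virtual stiffness (displacement -> force)
ADMITTANCE = 0.02  # (m/s)/N, assumed admittance gain (force -> velocity)

def virtual_force(arm_displacement: np.ndarray) -> np.ndarray:
    """Map the operator's arm displacement (3,) to a virtual force (3,)."""
    return K_STIFF * arm_displacement

def joint_velocities(force: np.ndarray, jacobian: np.ndarray) -> np.ndarray:
    """Resolve the commanded body-point velocity to joint velocities
    through the pseudo-inverse of that point's linear Jacobian (3 x n)."""
    v_point = ADMITTANCE * force      # admittance law: force -> velocity
    return np.linalg.pinv(jacobian) @ v_point

# Example: the operator pulls 5 cm along x; 3-DoF chain with a toy Jacobian.
J = np.array([[0.3, 0.2, 0.1],
              [0.0, 0.3, 0.2],
              [0.1, 0.0, 0.3]])
dq = joint_velocities(virtual_force(np.array([0.05, 0.0, 0.0])), J)
print(dq)  # joint velocity command [rad/s]
```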
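Similarly, the reactive Behavior Tree structure mentioned above can be illustrated with a minimal, self-contained sketch. The node types, blackboard keys, and leaf behaviors (goal check, replanning, path following) are illustrative placeholders rather than the thesis' implementation; the point is how re-evaluating the goal condition on every tick yields reactivity to a moving laser goal.

```python
# Self-contained sketch of a reactive Behavior Tree for laser-guided
# locomanipulation. Node types, blackboard keys, and leaf behaviors are
# illustrative placeholders, not the thesis' actual implementation.
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class Sequence:
    """Ticks children left to right; stops at the first non-SUCCESS."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Ticks children left to right; stops at the first non-FAILURE."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status is not Status.FAILURE:
                return status
        return Status.FAILURE

class Leaf:
    """Wraps a condition or action callable operating on a blackboard."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, bb):
        return self.fn(bb)

def goal_unchanged(bb):
    # Condition, re-evaluated on every tick: if the tracked laser spot
    # has moved, it fails, which forces a replan (the tree's reactivity).
    return Status.SUCCESS if bb.get("planned_for") == bb["laser_goal"] else Status.FAILURE

def plan_locomanipulation(bb):
    # Action: (re)plan a base + arm motion toward the current goal.
    bb["path"] = ["base_motion", "arm_reach"]   # stand-in for a real planner
    bb["planned_for"] = bb["laser_goal"]
    return Status.SUCCESS

def follow_path(bb):
    # Action: execute one plan step per tick; RUNNING until the path ends.
    if bb["path"]:
        bb["path"].pop(0)
        return Status.RUNNING
    return Status.SUCCESS

root = Sequence(
    Fallback(Leaf(goal_unchanged), Leaf(plan_locomanipulation)),
    Leaf(follow_path),
)

bb = {"laser_goal": (1.0, 2.0), "planned_for": None, "path": []}
while root.tick(bb) is Status.RUNNING:
    pass  # a real loop would let the vision callback update bb["laser_goal"]
```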
Keywords: Human-Robot Interfaces; Telerobotics and Teleoperation; Shared Control; Mobile Manipulation; Haptic Interfaces; Assistive Human-Robot Collaboration
Files in this record:

File: phdunige_4119809.pdf (open access)
Type: Doctoral thesis
Format: Adobe PDF
Size: 33.36 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1160113