
Where’s my mesh? An exploratory study on model-free grasp planning

BOTTAREL, FABRIZIO
2021-05-12

Abstract

The capability to manipulate objects is fundamental for robots to interact with and integrate into the environments they are deployed in. Since most manipulation actions involve or begin with picking up objects, grasping is an integral part of any manipulation toolbox. Although planning grasps on known object shapes is a mature research field with decades of work behind it, such shape information is not available in many realistic scenarios. For robots deployed in dynamic and unstructured environments such as homes, stores or outdoor settings, it can be cumbersome or outright unfeasible to know the complete 3D shape of target objects, or even to recover it in place with exploration techniques. In such cases, approaches that allow planning optimal grasps from partial 3D information are essential for attempting manipulation actions, and the research field is far from exhausted. To model such a use case, in this Thesis we consider a target scenario consisting of a surface cluttered with everyday household objects, and we explore different approaches to grasp planning with single-view point clouds, under the hypothesis that the 3D models of the objects are not known a priori. Although many state-of-the-art methods can find feasible grasps on the observable part of the scene, explicitly modeling the 3D shape of the target extends this capability to unobservable object sides. Initially, we formulate the hypothesis that geometric primitives such as superquadrics can be used effectively as a modeling tool. Following this line of thought, we propose a grasp planning method that constrains the superquadric parameterization to account for the characteristics of the target scenario. To evaluate the performance of our method, we tackle the lack of widespread benchmarking protocols for grasp planning by proposing GRASPA, a complete benchmarking tool inspired by reproducibility and interpretability principles. GRASPA allowed us to evaluate grasping pipelines with different features on different robotic setups using the same rigorous experimental procedure, and to observe the failure cases of each approach. In the final part of the Thesis, we show a possible way of overcoming the limitations of primitive-based methods by studying shape completion methods that have recently been gaining traction in the computer vision and robot vision fields. Thanks to these methods, the 3D description of the target object can be reconstructed from partial views by leveraging past experience (in the form of a learned model) to infer information about the unseen parts of objects. We present a proof of concept of how shape completion deep autoencoders can be effectively integrated into a grasp planning algorithm, and we point to promising research avenues by showing the importance of their internal representation.
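As an illustration of the superquadric modeling idea summarized above, the sketch below fits a superquadric to a partial point cloud by least-squares minimization of the standard inside-outside function F(x, y, z) = ((|x|/a1)^(2/e2) + (|y|/a2)^(2/e2))^(e2/e1) + (|z|/a3)^(2/e1), whose level set F = 1 is the superquadric surface. It is a minimal sketch assuming points already expressed in an object-centered frame and a plain least-squares cost; the function names, initial guess and parameter bounds are illustrative and do not reproduce the constrained parameterization proposed in the Thesis.

# Minimal sketch (not the Thesis's implementation): fit a superquadric to a
# partial point cloud by least squares on the inside-outside function.
import numpy as np
from scipy.optimize import minimize

def inside_outside(points, a, eps):
    # points: (N, 3) array expressed in the superquadric reference frame
    # a = (a1, a2, a3) semi-axes, eps = (eps1, eps2) shape exponents
    x, y, z = np.abs(points).T
    term_xy = (x / a[0]) ** (2.0 / eps[1]) + (y / a[1]) ** (2.0 / eps[1])
    return term_xy ** (eps[1] / eps[0]) + (z / a[2]) ** (2.0 / eps[0])

def fit_superquadric(points):
    # Illustrative parameter vector: 3 semi-axes + 2 shape exponents
    # (a full planner would also optimize the 6-DoF pose of the primitive).
    def cost(theta):
        a, eps = theta[:3], theta[3:]
        F = inside_outside(points, a, eps)
        # Points lying on the recovered surface should satisfy F = 1
        return np.sum((F ** eps[0] - 1.0) ** 2)

    theta0 = np.array([0.05, 0.05, 0.05, 1.0, 1.0])  # initial guess (semi-axes in meters)
    bounds = [(1e-3, 0.5)] * 3 + [(0.1, 2.0)] * 2    # keep the primitive well-behaved
    return minimize(cost, theta0, bounds=bounds, method="L-BFGS-B").x

Once fitted, the implicit surface F = 1 provides a closed 3D model of the object on which candidate grasp poses can be evaluated, including on sides that are not visible in the single-view point cloud.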
Keywords: robotics; grasping; grasp planning; manipulation; benchmarking; 3D perception; shape completion
Files in this item:

File: phdunige_4466541.pdf
Description: Doctoral thesis
Type: Doctoral thesis
Access: open access
Size: 13.63 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1045782