Visuo-Tactile Recognition of Partial Point Clouds Using PointNet and Curriculum Learning: Enabling Tactile Perception from Visual Data

Alessandro Albini, Daniele De Martini, Perla Maiolino
2022-01-01

Abstract

This paper is about recognising hand-held objects from incomplete tactile observations with a classifier trained only on visual representations. Our method is based on the Deep Learning (DL) architecture PointNet and a Curriculum Learning (CL) technique for fostering the learning of descriptors robust to partial representations of objects. The learning procedure gradually decomposes the visual point clouds to synthesise increasingly sparse input data for the model. In this manner, we were able to use one-shot learning, using the decomposed visual point clouds as augmentations, and reduce the data-collection requirement for training. The approach allows for a gradual improvement of prediction accuracy as more tactile data become available. We evaluated the effectiveness of the curriculum strategy on our generated visual and tactile datasets, experimentally showing that the proposed method improved the recognition accuracy by up to 23% on partial tactile data and boosted accuracy on full tactile data from 93% to 100%. The curriculum-trained network recognised objects with an accuracy of 80% using only 20% of the tactile data representing the objects, increasing to 100% accuracy on clouds containing at least 60% of the points.
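The curriculum idea in the abstract — training on visual point clouds that are progressively decomposed into sparser subsets, so the network becomes robust to the partial clouds that tactile sensing produces — can be sketched as a simple subsampling schedule. The function below is a hypothetical illustration, not the paper's actual decomposition procedure; the stage count, keep-ratios, and uniform random subsampling are all assumptions.

```python
import numpy as np

def curriculum_subsamples(cloud, stages=5, min_keep=0.2, seed=0):
    """Generate progressively sparser random subsamples of a point cloud.

    Hypothetical sketch of the curriculum described in the abstract: each
    stage keeps a smaller fraction of the full (visual) cloud, so a
    classifier sees dense clouds early in training and sparse, partial
    clouds later. The linear ratio schedule is an assumption.
    """
    rng = np.random.default_rng(seed)
    n = cloud.shape[0]
    # Keep-ratios decrease linearly from 1.0 (full cloud) down to min_keep.
    ratios = np.linspace(1.0, min_keep, stages)
    subsamples = []
    for r in ratios:
        k = max(1, int(round(r * n)))
        idx = rng.choice(n, size=k, replace=False)  # subsample without repeats
        subsamples.append(cloud[idx])
    return subsamples

# Example: a synthetic cloud of 1000 3-D points.
cloud = np.random.rand(1000, 3)
subs = curriculum_subsamples(cloud, stages=5, min_keep=0.2)
print([s.shape[0] for s in subs])  # → [1000, 800, 600, 400, 200]
```

Each stage's sparse clouds then serve as augmentations of the single full visual scan, which is how the abstract's one-shot setting avoids collecting a large tactile training set.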
File for this item:
Visuo-Tactile_Recognition_of_Partial_Point_Clouds_Using_PointNet_and_Curriculum_Learning_Enabling_Tactile_Perception_from_Visual_Data.pdf (open access; published journal article version; Adobe PDF, 1.74 MB)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1105342
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a