Affordance detection is the task of predicting whether a specific action can be performed on an object. While this problem is generally defined for fully autonomous robotic platforms, we are interested in affordance detection for a semi-autonomous scenario with a human in the loop. In this scenario, a human first moves their robotic prosthesis (e.g., lower arm and hand) towards an object, and the prosthesis then selects the part of the object to grasp. The main challenges are the indirectly controlled camera position, which affects the quality of the view, and the limited computational resources available. This paper proposes an affordance detection pipeline that leverages object detectors to overcome framing issues while keeping the computational load low enough for the pipeline to run on resource-constrained platforms. Experimental results on two state-of-the-art datasets show improvements in affordance detection over the baseline solution, which consists of an affordance detection model alone. We argue that the combination of the selected models achieves a trade-off between performance and computational cost suitable for embedded, resource-constrained systems.
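The abstract describes a two-stage idea: an object detector first localizes the object to correct framing, and the affordance model then runs only on the resulting crop, which also bounds the computational load. The sketch below illustrates that idea under stated assumptions; the Faster R-CNN detector from torchvision, the `AffordanceHead` stub, and the score threshold are illustrative choices, not the models or parameters used in the paper.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor


class AffordanceHead(torch.nn.Module):
    """Stand-in affordance model producing per-pixel logits over affordance
    classes. Purely illustrative; the paper's actual model is not specified
    in this record."""

    def __init__(self, num_affordances: int = 7):
        super().__init__()
        self.net = torch.nn.Conv2d(3, num_affordances, kernel_size=1)

    def forward(self, x):
        return self.net(x)


# Stage 1: an off-the-shelf object detector (illustrative choice).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()
affordance_model = AffordanceHead().eval()


def detect_affordances(image, score_threshold: float = 0.7):
    """Crop the most confident detection, then predict affordances on the crop."""
    x = to_tensor(image)  # (3, H, W), floats in [0, 1]
    with torch.no_grad():
        detections = detector([x])[0]
    keep = detections["scores"] > score_threshold
    if not keep.any():
        return None  # no reliable framing; a fallback to the full image is possible
    # torchvision returns boxes sorted by score, so index 0 is the best one.
    x1, y1, x2, y2 = detections["boxes"][keep][0].round().int().tolist()
    crop = x[:, y1:y2, x1:x2].unsqueeze(0)  # restrict computation to the object
    with torch.no_grad():
        logits = affordance_model(crop)
    return logits.argmax(dim=1)  # per-pixel affordance labels for the crop
```

Restricting the affordance model to the detected region is what makes the trade-off plausible on embedded hardware: the expensive per-pixel prediction runs on a small, well-framed crop rather than the full frame.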
Title: An Affordance Detection Pipeline for Resource-Constrained Devices
Authors:
Publication date: 2021
Handle: http://hdl.handle.net/11567/1073845
ISBN: 978-1-7281-8281-0
Record type: 04.01 - Contribution in conference proceedings