Researchers have been using different technological solutions (platforms) as intervention tools for children with Autism Spectrum Disorder (ASD), who typically have difficulty engaging and interacting with their peers. Social robots, one example of these technological solutions, are often unaware of their game partners, which prevents them from automatically adapting their behaviour to the user. Enriching the interaction between the user and the platform, and thereby lightening the cognitive burden on the human operator, may therefore be a valuable contribution. One source of information that can enrich this interaction and, consequently, adapt the system's behaviour is the recognition of the user's actions from skeleton pose data captured by depth sensors. The present work proposes a method to automatically detect, in real time, typical and stereotypical actions of children with ASD, using the Intel RealSense camera and the Nuitrack SDK to detect the user and extract the joint coordinates. A Convolutional Neural Network trained on the different actions is used to classify the different patterns of behaviour. The model achieved an average accuracy of 92.6±0.5% on the test data, and the entire pipeline runs at 31 FPS on average.
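The title refers to an image-based temporal and spatial representation of the skeleton data. The abstract does not spell out the exact encoding, but a common scheme for such pipelines is to stack the per-frame joint coordinates into a (time × joints × XYZ) array and rescale it to an 8-bit image that a CNN can consume. The sketch below illustrates that idea only; the function name, the 19-joint count (Nuitrack's skeleton model), and the min-max normalisation are assumptions, not the authors' published method.

```python
import numpy as np

def skeleton_to_image(frames):
    """Encode a sequence of skeleton frames as an 8-bit pseudo-image.

    frames: array-like of shape (T, J, 3) holding the (x, y, z)
    coordinates of J joints over T frames (e.g. J = 19 for Nuitrack).
    Returns a (T, J, 3) uint8 array: rows = time, columns = joints,
    channels = coordinate axes. This is an illustrative encoding,
    not the paper's exact representation.
    """
    arr = np.asarray(frames, dtype=np.float64)
    # Min-max normalise each coordinate axis over the whole clip,
    # guarding against a zero range (e.g. a perfectly still joint axis).
    mn = arr.min(axis=(0, 1), keepdims=True)
    mx = arr.max(axis=(0, 1), keepdims=True)
    scaled = (arr - mn) / np.maximum(mx - mn, 1e-8)
    return (scaled * 255.0).astype(np.uint8)

# Example: a 30-frame clip of 19 joints becomes a 30x19 RGB-like image
# that can be fed to a small image-classification CNN.
clip = np.random.rand(30, 19, 3)
image = skeleton_to_image(clip)
```

Encoding the clip as an image lets an off-the-shelf 2D CNN capture both the spatial layout of the joints (columns) and their temporal evolution (rows) in a single forward pass, which is consistent with the real-time throughput reported in the abstract.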
|Title:||Human action recognition using an image-based temporal and spatial representation|
|Publication date:||2020|
|Appears in collections:||04.01 - Contribution in conference proceedings|