
The Importance of Multiple Temporal Scales in Motion Recognition: from Shallow to Deep Multi Scale Models

D'Amato V.; Oneto L.; Camurri A.; Anguita D.
2022-01-01

Abstract

Studying human motion requires modelling its multiple temporal scale nature in order to fully describe its complexity, since different muscles are activated and coordinated by the brain at different temporal scales in a complex cognitive process. Nevertheless, current approaches are not able to address this requirement properly and rely on oversimplified models with obvious limitations. Data-driven methods represent a viable tool to address these limitations. However, shallow data-driven models, while achieving reasonably good recognition performance, require handcrafting features based on domain-specific knowledge which, in this case, is limited and does not allow motion- and subject-specific temporal scales to be modelled properly. In this work, we propose a new deep multiple temporal scale data-driven model, based on Temporal Convolutional Networks, able to automatically learn features from the data at different temporal scales. Our proposal focuses first on outperforming state-of-the-art shallow and deep models in terms of recognition performance. Then, thanks to the use of feature ranking for shallow models and attention maps for deep models, we give insights into what the different architectures actually learned from the data. We designed, collected data for, and tested our proposal in a custom motion recognition experiment: identifying the person who drew a particular shape (i.e., an ellipse) on a graphics tablet, collecting data about his/her movement (e.g., pressure and speed) in different extrapolating scenarios (e.g., training with data collected from one hand and testing the model on the other). The data collected in our experiment and the code of the methods are also made freely available to the research community.
Results, both in terms of accuracy and of insight into the cognitive problem, support the proposal and support the use of the proposed technique as a tool for better understanding human movement and its multiple temporal scale nature.
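The abstract's core mechanism, learning features at multiple temporal scales, rests on the stacked dilated causal convolutions that define a Temporal Convolutional Network. A minimal numpy sketch below (not the authors' implementation; kernel weights and sizes are illustrative) shows how exponentially growing dilations make each layer attend to a longer temporal scale, and how the receptive field of the stack grows accordingly.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1D causal convolution: output at t depends only on x[t], x[t-d], x[t-2d], ...
    (zero-padded on the left, so no future samples leak in)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def tcn_receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated causal conv layers:
    1 + sum((k - 1) * d) over the layer dilations."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Hypothetical 3-tap kernel; dilations double per layer (1, 2, 4, 8),
# so each layer covers a coarser temporal scale of the input signal.
w = np.array([0.5, 0.3, 0.2])
x = np.zeros(16)
x[5] = 1.0                               # single impulse in the input
y = causal_dilated_conv(x, w, dilation=2)

print(y[4])                              # 0.0 — strictly before the impulse
print(tcn_receptive_field(3, [1, 2, 4, 8]))  # 31 samples for a 4-layer stack
```

With kernel size 3 and dilations (1, 2, 4, 8), a single output sample sees 31 input samples, which is how a shallow stack of layers can jointly model fast (fine-dilation) and slow (coarse-dilation) components of a movement signal.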
2022
978-1-7281-8671-9
Files in this record:
File: C109.pdf (Adobe PDF, 537.89 kB) — closed access
Description: Conference proceedings contribution
Type: Post-print document

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1102737
Citations
  • Scopus: 0
  • Web of Science: 0