Accuracy and Intrusiveness in Data-Driven Violin Players Skill Levels Prediction: MOCAP Against MYO Against KINECT

D'Amato V.; Volta E.; Oneto L.; Volpe G.; Camurri A.; Anguita D.
2021-01-01

Abstract

Learning to play and perform the violin is a complex task that requires a high level of conscious control and coordination from the player. In this paper, our aim is to understand which technology and which motion features can be used to efficiently and effectively distinguish a professional performance from a student one, trading off intrusiveness and accuracy. We collected and made freely available a dataset consisting of Motion Capture (MOCAP), Electromyography, Accelerometer, and Gyroscope (MYO), and Microsoft Kinect (KINECT) recordings of violinists with different skill levels performing different exercises covering different pedagogical and technical aspects. We then engineered dedicated features from the different sources (MOCAP, MYO, and KINECT) and trained a data-driven classifier to distinguish between two levels of violinist experience, namely Beginners and Experts. We then studied how much accuracy is lost when, instead of MOCAP data (the most intrusive and costly technology), MYO data (less intrusive than MOCAP) or KINECT data (the least intrusive technology) are exploited. In accordance with the hierarchy present in the dataset, we studied two different scenarios: extrapolation with respect to different exercises and with respect to different violinists. Furthermore, we studied which features are the most predictive of the quality of a violinist, in order to corroborate the significance of the results. The results, both in terms of accuracy and of insight into the cognitive problem, support the proposal and the use of the presented technique as an effective tool for students to monitor and enhance their home study and practice.
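
As a concrete illustration of the evaluation protocol described in the abstract, the following minimal Python sketch shows how a Beginner-vs-Expert classifier could be scored under the two extrapolation scenarios (unseen exercises and unseen violinists) via leave-one-group-out cross-validation, and how a feature ranking could then be extracted. The data shapes, feature semantics, and the choice of a Random Forest are illustrative assumptions, not the authors' exact pipeline.

# Hedged sketch: grouped cross-validation for the two extrapolation
# scenarios. All names and shapes below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in for engineered motion features: one row per recording
# window, one column per feature (e.g., bow-hand smoothness, wrist
# acceleration RMS).
n_windows, n_features = 240, 12
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 2, size=n_windows)             # 0 = Beginner, 1 = Expert
exercise_id = rng.integers(0, 6, size=n_windows)   # exercise each window came from
violinist_id = rng.integers(0, 8, size=n_windows)  # player each window came from

clf = RandomForestClassifier(n_estimators=200, random_state=0)
logo = LeaveOneGroupOut()

# Scenario 1: extrapolate to exercises never seen during training.
acc_exercise = cross_val_score(clf, X, y, groups=exercise_id, cv=logo)
# Scenario 2: extrapolate to violinists never seen during training.
acc_violinist = cross_val_score(clf, X, y, groups=violinist_id, cv=logo)
print(f"unseen-exercise accuracy:  {acc_exercise.mean():.2f}")
print(f"unseen-violinist accuracy: {acc_violinist.mean():.2f}")

# Which features drive the prediction (cf. the paper's feature ranking):
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("most predictive feature indices:", ranking[:3])

Grouping by exercise or by violinist keeps all windows from one group out of training, so the reported accuracy reflects genuine extrapolation rather than memorization of a specific player or exercise.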
2021
ISBN: 978-3-030-85098-2; 978-3-030-85099-9
Files in this product:
File: C095.pdf (closed access)
Description: Conference proceedings contribution
Type: Post-print document
Size: 997.93 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1086614
Citations
  • PubMed Central: ND
  • Scopus: 2
  • Web of Science: 2