Building an adaptive interface via unsupervised tracking of latent manifolds

Rizzoglio F.; Casadio M.; De Santis D.
2021-01-01

Abstract

In human–machine interfaces, decoder calibration is critical to enable effective and seamless interaction with the machine. However, recalibration is often necessary because the decoder's offline predictive power does not generally imply ease of use, owing to closed-loop dynamics and user adaptation that cannot be accounted for during the calibration procedure. Here, we propose an adaptive interface that uses a non-linear autoencoder trained iteratively to perform online manifold identification and tracking, with the dual goal of reducing the need for interface recalibration and enhancing human–machine joint performance. Importantly, the proposed approach avoids interrupting the operation of the device and relies neither on information about the state of the task nor on the existence of a stable neural or movement manifold, allowing it to be applied in the earliest stages of interface operation, when the formation of new neural strategies is still ongoing. To test the performance of our algorithm more directly, we defined the autoencoder latent space as the control space of a body–machine interface. After an initial offline parameter tuning, we evaluated the performance of the adaptive interface against that of a static decoder in approximating the evolving low-dimensional manifold of users simultaneously learning to perform reaching movements within the latent space. Results show that the adaptive approach increased the representational efficiency of the interface decoder. Concurrently, it significantly improved users' task-related performance, indicating that the online co-adaptation process encourages the development of a more accurate internal model.
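The abstract describes the core mechanism at a high level: a non-linear autoencoder is updated incrementally on incoming body signals, without any task information, and its low-dimensional latent activations serve directly as the control space of the interface. The Python sketch below illustrates that idea under assumptions of our own (8 input channels, a 2-D latent space, a small PyTorch network, and an arbitrary learning rate); it is a minimal sketch, not the authors' implementation.

# Minimal sketch (not the authors' implementation) of an adaptive
# body-machine interface: a non-linear autoencoder with a 2-D latent
# bottleneck is updated online on incoming body signals, and its latent
# activations are used directly as 2-D cursor commands.
# Layer sizes, learning rate, and signal dimensions are illustrative.
import torch
import torch.nn as nn

N_SIGNALS = 8      # hypothetical number of body-signal channels
LATENT_DIM = 2     # latent space doubles as the 2-D control space

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_SIGNALS, 16), nn.Tanh(),
            nn.Linear(16, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 16), nn.Tanh(),
            nn.Linear(16, N_SIGNALS),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent coordinates = control command
        return z, self.decoder(z)    # reconstruction used for training

ae = Autoencoder()
optimizer = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def control_step(signal_batch):
    """One interface cycle: map body signals to latent cursor coordinates and
    take a single unsupervised gradient step to track the evolving manifold."""
    x = torch.as_tensor(signal_batch, dtype=torch.float32)
    z, x_hat = ae(x)
    loss = loss_fn(x_hat, x)         # purely unsupervised: no task-state information
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return z.detach()                # 2-D command(s) sent to the device

Because training uses only the reconstruction error of the incoming signals, the decoder can keep adapting while the user operates the device, which is the property the abstract emphasizes.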
Files in this product:
File: RizzoglioEtAl21.pdf
Access: open access
Description: Journal article
Type: Publisher's version
Size: 1.02 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1070636
Citations
  • PMC: 1
  • Scopus: 12
  • Web of Science (ISI): 8