This paper presents recent developments at the DIST InfoMus Lab on multimodal and cross-modal processing of multimedia data streams, with a particular focus on interactive systems exploiting Tangible Acoustic Interfaces (TAIs). In our research, multimodal and cross-modal algorithms are employed to enhance the extraction and analysis of the expressive information conveyed by gesture in non-verbal interaction. The paper discusses concrete examples of such algorithms, focusing on the analysis of high-level features extracted from the expressive gestures of subjects interacting with TAIs. The features for explicit support of multimodal and cross-modal processing in the new EyesWeb 4 open platform (available at www.eyesweb.org) are also introduced. The results have been exploited in a series of public events, in which the developed techniques were applied and evaluated in experiments involving both experts and the general audience. The research is carried out in the framework of the EU-IST STREP project TAI-CHI (Tangible Acoustic Interfaces for Computer-Human Interaction).
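For readers unfamiliar with TAIs: such interfaces typically turn an ordinary solid object into an input surface by localizing touch events from the acoustic waves a tap propagates through the material. The sketch below is a minimal illustration of one common localization approach, time-difference-of-arrival (TDOA) estimation; the sensor layout, wave speed, grid-search solver, and all function names are assumptions made for illustration and are not taken from the paper or from the EyesWeb platform.

```python
# Hypothetical sketch: localizing a tap on a 1 m x 1 m rigid surface from
# time-difference-of-arrival (TDOA) measurements at four contact sensors.
# All constants and the solver are illustrative assumptions.
import numpy as np

SENSORS = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # corners, m
WAVE_SPEED = 1500.0  # assumed in-solid propagation speed, m/s


def predicted_tdoas(point, sensors=SENSORS, c=WAVE_SPEED):
    """Arrival-time differences of sensors 1..n-1 relative to sensor 0."""
    distances = np.linalg.norm(sensors - point, axis=1)
    arrival_times = distances / c
    return arrival_times[1:] - arrival_times[0]


def locate_tap(measured_tdoas, resolution=150):
    """Grid search for the surface point whose predicted TDOAs best
    match the measured ones in the least-squares sense."""
    best, best_err = None, np.inf
    for x in np.linspace(0.0, 1.0, resolution):
        for y in np.linspace(0.0, 1.0, resolution):
            err = np.sum((predicted_tdoas(np.array([x, y])) - measured_tdoas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best


if __name__ == "__main__":
    # Simulate a tap and slightly noisy TDOA measurements.
    true_point = np.array([0.3, 0.7])
    noisy = predicted_tdoas(true_point) + np.random.normal(0.0, 1e-6, 3)
    print("estimated tap position:", locate_tap(noisy))
```

A brute-force grid search keeps the sketch self-contained; a practical system would instead use a closed-form or iterative TDOA solver and would have to account for dispersive, material-dependent wave speeds.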
Title: Multimodal and cross-modal processing in interactive systems based on tangible acoustic interfaces
Publication date: 2005
Type: 04.01 - Conference proceedings contribution