In this paper, a complete Smart Space architecture and a related system prototype are presented; the system is able to analyse situations of interest in a given environment and to produce contextual information related to them. Experimental results show that video information plays a major role in both situation perception and personalized context-aware communication. For this reason, the proposed multisensor system automatically extracts information from multiple cameras, as well as from diverse sensors describing the environment status, and uses this information to trigger personalized, context-aware video messages adaptively sent to users. A rule-based module is in charge of customizing video messages according to the type of user, the contextual situation, and the user's terminal. The system outputs graphically generated video messages consisting of an animated avatar (i.e., a Virtual Character), closing the loop with users. The proposed results validate the conceptual schema behind the architecture and its successful adaptation to the analysis of different situations.
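The abstract does not detail how the rule-based customization module works; the following is a minimal hypothetical sketch of such a mapping from (user type, situation, terminal) to video-message parameters. All rule names, profiles, and field values here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a rule-based video-message customization step.
# The user types, situations, terminals, and message fields below are
# assumptions for illustration; the paper does not specify them.
from dataclasses import dataclass


@dataclass
class Context:
    user_type: str   # e.g. "visitor" or "staff"
    situation: str   # situation label produced by the multisensor analysis
    terminal: str    # e.g. "mobile" or "kiosk"


def customize_message(ctx: Context) -> dict:
    """Apply simple rules to derive video-message parameters from context."""
    msg = {"avatar": "default", "resolution": "high", "text": ""}
    # Rule 1: adapt rendering quality to the user's terminal.
    if ctx.terminal == "mobile":
        msg["resolution"] = "low"
    # Rule 2: choose message content from the detected situation.
    if ctx.situation == "crowded_exit":
        msg["text"] = "Please use the alternative exit."
    else:
        msg["text"] = "Welcome to the smart space."
    # Rule 3: staff members get a distinct avatar.
    if ctx.user_type == "staff":
        msg["avatar"] = "staff_assistant"
    return msg


print(customize_message(Context("visitor", "crowded_exit", "mobile")))
```

In a design like this, adding a new situation or terminal type only requires adding a rule, which matches the adaptivity the abstract attributes to the system.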
|Title:||VIDEO PROCESSING AND UNDERSTANDING TOOLS FOR AUGMENTED MULTISENSOR PERCEPTION AND MOBILE USER INTERACTION IN SMART SPACES|
|Publication date:||2005|
|Appears in type:||01.01 - Journal article|