A system for real-time synthesis of subtle expressivity for life-like MPEG-4 based Virtual Characters

Braccini, Carlo Andrea; Lavagetto, Fabio
Publication date: 2004-01-01

Abstract

Non-verbal behaviors play a key role in making a virtual character appear life-like. We describe an extensible system for the specification, control, and real-time generation of facial expressions and gestures. The system approximates, in an MPEG-4 based virtual character, the wide expressive range, dynamism (an expression's meaning depends significantly on its temporal evolution), and variability (an emotion is never expressed in exactly the same way by different people, or even by the same person at different times) typical of human non-verbal behavior. The MPEG-4 standard allows only high-level control of six basic emotions and does not explicitly support the description of an expression's temporal evolution. Our approach has been to create a hierarchical model of expressiveness: expressions are defined in terms of parameterized functions that control the trajectories of low-level animation parameters, by means of an XML-based expression definition markup language. The real-time generation of these expressions is performed by an expression synthesis engine. The system allows expressivity to be modulated effectively both at design time (the developer tweaks the parameters to give the character a particular expressive style) and at run time (the engine automatically varies the way an expression is performed each time), producing controllable but non-deterministic behavior patterns, a key factor in enhancing believability.
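
The abstract gives no implementation details, so the following Python sketch is only a rough illustration of the architecture it describes: a hypothetical XML expression definition whose parameters drive the trajectory of a low-level animation parameter (FAP), with a small random perturbation applied at synthesis time so the same definition never replays identically. The element and attribute names (expression, fap, amplitude, attack, decay, variability) and the sinusoidal attack-hold-decay envelope are assumptions, not the paper's actual markup or trajectory model.

    # A minimal sketch (not the authors' code) of an XML-driven expression
    # synthesis step with run-time variability. All names are assumptions.
    import math
    import random
    import xml.etree.ElementTree as ET

    EXPRESSION_XML = """
    <expression name="smile">
      <fap id="12" amplitude="0.8" attack="0.2" decay="0.4" variability="0.1"/>
      <fap id="13" amplitude="0.8" attack="0.2" decay="0.4" variability="0.1"/>
    </expression>
    """

    def synthesize(xml_text, duration=1.0, fps=25):
        """Generate per-frame values for each low-level parameter (FAP)."""
        root = ET.fromstring(xml_text)
        frames = int(duration * fps)
        trajectories = {}
        for fap in root.iter("fap"):
            amp = float(fap.get("amplitude"))
            attack = float(fap.get("attack"))  # fraction of duration rising
            decay = float(fap.get("decay"))    # fraction of duration falling
            var = float(fap.get("variability"))
            # Run-time variability: jitter the peak amplitude on each call.
            amp *= 1.0 + random.uniform(-var, var)
            values = []
            for f in range(frames):
                t = f / (frames - 1)
                if t < attack:                 # rise toward the apex
                    v = amp * math.sin(0.5 * math.pi * t / attack)
                elif t > 1.0 - decay:          # relax back to neutral
                    v = amp * math.sin(0.5 * math.pi * (1.0 - t) / decay)
                else:                          # hold the apex
                    v = amp
                values.append(v)
            trajectories[fap.get("id")] = values
        return trajectories

    # Each call yields a slightly different trajectory for the same definition:
    print(synthesize(EXPRESSION_XML)["12"][:5])

In this reading, design-time modulation corresponds to editing the XML attributes, while run-time modulation corresponds to the stochastic perturbation inside the engine, which keeps the behavior controllable yet non-deterministic.
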
ISBN: 9780780385788


Use this identifier to cite or link to this document: https://hdl.handle.net/11567/232326