The aging of the global population underscores the need for systems able to monitor people in need in their personal or hospital environment. The isolated processing of data coming from visual or other sensors has reached its limits; however, the incorporation of multimodal information may lead to enhanced functionality and more robust recognition of emotional behavior. So far, affective computing approaches have not taken a cognitive model into consideration in the design of their experimental protocols. Moreover, the potential of neurophysiology to model human brain responses to emotional stimuli has not been sufficiently investigated.
The main objective of the project is the development of a methodology and an affective computing system for the recognition of physiological states and biological activities in assistive environments. The proposed research aims to develop human-centered computers that can understand the states of the user (identity, emotions, and movements) using audiovisual and biological signals, so as to enable an interaction based mainly on synthetic audiovisual information.