The aging of the global population creates a pressing need for systems able to monitor people in need in their personal or hospital environments. The isolated processing of data from visual or other sensors has reached its limits; the incorporation of multimodal information, however, may lead to enhanced functionality and more robust recognition of emotional behavior. So far, affective computing approaches have not taken a cognitive model into consideration in the design of their experimental protocols. Moreover, the potential of neurophysiology to model human brain responses to emotional stimuli has not been sufficiently investigated.

Our research in this project aims at the development of methodologies and tools for composing pervasive, human-centered systems able to understand the human state (identity, emotions, and behavior) in assistive environments using audiovisual and biological signals. The methodologies we aim to develop will offer services such as support for elderly, disabled, or chronically ill patients; detection of critical situations from audiovisual content; and biosignal and neurophysiological analysis for the detection of pathology (e.g., Alzheimer's disease) as well as for treatment follow-up.

Our main research goals are to develop:

• Tools to recognize human behaviors from audiovisual content using the fusion of non-invasive sensors, such as visual and audio sensors. Such behaviors include motion patterns and trajectories, as well as abnormal events such as falls.

• Tools to identify affect and emotion from audio-visual data in a non-invasive way.

• Tools to analyze neurophysiological data (EEG, ECG, SCR) in order to recognize emotional states and perform follow-up studies.

• Tools for dynamic multimodal human-machine interaction. In addition, visual attention mechanisms will be investigated and correlated with social behavior.

• An educational portal for researchers and professionals, where educational material and the created datasets will be freely accessible.

The non-invasive services of the proposed system are expected to be more attractive to users, though less accurate. The feasibility and reliability of novel services based on non-invasive methods, such as audiovisual content processing, will be evaluated against those of invasive ones, such as neurophysiological analysis.


We hope that you will find the content of this website interesting and stimulating, and that it helps you become acquainted with this new, human-centered aspect of modern computational systems.



Dr. Ilias Maglogiannis

Coordinator of STHENOS project

University of Piraeus, Department of Digital Systems