Multi-modal analysis of human behaviour

External Member

David Ahmedt-Aristizabal

Description

Abstract: The healthcare system demands effective autonomous solutions to improve service delivery and provide individualized care. Most of these solutions require a multidisciplinary approach that combines healthcare expertise with computational methods. This project explores wearable and non-intrusive multimodal sensor networks and evaluates their suitability for each application. The project aims to analyse the strengths and limitations of each sensor (learning how to represent and summarise multimodal data in a way that exploits their complementarity and redundancy), evaluate different approaches used to fuse these modalities (e.g. decision-level or feature-level integration, as sketched below), and identify direct relations between (sub)elements of two or more modalities.

Possible applications: neurological disorder classification, emotion recognition, sleep stage classification, sleep pose estimation, depression assessment, hand gesture recognition, lie detection.

Multimodal data: visible images (body and faces), infrared cameras, temperature, accelerometers, electroencephalogram (EEG), arterial oxygen saturation (SpO2), photoplethysmography (PPG), electrodermal activity (EDA), polysomnography (PSG).

Datasets: public and in-house datasets (e.g. PhysioNet, MASS, SSC, DREAMER, Data61HR).
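To illustrate the distinction between the two fusion strategies mentioned above, the sketch below contrasts feature-level fusion (modality embeddings concatenated before a joint classifier) with decision-level fusion (per-modality predictions combined afterwards). This is a minimal, hypothetical example, not the project's actual method; the modalities, feature dimensions, and class count (e.g. VIDEO_DIM, EEG_DIM, NUM_CLASSES) are illustrative assumptions.

import torch
import torch.nn as nn

NUM_CLASSES = 4              # e.g. sleep stages (illustrative only)
VIDEO_DIM, EEG_DIM = 128, 64 # assumed per-modality feature sizes

class FeatureLevelFusion(nn.Module):
    """Encode each modality, concatenate the embeddings, classify jointly."""
    def __init__(self):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Linear(VIDEO_DIM, 32), nn.ReLU())
        self.eeg_enc = nn.Sequential(nn.Linear(EEG_DIM, 32), nn.ReLU())
        self.classifier = nn.Linear(64, NUM_CLASSES)

    def forward(self, video, eeg):
        fused = torch.cat([self.video_enc(video), self.eeg_enc(eeg)], dim=-1)
        return self.classifier(fused)

class DecisionLevelFusion(nn.Module):
    """Classify each modality independently, then average the class scores."""
    def __init__(self):
        super().__init__()
        self.video_clf = nn.Linear(VIDEO_DIM, NUM_CLASSES)
        self.eeg_clf = nn.Linear(EEG_DIM, NUM_CLASSES)

    def forward(self, video, eeg):
        return 0.5 * (self.video_clf(video) + self.eeg_clf(eeg))

if __name__ == "__main__":
    video = torch.randn(8, VIDEO_DIM)  # batch of 8 dummy video feature vectors
    eeg = torch.randn(8, EEG_DIM)      # batch of 8 dummy EEG feature vectors
    print(FeatureLevelFusion()(video, eeg).shape)   # torch.Size([8, 4])
    print(DecisionLevelFusion()(video, eeg).shape)  # torch.Size([8, 4])

Feature-level fusion lets the classifier exploit cross-modal interactions directly, while decision-level fusion is more robust to a missing or noisy modality; comparing such trade-offs is one of the stated aims of the project.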

Contact: David Ahmedt-Aristizabal
