Emotion recognition has recently attracted much attention in both industrial and academic research, as it can be applied in many areas, from education to national security. In healthcare, emotion detection plays a key role, since emotional state is an indicator of depression and mental illness. Much research in this area focuses on extracting emotion-related features from images of the human face. Nevertheless, many other sources can reveal a person's emotions. In the context of MENHIR, an EU-funded R&D project that applies Affective Computing to support people in their mental health, a new speech-based emotion-recognition system is being developed. However, this system requires comprehensive data-management support to handle its input data and analysis results. Accordingly, a cloud-based, high-performance, scalable, and accessible ecosystem for supporting speech-based emotion detection is currently being developed and is discussed here.
Title of host publication: Conversational Dialogue Systems for the Next Decade, IWSDS 2020
Subtitle of host publication: IWSDS 2020
Editors: Luis Fernando D'Haro, Zoraida Callejas, Satoshi Nakamura
Place of publication: Madrid, Spain
Number of pages: 10
Publication status: Published - 25 Oct 2020
Series name: Lecture Notes in Electrical Engineering
Bibliographical note (Funding Information):
Acknowledgements: This publication has been produced in the context of the MENHIR project. This project has received funding from the European Union's H2020 Programme under grant agreement No. 823907. However, this paper reflects only the authors' view, and the European Commission is not responsible for any use that may be made of the information it contains.
© 2021, The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.