PACIS 2022 Proceedings
Automatic Speech Emotion Recognition Using Machine Learning: Digital Transformation of Mental Health
Paper Number
1630
Abstract
Humans’ emotional states affect their utterances, which are generated through vocal cord vibrations. Accurate recognition of these emotional states encoded in human speech signals is critical and can be leveraged for mental health. This includes assisting practitioners in their assessments and decision-making, improving therapy effectiveness, monitoring patients, and supporting clinical training. However, very few works address speech emotion recognition from a mental health perspective. This paper presents our preliminary research analysis, which demonstrates the feasibility of automatic speech emotion recognition for mental health purposes. We used five machine learning paradigms to classify emotions and evaluated their performance, focusing on their effectiveness in capturing human emotions on custom and benchmark databases, including TESS, EMO-DB, and RAVDESS. SVM demonstrated superior performance in overlapping settings based on F1-score and achieved 74% accuracy on the RAVDESS and custom datasets. We believe this research could be the initial step towards a fully implemented intelligent support service for mental health.
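The abstract does not specify the exact feature set or model configuration, so the following is only a minimal sketch of SVM-based speech emotion classification, assuming MFCC features extracted with librosa and a scikit-learn SVM; the file list and emotion labels (e.g., from RAVDESS filenames) are hypothetical inputs prepared by the user.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def extract_mfcc(path, n_mfcc=40):
    # Load one utterance and average its MFCCs over time,
    # yielding a fixed-length feature vector per recording.
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def train_emotion_svm(wav_paths, emotion_labels):
    # wav_paths and emotion_labels are assumed to be parallel lists,
    # e.g., paths to RAVDESS .wav files and their emotion categories.
    X = np.array([extract_mfcc(p) for p in wav_paths])
    X_train, X_test, y_train, y_test = train_test_split(
        X, emotion_labels, test_size=0.2, stratify=emotion_labels, random_state=42)
    clf = SVC(kernel="rbf", C=10, gamma="scale")
    clf.fit(X_train, y_train)
    # Report per-emotion precision, recall, and F1-score on the held-out split.
    print(classification_report(y_test, clf.predict(X_test)))
    return clf
```

Other classifiers from the paper's comparison could be swapped in at the SVC line (e.g., a random forest or k-nearest neighbours) without changing the feature extraction step.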
Recommended Citation
Madanian, Samaneh; Parry, David; Adeleye, Olayinka; Poellabauer, Christian; Mirza, Farhaan; Mathew, Shilpa; and Schneider, Sandy, "Automatic Speech Emotion Recognition Using Machine Learning: Digital Transformation of Mental Health" (2022). PACIS 2022 Proceedings. 45.
https://aisel.aisnet.org/pacis2022/45