Abstract
Identifying emotions and stress raises many challenges: emotions are inherently subjective, and the underlying data are difficult to gather and interpret. The purpose of this work is to assemble multimodal datasets that have been used for emotion and stress identification. Classical classifiers, Random Forest, XGBoost, and Decision Trees, attained at most 50% mean cross-validation accuracy. To overcome this limitation, a multimodal sentiment analysis system is proposed that combines physiological signals (ECG, EEG, GSR, skin temperature, breathing) with demographic and questionnaire data. Using weighted fusion and a deep learning technique, the proposed model achieves an improved accuracy of 65%, a 15% gain over the strongest baseline. This demonstrates that multimodal data can support robust emotion identification.
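The abstract does not spell out how the weighted fusion is implemented. As a rough illustration only, a weighted late-fusion step over per-modality model outputs might look like the sketch below; the modality names, weights, shapes, and random placeholder predictions are all hypothetical assumptions, not the authors' implementation.

```python
# Hypothetical sketch of weighted late fusion over per-modality predictions.
# All names, weights, and shapes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 8, 3

# Placeholder class-probability outputs, standing in for separate deep
# networks over ECG, EEG, GSR, skin temperature, and breathing features
# (each of shape n_samples x n_classes).
modalities = {
    "ecg": rng.dirichlet(np.ones(n_classes), size=n_samples),
    "eeg": rng.dirichlet(np.ones(n_classes), size=n_samples),
    "gsr": rng.dirichlet(np.ones(n_classes), size=n_samples),
    "skin_temp": rng.dirichlet(np.ones(n_classes), size=n_samples),
    "breathing": rng.dirichlet(np.ones(n_classes), size=n_samples),
}

# Illustrative fusion weights (in practice these could be tuned on a
# validation set or learned); they sum to 1 so the fused outputs remain
# valid probability distributions.
weights = {"ecg": 0.3, "eeg": 0.3, "gsr": 0.2,
           "skin_temp": 0.1, "breathing": 0.1}

# Weighted sum of per-modality probabilities, then argmax for the label.
fused = sum(weights[m] * probs for m, probs in modalities.items())
predicted_labels = fused.argmax(axis=1)
print(predicted_labels)
```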
Recommended Citation
Perumal, Bhuvaneswari; Moorthy, Vaishnavi; H, Gladius Jennifer; and Nguyen, Lemai, "Emotion Mapping of Integrating Physiological Data and Multi-Modal Sentiment Analysis in Intelligent Information Systems for a Digital and Sustainable Future" (2025). ACIS 2025 Proceedings. 78.
https://aisel.aisnet.org/acis2025/78