Abstract
Multimodal sentiment analysis is an important research topic in the field of NLP, which aims to analyze speakers' sentiment tendencies through features extracted from textual, visual, and acoustic modalities. Its main methods are based on machine learning and deep learning. Machine learning-based methods rely heavily on labeled data, whereas deep learning-based methods can overcome this shortcoming and capture the in-depth semantic information and modal characteristics of the data, as well as the interactive information between multimodal data. In this paper, we survey the deep learning-based methods, including the fusion of text and image and the fusion of text, image, audio, and video. Specifically, we discuss the main problems of these methods and their future directions. Finally, we review the work on multimodal sentiment analysis in conversation.
Recommended Citation
Luo, Xudong; Liu, Jie; Lin, Pingping; and Fan, Yifan, "Multimodal Sentiment Analysis Based on Deep Learning: Recent Progress" (2021). ICEB 2021 Proceedings (Nanjing, China). 16.
https://aisel.aisnet.org/iceb2021/16