Paper Number

2933

Paper Type

Complete

Abstract

The boom of live streaming commerce provides a wealth of multimodal information, which opens up new possibilities for predicting user engagement. Existing studies usually employ a single unified framework to process and fuse multimodal information, which fails to capture user engagement behaviors in depth. This paper proposes to handle the multimodal information in live streaming commerce from affective and cognitive perspectives. An ELM-based Multimodal Analysis Framework (EMAF) is presented, which extracts features from multimodal information from the affective and cognitive perspectives respectively, and predicts user engagement behavior in live streaming commerce in real time. A module named MD-Transformer is designed to integrate product details more effectively. Experiments conducted on a real-world dataset demonstrate the advantages of our framework over state-of-the-art multimodal fusion methods.
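
As a rough illustration of the architecture the abstract describes, the minimal PyTorch sketch below wires an affective branch, a cognitive branch, and a plain transformer encoder standing in for the MD-Transformer into a single engagement predictor. Every class name, dimension, and the concatenation-based fusion here are illustrative assumptions, not the paper's actual EMAF implementation.

# Hypothetical sketch of a dual-perspective multimodal model in the spirit of EMAF.
# All names, dimensions, and the fusion strategy are assumptions for illustration.
import torch
import torch.nn as nn

class DualPerspectiveModel(nn.Module):
    def __init__(self, affective_dim=128, cognitive_dim=128,
                 detail_dim=128, hidden=256, num_classes=2):
        super().__init__()
        # Affective branch (e.g., streamer facial-expression / vocal-emotion features).
        self.affective = nn.Sequential(nn.Linear(affective_dim, hidden), nn.ReLU())
        # Cognitive branch (e.g., textual / product-information features).
        self.cognitive = nn.Sequential(nn.Linear(cognitive_dim, hidden), nn.ReLU())
        # Stand-in for the MD-Transformer: a vanilla transformer encoder
        # over a sequence of product-detail embeddings.
        layer = nn.TransformerEncoderLayer(d_model=detail_dim, nhead=4, batch_first=True)
        self.detail_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.detail_proj = nn.Linear(detail_dim, hidden)
        # Late fusion by concatenation, then engagement-behavior prediction.
        self.classifier = nn.Linear(hidden * 3, num_classes)

    def forward(self, affective_x, cognitive_x, detail_seq):
        a = self.affective(affective_x)                  # (batch, hidden)
        c = self.cognitive(cognitive_x)                  # (batch, hidden)
        d = self.detail_encoder(detail_seq).mean(dim=1)  # mean-pool over the detail sequence
        d = self.detail_proj(d)                          # (batch, hidden)
        return self.classifier(torch.cat([a, c, d], dim=-1))

# Example usage with random tensors (batch of 8, detail sequence length 16):
model = DualPerspectiveModel()
logits = model(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 16, 128))
print(logits.shape)  # torch.Size([8, 2])

The separation into two branches mirrors the affective/cognitive split the abstract motivates; how the real EMAF extracts and fuses those features is described only in the full paper.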

Comments

13-DataAnalytics

Dec 15th, 12:00 AM

Affective and Cognitive: Exploring the Multi-modal Data for Predicting User Engagement Behavior in Live Streaming Commerce
