Abstract

With the rapid development of online social media, an enormous amount of user-generated content is produced worldwide every day. Content moderation has emerged to ensure the quality of posts on various social media platforms. This process typically demands collaboration between humans and AI, as the two agents complement each other in different facets. To understand how AI can better assist humans in making final judgments under the “machine-in-the-loop” paradigm, we propose a lab experiment that explores how different types of cues provided by AI through a nudging approach, as well as time constraints, influence human moderators’ performance. The proposed study contributes to the literature on AI-assisted decision-making and helps social media platforms create an effective human-AI collaboration framework for content moderation.
