Paper Number
ICIS2025-1964
Paper Type
Complete
Abstract
To combat misinformation, social media platforms are experimenting with crowdsourced fact-checking: systems that rely on social media users' annotations of potentially misleading content. This paper focuses on Twitter's Community Notes to empirically investigate the efficacy of such systems in curbing misinformation. Utilizing two identification strategies, regression discontinuity design and instrumental variable analysis, we confirm the effectiveness of publicly displaying community notes on an author's voluntary tweet retraction, demonstrating the viability of crowdsourced fact-checking as an alternative to professional fact-checking and forcible content removal. Our findings also reveal that the effect is primarily driven by the author's consideration of misinformation's influence on users who have actively engaged with it (i.e., "observed influence") rather than those who might be exposed to it (i.e., "presumed influence"). Furthermore, results from discrete-time survival analyses show that public notes not only increase the probability of tweet retraction but also accelerate the retraction process among retracted tweets, thereby improving platforms' responsiveness to emerging misinformation.
Recommended Citation
Gao, Yang; Zhang, Maggie Mengqing; and Rui, Huaxia, "Can Crowdchecking Curb Misinformation? Evidence from Community Notes" (2025). ICIS 2025 Proceedings. 10.
https://aisel.aisnet.org/icis2025/is_media/is_media/10
Can Crowdchecking Curb Misinformation? Evidence from Community Notes