Exploring Audience’s Attitudes Towards Machine Learning-based Automation in Comment Moderation
Description
Digital technologies, particularly the internet, have created unprecedented opportunities to inform oneself freely, to debate, and to share thoughts. However, the reduced control exerted by traditional gatekeepers such as journalists has also led to a surge in problematic (e.g., fake news), outright abusive, and hateful content (e.g., hate speech). Under ethical and often legal pressure, many platform operators respond to the onslaught of abusive user-generated content by introducing automated, machine learning-enabled moderation tools. Although meant to protect online audiences, such systems have far-reaching implications for free speech, algorithmic fairness, and algorithmic transparency. We present a large-scale survey experiment that aims to illuminate how the degree of transparency influences a commenter’s acceptance of a machine-made moderation decision, depending on its outcome. With the presented study design, we seek to determine the amount of transparency needed for automated comment moderation to be accepted by commenters.
Recommended Citation
Müller, Kilian; Koelmann, Holger; Niemann, Marco; Plattfaut, Ralf; and Becker, Jörg, "Exploring Audience’s Attitudes Towards Machine Learning-based Automation in Comment Moderation" (2022). Wirtschaftsinformatik 2022 Proceedings. 1.
https://aisel.aisnet.org/wi2022/human_rights/human_rights/1