Location

Online

Event Website

https://hicss.hawaii.edu/

Start Date

January 3, 2023, 12:00 AM

End Date

January 7, 2023, 12:00 AM

Description

Despite the growing prevalence of ML algorithms, NLP, algorithmically driven content recommender systems, and other computational mechanisms on social media platforms, some of their core, mission-critical gatekeeping functions remain deeply reliant on humans-in-the-loop both to validate the computational models in use and to intervene when those models fail. Perhaps nowhere is this human interaction with, and on behalf of, computation more critical than in social media content moderation, where human capacities for discretion, discernment, and holding complex mental models of decision trees and changing policy are called upon hundreds, if not thousands, of times per day. This paper presents the results of a qualitative, interview-based study of an in-house content moderation team (Trust & Safety, or T&S) at a mid-size, erstwhile niche social platform we call FanClique. Findings indicate that while the FanClique T&S team is treated well in terms of managerial support, respect and support from the wider company, and the mental health services provided (particularly in comparison to other social media companies), the work of content moderation remains an extremely taxing form of labor that is not adequately compensated or supported.

"We Care About the Internet; We Care About Everything" Understanding Social Media Content Moderators' Mental Models and Support Needs

https://aisel.aisnet.org/hicss-56/dsm/critical_and_ethical_studies/3