Paper Number

2090

Paper Type

Complete

Description

Online communities thrive on interactions among like-minded individuals, and usually involve some form of feedback or evaluation by peers. In these contexts, there is systematic evidence of gender-based biases in evaluations. How can such biases be attenuated? We study the efficacy of one approach—anonymization of gender information within the community. We use data from a large-scale digital discussion platform, Political Science Rumors, to examine the presence of gender bias. When users post a discussion message on the platform, they are randomly assigned a pseudonym in the form of a given (first) name, such as “Daniel” or “Haylee,” and each post subsequently garners positive and negative votes from readers. We analyze the up votes, down votes, and net votes garnered by 1.4 million posts where names are randomly assigned to posters. We find that posts from randomly assigned “female” names receive 2.5% lower evaluation scores, all else equal. Further, when “female” users post emotive content with a negative tone, the posts receive disproportionately more negative evaluations.
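The core comparison the abstract describes—net votes on posts grouped by the randomly assigned pseudonym's gender—can be sketched in a few lines. The snippet below uses synthetic data, not the paper's data or model; the `female_penalty` parameter, the baseline vote distribution, and the function names are all illustrative assumptions that merely mimic the reported ~2.5% gap.

```python
# Hedged sketch of the gender-gap comparison described in the abstract.
# All data here is synthetic; `female_penalty` is an assumption chosen to
# mimic the reported ~2.5% lower evaluation scores, not an actual estimate.
import random
import statistics

random.seed(0)

def simulate_posts(n, female_penalty=0.025):
    """Generate synthetic (assigned_gender, net_votes) records.

    Pseudonym gender is assigned at random, as on the platform; the
    hypothetical baseline net-vote distribution is Gaussian.
    """
    posts = []
    for _ in range(n):
        gender = random.choice(["male", "female"])
        net_votes = random.gauss(10, 3)  # hypothetical baseline
        if gender == "female":
            net_votes *= 1 - female_penalty
        posts.append((gender, net_votes))
    return posts

def mean_net_votes(posts, gender):
    return statistics.mean(v for g, v in posts if g == gender)

posts = simulate_posts(200_000)
male_mean = mean_net_votes(posts, "male")
female_mean = mean_net_votes(posts, "female")
relative_gap = (male_mean - female_mean) / male_mean
```

Because the names are assigned at random, a simple difference in means (or a regression of votes on a "female name" indicator) identifies the evaluation gap without confounding from author identity—this is what makes the platform's design a useful natural experiment.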

Comments

02-General

Dec 12th, 12:00 AM

Gender Effects in Online Low-Threshold Evaluations: Evidence from a Large-Scale Online Discussion-based Community

