Should We Trust Teachers or Algorithms? Paradoxical Effects of Algorithmic Evaluations for Gaslighting Victims

Amber Young, University of Arkansas
Eugene Young, Missouri Baptist University

Abstract

Standardized tests generate data by which students, teachers, schools, districts, states and even countries are algorithmically evaluated. Reliance on algorithmic evaluations of student performance is controversial. Teacher advocates cite demoralizing effects on teachers whose judgment is challenged, if not replaced, by algorithmic evaluations. Student advocates argue that algorithmic evaluation systematically disadvantages minority students and distracts from more important tasks and goals. Yet teacher evaluations of students, especially minority students, are systematically biased as well. Research shows that high-achieving minority students are more likely to be identified as gifted by algorithms than by teachers; even when algorithms identify gifted students of color, teachers systematically fail to refer those students to gifted education programs (Siegle et al. 2010). This research provides a counterpoint to the narrative that algorithmic evaluations should be abolished because of their negative effects on minorities. Gaslighting is a process through which an abuser, often someone in a position of power, manipulates the physical or mental state of a victim, often someone not in a position of power, in a way that makes the victim question his/her perceptions of reality (Davis and Ernst 2017). When gifted students come to question their abilities because of such evaluations, they experience gaslighting. Either algorithms or teachers can instigate or reinforce gaslighting. While bias in algorithms can be quantified and mitigated, training bias out of teachers may be trickier (Tetlock and Mitchell 2009). Algorithmic evaluations, while flawed, provide an additional datapoint against which students should be allowed to compare their perceptions of their academic progress. Although these algorithms are not infallible and are in some cases deeply flawed, they may reflect a truer evaluation for certain students, e.g., gifted minority students living in areas where teachers are particularly prone to bias. In response to the question of whether we should trust teachers or algorithms, our answer is yes to both. Both offer students a datapoint against which to evaluate their own perceptions, but students should not accept any one perception datapoint (their own, the teacher’s or the algorithm’s) as gospel. Moreover, students should remember that true emancipatory learning is hard to measure (Young 2018). By describing how algorithms can instigate or reinforce gaslighting, but also counter its effects, this research answers the call to investigate the paradoxical societal effects of emerging technology (Miranda et al. 2016).
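One claim above, that bias in algorithmic evaluations can be quantified and therefore audited, lends itself to a brief illustration. The following minimal sketch is ours and not part of the study; every record, field name, and score cutoff in it is a hypothetical assumption. It compares, for each demographic group, the share of students an assumed test-score cutoff would flag as gifted with the share of students teachers actually refer, which is one simple way such disparities can be measured.

# Illustrative sketch only: quantifying group-level gaps between an assumed
# algorithmic gifted-identification rule and teacher referrals.
# All records, field names, and the cutoff are hypothetical.

from collections import defaultdict

# Hypothetical student records: demographic group, standardized test score,
# and whether the teacher referred the student to gifted education.
students = [
    {"group": "A", "score": 132, "teacher_referral": True},
    {"group": "A", "score": 118, "teacher_referral": False},
    {"group": "B", "score": 131, "teacher_referral": False},
    {"group": "B", "score": 127, "teacher_referral": False},
]

GIFTED_CUTOFF = 125  # assumed algorithmic threshold, for illustration only

def identification_rates(records, cutoff):
    """Return, per group, the share identified by the cutoff and by teachers."""
    totals = defaultdict(lambda: {"n": 0, "algorithm": 0, "teacher": 0})
    for r in records:
        g = totals[r["group"]]
        g["n"] += 1
        g["algorithm"] += int(r["score"] >= cutoff)
        g["teacher"] += int(r["teacher_referral"])
    return {
        group: {
            "algorithmic_rate": g["algorithm"] / g["n"],
            "teacher_referral_rate": g["teacher"] / g["n"],
        }
        for group, g in totals.items()
    }

for group, rates in identification_rates(students, GIFTED_CUTOFF).items():
    print(group, rates)

# A large gap between groups, or between algorithmic and teacher rates within
# a group, is one auditable signal of the kind of bias discussed above.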

 
