Paper Type

ERF

Abstract

This study investigates how AI language models enforce refusal policies in cross-cultural humor involving colonial histories. Through systematic testing of seven colonial pairs (e.g., Germany-Namibia, France-Algeria), this paper analyzes ChatGPT’s selective engagement with sensitive narratives. Findings reveal inconsistent refusal rates: prompts concerning Germany’s colonial past face the highest restrictions, followed by British and French contexts, while Spanish, Portuguese, and Dutch colonial pairs encounter minimal refusals. Bulk requests and U.S.-related jokes activate additional safeguards, highlighting policy biases. By shifting attention from output bias to refusal patterns, this study demonstrates how ostensibly neutral safety mechanisms can reinforce digital colonialism by privileging dominant historical narratives and silencing marginalized perspectives. It also introduces refusal analysis as a novel metric for cross-cultural sensitivity in AI and underscores the urgency of culturally informed safety frameworks to mitigate systemic inequities in global discourse.

Paper Number

1192

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1192

Comments

SIGCCRIS

Aug 15th, 12:00 AM

No Joke: Refusal Policies for Cross-Cultural Sensitivity
