Paper Type
Complete
Abstract
Our research examines the role lobbying plays in influencing artificial intelligence (AI) policy in Canada, especially during the agenda-setting and problem-definition stages. Using Critical Discourse Analysis (CDA) and policy agenda-setting theory, we analyze federal parliamentary committee hearings to uncover the underlying power dynamics of these stakeholder engagement venues. This research highlights a significant lack of representation of marginalized voices in this public forum, creating additional social exclusion for these groups in AI policymaking. As a result, policy recommendations stemming from these meetings did not properly account for the AI-related risks faced by marginalized communities. Our findings show that lobbying groups use specific discursive strategies to further their self-interest, and that negativity bias strongly influences policymakers, leading them to prioritize AI-related risks over benefits. Our findings contribute to the literature on social inclusion, lobbying, and AI governance. Thus, we emphasize the need for more equitable representation in AI policy discussions.
Paper Number
1286
Recommended Citation
Ferraiuolo, Nicolas and Ojo, Adegboyega, "Lobbyist Framing of Artificial Intelligence in Canada" (2025). AMCIS 2025 Proceedings. 12.
https://aisel.aisnet.org/amcis2025/social_inclusion/social_inclusion/12
Lobbyist Framing of Artificial Intelligence in Canada
Comments
SIGSI