Lobbyist Framing of Artificial Intelligence in Canada

Paper Type

Complete

Abstract

Our research examines the role lobbying plays in shaping artificial intelligence (AI) policy in Canada, particularly during the agenda-setting and problem-definition stages. Using Critical Discourse Analysis (CDA) and policy agenda-setting theory, we analyze federal parliamentary committee hearings to uncover the underlying power dynamics of these stakeholder engagement venues. The analysis reveals a significant lack of representation of marginalized voices in this public forum, compounding the social exclusion these groups face in AI policymaking. As a result, the policy recommendations stemming from these meetings did not adequately account for the AI-related risks borne by marginalized communities. Our findings show that lobbying groups use specific discursive strategies to advance their self-interest, and that negativity bias strongly influences policymakers, leading them to prioritize AI-related risks over benefits. These findings contribute to the literature on social inclusion, lobbying, and AI governance, and we emphasize the need for more equitable representation in AI policy discussions.

Paper Number

1286

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1286

Comments

SIGSI
