Can AI be a Moral Agent? A Study of Fortune Global 500 CSR reports using LLM
Abstract
Investors, and stakeholders more generally, are increasingly interested in firms' moral engagement with society and the environment. For stakeholders to find, and potentially transact with, such firms, firms must signal their moral intentions. One way they can do so is through the language they use to describe themselves, which reflects the firm's ethical culture. Large language models (LLMs), such as ChatGPT, offer a way to analyze large volumes of text and measure the degree to which firm leaders use moral language in their stakeholder communication. Focusing on Moral Foundations Theory's (MFT) Care moral intuition, we empirically test whether our customized GPT model can recognize moral language in corporate social responsibility (CSR) reports. Comparing the GPT results with a human coder, we find an accuracy rate of 73.3%: in 22 of 30 cases, a human coder confirmed GPT's identification of morally suffused text. Theoretically, our finding contributes to the theory of moral agency and to the literature on AI in information systems (IS) by examining the effectiveness of a delegated moral agent built on an up-to-date LLM. Practically, our results lend support to financial investors who use AI to make investment decisions based on company ethical culture and CSR reports.
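The abstract does not disclose the implementation, but the coding-and-validation workflow it describes can be sketched as follows. This is a minimal illustration, not the authors' method: the prompt wording, the model name, and the helper functions (`gpt_codes_care`, `agreement_rate`) are all hypothetical; only the 22-of-30 agreement figure comes from the abstract.

```python
from openai import OpenAI  # assumes the official OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical coding instruction for the MFT Care/Harm foundation.
CARE_PROMPT = (
    "You are a content coder applying Moral Foundations Theory. "
    "Answer YES if the passage uses Care moral language (e.g., compassion, "
    "caring, protection from harm); otherwise answer NO."
)

def gpt_codes_care(passage: str, model: str = "gpt-4o") -> bool:
    """Ask the model whether a CSR passage contains Care moral language."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": CARE_PROMPT},
            {"role": "user", "content": passage},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

def agreement_rate(gpt_labels: list[bool], human_labels: list[bool]) -> float:
    """Share of passages where the human coder's label matches GPT's."""
    assert len(gpt_labels) == len(human_labels)
    matches = sum(g == h for g, h in zip(gpt_labels, human_labels))
    return matches / len(gpt_labels)

# With 22 human confirmations out of 30 GPT-flagged passages,
# agreement_rate returns 22 / 30 ≈ 0.733, i.e., the reported 73.3%.
```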
Recommended Citation
Peifer, Jared; Luo, Yuxiao (Rain); and Brockman, Elias, "Can AI be a Moral Agent? A Study of Fortune Global 500 CSR reports using LLM" (2024). NEAIS 2024 Proceedings. 13.
https://aisel.aisnet.org/neais2024/13