Abstract
Artificial intelligence (AI) is now used in a wide range of areas; it is not limited to automating repetitive tasks but extends to improving learning, refining ideas, and optimizing processes, enhancing human skills through its ability to analyze large volumes of data and produce insights (Meira, 2025). As a result, evaluating the ethical implications of these advancements is increasingly essential to develop solutions that benefit all stakeholders and to prevent unintended consequences from poorly designed or implemented technologies. The risks include perpetuating biases and inequality, compromising privacy and safety, missing opportunities to shape the future, being unable to make informed decisions, and failing to understand the implications of AI for society (Todtfeld & Weinstein, 2025). MIT, for example, recently released an AI Risk Repository (Slattery et al., 2024). Even with the repository, however, it is difficult to know which risks to worry about most, because prioritizing them requires a deeper understanding of how these systems work. A study by the World Economic Forum (Skeet et al., 2020) revealed that 66% of people worry that technology will make it impossible to know whether what they are seeing or hearing is real. Data privacy is also highly valued by consumers: 53% of respondents said they would avoid buying from a company that sells personal data for profit. Behaving ethically therefore matters when creating sustainable value, and companies can create lasting value for society by aligning their practices with the needs of all stakeholders, including the community at large. Despite these serious ethical risks, ethical problems in AI solutions often arise because well-intentioned people fail to design and use AI with an intentional ethical context in mind. To address this failure, we pose the following research questions: (1) How can a principle-based model support informed decision-making in AI/LLM use, given AI's impact on individuals and society? (2) How do ethical considerations for LLMs differ from broader AI concerns? Guided by these questions, our proposed model centers on principles adapted from Beard and Longstaff (2018) and on the framework proposed by da Silva et al. (2024), explaining how LLMs fit within this framework and what new insights our model contributes. The principles considered are Ought Before Can, Benefit, Responsibility, Non-instrumentalism, Purpose, and Fairness, and we evaluate their applicability in the context of LLMs. Our research also examines the technical decisions involved in data collection, attribute selection, and system design. The objective is to illustrate how to design solutions effectively, with a clear focus on ethical considerations at every stage of the decision-making process.
Recommended Citation
da Silva, Daniela America and Marques, Johnny, "Ethical Considerations When Using LLMs" (2025). AMCIS 2025 TREOs. 109.
https://aisel.aisnet.org/treos_amcis2025/109