Abstract
Generative artificial intelligence systems, particularly large language models (LLMs), are prone to misuse and vulnerable to security threats, raising significant safety and privacy concerns. The European Union's Artificial Intelligence Act (EUAIA) seeks to enforce AI robustness, but its implementation is challenged by the complexity of LLMs and by emerging security vulnerabilities. Our research introduces a framework that uses ontologies, assurance cases, and factsheets to support engineers and stakeholders in understanding and documenting the compliance and security of AI systems. This approach aims to ensure that LLMs adhere to regulatory standards and are equipped to counter potential threats.
Recommended Citation
Bueno Momcilovic, Tomas; Buesser, Beat; Zizzo, Giulio; Purcell, Mark; and Balta, Dian, "Assuring Compliance of LLMs with EUAIA Robustness Demands" (2024). Wirtschaftsinformatik 2024 Proceedings. 126.
https://aisel.aisnet.org/wi2024/126