Abstract

Generative artificial intelligence, particularly large language models (LLMs), is prone to misuse and vulnerable to security threats, raising significant safety and privacy concerns. The European Union's Artificial Intelligence Act (EUAIA) seeks to enforce AI robustness but faces implementation challenges due to the complexity of LLMs and emerging security vulnerabilities. Our research introduces a framework using ontologies, assurance cases, and factsheets to support engineers and stakeholders in understanding and documenting AI system compliance and security. This approach aims to ensure that LLMs adhere to regulatory standards and are equipped to counter potential threats.