Paper Type

Short

Paper Number

PACIS2025-1959

Description

The recent and rapid rise of large language models (LLMs), widely known as generative AI, has attracted considerable public attention, excitement, and concern. Despite their beneficial potential, concerns about issues such as privacy violations have sparked debate, yet research on the topic remains scarce. Our study evaluates the privacy compliance of major commercial LLMs using a structured, quantitative approach based on three distinct frameworks of privacy laws and regulations across nations. Models were selected based on popularity: OpenAI's ChatGPT, Google's Bard/Gemini, Microsoft 365's Copilot, Anthropic's Claude, High-Flyer's DeepSeek, Perplexity AI's Perplexity, and Cohere. The evaluation framework systematically examines the extent to which LLM privacy agreements align with legal standards and identifies potential compliance risks. From the extracted privacy compliance scores, we assess the overall compliance of these LLMs before summarizing our contributions and suggesting future research directions.
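The quantitative approach described above can be sketched in miniature: rate each privacy-policy criterion under each legal framework, then aggregate into a single compliance score per model. The frameworks, criteria, and ratings below are illustrative placeholders, not the paper's actual rubric or results.

```python
# Hypothetical sketch of framework-based compliance scoring.
# All criteria names and ratings are invented for illustration.

# Three legal frameworks, each with example criteria.
FRAMEWORKS = {
    "GDPR-style": ["lawful_basis", "data_subject_rights", "retention_limits"],
    "CCPA-style": ["opt_out", "disclosure", "non_discrimination"],
    "PIPL-style": ["consent", "cross_border_transfer", "security_measures"],
}

def compliance_score(ratings: dict) -> float:
    """Average per-criterion ratings (0 = absent, 1 = partial, 2 = full)
    across all frameworks, normalized to a 0-100 scale."""
    total, count = 0, 0
    for framework, criteria in FRAMEWORKS.items():
        for criterion in criteria:
            total += ratings.get(framework, {}).get(criterion, 0)
            count += 1
    return 100.0 * total / (2 * count)

# Example: a model whose policy fully meets some criteria, partially others.
ratings = {
    "GDPR-style": {"lawful_basis": 2, "data_subject_rights": 1, "retention_limits": 1},
    "CCPA-style": {"opt_out": 2, "disclosure": 2, "non_discrimination": 1},
    "PIPL-style": {"consent": 2, "cross_border_transfer": 0, "security_measures": 1},
}
print(round(compliance_score(ratings), 1))  # → 66.7
```

Scores of this form can then be compared across models to rank their relative alignment with each body of law, which is the kind of cross-model comparison the abstract describes.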

Comments

Security

Jul 6th, 12:00 AM

Comparative Analysis of Privacy Policies in Large Language Models: Compliance with Data Privacy Laws
