Paper Number
ICIS2025-2616
Paper Type
Complete
Abstract
This study investigates whether Large Language Models (LLMs) can serve as scalable, cost-effective substitutes for human subjects in privacy research. We benchmark LLMs against human participants to assess whether they simulate a "privacy mindset", i.e., produce responses consistent with human-like preferences and decisions. Results show that LLMs are context-sensitive and make reasonable privacy choices, often expressing greater privacy concern than humans yet disclosing more in realistic scenarios. Their responses reflect patterns associated with the privacy calculus and the privacy paradox. To further align LLM behavior with human responses, we introduce a methodology that combines multi-persona modeling, Generative Adversarial Networks (GANs), and interpretable machine learning to generate synthetic subjects that closely mirror real-world privacy preferences. Experiments confirm that our approach improves alignment across datasets and contexts. Our findings position LLMs as promising tools for simulating human privacy decisions, offering a new, scalable path for privacy research with valuable applications in academia, industry, and policy.
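The abstract names the components of the alignment methodology but not their implementation. As a rough illustration only, the following is a minimal sketch, assuming PyTorch, of what a persona-conditioned GAN for generating synthetic privacy-preference responses could look like. All dimensions, layer sizes, names (Generator, Discriminator, train_step), and the toy data are hypothetical assumptions, not the authors' architecture, and the interpretable-ML component is omitted.

```python
# Hypothetical sketch of a persona-conditioned GAN for synthetic privacy
# responses; not the paper's actual architecture. All sizes are assumed.
import torch
import torch.nn as nn

NOISE_DIM, PERSONA_DIM, RESPONSE_DIM = 16, 8, 5  # assumed dimensions

class Generator(nn.Module):
    """Maps (noise, persona features) to a synthetic privacy-response vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + PERSONA_DIM, 64), nn.ReLU(),
            nn.Linear(64, RESPONSE_DIM), nn.Tanh(),  # responses scaled to [-1, 1]
        )
    def forward(self, z, persona):
        return self.net(torch.cat([z, persona], dim=1))

class Discriminator(nn.Module):
    """Scores whether a (persona, response) pair looks like a real human subject."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PERSONA_DIM + RESPONSE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),  # raw logit; loss applies the sigmoid
        )
    def forward(self, response, persona):
        return self.net(torch.cat([response, persona], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_responses, personas):
    batch = real_responses.size(0)
    z = torch.randn(batch, NOISE_DIM)
    fake = G(z, personas)

    # Discriminator step: real pairs labeled 1, synthetic pairs labeled 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_responses, personas), torch.ones(batch, 1))
              + loss_fn(D(fake.detach(), personas), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: produce responses the discriminator accepts as human.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake, personas), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random stand-in data; real work would use survey responses.
responses = torch.rand(32, RESPONSE_DIM) * 2 - 1
personas = torch.randn(32, PERSONA_DIM)
print(train_step(responses, personas))
```

Conditioning both networks on persona features is what would tie generated responses to persona-specific preferences, so each synthetic subject mirrors the privacy behavior of its corresponding persona rather than an average respondent.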
Recommended Citation
Cheng, Xiang and Wang, Wen, "(How) Can LLMs Enhance Privacy Research?" (2025). ICIS 2025 Proceedings. 33.
https://aisel.aisnet.org/icis2025/gen_ai/gen_ai/33
(How) Can LLMs Enhance Privacy Research?
Comments
12-GenAI