Abstract

Survey methods remain central to Information Systems research but face persistent limitations, including common method bias, low response rates, poor data quality, limited access to hard-to-reach populations, and challenges in capturing rapidly evolving technology perceptions. This research proposes a novel methodology, the LLM-Reasoning Agent Survey (LAS), that leverages large language model reasoning agents to complement traditional surveys. Recent advances in Chain-of-Thought reasoning enable LLMs to perform human-like multi-step cognitive processes, suggesting their potential as proxies for human respondents in specific research contexts. We investigate whether LLM-reasoning agents can effectively represent typical populations and respond to survey manipulations with validity and reliability comparable to those of human subjects.

We develop and validate LAS by constructing LLM-reasoning agents assigned demographic, psychological, behavioral, and contextual profiles from an existing survey dataset. Agents are systematically exposed to varying information levels—from basic demographics to comprehensive psychological and contextual profiles—to determine which configurations best replicate human survey responses. We evaluate LAS performance across four critical dimensions: construct validity, internal validity, external validity, and reliability.
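To make this setup concrete, the sketch below illustrates one way such an agent could be instantiated: a respondent profile drawn from an existing dataset is layered into a persona prompt at increasing levels of detail, and the agent is asked to reason step by step before answering a Likert-scale item. This is a minimal illustration under stated assumptions; the profile fields, prompt wording, example survey item, and the query_llm stub are hypothetical and do not reproduce the study's actual instrument or prompts.

```python
# Illustrative sketch: build an LLM-reasoning survey agent from a respondent
# profile and query it at increasing levels of profile detail.
# All names and contents here are assumptions for illustration only.

PROFILE = {
    "demographics": "42-year-old accountant, urban area, bachelor's degree",
    "psychological": "high conscientiousness, moderate technology anxiety",
    "behavioral": "uses mobile banking weekly, avoids social media",
    "contextual": "employer recently mandated a new expense-reporting app",
}

# Cumulative information levels: each level adds another layer of the profile.
LEVELS = [
    ["demographics"],
    ["demographics", "psychological"],
    ["demographics", "psychological", "behavioral", "contextual"],
]

SURVEY_ITEM = (
    "On a scale of 1 (strongly disagree) to 7 (strongly agree): "
    "'I intend to use the new expense-reporting app in the next month.'"
)


def build_system_prompt(profile: dict, keys: list) -> str:
    """Compose a persona prompt containing only the selected profile layers."""
    persona = " ".join(profile[k] for k in keys)
    return (
        f"You are a survey respondent with the following profile: {persona}. "
        "Reason step by step about how this person would think and feel, "
        "then answer the survey item with a single number from 1 to 7 on the final line."
    )


def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for any chat-completion client (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError("Wire in an LLM provider here.")


if __name__ == "__main__":
    for keys in LEVELS:
        prompt = build_system_prompt(PROFILE, keys)
        print(f"--- information level: {keys} ---")
        print(prompt)
        # response = query_llm(prompt, SURVEY_ITEM)
        # print(response)
```

In this sketch, comparing agent responses across the three information levels against the original human responses is what would indicate which profile configuration best replicates human survey behavior.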

This research makes both methodological and theoretical contributions by establishing whether LLM-reasoning agents can effectively represent sample populations for IS research. While not proposing a complete replacement of human subjects, LAS offers potential advantages, including access to sensitive topics, simulation of hard-to-reach populations, rapid testing of emerging technologies, and cost-effective replication studies, thereby addressing longstanding challenges in IS survey methodology.
