Paper Type
Short
Paper Number
PACIS2025-1230
Description
As online psychological counseling platforms grow, there remains a lack of effective methods to evaluate the quality of text-based counseling responses. Existing evaluation methods rely primarily on subjective judgments and lack standardized quantitative indicators, particularly for assessing specific counseling skills. This study aims to fill this gap by proposing a quantitative assessment method based on large language models (LLMs), focusing on three psychological counseling skills grounded in humanistic theory: empathy, congruence, and positive regard, as exhibited in responses from online counseling platforms. An empirical analysis was conducted to validate the effectiveness of this method. The results indicate that, with an appropriate prompting strategy for LLMs, the evaluation of how effectively these skills are applied can be automated. The insights gained from this study have implications for both academic understanding and the application of LLMs in the field of mental health.
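The abstract does not disclose the paper's actual prompting strategy; the following is a minimal illustrative sketch only, assuming a hypothetical call_llm helper and a simple rubric prompt that asks an LLM to rate empathy, congruence, and positive regard on a 1-5 scale.

```python
import json

SKILLS = ["empathy", "congruence", "positive regard"]

# Hypothetical rubric prompt; the study's real prompt design may differ.
RUBRIC_PROMPT = """You are an expert in humanistic counseling.
Rate the counselor's response to the client's message on each skill
from 1 (absent) to 5 (excellent): empathy, congruence, positive regard.
Return only a JSON object, e.g. {{"empathy": 3, "congruence": 4, "positive regard": 2}}.

Client message:
{client_message}

Counselor response:
{counselor_response}
"""


def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an LLM API and return its text output.
    Replace with the client for whichever model is actually used."""
    raise NotImplementedError


def score_response(client_message: str, counselor_response: str) -> dict:
    """Score one counselor response on the three humanistic counseling skills."""
    prompt = RUBRIC_PROMPT.format(
        client_message=client_message,
        counselor_response=counselor_response,
    )
    raw = call_llm(prompt)
    scores = json.loads(raw)  # expects the JSON object requested in the prompt
    return {skill: int(scores[skill]) for skill in SKILLS}
```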
Recommended Citation
Chen, Hong; Yuan, Hui; and Xu, Ruiyun, "Evaluating the Impact of Counseling Skills in Online Platforms: An LLM-Based Analysis" (2025). PACIS 2025 Proceedings. 15.
https://aisel.aisnet.org/pacis2025/ishealthcare/ishealthcare/15
Evaluating the Impact of Counseling Skills in Online Platforms: An LLM-Based Analysis
Comments
Healthcare