Location
Online
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2023
End Date
January 7, 2023
Description
Contemporary organizations are increasingly adopting conversational agents (CAs) as intelligent, natural language-based solutions for providing services and information. CAs promise new forms of personalization, speed, cost-effectiveness, and automation. However, despite the hype in research and practice, organizations often fail to sustain CAs in operation. They struggle to leverage CAs’ potential because they lack knowledge of how to evaluate and improve the quality of CAs throughout their lifecycle. We address this research gap by conducting a design science research (DSR) project, aggregating insights from the literature and practice to derive a validated set of quality criteria for CAs. Our study contributes to CA research and guides practitioners by providing a blueprint for structuring the evaluation of CAs to discover areas for systematic improvement.
Recommended Citation
Lewandowski, Tom; Poser, Mathis; Kučević, Emir; Heuer, Marvin; Hellmich, Jannis; Raykhlin, Michael; Blum, Stefan; and Böhmann, Tilo, "Leveraging the Potential of Conversational Agents: Quality Criteria for the Continuous Evaluation and Improvement" (2023). Hawaii International Conference on System Sciences 2023 (HICSS-56). 2.
https://aisel.aisnet.org/hicss-56/in/ai_based_assistants/2
Leveraging the Potential of Conversational Agents: Quality Criteria for the Continuous Evaluation and Improvement