Abstract

Generative Artificial Intelligence (AI) assistants have transformed the process of information acquisition and integration by overcoming the limitations of traditional AI-based recommendation systems and chatbots. The way users engage with these assistants determines the quality of the output. Rather than forcing users to build fully specified queries upfront, generative AI assistants engage in iterative interactions, allowing users to refine their queries over time. Accordingly, the pattern of prompt generation and the style of interaction with conversational AI assistants can influence users' satisfaction with the output and their subsequent adoption behavior. It is therefore essential to develop a classification of interaction styles with AI assistants based on users' subjective perspectives and to examine how these styles influence subsequent user engagement. To address this issue, we conducted an exploratory study. The data for this study were gathered through an experiment conducted as part of a class assignment in which students used the Microsoft Copilot browser add-in to complete a task. An inductive comparison of the tasks' inputs and outputs reveals two distinct modes of interaction, constructive and functional, each producing a different class of outputs. The results also show that these two groups exhibit varying levels of satisfaction and intention to adopt conversational AI assistants.
