
Scandinavian Journal of Information Systems

Abstract

AI-based chatbots are becoming an increasingly common part of the front line of public services. Through natural language, users can write simple queries to a chatbot, which answers with appropriate information. We have investigated how a public chatbot operates in actual practice and how it answers citizens’ questions about the rules and regulations for welfare benefits. We use the concept of citizens’ information needs to assess the quality of the chatbot’s answers; information needs are often not formulated from the start as answerable questions. We analyse logs from chat sessions between the chatbot and citizens, focusing on the problems that arise, e.g., the chatbot giving irrelevant answers or omitting important information. The paper shows how the inner workings of the chatbot shape which questions are answerable. We conclude that responsible use of AI (such as chatbots) is a matter of designing the overall service, which includes acknowledging that the AI itself can never be responsible.
