Virtual team communication has gained immense importance in recent years due to the evolution of new work and innovative IT-based communication tools. However, virtual teams face emotional obstacles in team communication. Affective chatbots can sense and interpret human affective signals and leverage them to support virtual teams by increasing their emotional intelligence through behavioral and persuasive cues. Yet the same capabilities may also harm individuals, for example through addiction and increased vulnerability, and such systems consequently face heightened distrust and skepticism. Affective chatbots therefore require careful ethical reflection on when and how to apply them in order to remain trustworthy. In this paper, we present preliminary results of an ongoing design science research project that develops design principles for affective chatbots, with a specific emphasis on transparency and human autonomy. Theoretically, our work contributes prescriptive design knowledge for the class of trustworthy affective chatbots in the context of virtual team communication, thereby providing avenues toward a nascent design theory for this class of systems. Practically, our work supports providers of innovative IT-based communication tools in leveraging this knowledge to design affective chatbots that help virtual teams communicate more successfully while respecting ethical principles.