Track Description

Our track is concerned with research on the design and evaluation of sociotechnical AI-based systems that achieve multi-sided outcomes and are meaningful to businesses and/or society, as well as with the use and consequences of sociotechnical AI systems. In particular, we welcome studies that examine impactful “AI design principles” and research that yields new principles for the design and evaluation of AI tools. This also includes studies that take a hybridization approach and present designs of interesting human-AI hybrids in different contexts. We seek studies that are theoretical, empirical, or technical, and quantitative or qualitative, as long as the research yields impactful new insights. By “impactful,” we mean that the design principles should pivot from existing research, theory, and practice; specifically, authors will need to demonstrate, through their understanding of the existing body of work, how their research builds on current knowledge of the topic, rather than simply stating that fact. By “multi-sided outcomes,” we mean an effective AI-based tool or system that achieves value not just for the developer, or for the corporation using it on consumers, employees, or contributors, but for those other users as well. By “socially meaningful multi-sided outcomes,” we mean that the value to users should go far beyond simple recommendations for movie choices and improve people’s lives in fundamental ways (e.g., closing the income inequality gap, dampening systemic racial biases, reducing information silos, engaging with the challenges of global warming, improving the safety of seniors’ homes, etc.). By “sociotechnical AI,” we mean that studies should seek to open up and provide insights into the black box of the user, the ecosystem of use and development, and the technology around it.
For example, discussions of AI tools should include the data and algorithms they are built on, how users are enticed into engaging with these tools, and what ecosystem institutionalizes the tool-usage patterns that harm or foster multi-sided outcomes. Through our interest in human-AI hybrids, we also seek studies that not only focus on the design of AI-based tools for the user but also shed light on how humans and AI can work collaboratively on a task.

Track Chairs:
Ann Majchrzak, University of Southern California
Kevin Hong, University of Houston
Saonee Sarker, University of Virginia


Sunday, December 12th

A Critical Empirical Study of Black-box Explanations in AI

Jean-Marie John-Mathews, Institut Mines-Télécom Business School

Accuracy and Explainability in Artificial Intelligence: Unpacking the Terms

Kathy McGrath, Brunel University

AI Divide versus Inclusion: Empirical Evidence from an On-demand Food Delivery Platform

Yeonseo Kim, KAIST College of Business
Jiyong Eom, KAIST College of Business
Tae Jung Yoon, KAIST
Albert Jin Chung, Mesh Korea
Myunghwan Kim, Mesh Korea

Artificial Intelligence for In-flight Services: How the Lufthansa Group Managed Explainability and Accuracy Concerns

Sergey Stroppiana Tabankov, University of Warwick
Mareike Möhlmann, Bentley University

Creativity in data work: agricultural data in practice

Tomislav Karacic, Vrije Universiteit Amsterdam
Wendy Günther, Vrije Universiteit Amsterdam
Anastasia Sergeeva, Vrije Universiteit Amsterdam
Marleen Huysman, Vrije Universiteit Amsterdam

Explain it to Me and I Will Use it: A Proposal on the Impact of Explainable AI on Use Behavior

Pascal Hamm, EBS University
Hermann Felix Wittmann, R+V
Michael Klesel, University of Twente

From Ethical AI Principles to Governed AI

Akseli Seppälä, University of Turku
Teemu Birkstedt, University of Turku
Matti Mäntymäki, University of Turku

How To Train Your Algo: Investigating the Enablers of Bias in Algorithmic Development

Marta Stelmaszak, Portland State University

Hybrid Intelligence – Combining the Human in the Loop with the Computer in the Loop: A Systematic Literature Review

Christina Wiethof, University of Hamburg
Eva A. C. Bittner, University of Hamburg

Interruptions during a service encounter: Dealing with imperfect chatbots

Elizabeth Han, Georgia Institute of Technology
Dezhi Yin, University of South Florida
Han Zhang, Georgia Institute of Technology

Lexical Copy or Semantic Imitation? Exploring the Implications of the AI Text Generator on Medical Crowdfunding Performance

Xiaopan Wang, Tianjin University
Junpeng Guo, Tianjin University
Ben Choi, Nanyang Technological University
Yi Wu, Tianjin University

Only a Coward Hides behind AI? Preferences in Surrogate Moral Decision-Making

Elena Freisinger, Nuremberg Institute for Market Decisions
Sabrina Schneider, Management Center Innsbruck

Regulating Algorithmic Learning in Digital Platform Ecosystems through Data Sharing and Data Siloing: Consequences for Innovation and Welfare

Jan Kraemer, University of Passau
Shiva Shekhar, University of Passau
Janina Hofmann, University of Passau

Repertoires of Evaluation in AI Ethics: Plurality in Professional Responsibility and Accountability

Pedro Seguel, McGill University
Emmanuelle Vaast, McGill University

The Case of Human-Machine Trading as Bilateral Organizational Learning

Timo Sturm, Technical University of Darmstadt
Timo Koppe, Technical University of Darmstadt
Yven Scholz, Technical University of Darmstadt
Peter Buxmann, Technical University of Darmstadt

The Effect of Bots on Human Interaction in Online Communities

Hani Safadi, University of Georgia
John P. Lalor, University of Notre Dame
Nicholas Berente, University of Notre Dame

The Online Community Management Triad - Managerial Dynamics of Community Admins in the Age of Algorithms

Yaara Welcman, Tel Aviv University
Lior Zalmanson, Tel Aviv University

Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice

Anuschka Schmitt, University of St. Gallen
Thiemo Wambsganss, University of St. Gallen
Matthias Soellner, University of Kassel
Andreas Janson, University of St. Gallen