Paper Number: 2135
Paper Type: Short Paper
Abstract
Generative language models (GLMs) such as GPT-3.5 or ChatGPT produce remarkable results when used for creative tasks. In this short paper, we outline our planned approach to investigating how humans brainstorm together with GLMs. In particular, we plan to explore how the presentation of GLM suggestions affects brainstorming group effects known from all-human groups, such as cognitive stimulation or social loafing. Based on the group brainstorming literature and previous studies, we designed an experiment to measure the underlying performance effects. In the planned between-subjects experiment, participants brainstorm with a GLM, and we investigate how manipulating the completeness and the communicated origin of suggestions affects performance-related constructs. Here we report only preliminary results. Our study has theoretical implications, showing how well-documented brainstorming group effects from all-human groups can help to understand and shape human-AI team performance. Practical implications include insights into designing GLM-based creativity support while balancing performance dimensions.
Recommended Citation
Memmert, Lucas, "Brainstorming with a Generative Language Model: Understanding Performance Through Brainstorming Group Effects" (2024). ECIS 2024 Proceedings. 1.
https://aisel.aisnet.org/ecis2024/track06_humanaicollab/track06_humanaicollab/1
Brainstorming with a Generative Language Model: Understanding Performance Through Brainstorming Group Effects