Location
Hilton Hawaiian Village, Honolulu, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2024
End Date
January 6, 2024
Description
Representing a system as a network is critical to support systems thinking; hence, several tools have been developed to derive networks from text in educational technology, modeling and simulation, or forecasting. Large-Scale Pre-Trained Language Models (PLMs) have recently come to the forefront to create question-answering (Q&A) systems that can extract networks from text. In this paper, we design and implement a Q&A system that uses GPT-3.5 together with 12 filters to extract causal maps from text. Our evaluation on two topics via several policy documents finds that GPT accurately extracts relevant concept nodes but occasionally reverses causal directions and struggles with the type of causality, as it lacks an understanding of event sequence. We also show that automatically extracted maps only partially resemble human-made maps collected on the same topics. By making our Q&A system open-source on a permanent repository, researchers can evaluate it with newer PLMs as technology improves.
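The paper's 12 filters are not reproduced in this abstract. As a hedged illustration only, the sketch below shows what post-processing filters over model-extracted causal statements could look like; the line format ("cause -> (+/-) effect"), the two filters shown, and the function names are assumptions for illustration, not the authors' actual implementation.

```python
import re

def parse_causal_lines(raw: str):
    """Parse lines like 'Stress -> (+) Overeating' into (cause, sign, effect) triples.
    This line format is a hypothetical convention, not the paper's output schema."""
    pattern = re.compile(r"^(.*?)\s*->\s*\(([+-])\)\s*(.*)$")
    edges = []
    for line in raw.splitlines():
        m = pattern.match(line.strip())
        if m:
            # Lowercase concept labels so trivially different mentions merge
            edges.append((m.group(1).lower(), m.group(2), m.group(3).lower()))
    return edges

def filter_edges(edges):
    """Two example filters: drop self-loops and exact duplicate edges."""
    seen, kept = set(), []
    for cause, sign, effect in edges:
        if cause == effect:                 # filter: self-loop
            continue
        if (cause, sign, effect) in seen:   # filter: duplicate
            continue
        seen.add((cause, sign, effect))
        kept.append((cause, sign, effect))
    return kept

sample = """Stress -> (+) Overeating
stress -> (+) overeating
Exercise -> (-) Exercise"""
print(filter_edges(parse_causal_lines(sample)))
```

A real pipeline would apply many such rule-based checks (the paper mentions 12) to the raw model output before assembling the surviving edges into a causal map.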
Recommended Citation
Giabbanelli, Philippe and Witkowicz, Nathan, "Generative AI for Systems Thinking: Can a GPT Question-Answering System Turn Text into the Causal Maps Produced by Human Readers?" (2024). Hawaii International Conference on System Sciences 2024 (HICSS-57). 7.
https://aisel.aisnet.org/hicss-57/st/research_and_education/7
Generative AI for Systems Thinking: Can a GPT Question-Answering System Turn Text into the Causal Maps Produced by Human Readers?