Paper Type
ERF
Description
In recent years, there has been an explosion of new information technologies that use artificial intelligence (AI) to improve decision-making in scientific research. However, the pace of innovation has far exceeded the capacity of researchers to evaluate such technologies. This project evaluates two new AI-powered research assistant tools for decision-making in literature review: Elicit, which uses GPT, and Research Rabbit, which uses a snowballing algorithm and natural language processing. Using a database search as a control, this project will evaluate the overlap of records retrieved, the proportion of records missed, time savings, and usability for each tool. The goal is to ascertain the technologies’ reliability, efficiency, and acceptance. Such thorough evaluation is necessary to establish trust in these tools’ performance and therefore to promote their adoption. This is the first known assessment of AI tools that operate by iteratively employing users’ decisions as feedback for retrieving information for literature review.
Paper Number
1299
Recommended Citation
Manning, Christy; Zhuma, Sophie; Nagrecha, Shivani; TOKO KOUTOGUI, Abdoul kafid; Yessoufou, Mouiz W. I. A; and Gruetzemacher, Richard, "Streamlining Science: Recreating Systematic Literature Reviews with AI-Powered Decision Tools" (2023). AMCIS 2023 Proceedings. 8.
https://aisel.aisnet.org/amcis2023/conf_theme/conf_theme/8
Streamlining Science: Recreating Systematic Literature Reviews with AI-Powered Decision Tools