Paper Type
Complete
Abstract
This study examines the impact of input form design on the completeness of collected data by comparing three interface designs. These designs were adapted from two previously tested design principles that appear to have conflicting effects on completeness. The interfaces were developed to collect reports of potential fraudulent activity from the public. A taxonomy was used to assess how thoroughly the collected data covered essential aspects of the reports and to determine their completeness. The results indicate that the interaction between the design principles may lead to outcomes that differ from initial expectations. Additionally, the findings demonstrate how using a conceptual model to evaluate completeness can improve our understanding of it compared with subjective evaluation or simple text-length measures. This method of measuring completeness also supports the development of AI-generated fraud reports in future research.
Paper Number
2114
Recommended Citation
Nabavian, Sanaz; Robertson, Charles; Parsons, Jeffrey; and Hawkin, John, "Effects of Design choices on Data completeness: The case of suspicious transaction reports" (2025). AMCIS 2025 Proceedings. 5.
https://aisel.aisnet.org/amcis2025/sig_core/sig_core/5
Effects of Design choices on Data completeness: The case of suspicious transaction reports
Comments
SIGCORE