
Author ORCID Identifiers
Jaebong Son: https://orcid.org/0000-0002-5334-3117
Chang Heon Lee: https://orcid.org/0000-0002-3906-8462
Abstract
Deep neural networks (DNNs) have revolutionized analytics, enabling advances in areas such as large language models, computer vision, autonomous driving, and generative artificial intelligence. Their transformative potential has garnered widespread attention across academia and industry. However, the inductive reasoning underlying DNNs makes it difficult to generalize how key structural factors, such as neurons, hidden layers, and epochs, affect learning capability and interact with one another. This study adopts a positivist approach, applying deductive reasoning and empirical analysis to examine the main and moderation effects of these structural factors on DNNs' learning capability, as manifested in performance outcomes. Our empirical analysis shows that neurons, hidden layers, and epochs each positively influence learning capability. These effects are further shaped by dataset complexity, with intricate patterns amplifying their impact. However, hidden layers and epochs negatively moderate the effect of neurons. These findings help contextualize the often difficult-to-generalize results of inductive DNN studies and address the lack of empirical evidence on the roles of DNN structural factors. They also show that simply increasing structural factors is not an effective strategy for improving DNN performance, underscoring the challenge of optimizing these factors.
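To make the abstract's framing concrete, the sketch below is a minimal, hypothetical illustration, not the authors' experimental design: it trains small scikit-learn MLPClassifier models over an assumed grid of neurons, hidden layers, and epochs on a synthetic dataset, then fits an OLS regression with two-way interaction terms to approximate main and moderation effects. The grid values, synthetic task, and accuracy metric are all assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' design): vary structural
# factors of a DNN, record performance, and regress performance on the
# factors plus their two-way interactions (moderation terms).
from itertools import product

import pandas as pd
import statsmodels.formula.api as smf
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic dataset standing in for a task of fixed complexity (assumption).
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rows = []
# Hypothetical grid of structural factors.
for neurons, layers, epochs in product([8, 32, 128], [1, 2, 3], [20, 100, 300]):
    model = MLPClassifier(hidden_layer_sizes=(neurons,) * layers,
                          max_iter=epochs, random_state=0)
    model.fit(X_tr, y_tr)  # may warn about non-convergence at low epoch counts
    rows.append({"neurons": neurons, "layers": layers,
                 "epochs": epochs, "accuracy": model.score(X_te, y_te)})

df = pd.DataFrame(rows)

# Main effects plus all two-way interactions: the interaction
# coefficients play the role of moderation effects.
fit = smf.ols("accuracy ~ (neurons + layers + epochs) ** 2", data=df).fit()
print(fit.summary())
```

In this framing, a negative coefficient on a term such as neurons:layers would correspond to the kind of negative moderation the abstract reports, where hidden layers dampen the effect of adding neurons.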
Recommended Citation
Son, J., & Lee, C. (In press). Understanding the Factors Shaping the Learning Capability of Deep Neural Networks: A Positivist Perspective. Communications of the Association for Information Systems, 56, pp-pp. Retrieved from https://aisel.aisnet.org/cais/vol56/iss1/20