Abstract
Selecting a Multi-Criteria Decision-Making (MCDM) method is critical for developing robust Decision Support Systems (DSS), yet limited attention has been given to assessing their stability under structural changes in decision problems. This study proposes a simulation-based framework for evaluating the robustness of MCDM methods when the least important criteria are iteratively removed. Four selected methods, namely Additive Ratio ASsessment (ARAS), COmplex PRoportional ASsessment (COPRAS), Measurement of Alternatives and Ranking according to COmpromise Solution (MARCOS), and MultiAttributive Ideal-Real Comparative Analysis (MAIRCA), were tested across thousands of randomized scenarios, with performance assessed through mean ranking correlation, frequency of ranking alterations, and distribution of similarity values. The findings reveal consistent stability trends across methods while identifying differences in sensitivity to criteria reduction. Notably, MAIRCA and COPRAS exhibited more compact performance distributions, suggesting stronger resilience to changes in problem structure. This work addresses a critical gap in understanding method robustness, supporting more informed selection of MCDM techniques for uncertain decision environments and enhancing the reliability of decision-making processes.
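The simulation procedure described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a plain weighted-sum scorer as a stand-in for the four MCDM methods, random decision matrices and weights, iterative removal of the lowest-weight criterion, and Spearman correlation between the full and reduced rankings. All function names and parameter choices are hypothetical.

```python
import random
import statistics

def spearman(rank_a, rank_b):
    # Spearman rank correlation between two rankings (no tie correction)
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def rank(scores):
    # Rank alternatives: highest score gets rank 1 (ties broken by index)
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def weighted_sum(matrix, weights):
    # Stand-in scorer for an MCDM method (assumes benefit-type criteria)
    return [sum(w * x for w, x in zip(weights, row)) for row in matrix]

def removal_experiment(n_alt=8, n_crit=6, seed=0):
    # One randomized scenario: rank with all criteria, then iteratively
    # drop the least important (lowest-weight) criterion, renormalize the
    # remaining weights, re-rank, and record similarity to the base ranking.
    rng = random.Random(seed)
    matrix = [[rng.random() for _ in range(n_crit)] for _ in range(n_alt)]
    weights = [rng.random() for _ in range(n_crit)]
    total = sum(weights)
    weights = [w / total for w in weights]

    base_ranks = rank(weighted_sum(matrix, weights))
    cols = list(range(n_crit))
    correlations = []
    while len(cols) > 2:
        drop = min(cols, key=lambda j: weights[j])  # least important criterion
        cols.remove(drop)
        sub_w = [weights[j] for j in cols]
        s = sum(sub_w)
        sub_w = [w / s for w in sub_w]
        sub_m = [[row[j] for j in cols] for row in matrix]
        correlations.append(spearman(base_ranks, rank(weighted_sum(sub_m, sub_w))))
    return correlations

# Mean ranking correlation over many randomized scenarios
corrs = [statistics.mean(removal_experiment(seed=s)) for s in range(100)]
print(round(statistics.mean(corrs), 3))
```

In the actual study, the weighted-sum scorer would be replaced by each of ARAS, COPRAS, MARCOS, and MAIRCA (e.g. via an MCDM library), and ranking alteration frequency and similarity distributions would be collected alongside the mean correlation.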
Paper Type
Short Paper
DOI
10.62036/ISD.2025.34
Assessing the impact of criteria removal on Multi-Criteria Decision-Making stability: A simulation-based sensitivity analysis
Recommended Citation
Więckowski, J., Kołodziejczyk, J. & Sałabun, W. (2025). Assessing the impact of criteria removal on Multi-Criteria Decision-Making stability: A simulation-based sensitivity analysis. In I. Luković, S. Bjeladinović, B. Delibašić, D. Barać, N. Iivari, E. Insfran, M. Lang, H. Linger, & C. Schneider (Eds.), Empowering the Interdisciplinary Role of ISD in Addressing Contemporary Issues in Digital Transformation: How Data Science and Generative AI Contributes to ISD (ISD2025 Proceedings). Belgrade, Serbia: University of Gdańsk, Department of Business Informatics & University of Belgrade, Faculty of Organizational Sciences. ISBN: 978-83-972632-1-5. https://doi.org/10.62036/ISD.2025.34