COMPARATIVE ANALYSIS OF EXPLAINABLE AI TECHNIQUES FOR ENHANCED DECISION SUPPORT SYSTEMS

Authors

  • Muhammad Ahmad
  • Muhammad Nabeel Afzal
  • Muhammad Hamza Afzal
  • Hafiz Muhammad Haroon
  • Masood Ahmad Khan
  • Muhammad Talha Tahir Bajwa

Abstract

The rapid integration of artificial intelligence (AI) into decision support systems (DSS) has raised concerns about the transparency and interpretability of complex machine learning models. To improve the interpretability and reliability of AI-driven decision-making, this paper assesses popular explainable artificial intelligence (XAI) techniques, including LIME, SHAP, feature importance methods, and rule-based algorithms. Experiments on benchmark datasets compare these methods in terms of explanation accuracy, consistency, computational efficiency, and user interpretability. The results indicate that combining several XAI techniques can greatly enhance a decision support system by raising transparency, user confidence, and decision quality. SHAP-based methodologies are more consistent and support global interpretation, whereas LIME provides local explanations that are flexible and efficient. These improvements enable more informed and correct decisions in critical domains such as healthcare and finance. The research contributes a systematic review method and practical guidance on selecting appropriate XAI techniques, thereby supporting the development of more transparent, credible, and effective decision support systems.
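To make the distinction concrete: SHAP attributes a prediction to features via Shapley values, averaging each feature's marginal contribution over feature orderings. The sketch below (not from the paper; the toy linear model, feature values, and baseline are all illustrative assumptions) computes exact Shapley values by brute-force enumeration, which is feasible only for a handful of features; libraries such as SHAP approximate this for real models.

```python
from itertools import permutations

# Hypothetical toy model: a linear scorer over three features.
# (Illustrative only; the weights are not from the paper.)
def model(x):
    weights = [2.0, -1.0, 0.5]
    return sum(w * v for w, v in zip(weights, x))

def shapley_values(model, x, baseline):
    """Exact Shapley attributions: average each feature's marginal
    contribution over all feature orderings (brute force, small n only)."""
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)      # start from the baseline input
        prev = model(current)
        for i in order:
            current[i] = x[i]         # reveal feature i
            val = model(current)
            phi[i] += val - prev      # marginal contribution of feature i
            prev = val
    return [p / len(orderings) for p in phi]

x = [1.0, 2.0, 4.0]
baseline = [0.0, 0.0, 0.0]
print(shapley_values(model, x, baseline))
```

For a linear model the attributions reduce to w_i * (x_i - baseline_i), and by construction they sum to model(x) - model(baseline), the additivity property that makes SHAP explanations globally consistent. LIME, by contrast, fits a simple surrogate model to perturbed samples around one instance, trading this global consistency for speed and flexibility.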

Published

2026-04-09

How to Cite

Muhammad Ahmad, Muhammad Nabeel Afzal, Muhammad Hamza Afzal, Hafiz Muhammad Haroon, Masood Ahmad Khan, & Muhammad Talha Tahir Bajwa. (2026). COMPARATIVE ANALYSIS OF EXPLAINABLE AI TECHNIQUES FOR ENHANCED DECISION SUPPORT SYSTEMS. Spectrum of Engineering Sciences, 4(4), 193–204. Retrieved from https://thesesjournal.com/index.php/1/article/view/2399