EXPLAINABLE TRANSFER LEARNING ENSEMBLE MODEL FOR ACCURATE BRAIN TUMOR CLASSIFICATION

Authors

  • Muhammad Waleed Iqbal
  • Usman Ahmed
  • Mehak Rana
  • Ayeda Shahzad
  • Muhammad Sohail Sardar
  • Hassan Abbas

Keywords:

Brain tumor classification, MRI, transfer learning, ensemble learning, explainability, Grad-CAM, SHAP, LIME

Abstract

Early and accurate detection of brain tumors from magnetic resonance imaging (MRI) is critical for patient prognosis and treatment planning. Deep learning methods, particularly convolutional neural networks (CNNs), have shown strong performance in medical image classification but often lack interpretability, which hinders clinical adoption. This paper proposes an explainable transfer learning ensemble (ETLE) framework that combines multiple pretrained CNN backbones via ensemble strategies and augments predictions with model-agnostic and model-specific explainability methods (SHAP, LIME, Grad-CAM). We evaluate the ETLE framework on publicly available brain MRI datasets, comparing single-model transfer learning baselines with ensemble variants (majority voting, weighted averaging, and stacking). Our experiments demonstrate improved accuracy, robustness to class imbalance, and clinically meaningful visual explanations that localize tumor regions. The framework is designed to be reproducible and readily integrated into clinical workflows, providing both high performance and interpretability.
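The ensemble strategies named in the abstract (majority voting and weighted averaging over per-backbone class probabilities) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, weights, and toy probability arrays are assumptions for demonstration only.

```python
import numpy as np

def weighted_average_ensemble(prob_list, weights):
    """Fuse per-model class-probability arrays of shape (n_samples, n_classes)
    with normalized weights; return predicted class indices per sample."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize so weights sum to 1
    stacked = np.stack(prob_list)              # (n_models, n_samples, n_classes)
    fused = np.tensordot(w, stacked, axes=1)   # weighted sum over the model axis
    return fused.argmax(axis=1)

def majority_vote_ensemble(prob_list):
    """Hard-vote: each model predicts a class; the most frequent class wins."""
    n_classes = prob_list[0].shape[1]
    votes = np.stack([p.argmax(axis=1) for p in prob_list])  # (n_models, n_samples)
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes
    )                                          # (n_classes, n_samples)
    return counts.argmax(axis=0)

# Toy example: three backbones, two samples, binary tumor/no-tumor probabilities.
p1 = np.array([[0.9, 0.1], [0.2, 0.8]])
p2 = np.array([[0.6, 0.4], [0.7, 0.3]])
p3 = np.array([[0.3, 0.7], [0.1, 0.9]])
print(majority_vote_ensemble([p1, p2, p3]))                     # [0 1]
print(weighted_average_ensemble([p1, p2, p3], [0.5, 0.3, 0.2])) # [0 1]
```

Stacking, the third variant, would instead feed the concatenated backbone probabilities into a trainable meta-classifier rather than a fixed fusion rule.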

Published

2025-11-29

How to Cite

Muhammad Waleed Iqbal, Usman Ahmed, Mehak Rana, Ayeda Shahzad, Muhammad Sohail Sardar, & Hassan Abbas. (2025). EXPLAINABLE TRANSFER LEARNING ENSEMBLE MODEL FOR ACCURATE BRAIN TUMOR CLASSIFICATION. Spectrum of Engineering Sciences, 3(11), 807–818. Retrieved from https://thesesjournal.com/index.php/1/article/view/1561