EXPLAINABLE TRANSFER LEARNING ENSEMBLE MODEL FOR ACCURATE BRAIN TUMOR CLASSIFICATION
Keywords:
Brain tumor classification, MRI, transfer learning, ensemble learning, explainability, Grad-CAM, SHAP, LIME

Abstract
Early and accurate detection of brain tumors from magnetic resonance imaging (MRI) is critical for patient prognosis and treatment planning. Deep learning methods, particularly convolutional neural networks (CNNs), have shown strong performance on medical image classification but often lack interpretability, which hinders clinical adoption. This paper proposes an explainable transfer learning ensemble (ETLE) framework that combines multiple pretrained CNN backbones via ensemble strategies and augments predictions with model-agnostic and model-specific explainability methods (SHAP, LIME, Grad-CAM). We evaluate the ETLE framework on publicly available brain MRI datasets, comparing single-model transfer learning baselines with ensemble variants (majority voting, weighted averaging, and stacking). Our experiments demonstrate improved accuracy, robustness to class imbalance, and clinically meaningful visual explanations that localize tumor regions. We report an ensemble accuracy of X% and class-wise F1-scores of Y% (illustrative placeholder values, to be replaced with the actual experimental results). The framework is designed to be reproducible and easily integrated into clinical workflows, providing both high performance and interpretability.
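To make the ensemble construction concrete, the sketch below shows one way the weighted-averaging variant could be assembled in PyTorch: ImageNet-pretrained backbones receive new classification heads for the tumor classes, and their softmax outputs are blended with fixed weights. The backbone choices, the four-class label set, and the blending weights are illustrative assumptions, not the configuration used in the paper's experiments.

```python
# Illustrative sketch (assumed PyTorch implementation, not the authors' code) of a
# weighted-averaging transfer learning ensemble for brain MRI classification.
import torch
import torch.nn as nn
import torchvision.models as models

N_CLASSES = 4  # assumed label set, e.g. glioma, meningioma, pituitary, no tumor

def make_backbone(name: str) -> nn.Module:
    """Load an ImageNet-pretrained CNN and replace its classifier head."""
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, N_CLASSES)
    elif name == "densenet121":
        m = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        m.classifier = nn.Linear(m.classifier.in_features, N_CLASSES)
    else:
        raise ValueError(f"unsupported backbone: {name}")
    return m

class WeightedAveragingEnsemble(nn.Module):
    """Blend member softmax outputs with fixed, validation-tuned weights."""
    def __init__(self, members, weights):
        super().__init__()
        self.members = nn.ModuleList(members)
        self.register_buffer("weights", torch.tensor(weights, dtype=torch.float32))

    def forward(self, x):
        # Stack per-member class probabilities: shape (num_members, batch, classes)
        probs = torch.stack([m(x).softmax(dim=1) for m in self.members])
        w = self.weights / self.weights.sum()
        return (w.view(-1, 1, 1) * probs).sum(dim=0)  # (batch, classes)

# Usage: fine-tune each member on the MRI training split first, then choose the
# blending weights on a validation split (the 0.6/0.4 split here is hypothetical).
ensemble = WeightedAveragingEnsemble(
    [make_backbone("resnet50"), make_backbone("densenet121")],
    weights=[0.6, 0.4],
).eval()
with torch.no_grad():
    mri_batch = torch.randn(1, 3, 224, 224)  # MRI slice replicated to 3 channels
    print(ensemble(mri_batch).argmax(dim=1))  # predicted tumor class index
```

The other two ensemble variants differ only in the combination step: majority voting takes the mode of the members' argmax predictions, while stacking trains a small meta-classifier on the concatenated member probabilities.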













