EARLY AND ACCURATE BREAST CANCER DETECTION USING MULTI-MODAL IMAGING
Abstract
Breast cancer is one of the leading causes of cancer-related death among women, making early and accurate detection essential. Recent advances in deep learning and optimization techniques have shown great promise for medical image analysis, offering higher accuracy and effectiveness than conventional diagnostic techniques. However, most current approaches rely on a single imaging modality or suboptimal optimization strategies, resulting in false positives, false negatives, and reduced clinical reliability. This study introduces a novel approach that combines YOLOv9 with Soft-NMS for tumor localization, multi-modal fusion of MRI and ultrasound images, and Particle Swarm Optimization (PSO) for feature extraction and hyperparameter tuning.
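To make the detection step concrete, the following is a minimal sketch of the Gaussian Soft-NMS re-scoring that would be applied to a detector's raw boxes. The sigma and score_thresh values are illustrative assumptions, not the tuned settings used in this study.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of overlapping boxes
    instead of discarding them outright, which helps retain
    adjacent or partially overlapping lesions."""
    scores = scores.copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        top = idxs[np.argmax(scores[idxs])]
        keep.append(top)
        idxs = idxs[idxs != top]
        if len(idxs) == 0:
            break
        ious = iou(boxes[top], boxes[idxs])
        scores[idxs] *= np.exp(-(ious ** 2) / sigma)  # Gaussian penalty
        idxs = idxs[scores[idxs] > score_thresh]      # drop faded boxes
    return keep
```

In the standard Gaussian variant sketched here, a neighbor's score decays smoothly with overlap, so a second tumor close to a high-confidence detection is down-weighted rather than suppressed entirely.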
The model was validated on three datasets: a breast MRI dataset, a breast ultrasound dataset, and the Breast Cancer Wisconsin dataset. Experimental results show that YOLOv9 with Soft-NMS achieved 99.5% accuracy on MRI images, and PSO-based tuning raised classifier accuracy to as high as 99% across all three datasets. Multi-modal fusion improved diagnostic accuracy further, with weighted fusion reaching 93.8% accuracy and an AUC of 96.3%. The adaptive ResNet18 model achieved sensitivity and specificity in the 97–99% range, confirming that it can substantially reduce both false negatives and false positives. By improving breast cancer detection through AI-driven multi-modal analysis, this research advances SDG 3 (Good Health and Well-Being), contributes to SDG 9 (Industry, Innovation, and Infrastructure) through scalable, innovative healthcare technologies that strengthen medical infrastructure and clinical workflows, and supports SDG 10 (Reduced Inequalities) by promoting affordable AI solutions accessible to under-resourced regions.
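As an illustration of the fusion step behind the weighted-fusion result above, the sketch below performs a pixel-wise weighted average of co-registered MRI and ultrasound slices. The weight values, the normalization, and the weighted_fusion helper itself are assumptions for illustration; the study's actual weights would come from its optimization stage.

```python
import numpy as np

def weighted_fusion(mri, us, w_mri=0.6, w_us=0.4):
    """Pixel-wise weighted fusion of two co-registered, equally sized
    image arrays. The weights here are illustrative placeholders; in
    a pipeline like this study's they would be tuned (e.g., via PSO)."""
    assert mri.shape == us.shape, "inputs must be co-registered and resampled"
    # Normalize each modality to [0, 1] before blending.
    mri = (mri - mri.min()) / (np.ptp(mri) + 1e-9)
    us = (us - us.min()) / (np.ptp(us) + 1e-9)
    return w_mri * mri + w_us * us

# Example with random stand-ins for real, co-registered slices.
mri_slice = np.random.rand(256, 256)
us_slice = np.random.rand(256, 256)
fused = weighted_fusion(mri_slice, us_slice)
```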