A Deep Learning Model for Accurate Multi-Modal Feature Fusion and Segmentation of Breast Cancer Nodules
DOI: https://doi.org/10.70917/ijcisim-2025-0003

Abstract
Breast cancer is characterized by the uncontrolled growth of breast cells, resulting in the formation of tumors. In this study, we propose AMFFR-Net, a deep learning network for multimodal fusion and segmentation of breast cancer nodules across three medical imaging modalities: MRI, ultrasound, and mammography. AMFFR-Net is an advanced deep neural network that enhances the diagnostic process by applying multi-modal feature fusion and refinement methods to achieve precise segmentation of the regions of interest. Quantitative assessments on diverse datasets show that AMFFR-Net outperforms baseline models across multiple criteria. On MRI datasets, it achieves an outstanding AUC of 98.94%, an F1-score of 94.8%, and an overall accuracy of 97.0%, demonstrating superior discrimination and more accurate delineation of anatomical structures. This high performance is replicated on the ultrasound and mammogram datasets, where AMFFR-Net attains higher AUC, F1-score, and Dice coefficient values and lower HD95 values, indicating accurate boundary detection.
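For readers unfamiliar with the segmentation metrics cited above, the following is an illustrative sketch (not from the paper) of how the Dice coefficient and HD95 are commonly computed for binary masks. It uses NumPy only, and for simplicity measures HD95 over all foreground pixels rather than extracted surface voxels, a common simplification for small 2D examples.

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def hd95(pred, target):
    """95th-percentile symmetric Hausdorff distance between foreground points."""
    a = np.argwhere(pred.astype(bool))
    b = np.argwhere(target.astype(bool))
    # pairwise Euclidean distances between all foreground pixel coordinates
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # take the 95th percentile of nearest-neighbor distances in both directions
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

# Toy example: two overlapping 4x4 squares shifted by one column
pred = np.zeros((8, 8), int); pred[2:6, 2:6] = 1
target = np.zeros((8, 8), int); target[2:6, 3:7] = 1
print(round(dice_coefficient(pred, target), 3))  # -> 0.75
print(hd95(pred, target))                        # -> 1.0
```

A higher Dice coefficient indicates greater overlap between the predicted and ground-truth masks, while a lower HD95 indicates that the predicted boundary lies closer to the true boundary, which is why the paper reports high Dice alongside low HD95.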
