Wavelet-infused U-Net for Breast Ultrasound Image Segmentation


dc.contributor.author Bakshi, Alif Arshad
dc.contributor.author Joarder, Reaz Hassan
dc.contributor.author Tasmi, Sidratul Tanzila
dc.date.accessioned 2025-03-11T08:23:40Z
dc.date.available 2025-03-11T08:23:40Z
dc.date.issued 2024-07-14
dc.identifier.uri http://hdl.handle.net/123456789/2388
dc.description Supervised by Dr. Md. Hasanul Kabir, Professor, Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur, Bangladesh. This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2024. en_US
dc.description.abstract Breast cancer is a prevalent health concern globally. Its consequences and mortality can be reduced through early detection and successful treatment. However, existing safe and non-invasive imaging methods used for segmentation, such as ultrasound imaging, suffer from noise, artifacts, and incomplete boundaries, and therefore require extensive training and experience for medically accurate interpretation. To this end, we propose modifications to the U-Net architecture that leverage wavelet information to incorporate inter-network smoothing and multiresolution detail. This information is robust to the noise and occlusions prevalent in ultrasound images and allows the model to capture the subtle tissue textures and edges crucial for differentiating lesions from unwanted artifacts. By incorporating wavelet-based sampling into the vanilla U-Net architecture, we nearly halve the parameter count while keeping the Dice score comparable (74.92% Dice score with 17M parameters vs. 76.27% Dice score with 31.1M parameters). Additionally, by incorporating multiresolution information into the decoder architecture, we improve over the vanilla U-Net in both segmentation accuracy (77.83% vs. 76.27% Dice score) and IoU score (64.37% vs. 62.14%). We qualitatively and quantitatively analyze the effectiveness of incorporating multiresolution information into various other U-Net architectures (U-Net++, U-Net 3+, and Attention U-Net) and show that our methods provide more accurate segmentation results and yield robust models that are easier to train. This can enable automated ultrasound image segmentation for more rapid and reliable cancer detection. en_US
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.subject Breast Ultrasound, Image Segmentation, U-Net, Wavelet, Breast Cancer en_US
dc.title Wavelet-infused U-Net for Breast Ultrasound Image Segmentation en_US
dc.type Thesis en_US
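The wavelet-based sampling described in the abstract replaces conventional pooling with a discrete wavelet transform. As a minimal illustration only (not the thesis's actual layer, whose details are not given in this record), a single-level 2D Haar DWT in NumPy shows why this halves spatial resolution without discarding information: the transform is invertible, so edge and texture detail that max-pooling would lose can be carried into the decoder. Function names and the Haar basis choice here are assumptions of the sketch:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT: splits an (H, W) image into four
    half-resolution subbands (LL, LH, HL, HH). Unlike max-pooling,
    this mapping is invertible, so no detail is discarded."""
    # Pairwise averages / differences along rows ...
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # ... then along columns, yielding the four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: reassembles the full-resolution image exactly."""
    h, w = ll.shape
    lo = np.zeros((2 * h, w))
    hi = np.zeros((2 * h, w))
    # Undo the column step ...
    lo[0::2, :] = (ll + lh) / np.sqrt(2)
    lo[1::2, :] = (ll - lh) / np.sqrt(2)
    hi[0::2, :] = (hl + hh) / np.sqrt(2)
    hi[1::2, :] = (hl - hh) / np.sqrt(2)
    # ... then the row step.
    x = np.zeros((2 * h, 2 * w))
    x[:, 0::2] = (lo + hi) / np.sqrt(2)
    x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x
```

Stacking the three detail subbands alongside LL gives a lossless 2x spatial downsample with four times the channels, which is one common way wavelet layers are wired into U-Net-style encoders in place of pooling.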

