GLAD-Net: Global-Local Adaptive Fusion and Cross-Stage Distillation for Cross-Level Multi-Scale Medical Image Segmentation

Authors: Wenkai Zhao, Lingwei Zhang, Yun Zhao, Xuecheng Bai, Zhenhuan Xu, and Yidi Li
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 2083-2097
Keywords: Medical image segmentation, Cross-Level Multi-scale feature fusion, Global-local feature modeling, Self-distillation framework

Abstract

Medical image segmentation is crucial for disease diagnosis, treatment planning, and surgical navigation. Despite the advances achieved by U-Net-based multi-scale fusion methods, challenges persist: difficulty in synergistically modeling global and local features amid lesion variations, inadequate cross-level coupling of multi-scale information, and asymmetric supervisory signals between the encoder and decoder. Accordingly, this study introduces GLAD-Net, which (1) incorporates a Global-Local Adaptive Module (GLAM) and a Selective KAN Module (SKM) into the U-Net backbone for dynamically weighted feature fusion that enhances hierarchical feature representation; (2) constructs a Channel-Spatial Collaborative Attention (CSCA) mechanism in the cross-layer connections that exploits the continuous spatial modeling ability of the KAN network to boost multi-scale feature expression; (3) employs a Cross-Level Multi-Scale Selective Fusion (CMSF) module in the decoder to merge SKM-weighted decoded features from early layers with the corresponding encoded features; and (4) applies a Cross-Stage Self-Distillation (CSSD) framework that reverse-distills high-level semantic features from the decoder into the early encoder stages to alleviate semantic bias. Experimental results show that GLAD-Net outperforms existing methods on most metrics on both the ISIC2017 and ISIC2018 datasets. Our source code will be made available.
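The global-local adaptive fusion idea described above can be sketched in a few lines. This is a hypothetical, minimal NumPy illustration, not the paper's GLAM implementation: it assumes a per-channel gate derived from pooled channel statistics that convexly combines a local-branch and a global-branch feature map; the actual module's weighting is learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fusion(local_feat, global_feat):
    """Fuse local and global feature maps of shape (C, H, W) with a
    per-channel gate in (0, 1). The gate here is computed from pooled
    statistics as a stand-in for the paper's learned adaptive weights."""
    # Per-channel descriptor via global average pooling of both branches
    desc = local_feat.mean(axis=(1, 2)) - global_feat.mean(axis=(1, 2))
    gate = sigmoid(desc)[:, None, None]  # (C, 1, 1)
    # Convex combination: each output value lies between the two inputs
    return gate * local_feat + (1.0 - gate) * global_feat

rng = np.random.default_rng(0)
local_feat = rng.standard_normal((8, 16, 16))   # e.g. local conv branch
global_feat = rng.standard_normal((8, 16, 16))  # e.g. global context branch
fused = adaptive_fusion(local_feat, global_feat)
print(fused.shape)  # (8, 16, 16)
```

Because the gate lies in (0, 1), the fused map is a per-element convex combination of the two branches, so the fusion can emphasize either branch per channel without discarding the other.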