BERT-MSDF: A Dynamic Fusion Network for Multi-Scale Text Feature Extraction

Authors: MingLe Zhou, XinYu Liu, JiaChen Li, and DeLong Han
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 771-786
Keywords: Natural language processing, Text classification, BERT, Multi-scale feature extraction, Weighted fusion strategy

Abstract

Text classification is a core task in natural language processing, playing an irreplaceable role in applications such as spam detection, user sentiment analysis, and news topic classification. Existing models struggle to capture different levels of semantic information in a text simultaneously, which leads to classification ambiguity and reduced accuracy. To improve semantic understanding, this paper proposes a dynamic fusion network for multi-scale semantic feature extraction based on BERT (MSDF). The network introduces a multi-functional channel depth modulator (VCDM) that performs deeper analysis at the macro level, such as the overall context of the text. A small-scale feature extractor (BMHA) is also proposed to extract features at the micro level, modeling the underlying structure of the vocabulary and thereby complementing and enhancing the VCDM module. To better capture feature information and optimize classification accuracy, the optimal BERT output layer is selected as the model's input, and an adaptive weighting mechanism is introduced to dynamically fuse the outputs of the two modules. Experimental results show that the MSDF model outperforms existing models, achieving accuracies of 95.03%, 93.63%, and 89.47% on three datasets, respectively, demonstrating the effectiveness of the proposed MSDF model on the text classification task.
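The adaptive fusion step described above can be illustrated with a minimal NumPy sketch: two learnable scalar logits (one per branch) are softmax-normalized into weights that combine the macro-level (VCDM) and micro-level (BMHA) feature vectors. The function and variable names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_fuse(macro_feat, micro_feat, branch_logits):
    """Weighted fusion of two feature vectors.

    branch_logits stands in for per-branch parameters that would be
    learned jointly with the network; softmax keeps the weights
    positive and summing to one.
    """
    w = softmax(branch_logits)
    return w[0] * macro_feat + w[1] * micro_feat

# Stand-in feature vectors for the two branches (illustrative values).
macro = np.full(4, 2.0)  # e.g. VCDM (macro-level) output
micro = np.full(4, 4.0)  # e.g. BMHA (micro-level) output

# Equal logits yield equal weights, so fusion reduces to the mean.
fused = adaptive_fuse(macro, micro, np.array([0.0, 0.0]))
```

During training, the branch logits would receive gradients through the classification loss, letting the network shift weight toward whichever scale is more informative for a given task.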