ASTPGFN: A Parallel Gating Fusion Framework with Adaptive Spatio-Temporal Modeling for Human Activity Recognition

Authors: Yan Mao, Guoyin Zhang, and Cuicui Ye
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 611-623
Keywords: Wearable sensors, Human Activity Recognition, Spatiotemporal modeling, Graph convolutional network, Transformer.

Abstract

Effective spatiotemporal modeling is crucial for Human Activity Recognition (HAR) using wearable sensors. This paper proposes a novel HAR method integrating a feature enhancement layer, a spatiotemporal gated fusion module, and a fine-grained spatiotemporal segmentation attention module. The feature enhancement layer transforms the input data to improve its representational capacity. The spatiotemporal gated fusion module extracts global spatiotemporal features using a Transformer-based temporal encoder and a residual Graph Convolutional Network (GCN)-based spatial encoder, with an adaptive gating mechanism for feature fusion. The fine-grained segmentation attention module further refines local spatial and temporal features to enhance feature interaction. The fully integrated features are then classified by a fully connected layer. Experimental results on multiple public datasets demonstrate that the proposed method outperforms conventional approaches in recognition accuracy, robustness, and generalization. This model provides an efficient and adaptive solution for HAR using wearable sensors.
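To illustrate the gated fusion idea described above, here is a minimal PyTorch sketch of an adaptive gate combining a Transformer-based temporal stream with a residual graph-convolution spatial stream. All tensor shapes, layer sizes, the learnable adjacency, and the name `GatedSpatioTemporalFusion` are hypothetical; the abstract does not specify the paper's actual architecture details, so this is a sketch under those assumptions, not the authors' implementation.

```python
# Minimal sketch of an adaptive spatiotemporal gated fusion module,
# assuming PyTorch and hypothetical shapes (batch, time, sensors, features).
import torch
import torch.nn as nn


class GatedSpatioTemporalFusion(nn.Module):
    """Fuses a temporal (Transformer) stream and a spatial (GCN-style)
    stream with a learned, input-dependent gate:
        out = g * F_t + (1 - g) * F_s
    """

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_sensors: int = 9):
        super().__init__()
        # Temporal encoder: one Transformer encoder layer over time steps.
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=1,
        )
        # Spatial encoder: a simple residual graph convolution over sensor
        # nodes with a learnable adjacency (a stand-in for the paper's
        # residual GCN, whose exact form the abstract does not give).
        self.adj = nn.Parameter(torch.eye(n_sensors))
        self.spatial_proj = nn.Linear(d_model, d_model)
        # Adaptive gate: maps the concatenated streams to a [0, 1] weight.
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, sensors, d_model)
        b, t, s, d = x.shape
        # Temporal stream: attend over time independently per sensor node.
        f_t = self.temporal(x.permute(0, 2, 1, 3).reshape(b * s, t, d))
        f_t = f_t.reshape(b, s, t, d).permute(0, 2, 1, 3)
        # Spatial stream: mix sensor nodes via the adjacency, with a residual.
        f_s = x + torch.einsum("ij,btjd->btid", self.adj, self.spatial_proj(x))
        # Adaptive gating: element-wise convex combination of the two streams.
        g = self.gate(torch.cat([f_t, f_s], dim=-1))
        return g * f_t + (1.0 - g) * f_s


if __name__ == "__main__":
    # Example: 8 windows, 128 time steps, 9 sensor channels, 64-dim features.
    feats = torch.randn(8, 128, 9, 64)
    fused = GatedSpatioTemporalFusion()(feats)
    print(fused.shape)  # torch.Size([8, 128, 9, 64])
```

The sigmoid gate makes the fusion weight input-dependent, so the model can lean on temporal context for periodic activities (e.g., walking) and on inter-sensor structure for posture-dominated ones, rather than using a fixed mixing ratio.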