YOLO-FireAD: Efficient Fire Detection via Attention-Guided Inverted Residual Learning and Dual-Pooling Feature Preservation

Authors: Weichao Pan, Bohan Xu, Xu Wang, Chengze Lv, Shuoyang Wang, and Zhenke Duan
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 438-452
Keywords: Fire detection, Efficient models, YOLO, Attention mechanisms, Small object detection.

Abstract

Fire detection in dynamic environments faces continuous challenges, including illumination interference, frequent false detections and missed detections, and difficulty in balancing efficiency and accuracy. To address the feature-extraction limitations and information loss of existing YOLO-based models, this study proposes You Only Look Once for Fire Detection with Attention-guided Inverted Residual and Dual-pooling Downscale Fusion (YOLO-FireAD), with two core innovations: (1) the Attention-guided Inverted Residual block (AIR) integrates hybrid channel-spatial attention with inverted residuals to adaptively enhance fire features and suppress environmental noise; (2) the Dual Pool Downscale Fusion block (DPDF) preserves multi-scale fire patterns through learnable fusion of max- and average-pooling outputs, mitigating small-fire detection failures. Extensive evaluation on two public datasets demonstrates the efficiency of the model. Experimental results show that the proposed model remains lightweight (1.45M parameters, 51.8% fewer than YOLOv8n; 4.6 GFLOPs, 43.2% lower than YOLOv8n), while its mAP75 exceeds that of mainstream real-time object detectors (YOLOv8n, YOLOv9t, YOLOv10n, YOLO11n, YOLOv12n) and other YOLOv8 variants by 1.3-5.5%.
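
For intuition, below is a minimal PyTorch sketch of the two ideas as the abstract describes them. The class names (`AIRBlock`, `DPDFBlock`), expansion factor, channel ratios, and attention details are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the AIR and DPDF ideas; layer choices are assumptions, not the paper's code.
import torch
import torch.nn as nn


class DPDFBlock(nn.Module):
    """Dual-pooling downscale: learnable fusion of max- and average-pooled features."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.max_pool = nn.MaxPool2d(kernel_size=2, stride=2)  # keeps sharp, small-fire responses
        self.avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)  # keeps smooth, contextual responses
        # Learnable fusion: a 1x1 conv mixes the concatenated pooling branches.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.max_pool(x), self.avg_pool(x)], dim=1))


class AIRBlock(nn.Module):
    """Inverted residual with hybrid channel-spatial attention on the expanded features."""

    def __init__(self, ch: int, expand: int = 2):
        super().__init__()
        hidden = ch * expand
        self.expand = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.SiLU(inplace=True)
        )
        self.dwconv = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.SiLU(inplace=True),
        )
        # Channel attention (squeeze-and-excitation style reweighting).
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hidden, hidden // 4, 1), nn.SiLU(inplace=True),
            nn.Conv2d(hidden // 4, hidden, 1), nn.Sigmoid(),
        )
        # Spatial attention from pooled channel statistics.
        self.spatial_attn = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        self.project = nn.Sequential(nn.Conv2d(hidden, ch, 1, bias=False), nn.BatchNorm2d(ch))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.dwconv(self.expand(x))
        y = y * self.channel_attn(y)
        spatial = torch.cat([y.mean(dim=1, keepdim=True), y.amax(dim=1, keepdim=True)], dim=1)
        y = y * self.spatial_attn(spatial)
        return x + self.project(y)  # residual connection keeps the block lightweight


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    print(AIRBlock(32)(x).shape)       # torch.Size([1, 32, 64, 64])
    print(DPDFBlock(32, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

In this reading, AIR reweights expanded features along channels and spatial locations before projecting back, while DPDF replaces a plain strided downsample with a learned blend of max and average pooling so small, high-contrast fire regions are not averaged away.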