Omnidirectional Image Quality Assessment with TransVGG and Fused Saliency Guidance

Authors: Xican Tan, Jing Yu, Keke Tong, Shengfeng Lou, Jingsong Meng, Chuang Ma, and Wenzhi Chen
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 411-422
Keywords: Omnidirectional Image Quality Assessment, Parallelized Channel-and-spatial Attention Mechanism, TransVGG, Fused Saliency Guidance

Abstract

Most existing omnidirectional image quality assessment (OIQA) models focus on locally salient regions within viewports and neglect the critical guiding role of global saliency in holistic quality evaluation, which limits their performance on complex images. To tackle this, we propose a TransVGG-based OIQA framework guided by fused saliency maps. First, the SalBiNet360 network is employed to generate fused saliency maps that combine local and global saliency information, simulating human viewing behavior during omnidirectional image observation. A collaborative architecture integrating Swin Transformer and VGG is then designed to synergistically extract global and local features, thereby resolving the insufficiency of diverse guidance information. To enhance long-sequence processing, the Mamba model is utilized for efficient omnidirectional image comprehension. Finally, a parallel hybrid attention mechanism is introduced to retrieve semantic features from the saliency features and guide the global understanding module. Experiments on two OIQA datasets demonstrate that the proposed model outperforms state-of-the-art methods.
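To illustrate the parallelized channel-and-spatial attention idea named in the keywords, the following is a minimal PyTorch sketch in which a channel branch and a spatial branch modulate the same feature map in parallel and their outputs are summed. This is not the authors' implementation; the class name, reduction ratio, kernel size, and fusion by addition are all illustrative assumptions.

```python
# Minimal sketch of a parallel channel-and-spatial attention block.
# All module names and hyperparameters are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


class ParallelChannelSpatialAttention(nn.Module):
    """Applies channel and spatial attention branches in parallel and fuses them."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel branch: global average pooling followed by a bottleneck MLP.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: channel-wise avg/max pooling followed by a 7x7 conv.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention weights, shape (B, C, 1, 1).
        ca = self.channel_mlp(x)
        # Spatial attention weights, shape (B, 1, H, W).
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        sa = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        # Parallel fusion: both attention maps reweight the same input.
        return x * ca + x * sa


if __name__ == "__main__":
    feats = torch.randn(2, 256, 32, 32)  # e.g., a saliency-guided feature map
    attn = ParallelChannelSpatialAttention(channels=256)
    print(attn(feats).shape)  # torch.Size([2, 256, 32, 32])
```

Compared with a sequential channel-then-spatial design, the parallel arrangement lets both branches attend to the unmodified input, which is one plausible reading of the "parallelized" attention described in the abstract.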