Unsupervised Attention-Based Generative Adversarial Network for Remote Sensing Image Fusion

Authors: Quanli Wang, Qian Jiang, Yuting Feng, Shengfa Miao, Huangqimei Zheng, and Xin Jin
Conference: ICIC 2024 Posters, Tianjin, China, August 5-8, 2024
Pages: 563-580
Keywords: Image Fusion, Generative Adversarial Networks (GAN), Unsupervised Methods, Remote Sensing Images

Abstract

Remote sensing image fusion combines a single-band panchromatic (PAN) image with a multi-spectral (MS) image to generate a high-quality fused image, a process also known as pan-sharpening. Most current remote sensing image fusion methods are supervised: they use proportionally down-sampled versions of the original multi-spectral images as training inputs and the original multi-spectral images as labels. This mismatch leads to poor model performance on full-resolution images, so unsupervised methods are more practical. Furthermore, most methods do not account for the differences between MS and PAN images and extract features with identical modules, which causes information loss. We therefore design an unsupervised attention-based generative adversarial network fusion framework (UAB-GAN) that can be trained directly on datasets of unlabeled images. Specifically, the framework consists of one generator and two discriminators. The generator employs separately designed network modules to extract modality-specific features from the PAN and MS images, respectively. The two discriminators are designed to preserve the spectral and spatial information of the corresponding inputs. Additionally, we propose a unified loss function that integrates multi-scale spectral and spatial features without external data supervision. Experiments on multiple datasets demonstrate the effectiveness of the proposed method.
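The key idea of training without label images can be illustrated with a minimal sketch: the fused output is scored against its own inputs rather than a ground-truth fused image, with a spectral term tied to the (upsampled) MS image and a spatial term tied to the PAN image. Note this is an illustrative NumPy sketch under assumed definitions (L1 spectral distance, finite-difference gradients, weights `alpha`/`beta`), not the paper's actual UAB-GAN loss, which also involves the adversarial discriminator terms and multi-scale features.

```python
import numpy as np

def spectral_loss(fused, ms_up):
    # Spectral consistency: L1 distance between the fused image and the
    # upsampled MS input (both H x W x C). No high-resolution label needed.
    return np.mean(np.abs(fused - ms_up))

def gradient_map(img):
    # Simple finite-difference gradient magnitude of a 2-D image.
    gx = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    gy = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    return gx + gy

def spatial_loss(fused, pan):
    # Spatial consistency: compare gradients of the fused intensity
    # (mean over bands) with gradients of the single-band PAN image.
    intensity = fused.mean(axis=-1)
    return np.mean(np.abs(gradient_map(intensity) - gradient_map(pan)))

def unsupervised_fusion_loss(fused, ms_up, pan, alpha=1.0, beta=1.0):
    # Unified loss: weighted sum of spectral and spatial terms,
    # computed entirely from the unlabeled inputs.
    return alpha * spectral_loss(fused, ms_up) + beta * spatial_loss(fused, pan)
```

If the fused image exactly matches the MS spectra and the PAN gradients, both terms vanish, which is the behavior an unsupervised objective of this kind is designed to reward.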