MT-Net: A heterogeneous image matching method based on modality transformation
Authors:
Min Nuo, Fan Wang, Xinrong Wu, Xueqi Cheng, and Xiaopeng Hu
Conference:
ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages:
2811-2822
Keywords:
Heterogeneous Images, Image Matching, Style Transfer, End-to-end Learning.
Abstract
This paper addresses the challenges of heterogeneous image matching, where the matching accuracy of heterogeneous image pairs is lower than that of homogeneous pairs. Existing methods have attempted adaptive improvements, but accuracy remains limited because heterogeneous image pairs differ significantly owing to their distinct imaging mechanisms. To address this issue, we propose an end-to-end hybrid framework that employs modality transformation for heterogeneous image matching. First, a modality transformation method based on style transfer converts heterogeneous image pairs into pseudo-homologous image pairs. Second, multiscale and multilevel discriminative features are extracted from the pseudo-homologous pairs to enhance the repeatability and discriminability of keypoints. Third, a unified matching loss optimizes the generation of pseudo-homologous images, improving the performance of the modality transformation module and of the network as a whole. Experiments show that the proposed MT-Net improves the mean matching result by 0.9–3.5.
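At a high level, the unified matching loss described above lets the matching objective's gradient also train the modality-transformation module. The following is a purely illustrative NumPy toy, not the authors' implementation: the linear map `W` (a stand-in for the style-transfer module), the synthetic descriptors, and the squared-distance matching loss are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "descriptors": 5 corresponding keypoints with 8-D features.
# The target modality is modeled here as a fixed linear distortion of the source.
n_kp, dim = 5, 8
src = rng.normal(size=(n_kp, dim))
true_map = 0.3 * rng.normal(size=(dim, dim)) + np.eye(dim)
tgt = src @ true_map  # the same keypoints seen in the other modality

# Learnable modality-transformation matrix (stand-in for the transfer network).
W = np.eye(dim)

def unified_matching_loss(W):
    pseudo = src @ W  # pseudo-homologous descriptors
    return np.mean(np.sum((pseudo - tgt) ** 2, axis=1))

loss0 = unified_matching_loss(W)
lr = 0.05
for _ in range(200):
    pseudo = src @ W
    # Analytic gradient of the mean squared matching loss w.r.t. W:
    # the matching objective directly updates the transformation module.
    grad = 2.0 * src.T @ (pseudo - tgt) / n_kp
    W -= lr * grad
loss1 = unified_matching_loss(W)
print(loss0, loss1)
```

The point of the sketch is only the coupling: because the matching loss is differentiated through the transformation `W`, minimizing it improves the modality transformation itself, which is the end-to-end behavior the abstract attributes to the unified loss.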
BibTeX Citation:
@inproceedings{ICIC2025,
author = {Min Nuo and Fan Wang and Xinrong Wu and Xueqi Cheng and Xiaopeng Hu},
title = {{MT-Net}: A heterogeneous image matching method based on modality transformation},
booktitle = {Proceedings of the 21st International Conference on Intelligent Computing (ICIC 2025)},
month = jul,
year = {2025},
address = {Ningbo, China},
pages = {2811--2822},
}