Unsupervised Local Editing of Ocular Images via Reverse-Attention Block

Authors: Renzhong Wu, Shenghui Liao, Jianfeng Li, Lihong Liu, Xiaoyan Kui, and Yongrong Ji
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 205-216
Keywords: Image-to-image Translation, Reverse-attention Consistency Loss, Unsupervised Learning, Generative Adversarial Networks.

Abstract

Predicting the postoperative appearance after strabismus surgery is of great significance for improving patients' understanding of postoperative outcomes, enhancing communication between doctors and patients, and alleviating preoperative anxiety. Although some researchers have used image generation models to predict postoperative appearances for various diseases, existing methods typically rely on paired data for model training. Generative Adversarial Networks (GANs) have demonstrated strong application potential in image generation tasks, and cycle consistency loss has promoted the development of unsupervised image generation techniques. However, traditional cycle consistency loss often results in the retention of unnecessary traces from the source image in the generated images. To address these issues, we propose an unsupervised image generation model based on GANs. By incorporating a reverse-attention block into the generator, the model is guided to focus on key editing regions. Additionally, we employ a reverse-attention consistency loss to maintain identity consistency while reducing unnecessary trace residues. Furthermore, we introduce a multi-scale discriminator to ensure that the generated images have more reasonable texture details. Experimental results demonstrate that our model effectively reduces trace residues in the generated postoperative images and produces details that are more consistent with reality.
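To make the reverse-attention idea concrete, the following is a minimal NumPy sketch of the general pattern such methods use: an attention mask highlights the region to edit, its reverse (1 − mask) preserves the source elsewhere, and a consistency loss penalizes changes outside the attended region. All function names here are hypothetical, and the paper's actual block and loss may be formulated differently.

```python
import numpy as np

def reverse_attention_compose(x, g_out, mask):
    """Blend generator output into the source image using an attention mask.

    mask is in [0, 1]: ~1 in regions to edit, ~0 elsewhere. The reverse
    mask (1 - mask) passes the untouched source content through.
    (Illustrative formulation; the paper's block may differ.)
    """
    return mask * g_out + (1.0 - mask) * x

def reverse_attention_consistency_loss(x, y, mask):
    """L1 penalty on changes outside the attended (edited) region,
    discouraging residual traces of the edit elsewhere in the image."""
    return float(np.mean(np.abs((1.0 - mask) * (y - x))))

# Toy example: edit only the top half of a 4x4 "image".
x = np.zeros((4, 4))
g_out = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[:2, :] = 1.0

y = reverse_attention_compose(x, g_out, mask)
loss = reverse_attention_consistency_loss(x, y, mask)
```

In this toy case the edit is perfectly confined to the masked region, so the consistency loss is zero; a generator leaking changes into the unmasked area would be penalized.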