Coordinated Attacks through Graph Autoencoder in Federated Learning

Authors: Shuoxiang Wang, Bencan Gong, and Youlin Huang
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 675-692
Keywords: Federated Learning, Collaborative Attack, Graph Autoencoder, Dual-mode Attack

Abstract

Federated Learning (FL) facilitates collaborative model training across distributed clients while preserving data privacy through decentralized computation. However, architectural limitations render FL susceptible to adversarial attacks. In current attack methodologies, the absence of collaboration among attackers makes them more susceptible to detection. This paper proposes CAGA (Collaborative Attack via Graph Autoencoder), an innovative model poisoning framework. Attackers exploit graph-structured correlations among benign local models to infer the training data characteristics of the target model and subsequently adversarially reconstruct these correlations, aiming to significantly degrade the performance of the global FL model through crafted malicious updates. Unlike conventional attack methodologies, CAGA leverages pre-trusted malicious users embedded within benign user groups to execute dual-mode attacks that combine explicit adversarial actions with implicit exploitation of internal user privileges. The experimental results demonstrate that the proposed CAGA attack is highly aggressive and difficult to detect, outperforming the existing GAE attack in terms of both aggressiveness and stealth.
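The paper's own implementation is not reproduced here. As a rough, hedged illustration of the general idea of graph-autoencoder-based poisoning described in the abstract, the sketch below builds a cosine-similarity graph over benign client updates, fits a small graph autoencoder to reconstruct it, and then crafts a malicious update against the inferred consensus direction. It assumes PyTorch; the names `build_similarity_graph`, `GraphAutoencoder`, and `craft_malicious_update`, the loss choices, and the inversion heuristic are all hypothetical and not taken from the paper.

```python
# Minimal illustrative sketch of GAE-style poisoning (NOT the authors' CAGA method).
# Assumes PyTorch; all names and heuristics here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_similarity_graph(updates: torch.Tensor) -> torch.Tensor:
    """Normalized adjacency from pairwise cosine similarity of client updates (n_clients x dim)."""
    sims = F.cosine_similarity(updates.unsqueeze(1), updates.unsqueeze(0), dim=-1)
    adj = sims.clamp(min=0.0)                       # keep only positive correlations
    adj.fill_diagonal_(1.0)
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)  # symmetric normalization


class GraphAutoencoder(nn.Module):
    """Two-layer GCN-style encoder with an inner-product decoder over node embeddings."""
    def __init__(self, in_dim: int, hid_dim: int = 64, lat_dim: int = 16):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, lat_dim, bias=False)

    def encode(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj @ self.w1(x))
        return adj @ self.w2(h)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        z = self.encode(x, adj)
        return torch.sigmoid(z @ z.t()), z          # reconstructed adjacency, node embeddings


def craft_malicious_update(benign_updates: torch.Tensor,
                           epochs: int = 200, scale: float = 3.0) -> torch.Tensor:
    """Fit the GAE to benign correlations, then push against the inferred consensus direction."""
    adj = build_similarity_graph(benign_updates)
    gae = GraphAutoencoder(benign_updates.shape[1])
    opt = torch.optim.Adam(gae.parameters(), lr=1e-2)
    target = (adj > 0).float()                      # simplistic reconstruction target
    for _ in range(epochs):
        recon, _ = gae(benign_updates, adj)
        loss = F.binary_cross_entropy(recon, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        recon, _ = gae(benign_updates, adj)
        weights = recon.mean(dim=0)                 # how "central" each client appears in the graph
        consensus = (weights.unsqueeze(1) * benign_updates).sum(0) / weights.sum()
    # One simple poisoning heuristic: an inverted, scaled consensus update.
    return -scale * consensus


if __name__ == "__main__":
    torch.manual_seed(0)
    benign = torch.randn(10, 128)                   # 10 simulated benign client updates
    poisoned = craft_malicious_update(benign)
    print("malicious update norm:", poisoned.norm().item())
```

In this toy version, the graph autoencoder only serves to weight clients by how "central" they look in the reconstructed correlation graph before inverting the consensus; the actual CAGA framework additionally coordinates pre-trusted malicious insiders and balances attack strength against detectability, which is beyond this sketch.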