Membership Inference Attacks for Generative Model-Based One-Shot Federated Learning

Authors: Yiwen Wang, Fan Qi, and Zixin Zhang
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 1199-1215
Keywords: Membership Inference Attacks, One-Shot Federated Learning, Generative Model

Abstract

In recent years, One-Shot Federated Learning (OSFL) has gained significant attention for its communication efficiency. With the rise of generative models, many approaches leverage synthetic data on the server to improve global model performance. However, this efficiency introduces heightened privacy risks that remain largely unexplored. In this paper, we conduct the first systematic exploration of privacy risks in OSFL by designing a Membership Inference Attack (MIA) strategy tailored to this paradigm. Our strategy first introduces a general approach applicable to any generative model, which infers membership by aligning query data with the global client distribution. Building on this, we extend the approach specifically for diffusion models, integrating global alignment with query-specific fine-grained details through fine-tuning and conditional generation, thereby enabling more robust inference. Notably, our strategy does not rely on auxiliary data, making it especially relevant for privacy-sensitive OSFL settings. Extensive experiments validate the effectiveness of the proposed strategy, highlighting the critical privacy risks posed by generative models in OSFL.
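The abstract does not disclose implementation details, but the core idea of inferring membership by aligning a query with the global distribution captured by the server-side generative model can be illustrated with a toy sketch. The function names (`membership_score`, `infer_membership`), the feature-space k-nearest-neighbor distance heuristic, and the calibrated threshold below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np


def membership_score(query_feat: np.ndarray, synth_feats: np.ndarray, k: int = 5) -> float:
    """Toy alignment score: negative mean distance from the query to its k
    nearest synthetic samples in feature space. A higher score means the query
    lies closer to the distribution the generative model learned from client
    data, which (under this heuristic) hints at membership."""
    dists = np.linalg.norm(synth_feats - query_feat, axis=1)
    knn = np.sort(dists)[:k]
    return float(-knn.mean())


def infer_membership(query_feat: np.ndarray, synth_feats: np.ndarray,
                     threshold: float, k: int = 5) -> bool:
    """Predict 'member' when the query aligns closely enough with the synthetic
    (global) distribution; the threshold would be calibrated by the attacker."""
    return membership_score(query_feat, synth_feats, k) >= threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic = rng.normal(0.0, 1.0, size=(1000, 64))   # stand-in for generated samples' features
    member_query = rng.normal(0.0, 1.0, size=64)         # drawn from the same distribution
    non_member_query = rng.normal(3.0, 1.0, size=64)      # drawn from a shifted distribution
    print(infer_membership(member_query, synthetic, threshold=-6.0))
    print(infer_membership(non_member_query, synthetic, threshold=-6.0))
```

This sketch only captures the distribution-alignment intuition for a generic generative model; the paper's diffusion-specific extension (fine-tuning and conditional generation for query-specific detail) is not represented here.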