K-Aster: A novel Membership Inference Attack via Prediction Sensitivity

Authors: Ruoyi Li, Xinlong Zhao, Deshun Li and Yuyin Tan
Conference: ICIC 2024 Posters, Tianjin, China, August 5-8, 2024
Pages: 781-792
Keywords: Machine Learning, Membership Inference Attack, Aster, Prediction Sensitivity.

Abstract

Membership Inference Attacks (MIA) are considered a fundamental privacy risk in Machine Learning (ML); they attempt to determine whether a specific data sample was used as training data for a target model. However, the recently proposed Aster only reports precision and recall for the member class, without reporting the false alarm rate (FAR) for the non-member class or the performance of the target models. Additionally, Aster with Jacobian matrices requires the target model to output a vector of prediction probabilities, which can be easily defended against when the model outputs only labels. In this paper, we propose a novel MIA method, K-Aster, which needs only the output labels and partial training data of the target model to determine whether data samples were used to train a given ML model. We obtain different output labels from the target model by data augmentation. We then extract features from these labels, fit a line, and quantify the prediction sensitivity with its slope. Finally, we regard samples with lower sensitivity as training data. Experimental results of attacks on Automatic Speech Recognition (ASR) systems show that our method is an important extension of Aster, achieving low FAR and high attack precision on non-classification tasks. The source code is available at https://github.com/13053676954/K-Aster.
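The abstract only outlines the attack pipeline (augment the input, collect output labels, fit a line over extracted features, and use the slope as a sensitivity score). The Python sketch below illustrates one plausible reading of that pipeline; the augmentation function, the edit-distance feature, the set of strengths, and the threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def edit_distance(a, b):
    """Levenshtein distance between two label sequences (e.g. ASR transcripts)."""
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return int(dp[-1])

def prediction_sensitivity(sample, target_model, augment, strengths):
    """Label-only sensitivity score in the spirit of K-Aster (assumed design).

    `target_model(x)` is assumed to return only a predicted label/transcript;
    `augment(x, s)` is a hypothetical augmentation applied with strength `s`
    (e.g. added noise for audio).
    """
    base_label = target_model(sample)
    # Feature per augmentation strength: how far the augmented label drifts
    # from the original one (normalized edit distance).
    features = [
        edit_distance(base_label, target_model(augment(sample, s)))
        / max(len(base_label), 1)
        for s in strengths
    ]
    # Fit a line over (strength, feature); the slope quantifies how sensitive
    # the prediction is to perturbations of the input.
    slope, _ = np.polyfit(strengths, features, 1)
    return slope

def is_member(sample, target_model, augment, strengths, threshold):
    # Lower sensitivity (a flatter line) is taken as evidence that the sample
    # was part of the training data; `threshold` would be calibrated on the
    # partial training data available to the attacker.
    return prediction_sensitivity(sample, target_model, augment, strengths) < threshold
```

In this reading, members are expected to produce flatter lines because the model's outputs change less under small perturbations of inputs it has memorized, which matches the abstract's claim that samples with lower sensitivity are treated as training data.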