Membership Inference Attack against Differentially Private Deep Learning Model
Md Atiqur Rahman(a),(*), Tanzila Rahman(b), Robert Laganière(a), Noman Mohammed(b)
Transactions on Data Privacy 11:1 (2018) 61 - 79
(a) School of Electrical Engineering and Computer Science, University of Ottawa, ON, K1N 6N5, Canada.
(b) Department of Computer Science, University of Manitoba, MB, R3T2N2, Canada.
e-mail: mrahm021@uottawa.ca; rahmant4@myumanitoba.ca; laganier@eecs.uottawa.ca; noman@cs.umanitoba.ca
Abstract
The unprecedented success of deep learning is largely dependent on the availability of massive amounts of training data. In many cases, these data are crowd-sourced and may contain sensitive and confidential information, and therefore pose privacy concerns. As a result, privacy-preserving deep learning has been attracting increasing attention. One of the promising approaches for privacy-preserving deep learning is to employ differential privacy during model training, which aims to prevent the leakage of sensitive information about the training data via the trained model. While these models are considered to be immune to privacy attacks, with the advent of recent and sophisticated attack models, it is not clear how well these models trade off utility for privacy. In this paper, we systematically study the impact of a sophisticated machine-learning-based privacy attack, the membership inference attack, against a state-of-the-art differentially private deep model. More specifically, given a differentially private deep model with its associated utility, we investigate how much we can infer about the model's training data. Our experimental results show that differentially private deep models may keep their promise to provide privacy protection against strong adversaries only by offering poor model utility, while exhibiting moderate vulnerability to the membership inference attack when they offer an acceptable utility. For evaluating our experiments, we use the CIFAR-10 and MNIST datasets and the corresponding classification tasks.
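To illustrate the core idea behind the membership inference attack discussed in the abstract, the sketch below implements a simplified confidence-threshold variant: an overfitted model is more confident on its training points than on unseen points, so an adversary can flag high-confidence inputs as likely training members. This is a minimal illustration, not the shadow-model attack pipeline evaluated in the paper; the toy 1-nearest-neighbour "model", the points, and the threshold are all hypothetical choices for demonstration.

```python
import math

# Toy "model": a 1-nearest-neighbour scorer that memorizes its training
# set -- an extreme stand-in for an overfitted deep model. Membership
# inference exploits the resulting confidence gap between training
# members and non-members. (All names and values here are illustrative.)

def train(points):
    """The memorizing model simply stores its training points."""
    return list(points)

def confidence(model, x):
    """Confidence decays with distance to the nearest training point,
    so exact training members receive the maximum confidence of 1.0."""
    d = min(math.dist(x, p) for p in model)
    return math.exp(-d)

def infer_membership(model, x, threshold=0.99):
    """Attack rule: flag x as a training member when the model's
    confidence on x exceeds the (hypothetical) threshold."""
    return confidence(model, x) >= threshold

# Illustrative data: a few training members and some unseen points.
members = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
non_members = [(5.0, 5.0), (-3.0, 4.0)]
model = train(members)

# The attack separates members from non-members via the confidence gap.
print([infer_membership(model, x) for x in members])
print([infer_membership(model, x) for x in non_members])
```

A differentially private training procedure limits how much any single training point can shift the model, which narrows exactly the confidence gap this attack relies on; the paper measures how much utility must be sacrificed for that narrowing to be effective.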