Bayesian-based Anonymization Framework against Background Knowledge Attack in Continuous Data Publishing
Fatemeh Amiri(a),(*), Nasser Yazdani(b), Azadeh Shakery(b),(c), Shen-Shyang Ho(d)
Transactions on Data Privacy 12:3 (2019) 197 - 225
(a) Department of Computer Engineering and Information Technology, Hamedan University of Technology, Hamedan, Iran.
(b) School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
(c) School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran.
(d) Department of Computer Science, Rowan University, Glassboro, US.
e-mail: f.amiri@hut.ac.ir; yazdani@ut.ac.ir; shakery@ut.ac.ir; hos@rowan.edu
Abstract
In many real-world situations, data are updated and released over time. In each release, the attributes are fixed, but the number of records may vary and attribute values may be modified. Privacy can be compromised through information disclosure when different release versions of the data are combined. Preventing such disclosure becomes more difficult when the adversary possesses two kinds of background knowledge: correlations among sensitive attribute values over time, and compromised records. In this paper, we propose a Bayesian-based anonymization framework to protect against these kinds of background knowledge in a continuous data publishing setting. The proposed framework mimics the adversary's reasoning over continuous releases and estimates her posterior belief using a Bayesian approach. Moreover, we analyze the threat posed by compromised records in the current release and in subsequent ones. Experimental results on two datasets show that our proposed framework outperforms JS-reduce, the state-of-the-art approach for continuous data publishing, in terms of the adversary's information gain as well as data utility and privacy loss.
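As a rough illustration of the Bayesian reasoning sketched above (the paper gives the precise formulation), the adversary's posterior belief about a target's sensitive value after observing several releases can be written as a standard Bayes update; the notation here (s for a candidate sensitive value, D_1, ..., D_t for the published releases, B for background knowledge such as value correlations and compromised records) is illustrative and not necessarily the paper's own:

\[
\Pr(s \mid D_1,\dots,D_t, B) =
\frac{\Pr(D_1,\dots,D_t \mid s, B)\,\Pr(s \mid B)}
     {\sum_{s'} \Pr(D_1,\dots,D_t \mid s', B)\,\Pr(s' \mid B)}
\]

Under this view, the anonymization goal described in the abstract amounts to publishing releases so that this posterior stays close to the adversary's prior, i.e., so that her information gain remains small.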