P-Sensitive K-Anonymity with Generalization Constraints
Alina Campan(a),(*), Traian Marius Truta(a), Nicholas Cooper(a)
Transactions on Data Privacy 3:2 (2010) 65 - 89
(a) Department of Computer Science; Northern Kentucky University; Highland Heights; KY 41099; USA.
e-mail: campana1@nku.edu; trutat1@nku.edu; coopern1@nku.edu
Abstract
Numerous privacy models based on the k-anonymity property and extending the k-anonymity model have been introduced in recent years in data privacy research: l-diversity, p-sensitive k-anonymity, (α, k)-anonymity, t-closeness, etc. While differing in their methods and in the quality of their results, they all focus first on masking the data and only afterwards on protecting the quality of the data as a whole. We consider a new approach, in which requirements on the amount of distortion allowed on the initial data are imposed in order to preserve its usefulness. Our approach consists of specifying generalization constraints on the quasi-identifiers and achieving p-sensitive k-anonymity within the imposed constraints. We believe that limiting the amount of allowed generalization when masking microdata is indispensable for real-life datasets and applications. In this paper, the constrained p-sensitive k-anonymity model is introduced and an algorithm for generating constrained p-sensitive k-anonymous microdata is presented. Our experiments show that the proposed algorithm is comparable with existing algorithms for generating p-sensitive k-anonymous microdata with respect to result quality, and the masked microdata it produces complies with the user-specified generalization constraints.
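The p-sensitive k-anonymity property underlying the abstract can be sketched as a simple verification over a partition of the microdata into QI-groups: every group must contain at least k records, and the sensitive attribute must take at least p distinct values within each group. The sketch below is illustrative only (the function name and toy data are hypothetical, and it is not the authors' generation algorithm, which must also honor the generalization constraints):

```python
def is_p_sensitive_k_anonymous(groups, p, k):
    """Check p-sensitive k-anonymity on a partition of records.

    Each group is a list of (quasi_identifier_tuple, sensitive_value)
    records that share the same generalized quasi-identifier values.
    """
    for group in groups:
        if len(group) < k:
            return False  # group too small: violates k-anonymity
        distinct_sensitive = {sensitive for _, sensitive in group}
        if len(distinct_sensitive) < p:
            return False  # too few sensitive values: violates p-sensitivity
    return True

# Toy microdata: two QI-groups of 3 records each, with generalized
# (age, gender) quasi-identifiers and a disease sensitive attribute.
groups = [
    [(("3*", "M"), "flu"), (("3*", "M"), "cancer"), (("3*", "M"), "flu")],
    [(("4*", "F"), "flu"), (("4*", "F"), "hepatitis"), (("4*", "F"), "cancer")],
]
print(is_p_sensitive_k_anonymous(groups, p=2, k=3))  # → True
print(is_p_sensitive_k_anonymous(groups, p=3, k=3))  # → False: first group has only 2 distinct diseases
```

The check is intentionally separate from the masking step: the paper's contribution is generating a partition that passes this test while generalizing each quasi-identifier no further than its user-specified constraint allows.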