Tags: $\varepsilon$-Differential Privacy, $k$-Anonymity, $Pk$-Anonymity, Probabilistic Anonymization, Record-Independence
Abstract:
This paper explores the relationship between two privacy protection measures: $Pk$-anonymity and $\varepsilon$-differential privacy. $Pk$-anonymity and $\varepsilon$-differential privacy were proposed by Ikarashi et al. and Dwork et al., respectively, and are independent privacy measures. Previous research has examined the relationship between $k$-anonymity and $(\beta, \varepsilon, \delta)$-differential privacy under sampling; more precisely, it has shown that a $k$-anonymization algorithm can satisfy $(\beta, \varepsilon, \delta)$-differential privacy under sampling within a range of parameters. However, while $k$-anonymity is a stronger notion than $Pk$-anonymity, $(\beta, \varepsilon, \delta)$-differential privacy under sampling is a weaker notion than $\varepsilon$-differential privacy. We introduce a property of anonymization algorithms, called record-independence, in which the processing of one record is not affected by the values of other records, and show that a record-independent $Pk$-anonymization algorithm can satisfy $\varepsilon$-differential privacy within a range of parameters. Since $k$-anonymity implies $Pk$-anonymity, $k$-anonymity also satisfies $\varepsilon$-differential privacy under this condition. This means that an algorithm satisfying a strong notion in one privacy measure can also satisfy a strong notion in the other. Numerical experiments are then performed to illustrate the relations among the parameters of $Pk$-anonymity and $\varepsilon$-differential privacy.
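As a brief recap (with generic symbols $M$, $D$, $D'$, and $S$ rather than the paper's own notation), the standard notion of Dwork et al. requires that a randomized mechanism $M$ satisfy, for every pair of neighboring databases $D$ and $D'$ differing in a single record and every set $S$ of possible outputs,
\[
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].
\]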