DP-EM: Differentially Private Expectation Maximization
| Authors | |
|---|---|
| Publication date | 2017 |
| Journal | Proceedings of Machine Learning Research |
| Event | Conference on Artificial Intelligence and Statistics 2017 |
| Volume | 54 |
| Issue number | |
| Pages (from-to) | 896-904 |
| Organisations | |
| Abstract | The iterative nature of the expectation maximization (EM) algorithm presents a challenge for privacy-preserving estimation, as each iteration increases the amount of noise needed. We propose a practical private EM algorithm that overcomes this challenge using two innovations: (1) a novel moment perturbation formulation for differentially private EM (DP-EM), and (2) the use of two recently developed composition methods to bound the privacy "cost" of multiple EM iterations: the moments accountant (MA) and zero-mean concentrated differential privacy (zCDP). Both MA and zCDP bound the moment generating function of the privacy loss random variable and achieve a refined tail bound, which effectively decreases the amount of additive noise. We present empirical results showing the benefits of our approach, as well as similar performance between these two composition methods in the DP-EM setting for Gaussian mixture models. Our approach can be readily extended to many iterative learning algorithms, opening up various exciting future directions. |
| Document type | Article |
| Note | Artificial Intelligence and Statistics, 20-22 April 2017, Fort Lauderdale, FL, USA. - With supplementary file. |
| Language | English |
| Published at | http://proceedings.mlr.press/v54/park17c.html |
| Downloads | park17c (Final published version) |
| Supplementary materials | |
| Permalink to this page | |
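The two ideas named in the abstract, perturbing the moments (sufficient statistics) computed in the E-step, and accounting for the total privacy loss across iterations with zCDP, can be sketched as follows. This is a minimal illustration under simplifying assumptions: a spherical GMM, a shared noise scale for all statistics, and the standard rho-zCDP to (epsilon, delta)-DP conversion. The function names, the clamping of noisy counts, and the sensitivity handling are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np


def zcdp_sigma(eps, delta, num_iters, sensitivity):
    """Per-iteration Gaussian noise std so that `num_iters` compositions of
    the Gaussian mechanism satisfy (eps, delta)-DP under zCDP accounting.

    Uses: a Gaussian mechanism with sensitivity D and std sigma is
    rho-zCDP with rho = D^2 / (2 sigma^2); rho composes additively; and
    rho-zCDP implies (rho + 2*sqrt(rho * ln(1/delta)), delta)-DP.
    Solving eps = rho + 2*sqrt(rho * L) for rho with L = ln(1/delta)
    gives rho = (sqrt(L + eps) - sqrt(L))^2.
    """
    L = np.log(1.0 / delta)
    rho_total = (np.sqrt(L + eps) - np.sqrt(L)) ** 2
    rho_per_iter = rho_total / num_iters
    return sensitivity * np.sqrt(1.0 / (2.0 * rho_per_iter))


def dp_em_gmm_iteration(X, means, variances, weights, sigma, rng):
    """One moment-perturbation EM iteration for a spherical GMM:
    Gaussian noise is added to the E-step sufficient statistics
    (responsibility counts and weighted sums) before the M-step.
    Illustrative sketch only -- means-and-weights update, fixed variances.
    """
    n, d = X.shape
    k = len(weights)

    # E-step: responsibilities under the current parameters.
    resp = np.zeros((n, k))
    for j in range(k):
        diff = X - means[j]
        log_dens = -0.5 * np.sum(diff ** 2, axis=1) / variances[j] \
                   - 0.5 * d * np.log(2.0 * np.pi * variances[j])
        resp[:, j] = weights[j] * np.exp(log_dens)
    resp /= resp.sum(axis=1, keepdims=True)

    # Moment perturbation: noise the sufficient statistics, not the
    # per-example data, so one noise draw per statistic per iteration.
    noisy_counts = resp.sum(axis=0) + rng.normal(0.0, sigma, size=k)
    noisy_sums = resp.T @ X + rng.normal(0.0, sigma, size=(k, d))
    noisy_counts = np.maximum(noisy_counts, 1e-6)  # keep M-step well defined

    # M-step from the noisy moments.
    new_weights = noisy_counts / noisy_counts.sum()
    new_means = noisy_sums / noisy_counts[:, None]
    return new_means, new_weights
```

Calibrating `sigma` once via `zcdp_sigma(eps, delta, num_iters, sensitivity)` and reusing it every iteration is what makes the total privacy cost of the full EM run bounded by the chosen (eps, delta), rather than degrading linearly with basic composition.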