Differentially private learning of distributed deep models
Bernhard A. Moser
Title: Differentially private learning of distributed deep models
Booktitle: UMAP '20 Adjunct: Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization
This study presents an optimal differential-privacy framework for learning distributed deep models. The deep models, formed as nested compositions of mappings, are learned analytically in a private setting via a variational optimization methodology. An optimal (ε,δ)-differentially private noise-adding mechanism perturbs the data, and the resulting loss of utility is mitigated by a rule-based fuzzy system. A privacy wall separates private local data from globally shared data, and a fuzzy model robustly aggregates the local deep fuzzy models into the global model.
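To make the pipeline concrete, the sketch below illustrates one plausible instantiation of the two ingredients the abstract names: an (ε,δ)-differentially private noise-adding step applied to each client's locally learned parameters before they cross the privacy wall, followed by a server-side aggregation. The calibration uses the classic Gaussian mechanism (σ = Δ·√(2 ln(1.25/δ))/ε, valid for ε < 1), and plain averaging stands in for the paper's rule-based fuzzy aggregation; all function names and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_sigma(sensitivity, epsilon, delta):
    # Classic Gaussian-mechanism calibration for (epsilon, delta)-DP
    # (assumes epsilon < 1): sigma = Delta * sqrt(2 ln(1.25/delta)) / epsilon
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def privatize(params, sensitivity, epsilon, delta, rng):
    # Add calibrated Gaussian noise to a client's parameters before they
    # leave the "privacy wall" around the private local data.
    sigma = gaussian_sigma(sensitivity, epsilon, delta)
    return params + rng.normal(0.0, sigma, size=np.shape(params))

def aggregate(private_params_list):
    # Coordinate-wise mean as a simple stand-in for the paper's
    # fuzzy-model-based robust aggregation of local deep fuzzy models.
    return np.mean(private_params_list, axis=0)

rng = np.random.default_rng(0)
# Hypothetical local models from 5 clients, each with 8 parameters.
local_models = [rng.standard_normal(8) for _ in range(5)]
released = [privatize(p, sensitivity=1.0, epsilon=0.5, delta=1e-5, rng=rng)
            for p in local_models]
global_model = aggregate(released)
```

The added noise grows as ε shrinks, which is precisely the utility loss the paper's fuzzy system is designed to alleviate.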