Differentially private learning of distributed deep models

M. Kumar, M. Roßbory, B. A. Moser, B. Freudenthaler. Differentially private learning of distributed deep models. In UMAP '20 Adjunct: Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, pages 193-200, ACM, July 2020. DOI: https://doi.org/10.1145/3386392.3399562

Authors
  • Mohit Kumar
  • Michael Roßbory
  • Bernhard A. Moser
  • Bernhard Freudenthaler
Book: UMAP '20 Adjunct: Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization
Type: In conference proceedings
Publisher: ACM
DOI: https://doi.org/10.1145/3386392.3399562
ISBN: 978-1-4503-7950-2
Month: July
Year: 2020
Pages: 193-200
Abstract

This study presents an optimal differential privacy framework for the learning of distributed deep models. The deep models, consisting of a nested composition of mappings, are learned analytically in a private setting using a variational optimization methodology. An optimal (ε,δ)-differentially private noise-adding mechanism is used, and the effect of the added data noise on utility is alleviated using a rule-based fuzzy system. The private local data is separated from globally shared data through a privacy-wall, and a fuzzy model is used to robustly aggregate the local deep fuzzy models when building the global model.
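
To illustrate the general idea of releasing locally learned parameters across a privacy-wall under (ε,δ)-differential privacy, the sketch below uses the standard Gaussian mechanism for noise addition. This is only a generic illustration, not the optimal noise-adding mechanism derived in the paper; the function and variable names (gaussian_mechanism, local_params) are illustrative assumptions.

```python
import numpy as np

def gaussian_mechanism(values, sensitivity, epsilon, delta, rng=None):
    """Release `values` with (epsilon, delta)-differential privacy via the
    classical Gaussian mechanism (valid for 0 < epsilon < 1).

    `sensitivity` is the L2-sensitivity of the query producing `values`.
    """
    assert 0.0 < epsilon < 1.0 and 0.0 < delta < 1.0
    rng = np.random.default_rng() if rng is None else rng
    # Noise scale from the standard analysis:
    # sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return values + rng.normal(0.0, sigma, size=np.shape(values))

# Hypothetical usage: privatize local model parameters before they are
# shared with the aggregator across the privacy-wall.
local_params = np.array([0.42, -1.3, 0.07])
noisy_params = gaussian_mechanism(local_params, sensitivity=0.1,
                                  epsilon=0.5, delta=1e-5)
```

In the framework described by the abstract, the utility loss caused by such added noise would then be mitigated on the aggregation side, e.g. by the rule-based fuzzy system used to combine the local deep fuzzy models into the global model.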