# Deep Learning with Differential Privacy
Open Sourced By: Xin Pan
### Introduction for [dp_sgd/README.md](dp_sgd/README.md)
Machine learning techniques based on neural networks are achieving remarkable
results in a wide variety of domains. Often, the training of models requires
large, representative datasets, which may be crowdsourced and contain sensitive
information. The models should not expose private information in these datasets.
Addressing this goal, we develop new algorithmic techniques for learning and a
refined analysis of privacy costs within the framework of differential privacy.
Our implementation and experiments demonstrate that we can train deep neural
networks with non-convex objectives, under a modest privacy budget, and at a
manageable cost in software complexity, training efficiency, and model quality.
paper: https://arxiv.org/abs/1607.00133
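The central technique in the paper, DP-SGD, bounds each training example's influence by clipping per-example gradients to an L2 norm bound and adding Gaussian noise before the averaged update. Below is a minimal NumPy sketch of one such step; the hyperparameter names (`l2_norm_clip`, `noise_multiplier`) are illustrative, and the maintained implementation lives in the tensorflow/privacy repository linked above.

```python
# Minimal sketch of one DP-SGD step: clip each per-example gradient,
# add Gaussian noise calibrated to the clip norm, then average.
import numpy as np

def dp_sgd_step(params, per_example_grads, l2_norm_clip=1.0,
                noise_multiplier=1.1, learning_rate=0.1, rng=None):
    """One differentially private SGD update over a batch of gradients."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most l2_norm_clip.
        clipped.append(g * min(1.0, l2_norm_clip / max(norm, 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Gaussian noise with standard deviation noise_multiplier * l2_norm_clip.
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return params - learning_rate * noisy_mean

# Toy usage: gradients for a batch of 4 examples of a 3-parameter model.
params = np.zeros(3)
grads = [np.random.default_rng(i).standard_normal(3) for i in range(4)]
params = dp_sgd_step(params, grads)
```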
Most of the content from this directory has moved to the [tensorflow/privacy](https://github.com/tensorflow/privacy) repository, which is dedicated to learning with (differential) privacy. The remaining code is related to the PATE papers from ICLR 2017 and 2018.
### Introduction for [multiple_teachers/README.md](multiple_teachers/README.md)
...
...
private manner by noisily aggregating the teacher decisions before feeding them
to the student during training.
paper: https://arxiv.org/abs/1610.05755
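For illustration, here is a minimal sketch of the Laplace-based noisy aggregation described above, assuming the simple noisy-max mechanism from the ICLR 2017 paper: each teacher votes for a label, Laplace noise is added to the vote histogram, and the student only ever sees the noisy plurality. The `gamma` parameter name is illustrative; see the paper for the actual privacy analysis.

```python
# Minimal sketch of PATE noisy-max aggregation with Laplace noise.
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, gamma=0.05, rng=None):
    """Return the noisy plurality label for one student query.

    teacher_votes: array of per-teacher predicted labels for one input.
    gamma: inverse noise scale; larger gamma means less noise, less privacy.
    """
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Add Lap(1/gamma) noise to each class count before taking the argmax.
    counts += rng.laplace(0.0, 1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))

votes = np.array([3, 3, 7, 3, 1])  # five teachers' predictions
print(noisy_aggregate(votes, num_classes=10))
```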
### Introduction for [pate/README.md](pate/README.md)
Implementation of a Rényi differential privacy (RDP) accountant and smooth sensitivity analysis for the PATE framework. The underlying theory and supporting experiments appear in "Scalable Private Learning with PATE" by Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Ulfar Erlingsson (ICLR 2018).
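As a rough illustration of what an RDP accountant computes: a Gaussian mechanism with sensitivity 1 and noise scale σ satisfies (α, α/(2σ²))-RDP, RDP guarantees compose additively across queries, and an (ε, δ)-DP bound follows by minimizing ρ + log(1/δ)/(α−1) over orders α. The sketch below, with an illustrative order grid, shows only this data-independent baseline, not the data-dependent analysis or the actual API of the accountant in this directory.

```python
# Minimal sketch of data-independent RDP accounting for a Gaussian mechanism.
import math

def gaussian_rdp(sigma, alpha):
    """RDP of the Gaussian mechanism (sensitivity 1) at order alpha."""
    return alpha / (2.0 * sigma ** 2)

def rdp_to_dp(rdp_per_order, orders, delta):
    """Convert composed RDP values to the tightest (epsilon, delta) bound."""
    return min(r + math.log(1.0 / delta) / (a - 1.0)
               for r, a in zip(rdp_per_order, orders))

orders = [1.5, 2, 4, 8, 16, 32, 64]          # illustrative order grid
sigma, num_queries, delta = 40.0, 1000, 1e-5
# RDP composes additively across the student's queries.
composed = [num_queries * gaussian_rdp(sigma, a) for a in orders]
print("epsilon =", rdp_to_dp(composed, orders, delta))
```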