Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning

Document Type

Article - Abstract Only

Publication Date

2017

Keywords

Privacy, Training, Machine learning, Data privacy, Neurons, Biological neural networks, Computational modeling, Affine transformations, Laplace transforms, Learning (artificial intelligence), Neural nets, Adaptive Laplace mechanism, Differential privacy preservation, Deep learning, Privacy budget consumption, Training steps, Deep neural networks, Loss functions, Differential privacy, Laplace mechanism

Abstract

In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is totally independent of the number of training steps; (2) noise is adaptively injected into features based on each feature's contribution to the model output; and (3) the mechanism can be applied to a variety of deep neural networks. To achieve this, we perturb the affine transformations of neurons and the loss functions used in deep neural networks. In addition, our mechanism intentionally adds "more noise" to features that are "less relevant" to the model output, and vice-versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on the MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions.
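The full mechanism is not described on this abstract-only page, but the adaptive idea it summarizes, splitting the privacy budget across features so that less relevant features receive a smaller budget share and therefore more Laplace noise, can be sketched roughly as follows. This is a minimal illustration and not the authors' algorithm; the relevance scores, the proportional budget split, and the sensitivity value are assumptions made purely for the example.

```python
import numpy as np


def adaptive_laplace_noise(features, relevance, epsilon, sensitivity=1.0, rng=None):
    """Perturb a feature vector with Laplace noise allocated by relevance.

    Each feature j gets a share of the total budget epsilon proportional to
    its (assumed) relevance score, so less relevant features get a smaller
    epsilon_j and hence a larger Laplace scale b_j = sensitivity / epsilon_j.
    """
    rng = np.random.default_rng() if rng is None else rng
    relevance = np.asarray(relevance, dtype=float)
    # Normalize relevance into per-feature budget shares (assumed split).
    shares = relevance / relevance.sum()
    eps_per_feature = epsilon * shares
    # Laplace scale per feature: smaller budget share -> more noise.
    scales = sensitivity / eps_per_feature
    noise = rng.laplace(loc=0.0, scale=scales)
    return np.asarray(features, dtype=float) + noise


# Example: four features with hypothetical relevance scores; the second and
# fourth features are least relevant and so receive the most noise.
x = np.array([0.8, 0.1, 0.5, 0.3])
r = np.array([0.6, 0.1, 0.2, 0.1])
x_noisy = adaptive_laplace_noise(x, r, epsilon=1.0)
print(x_noisy)
```

In the paper's setting this kind of perturbation is applied to affine transformations of neurons and to the loss function rather than to raw inputs, so the sketch above only conveys the budget-allocation intuition, not where the noise is injected.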

Comments

Principal Investigator: Xintao Wu

Acknowledgements: This work is supported by NIH grant R01GM103309 to the SMASH project. Wu is also supported by NSF grants DGE-1523115 and IIS-1502273.

