Unverified commit 29ff2d42 authored by Neal Wu, committed by GitHub

Merge branch 'master' into clear_softmax_warning_mnist

parents 6e611ff4 7ef602be
 <font size=4><b>Deep Learning with Differential Privacy</b></font>
-Open Sourced By: Xin Pan (xpan@google.com, github: panyx0718)
+Open Sourced By: Xin Pan
 ### Introduction for [dp_sgd/README.md](dp_sgd/README.md)
......
@@ -3,7 +3,7 @@
 <b>Authors:</b>
 Oriol Vinyals (vinyals@google.com, github: OriolVinyals),
-Xin Pan (xpan@google.com, github: panyx0718)
+Xin Pan
 <b>Paper Authors:</b>
......
@@ -8,7 +8,7 @@ This is an implementation based on my understanding, with small
 variations. It doesn't necessarily represent the paper published
 by the original authors.
-Authors: Xin Pan (Github: panyx0718), Anelia Angelova
+Authors: Xin Pan, Anelia Angelova
 <b>Results:</b>
......
 <font size=4><b>Reproduced ResNet on CIFAR-10 and CIFAR-100 datasets.</b></font>
-contact: panyx0718 (xpan@google.com)
+Xin Pan
 <b>Dataset:</b>
......
@@ -2,7 +2,7 @@ Sequence-to-Sequence with Attention Model for Text Summarization.
 Authors:
-Xin Pan (xpan@google.com, github:panyx0718),
+Xin Pan,
 Peter Liu (peterjliu@google.com, github:peterjliu)
 <b>Introduction</b>
......
@@ -204,7 +204,7 @@ def inference(images):
     kernel = _variable_with_weight_decay('weights',
                                          shape=[5, 5, 3, 64],
                                          stddev=5e-2,
-                                         wd=0.0)
+                                         wd=None)
     conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
     biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
     pre_activation = tf.nn.bias_add(conv, biases)
@@ -223,7 +223,7 @@ def inference(images):
     kernel = _variable_with_weight_decay('weights',
                                          shape=[5, 5, 64, 64],
                                          stddev=5e-2,
-                                         wd=0.0)
+                                         wd=None)
     conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
     biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1))
     pre_activation = tf.nn.bias_add(conv, biases)
@@ -262,7 +262,7 @@ def inference(images):
   # and performs the softmax internally for efficiency.
   with tf.variable_scope('softmax_linear') as scope:
     weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
-                                          stddev=1/192.0, wd=None)
+                                          stddev=1/192.0, wd=None)
     biases = _variable_on_cpu('biases', [NUM_CLASSES],
                               tf.constant_initializer(0.0))
     softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
......
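The functional change in these hunks is `wd=0.0` → `wd=None` at three call sites. The distinction matters because the weight-decay helper only attaches an L2 penalty when `wd` is not `None`; passing `0.0` still builds a multiply op and adds a constant-zero term to the `'losses'` collection on every call. The helper's body is not part of this diff, so the following is a minimal sketch of the assumed guard, with `_variable_on_cpu` reconstructed from the tutorial's conventions:

```python
import tensorflow as tf

def _variable_on_cpu(name, shape, initializer):
  # Pin variables to host memory, as the CIFAR-10 tutorial helpers do.
  with tf.device('/cpu:0'):
    return tf.get_variable(name, shape, initializer=initializer,
                           dtype=tf.float32)

def _variable_with_weight_decay(name, shape, stddev, wd):
  # Truncated-normal initialization; the L2 penalty is attached only when
  # wd is not None, so wd=None skips the op entirely, while wd=0.0 would
  # still add a useless zero-valued loss to the 'losses' collection.
  var = _variable_on_cpu(
      name, shape,
      tf.truncated_normal_initializer(stddev=stddev, dtype=tf.float32))
  if wd is not None:
    weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
    tf.add_to_collection('losses', weight_decay)
  return var
```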
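The comment kept in the last hunk ("performs the softmax internally for efficiency") explains why `softmax_linear` emits raw logits: the softmax is folded into the cross-entropy op, which is more numerically stable than applying a softmax followed by a separate log-loss. A minimal sketch of that wiring, with placeholder shapes as assumptions:

```python
import tensorflow as tf

NUM_CLASSES = 10  # assumption; CIFAR-10 has ten classes
logits = tf.placeholder(tf.float32, [None, NUM_CLASSES])  # e.g. softmax_linear output
labels = tf.placeholder(tf.int64, [None])                 # integer class ids

# The op applies the softmax to the logits itself, so the model's last
# layer stays linear and no separate tf.nn.softmax is needed for training.
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits, name='cross_entropy_per_example')
loss = tf.reduce_mean(cross_entropy, name='cross_entropy')
```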