Unverified commit 29ff2d42, authored by Neal Wu, committed by GitHub

Merge branch 'master' into clear_softmax_warning_mnist

parents 6e611ff4 7ef602be
@@ ... @@
 <font size=4><b>Deep Learning with Differential Privacy</b></font>
-Open Sourced By: Xin Pan (xpan@google.com, github: panyx0718)
+Open Sourced By: Xin Pan
 ### Introduction for [dp_sgd/README.md](dp_sgd/README.md)
@@ -3,7 +3,7 @@
 <b>Authors:</b>
 Oriol Vinyals (vinyals@google.com, github: OriolVinyals),
-Xin Pan (xpan@google.com, github: panyx0718)
+Xin Pan
 <b>Paper Authors:</b>
@@ -8,7 +8,7 @@ This is an implementation based on my understanding, with small
 variations. It doesn't necessarily represents the paper published
 by the original authors.
-Authors: Xin Pan (Github: panyx0718), Anelia Angelova
+Authors: Xin Pan, Anelia Angelova
 <b>Results:</b>
@@ ... @@
 <font size=4><b>Reproduced ResNet on CIFAR-10 and CIFAR-100 dataset.</b></font>
-contact: panyx0718 (xpan@google.com)
+Xin Pan
 <b>Dataset:</b>
@@ -2,7 +2,7 @@ Sequence-to-Sequence with Attention Model for Text Summarization.
 Authors:
-Xin Pan (xpan@google.com, github:panyx0718),
+Xin Pan
 Peter Liu (peterjliu@google.com, github:peterjliu)
 <b>Introduction</b>
@@ -204,7 +204,7 @@ def inference(images):
     kernel = _variable_with_weight_decay('weights',
                                          shape=[5, 5, 3, 64],
                                          stddev=5e-2,
-                                         wd=0.0)
+                                         wd=None)
     conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
     biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
     pre_activation = tf.nn.bias_add(conv, biases)
@@ -223,7 +223,7 @@ def inference(images):
     kernel = _variable_with_weight_decay('weights',
                                          shape=[5, 5, 64, 64],
                                          stddev=5e-2,
-                                         wd=0.0)
+                                         wd=None)
     conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
     biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1))
     pre_activation = tf.nn.bias_add(conv, biases)
@@ -262,7 +262,7 @@ def inference(images):
   # and performs the softmax internally for efficiency.
   with tf.variable_scope('softmax_linear') as scope:
     weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
-                                          stddev=1/192.0, wd=0.0)
+                                          stddev=1/192.0, wd=None)
     biases = _variable_on_cpu('biases', [NUM_CLASSES],
                               tf.constant_initializer(0.0))
     softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
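
The three cifar10.py hunks above all make the same wd=0.0 to wd=None substitution, and the reason it matters is how the weight-decay helper treats its wd argument: with wd=0.0 the helper would still build a tf.nn.l2_loss op, scale it by zero, and register the dead op in the 'losses' collection, whereas None skips that branch entirely. Below is a minimal TF1-style sketch of such a helper pair; the names _variable_on_cpu and _variable_with_weight_decay come from the diff itself, but the bodies here are illustrative (docstrings and dtype handling in the real cifar10.py may differ).

import tensorflow as tf


def _variable_on_cpu(name, shape, initializer):
  # Pin variable creation to host memory, as the tutorial code does.
  with tf.device('/cpu:0'):
    return tf.get_variable(name, shape, initializer=initializer)


def _variable_with_weight_decay(name, shape, stddev, wd):
  # Truncated-normal-initialized variable, optionally with L2 weight decay.
  var = _variable_on_cpu(
      name, shape, tf.truncated_normal_initializer(stddev=stddev))
  if wd is not None:
    # Only a non-None wd registers a penalty; wd=0.0 would still create
    # a zero-valued loss op here, which is what the change above avoids.
    weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
    tf.add_to_collection('losses', weight_decay)
  return var

Numerically the two settings are equivalent, since a zero-scaled L2 term contributes nothing to the summed 'losses' collection; the switch to None is graph hygiene, removing meaningless ops rather than changing training behavior.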