ModelZoo / ResNet50_tensorflow / Commits

Unverified commit 29ff2d42, authored Jan 31, 2018 by Neal Wu, committed by GitHub on Jan 31, 2018.

    Merge branch 'master' into clear_softmax_warning_mnist

Parents: 6e611ff4, 7ef602be

Changes: 6 files changed, 8 additions (+8), 8 deletions (-8)
- research/differential_privacy/README.md (+1, -1)
- research/lm_1b/README.md (+1, -1)
- research/next_frame_prediction/README.md (+1, -1)
- research/resnet/README.md (+1, -1)
- research/textsum/README.md (+1, -1)
- tutorials/image/cifar10/cifar10.py (+3, -3)
research/differential_privacy/README.md (view file @ 29ff2d42)

 <font size=4><b>Deep Learning with Differential Privacy</b></font>
-Open Sourced By: Xin Pan (xpan@google.com, github: panyx0718)
+Open Sourced By: Xin Pan
 ### Introduction for [dp_sgd/README.md](dp_sgd/README.md)
 ...
research/lm_1b/README.md (view file @ 29ff2d42)

@@ -3,7 +3,7 @@
 <b>Authors:</b>
 Oriol Vinyals (vinyals@google.com, github: OriolVinyals),
-Xin Pan (xpan@google.com, github: panyx0718)
+Xin Pan
 <b>Paper Authors:</b>
 ...
research/next_frame_prediction/README.md (view file @ 29ff2d42)

@@ -8,7 +8,7 @@ This is an implementation based on my understanding, with small
 variations. It doesn't necessarily represents the paper published
 by the original authors.
-Authors: Xin Pan (Github: panyx0718), Anelia Angelova
+Authors: Xin Pan, Anelia Angelova
 <b>Results:</b>
 ...
research/resnet/README.md (view file @ 29ff2d42)

 <font size=4><b>Reproduced ResNet on CIFAR-10 and CIFAR-100 dataset.</b></font>
-contact: panyx0718 (xpan@google.com)
+Xin Pan
 <b>Dataset:</b>
 ...
research/textsum/README.md (view file @ 29ff2d42)

@@ -2,7 +2,7 @@ Sequence-to-Sequence with Attention Model for Text Summarization.
 Authors:
-Xin Pan (xpan@google.com, github:panyx0718),
+Xin Pan
 Peter Liu (peterjliu@google.com, github:peterjliu)
 <b>Introduction</b>
 ...
tutorials/image/cifar10/cifar10.py (view file @ 29ff2d42)

@@ -204,7 +204,7 @@ def inference(images):
     kernel = _variable_with_weight_decay('weights',
                                          shape=[5, 5, 3, 64],
                                          stddev=5e-2,
-                                         wd=0.0)
+                                         wd=None)
     conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
     biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
     pre_activation = tf.nn.bias_add(conv, biases)
@@ -223,7 +223,7 @@ def inference(images):
     kernel = _variable_with_weight_decay('weights',
                                          shape=[5, 5, 64, 64],
                                          stddev=5e-2,
-                                         wd=0.0)
+                                         wd=None)
     conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
     biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1))
     pre_activation = tf.nn.bias_add(conv, biases)
@@ -262,7 +262,7 @@ def inference(images):
   # and performs the softmax internally for efficiency.
   with tf.variable_scope('softmax_linear') as scope:
     weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
-                                          stddev=1/192.0, wd=0.0)
+                                          stddev=1/192.0, wd=None)
     biases = _variable_on_cpu('biases', [NUM_CLASSES],
                               tf.constant_initializer(0.0))
     softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name)
 ...
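The substantive change in this diff is `wd=0.0` becoming `wd=None`. In cifar10.py, `_variable_with_weight_decay` adds an L2 weight-decay term to the graph's loss collection whenever `wd is not None`, so `wd=0.0` still creates a zero-valued loss op while `wd=None` skips it entirely. A minimal pure-Python sketch of that control flow (the function name, shape handling, and the `losses` list are illustrative stand-ins, not the real TensorFlow helper):

```python
import random

def variable_with_weight_decay(n, stddev, wd, losses):
    """Sketch of the helper's wd handling. `losses` stands in for
    TensorFlow's 'losses' collection; the real code uses
    tf.nn.l2_loss and tf.add_to_collection."""
    var = [random.gauss(0.0, stddev) for _ in range(n)]
    if wd is not None:
        # wd=0.0 still records a (zero-valued) decay term;
        # wd=None records nothing at all.
        l2 = 0.5 * sum(v * v for v in var)
        losses.append(wd * l2)
    return var

losses = []
variable_with_weight_decay(4, 5e-2, 0.0, losses)   # appends one 0.0 term
variable_with_weight_decay(4, 5e-2, None, losses)  # appends nothing
```

Under that reading, the change is a cleanup: the three layers above never wanted weight decay, and `wd=None` expresses that without adding dead ops to the graph.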