ModelZoo / ResNet50_tensorflow · Commits

Commit 8b667903, authored Jan 19, 2017 by james mike dupont

    untie

parent f565b808
Showing 16 changed files with 17 additions and 17 deletions (+17 -17)
differential_privacy/dp_sgd/dp_optimizer/dp_pca.py        +1 -1
differential_privacy/multiple_teachers/analysis.py        +1 -1
differential_privacy/multiple_teachers/deep_cnn.py        +2 -2
differential_privacy/privacy_accountant/tf/accountant.py  +1 -1
im2txt/im2txt/data/build_mscoco_data.py                   +1 -1
inception/inception/inception_distributed_train.py        +1 -1
inception/inception/inception_eval.py                     +1 -1
inception/inception/inception_train.py                    +1 -1
inception/inception/slim/ops.py                           +1 -1
namignizer/data_utils.py                                  +1 -1
namignizer/names.py                                       +1 -1
neural_programmer/data_utils.py                           +1 -1
neural_programmer/model.py                                +1 -1
slim/deployment/model_deploy.py                           +1 -1
slim/nets/inception_resnet_v2.py                          +1 -1
slim/nets/inception_v4.py                                 +1 -1
differential_privacy/dp_sgd/dp_optimizer/dp_pca.py

@@ -27,7 +27,7 @@ def ComputeDPPrincipalProjection(data, projection_dims,
   Args:
     data: the input data, each row is a data vector.
     projection_dims: the projection dimension.
-    sanitizer: the sanitizer used for acheiving privacy.
+    sanitizer: the sanitizer used for achieving privacy.
     eps_delta: (eps, delta) pair.
     sigma: if not None, use noise sigma; otherwise compute it using
       eps_delta pair.
differential_privacy/multiple_teachers/analysis.py

@@ -287,7 +287,7 @@ def main(unused_argv):
   if min(eps_list_nm) == eps_list_nm[-1]:
     print "Warning: May not have used enough values of l"
-  # Data indpendent bound, as mechanism is
+  # Data independent bound, as mechanism is
   # 2*noise_eps DP.
   data_ind_log_mgf = np.array([0.0 for _ in l_list])
   data_ind_log_mgf += num_examples * np.array(
differential_privacy/multiple_teachers/deep_cnn.py

@@ -84,7 +84,7 @@ def inference(images, dropout=False):
   """Build the CNN model.
   Args:
     images: Images returned from distorted_inputs() or inputs().
-    dropout: Boolean controling whether to use dropout or not
+    dropout: Boolean controlling whether to use dropout or not
   Returns:
     Logits
   """
@@ -194,7 +194,7 @@ def inference_deeper(images, dropout=False):
   """Build a deeper CNN model.
   Args:
     images: Images returned from distorted_inputs() or inputs().
-    dropout: Boolean controling whether to use dropout or not
+    dropout: Boolean controlling whether to use dropout or not
   Returns:
     Logits
   """
differential_privacy/privacy_accountant/tf/accountant.py

@@ -152,7 +152,7 @@ class MomentsAccountant(object):
     We further assume that at each step, the mechanism operates on a random
     sample with sampling probability q = batch_size / total_examples. Then
       E[exp(L X)] = E[(Pr[M(D)==x / Pr[M(D')==x])^L]
-    By distinguishig n two cases of wether D < D' or D' < D, we have
+    By distinguishing two cases of whether D < D' or D' < D, we have
     that
       E[exp(L X)] <= max (I1, I2)
     where
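The docstring in the hunk above bounds the per-moment quantity E[exp(L X)]; in the moments-accountant framework these log moments alpha(lambda) add up across steps, and the final (epsilon, delta) guarantee follows from the tail bound delta <= exp(alpha(lambda) - lambda * epsilon). A minimal stdlib sketch of that last conversion, with illustrative (not repo-derived) moment values:

```python
import math

def eps_from_log_moments(log_moments, delta):
    """Smallest epsilon such that the mechanism is (epsilon, delta)-DP,
    given accumulated log moments alpha(lambda). Uses the standard tail
    bound delta <= exp(alpha(lambda) - lambda * eps), i.e.
    eps >= (alpha(lambda) + log(1/delta)) / lambda, minimized over lambda.

    log_moments: list of (lambda, alpha(lambda)) pairs; the values below
    are purely illustrative, not outputs of the accountant above.
    """
    return min((alpha + math.log(1.0 / delta)) / lam
               for lam, alpha in log_moments)

# Two illustrative moment orders; the one implying the smaller epsilon wins.
log_moments = [(2.0, 0.5), (4.0, 2.5)]
eps = eps_from_log_moments(log_moments, delta=1e-5)
```

Here lambda = 4 gives the tighter bound, which is why accountants track several moment orders at once.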
im2txt/im2txt/data/build_mscoco_data.py

@@ -424,7 +424,7 @@ def _load_and_process_metadata(captions_file, image_dir):
         (len(id_to_filename), captions_file))

  # Process the captions and combine the data into a list of ImageMetadata.
-  print("Proccessing captions.")
+  print("Processing captions.")
  image_metadata = []
  num_captions = 0
  for image_id, base_filename in id_to_filename:
inception/inception/inception_distributed_train.py

@@ -89,7 +89,7 @@ RMSPROP_EPSILON = 1.0  # Epsilon term for RMSProp.
 def train(target, dataset, cluster_spec):
   """Train Inception on a dataset for a number of steps."""
-  # Number of workers and parameter servers are infered from the workers and ps
+  # Number of workers and parameter servers are inferred from the workers and ps
   # hosts string.
   num_workers = len(cluster_spec.as_dict()['worker'])
   num_parameter_servers = len(cluster_spec.as_dict()['ps'])
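The corrected comment above describes inferring worker and parameter-server counts from the cluster spec. A tiny sketch of the same idea against a plain dict shaped like `tf.train.ClusterSpec.as_dict()` output (the hostnames are made up for illustration):

```python
# Hypothetical cluster layout, shaped like tf.train.ClusterSpec.as_dict():
# job name -> list of task addresses.
cluster = {
    'worker': ['worker0:2222', 'worker1:2222', 'worker2:2222'],
    'ps': ['ps0:2222'],
}

# Counts are inferred from the lengths of the job task lists, as in train().
num_workers = len(cluster['worker'])
num_parameter_servers = len(cluster['ps'])
```

This is why the comment stresses the hosts strings: the counts are never passed explicitly, only derived from them.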
inception/inception/inception_eval.py

@@ -77,7 +77,7 @@ def _eval_once(saver, summary_writer, top_1_op, top_5_op, summary_op):
      # /my-favorite-path/imagenet_train/model.ckpt-0,
      # extract global_step from it.
      global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
-      print('Succesfully loaded model from %s at step=%s.' %
+      print('Successfully loaded model from %s at step=%s.' %
            (ckpt.model_checkpoint_path, global_step))
    else:
      print('No checkpoint file found')
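The hunk above relies on the checkpoint naming convention `model.ckpt-<step>` noted in its comment. A standalone sketch of that two-stage split (the path string is the example from the comment, not a real file):

```python
# Hypothetical checkpoint path following the model.ckpt-<step> convention
# mentioned in the comment in _eval_once.
ckpt_path = '/my-favorite-path/imagenet_train/model.ckpt-0'

# Take the basename, then the text after the last '-': the global step.
global_step = ckpt_path.split('/')[-1].split('-')[-1]

print('Successfully loaded model from %s at step=%s.' % (ckpt_path, global_step))
```

Note the result is a string, which is fine for logging; training code that compares steps numerically would need an `int()` conversion.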
inception/inception/inception_train.py

@@ -290,7 +290,7 @@ def train(dataset):
    variable_averages = tf.train.ExponentialMovingAverage(
        inception.MOVING_AVERAGE_DECAY, global_step)
-    # Another possiblility is to use tf.slim.get_variables().
+    # Another possibility is to use tf.slim.get_variables().
    variables_to_average = (tf.trainable_variables() +
                            tf.moving_average_variables())
    variables_averages_op = variable_averages.apply(variables_to_average)
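`tf.train.ExponentialMovingAverage` in the hunk above keeps a shadow copy of each tracked variable and nudges it toward the current value each step. A plain-Python sketch of that update rule (not the TF API itself; the decay value is illustrative, standing in for inception.MOVING_AVERAGE_DECAY):

```python
def ema_update(shadow, value, decay):
    """One exponential-moving-average step, mirroring the rule
    tf.train.ExponentialMovingAverage applies per tracked variable:
    shadow <- decay * shadow + (1 - decay) * value."""
    return decay * shadow + (1 - decay) * value

# Track a value that jumps from 0.0 to 1.0; the shadow trails it smoothly.
shadow = 0.0
for _ in range(10):
    shadow = ema_update(shadow, 1.0, decay=0.9)
# shadow is now 1 - 0.9**10, roughly 0.651
```

The high decay values used in practice (e.g. 0.9999) make the shadow variables very slow-moving, which is why evaluation restores the averaged copies rather than the raw weights.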
inception/inception/slim/ops.py

@@ -15,7 +15,7 @@
 """Contains convenience wrappers for typical Neural Network TensorFlow layers.
 Additionally it maintains a collection with update_ops that need to be
-updated after the ops have been computed, for exmaple to update moving means
+updated after the ops have been computed, for example to update moving means
 and moving variances of batch_norm.
 Ops that have different behavior during training or eval have an is_training
namignizer/data_utils.py

@@ -58,7 +58,7 @@ def _letter_to_number(letter):
 def namignizer_iterator(names, counts, batch_size, num_steps, epoch_size):
   """Takes a list of names and counts like those output from read_names, and
   makes an iterator yielding a batch_size by num_steps array of random names
-  separated by an end of name token. The names are choosen randomly according
+  separated by an end of name token. The names are chosen randomly according
   to their counts. The batch may end mid-name
   Args:
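The docstring above says names are chosen randomly according to their counts. A hedged sketch of just that weighted sampling step, using the stdlib (the names, counts, and helper are illustrative, not the repo's implementation):

```python
import random

def sample_names(names, counts, n, seed=0):
    """Draw n names at random, weighted by their counts, matching the
    behaviour the namignizer_iterator docstring describes. Illustrative
    sketch only; the real iterator also concatenates names with an
    end-of-name token into batch_size x num_steps arrays."""
    rng = random.Random(seed)
    return rng.choices(names, weights=counts, k=n)

# Hypothetical (name, count) data in the shape read_names would produce.
names = ['mary', 'anna', 'emma']
counts = [7065, 2604, 2003]
batch = sample_names(names, counts, 5)
```

Seeding the generator makes the draw reproducible, which is handy when debugging an input pipeline like this one.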
namignizer/names.py

@@ -14,7 +14,7 @@
 """A library showing off sequence recognition and generation with the simple
 example of names.
-We use recurrent neural nets to learn complex functions able to recogize and
+We use recurrent neural nets to learn complex functions able to recognize and
 generate sequences of a given form. This can be used for natural language
 syntax recognition, dynamically generating maps or puzzles and of course
 baby name generation.
neural_programmer/data_utils.py  100755 → 100644

@@ -223,7 +223,7 @@ def list_join(a):
 def group_by_max(table, number):
-  #computes the most frequently occuring entry in a column
+  #computes the most frequently occurring entry in a column
  answer = []
  for i in range(len(table)):
    temp = []
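The comment fixed above says group_by_max computes the most frequently occurring entry in a column. A compact sketch of that idea with `collections.Counter` (illustrative only; the elided hunk body hides how the repo's version actually does it):

```python
from collections import Counter

def most_frequent_in_column(table, column):
    """Return the most frequently occurring entry in one column of a
    row-major table (a list of rows). Illustrative stand-in for the
    behaviour the group_by_max comment describes, not its code."""
    values = [row[column] for row in table]
    return Counter(values).most_common(1)[0][0]

table = [[1, 'a'], [2, 'b'], [1, 'b'], [1, 'c']]
result = most_frequent_in_column(table, 0)  # 1 occurs three times in column 0
```

`Counter.most_common(1)` does the frequency bookkeeping in one pass, which is usually clearer than hand-rolled nested loops over the table.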
neural_programmer/model.py  100755 → 100644

@@ -135,7 +135,7 @@ class Graph():
    #Attention on quetsion to decide the question number to passed to comparison ops
    def compute_ans(op_embedding, comparison):
      op_embedding = tf.expand_dims(op_embedding, 0)
-      #dot product of operation embedding with hidden state to the left of the number occurence
+      #dot product of operation embedding with hidden state to the left of the number occurrence
      first = tf.transpose(
          tf.matmul(op_embedding,
                    tf.transpose(
slim/deployment/model_deploy.py

@@ -306,7 +306,7 @@ def optimize_clones(clones, optimizer,
    regularization_losses = None
  # Compute the total_loss summing all the clones_losses.
  total_loss = tf.add_n(clones_losses, name='total_loss')
-  # Sum the gradients accross clones.
+  # Sum the gradients across clones.
  grads_and_vars = _sum_clones_gradients(grads_and_vars)
  return total_loss, grads_and_vars
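Summing gradients across clones, as the corrected comment puts it, means adding each variable's gradient element-wise over every model replica. A small sketch under that assumption (the data layout is simplified; `_sum_clones_gradients` in the repo operates on TF (grad, var) structures, not plain lists):

```python
def sum_clones_gradients(clone_grads):
    """Sum gradients across clones, variable by variable.

    clone_grads: a list with one entry per clone, each a list of
    (gradient, variable_name) pairs in the same variable order.
    Gradients are plain lists of floats here, purely for illustration.
    """
    summed = []
    for grad_and_vars in zip(*clone_grads):  # group same variable across clones
        grads = [g for g, _ in grad_and_vars]
        var_name = grad_and_vars[0][1]
        # Element-wise sum of this variable's gradient over all clones.
        total = [sum(parts) for parts in zip(*grads)]
        summed.append((total, var_name))
    return summed

clone0 = [([1.0, 2.0], 'w'), ([0.5], 'b')]
clone1 = [([3.0, 4.0], 'w'), ([1.5], 'b')]
summed = sum_clones_gradients([clone0, clone1])
# 'w' sums to [4.0, 6.0]; 'b' sums to [2.0]
```

Summing (rather than averaging) matches a total loss built with `tf.add_n` over clone losses: the gradient of a sum is the sum of gradients.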
slim/nets/inception_resnet_v2.py

@@ -191,7 +191,7 @@ def inception_resnet_v2(inputs, num_classes=1001, is_training=True,
      end_points['Mixed_6a'] = net
      net = slim.repeat(net, 20, block17, scale=0.10)
-      # Auxillary tower
+      # Auxiliary tower
      with tf.variable_scope('AuxLogits'):
        aux = slim.avg_pool2d(net, 5, stride=3, padding='VALID',
                              scope='Conv2d_1a_3x3')
slim/nets/inception_v4.py

@@ -269,7 +269,7 @@ def inception_v4(inputs, num_classes=1001, is_training=True,
    reuse: whether or not the network and its variables should be reused. To be
      able to reuse 'scope' must be given.
    scope: Optional variable_scope.
-    create_aux_logits: Whether to include the auxilliary logits.
+    create_aux_logits: Whether to include the auxiliary logits.
  Returns:
    logits: the logits outputs of the model.