Commit 12c0bcbb in ModelZoo/ResNet50_tensorflow
Authored Apr 05, 2017 by Konstantinos Bousmalis

    Merge branch 'master' of github.com:bousmalis/models

Parents: 0b8ee18f, 876a9325

Showing 8 changed files with 34 additions and 68 deletions (+34, -68)
README.md                                           +1  -1
domain_adaptation/README.md                         +12 -48
domain_adaptation/domain_separation/dsn.py          +6  -4
domain_adaptation/domain_separation/dsn_test.py     +1  -1
domain_adaptation/domain_separation/dsn_train.py    +1  -1
domain_adaptation/domain_separation/losses.py       +7  -7
domain_adaptation/domain_separation/models_test.py  +1  -1
domain_adaptation/domain_separation/utils.py        +5  -5
README.md

@@ -9,9 +9,9 @@ To propose a model for inclusion please submit a pull request.
 ## Models
 - [autoencoder](autoencoder): various autoencoders.
-- [domain_adaptation](domain_adaptation): Domain Separation Networks.
 - [compression](compression): compressing and decompressing images using a pre-trained Residual GRU network.
 - [differential_privacy](differential_privacy): privacy-preserving student models from multiple teachers.
+- [domain_adaptation](domain_adaptation): domain separation networks.
 - [im2txt](im2txt): image-to-text neural network for image captioning.
 - [inception](inception): deep convolutional networks for computer vision.
 - [learning_to_remember_rare_events](learning_to_remember_rare_events): a large-scale life-long memory module for use in deep learning.
...
domain_adaptation/domain_separation/README.md → domain_adaptation/README.md

-# Domain Seperation Networks
+# Domain Separation Networks

 ## Introduction
...
@@ -25,14 +25,20 @@ Twitter @bousmalis.
 In order to run the MNIST to MNIST-M experiments with DANNs and/or DANNs with
 domain separation (DSNs) you will need to set the directory you used to download
 MNIST and MNIST-M:\
+```
 $ export DSN_DATA_DIR=/your/dir
+```
 Then you need to build the binaries with Bazel:
+```
 $ bazel build -c opt domain_adaptation/domain_separation/...
+```
 You can then train with the following command:
+```
 $ ./bazel-bin/domain_adaptation/domain_separation/dsn_train \
 --similarity_loss=dann_loss \
 --basic_tower=dann_mnist \
...
@@ -46,55 +52,13 @@ $ ./bazel-bin/domain_adaptation/domain_separation/dsn_train \
 --master="" \
 --dataset_dir=${DSN_DATA_DIR} \
 -v --use_logging
+```
 Evaluation can be invoked with the following command:
-\
+```
 $ ./bazel-bin/domain_adaptation/domain_separation/dsn_eval \
 -v --dataset mnist_m --split test --num_examples=9001 \
 --dataset_dir=${DSN_DATA_DIR}
+```
-# Domain Seperation Networks
-## Introduction
-This code is the code used for the "Domain Separation Networks" paper
-by Bousmalis K., Trigeorgis G., et al. which was presented at NIPS 2016. The
-paper can be found here: https://arxiv.org/abs/1608.06019
-## Contact
-This code was open-sourced by Konstantinos Bousmalis (konstantinos@google.com, github:bousmalis)
-## Installation
-You will need to have the following installed on your machine before trying out the DSN code.
-* Tensorflow: https://www.tensorflow.org/install/
-* Bazel: https://bazel.build/
-## Running the code for adapting MNIST to MNIST-M
-In order to run the MNIST to MNIST-M experiments with DANNs and/or DANNs with
-domain separation (DSNs) you will need to set the directory you used to download
-MNIST and MNIST-M:\
-$ export DSN_DATA_DIR=/your/dir
-Then you need to build the binaries with Bazel:
-$ bazel build -c opt domain_adaptation/domain_separation/...
-Add models and models/slim to your $PYTHONPATH:
-$ export PYTHONPATH=$PYTHONPATH:$PWD/slim \
-$ export PYTHONPATH=$PYTHONPATH:$PWD
-You can then train with the following command:
-$ ./bazel-bin/domain_adaptation/domain_separation/dsn_train \
---similarity_loss=dann_loss \
---basic_tower=dann_mnist \
---source_dataset=mnist \
---target_dataset=mnist_m \
---learning_rate=0.0117249 \
---gamma_weight=0.251175 \
---weight_decay=1e-6 \
---layers_to_regularize=fc3 \
---nouse_separation \
---master="" \
---dataset_dir=${DSN_DATA_DIR} \
--v --use_logging
domain_adaptation/domain_separation/dsn.py

@@ -282,15 +282,17 @@ def add_autoencoders(source_data, source_shared, target_data, target_shared,
   # Add summaries
   source_reconstructions = tf.concat(
-      map(normalize_images, [
+      axis=2,
+      values=map(normalize_images, [
           source_data, source_recons, source_shared_recons,
           source_private_recons
-      ]), 2)
+      ]))
   target_reconstructions = tf.concat(
-      map(normalize_images, [
+      axis=2,
+      values=map(normalize_images, [
           target_data, target_recons, target_shared_recons,
           target_private_recons
-      ]), 2)
+      ]))
   tf.summary.image(
       'Source Images:Recons:RGB',
       source_reconstructions[:, :, :, :3],
...
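The dsn.py hunk switches tf.concat to explicit keyword arguments, which pins the argument roles across the TF 1.0 signature reordering (tf.concat(values, axis) replaced the older tf.concat(concat_dim, values)). Below is a minimal sketch of the pattern, assuming TensorFlow 1.x; the tensors are illustrative stand-ins for the reconstruction tensors in the diff. Note the surrounding code is Python 2, where map() returns a list; under Python 3 the call would need list(map(...)).

```
# Minimal sketch of the keyword-style tf.concat used above.
# Assumes TensorFlow 1.x; tensors are illustrative stand-ins.
import tensorflow as tf

source_data = tf.ones([4, 8, 8, 3])     # e.g. a batch of images
source_recons = tf.zeros([4, 8, 8, 3])  # e.g. their reconstructions

# Naming both arguments keeps the call unambiguous regardless of
# positional order, which is why the upgrade favors this form.
row = tf.concat(axis=2, values=[source_data, source_recons])

with tf.Session() as sess:
  print(sess.run(tf.shape(row)))  # [ 4  8 16  3]
```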
domain_adaptation/domain_separation/dsn_test.py

@@ -26,7 +26,7 @@ class HelperFunctionsTest(tf.test.TestCase):
     with self.test_session() as sess:
       # Test for when global_step < domain_separation_startpoint
       step = tf.contrib.slim.get_or_create_global_step()
-      sess.run(tf.initialize_all_variables())  # global_step = 0
+      sess.run(tf.global_variables_initializer())  # global_step = 0
      params = {'domain_separation_startpoint': 2}
       weight = dsn.dsn_loss_coefficient(params)
       weight_np = sess.run(weight)
...
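The only change in dsn_test.py is the TF 1.0 initializer rename. A minimal sketch of the replacement, assuming TensorFlow 1.x:

```
# Sketch of the initializer rename, assuming TensorFlow 1.x.
import tensorflow as tf

v = tf.Variable(tf.zeros([2, 2]))

with tf.Session() as sess:
  # tf.initialize_all_variables() was deprecated in TF 1.0;
  # tf.global_variables_initializer() returns the same kind of op,
  # which assigns every global variable its initial value.
  sess.run(tf.global_variables_initializer())
  print(sess.run(v))  # [[0. 0.], [0. 0.]]
```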
domain_adaptation/domain_separation/dsn_train.py

@@ -70,7 +70,7 @@ tf.app.flags.DEFINE_string('train_log_dir', '/tmp/da/',
 tf.app.flags.DEFINE_string(
     'layers_to_regularize', 'fc3',
-    'Comma-seperated list of layer names to use MMD regularization on.')
+    'Comma-separated list of layer names to use MMD regularization on.')

 tf.app.flags.DEFINE_float('learning_rate', .01, 'The learning rate')
...
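For context, this is how a tf.app.flags definition like the one above is typically consumed. A sketch assuming TensorFlow 1.x; the main function and the split are illustrative, not part of the diff:

```
# Sketch of consuming the flag defined above, assuming TensorFlow 1.x.
import tensorflow as tf

tf.app.flags.DEFINE_string(
    'layers_to_regularize', 'fc3',
    'Comma-separated list of layer names to use MMD regularization on.')
FLAGS = tf.app.flags.FLAGS


def main(_):
  # A comma-separated flag is split by the consumer,
  # e.g. --layers_to_regularize=fc3,fc4 yields two layer names.
  print(FLAGS.layers_to_regularize.split(','))


if __name__ == '__main__':
  tf.app.run()  # parses command-line flags, then calls main()
```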
domain_adaptation/domain_separation/losses.py

@@ -100,7 +100,7 @@ def mmd_loss(source_samples, target_samples, weight, scope=None):
   tag = 'MMD Loss'
   if scope:
     tag = scope + tag
-  tf.contrib.deprecated.scalar_summary(tag, loss_value)
+  tf.summary.scalar(tag, loss_value)
   tf.losses.add_loss(loss_value)

   return loss_value
...
@@ -135,7 +135,7 @@ def correlation_loss(source_samples, target_samples, weight, scope=None):
   tag = 'Correlation Loss'
   if scope:
     tag = scope + tag
-  tf.contrib.deprecated.scalar_summary(tag, corr_loss)
+  tf.summary.scalar(tag, corr_loss)
   tf.losses.add_loss(corr_loss)

   return corr_loss
...
@@ -155,11 +155,11 @@ def dann_loss(source_samples, target_samples, weight, scope=None):
   """
   with tf.variable_scope('dann'):
     batch_size = tf.shape(source_samples)[0]
-    samples = tf.concat([source_samples, target_samples], 0)
+    samples = tf.concat(axis=0, values=[source_samples, target_samples])
     samples = slim.flatten(samples)

     domain_selection_mask = tf.concat(
-        [tf.zeros((batch_size, 1)), tf.ones((batch_size, 1))], 0)
+        axis=0, values=[tf.zeros((batch_size, 1)), tf.ones((batch_size, 1))])

     # Perform the gradient reversal and be careful with the shape.
     grl = grl_ops.gradient_reversal(samples)
...
@@ -184,9 +184,9 @@ def dann_loss(source_samples, target_samples, weight, scope=None):
     tag_loss = scope + tag_loss
     tag_accuracy = scope + tag_accuracy

-  tf.contrib.deprecated.scalar_summary(
+  tf.summary.scalar(
       tag_loss, domain_loss, name='domain_loss_summary')
-  tf.contrib.deprecated.scalar_summary(
+  tf.summary.scalar(
       tag_accuracy, domain_accuracy, name='domain_accuracy_summary')

   return domain_loss
...
@@ -216,7 +216,7 @@ def difference_loss(private_samples, shared_samples, weight=1.0, name=''):
   cost = tf.reduce_mean(tf.square(correlation_matrix)) * weight
   cost = tf.where(cost > 0, cost, 0, name='value')

-  tf.contrib.deprecated.scalar_summary('losses/Difference Loss {}'.format(name),
+  tf.summary.scalar('losses/Difference Loss {}'.format(name),
       cost)
   assert_op = tf.Assert(tf.is_finite(cost), [cost])
   with tf.control_dependencies([assert_op]):
...
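All five losses.py hunks are the same two TF 1.0 migrations: tf.contrib.deprecated.scalar_summary becomes tf.summary.scalar, and tf.concat gains explicit keywords. A minimal sketch of the summary half, assuming TensorFlow 1.x; the loss tensor and summary name are illustrative:

```
# Sketch of the summary-API migration, assuming TensorFlow 1.x.
import tensorflow as tf

loss_value = tf.constant(0.42)

# tf.contrib.deprecated.scalar_summary(tag, tensor) is replaced by
# tf.summary.scalar(name, tensor), which registers the value in the
# default summary collection for TensorBoard.
summary_op = tf.summary.scalar('losses/mmd_loss', loss_value)

with tf.Session() as sess:
  print(sess.run(summary_op))  # a serialized Summary protobuf
```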
domain_adaptation/domain_separation/models_test.py

@@ -115,7 +115,7 @@ class DecoderTest(tf.test.TestCase):
           width=width,
           channels=channels,
           batch_norm_params=batch_norm_params)
-      sess.run(tf.initialize_all_variables())
+      sess.run(tf.global_variables_initializer())
       output_np = sess.run(output)
       self.assertEqual(output_np.shape, (32, height, width, channels))
       self.assertTrue(np.any(output_np))
...
domain_adaptation/domain_separation/utils.py

@@ -75,15 +75,15 @@ def reshape_feature_maps(features_tensor):
       num_filters)
   num_filters_sqrt = int(num_filters_sqrt)
   conv_summary = tf.unstack(features_tensor, axis=3)
-  conv_one_row = tf.concat(conv_summary[0:num_filters_sqrt], 2)
+  conv_one_row = tf.concat(axis=2, values=conv_summary[0:num_filters_sqrt])
   ind = 1
   conv_final = conv_one_row
   for ind in range(1, num_filters_sqrt):
-    conv_one_row = tf.concat(conv_summary[
-        ind * num_filters_sqrt + 0:ind * num_filters_sqrt + num_filters_sqrt],
-        2)
+    conv_one_row = tf.concat(axis=2,
+        values=conv_summary[
+        ind * num_filters_sqrt + 0:ind * num_filters_sqrt + num_filters_sqrt])
     conv_final = tf.concat(
-        [tf.squeeze(conv_final), tf.squeeze(conv_one_row)], 1)
+        axis=1, values=[tf.squeeze(conv_final), tf.squeeze(conv_one_row)])
   conv_final = tf.expand_dims(conv_final, -1)
   return conv_final
...
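The utils.py change rewrites the tf.concat calls inside reshape_feature_maps, which tiles the channels of a feature map into a square image grid for summaries. A sketch of the same tiling with the keyword-style calls, assuming TensorFlow 1.x; the shapes and the list-comprehension form are illustrative:

```
# Sketch of the feature-map tiling done by reshape_feature_maps,
# using the keyword-style tf.concat from this commit.
# Assumes TensorFlow 1.x; shapes are illustrative.
import tensorflow as tf

features = tf.random_normal([1, 8, 8, 16])  # 16 filters -> a 4x4 grid
n = 4                                       # int(sqrt(16))

maps = tf.unstack(features, axis=3)         # 16 tensors of shape [1, 8, 8]
rows = [tf.concat(axis=2, values=maps[i * n:(i + 1) * n]) for i in range(n)]
grid = tf.concat(axis=1, values=[tf.squeeze(r) for r in rows])
grid = tf.expand_dims(grid, -1)             # one [32, 32, 1] summary image

with tf.Session() as sess:
  print(sess.run(tf.shape(grid)))  # [32 32  1]
```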