ModelZoo / ResNet50_tensorflow · Commits

Unverified commit 6c560cb3, authored Oct 24, 2018 by josh11b, committed by GitHub on Oct 24, 2018
AllReduceCrossTowerOps -> AllReduceCrossDeviceOps
parent 780f5265
Showing 1 changed file with 2 additions and 2 deletions (+2, -2).
official/utils/misc/distribution_utils.py
@@ -27,7 +27,7 @@ def get_distribution_strategy(num_gpus, all_reduce_alg=None):
   Args:
     num_gpus: Number of GPUs to run this model.
     all_reduce_alg: Specify which algorithm to use when performing all-reduce.
-      See tf.contrib.distribute.AllReduceCrossTowerOps for available algorithms.
+      See tf.contrib.distribute.AllReduceCrossDeviceOps for available algorithms.
       If None, DistributionStrategy will choose based on device topology.

   Returns:
@@ -41,7 +41,7 @@ def get_distribution_strategy(num_gpus, all_reduce_alg=None):
     if all_reduce_alg:
       return tf.contrib.distribute.MirroredStrategy(
           num_gpus=num_gpus,
-          cross_tower_ops=tf.contrib.distribute.AllReduceCrossTowerOps(
+          cross_tower_ops=tf.contrib.distribute.AllReduceCrossDeviceOps(
              all_reduce_alg, num_packs=2))
     else:
       return tf.contrib.distribute.MirroredStrategy(num_gpus=num_gpus)
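For context, a minimal sketch of how the changed helper might be called from a training script. This driver code is not part of the commit; the "nccl" value is one example algorithm name and is an assumption here. The change above only swaps the class constructed for the cross_tower_ops argument (AllReduceCrossTowerOps to AllReduceCrossDeviceOps); the helper's own signature is unchanged.

    # Hypothetical usage sketch, assuming TF 1.x with tf.contrib available.
    import tensorflow as tf

    from official.utils.misc import distribution_utils

    # "nccl" is an example all-reduce algorithm name (assumption); see
    # tf.contrib.distribute.AllReduceCrossDeviceOps for the supported values.
    strategy = distribution_utils.get_distribution_strategy(
        num_gpus=4, all_reduce_alg="nccl")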