Commit 03038d9c authored by Ayush Dubey, committed by A. Unique TensorFlower

Update resnet README with multi-worker benchmark instructions.

PiperOrigin-RevId: 290963657
parent d4eedbb9
@@ -113,6 +113,19 @@ distributed training across the GPUs.
If you wish to run without `tf.distribute.Strategy`, you can do so by setting
`--distribution_strategy=off`.
### Running on multiple GPU hosts
You can also train these models on multiple hosts, each with GPUs, using
`tf.distribute.Strategy`.
The easiest way to run multi-host benchmarks is to set the
[`TF_CONFIG`](https://www.tensorflow.org/guide/distributed_training#TF_CONFIG)
environment variable appropriately on each host. For example, to run with
`MultiWorkerMirroredStrategy` on 2 hosts, the `cluster` entry in `TF_CONFIG`
should have 2 `host:port` entries, and host `i` should set the `task` entry in
`TF_CONFIG` to `{"type": "worker", "index": i}`. `MultiWorkerMirroredStrategy`
will automatically use all the available GPUs on each host.
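As a concrete sketch, the `TF_CONFIG` on each host could be set like this
before launching the training script (the host names and port below are
placeholders, not values from this repository):

```python
import json
import os

# TF_CONFIG for host 0 of 2; host0/host1 and port 2222 are placeholders.
# The "cluster" entry must be identical on every host.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host0.example.com:2222", "host1.example.com:2222"]
    },
    # On host 1, set "index": 1 instead.
    "task": {"type": "worker", "index": 0},
})
```

Equivalently, the same JSON string can be exported as an environment variable
in the shell before invoking the benchmark.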
### Running on Cloud TPUs
Note: This model will **not** work with TPUs on Colab.