Unverified Commit e3e70f95 authored by Amine Abdaoui, committed by GitHub

docs(examples): fix link to TPU launcher script (#11427)

parent d7633a4e
@@ -119,7 +119,7 @@ When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context an
 very detailed [pytorch/xla README](https://github.com/pytorch/xla/blob/master/README.md).
 In this repo, we provide a very simple launcher script named
-[xla_spawn.py](https://github.com/huggingface/transformers/tree/master/examples/xla_spawn.py) that lets you run our
+[xla_spawn.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/xla_spawn.py) that lets you run our
 example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your
 regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for
 `torch.distributed`):
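For context, an invocation of the launcher described in the diff above would look roughly like the following. This is a minimal sketch: only `xla_spawn.py` and the `--num_cores` flag come from the text being patched; the example script path and its arguments are illustrative assumptions.

```bash
# Sketch only: spawn processes on 8 TPU cores, each running the training script
# given after the launcher, with that script's own arguments passed through.
# The script path and flags below are assumptions, not taken from the diff.
python xla_spawn.py --num_cores 8 \
    text-classification/run_glue.py \
    --model_name_or_path bert-base-cased \
    --task_name mnli \
    --do_train \
    --output_dir /tmp/mnli_output
```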