Unverified commit 008c2d0b, authored by Aleksander Smywiński-Pohl and committed by GitHub

Fix typo in documentation (#13494)

* Fix typo in deepspeed documentation

* Add missing import in deepspeed configuration

* Fix path in translation examples
parent 1c191efc
@@ -42,7 +42,7 @@ and you also will find examples of these below.
 Here is an example of a translation fine-tuning with a MarianMT model:
 ```bash
-python examples/pytorch/seq2seq/run_translation.py \
+python examples/pytorch/translation/run_translation.py \
 --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
 --do_train \
 --do_eval \
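The hunk above cuts off after the first few flags. For reference, a complete MarianMT invocation with the corrected path might look like the sketch below; everything past `--do_eval` (dataset, output, batch-size, and generation flags, plus the `/tmp/tst-translation` path) is illustrative standard usage of `run_translation.py`, not part of this diff:

```bash
# Illustrative full command; flags below --do_eval are assumed, not shown in the hunk
python examples/pytorch/translation/run_translation.py \
--model_name_or_path Helsinki-NLP/opus-mt-en-ro \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```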
@@ -62,7 +62,7 @@ MBart and some T5 models require special handling.
 T5 models `t5-small`, `t5-base`, `t5-large`, `t5-3b` and `t5-11b` must use an additional argument: `--source_prefix "translate {source_lang} to {target_lang}"`. For example:
 ```bash
-python examples/pytorch/seq2seq/run_translation.py \
+python examples/pytorch/translation/run_translation.py \
 --model_name_or_path t5-small \
 --do_train \
 --do_eval \
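The `{source_lang}`/`{target_lang}` placeholders in `--source_prefix` resolve to plain English names, e.g. `"translate English to Romanian: "` for `en`→`ro`. A filled-out T5 command might look like this sketch (flags beyond the visible context lines are again illustrative):

```bash
# Illustrative T5 command; note the concrete source prefix for en -> ro
python examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--source_prefix "translate English to Romanian: " \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```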
@@ -85,7 +85,7 @@ For the aforementioned group of T5 models it's important to remember that if you
 MBart models require a different format for `--source_lang` and `--target_lang` values, e.g. instead of `en` it expects `en_XX`, for `ro` it expects `ro_RO`. The full MBart specification for language codes can be found [here](https://huggingface.co/facebook/mbart-large-cc25). For example:
 ```bash
-python examples/pytorch/seq2seq/run_translation.py \
+python examples/pytorch/translation/run_translation.py \
 --model_name_or_path facebook/mbart-large-en-ro \
 --do_train \
 --do_eval \
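For MBart, the language flags carry the region-qualified codes described in the paragraph above. A sketch of a complete command, with the flags past the visible context being illustrative:

```bash
# Illustrative MBart command; en_XX / ro_RO are the MBart-style language codes
python examples/pytorch/translation/run_translation.py \
--model_name_or_path facebook/mbart-large-en-ro \
--do_train \
--do_eval \
--source_lang en_XX \
--target_lang ro_RO \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```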
@@ -104,7 +104,7 @@ And here is how you would use the translation finetuning on your own files, afte
 values for the arguments `--train_file`, `--validation_file` to match your setup:
 ```bash
-python examples/pytorch/seq2seq/run_translation.py \
+python examples/pytorch/translation/run_translation.py \
 --model_name_or_path t5-small \
 --do_train \
 --do_eval \
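When training on your own files, `--train_file`/`--validation_file` take the place of the dataset flags; `run_translation.py` expects JSON-lines data with a `translation` field. A sketch with placeholder paths:

```bash
# Illustrative command for custom data; the two file paths are placeholders.
# Each line of the JSON files should look like:
# {"translation": {"en": "...", "ro": "..."}}
python examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--source_prefix "translate English to Romanian: " \
--train_file path/to/train.json \
--validation_file path/to/val.json \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```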
@@ -133,7 +133,7 @@ Here the languages are Romanian (`ro`) and English (`en`).
 If you want to use a pre-processed dataset that leads to high BLEU scores, but for the `en-de` language pair, you can use `--dataset_name stas/wmt14-en-de-pre-processed`, as following:
 ```bash
-python examples/pytorch/seq2seq/run_translation.py \
+python examples/pytorch/translation/run_translation.py \
 --model_name_or_path t5-small \
 --do_train \
 --do_eval \
...
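For the last hunk, the pre-processed dataset swaps in via `--dataset_name` and the language pair changes to `en`→`de`. An illustrative sketch; the flags past the visible context, and the assumption that this dataset needs no `--dataset_config_name`, are not part of the diff:

```bash
# Illustrative command for the pre-processed en-de dataset
python examples/pytorch/translation/run_translation.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang de \
--source_prefix "translate English to German: " \
--dataset_name stas/wmt14-en-de-pre-processed \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```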