"git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "651bfb7ad57387b7d645420f56195b9134fff99f"
Unverified commit f58b9c05 authored by Gorkem Ozkaya, committed by GitHub

Update translation.mdx (#18169)

* Update translation.mdx

* update translation.mdx by running make style
@@ -93,10 +93,32 @@ Use 🤗 Datasets [`~datasets.Dataset.map`] function to apply the preprocessing…
>>> tokenized_books = books.map(preprocess_function, batched=True)
```
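If you want to confirm what the preprocessing produced, it can help to peek at one tokenized example. A minimal check (not part of the original guide), assuming the `tokenized_books` dataset created above:

```py
>>> # inspect one example; the tokenized fields (input_ids, attention_mask, labels) should now be present
>>> list(tokenized_books["train"][0].keys())
```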
<frameworkcontent>
<pt>
Load T5 with [`AutoModelForSeq2SeqLM`]:
```py
>>> from transformers import AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
```
</pt>
<tf>
Load T5 with [`TFAutoModelForSeq2SeqLM`]:
```py
>>> from transformers import TFAutoModelForSeq2SeqLM
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
```
</tf>
</frameworkcontent>
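As an optional sanity check (a sketch, not part of the guide), the pretrained checkpoint can already produce a rough translation when given T5's task prefix. This assumes the `tokenizer` loaded earlier in the guide and the PyTorch model; the TensorFlow version is analogous with `return_tensors="tf"`:

```py
>>> text = "translate English to French: The weather is nice today."
>>> inputs = tokenizer(text, return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=40)  # greedy decoding by default
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```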
Use [`DataCollatorForSeq2Seq`] to create a batch of examples. It will also *dynamically pad* your text and labels to the length of the longest element in its batch, so they are a uniform length. While it is possible to pad your text in the `tokenizer` function by setting `padding=True`, dynamic padding is more efficient.
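For intuition, here is a rough, self-contained illustration (not part of the guide) of what the collator does with two examples of different lengths. It assumes the `tokenizer` from earlier in the guide and the PyTorch `model` loaded above, and keeps only the tokenized fields, since that is all the collator expects; by default it returns PyTorch tensors, and `labels` are padded with -100 so the padded positions are ignored by the loss (pass `return_tensors="tf"` for the TensorFlow path):

```py
>>> from transformers import DataCollatorForSeq2Seq

>>> collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)
>>> keep = ["input_ids", "attention_mask", "labels"]
>>> features = [{k: tokenized_books["train"][i][k] for k in keep} for i in range(2)]
>>> batch = collator(features)
>>> batch["input_ids"].shape  # both rows padded to the longest input in this small batch
>>> batch["labels"].shape  # labels padded as well, with -100 in the padded positions
```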
<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForSeq2Seq
@@ -104,6 +126,7 @@ Use [`DataCollatorForSeq2Seq`] to create a batch of examples. It will also *dyna…
```
</pt>
<tf>
```py
>>> from transformers import DataCollatorForSeq2Seq
@@ -116,13 +139,6 @@ Use [`DataCollatorForSeq2Seq`] to create a batch of examples. It will also *dyna…
<frameworkcontent>
<pt>
<Tip>
@@ -137,6 +153,8 @@ At this point, only three steps remain:
3. Call [`~Trainer.train`] to fine-tune your model.

```py
>>> from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="./results",
...     evaluation_strategy="epoch",
@@ -194,14 +212,6 @@ Set up an optimizer function, learning rate schedule, and some training hyperpar…
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
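The snippet above uses a fixed learning rate; if you want the learning-rate schedule mentioned in the text, `transformers.create_optimizer` can build an Adam optimizer with weight decay plus a linear decay schedule. A sketch (the step counts below are placeholders, not values from the guide):

```py
>>> from transformers import create_optimizer

>>> # num_train_steps is normally (number of training batches) * (number of epochs)
>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=500,  # placeholder for illustration
...     weight_decay_rate=0.01,
... )
```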
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):

```py
@@ -222,4 +232,4 @@ For a more in-depth example of how to fine-tune a model for translation, take a…
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).
</Tip>