Unverified Commit f58b9c05 authored by Gorkem Ozkaya, committed by GitHub

Update translation.mdx (#18169)

* Update translation.mdx

* update translation.mdx by running make style
parent b5169527
@@ -93,10 +93,32 @@ Use 🤗 Datasets [`~datasets.Dataset.map`] function to apply the preprocessing
>>> tokenized_books = books.map(preprocess_function, batched=True)
```
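The `preprocess_function` that `map` applies here is defined earlier in the guide and is not part of these hunks. A minimal sketch of what such a function typically looks like for T5 translation, assuming the English–French `opus_books` data and the `t5-small` tokenizer used elsewhere in the guide (the task prefix and `max_length` values are illustrative):

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("t5-small")
>>> prefix = "translate English to French: "  # T5 expects a task prefix on the source text

>>> def preprocess_function(examples):
...     # examples["translation"] is a list of {"en": ..., "fr": ...} pairs
...     inputs = [prefix + example["en"] for example in examples["translation"]]
...     targets = [example["fr"] for example in examples["translation"]]
...     model_inputs = tokenizer(inputs, max_length=128, truncation=True)
...     # Tokenize the targets in target mode so they can be used as labels
...     with tokenizer.as_target_tokenizer():
...         labels = tokenizer(targets, max_length=128, truncation=True)
...     model_inputs["labels"] = labels["input_ids"]
...     return model_inputs
```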
+<frameworkcontent>
+<pt>
+Load T5 with [`AutoModelForSeq2SeqLM`]:
+```py
+>>> from transformers import AutoModelForSeq2SeqLM
+>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
+```
+</pt>
+<tf>
+Load T5 with [`TFAutoModelForSeq2SeqLM`]:
+```py
+>>> from transformers import TFAutoModelForSeq2SeqLM
+>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
+```
+</tf>
+</frameworkcontent>
Use [`DataCollatorForSeq2Seq`] to create a batch of examples. It will also *dynamically pad* your text and labels to the length of the longest element in its batch, so they are a uniform length. While it is possible to pad your text in the `tokenizer` function by setting `padding=True`, dynamic padding is more efficient.
<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForSeq2Seq
@@ -104,6 +126,7 @@ Use [`DataCollatorForSeq2Seq`] to create a batch of examples. It will also *dyna
```
</pt>
<tf>
```py
>>> from transformers import DataCollatorForSeq2Seq
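The lines that actually build the collator fall outside the shown hunks. As a rough sketch, assuming the `tokenizer` and `model` objects created above, the collator is usually constructed like this, with `return_tensors="tf"` on the TensorFlow side:

```py
>>> # PyTorch: pad inputs and labels dynamically to the longest example in each batch
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)

>>> # TensorFlow: same collator, but ask for TensorFlow tensors
>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, return_tensors="tf")
```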
@@ -116,13 +139,6 @@ Use [`DataCollatorForSeq2Seq`] to create a batch of examples. It will also *dyna
<frameworkcontent>
<pt>
-Load T5 with [`AutoModelForSeq2SeqLM`]:
-```py
->>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
->>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
-```
<Tip>
@@ -137,6 +153,8 @@ At this point, only three steps remain:
3. Call [`~Trainer.train`] to fine-tune your model.
```py
+>>> from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
>>> training_args = Seq2SeqTrainingArguments(
... output_dir="./results",
... evaluation_strategy="epoch",
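The rest of the training-arguments block is cut off by the hunk. A hedged sketch of how such a setup is usually completed; the hyperparameter values and dataset split names below are illustrative, not the file's exact content:

```py
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="./results",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,                 # illustrative hyperparameters
...     per_device_train_batch_size=16,
...     weight_decay=0.01,
...     num_train_epochs=2,
... )

>>> trainer = Seq2SeqTrainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_books["train"],  # assumes a train/test split of tokenized_books
...     eval_dataset=tokenized_books["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )

>>> trainer.train()
```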
@@ -194,14 +212,6 @@ Set up an optimizer function, learning rate schedule, and some training hyperpar
>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
-Load T5 with [`TFAutoModelForSeq2SeqLM`]:
-```py
->>> from transformers import TFAutoModelForSeq2SeqLM
->>> model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
-```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):
```py
...
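The Keras training code that this `compile` step introduces is cut off here. A minimal sketch of the usual pattern, assuming a `train` split of `tokenized_books` and the `data_collator` and `optimizer` from above (the `tf_train_set` name, batch size, and epoch count are illustrative):

```py
>>> tf_train_set = tokenized_books["train"].to_tf_dataset(
...     columns=["attention_mask", "input_ids", "labels"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> # Transformers TF models compute a suitable loss internally when compile() gets no loss
>>> model.compile(optimizer=optimizer)
>>> model.fit(tf_train_set, epochs=3)
```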