@@ -341,7 +341,7 @@ Configure the model for training with [`compile`](https://keras.io/api/models/mo
...
 >>> model.compile(optimizer=optimizer)
 ```
-The last two things to setup before you start training is to compute the accuracy from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](./main_classes/keras_callbacks).
+The last two things to set up before you start training are to compute the accuracy from the predictions and to provide a way to push your model to the Hub. Both are done using [Keras callbacks](../main_classes/keras_callbacks).
 Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
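For reference, the wiring elided here is roughly the following (a sketch; `compute_metrics` and `tf_validation_set` are assumed from earlier steps in the guide):

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```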
@@ -412,7 +412,7 @@ Convert your datasets to the `tf.data.Dataset` format using the [`~datasets.Data
...
 ... )
 ```
-To compute the accuracy from the predictions and push your model to the 🤗 Hub, use [Keras callbacks](./main_classes/keras_callbacks).
+To compute the accuracy from the predictions and push your model to the 🤗 Hub, use [Keras callbacks](../main_classes/keras_callbacks).
 Pass your `compute_metrics` function to [`KerasMetricCallback`],
 and use the [`PushToHubCallback`] to upload the model:
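A sketch of both callbacks together (the output directory is a hypothetical repo name; `compute_metrics`, `tokenizer`, `tf_train_set`, and `tf_validation_set` are assumed from earlier steps):

```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_model",  # hypothetical repo name
...     tokenizer=tokenizer,
... )
>>> model.fit(
...     x=tf_train_set,
...     validation_data=tf_validation_set,
...     epochs=3,
...     callbacks=[metric_callback, push_to_hub_callback],
... )
```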
...
@@ -587,4 +587,4 @@ To visualize the results, load the [dataset color palette](https://github.com/te
...
 <div class="flex justify-center">
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-preds.png" alt="Image of bedroom overlaid with segmentation map"/>
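The overlay shown in that image is produced along these lines (a sketch; `pred_seg`, `image`, and an `ade_palette()` color list are assumed from the elided steps of the guide):

```py
>>> import matplotlib.pyplot as plt
>>> import numpy as np

>>> # Map each predicted class id to its RGB color from the palette.
>>> palette = np.array(ade_palette())
>>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
>>> for label, color in enumerate(palette):
...     color_seg[pred_seg == label, :] = color

>>> # Blend the colored segmentation map with the original image.
>>> img = np.array(image) * 0.5 + color_seg * 0.5
>>> plt.imshow(img.astype(np.uint8))
>>> plt.show()
```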
@@ -16,7 +16,7 @@ specific language governing permissions and limitations under the License.
...
 <Youtube id="leNG9fN9FQU"/>
 Text classification is a common NLP task that assigns a label or class to text. Some of the largest companies run text classification in production for a wide range of practical applications. One of the most popular forms of text classification is sentiment analysis, which assigns a label like 🙂 positive, 🙁 negative, or 😐 neutral to a sequence of text.
 This guide will show you how to:
...
@@ -69,7 +69,7 @@ Then take a look at an example:
...
 }
 ```
 There are two fields in this dataset:
 - `text`: the movie review text.
 - `label`: a value that is either `0` for a negative review or `1` for a positive review.
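For context, this is the IMDb movie-review dataset; loading it and checking the fields looks roughly like this (a sketch, assuming 🤗 Datasets is installed):

```py
>>> from datasets import load_dataset

>>> imdb = load_dataset("imdb")
>>> sorted(imdb["test"][0].keys())  # the two fields described above
['label', 'text']
```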
...
@@ -267,7 +267,7 @@ Configure the model for training with [`compile`](https://keras.io/api/models/mo
...
 >>> model.compile(optimizer=optimizer)
 ```
-The last two things to setup before you start training is to compute the accuracy from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](./main_classes/keras_callbacks).
+The last two things to set up before you start training are to compute the accuracy from the predictions and to provide a way to push your model to the Hub. Both are done using [Keras callbacks](../main_classes/keras_callbacks).
 Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
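The `compute_metrics` function referenced here is typically defined along these lines (a sketch using the 🤗 Evaluate library):

```py
>>> import numpy as np
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=labels)
```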
@@ -19,7 +19,7 @@ specific language governing permissions and limitations under the License.
...
 Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be:
 - Extractive: extract the most relevant information from a document.
 - Abstractive: generate new text that captures the most relevant information.
 This guide will show you how to:
...
@@ -275,7 +275,7 @@ Configure the model for training with [`compile`](https://keras.io/api/models/mo
...
 >>> model.compile(optimizer=optimizer)
 ```
-The last two things to setup before you start training is to compute the ROUGE score from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](./main_classes/keras_callbacks).
+The last two things to set up before you start training are to compute the ROUGE score from the predictions and to provide a way to push your model to the Hub. Both are done using [Keras callbacks](../main_classes/keras_callbacks).
 Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
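A `compute_metrics` function for ROUGE might look like this (a sketch with the 🤗 Evaluate library; `tokenizer` is assumed from earlier steps):

```py
>>> import numpy as np
>>> import evaluate

>>> rouge = evaluate.load("rouge")

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
...     # Replace the -100 used to mask the loss before decoding the labels.
...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
...     return rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
```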
...
@@ -354,7 +354,7 @@ Tokenize the text and return the `input_ids` as PyTorch tensors:
...
-Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](./main_classes/text_generation) API.
+Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the summary. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.
 ```py
 >>> from transformers import AutoModelForSeq2SeqLM
...
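The elided generate call is of roughly this shape (the checkpoint name is a placeholder for the model trained above, and `inputs` holds the tokenized `input_ids`):

```py
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_billsum_model")  # hypothetical checkpoint
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
```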
@@ -380,7 +380,7 @@ Tokenize the text and return the `input_ids` as TensorFlow tensors:
...
-Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](./main_classes/text_generation) API.
+Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the summary. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.
 ```py
 >>> from transformers import TFAutoModelForSeq2SeqLM
...
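The TensorFlow version mirrors the PyTorch call (checkpoint name again a placeholder, `inputs` tokenized as TensorFlow tensors):

```py
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_billsum_model")  # hypothetical checkpoint
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
```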
@@ -396,4 +396,4 @@ Decode the generated token ids back into text:
...
 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.'
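That string comes from a decode call of roughly this shape (`outputs` is the tensor returned by `generate` above):

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```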
@@ -16,7 +16,7 @@ specific language governing permissions and limitations under the License.
...
 <Youtube id="wVHdVlPScxA"/>
 Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization.
 This guide will show you how to:
...
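A quick way to see token classification in action is the `pipeline` API (a sketch; the default model and the example sentence are illustrative):

```py
>>> from transformers import pipeline

>>> classifier = pipeline("token-classification", aggregation_strategy="simple")
>>> classifier("Hugging Face is based in New York City.")
```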
@@ -369,7 +369,7 @@ Configure the model for training with [`compile`](https://keras.io/api/models/mo
...
 >>> model.compile(optimizer=optimizer)
 ```
-The last two things to setup before you start training is to compute the seqeval scores from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](./main_classes/keras_callbacks).
+The last two things to set up before you start training are to compute the seqeval scores from the predictions and to provide a way to push your model to the Hub. Both are done using [Keras callbacks](../main_classes/keras_callbacks).
 Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
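A `compute_metrics` function built on seqeval typically looks like this (a sketch; `label_list` maps ids to NER tags and is assumed from earlier steps):

```py
>>> import numpy as np
>>> import evaluate

>>> seqeval = evaluate.load("seqeval")

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     predictions = np.argmax(predictions, axis=2)
...     # Drop the special tokens masked with -100 before scoring.
...     true_predictions = [
...         [label_list[p] for p, l in zip(pred, gold) if l != -100]
...         for pred, gold in zip(predictions, labels)
...     ]
...     true_labels = [
...         [label_list[l] for p, l in zip(pred, gold) if l != -100]
...         for pred, gold in zip(predictions, labels)
...     ]
...     results = seqeval.compute(predictions=true_predictions, references=true_labels)
...     return {
...         "precision": results["overall_precision"],
...         "recall": results["overall_recall"],
...         "f1": results["overall_f1"],
...         "accuracy": results["overall_accuracy"],
...     }
```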
@@ -284,7 +284,7 @@ Configure the model for training with [`compile`](https://keras.io/api/models/mo
...
 >>> model.compile(optimizer=optimizer)
 ```
-The last two things to setup before you start training is to compute the SacreBLEU metric from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](./main_classes/keras_callbacks).
+The last two things to set up before you start training are to compute the SacreBLEU metric from the predictions and to provide a way to push your model to the Hub. Both are done using [Keras callbacks](../main_classes/keras_callbacks).
 Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
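A SacreBLEU `compute_metrics` function might be sketched as follows (`tokenizer` is assumed from earlier steps; note SacreBLEU expects each reference wrapped in a list):

```py
>>> import numpy as np
>>> import evaluate

>>> sacrebleu = evaluate.load("sacrebleu")

>>> def compute_metrics(eval_pred):
...     predictions, labels = eval_pred
...     decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
...     result = sacrebleu.compute(predictions=decoded_preds, references=[[label] for label in decoded_labels])
...     return {"bleu": result["score"]}
```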
...
@@ -362,7 +362,7 @@ Tokenize the text and return the `input_ids` as PyTorch tensors:
...
-Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](./main_classes/text_generation) API.
+Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.
 ```py
 >>> from transformers import AutoModelForSeq2SeqLM
...
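The elided call is of roughly this shape (the checkpoint name is a placeholder for the model trained above; the sampling parameters are illustrative):

```py
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")  # hypothetical checkpoint
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```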
@@ -388,7 +388,7 @@ Tokenize the text and return the `input_ids` as TensorFlow tensors:
...
-Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](./main_classes/text_generation) API.
+Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.
 ```py
 >>> from transformers import TFAutoModelForSeq2SeqLM
...
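And the TensorFlow equivalent (same placeholder checkpoint and illustrative sampling parameters):

```py
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")  # hypothetical checkpoint
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```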
@@ -404,4 +404,4 @@ Decode the generated token ids back into text:
...
 'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.'
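As with summarization, the translated string is recovered with a decode call (`outputs` is the tensor returned by `generate` above):

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
```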