"tests/vscode:/vscode.git/clone" did not exist on "235be08569000a5361354f766972e653212bf0d3"
Unverified commit ffc319e7, authored by Steven Liu and committed by GitHub

Fix links in guides (#16182)

* 🖍 fix links in guides

* 🖍 apply feedback
parent 277fc2cc
@@ -171,7 +171,7 @@ Load Wav2Vec2 with [`AutoModelForCTC`]. For `ctc_loss_reduction`, it is often be
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
......
@@ -106,7 +106,7 @@ Load Wav2Vec2 with [`AutoModelForAudioClassification`]. Specify the number of la
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
......
@@ -126,7 +126,7 @@ Load ViT with [`AutoModelForImageClassification`]. Specify the number of labels,
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
......
@@ -212,7 +212,7 @@ Load DistilGPT2 with [`AutoModelForCausalLM`]:
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
@@ -247,7 +247,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
 <Tip>
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 </Tip>
@@ -317,7 +317,7 @@ Load DistilRoBERTa with [`AutoModelForMaskedLM`]:
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
@@ -353,7 +353,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
 <Tip>
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 </Tip>
......
@@ -188,7 +188,7 @@ Load BERT with [`AutoModelForMultipleChoice`]:
 <Tip>
-If you aren't familiar with fine-tuning a model with Trainer, take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with Trainer, take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
@@ -227,7 +227,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
 <Tip>
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 </Tip>
......
@@ -163,7 +163,7 @@ Load DistilBERT with [`AutoModelForQuestionAnswering`]:
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
@@ -202,7 +202,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
 <Tip>
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 </Tip>
......
@@ -103,7 +103,7 @@ Load DistilBERT with [`AutoModelForSequenceClassification`] along with the numbe
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
@@ -147,21 +147,21 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
 <Tip>
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 </Tip>
 Convert your datasets to the `tf.data.Dataset` format with [`to_tf_dataset`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset). Specify inputs and labels in `columns`, whether to shuffle the dataset order, batch size, and the data collator:
 ```py
->>> tf_train_dataset = tokenized_imdb["train"].to_tf_dataset(
+>>> tf_train_set = tokenized_imdb["train"].to_tf_dataset(
 ...     columns=["attention_mask", "input_ids", "label"],
 ...     shuffle=True,
 ...     batch_size=16,
 ...     collate_fn=data_collator,
 ... )
->>> tf_validation_dataset = tokenized_imdb["train"].to_tf_dataset(
+>>> tf_validation_set = tokenized_imdb["test"].to_tf_dataset(
 ...     columns=["attention_mask", "input_ids", "label"],
 ...     shuffle=False,
 ...     batch_size=16,
......
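Aside from the relative-link fix, the hunk above also rebuilds `tf_validation_set` from the `"test"` split instead of reusing `"train"`, which is what makes it a genuine held-out validation set. The hunk is truncated before the datasets are consumed; as a rough sketch of that downstream step (the checkpoint name, optimizer, and epoch count here are illustrative assumptions, not taken from this diff), the two `tf.data.Dataset` objects end up being passed to Keras `fit`:

```py
>>> import tensorflow as tf
>>> from transformers import TFAutoModelForSequenceClassification

>>> # Assumed checkpoint for illustration; the guide fine-tunes a DistilBERT model with 2 labels.
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

>>> # Transformers TF models can compute their own loss, so no explicit loss is passed here.
>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5))

>>> # tf_validation_set now comes from the "test" split, so these metrics reflect held-out data.
>>> model.fit(tf_train_set, validation_data=tf_validation_set, epochs=3)
```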
@@ -122,7 +122,7 @@ Load T5 with [`AutoModelForSeq2SeqLM`]:
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
@@ -163,7 +163,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
 <Tip>
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 </Tip>
......
@@ -163,7 +163,7 @@ Load DistilBERT with [`AutoModelForTokenClassification`] along with the number o
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
@@ -202,7 +202,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
 <Tip>
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 </Tip>
......
@@ -124,7 +124,7 @@ Load T5 with [`AutoModelForSeq2SeqLM`]:
 <Tip>
-If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](training#finetune-with-trainer)!
+If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!
 </Tip>
@@ -165,7 +165,7 @@ To fine-tune a model in TensorFlow is just as easy, with only a few differences.
 <Tip>
-If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!
+If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#finetune-with-keras)!
 </Tip>
......