"git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "dbf7bfafa7d9a0e5d7963c5d15350ea6b34060ab"
Unverified commit ec3dfe5e authored by Wonhyeong Seo, committed by GitHub

🌐 [i18n-KO] Fixed Korean and English `quicktour.md` (#24664)



* fix: english/korean quicktour.md

* fix: resolve suggestions
Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: Kihoon Son <75935546+kihoon71@users.noreply.github.com>

* fix: follow glossary

* 파인튜닝 -> 미세조정 (replace the transliteration of "fine-tuning" with the Korean glossary term)

---------
Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
Co-authored-by: Kihoon Son <75935546+kihoon71@users.noreply.github.com>
parent 83f9314d
@@ -64,7 +64,7 @@ For a complete list of available tasks, check out the [pipeline API reference](..
| Audio classification | assign a label to some audio data | Audio | pipeline(task="audio-classification") |
| Automatic speech recognition | transcribe speech into text | Audio | pipeline(task="automatic-speech-recognition") |
| Visual question answering | answer a question about the image, given an image and a question | Multimodal | pipeline(task="vqa") |
- | Document question answering | answer a question about a document, given an image and a question | Multimodal | pipeline(task="document-question-answering") |
+ | Document question answering | answer a question about the document, given a document and a question | Multimodal | pipeline(task="document-question-answering") |
| Image captioning | generate a caption for a given image | Multimodal | pipeline(task="image-to-text") |
Start by creating an instance of [`pipeline`] and specifying a task you want to use it for. In this guide, you'll use the [`pipeline`] for sentiment analysis as an example:
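Not part of the original diff: as an illustration of the task-string interface described in the table above, here is a toy sketch in plain Python. The keyword heuristic below is a stand-in, not the transformers implementation; the real `pipeline("sentiment-analysis")` downloads and runs a pretrained model.

```python
# Toy sketch of the pipeline(task=...) calling convention only.
# A keyword heuristic stands in for the pretrained model that the
# real transformers pipeline would load.
def pipeline(task):
    if task == "sentiment-analysis":
        positive = {"happy", "great", "love", "excellent"}

        def classify(text):
            # Count positive keywords, mirroring the list-of-dicts
            # output shape of the real pipeline.
            hits = sum(w.strip(".,!?").lower() in positive for w in text.split())
            label = "POSITIVE" if hits > 0 else "NEGATIVE"
            return [{"label": label, "score": 0.5 + min(hits, 5) * 0.1}]

        return classify
    raise ValueError(f"unsupported task: {task}")

classifier = pipeline("sentiment-analysis")
result = classifier("We are very happy to show you the 🤗 Transformers library.")
```

The output shape (a list of `{"label": ..., "score": ...}` dicts) matches what the real pipeline returns; the scores here are arbitrary.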
@@ -289,7 +289,7 @@ See the [task summary](./task_summary) for tasks supported by an [`AutoModel`] c
</Tip>
- Now pass your preprocessed batch of inputs directly to the model by passing the dictionary keys directly to the tensors:
+ Now pass your preprocessed batch of inputs directly to the model. You can pass the tensors as-is:
```py
>>> tf_outputs = tf_model(tf_batch)
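# Not part of the original snippet: a hedged sketch of the usual next
# step. The model call above returns logits; a softmax turns them into
# probabilities. Plain Python stands in for TensorFlow here, and the
# logit values are illustrative only.
import math
logits = [-4.0, 4.0]                    # stand-in for one row of tf_outputs.logits
exps = [math.exp(v) for v in logits]
probs = [e / sum(exps) for e in exps]   # probabilities summing to 1.0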
@@ -410,7 +410,7 @@ All models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn
Depending on your task, you'll typically pass the following parameters to [`Trainer`]:
- 1. A [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module):
+ 1. You'll start with a [`PreTrainedModel`] or a [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module):
```py
>>> from transformers import AutoModelForSequenceClassification
@@ -432,7 +432,7 @@ Depending on your task, you'll typically pass the following parameters to [`Trai
... )
```
- 3. A preprocessing class like a tokenizer, image processor, feature extractor, or processor:
+ 3. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:
```py
>>> from transformers import AutoTokenizer
@@ -512,7 +512,7 @@ All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs
>>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
```
- 2. A preprocessing class like a tokenizer, image processor, feature extractor, or processor:
+ 2. Load a preprocessing class like a tokenizer, image processor, feature extractor, or processor:
```py
>>> from transformers import AutoTokenizer
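# Not part of the original snippet: a toy sketch of the kind of output a
# preprocessing class produces. The real AutoTokenizer maps text to
# input_ids and an attention_mask using a learned vocabulary; a tiny
# hand-made vocab stands in here, purely for illustration.
vocab = {"hello": 1, "world": 2}
text = "hello world"
encoding = {
    "input_ids": [vocab.get(word, 0) for word in text.split()],
    "attention_mask": [1] * len(text.split()),
}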