Unverified Commit 8feede22 authored by Shotaro Ishihara, committed by GitHub

Fix broken code blocks in README.md (#15967)

at transformers/examples/pytorch/contrastive-image-text
parent 1e8f3799
@@ -39,13 +39,14 @@ wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
cd ..
```
Having downloaded the COCO dataset manually, you should be able to load it with the `ydshieh/coco_dataset_script` dataset loading script:
```py
import datasets

COCO_DIR = "data"
ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR)
```
### Create a model from a vision encoder model and a text encoder model
Next, we create a [VisionTextDualEncoderModel](https://huggingface.co/docs/transformers/model_doc/vision-text-dual-encoder#visiontextdualencoder).
The `VisionTextDualEncoderModel` class lets you load any vision and text encoder model to create a dual encoder.
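To illustrate what the dual-encoder setup computes, here is a toy sketch in plain PyTorch (an assumption for exposition, not the `transformers` implementation): two independent encoders are projected into a shared embedding space, and an image–text similarity matrix is produced, as in CLIP-style contrastive training. In the real example, `VisionTextDualEncoderModel.from_vision_text_pretrained` assembles this from pretrained checkpoints; the class and feature names below (`ToyDualEncoder`, `vision_proj`, `text_proj`) are hypothetical.

```python
import torch
import torch.nn as nn

# Toy illustration of the dual-encoder idea (not the transformers code):
# each modality gets its own projection into a shared space, and the
# similarity matrix compares every image against every caption.
class ToyDualEncoder(nn.Module):
    def __init__(self, vision_dim=512, text_dim=256, proj_dim=128):
        super().__init__()
        self.vision_proj = nn.Linear(vision_dim, proj_dim)
        self.text_proj = nn.Linear(text_dim, proj_dim)
        # Learnable temperature, as in CLIP-style contrastive objectives.
        self.logit_scale = nn.Parameter(torch.tensor(1.0))

    def forward(self, image_feats, text_feats):
        img = nn.functional.normalize(self.vision_proj(image_feats), dim=-1)
        txt = nn.functional.normalize(self.text_proj(text_feats), dim=-1)
        # logits[i, j]: scaled cosine similarity of image i and caption j.
        return self.logit_scale.exp() * img @ txt.t()

model = ToyDualEncoder()
logits = model(torch.randn(4, 512), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 4])
```

During contrastive training, the diagonal of this matrix (matched image–caption pairs) is pushed up while off-diagonal entries are pushed down.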
...