Unverified Commit 8feede22 authored by Shotaro Ishihara, committed by GitHub

Fix broken code blocks in README.md (#15967)

at transformers/examples/pytorch/contrastive-image-text
parent 1e8f3799
@@ -39,13 +39,14 @@ wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
cd ..
```
```suggestion
Having downloaded the COCO dataset manually, you should be able to load it with the `ydshieh/coco_dataset_script` dataset loading script:
```py
COCO_DIR = "data"
ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR)
```
### Create a model from a vision encoder model and a text decoder model
Next, we create a [VisionTextDualEncoderModel](https://huggingface.co/docs/transformers/model_doc/vision-text-dual-encoder#visiontextdualencoder).
The `VisionTextDualEncoderModel` class lets you load any vision and text encoder model to create a dual encoder.
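The idea behind such a dual encoder is that each modality's encoder output is projected into a shared embedding space, where image–text similarity is scored CLIP-style. A minimal NumPy sketch of that scoring step, independent of the `transformers` API (all dimensions and weight matrices here are hypothetical stand-ins for what training would learn):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale rows to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical sizes: 768-d vision features, 512-d text features, 256-d shared space.
W_img = rng.standard_normal((768, 256))  # vision projection (learned during training)
W_txt = rng.standard_normal((512, 256))  # text projection (learned during training)

# Stand-ins for pooled encoder outputs of a batch of 4 image-text pairs.
image_feats = rng.standard_normal((4, 768))
text_feats = rng.standard_normal((4, 512))

# Project each modality into the shared space and normalize.
img_emb = l2_normalize(image_feats @ W_img)
txt_emb = l2_normalize(text_feats @ W_txt)

# Pairwise similarity matrix; contrastive training pushes the diagonal
# (matching pairs) higher than the off-diagonal entries.
logits = img_emb @ txt_emb.T
print(logits.shape)  # (4, 4)
```

In the actual model, the contrastive loss is computed from such a similarity matrix, treating the diagonal entries as the positive pairs within each batch.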
@@ -95,4 +96,4 @@ python examples/pytorch/contrastive-image-text/run_clip.py \
--learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
--overwrite_output_dir \
--push_to_hub
```
\ No newline at end of file
```