"...git@developer.sourcefind.cn:hehl2/torchaudio.git" did not exist on "9cf59e751a3762225b87859fdaea014b89eb2292"
Commit 32b84ec2 authored by chenzk

Update url.md

parent 4dbc9160
@@ -87,10 +87,10 @@ pip install -v -e .
 | images | SCNet fast download link | Annotation File |
 |---|---|---|
-| [Objects365 train](https://opendatalab.com/OpenDataLab/Objects365_v1) | [Objects365 train SCNet fast download link](http://113.200.138.88:18080/aidatasets/opendatalab/objects365-v1) | [objects365_train.json](https://opendatalab.com/OpenDataLab/Objects365_v1) |
+| [Objects365 train](https://opendatalab.com/OpenDataLab/Objects365_v1) | [SCNet] | [objects365_train.json](https://opendatalab.com/OpenDataLab/Objects365_v1) |
-| [GQA](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip) | [GQA SCNet fast download link](http://113.200.138.88:18080/aidatasets/project-dependency/gqa) | [final_mixed_train_no_coco.json](https://huggingface.co/GLIPModel/GLIP/tree/main/mdetr_annotations) |
+| [GQA](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip) | [SCNet] | [final_mixed_train_no_coco.json](https://huggingface.co/GLIPModel/GLIP/tree/main/mdetr_annotations) |
-| [Flickr30k](https://opendatalab.com/OpenDataLab/Flickr30k) | [Flickr30k SCNet fast download link](http://113.200.138.88:18080/aidatasets/flickr30k) | [final_flickr_separateGT_train.json](https://huggingface.co/GLIPModel/GLIP/tree/main/mdetr_annotations) |
+| [Flickr30k](https://opendatalab.com/OpenDataLab/Flickr30k) | [SCNet] | [final_flickr_separateGT_train.json](https://huggingface.co/GLIPModel/GLIP/tree/main/mdetr_annotations) |
-| [COCO val2017](https://opendatalab.com/OpenDataLab/COCO_2017) | [COCO val2017 SCNet fast download link](http://113.200.138.88:18080/aidatasets/coco2017) | [lvis_v1_minival_inserted_image_name.json](https://huggingface.co/GLIPModel/GLIP/blob/main/lvis_v1_minival_inserted_image_name.json) |
+| [COCO val2017](https://opendatalab.com/OpenDataLab/COCO_2017) | [SCNet] | [lvis_v1_minival_inserted_image_name.json](https://huggingface.co/GLIPModel/GLIP/blob/main/lvis_v1_minival_inserted_image_name.json) |
 Download all of the data and place it under the data folder; the data directory layout is as follows:
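To make the data-preparation step concrete, here is a minimal Python sketch that lays out a plausible data/ skeleton and checks that the annotation files from the table above are in place. The sub-directory names are assumptions (the actual tree is collapsed in this diff; the hunk context only notes that mixed_grounding corresponds to the GQA data), so adjust them to the layout shown in the README.

```python
# Sketch: prepare the expected data/ skeleton before unpacking the downloads.
# The sub-directory names below are assumptions; match them to the README's tree.
from pathlib import Path

DATA_ROOT = Path("data")
SUBDIRS = [
    "objects365v1/train",          # Objects365 train images (assumed name)
    "mixed_grounding/gqa/images",  # mixed_grounding corresponds to the GQA data
    "flickr/full_images",          # Flickr30k images (assumed name)
    "coco/val2017",                # COCO val2017 images
    "coco/annotations",            # annotation .json files (assumed location)
]

for sub in SUBDIRS:
    (DATA_ROOT / sub).mkdir(parents=True, exist_ok=True)

# Sanity check: the annotation files listed in the table should be somewhere under data/.
ANNOTATIONS = [
    "objects365_train.json",
    "final_mixed_train_no_coco.json",
    "final_flickr_separateGT_train.json",
    "lvis_v1_minival_inserted_image_name.json",
]
for ann in ANNOTATIONS:
    hits = list(DATA_ROOT.rglob(ann))
    print(f"{ann}: {hits[0] if hits else 'MISSING'}")
```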
@@ -119,10 +119,6 @@ mixed_grounding corresponds to the GQA data
 ```
 ## Training
 First download the clip-vit model files and place them under the openai directory:
-SCNet fast download links:
-[clip-vit-large-patch14-336](http://113.200.138.88:18080/aimodels/clip-vit-large-patch14-336)
-[clip-vit-base-patch32](http://113.200.138.88:18080/aimodels/clip-vit-base-patch32)
 Official download links:
 [clip-vit-large-patch14-336 download](https://huggingface.co/openai/clip-vit-large-patch14-336)
 [clip-vit-base-patch32 download](https://huggingface.co/openai/clip-vit-base-patch32)
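A hedged sketch of this download step, assuming huggingface_hub is installed (pip install huggingface_hub): it fetches both checkpoints from the official links above and places them under the openai directory. The exact local paths expected by the training configs are not shown in this excerpt, so treat the target directories as assumptions.

```python
# Sketch: fetch the two CLIP checkpoints into the openai/ directory.
from huggingface_hub import snapshot_download

for repo_id in ("openai/clip-vit-large-patch14-336", "openai/clip-vit-base-patch32"):
    local_dir = repo_id  # e.g. openai/clip-vit-large-patch14-336 (assumed target path)
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
    print(f"downloaded {repo_id} -> {local_dir}")
```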
@@ -147,9 +143,7 @@ chmod +x tools/dist_train.sh
 ## Inference
 Download the desired model weight files from the link below and place them in the weights folder:
-[Model weights download](https://hf-mirror.com/wondervictor/YOLO-World/tree/main)
+[YOLO-World](https://hf-mirror.com/wondervictor/YOLO-World/tree/main)
-[Model weights SCNet fast download](http://113.200.138.88:18080/aimodels/wondervictor/YOLO-World)
 Note: each model config file must be paired with its corresponding weight file.
...
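As a sketch of the weights step, the checkpoints can be pulled from the mirror linked above into the weights folder with huggingface_hub. The HF_ENDPOINT setting and the "*.pth" filename pattern are assumptions, not part of the original instructions.

```python
import os

# huggingface_hub reads HF_ENDPOINT at import time, so set it before importing.
# Assumption: using the hf-mirror.com endpoint that the README links to.
os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="wondervictor/YOLO-World",
    local_dir="weights",
    allow_patterns=["*.pth"],  # assumption: checkpoints are stored as .pth files
)
```

Since config and weight files must correspond one-to-one, consider narrowing allow_patterns to the single checkpoint that matches the config you plan to run, rather than mirroring the whole repository.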