"docs/img/git@developer.sourcefind.cn:OpenDAS/nni.git" did not exist on "19173aa4370e36cba96ee7049eaaa0dceda5007c"
Commit 84b97851 authored by chenych's avatar chenych
Browse files

First commit

parents
Pipeline #656 failed with stages
in 0 seconds
*.mdb
*.tar
*.ipynb
*.zip
*.eps
*.pdf
### Linux ###
*~
# temporary files which can be created if a process still has a handle open of a deleted file
.fuse_hidden*
# KDE directory preferences
.directory
# Linux trash folder which might appear on any partition or disk
.Trash-*
# .nfs files are created when an open file is removed but is still being accessed
.nfs*
### OSX ###
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
### Python Patch ###
.venv/
### Python.VirtualEnv Stack ###
# Virtualenv
# http://iamzed.com/2009/05/07/a-primer-on-virtualenv/
[Bb]in
[Ii]nclude
[Ll]ib64
[Ll]ocal
[Ss]cripts
pyvenv.cfg
pip-selfcheck.json
### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db
# Dump file
*.stackdump
# Folder config file
[Dd]esktop.ini
# Recycle Bin used on file shares
$RECYCLE.BIN/
# Windows Installer files
*.cab
*.msi
*.msix
*.msm
*.msp
# Windows shortcuts
*.lnk
.idea/
.vscode/
output/
exp/
data/
*.pyc
*.mp4
*.zip
MIT License
Copyright (c) 2022 HDETR-group (Yuhui Yuan, Ding Jia, Haodi He, Xiaopei Wu, Haojun Yu)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# HDETR
## Paper
[DETRs with Hybrid Matching](https://arxiv.org/abs/2207.13080)
## Model Architecture
Built on the DETR architecture, with a one-to-many matching branch added at the matching stage.
<div align=center>
<img src="./doc/hybrid.png"/>
</div>
## Algorithm
H-DETR introduces a one-to-many matching branch: the original one-to-one matching branch is combined with an auxiliary one-to-many branch that allows multiple queries to be assigned to each positive sample. This increases the number of positive queries and improves training. During inference, H-DETR still uses only the original one-to-one matching branch, preserving DETR's end-to-end advantages.
<div align=center>
<img src="./doc/methods.png"/>
</div>
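The hybrid scheme above can be sketched in a few lines of plain Python (an illustrative simplification, not the repository's actual matcher; `build_hybrid_targets` and `k` are hypothetical names):

```python
def build_hybrid_targets(gt_boxes, k):
    """Illustrative sketch of H-DETR's two matching branches.

    One-to-one branch: targets are the ground-truth boxes themselves,
    so Hungarian matching assigns at most one query per box.
    One-to-many branch: every ground-truth box is repeated k times,
    so up to k auxiliary queries can be matched to the same box,
    increasing the number of positive (supervised) queries.
    """
    one2one_targets = list(gt_boxes)
    one2many_targets = [box for box in gt_boxes for _ in range(k)]
    return one2one_targets, one2many_targets

# Two ground-truth boxes, each duplicated k=6 times for the auxiliary branch.
o2o, o2m = build_hybrid_targets([(0, 0, 10, 10), (5, 5, 20, 20)], k=6)
print(len(o2o), len(o2m))  # 2 12
```

At inference only the one-to-one branch and its queries are used, so test-time cost is unchanged.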
## Environment Setup
Note: after installing requirements.txt, the following packages must also be installed:
```
pip install openmim
mim install mmcv-full  # verify the installed version is 1.7.1
pip install mmdet==2.26.0  # matches mmcv 1.7.1
```
Adjust the `-v` mount path, `docker_name`, and `imageID` below to your environment.
### Docker (Option 1)
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.13.1-centos7.6-dtk-23.04.1-py38-latest
docker run -it -v /path/your_code_data/:/path/your_code_data/ --shm-size=80G --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name docker_name imageID bash
cd /your_code_path/HDETR_pytorch
pip install -r requirements.txt
```
### Dockerfile (Option 2)
```
cd ./docker
cp ../requirements.txt requirements.txt
docker build --no-cache -t hdetr:latest .
docker run -it -v /path/your_code_data/:/path/your_code_data/ --shm-size=80G --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name docker_name hdetr:latest bash
```
### Anaconda (Option 3)
1. The DCU-specific deep-learning libraries required by this project can be downloaded from the HPC developer community: https://developer.hpccube.com/tool/
```
DTK toolkit: dtk23.04.1
python: python3.8
torch: 1.13.1
torchvision: 0.14.1
```
Tips: the DTK toolkit, Python, torch, and other DCU-related tool versions above must match exactly.
2. Install the remaining (non-DCU-specific) libraries from requirements.txt:
```
pip3 install -r requirements.txt
```
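Since the versions must correspond exactly, a small helper (hypothetical, not part of the repository) can check whether an installed version string, e.g. `torch.__version__`, matches an expected version while ignoring any local build suffix after `+`:

```python
def version_matches(installed, expected):
    """True if the installed version (minus any local '+...' suffix)
    starts with the expected version string from the table above."""
    return installed.split("+")[0].startswith(expected)

# e.g. compare torch.__version__ against the expected 1.13.1
print(version_matches("1.13.1+das.opt1", "1.13.1"))  # True
print(version_matches("2.0.0", "1.13.1"))            # False
```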
## Dataset
COCO2017
[Training data](http://images.cocodataset.org/zips/train2017.zip)
[Validation data](http://images.cocodataset.org/zips/val2017.zip)
[Test data](http://images.cocodataset.org/zips/test2017.zip)
[Annotations](https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip)
The dataset directory structure is as follows:
```
├── COCO2017
│   ├── images
│   │   ├── train2017
│   │   ├── val2017
│   │   └── test2017
│   └── annotations
│       ├── instances_train2017.json
│       └── instances_val2017.json
```
Preparing the training/validation data:
Both the training and validation sets use the COCO data format. If you use your own annotated data, first convert the annotations to COCO format and store them following the directory structure above.
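As a concrete reference for that conversion, a minimal COCO-style annotation file has the shape below (field names follow the COCO annotation format; the `to_coco` helper itself is a hypothetical sketch, not repository code):

```python
import json

def to_coco(images, annotations_per_image, categories):
    """Build a minimal COCO-format dict.

    images: list of (file_name, width, height)
    annotations_per_image: {file_name: [(x, y, w, h, category_id), ...]}
    categories: {category_id: name}
    """
    coco = {"images": [], "annotations": [], "categories": []}
    ann_id = 1
    for img_id, (fname, w, h) in enumerate(images, start=1):
        coco["images"].append(
            {"id": img_id, "file_name": fname, "width": w, "height": h}
        )
        for x, y, bw, bh, cat_id in annotations_per_image.get(fname, []):
            coco["annotations"].append(
                {
                    "id": ann_id,
                    "image_id": img_id,
                    "category_id": cat_id,
                    "bbox": [x, y, bw, bh],  # COCO boxes are [x, y, width, height]
                    "area": bw * bh,
                    "iscrowd": 0,
                }
            )
            ann_id += 1
    coco["categories"] = [{"id": i, "name": n} for i, n in sorted(categories.items())]
    return coco

coco = to_coco(
    [("000000000001.jpg", 640, 480)],
    {"000000000001.jpg": [(10, 20, 100, 50, 1)]},
    {1: "person"},
)
json.dumps(coco)  # serializable, ready to save as instances_train2017.json
```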
## Training
Preparation before training:
1. Compile the custom ops:
```
cd ./models/ops
bash ./make.sh
```
2. Run the unit tests; all results must be TRUE:
```
python test.py
cd ../../
```
3. Choose the config of the model to train: set <config path> to the model configuration and <coco path> to the location of the training data in your environment.
Tips:
1. If you have a pre-trained backbone, set --pretrained_backbone_path in the config to the saved model's location.
2. If you use a Swin backbone, download the corresponding pre-trained model from https://github.com/microsoft/Swin-Transformer before training.
### Single node, single DCU
```
bash train.sh
```
### Single node, multiple DCUs
```
bash train_multi.sh
```
### Multiple nodes, multiple DCUs
#### Training on a Slurm cluster
<partition>: partition name
<job_name>: name for this run; a suggested convention is {model}_numcards_batchsize_date
<config path>: model configuration, chosen from the configs folder
1 node with 4 DCUs:
```
GPUS_PER_NODE=4 ./tools/run_dist_slurm.sh <partition> <job_name> 4 <config path>
```
2 nodes, each with 4 DCUs:
```
GPUS_PER_NODE=4 ./tools/run_dist_slurm.sh <partition> <job_name> 8 <config path>
```
## Inference
Prepare a trained checkpoint before validation: set <checkpoint path> to the model's location and <coco path> to the location of the inference data in your environment; the data must be in COCO format.
If you have no trained model, download one from the links in the References section and select the matching config before validating.
To visualize predictions (results drawn onto the images), run:
```
python test.py --pre_trained_model <checkpoint path> --coco_path <coco path>
```
The remaining parameters must match those used when training the model; see the parameter configuration in the code for details.
#### Single-DCU inference
```
bash val.sh
```
#### Multi-DCU inference
```
bash val_multi.sh
```
## Results
A single-image result on the COCO2017 test set:
<div align=center>
<img src="./doc/results.jpg"/>
</div>
### Accuracy
Single-DCU evaluation on the COCO2017 test set; results are shown in the table below.
Fill in the table according to your test results:
| Name | Backbone | query | epochs | AP |
| :--------: | :------: | :------: | :------: | :------: |
| H-Deformable-DETR + tricks(our) | R50 | 300 | 12 | xxx |
| H-Deformable-DETR + tricks | R50 | 300 | 12 | 48.7 |
## Application Scenarios
### Algorithm Category
Object detection
### Key Application Industries
Network security, transportation, government
## Source Repository and Issue Reporting
https://developer.hpccube.com/codes/modelzoo/hdetr_pytorch
## References
https://github.com/HDETR/H-Deformable-DETR
# ------------------------------------------------------------------------
# Deformable DETR
# Copyright (c) 2020 SenseTime. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
# ------------------------------------------------------------------------
"""
Benchmark inference speed of Deformable DETR.
"""
import os
import time
import argparse
import torch
from main import get_args_parser as get_main_args_parser
from models import build_model
from datasets import build_dataset
from util.misc import nested_tensor_from_tensor_list
def get_benchmark_arg_parser():
    parser = argparse.ArgumentParser("Benchmark inference speed of Deformable DETR.")
    parser.add_argument(
        "--num_iters", type=int, default=300, help="total iters to benchmark speed"
    )
    parser.add_argument(
        "--warm_iters",
        type=int,
        default=5,
        help="ignore first several iters that are very slow",
    )
    parser.add_argument(
        "--batch_size", type=int, default=1, help="batch size in inference"
    )
    parser.add_argument("--resume", type=str, help="load the pre-trained checkpoint")
    return parser


@torch.no_grad()
def measure_average_inference_time(model, inputs, num_iters=100, warm_iters=5):
    ts = []
    for iter_ in range(num_iters):
        # synchronize before and after the forward pass so the timer
        # measures actual GPU work, not just kernel launch time
        torch.cuda.synchronize()
        t_ = time.perf_counter()
        model(inputs)
        torch.cuda.synchronize()
        t = time.perf_counter() - t_
        # skip the first warm_iters iterations, which are slowed by warm-up
        if iter_ >= warm_iters:
            ts.append(t)
    return sum(ts) / len(ts)


def benchmark():
    args, _ = get_benchmark_arg_parser().parse_known_args()
    main_args = get_main_args_parser().parse_args(_)
    assert (
        args.warm_iters < args.num_iters and args.num_iters > 0 and args.warm_iters >= 0
    )
    assert args.batch_size > 0
    assert args.resume is None or os.path.exists(args.resume)
    dataset = build_dataset("val", main_args)
    model, _, _ = build_model(main_args)
    model.cuda()
    model.eval()
    if args.resume is not None:
        ckpt = torch.load(args.resume, map_location=lambda storage, loc: storage)
        model.load_state_dict(ckpt["model"])
    inputs = nested_tensor_from_tensor_list(
        [dataset[0][0].cuda() for _ in range(args.batch_size)]
    )
    t = measure_average_inference_time(model, inputs, args.num_iters, args.warm_iters)
    return 1.0 / t * args.batch_size


if __name__ == "__main__":
    fps = benchmark()
    print(f"Inference Speed: {fps:.1f} FPS")
#!/usr/bin/env bash
set -x
EXP_DIR=exps/one_stage/deformable-detr-baseline/12eps/r50_deformable_detr
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 50 \
--lr_drop 40 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/one_stage/deformable-detr-baseline/12eps/r50_deformable_detr_plus_iterative_bbox_refinement
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 50 \
--lr_drop 40 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/one_stage/deformable-detr-baseline/12eps/r50_deformable_detr_single_scale
PY_ARGS=${@:1}
python -u main.py \
--num_feature_levels 1 \
--output_dir ${EXP_DIR} \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 50 \
--lr_drop 40 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/one_stage/deformable-detr-baseline/12eps/r50_deformable_detr_single_scale_dc5
PY_ARGS=${@:1}
python -u main.py \
--num_feature_levels 1 \
--dilation \
--output_dir ${EXP_DIR} \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 50 \
--lr_drop 40 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/12eps/r50_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 12 \
--lr_drop 11 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/12eps/r50_dp0_mqs_lft_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 12 \
--lr_drop 11 \
--dropout 0.0 \
--mixed_selection \
--look_forward_twice \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/12eps/r50_n1800_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 1800 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 12 \
--lr_drop 11 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/12eps/swin/swin_large_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 12 \
--lr_drop 11 \
--backbone swin_large \
--pretrained_backbone_path /mnt/pretrained_backbone/swin_large_patch4_window7_224_22k.pth \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/12eps/swin/swin_large_dp0_mqs_lft_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 12 \
--lr_drop 11 \
--dropout 0.0 \
--mixed_selection \
--look_forward_twice \
--backbone swin_large \
--pretrained_backbone_path /mnt/pretrained_backbone/swin_large_patch4_window7_224_22k.pth \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/12eps/swin/swin_tiny_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 12 \
--lr_drop 11 \
--backbone swin_tiny \
--pretrained_backbone_path /mnt/pretrained_backbone/swin_tiny_patch4_window7_224.pth \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/12eps/swin/swin_tiny_dp0_mqs_lft_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 12 \
--lr_drop 11 \
--dropout 0.0 \
--mixed_selection \
--look_forward_twice \
--backbone swin_tiny \
--pretrained_backbone_path /mnt/pretrained_backbone/swin_tiny_patch4_window7_224.pth \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/24eps/r50_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 24 \
--lr_drop 20 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/24eps/r50_dp0_mqs_lft_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 24 \
--lr_drop 20 \
--dropout 0.0 \
--mixed_selection \
--look_forward_twice \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/24eps/r50_n1800_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 1800 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 24 \
--lr_drop 20 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/36eps/r50_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 36 \
--lr_drop 30 \
${PY_ARGS}
#!/usr/bin/env bash
set -x
EXP_DIR=exps/two_stage/deformable-detr-baseline/36eps/r50_dp0_mqs_lft_deformable_detr_plus_iterative_bbox_refinement_plus_plus_two_stage
PY_ARGS=${@:1}
python -u main.py \
--output_dir ${EXP_DIR} \
--with_box_refine \
--two_stage \
--dim_feedforward 2048 \
--num_queries_one2one 300 \
--num_queries_one2many 0 \
--k_one2many 0 \
--epochs 36 \
--lr_drop 30 \
--dropout 0.0 \
--mixed_selection \
--look_forward_twice \
${PY_ARGS}