# 学习训练和测试
## 训练
本节将介绍如何在支持的数据集上训练现有模型。
支持以下训练环境:
- CPU
- 单 GPU
- 单节点多 GPU
- 多节点
您还可以使用 Slurm 管理作业。
重要:
- 在训练过程中,您可以通过修改 `train_cfg` 来改变评估间隔。
`train_cfg = dict(val_interval=10)`。这意味着每 10 个 epoch 对模型进行一次评估。
- 所有配置文件中的默认学习率都是针对 8 个 GPU 设置的。
根据[线性扩展规则](https://arxiv.org/abs/1706.02677),
如果使用不同数量的 GPU 或每个 GPU 处理不同数量的图像,则需要按批次大小等比例调整学习率,
例如,8 个 GPU * 1 个图像/GPU 时学习率为 `lr=0.01`,则 16 个 GPU * 2 个图像/GPU 时应为 `lr=0.04`(换算方式可参考本列表之后的示意代码)。
- 在训练过程中,日志文件和检查点将保存到工作目录,
该目录由 CLI 参数 `--work-dir` 指定,默认为 `./work_dirs/CONFIG_NAME`。
- 如果需要混合精度训练,只需指定 CLI 参数 `--amp`。
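下面给出一个按照线性扩展规则换算学习率的示意代码,仅用于说明计算方式,并非 MMDetection 提供的接口;其中的函数名以及基准设置(8 GPU * 1 图像/GPU、`lr=0.01`)均为沿用上文示例的假设。
```python
def scale_lr(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    """按线性扩展规则换算学习率:学习率与总批次大小成正比。"""
    return base_lr * new_batch_size / base_batch_size


if __name__ == '__main__':
    # 基准:8 GPU * 1 图像/GPU,lr=0.01
    base_lr, base_batch = 0.01, 8 * 1
    # 16 GPU * 2 图像/GPU,总批次为 32,换算后应为 lr=0.04
    print(scale_lr(base_lr, base_batch, 16 * 2))
```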
#### 1. 在 CPU 上训练
该模型默认放在 cuda 设备上。
仅当没有 cuda 设备时,该模型才会放在 CPU 上。
因此,如果要在 CPU 上训练模型,则需要先 `export CUDA_VISIBLE_DEVICES=-1` 以禁用 GPU 可见性。
更多细节参见 [MMEngine](https://github.com/open-mmlab/mmengine/blob/ca282aee9e402104b644494ca491f73d93a9544f/mmengine/runner/runner.py#L849-L850).
```shell
CUDA_VISIBLE_DEVICES=-1 python tools/train.py ${CONFIG_FILE} [optional arguments]
```
在 CPU 上训练 MOT 模型 QDTrack 的示例:
```shell
CUDA_VISIBLE_DEVICES=-1 python tools/train.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py
```
#### 2. 在单 GPU 上训练
如果您想在单 GPU 上训练模型,可以直接使用 `tools/train.py`,如下所示。
```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
您可以使用 `export CUDA_VISIBLE_DEVICES=$GPU_ID` 命令选择 GPU。
在单 GPU 上训练 MOT 模型 QDTrack 的示例:
```shell
CUDA_VISIBLE_DEVICES=2 python tools/train.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py
```
#### 3. 在单节点多 GPU 上进行训练
我们提供了 `tools/dist_train.sh`,用于在多个 GPU 上启动训练。
基本用法如下。
```shell
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
如果您想在一台机器上启动多个作业,
例如,在拥有 8 个 GPU 的机器上启动 2 个 4-GPU 训练作业,
需要为每个作业指定不同的端口(默认为 29500),以避免通信冲突。
例如,可以在命令中设置端口如下。
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 ./tools/dist_train.sh ${CONFIG_FILE} 4
```
在单节点多 GPU 上训练 MOT 模型 QDTrack 的示例:
```shell
bash ./tools/dist_train.sh configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py 8
```
#### 4. 在多个节点上训练
如果使用以太网连接多台机器,只需运行以下命令即可:
在第一台机器上:
```shell
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
```
在第二台机器上:
```shell
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
```
如果没有 InfiniBand 等高速网络,速度通常会很慢。
#### 5. 使用 Slurm 进行训练
[Slurm](https://slurm.schedmd.com/)是一个用于计算集群的优秀作业调度系统。
在 Slurm 管理的集群上,您可以使用 `slurm_train.sh` 生成训练作业。
它支持单节点和多节点训练。
基本用法如下。
```shell
bash ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR} ${GPUS}
```
使用 Slurm 训练 MOT 模型 QDTrack 的示例:
```shell
PORT=29501 \
GPUS_PER_NODE=8 \
SRUN_ARGS="--quotatype=reserved" \
bash ./tools/slurm_train.sh \
mypartition \
mottrack \
configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
./work_dirs/QDTrack \
8
```
## 测试
本节将介绍如何在支持的数据集上测试现有模型。
支持以下测试环境:
- CPU
- 单 GPU
- 单节点多 GPU
- 多节点
您还可以使用 Slurm 管理作业。
重要:
- 在 MOT 中,某些算法(如 `DeepSORT`、`SORT`、`StrongSORT`)需要分别加载 `reid` 的权重和 `detector` 的权重,
而其他算法(如 `ByteTrack`、`OCSORT`、`QDTrack`)则不需要。因此,我们提供了 `--checkpoint`、`--detector` 和 `--reid` 来加载权重。
- 我们提供了两种评估和测试模型的方法,即基于视频的测试和基于图像的测试。有些算法(如 `StrongSORT`、`Mask2former`)只支持基于视频的测试。如果您的 GPU 内存无法容纳整个视频,可以通过设置采样器类型来切换测试方式(示例见本节列表后的配置片段)。
例如:
基于视频的测试:`sampler=dict(type='DefaultSampler', shuffle=False, round_up=False)`
基于图像的测试:`sampler=dict(type='TrackImgSampler')`
- 您可以通过修改 evaluator 中的关键字 `outfile_prefix` 来设置结果保存路径,
例如,`val_evaluator = dict(outfile_prefix='results/sort_mot17')`。
否则,将创建一个临时文件,并在评估后删除。
- 如果您只想要格式化的结果而不需要评估,可以设置 `format_only=True`,
例如,`test_evaluator = dict(type='MOTChallengeMetric', metric=['HOTA', 'CLEAR', 'Identity'], outfile_prefix='sort_mot17_results', format_only=True)`
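下面是一个切换测试方式并设置结果保存路径的配置片段示意,字段名取自上文说明;它只是需要合并进所选算法完整配置文件中的片段,具体键名请以对应配置为准。
```python
# 基于视频的测试:以视频为单位采样
val_dataloader = dict(
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False))
# 若 GPU 内存无法容纳整个视频,可切换为基于图像的测试
# val_dataloader = dict(sampler=dict(type='TrackImgSampler'))

# 指定结果保存路径,避免评估后临时文件被删除
val_evaluator = dict(outfile_prefix='results/sort_mot17')
```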
#### 1. 在 CPU 上测试
模型默认在 cuda 设备上运行。
只有在没有 cuda 设备的情况下,模型才会在 CPU 上运行。
因此,如果要在 CPU 上测试模型,您需要 `export CUDA_VISIBLE_DEVICES=-1` 先禁用 GPU 可见性。
更多细节请参考[MMEngine](https://github.com/open-mmlab/mmengine/blob/ca282aee9e402104b644494ca491f73d93a9544f/mmengine/runner/runner.py#L849-L850).
```shell
CUDA_VISIBLE_DEVICES=-1 python tools/test_tracking.py ${CONFIG_FILE} [optional arguments]
```
在 CPU 上测试 MOT 模型 SORT 的示例:
```shell
CUDA_VISIBLE_DEVICES=-1 python tools/test_tracking.py configs/sort/sort_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py --detector ${CHECKPOINT_FILE}
```
#### 2. 在单 GPU 上测试
如果您想在单 GPU 上测试模型,可以直接使用 `tools/test_tracking.py`,如下所示。
```shell
python tools/test_tracking.py ${CONFIG_FILE} [optional arguments]
```
您可以使用 `export CUDA_VISIBLE_DEVICES=$GPU_ID` 来选择 GPU。
在单 GPU 上测试 MOT 模型 QDTrack 的示例:
```shell
CUDA_VISIBLE_DEVICES=2 python tools/test_tracking.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py --detector ${CHECKPOINT_FILE}
```
#### 3. 在单节点多 GPU 上进行测试
我们提供了 `tools/dist_test_tracking.sh`,用于在多个 GPU 上启动测试。
基本用法如下。
```shell
bash ./tools/dist_test_tracking.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
在单节点多 GPU 上测试 MOT 模型 DeepSort 的示例:
```shell
bash ./tools/dist_test_tracking.sh configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py 8 --detector ${CHECKPOINT_FILE} --reid ${CHECKPOINT_FILE}
```
#### 4. 在多个节点上测试
您可以在多个节点上进行测试,这与"在多个节点上进行训练"类似。
#### 5. 使用 Slurm 进行测试
在 Slurm 管理的集群上,您可以使用 `slurm_test_tracking.sh` 生成测试作业。
它支持单节点和多节点测试。
基本用法如下。
```shell
[GPUS=${GPUS}] bash tools/slurm_test_tracking.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} [optional arguments]
```
使用 Slurm 测试 VIS 模型 Mask2former 的示例:
```shell
GPUS=8 \
bash tools/slurm_test_tracking.sh \
mypartition \
vis \
configs/mask2former_vis/mask2former_r50_8xb2-8e_youtubevis2021.py \
--checkpoint ${CHECKPOINT_FILE}
```
# 了解可视化
## 本地的可视化
这一节将会展示如何使用本地的工具可视化 detection/tracking 的运行结果。
如果你想要绘制预测结果的图像,可以如下例所示,将 `TrackVisualizationHook` 的参数设置为 `draw=True`。
```python
default_hooks = dict(visualization=dict(type='TrackVisualizationHook', draw=True))
```
`TrackVisualizationHook` 共有如下参数:
- `draw`: 是否绘制预测结果。如果选择 False,将不会显示图像。该参数默认设置为 False。
- `interval`: 可视化的间隔。默认值为 30。
- `score_thr`: 确定是否可视化边界框和掩码的阈值。默认值是 0.3。
- `show`: 是否展示绘制的图像。默认不显示。
- `wait_time`: 展示的时间间隔(秒)。默认为 0。
- `test_out_dir`: 测试过程中绘制图像保存的目录。
- `backend_args`: 用于实例化文件客户端的参数。默认值为 `None`。
在 `TrackVisualizationHook` 中,将调用 `TrackLocalVisualizer` 来实现 MOT 和 VIS 任务的可视化,具体细节如下。
你可以通过 MMEngine 的 [Visualization](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/advanced_tutorials/visualization.md) 和 [Hook](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/hook.md) 文档获取更多细节。
### Tracking 的可视化
我们使用 `TrackLocalVisualizer` 这个类以实现跟踪任务可视化。调用方式如下:
```python
visualizer = dict(type='TrackLocalVisualizer')
```
visualizer 共有如下的参数:
- `name`: 所选实例的名称。默认值为 ‘visualizer’。
- `image`: 用于绘制的原始图像。格式需要为 RGB。默认为 None。
- `vis_backends`: 可视化后端配置列表。默认为 None。
- `save_dir`: 所有后端存储的保存文件目录。如果为 None,后端将不会保存任何数据。
- `line_width`: 边框宽度。默认值为 3。
- `alpha`: 边界框和掩码的透明度。默认为 0.8。
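结合上面 `TrackVisualizationHook` 与 `TrackLocalVisualizer` 的参数,一个同时配置两者的片段大致如下(示意,具体取值请按需调整):
```python
# 可视化钩子:开启绘制,每 30 帧绘制一次,分数阈值 0.3,结果保存到 test_out_dir
default_hooks = dict(
    visualization=dict(
        type='TrackVisualizationHook',
        draw=True,
        interval=30,
        score_thr=0.3,
        test_out_dir='vis_results'))

# 可视化器:加粗边框并调整透明度
visualizer = dict(type='TrackLocalVisualizer', line_width=5, alpha=0.8)
```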
这里提供了一个 DeepSORT 的可视化示例:
![test_img_89](https://user-images.githubusercontent.com/99722489/186062929-6d0e4663-0d8e-4045-9ec8-67e0e41da876.png)
# 在标准数据集上训练预定义的模型
MMDetection 也为训练检测模型提供了开箱即用的工具。本节将展示如何在标准数据集(比如 COCO)上训练一个预定义的模型。
### 数据集
训练需要准备好数据集,细节请参考 [数据集准备](#%E6%95%B0%E6%8D%AE%E9%9B%86%E5%87%86%E5%A4%87)
**注意**
目前,`configs/cityscapes` 文件夹下的配置文件都是使用 COCO 预训练权值进行初始化的。如果网络连接不可用或者速度很慢,你可以提前下载现存的模型。否则可能在训练的开始会有错误发生。
### 学习率自动缩放
**注意**:在配置文件中的学习率是在 8 块 GPU,每块 GPU 有 2 张图像(批大小为 8\*2=16)的情况下设置的。其已经设置在 `configs/_base_/schedules/schedule_1x.py` 中的 `auto_scale_lr.base_batch_size`。学习率会基于批次大小为 `16` 时的值进行自动缩放。同时,为了不影响其他基于 mmdet 的 codebase,自动缩放标志 `auto_scale_lr.enable` 默认设置为 `False`。
如果要启用此功能,需在命令中添加参数 `--auto-scale-lr`。并且在启动命令之前,请检查即将使用的配置文件的名称,因为配置文件名指示了默认的批处理大小。
在默认情况下,批次大小是 `8 x 2 = 16`,例如:`faster_rcnn_r50_caffe_fpn_90k_coco.py` 或者 `pisa_faster_rcnn_x101_32x4d_fpn_1x_coco.py`;若不是默认批次,你可以在配置文件名中看到像 `_NxM_` 这样的字样,例如:`cornernet_hourglass104_mstest_32x3_210e_coco.py` 的批次大小是 `32 x 3 = 96`,`scnet_x101_64x4d_fpn_8x1_20e_coco.py` 的批次大小是 `8 x 1 = 8`。
**请记住:如果使用的配置文件的默认批次大小不是 `16`,请检查配置文件的底部,会有 `auto_scale_lr.base_batch_size`;如果找不到,可以在其继承的 `_base_=[xxx]` 文件中找到。另外,如果想使用自动缩放学习率的功能,请不要修改这些值。**
学习率自动缩放基本用法如下:
```shell
python tools/train.py \
${CONFIG_FILE} \
--auto-scale-lr \
[optional arguments]
```
执行命令之后,会根据机器的 GPU 数量和训练的批次大小对学习率进行自动缩放,缩放方式详见 [线性扩展规则](https://arxiv.org/abs/1706.02677)。比如:在 4 块 GPU 并且每张 GPU 上有 2 张图片的情况下 `lr=0.01`,那么在 16 块 GPU 并且每张 GPU 上有 4 张图片的情况下,学习率会自动缩放至 `lr=0.08`。
如果不启用该功能,则需要根据 [线性扩展规则](https://arxiv.org/abs/1706.02677) 来手动计算并修改配置文件里面 `optimizer.lr` 的值。
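作为参考,`auto_scale_lr` 相关字段在配置文件中大致如下(示意;`base_batch_size` 的具体数值请以你所用配置继承的 `_base_` 文件为准):
```python
# 通常定义在 configs/_base_/schedules/schedule_1x.py 等文件中
# enable 默认为 False,运行时加上 --auto-scale-lr 即可开启
auto_scale_lr = dict(enable=False, base_batch_size=16)
```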
### 使用单 GPU 训练
我们提供了 `tools/train.py` 来开启在单张 GPU 上的训练任务。基本使用如下:
```shell
python tools/train.py \
${CONFIG_FILE} \
[optional arguments]
```
在训练期间,日志文件和 checkpoint 文件将会被保存在工作目录下,它需要通过配置文件中的 `work_dir` 或者 CLI 参数中的 `--work-dir` 来指定。
默认情况下,模型将在每轮训练之后在 validation 集上进行测试,测试的频率可以通过设置配置文件来指定:
```python
# 每 12 个 epoch 进行一次评估
train_cfg = dict(val_interval=12)
```
这个工具接受以下参数:
- `--work-dir ${WORK_DIR}`: 覆盖工作目录。
- `--resume`: 自动从 work_dir 中的最新检查点恢复。
- `--resume ${CHECKPOINT_FILE}`: 从某个 checkpoint 文件继续训练。
- `--cfg-options 'Key=value'`: 覆盖所用配置文件中的其他设置。
**注意**
`resume` 和 `load-from` 的区别:
`resume` 既加载模型权重和优化器状态,也会继承指定 checkpoint 的迭代次数,不会重新开始训练;`load-from` 则只加载模型权重,训练从头开始,常用于微调模型。其中 `load-from` 需要写在配置文件中,而 `resume` 作为命令行参数传入(示例见下)。
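下面用一个示意片段说明两者的用法差异(其中的检查点路径为假设值):
```python
# 微调:只加载权重,训练从头开始;load_from 写在配置文件中
load_from = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'  # 假设的检查点路径

# 断点续训:恢复权重、优化器状态与训练进度;resume 通过命令行传入,例如:
#   python tools/train.py ${CONFIG_FILE} --resume
#   python tools/train.py ${CONFIG_FILE} --resume ${CHECKPOINT_FILE}
```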
### 使用 CPU 训练
使用 CPU 训练的流程和使用单 GPU 训练的流程一致,我们仅需要在训练流程开始前禁用 GPU。
```shell
export CUDA_VISIBLE_DEVICES=-1
```
之后运行单 GPU 训练脚本即可。
**注意**
我们不推荐用户使用 CPU 进行训练,这太过缓慢。我们支持这个功能是为了方便用户在没有 GPU 的机器上进行调试。
### 在多 GPU 上训练
我们提供了 `tools/dist_train.sh` 来开启在多 GPU 上的训练。基本使用如下:
```shell
bash ./tools/dist_train.sh \
${CONFIG_FILE} \
${GPU_NUM} \
[optional arguments]
```
可选参数和单 GPU 训练的可选参数一致。
#### 同时启动多个任务
如果你想在一台机器上启动多个任务的话,比如在一个有 8 块 GPU 的机器上启动 2 个需要 4 块GPU的任务,你需要给不同的训练任务指定不同的端口(默认为 29500)来避免冲突。
如果你使用 `dist_train.sh` 来启动训练任务,你可以使用命令来设置端口。
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 ./tools/dist_train.sh ${CONFIG_FILE} 4
```
### 使用多台机器训练
如果您想使用由 ethernet 连接起来的多台机器, 您可以使用以下命令:
在第一台机器上:
```shell
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
在第二台机器上:
```shell
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
但是,如果您不使用高速网路连接这几台机器的话,训练将会非常慢。
### 使用 Slurm 来管理任务
Slurm 是一个常见的计算集群调度系统。在 Slurm 管理的集群上,你可以使用 `slurm_train.sh` 来开启训练任务。它既支持单节点训练也支持多节点训练。
基本使用如下:
```shell
[GPUS=${GPUS}] ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR}
```
以下是在一个名称为 _dev_ 的 Slurm 分区上,使用 16 块 GPU 来训练 Mask R-CNN 的例子,并且将 `work-dir` 设置在了某些共享文件系统下。
```shell
GPUS=16 ./tools/slurm_train.sh dev mask_r50_1x configs/mask_rcnn_r50_fpn_1x_coco.py /nfs/xxxx/mask_rcnn_r50_fpn_1x
```
你可以查看 [源码](https://github.com/open-mmlab/mmdetection/blob/main/tools/slurm_train.sh) 来检查全部的参数和环境变量。
在使用 Slurm 时,端口需要以下面的某种方法之一来设置。
1. 通过 `--cfg-options` 来设置端口。我们非常建议用这种方法,因为它无需改变原始的配置文件。
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR} --cfg-options 'dist_params.port=29500'
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR} --cfg-options 'dist_params.port=29501'
```
2. 修改配置文件来设置不同的交流端口。
`config1.py` 中,设置:
```python
dist_params = dict(backend='nccl', port=29500)
```
`config2.py` 中,设置:
```python
dist_params = dict(backend='nccl', port=29501)
```
然后你可以使用 `config1.py` 和 `config2.py` 来启动两个任务了。
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR}
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR}
```
# 在自定义数据集上进行训练
通过本文档,你将会知道如何使用自定义数据集对预先定义好的模型进行推理,测试以及训练。我们使用 [balloon dataset](https://github.com/matterport/Mask_RCNN/tree/master/samples/balloon) 作为例子来描述整个过程。
基本步骤如下:
1. 准备自定义数据集
2. 准备配置文件
3. 在自定义数据集上进行训练,测试和推理。
## 准备自定义数据集
MMDetection 共支持三种方式来使用新数据集:
1. 将数据集重新组织为 COCO 格式。
2. 将数据集重新组织为一个中间格式。
3. 实现一个新的数据集。
我们通常建议使用前面两种方法,因为它们通常来说比第三种方法要简单。
在本文档中,我们展示一个例子来说明如何将数据转化为 COCO 格式。
**注意**:在 MMDetection 3.0 之后,数据集和指标已经解耦(CityScapes 除外)。因此,用户可以在验证阶段使用任意的评价指标来评价模型在任意数据集上的性能。比如,用 VOC 评价指标来评价模型在 COCO 数据集上的性能,或者同时使用 VOC 评价指标和 COCO 评价指标来评价模型在 OpenImages 数据集上的性能。
### COCO标注格式
用于实例分割的 COCO 数据集格式如下所示,其中的键(key)都是必要的,参考[这里](https://cocodataset.org/#format-data)来获取更多细节。
```json
{
"images": [image],
"annotations": [annotation],
"categories": [category]
}
image = {
"id": int,
"width": int,
"height": int,
"file_name": str,
}
annotation = {
"id": int,
"image_id": int,
"category_id": int,
"segmentation": RLE or [polygon],
"area": float,
"bbox": [x,y,width,height], # (x, y) bbox 左上角的坐标
"iscrowd": 0 or 1,
}
categories = [{
"id": int,
"name": str,
"supercategory": str,
}]
```
现在假设我们使用 balloon dataset。
下载了数据集之后,我们需要实现一个函数将标注格式转化为 COCO 格式。然后我们就可以使用已经实现的 `CocoDataset` 类来加载数据并进行训练以及评测。
如果你浏览过新数据集,你会发现格式如下:
```json
{'base64_img_data': '',
'file_attributes': {},
'filename': '34020010494_e5cb88e1c4_k.jpg',
'fileref': '',
'regions': {'0': {'region_attributes': {},
'shape_attributes': {'all_points_x': [1020,
1000,
994,
1003,
1023,
1050,
1089,
1134,
1190,
1265,
1321,
1361,
1403,
1428,
1442,
1445,
1441,
1427,
1400,
1361,
1316,
1269,
1228,
1198,
1207,
1210,
1190,
1177,
1172,
1174,
1170,
1153,
1127,
1104,
1061,
1032,
1020],
'all_points_y': [963,
899,
841,
787,
738,
700,
663,
638,
621,
619,
643,
672,
720,
765,
800,
860,
896,
942,
990,
1035,
1079,
1112,
1129,
1134,
1144,
1153,
1166,
1166,
1150,
1136,
1129,
1122,
1112,
1084,
1037,
989,
963],
'name': 'polygon'}}},
'size': 1115004}
```
标注文件是 JSON 格式的,其中的每个键(key)对应一张图片的全部标注。
其中将 balloon dataset 转化为 COCO 格式的代码如下所示。
```python
import os.path as osp
import mmcv
from mmengine.fileio import dump, load
from mmengine.utils import track_iter_progress
def convert_balloon_to_coco(ann_file, out_file, image_prefix):
data_infos = load(ann_file)
annotations = []
images = []
obj_count = 0
for idx, v in enumerate(track_iter_progress(data_infos.values())):
filename = v['filename']
img_path = osp.join(image_prefix, filename)
height, width = mmcv.imread(img_path).shape[:2]
images.append(
dict(id=idx, file_name=filename, height=height, width=width))
for _, obj in v['regions'].items():
assert not obj['region_attributes']
obj = obj['shape_attributes']
px = obj['all_points_x']
py = obj['all_points_y']
poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
poly = [p for x in poly for p in x]
x_min, y_min, x_max, y_max = (min(px), min(py), max(px), max(py))
data_anno = dict(
image_id=idx,
id=obj_count,
category_id=0,
bbox=[x_min, y_min, x_max - x_min, y_max - y_min],
area=(x_max - x_min) * (y_max - y_min),
segmentation=[poly],
iscrowd=0)
annotations.append(data_anno)
obj_count += 1
coco_format_json = dict(
images=images,
annotations=annotations,
categories=[{
'id': 0,
'name': 'balloon'
}])
dump(coco_format_json, out_file)
if __name__ == '__main__':
convert_balloon_to_coco(ann_file='data/balloon/train/via_region_data.json',
out_file='data/balloon/train/annotation_coco.json',
image_prefix='data/balloon/train')
convert_balloon_to_coco(ann_file='data/balloon/val/via_region_data.json',
out_file='data/balloon/val/annotation_coco.json',
image_prefix='data/balloon/val')
```
使用如上的函数,用户可以成功将标注文件转化为 COCO 格式的 JSON 文件,之后可以使用 `CocoDataset` 对模型进行训练,并用 `CocoMetric` 评测。
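转换完成后,可以用如下示意代码做一个简单自检,确认生成的文件包含 COCO 格式要求的键(路径沿用上文示例):
```python
from mmengine.fileio import load

coco = load('data/balloon/train/annotation_coco.json')
# COCO 格式要求的三个顶层键
assert set(coco) >= {'images', 'annotations', 'categories'}
print(f"images: {len(coco['images'])}, "
      f"annotations: {len(coco['annotations'])}, "
      f"categories: {[c['name'] for c in coco['categories']]}")
```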
## 准备配置文件
第二步需要准备一个配置文件来成功加载数据集。假设我们想要用 balloon dataset 来训练配备了 FPN 的 Mask R-CNN ,如下是我们的配置文件。假设配置文件命名为 `mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py`,相应保存路径为 `configs/balloon/`,配置文件内容如下所示。详细的配置文件方法可以参考[学习配置文件 — MMDetection 3.0.0 文档](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/config.html#base)
```python
# 新配置继承了基本配置,并做了必要的修改
_base_ = '../mask_rcnn/mask-rcnn_r50-caffe_fpn_ms-poly-1x_coco.py'
# 我们还需要更改 head 中的 num_classes 以匹配数据集中的类别数
model = dict(
roi_head=dict(
bbox_head=dict(num_classes=1), mask_head=dict(num_classes=1)))
# 修改数据集相关配置
data_root = 'data/balloon/'
metainfo = {
'classes': ('balloon', ),
'palette': [
(220, 20, 60),
]
}
train_dataloader = dict(
batch_size=1,
dataset=dict(
data_root=data_root,
metainfo=metainfo,
ann_file='train/annotation_coco.json',
data_prefix=dict(img='train/')))
val_dataloader = dict(
dataset=dict(
data_root=data_root,
metainfo=metainfo,
ann_file='val/annotation_coco.json',
data_prefix=dict(img='val/')))
test_dataloader = val_dataloader
# 修改评价指标相关配置
val_evaluator = dict(ann_file=data_root + 'val/annotation_coco.json')
test_evaluator = val_evaluator
# 使用预训练的 Mask R-CNN 模型权重来做初始化,可以提高模型性能
load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth'
```
## 训练一个新的模型
为了使用新的配置方法来对模型进行训练,你只需要运行如下命令。
```shell
python tools/train.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py
```
参考 [在标准数据集上训练预定义的模型](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/train.html#id1) 来获取更多详细的使用方法。
## 测试以及推理
为了测试训练完毕的模型,你只需要运行如下命令。
```shell
python tools/test.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py work_dirs/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon/epoch_12.pth
```
参考 [测试现有模型](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/test.html) 来获取更多详细的使用方法。
# 实用的钩子
MMDetection 和 MMEngine 为用户提供了多种多样实用的钩子(Hook),包括 `MemoryProfilerHook`、`NumClassCheckHook` 等等。
这篇教程介绍了 MMDetection 中实现的钩子的功能及使用方式。若要使用 MMEngine 中定义的钩子,请参考 [MMEngine 的钩子 API 文档](https://github.com/open-mmlab/mmengine/tree/main/docs/en/tutorials/hook.md)。
## CheckInvalidLossHook
## NumClassCheckHook
## MemoryProfilerHook
[内存分析钩子](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/memory_profiler_hook.py)
记录了包括虚拟内存、交换内存、当前进程在内的所有内存信息,它能够帮助捕捉系统的使用状况与发现隐藏的内存泄露问题。为了使用这个钩子,你需要先通过 `pip install memory_profiler psutil` 命令安装 `memory_profiler` 和 `psutil`。
### 使用
为了使用这个钩子,使用者需要添加如下代码至 config 文件
```python
custom_hooks = [
dict(type='MemoryProfilerHook', interval=50)
]
```
### 结果
在训练中,你会看到 `MemoryProfilerHook` 记录的如下信息:
```text
The system has 250 GB (246360 MB + 9407 MB) of memory and 8 GB (5740 MB + 2452 MB) of swap memory in total. Currently 9407 MB (4.4%) of memory and 5740 MB (29.9%) of swap memory were consumed. And the current training process consumed 5434 MB of memory.
```
```text
2022-04-21 08:49:56,881 - mmengine - INFO - Memory information available_memory: 246360 MB, used_memory: 9407 MB, memory_utilization: 4.4 %, available_swap_memory: 5740 MB, used_swap_memory: 2452 MB, swap_memory_utilization: 29.9 %, current_process_memory: 5434 MB
```
## SetEpochInfoHook
## SyncNormHook
## SyncRandomSizeHook
## YOLOXLrUpdaterHook
## YOLOXModeSwitchHook
## 如何实现自定义钩子
通常,从模型训练的开始到结束,共有20个点位可以执行钩子。我们可以实现自定义钩子在不同点位执行,以便在训练中实现自定义操作。
- global points: `before_run`, `after_run`
- points in training: `before_train`, `before_train_epoch`, `before_train_iter`, `after_train_iter`, `after_train_epoch`, `after_train`
- points in validation: `before_val`, `before_val_epoch`, `before_val_iter`, `after_val_iter`, `after_val_epoch`, `after_val`
- points at testing: `before_test`, `before_test_epoch`, `before_test_iter`, `after_test_iter`, `after_test_epoch`, `after_test`
- other points: `before_save_checkpoint`, `after_save_checkpoint`
比如,我们要实现一个检查 loss 的钩子,当损失为 NaN 时自动结束训练。我们可以把这个过程分为三步:
1. 实现一个继承自 MMEngine 中 `Hook` 类的新钩子,并实现 `after_train_iter` 方法,用于检查每 `n` 次训练迭代后损失是否变为 NaN。
2. 使用 `@HOOKS.register_module()` 注册实现好的自定义钩子,如下列代码所示。
3. 在配置文件中添加 `custom_hooks = [dict(type='CheckInvalidLossHook', interval=50)]`。
```python
from typing import Optional
import torch
from mmengine.hooks import Hook
from mmengine.runner import Runner
from mmdet.registry import HOOKS
@HOOKS.register_module()
class CheckInvalidLossHook(Hook):
"""Check invalid loss hook.
This hook will regularly check whether the loss is valid
during training.
Args:
interval (int): Checking interval (every k iterations).
Default: 50.
"""
def __init__(self, interval: int = 50) -> None:
self.interval = interval
def after_train_iter(self,
runner: Runner,
batch_idx: int,
data_batch: Optional[dict] = None,
outputs: Optional[dict] = None) -> None:
"""Regularly check whether the loss is valid every n iterations.
Args:
runner (:obj:`Runner`): The runner of the training process.
batch_idx (int): The index of the current batch in the train loop.
data_batch (dict, Optional): Data from dataloader.
Defaults to None.
outputs (dict, Optional): Outputs from model. Defaults to None.
"""
if self.every_n_train_iters(runner, self.interval):
assert torch.isfinite(outputs['loss']), \
runner.logger.info('loss become infinite or NaN!')
```
请参考 [自定义训练配置](../advanced_guides/customize_runtime.md) 了解更多与自定义钩子相关的内容。
# 实用工具
除了训练和测试脚本,我们还在 `tools/` 目录下提供了许多有用的工具。
## 日志分析
`tools/analysis_tools/analyze_logs.py` 可利用指定的训练 log 文件绘制 loss/mAP 曲线图,
第一次运行前请先运行 `pip install seaborn` 安装必要依赖。
```shell
python tools/analysis_tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--eval-interval ${EVALUATION_INTERVAL}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```
![loss curve image](../../../resources/loss_curve.png)
样例:
- 绘制分类损失曲线图
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
```
- 绘制分类损失、回归损失曲线图,保存图片为对应的 pdf 文件
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
```
- 在相同图像中比较两次运行结果的 bbox mAP
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log1.json log2.json --keys bbox_mAP --legend run1 run2
```
- 计算平均训练速度
```shell
python tools/analysis_tools/analyze_logs.py cal_train_time log.json [--include-outliers]
```
输出以如下形式展示
```text
-----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
slowest epoch 11, average time is 1.2024
fastest epoch 1, average time is 1.1909
time std over epochs is 0.0028
average iter time: 1.1959 s/iter
```
## 结果分析
使用 `tools/analysis_tools/analyze_results.py` 可计算每个图像 mAP,随后根据真实标注框与预测框的比较结果,展示或保存最高与最低 top-k 得分的预测图像。
**使用方法**
```shell
python tools/analysis_tools/analyze_results.py \
${CONFIG} \
${PREDICTION_PATH} \
${SHOW_DIR} \
[--show] \
[--wait-time ${WAIT_TIME}] \
[--topk ${TOPK}] \
[--show-score-thr ${SHOW_SCORE_THR}] \
[--cfg-options ${CFG_OPTIONS}]
```
各个参数选项的作用:
- `config`: model config 文件的路径。
- `prediction_path`: 使用 `tools/test.py` 输出的 pickle 格式结果文件。
- `show_dir`: 绘制真实标注框与预测框的图像存放目录。
- `--show`:决定是否展示绘制 box 后的图片,默认值为 `False`
- `--wait-time`: show 时间的间隔,若为 0 表示持续显示。
- `--topk`: 根据最高或最低 `topk` 概率排序保存的图片数量,若不指定,默认设置为 `20`
- `--show-score-thr`: 能够展示的概率阈值,默认为 `0`
- `--cfg-options`: 如果指定,可根据指定键值对覆盖更新配置文件的对应选项
**样例**:
假设你已经通过 `tools/test.py` 得到了 pickle 格式的结果文件,路径为 './result.pkl'。
1. 测试 Faster R-CNN 并可视化结果,保存图片至 `results/`
```shell
python tools/analysis_tools/analyze_results.py \
configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
result.pkl \
results \
--show
```
2. 测试 Faster R-CNN 并指定 top-k 参数为 50,保存结果图片至 `results/`
```shell
python tools/analysis_tools/analyze_results.py \
configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
result.pkl \
results \
--topk 50
```
3. 如果你想过滤低概率的预测结果,指定 `show-score-thr` 参数
```shell
python tools/analysis_tools/analyze_results.py \
configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
result.pkl \
results \
--show-score-thr 0.3
```
## 多模型检测结果融合
`tools/analysis_tools/fuse_results.py` 可使用 Weighted Boxes Fusion(WBF) 方法将多个模型的检测结果进行融合。(当前仅支持 COCO 格式)
**使用方法**
```shell
python tools/analysis_tools/fuse_results.py \
${PRED_RESULTS} \
[--annotation ${ANNOTATION}] \
[--weights ${WEIGHTS}] \
[--fusion-iou-thr ${FUSION_IOU_THR}] \
[--skip-box-thr ${SKIP_BOX_THR}] \
[--conf-type ${CONF_TYPE}] \
[--eval-single ${EVAL_SINGLE}] \
[--save-fusion-results ${SAVE_FUSION_RESULTS}] \
[--out-dir ${OUT_DIR}]
```
各个参数选项的作用:
- `pred-results`: 多模型测试结果的保存路径。(目前仅支持 json 格式)
- `--annotation`: 真实标注框的保存路径。
- `--weights`: 模型融合权重。默认设置下,每个模型的权重均为1。
- `--fusion-iou-thr`: 在WBF算法中,匹配成功的 IoU 阈值,默认值为`0.55`
- `--skip-box-thr`: WBF算法中需剔除的置信度阈值,置信度小于该值的 bbox 会被剔除,默认值为`0`
- `--conf-type`: 如何计算融合后 bbox 的置信度。有以下四种选项:
- `avg`: 取平均值,默认为此选项。
- `max`: 取最大值。
- `box_and_model_avg`: box和模型尺度的加权平均值。
- `absent_model_aware_avg`: 考虑缺失模型的加权平均值。
- `--eval-single`: 是否评估每个单一模型,默认值为`False`
- `--save-fusion-results`: 是否保存融合结果,默认值为`False`
- `--out-dir`: 融合结果保存的路径。
**样例**:
假设你已经通过 `tools/test.py` 得到了3个模型的 json 格式的结果文件,路径分别为 './faster-rcnn_r50-caffe_fpn_1x_coco.json', './retinanet_r50-caffe_fpn_1x_coco.json', './cascade-rcnn_r50-caffe_fpn_1x_coco.json',真实标注框的文件路径为'./annotation.json'。
1. 融合三个模型的预测结果并评估其效果
```shell
python tools/analysis_tools/fuse_results.py \
./faster-rcnn_r50-caffe_fpn_1x_coco.json \
./retinanet_r50-caffe_fpn_1x_coco.json \
./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
--annotation ./annotation.json \
--weights 1 2 3
```
2. 同时评估每个单一模型与融合结果
```shell
python tools/analysis_tools/fuse_results.py \
./faster-rcnn_r50-caffe_fpn_1x_coco.json \
./retinanet_r50-caffe_fpn_1x_coco.json \
./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
--annotation ./annotation.json \
--weights 1 2 3 \
--eval-single
```
3. 融合三个模型的预测结果并保存
```shell
python tools/analysis_tools/fuse_results.py \
./faster-rcnn_r50-caffe_fpn_1x_coco.json \
./retinanet_r50-caffe_fpn_1x_coco.json \
./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
--annotation ./annotation.json \
--weights 1 2 3 \
--save-fusion-results \
--out-dir outputs/fusion
```
## 可视化
### 可视化数据集
`tools/analysis_tools/browse_dataset.py` 可帮助使用者检查所使用的检测数据集(包括图像和标注),或保存图像至指定目录。
```shell
python tools/analysis_tools/browse_dataset.py ${CONFIG} [-h] [--skip-type ${SKIP_TYPE[SKIP_TYPE...]}] [--output-dir ${OUTPUT_DIR}] [--not-show] [--show-interval ${SHOW_INTERVAL}]
```
### 可视化模型
在可视化之前,需要先将模型转换至 ONNX 格式,[可参考此处](#convert-mmdetection-model-to-onnx-experimental)。
注意,现在只支持 RetinaNet,之后的版本将会支持其他模型。
转换后的模型可以用 [Netron](https://github.com/lutzroeder/netron) 等工具进行可视化。
### 可视化预测结果
如果你想要一个轻量 GUI 可视化检测结果,你可以参考 [DetVisGUI project](https://github.com/Chien-Hung/DetVisGUI/tree/mmdetection)
## 误差分析
`tools/analysis_tools/coco_error_analysis.py` 使用不同标准分析每个类别的 COCO 评估结果。同时将一些有帮助的信息体现在图表上。
```shell
python tools/analysis_tools/coco_error_analysis.py ${RESULT} ${OUT_DIR} [-h] [--ann ${ANN}] [--types ${TYPES[TYPES...]}]
```
样例:
假设你已经把 [Mask R-CNN checkpoint file](https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth) 放置在文件夹 'checkpoint' 中(其他模型请在 [model zoo](./model_zoo.md) 中获取)。
为了保存 bbox 结果信息,我们需要用下列方式修改 `test_evaluator` :
1. 查找当前 config 文件相对应的 `configs/_base_/datasets` 数据集信息。
2. 用当前数据集 config 中的 test_evaluator 以及 test_dataloader 替换原始文件的 test_evaluator 以及 test_dataloader(替换后的示意片段见下方命令之后)。
3. 使用以下命令得到 bbox 或 segmentation 的 json 格式文件。
```shell
python tools/test.py \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoint/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth
```
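步骤 2 中替换后的 `test_evaluator` 大致形如下面的片段(仅为示意,`ann_file` 与 `outfile_prefix` 均为假设的路径,请按实际数据集配置调整):
```python
test_evaluator = dict(
    type='CocoMetric',
    ann_file='data/coco/annotations/instances_val2017.json',
    metric=['bbox', 'segm'],
    format_only=True,  # 只导出 json 结果,不直接计算指标
    outfile_prefix='./results')  # 将生成 results.bbox.json / results.segm.json
```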
1. 得到每一类的 COCO bbox 误差结果,并保存分析结果图像至指定目录。(在 [config](../../../configs/_base_/datasets/coco_instance.py) 中默认目录是 './work_dirs/coco_instance/test')
```shell
python tools/analysis_tools/coco_error_analysis.py \
results.bbox.json \
results \
--ann=data/coco/annotations/instances_val2017.json
```
2. 得到每一类的 COCO 分割误差结果,并保存分析结果图像至指定目录。
```shell
python tools/analysis_tools/coco_error_analysis.py \
results.segm.json \
results \
--ann=data/coco/annotations/instances_val2017.json \
--types='segm'
```
## 模型服务部署
如果你想使用 [`TorchServe`](https://pytorch.org/serve/) 搭建一个 `MMDetection` 模型服务,可以参考以下步骤:
### 1. 安装 TorchServe
假设你已经成功安装了包含 `PyTorch` 和 `MMDetection` 的 `Python` 环境,那么你可以运行以下命令来安装 `TorchServe` 及其依赖项。有关更多其他安装选项,请参考[快速入门](https://github.com/pytorch/serve/blob/master/README.md#serve-a-model)。
```shell
python -m pip install torchserve torch-model-archiver torch-workflow-archiver nvgpu
```
**注意**: 如果你想在 docker 中使用`TorchServe`,请参考[torchserve docker](https://github.com/pytorch/serve/blob/master/docker/README.md)
### 2. 把 MMDetection 模型转换至 TorchServe
```shell
python tools/deployment/mmdet2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
--output-folder ${MODEL_STORE} \
--model-name ${MODEL_NAME}
```
### 3. 启动 `TorchServe`
```shell
torchserve --start --ncs \
--model-store ${MODEL_STORE} \
--models ${MODEL_NAME}.mar
```
### 4. 测试部署效果
```shell
curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/3dogs.jpg
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T 3dogs.jpg
```
你可以得到下列 json 信息:
```json
[
{
"class_label": 16,
"class_name": "dog",
"bbox": [
294.63409423828125,
203.99111938476562,
417.048583984375,
281.62744140625
],
"score": 0.9987992644309998
},
{
"class_label": 16,
"class_name": "dog",
"bbox": [
404.26019287109375,
126.0080795288086,
574.5091552734375,
293.6662292480469
],
"score": 0.9979367256164551
},
{
"class_label": 16,
"class_name": "dog",
"bbox": [
197.2144775390625,
93.3067855834961,
307.8505554199219,
276.7560119628906
],
"score": 0.993338406085968
}
]
```
#### 结果对比
你也可以使用 `test_torchserver.py` 来比较 `TorchServe` 和 `PyTorch` 的结果,并进行可视化:
```shell
python tools/deployment/test_torchserver.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME}
[--inference-addr ${INFERENCE_ADDR}] [--device ${DEVICE}] [--score-thr ${SCORE_THR}] [--work-dir ${WORK_DIR}]
```
样例:
```shell
python tools/deployment/test_torchserver.py \
demo/demo.jpg \
configs/yolo/yolov3_d53_8xb8-320-273e_coco.py \
checkpoint/yolov3_d53_320_273e_coco-421362b6.pth \
yolov3 \
--work-dir ./work-dir
```
### 5. 停止 `TorchServe`
```shell
torchserve --stop
```
## 模型复杂度
`tools/analysis_tools/get_flops.py` 工具可用于计算指定模型的 FLOPs、参数量大小(改编自 [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) )。
```shell
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```
获得的结果如下:
```text
==============================
Input shape: (3, 1280, 800)
Flops: 239.32 GFLOPs
Params: 37.74 M
==============================
```
**注意**:这个工具还只是实验性质,我们不保证这个数值是绝对正确的。你可以将它用于简单的比较,但如果用于科技论文报告则需要再三检查确认(若想在自己的脚本中估算,可参考本列表后的示意代码)。
1. FLOPs 与输入的形状大小相关,而参数量与输入形状无关,默认的输入形状大小为 (1, 3, 1280, 800)。
2. 一些算子并不计入 FLOPs,比如 GN 或其他自定义的算子。你可以参考 [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/2.x/mmcv/cnn/utils/flops_counter.py) 查看更详细的说明。
3. 两阶段检测的 FLOPs 大小取决于 proposal 的数量。
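如果想在自己的脚本中粗略估算模型复杂度,可以参考下面的示意用法(基于上文提到的 `mmcv.cnn.get_model_complexity_info()`,模型换成任意 `nn.Module` 即可;接口位置与细节请以所安装的 mmcv 版本为准):
```python
import torch.nn as nn
from mmcv.cnn import get_model_complexity_info

# 一个极简的示例网络,仅用于演示调用方式
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 10))
# input_shape 不含 batch 维度;FLOPs 与输入形状相关,参数量与输入形状无关
flops, params = get_model_complexity_info(model, (3, 1280, 800))
print(f'Flops: {flops}, Params: {params}')
```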
## 模型转换
### MMDetection 模型转换至 ONNX 格式
我们提供了一个脚本用于转换模型至 [ONNX](https://github.com/onnx/onnx) 格式。同时还支持比较 Pytorch 与 ONNX 模型的输出结果以便对照。更详细的内容可以参考 [mmdeploy](https://github.com/open-mmlab/mmdeploy)。
### MMDetection 1.x 模型转换至 MMDetection 2.x 模型
`tools/model_converters/upgrade_model_version.py` 可将旧版本的 MMDetection checkpoints 转换至新版本。但要注意此脚本不保证在新版本加入非兼容更新后还能正常转换,建议您直接使用新版本的 checkpoints。
```shell
python tools/model_converters/upgrade_model_version.py ${IN_FILE} ${OUT_FILE} [-h] [--num-classes NUM_CLASSES]
```
### RegNet 模型转换至 MMDetection 模型
`tools/model_converters/regnet2mmdet.py` 将 pycls 编码的预训练 RegNet 模型转换为 MMDetection 风格。
```shell
python tools/model_converters/regnet2mmdet.py ${SRC} ${DST} [-h]
```
### Detectron ResNet 模型转换至 Pytorch 模型
`tools/model_converters/detectron2pytorch.py` 将 detectron 的原始预训练 ResNet 模型转换为 MMDetection 风格。
```shell
python tools/model_converters/detectron2pytorch.py ${SRC} ${DST} ${DEPTH} [-h]
```
### 制作发布用模型
`tools/model_converters/publish_model.py` 可用来制作一个发布用的模型。
在发布模型至 AWS 之前,你可能需要:
1. 将模型转换至 CPU 张量
2. 删除优化器状态
3. 计算 checkpoint 文件的 hash 值,并将 hash 号码记录至文件名。
```shell
python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```
样例:
```shell
python tools/model_converters/publish_model.py work_dirs/faster_rcnn/latest.pth faster_rcnn_r50_fpn_1x_20190801.pth
```
最后输出的文件名如下所示: `faster_rcnn_r50_fpn_1x_20190801-{hash id}.pth`.
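这个脚本所做的处理大致等价于下面的示意代码(并非脚本原文,仅演示"删除优化器状态并用文件哈希前 8 位重命名"的思路,路径沿用上文样例):
```python
import hashlib
import os

import torch

ckpt = torch.load('work_dirs/faster_rcnn/latest.pth', map_location='cpu')
ckpt.pop('optimizer', None)  # 删除优化器状态,减小文件体积
torch.save(ckpt, 'faster_rcnn_r50_fpn_1x_20190801.pth')

# 计算文件哈希,取前 8 位追加到文件名中
with open('faster_rcnn_r50_fpn_1x_20190801.pth', 'rb') as f:
    sha = hashlib.sha256(f.read()).hexdigest()[:8]
os.replace('faster_rcnn_r50_fpn_1x_20190801.pth',
           f'faster_rcnn_r50_fpn_1x_20190801-{sha}.pth')
```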
## 数据集转换
`tools/dataset_converters/` 提供了将 Cityscapes 数据集与 Pascal VOC 数据集转换至 COCO 数据集格式的工具。
```shell
python tools/dataset_converters/cityscapes.py ${CITYSCAPES_PATH} [-h] [--img-dir ${IMG_DIR}] [--gt-dir ${GT_DIR}] [-o ${OUT_DIR}] [--nproc ${NPROC}]
python tools/dataset_converters/pascal_voc.py ${DEVKIT_PATH} [-h] [-o ${OUT_DIR}]
```
## 数据集下载
`tools/misc/download_dataset.py` 可以下载 COCO、VOC、LVIS 等数据集。
```shell
python tools/misc/download_dataset.py --dataset-name coco2017
python tools/misc/download_dataset.py --dataset-name voc2007
python tools/misc/download_dataset.py --dataset-name lvis
```
对于中国境内的用户,我们也推荐使用开源数据平台 [OpenDataLab](https://opendatalab.com/?source=OpenMMLab%20GitHub) 来获取这些数据集,以获得更好的下载体验:
- [COCO2017](https://opendatalab.com/COCO_2017/download?source=OpenMMLab%20GitHub)
- [VOC2007](https://opendatalab.com/PASCAL_VOC2007/download?source=OpenMMLab%20GitHub)
- [VOC2012](https://opendatalab.com/PASCAL_VOC2012/download?source=OpenMMLab%20GitHub)
- [LVIS](https://opendatalab.com/LVIS/download?source=OpenMMLab%20GitHub)
## 基准测试
### 鲁棒性测试基准
`tools/analysis_tools/test_robustness.py` 及 `tools/analysis_tools/robustness_eval.py` 帮助使用者衡量模型的鲁棒性。其核心思想来源于 [Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming](https://arxiv.org/abs/1907.07484)。如果你想了解如何在污损图像上评估模型的效果,以及参考该基准的一组标准模型,请参照 [robustness_benchmarking.md](robustness_benchmarking.md)。
### FPS 测试基准
`tools/analysis_tools/benchmark.py` 可帮助使用者计算 FPS,FPS 计算包括了模型前向传播与后处理过程。为了得到更精确的计算值,目前的分布式计算模式只支持单 GPU。
```shell
python -m torch.distributed.launch --nproc_per_node=1 --master_port=${PORT} tools/analysis_tools/benchmark.py \
${CONFIG} \
[--checkpoint ${CHECKPOINT}] \
[--repeat-num ${REPEAT_NUM}] \
[--max-iter ${MAX_ITER}] \
[--log-interval ${LOG_INTERVAL}] \
--launcher pytorch
```
样例:假设你已经下载了 `Faster R-CNN` 模型 checkpoint 并放置在 `checkpoints/` 目录下。
```shell
python -m torch.distributed.launch --nproc_per_node=1 --master_port=29500 tools/analysis_tools/benchmark.py \
configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
--launcher pytorch
```
## 更多工具
### 以某个评估标准进行评估
`tools/analysis_tools/eval_metric.py` 根据配置文件中的评估方式对 pkl 结果文件进行评估。
```shell
python tools/analysis_tools/eval_metric.py ${CONFIG} ${PKL_RESULTS} [-h] [--format-only] [--eval ${EVAL[EVAL ...]}]
[--cfg-options ${CFG_OPTIONS [CFG_OPTIONS ...]}]
[--eval-options ${EVAL_OPTIONS [EVAL_OPTIONS ...]}]
```
### 打印全部 config
`tools/misc/print_config.py` 可将所有配置继承关系展开,完全打印相应的配置文件。
```shell
python tools/misc/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]
```
## 超参数优化
### YOLO Anchor 优化
`tools/analysis_tools/optimize_anchors.py` 提供了两种方法优化 YOLO 的 anchors。
其中一种方法使用 K 均值 anchor 聚类(k-means anchor cluster),源自 [darknet](https://github.com/AlexeyAB/darknet/blob/master/src/detector.c#L1421)。
```shell
python tools/analysis_tools/optimize_anchors.py ${CONFIG} --algorithm k-means --input-shape ${INPUT_SHAPE [WIDTH HEIGHT]} --output-dir ${OUTPUT_DIR}
```
另一种方法使用差分进化算法优化 anchors。
```shell
python tools/analysis_tools/optimize_anchors.py ${CONFIG} --algorithm differential_evolution --input-shape ${INPUT_SHAPE [WIDTH HEIGHT]} --output-dir ${OUTPUT_DIR}
```
样例:
```shell
python tools/analysis_tools/optimize_anchors.py configs/yolo/yolov3_d53_8xb8-320-273e_coco.py --algorithm differential_evolution --input-shape 608 608 --device cuda --output-dir work_dirs
```
你可能会看到如下结果:
```text
loading annotations into memory...
Done (t=9.70s)
creating index...
index created!
2021-07-19 19:37:20,951 - mmdet - INFO - Collecting bboxes from annotation...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 117266/117266, 15874.5 task/s, elapsed: 7s, ETA: 0s
2021-07-19 19:37:28,753 - mmdet - INFO - Collected 849902 bboxes.
differential_evolution step 1: f(x)= 0.506055
differential_evolution step 2: f(x)= 0.506055
......
differential_evolution step 489: f(x)= 0.386625
2021-07-19 19:46:40,775 - mmdet - INFO Anchor evolution finish. Average IOU: 0.6133754253387451
2021-07-19 19:46:40,776 - mmdet - INFO Anchor differential evolution result:[[10, 12], [15, 30], [32, 22], [29, 59], [61, 46], [57, 116], [112, 89], [154, 198], [349, 336]]
2021-07-19 19:46:40,798 - mmdet - INFO Result saved in work_dirs/anchor_optimize_result.json
```
## 混淆矩阵
混淆矩阵是对检测结果的概览。
`tools/analysis_tools/confusion_matrix.py` 可对预测结果进行分析,绘制成混淆矩阵表。
首先,运行 `tools/test.py` 保存 `.pkl` 预测结果。
之后再运行:
```shell
python tools/analysis_tools/confusion_matrix.py ${CONFIG} ${DETECTION_RESULTS} ${SAVE_DIR} --show
```
最后你可以得到如图的混淆矩阵:
![confusion_matrix_example](https://user-images.githubusercontent.com/12907710/140513068-994cdbf4-3a4a-48f0-8fd8-2830d93fd963.png)
## COCO 分离和遮挡实例分割性能评估
对于最先进的目标检测器来说,检测被遮挡的物体仍然是一个挑战。
我们实现了论文 [A Tri-Layer Plugin to Improve Occluded Detection](https://arxiv.org/abs/2210.10046) 中提出的指标来计算分离和遮挡目标的召回率。
使用此评价指标有两种方法:
### 离线评测
我们提供了一个脚本对存储后的检测结果文件计算指标。
首先,使用 `tools/test.py` 脚本存储检测结果:
```shell
python tools/test.py ${CONFIG} ${MODEL_PATH} --out results.pkl
```
然后,运行 `tools/analysis_tools/coco_occluded_separated_recall.py` 脚本来计算分离和遮挡目标的掩码的召回率:
```shell
python tools/analysis_tools/coco_occluded_separated_recall.py results.pkl --out occluded_separated_recall.json
```
输出如下:
```text
loading annotations into memory...
Done (t=0.51s)
creating index...
index created!
processing detection results...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5000/5000, 109.3 task/s, elapsed: 46s, ETA: 0s
computing occluded mask recall...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5550/5550, 780.5 task/s, elapsed: 7s, ETA: 0s
COCO occluded mask recall: 58.79%
COCO occluded mask success num: 3263
computing separated mask recall...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3522/3522, 778.3 task/s, elapsed: 5s, ETA: 0s
COCO separated mask recall: 31.94%
COCO separated mask success num: 1125
+-----------+--------+-------------+
| mask type | recall | num correct |
+-----------+--------+-------------+
| occluded | 58.79% | 3263 |
| separated | 31.94% | 1125 |
+-----------+--------+-------------+
Evaluation results have been saved to occluded_separated_recall.json.
```
### 在线评测
我们实现了继承自 `CocoMetric` 的 `CocoOccludedSeparatedMetric`。
要在训练期间评估分离和遮挡掩码的召回率,只需在配置中将 evaluator 类型替换为 `CocoOccludedSeparatedMetric`:
```python
val_evaluator = dict(
type='CocoOccludedSeparatedMetric', # 修改此处
ann_file=data_root + 'annotations/instances_val2017.json',
metric=['bbox', 'segm'],
format_only=False)
test_evaluator = val_evaluator
```
如果您使用了此指标,请引用论文:
```latex
@article{zhan2022triocc,
title={A Tri-Layer Plugin to Improve Occluded Detection},
author={Zhan, Guanqi and Xie, Weidi and Zisserman, Andrew},
journal={British Machine Vision Conference},
year={2022}
}
```
# 可视化
在阅读本教程之前,建议先阅读 MMEngine 的 [Visualization](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/visualization.md) 文档,以对 `Visualizer` 的定义和用法有一个初步的了解。
简而言之,`Visualizer` 在 MMEngine 中实现以满足日常可视化需求,并包含以下三个主要功能:
- 实现通用的绘图 API,例如 [`draw_bboxes`](mmengine.visualization.Visualizer.draw_bboxes) 实现了绘制边界框的功能,[`draw_lines`](mmengine.visualization.Visualizer.draw_lines) 实现了绘制线条的功能。
- 支持将可视化结果、学习率曲线、损失函数曲线以及验证精度曲线写入到各种后端中,包括本地磁盘以及常见的深度学习训练日志工具,例如 [TensorBoard](https://www.tensorflow.org/tensorboard)[Wandb](https://wandb.ai/site)
- 支持在代码的任何位置调用以可视化或记录模型在训练或测试期间的中间状态,例如特征图和验证结果。
基于 MMEngine 的 `Visualizer`,MMDet 提供了各种预构建的可视化工具,用户可以通过简单地修改以下配置文件来使用它们。
- `tools/analysis_tools/browse_dataset.py` 脚本提供了一个数据集可视化功能,可以在数据经过数据转换后绘制图像和相应的注释,具体描述请参见[`browse_dataset.py`](useful_tools.md#Visualization)
- MMEngine 实现了 `LoggerHook`,它使用 `Visualizer` 将学习率、损失和评估结果写入由 `Visualizer` 设置的后端。因此,通过修改配置文件中的 `Visualizer` 后端,例如修改为 `TensorBoardVISBackend` 或 `WandbVISBackend`,可以把日志记录到 `TensorBoard`、`WandB` 等常用的训练日志工具中,从而方便用户使用这些可视化工具来分析和监控训练过程。
- 在 MMDet 中实现了 `VisualizerHook`,它使用 `Visualizer` 将验证或预测阶段的预测结果可视化或存储到由 `Visualizer` 设置的后端。因此,通过修改配置文件中的 `Visualizer` 后端,例如修改为 `TensorBoardVISBackend` 或 `WandbVISBackend`,可以将预测图像存储到 `TensorBoard` 或 `Wandb` 中。
## 配置
由于使用了注册机制,在MMDet中我们可以通过修改配置文件来设置`Visualizer`的行为。通常,我们会在`configs/_base_/default_runtime.py`中为可视化器定义默认配置,详细信息请参见[配置教程](config.md)
```Python
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
type='DetLocalVisualizer',
vis_backends=vis_backends,
name='visualizer')
```
基于上面的例子,我们可以看到`Visualizer`的配置由两个主要部分组成,即`Visualizer`类型和其使用的可视化后端`vis_backends`
- 用户可直接使用`DetLocalVisualizer`来可视化支持任务的标签或预测结果。
- MMDet默认将可视化后端`vis_backend`设置为本地可视化后端`LocalVisBackend`,将所有可视化结果和其他训练信息保存在本地文件夹中。
## 存储
MMDet 默认使用本地可视化后端 [`LocalVisBackend`](mmengine.visualization.LocalVisBackend),`VisualizerHook` 和 `LoggerHook` 中存储的模型损失、学习率、模型评估精度以及可视化信息默认保存到 `{work_dir}/{config_name}/{time}/{vis_data}` 文件夹中。此外,MMDet 还支持其他常见的可视化后端,例如 `TensorboardVisBackend` 和 `WandbVisBackend`,您只需要在配置文件中将 `vis_backends` 类型更改为相应的可视化后端即可。例如,只需在配置文件中插入以下代码块,即可将数据存储到 `TensorBoard` 和 `Wandb` 中。
```Python
# https://mmengine.readthedocs.io/en/latest/api/visualization.html
_base_.visualizer.vis_backends = [
dict(type='LocalVisBackend'), #
dict(type='TensorboardVisBackend'),
dict(type='WandbVisBackend'),]
```
## 绘图
### 绘制预测结果
MMDet主要使用[`DetVisualizationHook`](mmdet.engine.hooks.DetVisualizationHook)来绘制验证和测试的预测结果,默认情况下`DetVisualizationHook`是关闭的,其默认配置如下。
```Python
visualization=dict( #用户可视化验证和测试结果
type='DetVisualizationHook',
draw=False,
interval=1,
show=False)
```
以下表格展示了`DetVisualizationHook`支持的参数。
| 参数 | 描述 |
| :------: | :------------------------------------------------------------------------------: |
| draw | DetVisualizationHook通过enable参数打开和关闭,默认状态为关闭。 |
| interval | 控制在DetVisualizationHook启用时存储或显示验证或测试结果的间隔,单位为迭代次数。 |
| show | 控制是否可视化验证或测试的结果。 |
如果您想在训练或测试期间启用 `DetVisualizationHook` 相关功能和配置,您只需要修改配置文件,以 `configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py` 为例,同时绘制注释和预测,并显示图像,配置文件可以修改如下:
```Python
visualization = _base_.default_hooks.visualization
visualization.update(dict(draw=True, show=True))
```
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/224883427-1294a7ba-14ab-4d93-9152-55a7b270b1f1.png" height="300"/>
</div>
`test.py` 程序提供了 `--show` 和 `--show-dir` 参数,可以在测试过程中可视化注释和预测结果,而不需要修改配置文件,从而进一步简化了测试过程。
```Shell
# 展示测试结果
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show
# 指定存储预测结果的位置
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show-dir imgs/
```
# Copyright (c) OpenMMLab. All rights reserved.
import mmcv
import mmengine
from mmengine.utils import digit_version
from .version import __version__, version_info
mmcv_minimum_version = '2.0.0rc4'
mmcv_maximum_version = '2.2.0'
mmcv_version = digit_version(mmcv.__version__)
mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)
assert (mmcv_version >= digit_version(mmcv_minimum_version)
and mmcv_version < digit_version(mmcv_maximum_version)), \
f'MMCV=={mmcv.__version__} is used but incompatible. ' \
f'Please install mmcv>={mmcv_minimum_version}, <{mmcv_maximum_version}.'
assert (mmengine_version >= digit_version(mmengine_minimum_version)
and mmengine_version < digit_version(mmengine_maximum_version)), \
f'MMEngine=={mmengine.__version__} is used but incompatible. ' \
f'Please install mmengine>={mmengine_minimum_version}, ' \
f'<{mmengine_maximum_version}.'
__all__ = ['__version__', 'version_info', 'digit_version']
# Copyright (c) OpenMMLab. All rights reserved.
from .det_inferencer import DetInferencer
from .inference import (async_inference_detector, inference_detector,
inference_mot, init_detector, init_track_model)
__all__ = [
'init_detector', 'async_inference_detector', 'inference_detector',
'DetInferencer', 'inference_mot', 'init_track_model'
]
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import os.path as osp
import warnings
from typing import Dict, Iterable, List, Optional, Sequence, Tuple, Union
import mmcv
import mmengine
import numpy as np
import torch.nn as nn
from mmcv.transforms import LoadImageFromFile
from mmengine.dataset import Compose
from mmengine.fileio import (get_file_backend, isdir, join_path,
list_dir_or_file)
from mmengine.infer.infer import BaseInferencer, ModelType
from mmengine.model.utils import revert_sync_batchnorm
from mmengine.registry import init_default_scope
from mmengine.runner.checkpoint import _load_checkpoint_to_model
from mmengine.visualization import Visualizer
from rich.progress import track
from mmdet.evaluation import INSTANCE_OFFSET
from mmdet.registry import DATASETS
from mmdet.structures import DetDataSample
from mmdet.structures.mask import encode_mask_results, mask2bbox
from mmdet.utils import ConfigType
from ..evaluation import get_classes
try:
from panopticapi.evaluation import VOID
from panopticapi.utils import id2rgb
except ImportError:
id2rgb = None
VOID = None
InputType = Union[str, np.ndarray]
InputsType = Union[InputType, Sequence[InputType]]
PredType = List[DetDataSample]
ImgType = Union[np.ndarray, Sequence[np.ndarray]]
IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif',
'.tiff', '.webp')
class DetInferencer(BaseInferencer):
"""Object Detection Inferencer.
Args:
model (str, optional): Path to the config file or the model name
defined in metafile. For example, it could be
"rtmdet-s" or 'rtmdet_s_8xb32-300e_coco' or
"configs/rtmdet/rtmdet_s_8xb32-300e_coco.py".
If model is not specified, user must provide the
`weights` saved by MMEngine which contains the config string.
Defaults to None.
weights (str, optional): Path to the checkpoint. If it is not specified
and model is a model name of metafile, the weights will be loaded
from metafile. Defaults to None.
device (str, optional): Device to run inference. If None, the available
device will be automatically used. Defaults to None.
scope (str, optional): The scope of the model. Defaults to mmdet.
palette (str): Color palette used for visualization. The order of
priority is palette -> config -> checkpoint. Defaults to 'none'.
show_progress (bool): Control whether to display the progress
bar during the inference process. Defaults to True.
"""
preprocess_kwargs: set = set()
forward_kwargs: set = set()
visualize_kwargs: set = {
'return_vis',
'show',
'wait_time',
'draw_pred',
'pred_score_thr',
'img_out_dir',
'no_save_vis',
}
postprocess_kwargs: set = {
'print_result',
'pred_out_dir',
'return_datasamples',
'no_save_pred',
}
def __init__(self,
model: Optional[Union[ModelType, str]] = None,
weights: Optional[str] = None,
device: Optional[str] = None,
scope: Optional[str] = 'mmdet',
palette: str = 'none',
show_progress: bool = True) -> None:
# A global counter tracking the number of images processed, for
# naming of the output images
self.num_visualized_imgs = 0
self.num_predicted_imgs = 0
self.palette = palette
init_default_scope(scope)
super().__init__(
model=model, weights=weights, device=device, scope=scope)
self.model = revert_sync_batchnorm(self.model)
self.show_progress = show_progress
def _load_weights_to_model(self, model: nn.Module,
checkpoint: Optional[dict],
cfg: Optional[ConfigType]) -> None:
"""Loading model weights and meta information from cfg and checkpoint.
Args:
model (nn.Module): Model to load weights and meta information.
checkpoint (dict, optional): The loaded checkpoint.
cfg (Config or ConfigDict, optional): The loaded config.
"""
if checkpoint is not None:
_load_checkpoint_to_model(model, checkpoint)
checkpoint_meta = checkpoint.get('meta', {})
# save the dataset_meta in the model for convenience
if 'dataset_meta' in checkpoint_meta:
# mmdet 3.x, all keys should be lowercase
model.dataset_meta = {
k.lower(): v
for k, v in checkpoint_meta['dataset_meta'].items()
}
elif 'CLASSES' in checkpoint_meta:
# < mmdet 3.x
classes = checkpoint_meta['CLASSES']
model.dataset_meta = {'classes': classes}
else:
warnings.warn(
'dataset_meta or class names are not saved in the '
'checkpoint\'s meta data, use COCO classes by default.')
model.dataset_meta = {'classes': get_classes('coco')}
else:
warnings.warn('Checkpoint is not loaded, and the inference '
'result is calculated by the randomly initialized '
'model!')
warnings.warn('weights is None, use COCO classes by default.')
model.dataset_meta = {'classes': get_classes('coco')}
# Priority: args.palette -> config -> checkpoint
if self.palette != 'none':
model.dataset_meta['palette'] = self.palette
else:
test_dataset_cfg = copy.deepcopy(cfg.test_dataloader.dataset)
# lazy init. We only need the metainfo.
test_dataset_cfg['lazy_init'] = True
metainfo = DATASETS.build(test_dataset_cfg).metainfo
cfg_palette = metainfo.get('palette', None)
if cfg_palette is not None:
model.dataset_meta['palette'] = cfg_palette
else:
if 'palette' not in model.dataset_meta:
warnings.warn(
'palette does not exist, random is used by default. '
'You can also set the palette to customize.')
model.dataset_meta['palette'] = 'random'
def _init_pipeline(self, cfg: ConfigType) -> Compose:
"""Initialize the test pipeline."""
pipeline_cfg = cfg.test_dataloader.dataset.pipeline
# For inference, the key of ``img_id`` is not used.
if 'meta_keys' in pipeline_cfg[-1]:
pipeline_cfg[-1]['meta_keys'] = tuple(
meta_key for meta_key in pipeline_cfg[-1]['meta_keys']
if meta_key != 'img_id')
load_img_idx = self._get_transform_idx(
pipeline_cfg, ('LoadImageFromFile', LoadImageFromFile))
if load_img_idx == -1:
raise ValueError(
'LoadImageFromFile is not found in the test pipeline')
pipeline_cfg[load_img_idx]['type'] = 'mmdet.InferencerLoader'
return Compose(pipeline_cfg)
def _get_transform_idx(self, pipeline_cfg: ConfigType,
name: Union[str, Tuple[str, type]]) -> int:
"""Returns the index of the transform in a pipeline.
If the transform is not found, returns -1.
"""
for i, transform in enumerate(pipeline_cfg):
if transform['type'] in name:
return i
return -1
def _init_visualizer(self, cfg: ConfigType) -> Optional[Visualizer]:
"""Initialize visualizers.
Args:
cfg (ConfigType): Config containing the visualizer information.
Returns:
Visualizer or None: Visualizer initialized with config.
"""
visualizer = super()._init_visualizer(cfg)
visualizer.dataset_meta = self.model.dataset_meta
return visualizer
def _inputs_to_list(self, inputs: InputsType) -> list:
"""Preprocess the inputs to a list.
Preprocess inputs to a list according to its type:
- list or tuple: return inputs
- str:
- Directory path: return all files in the directory
- other cases: return a list containing the string. The string
could be a path to file, a url or other types of string according
to the task.
Args:
inputs (InputsType): Inputs for the inferencer.
Returns:
list: List of input for the :meth:`preprocess`.
"""
if isinstance(inputs, str):
backend = get_file_backend(inputs)
if hasattr(backend, 'isdir') and isdir(inputs):
# Backends like HttpsBackend do not implement `isdir`, so only
# those backends that implement `isdir` could accept the inputs
# as a directory
filename_list = list_dir_or_file(
inputs, list_dir=False, suffix=IMG_EXTENSIONS)
inputs = [
join_path(inputs, filename) for filename in filename_list
]
if not isinstance(inputs, (list, tuple)):
inputs = [inputs]
return list(inputs)
def preprocess(self, inputs: InputsType, batch_size: int = 1, **kwargs):
"""Process the inputs into a model-feedable format.
Customize your preprocess by overriding this method. Preprocess should
return an iterable object, of which each item will be used as the
input of ``model.test_step``.
``BaseInferencer.preprocess`` will return an iterable chunked data,
which will be used in __call__ like this:
.. code-block:: python
def __call__(self, inputs, batch_size=1, **kwargs):
chunked_data = self.preprocess(inputs, batch_size, **kwargs)
for batch in chunked_data:
preds = self.forward(batch, **kwargs)
Args:
inputs (InputsType): Inputs given by user.
batch_size (int): batch size. Defaults to 1.
Yields:
Any: Data processed by the ``pipeline`` and ``collate_fn``.
"""
chunked_data = self._get_chunk_data(inputs, batch_size)
yield from map(self.collate_fn, chunked_data)
def _get_chunk_data(self, inputs: Iterable, chunk_size: int):
"""Get batch data from inputs.
Args:
inputs (Iterable): An iterable dataset.
chunk_size (int): Equivalent to batch size.
Yields:
list: batch data.
"""
inputs_iter = iter(inputs)
while True:
try:
chunk_data = []
for _ in range(chunk_size):
inputs_ = next(inputs_iter)
if isinstance(inputs_, dict):
if 'img' in inputs_:
ori_inputs_ = inputs_['img']
else:
ori_inputs_ = inputs_['img_path']
chunk_data.append(
(ori_inputs_,
self.pipeline(copy.deepcopy(inputs_))))
else:
chunk_data.append((inputs_, self.pipeline(inputs_)))
yield chunk_data
except StopIteration:
if chunk_data:
yield chunk_data
break
# TODO: Video and Webcam are currently not supported and
# may consume too much memory if your input folder has a lot of images.
# We will be optimized later.
def __call__(
self,
inputs: InputsType,
batch_size: int = 1,
return_vis: bool = False,
show: bool = False,
wait_time: int = 0,
no_save_vis: bool = False,
draw_pred: bool = True,
pred_score_thr: float = 0.3,
return_datasamples: bool = False,
print_result: bool = False,
no_save_pred: bool = True,
out_dir: str = '',
# by open image task
texts: Optional[Union[str, list]] = None,
# by open panoptic task
stuff_texts: Optional[Union[str, list]] = None,
# by GLIP
custom_entities: bool = False,
**kwargs) -> dict:
"""Call the inferencer.
Args:
inputs (InputsType): Inputs for the inferencer.
batch_size (int): Inference batch size. Defaults to 1.
show (bool): Whether to display the visualization results in a
popup window. Defaults to False.
wait_time (float): The interval of show (s). Defaults to 0.
no_save_vis (bool): Whether to force not to save prediction
vis results. Defaults to False.
draw_pred (bool): Whether to draw predicted bounding boxes.
Defaults to True.
pred_score_thr (float): Minimum score of bboxes to draw.
Defaults to 0.3.
return_datasamples (bool): Whether to return results as
:obj:`DetDataSample`. Defaults to False.
print_result (bool): Whether to print the inference result w/o
visualization to the console. Defaults to False.
no_save_pred (bool): Whether to force not to save prediction
results. Defaults to True.
out_dir: Dir to save the inference results or
visualization. If left as empty, no file will be saved.
Defaults to ''.
texts (str | list[str]): Text prompts. Defaults to None.
stuff_texts (str | list[str]): Stuff text prompts of open
panoptic task. Defaults to None.
custom_entities (bool): Whether to use custom entities.
Defaults to False. Only used in GLIP.
**kwargs: Other keyword arguments passed to :meth:`preprocess`,
:meth:`forward`, :meth:`visualize` and :meth:`postprocess`.
Each key in kwargs should be in the corresponding set of
``preprocess_kwargs``, ``forward_kwargs``, ``visualize_kwargs``
and ``postprocess_kwargs``.
Returns:
dict: Inference and visualization results.
"""
(
preprocess_kwargs,
forward_kwargs,
visualize_kwargs,
postprocess_kwargs,
) = self._dispatch_kwargs(**kwargs)
ori_inputs = self._inputs_to_list(inputs)
if texts is not None and isinstance(texts, str):
texts = [texts] * len(ori_inputs)
if stuff_texts is not None and isinstance(stuff_texts, str):
stuff_texts = [stuff_texts] * len(ori_inputs)
if texts is not None:
assert len(texts) == len(ori_inputs)
for i in range(len(texts)):
if isinstance(ori_inputs[i], str):
ori_inputs[i] = {
'text': texts[i],
'img_path': ori_inputs[i],
'custom_entities': custom_entities
}
else:
ori_inputs[i] = {
'text': texts[i],
'img': ori_inputs[i],
'custom_entities': custom_entities
}
if stuff_texts is not None:
assert len(stuff_texts) == len(ori_inputs)
for i in range(len(stuff_texts)):
ori_inputs[i]['stuff_text'] = stuff_texts[i]
inputs = self.preprocess(
ori_inputs, batch_size=batch_size, **preprocess_kwargs)
results_dict = {'predictions': [], 'visualization': []}
for ori_imgs, data in (track(inputs, description='Inference')
if self.show_progress else inputs):
preds = self.forward(data, **forward_kwargs)
visualization = self.visualize(
ori_imgs,
preds,
return_vis=return_vis,
show=show,
wait_time=wait_time,
draw_pred=draw_pred,
pred_score_thr=pred_score_thr,
no_save_vis=no_save_vis,
img_out_dir=out_dir,
**visualize_kwargs)
results = self.postprocess(
preds,
visualization,
return_datasamples=return_datasamples,
print_result=print_result,
no_save_pred=no_save_pred,
pred_out_dir=out_dir,
**postprocess_kwargs)
results_dict['predictions'].extend(results['predictions'])
if results['visualization'] is not None:
results_dict['visualization'].extend(results['visualization'])
return results_dict
def visualize(self,
inputs: InputsType,
preds: PredType,
return_vis: bool = False,
show: bool = False,
wait_time: int = 0,
draw_pred: bool = True,
pred_score_thr: float = 0.3,
no_save_vis: bool = False,
img_out_dir: str = '',
**kwargs) -> Union[List[np.ndarray], None]:
"""Visualize predictions.
Args:
inputs (List[Union[str, np.ndarray]]): Inputs for the inferencer.
preds (List[:obj:`DetDataSample`]): Predictions of the model.
return_vis (bool): Whether to return the visualization result.
Defaults to False.
show (bool): Whether to display the image in a popup window.
Defaults to False.
wait_time (float): The interval of show (s). Defaults to 0.
draw_pred (bool): Whether to draw predicted bounding boxes.
Defaults to True.
pred_score_thr (float): Minimum score of bboxes to draw.
Defaults to 0.3.
no_save_vis (bool): Whether to force not to save prediction
vis results. Defaults to False.
img_out_dir (str): Output directory of visualization results.
If left as empty, no file will be saved. Defaults to ''.
Returns:
List[np.ndarray] or None: Returns visualization results only if
applicable.
"""
if no_save_vis is True:
img_out_dir = ''
if not show and img_out_dir == '' and not return_vis:
return None
if self.visualizer is None:
raise ValueError('Visualization needs the "visualizer" term '
'defined in the config, but got None.')
results = []
for single_input, pred in zip(inputs, preds):
if isinstance(single_input, str):
img_bytes = mmengine.fileio.get(single_input)
img = mmcv.imfrombytes(img_bytes)
img = img[:, :, ::-1]
img_name = osp.basename(single_input)
elif isinstance(single_input, np.ndarray):
img = single_input.copy()
img_num = str(self.num_visualized_imgs).zfill(8)
img_name = f'{img_num}.jpg'
else:
raise ValueError('Unsupported input type: '
f'{type(single_input)}')
out_file = osp.join(img_out_dir, 'vis',
img_name) if img_out_dir != '' else None
self.visualizer.add_datasample(
img_name,
img,
pred,
show=show,
wait_time=wait_time,
draw_gt=False,
draw_pred=draw_pred,
pred_score_thr=pred_score_thr,
out_file=out_file,
)
results.append(self.visualizer.get_image())
self.num_visualized_imgs += 1
return results
def postprocess(
self,
preds: PredType,
visualization: Optional[List[np.ndarray]] = None,
return_datasamples: bool = False,
print_result: bool = False,
no_save_pred: bool = False,
pred_out_dir: str = '',
**kwargs,
) -> Dict:
"""Process the predictions and visualization results from ``forward``
and ``visualize``.
This method should be responsible for the following tasks:
1. Convert datasamples into a json-serializable dict if needed.
2. Pack the predictions and visualization results and return them.
3. Dump or log the predictions.
Args:
preds (List[:obj:`DetDataSample`]): Predictions of the model.
visualization (Optional[np.ndarray]): Visualized predictions.
return_datasamples (bool): Whether to use Datasample to store
inference results. If False, dict will be used.
print_result (bool): Whether to print the inference result w/o
visualization to the console. Defaults to False.
no_save_pred (bool): Whether to force not to save prediction
results. Defaults to False.
pred_out_dir: Dir to save the inference results w/o
visualization. If left as empty, no file will be saved.
Defaults to ''.
Returns:
dict: Inference and visualization results with key ``predictions``
and ``visualization``.
- ``visualization`` (Any): Returned by :meth:`visualize`.
- ``predictions`` (dict or DataSample): Returned by
:meth:`forward` and processed in :meth:`postprocess`.
If ``return_datasamples=False``, it usually should be a
json-serializable dict containing only basic data elements such
as strings and numbers.
"""
if no_save_pred is True:
pred_out_dir = ''
result_dict = {}
results = preds
if not return_datasamples:
results = []
for pred in preds:
result = self.pred2dict(pred, pred_out_dir)
results.append(result)
elif pred_out_dir != '':
warnings.warn('Currently does not support saving datasample '
'when return_datasamples is set to True. '
'Prediction results are not saved!')
# Add img to the results after printing and dumping
result_dict['predictions'] = results
if print_result:
print(result_dict)
result_dict['visualization'] = visualization
return result_dict
# TODO: The data format and fields saved in json need further discussion.
# Maybe should include model name, timestamp, filename, image info etc.
def pred2dict(self,
data_sample: DetDataSample,
pred_out_dir: str = '') -> Dict:
"""Extract elements necessary to represent a prediction into a
dictionary.
It's better to contain only basic data elements such as strings and
numbers in order to guarantee it's json-serializable.
Args:
data_sample (:obj:`DetDataSample`): Predictions of the model.
pred_out_dir: Dir to save the inference results w/o
visualization. If left as empty, no file will be saved.
Defaults to ''.
Returns:
dict: Prediction results.
"""
is_save_pred = True
if pred_out_dir == '':
is_save_pred = False
if is_save_pred and 'img_path' in data_sample:
img_path = osp.basename(data_sample.img_path)
img_path = osp.splitext(img_path)[0]
out_img_path = osp.join(pred_out_dir, 'preds',
img_path + '_panoptic_seg.png')
out_json_path = osp.join(pred_out_dir, 'preds', img_path + '.json')
elif is_save_pred:
out_img_path = osp.join(
pred_out_dir, 'preds',
f'{self.num_predicted_imgs}_panoptic_seg.png')
out_json_path = osp.join(pred_out_dir, 'preds',
f'{self.num_predicted_imgs}.json')
self.num_predicted_imgs += 1
result = {}
if 'pred_instances' in data_sample:
masks = data_sample.pred_instances.get('masks')
pred_instances = data_sample.pred_instances.numpy()
result = {
'labels': pred_instances.labels.tolist(),
'scores': pred_instances.scores.tolist()
}
if 'bboxes' in pred_instances:
result['bboxes'] = pred_instances.bboxes.tolist()
if masks is not None:
if 'bboxes' not in pred_instances or pred_instances.bboxes.sum(
) == 0:
# Fake bbox, such as the SOLO.
bboxes = mask2bbox(masks.cpu()).numpy().tolist()
result['bboxes'] = bboxes
encode_masks = encode_mask_results(pred_instances.masks)
for encode_mask in encode_masks:
if isinstance(encode_mask['counts'], bytes):
encode_mask['counts'] = encode_mask['counts'].decode()
result['masks'] = encode_masks
if 'pred_panoptic_seg' in data_sample:
if VOID is None:
raise RuntimeError(
'panopticapi is not installed, please install it by: '
'pip install git+https://github.com/cocodataset/'
'panopticapi.git.')
pan = data_sample.pred_panoptic_seg.sem_seg.cpu().numpy()[0]
pan[pan % INSTANCE_OFFSET == len(
self.model.dataset_meta['classes'])] = VOID
pan = id2rgb(pan).astype(np.uint8)
if is_save_pred:
mmcv.imwrite(pan[:, :, ::-1], out_img_path)
result['panoptic_seg_path'] = out_img_path
else:
result['panoptic_seg'] = pan
if is_save_pred:
mmengine.dump(result, out_json_path)
return result
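# Note: the masks dumped by ``pred2dict`` are COCO-style RLE dicts whose
# ``counts`` field was decoded to ``str`` so the JSON stays serializable.
# A possible way to turn a saved prediction back into binary masks (sketch;
# assumes pycocotools is installed and the JSON path is a placeholder):
#
#   import json
#   import pycocotools.mask as mask_utils
#   with open('work_dirs/preds/demo.json') as f:
#       pred = json.load(f)
#   binary_masks = [
#       mask_utils.decode({'size': rle['size'], 'counts': rle['counts'].encode()})
#       for rle in pred['masks']]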
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import warnings
from pathlib import Path
from typing import Optional, Sequence, Union
import numpy as np
import torch
import torch.nn as nn
from mmcv.ops import RoIPool
from mmcv.transforms import Compose
from mmengine.config import Config
from mmengine.dataset import default_collate
from mmengine.model.utils import revert_sync_batchnorm
from mmengine.registry import init_default_scope
from mmengine.runner import load_checkpoint
from mmdet.registry import DATASETS
from mmdet.utils import ConfigType
from ..evaluation import get_classes
from ..registry import MODELS
from ..structures import DetDataSample, SampleList
from ..utils import get_test_pipeline_cfg
def init_detector(
config: Union[str, Path, Config],
checkpoint: Optional[str] = None,
palette: str = 'none',
device: str = 'cuda:0',
cfg_options: Optional[dict] = None,
) -> nn.Module:
"""Initialize a detector from config file.
Args:
config (str, :obj:`Path`, or :obj:`mmengine.Config`): Config file path,
:obj:`Path`, or the config object.
checkpoint (str, optional): Checkpoint path. If left as None, the model
will not load any weights.
palette (str): Color palette used for visualization. If it is 'none'
(the default), the palette stored in the config or checkpoint is
used; otherwise the externally passed palette takes precedence.
Currently supports 'coco', 'voc', 'citys' and 'random'.
Defaults to 'none'.
device (str): The device to put the model on. Defaults to 'cuda:0'.
cfg_options (dict, optional): Options to override some settings in
the used config.
Returns:
nn.Module: The constructed detector.
"""
if isinstance(config, (str, Path)):
config = Config.fromfile(config)
elif not isinstance(config, Config):
raise TypeError('config must be a filename or Config object, '
f'but got {type(config)}')
if cfg_options is not None:
config.merge_from_dict(cfg_options)
elif 'init_cfg' in config.model.backbone:
config.model.backbone.init_cfg = None
scope = config.get('default_scope', 'mmdet')
if scope is not None:
init_default_scope(config.get('default_scope', 'mmdet'))
model = MODELS.build(config.model)
model = revert_sync_batchnorm(model)
if checkpoint is None:
warnings.simplefilter('once')
warnings.warn('checkpoint is None, use COCO classes by default.')
model.dataset_meta = {'classes': get_classes('coco')}
else:
checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
# Weights converted from elsewhere may not have meta fields.
checkpoint_meta = checkpoint.get('meta', {})
# save the dataset_meta in the model for convenience
if 'dataset_meta' in checkpoint_meta:
# mmdet 3.x, all keys should be lowercase
model.dataset_meta = {
k.lower(): v
for k, v in checkpoint_meta['dataset_meta'].items()
}
elif 'CLASSES' in checkpoint_meta:
# < mmdet 3.x
classes = checkpoint_meta['CLASSES']
model.dataset_meta = {'classes': classes}
else:
warnings.simplefilter('once')
warnings.warn(
'dataset_meta or class names are not saved in the '
'checkpoint\'s meta data, use COCO classes by default.')
model.dataset_meta = {'classes': get_classes('coco')}
# Priority: args.palette -> config -> checkpoint
if palette != 'none':
model.dataset_meta['palette'] = palette
else:
test_dataset_cfg = copy.deepcopy(config.test_dataloader.dataset)
# lazy init. We only need the metainfo.
test_dataset_cfg['lazy_init'] = True
metainfo = DATASETS.build(test_dataset_cfg).metainfo
cfg_palette = metainfo.get('palette', None)
if cfg_palette is not None:
model.dataset_meta['palette'] = cfg_palette
else:
if 'palette' not in model.dataset_meta:
warnings.warn(
'palette does not exist, random is used by default. '
'You can also set the palette to customize.')
model.dataset_meta['palette'] = 'random'
model.cfg = config # save the config in the model for convenience
model.to(device)
model.eval()
return model
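# Usage sketch (the config and checkpoint paths below are illustrative
# placeholders, not files guaranteed to exist in this repo):
#
#   from mmdet.apis import init_detector
#   model = init_detector(
#       'configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py',
#       'checkpoints/faster-rcnn_r50_fpn_1x_coco.pth',
#       device='cuda:0')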
ImagesType = Union[str, np.ndarray, Sequence[str], Sequence[np.ndarray]]
def inference_detector(
model: nn.Module,
imgs: ImagesType,
test_pipeline: Optional[Compose] = None,
text_prompt: Optional[str] = None,
custom_entities: bool = False,
) -> Union[DetDataSample, SampleList]:
"""Inference image(s) with the detector.
Args:
model (nn.Module): The loaded detector.
imgs (str, ndarray, Sequence[str/ndarray]):
Either image files or loaded images.
test_pipeline (:obj:`Compose`): Test pipeline.
Returns:
:obj:`DetDataSample` or list[:obj:`DetDataSample`]:
If imgs is a list or tuple, the same length list type results
will be returned, otherwise return the detection results directly.
"""
if isinstance(imgs, (list, tuple)):
is_batch = True
else:
imgs = [imgs]
is_batch = False
cfg = model.cfg
if test_pipeline is None:
cfg = cfg.copy()
test_pipeline = get_test_pipeline_cfg(cfg)
if isinstance(imgs[0], np.ndarray):
# Calling this method across libraries will result
# in module unregistered error if not prefixed with mmdet.
test_pipeline[0].type = 'mmdet.LoadImageFromNDArray'
test_pipeline = Compose(test_pipeline)
if model.data_preprocessor.device.type == 'cpu':
for m in model.modules():
assert not isinstance(
m, RoIPool
), 'CPU inference with RoIPool is not supported currently.'
result_list = []
for i, img in enumerate(imgs):
# prepare data
if isinstance(img, np.ndarray):
# TODO: remove img_id.
data_ = dict(img=img, img_id=0)
else:
# TODO: remove img_id.
data_ = dict(img_path=img, img_id=0)
if text_prompt:
data_['text'] = text_prompt
data_['custom_entities'] = custom_entities
# build the data pipeline
data_ = test_pipeline(data_)
data_['inputs'] = [data_['inputs']]
data_['data_samples'] = [data_['data_samples']]
# forward the model
with torch.no_grad():
results = model.test_step(data_)[0]
result_list.append(results)
if not is_batch:
return result_list[0]
else:
return result_list
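# Usage sketch following the ``init_detector`` example above (the image paths
# are placeholders):
#
#   result = inference_detector(model, 'demo/demo.jpg')        # -> DetDataSample
#   results = inference_detector(model, ['a.jpg', 'b.jpg'])    # -> list of DetDataSample
#   bboxes = result.pred_instances.bboxes                      # predicted boxes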
# TODO: Awaiting refactoring
async def async_inference_detector(model, imgs):
"""Async inference image(s) with the detector.
Args:
model (nn.Module): The loaded detector.
imgs (str | ndarray | Sequence[str | ndarray]): Either image files or
loaded images.
Returns:
Awaitable detection results.
"""
if not isinstance(imgs, (list, tuple)):
imgs = [imgs]
cfg = model.cfg
if isinstance(imgs[0], np.ndarray):
cfg = cfg.copy()
# set loading pipeline type
cfg.data.test.pipeline[0].type = 'LoadImageFromNDArray'
# cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
test_pipeline = Compose(cfg.data.test.pipeline)
datas = []
for img in imgs:
# prepare data
if isinstance(img, np.ndarray):
# directly add img
data = dict(img=img)
else:
# add information into dict
data = dict(img_info=dict(filename=img), img_prefix=None)
# build the data pipeline
data = test_pipeline(data)
datas.append(data)
for m in model.modules():
assert not isinstance(
m,
RoIPool), 'CPU inference with RoIPool is not supported currently.'
# We don't restore `torch.is_grad_enabled()` value during concurrent
# inference since execution can overlap
torch.set_grad_enabled(False)
results = await model.aforward_test(data, rescale=True)
return results
def build_test_pipeline(cfg: ConfigType) -> ConfigType:
"""Build test_pipeline for mot/vis demo. In mot/vis infer, original
test_pipeline should remove the "LoadImageFromFile" and
"LoadTrackAnnotations".
Args:
cfg (ConfigDict): The loaded config.
Returns:
ConfigType: new test_pipeline
"""
# remove the "LoadImageFromFile" and "LoadTrackAnnotations" in pipeline
transform_broadcaster = cfg.test_dataloader.dataset.pipeline[0].copy()
for transform in transform_broadcaster['transforms']:
if transform['type'] == 'Resize':
transform_broadcaster['transforms'] = transform
pack_track_inputs = cfg.test_dataloader.dataset.pipeline[-1].copy()
test_pipeline = Compose([transform_broadcaster, pack_track_inputs])
return test_pipeline
def inference_mot(model: nn.Module, img: np.ndarray, frame_id: int,
video_len: int) -> SampleList:
"""Inference image(s) with the mot model.
Args:
model (nn.Module): The loaded mot model.
img (np.ndarray): Loaded image.
frame_id (int): Frame id of the image.
video_len (int): Length of the demo video.
Returns:
SampleList: The tracking data samples.
"""
cfg = model.cfg
data = dict(
img=[img.astype(np.float32)],
frame_id=[frame_id],
ori_shape=[img.shape[:2]],
img_id=[frame_id + 1],
ori_video_length=[video_len])
test_pipeline = build_test_pipeline(cfg)
data = test_pipeline(data)
if not next(model.parameters()).is_cuda:
for m in model.modules():
assert not isinstance(
m, RoIPool
), 'CPU inference with RoIPool is not supported currently.'
# forward the model
with torch.no_grad():
data = default_collate([data])
result = model.test_step(data)[0]
return result
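# Usage sketch for tracking a whole video (assumes mmcv is importable and the
# model comes from ``init_track_model`` below; the video path is a placeholder):
#
#   import mmcv
#   video = mmcv.VideoReader('demo/demo_mot.mp4')
#   for frame_id, frame in enumerate(video):
#       track_sample = inference_mot(model, frame, frame_id, len(video))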
def init_track_model(config: Union[str, Config],
checkpoint: Optional[str] = None,
detector: Optional[str] = None,
reid: Optional[str] = None,
device: str = 'cuda:0',
cfg_options: Optional[dict] = None) -> nn.Module:
"""Initialize a model from config file.
Args:
config (str or :obj:`mmengine.Config`): Config file path or the config
object.
checkpoint (Optional[str], optional): Checkpoint path. Defaults to
None.
detector (Optional[str], optional): Detector checkpoint path, used in
some tracking algorithms such as SORT. Defaults to None.
reid (Optional[str], optional): ReID checkpoint path, used in
some tracking algorithms such as SORT. Defaults to None.
device (str, optional): The device that the model inferences on.
Defaults to `cuda:0`.
cfg_options (Optional[dict], optional): Options to override some
settings in the used config. Defaults to None.
Returns:
nn.Module: The constructed model.
"""
if isinstance(config, str):
config = Config.fromfile(config)
elif not isinstance(config, Config):
raise TypeError('config must be a filename or Config object, '
f'but got {type(config)}')
if cfg_options is not None:
config.merge_from_dict(cfg_options)
model = MODELS.build(config.model)
if checkpoint is not None:
checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
# Weights converted from elsewhere may not have meta fields.
checkpoint_meta = checkpoint.get('meta', {})
# save the dataset_meta in the model for convenience
if 'dataset_meta' in checkpoint_meta:
if 'CLASSES' in checkpoint_meta['dataset_meta']:
value = checkpoint_meta['dataset_meta'].pop('CLASSES')
checkpoint_meta['dataset_meta']['classes'] = value
model.dataset_meta = checkpoint_meta['dataset_meta']
if detector is not None:
assert not (checkpoint and detector), \
'Error: checkpoint and detector checkpoint cannot both exist'
load_checkpoint(model.detector, detector, map_location='cpu')
if reid is not None:
assert not (checkpoint and reid), \
'Error: checkpoint and reid checkpoint cannot both exist'
load_checkpoint(model.reid, reid, map_location='cpu')
# Some methods don't load checkpoints, or the checkpoints don't contain
# 'dataset_meta'. VIS needs dataset_meta, while MOT does not.
if not hasattr(model, 'dataset_meta'):
warnings.warn('dataset_meta or class names are missing, '
'use None by default.')
model.dataset_meta = {'classes': None}
model.cfg = config # save the config in the model for convenience
model.to(device)
model.eval()
return model
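# Usage sketch (the config and checkpoint paths are placeholders). Trackers
# built on a separate detector/reid pair (e.g. SORT, DeepSORT) load their
# weights via ``detector=``/``reid=``, while single-checkpoint methods pass
# ``checkpoint=`` instead:
#
#   model = init_track_model(
#       'configs/some_tracker_config.py',
#       detector='checkpoints/detector.pth',
#       reid='checkpoints/reid.pth',
#       device='cuda:0')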
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms import LoadImageFromFile
from mmengine.dataset.sampler import DefaultSampler
from mmdet.datasets import AspectRatioBatchSampler, CocoDataset
from mmdet.datasets.transforms import (LoadAnnotations, PackDetInputs,
RandomFlip, Resize)
from mmdet.evaluation import CocoMetric
# dataset settings
dataset_type = CocoDataset
data_root = 'data/coco/'
# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/coco/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
backend_args = None
train_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadAnnotations, with_bbox=True),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=RandomFlip, prob=0.5),
dict(type=PackDetInputs)
]
test_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type=LoadAnnotations, with_bbox=True),
dict(
type=PackDetInputs,
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=DefaultSampler, shuffle=True),
batch_sampler=dict(type=AspectRatioBatchSampler),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type=DefaultSampler, shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args))
test_dataloader = val_dataloader
val_evaluator = dict(
type=CocoMetric,
ann_file=data_root + 'annotations/instances_val2017.json',
metric='bbox',
format_only=False,
backend_args=backend_args)
test_evaluator = val_evaluator
# inference on test dataset and
# format the output results for submission.
# test_dataloader = dict(
# batch_size=1,
# num_workers=2,
# persistent_workers=True,
# drop_last=False,
# sampler=dict(type=DefaultSampler, shuffle=False),
# dataset=dict(
# type=dataset_type,
# data_root=data_root,
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# data_prefix=dict(img='test2017/'),
# test_mode=True,
# pipeline=test_pipeline))
# test_evaluator = dict(
# type=CocoMetric,
# metric='bbox',
# format_only=True,
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# outfile_prefix='./work_dirs/coco_detection/test')
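# A downstream config would typically pull this base file in with mmengine's
# ``read_base`` helper and then override fields in place (sketch; the relative
# import path depends on where the derived config lives):
#
#   from mmengine.config import read_base
#
#   with read_base():
#       from .coco_detection import *  # noqa: F401,F403
#
#   train_dataloader.update(batch_size=4, num_workers=4)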
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms.loading import LoadImageFromFile
from mmengine.dataset.sampler import DefaultSampler
from mmdet.datasets.coco import CocoDataset
from mmdet.datasets.samplers.batch_sampler import AspectRatioBatchSampler
from mmdet.datasets.transforms.formatting import PackDetInputs
from mmdet.datasets.transforms.loading import LoadAnnotations
from mmdet.datasets.transforms.transforms import RandomFlip, Resize
from mmdet.evaluation.metrics.coco_metric import CocoMetric
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/coco/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
backend_args = None
train_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadAnnotations, with_bbox=True, with_mask=True),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=RandomFlip, prob=0.5),
dict(type=PackDetInputs)
]
test_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type=LoadAnnotations, with_bbox=True, with_mask=True),
dict(
type=PackDetInputs,
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=DefaultSampler, shuffle=True),
batch_sampler=dict(type=AspectRatioBatchSampler),
dataset=dict(
type=CocoDataset,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type=DefaultSampler, shuffle=False),
dataset=dict(
type=CocoDataset,
data_root=data_root,
ann_file='annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args))
test_dataloader = val_dataloader
val_evaluator = dict(
type=CocoMetric,
ann_file=data_root + 'annotations/instances_val2017.json',
metric=['bbox', 'segm'],
format_only=False,
backend_args=backend_args)
test_evaluator = val_evaluator
# inference on test dataset and
# format the output results for submission.
# test_dataloader = dict(
# batch_size=1,
# num_workers=2,
# persistent_workers=True,
# drop_last=False,
# sampler=dict(type=DefaultSampler, shuffle=False),
# dataset=dict(
# type=CocoDataset,
# data_root=data_root,
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# data_prefix=dict(img='test2017/'),
# test_mode=True,
# pipeline=test_pipeline))
# test_evaluator = dict(
# type=CocoMetric,
# metric=['bbox', 'segm'],
# format_only=True,
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# outfile_prefix='./work_dirs/coco_instance/test')
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms.loading import LoadImageFromFile
from mmengine.dataset.sampler import DefaultSampler
from mmdet.datasets.coco import CocoDataset
from mmdet.datasets.samplers.batch_sampler import AspectRatioBatchSampler
from mmdet.datasets.transforms.formatting import PackDetInputs
from mmdet.datasets.transforms.loading import LoadAnnotations
from mmdet.datasets.transforms.transforms import RandomFlip, Resize
from mmdet.evaluation.metrics.coco_metric import CocoMetric
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/coco/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
backend_args = None
train_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadAnnotations, with_bbox=True, with_mask=True, with_seg=True),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=RandomFlip, prob=0.5),
dict(type=PackDetInputs)
]
test_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type=LoadAnnotations, with_bbox=True, with_mask=True, with_seg=True),
dict(
type=PackDetInputs,
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=DefaultSampler, shuffle=True),
batch_sampler=dict(type=AspectRatioBatchSampler),
dataset=dict(
type=CocoDataset,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/', seg='stuffthingmaps/train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type=DefaultSampler, shuffle=False),
dataset=dict(
type=CocoDataset,
data_root=data_root,
ann_file='annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args))
test_dataloader = val_dataloader
val_evaluator = dict(
type=CocoMetric,
ann_file=data_root + 'annotations/instances_val2017.json',
metric=['bbox', 'segm'],
format_only=False,
backend_args=backend_args)
test_evaluator = val_evaluator
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms.loading import LoadImageFromFile
from mmengine.dataset.sampler import DefaultSampler
from mmdet.datasets.coco_panoptic import CocoPanopticDataset
from mmdet.datasets.samplers.batch_sampler import AspectRatioBatchSampler
from mmdet.datasets.transforms.formatting import PackDetInputs
from mmdet.datasets.transforms.loading import LoadPanopticAnnotations
from mmdet.datasets.transforms.transforms import RandomFlip, Resize
from mmdet.evaluation.metrics.coco_panoptic_metric import CocoPanopticMetric
# dataset settings
dataset_type = 'CocoPanopticDataset'
data_root = 'data/coco/'
# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/coco/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
backend_args = None
train_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadPanopticAnnotations, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=RandomFlip, prob=0.5),
dict(type=PackDetInputs)
]
test_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=LoadPanopticAnnotations, backend_args=backend_args),
dict(
type=PackDetInputs,
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=DefaultSampler, shuffle=True),
batch_sampler=dict(type=AspectRatioBatchSampler),
dataset=dict(
type=CocoPanopticDataset,
data_root=data_root,
ann_file='annotations/panoptic_train2017.json',
data_prefix=dict(
img='train2017/', seg='annotations/panoptic_train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type=DefaultSampler, shuffle=False),
dataset=dict(
type=CocoPanopticDataset,
data_root=data_root,
ann_file='annotations/panoptic_val2017.json',
data_prefix=dict(img='val2017/', seg='annotations/panoptic_val2017/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args))
test_dataloader = val_dataloader
val_evaluator = dict(
type=CocoPanopticMetric,
ann_file=data_root + 'annotations/panoptic_val2017.json',
seg_prefix=data_root + 'annotations/panoptic_val2017/',
backend_args=backend_args)
test_evaluator = val_evaluator
# inference on test dataset and
# format the output results for submission.
# test_dataloader = dict(
# batch_size=1,
# num_workers=1,
# persistent_workers=True,
# drop_last=False,
# sampler=dict(type=DefaultSampler, shuffle=False),
# dataset=dict(
# type=CocoPanopticDataset,
# data_root=data_root,
# ann_file='annotations/panoptic_image_info_test-dev2017.json',
# data_prefix=dict(img='test2017/'),
# test_mode=True,
# pipeline=test_pipeline))
# test_evaluator = dict(
# type=CocoPanopticMetric,
# format_only=True,
# ann_file=data_root + 'annotations/panoptic_image_info_test-dev2017.json',
# outfile_prefix='./work_dirs/coco_panoptic/test')
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms import (LoadImageFromFile, RandomResize,
TransformBroadcaster)
from mmdet.datasets import MOTChallengeDataset
from mmdet.datasets.samplers import TrackImgSampler
from mmdet.datasets.transforms import (LoadTrackAnnotations, PackTrackInputs,
PhotoMetricDistortion, RandomCrop,
RandomFlip, Resize,
UniformRefFrameSample)
from mmdet.evaluation import MOTChallengeMetric
# dataset settings
dataset_type = MOTChallengeDataset
data_root = 'data/MOT17/'
img_scale = (1088, 1088)
backend_args = None
# data pipeline
train_pipeline = [
dict(
type=UniformRefFrameSample,
num_ref_imgs=1,
frame_range=10,
filter_key_img=True),
dict(
type=TransformBroadcaster,
share_random_params=True,
transforms=[
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadTrackAnnotations),
dict(
type=RandomResize,
scale=img_scale,
ratio_range=(0.8, 1.2),
keep_ratio=True,
clip_object_border=False),
dict(type=PhotoMetricDistortion)
]),
dict(
type=TransformBroadcaster,
# different cropped positions for different frames
share_random_params=False,
transforms=[
dict(type=RandomCrop, crop_size=img_scale, bbox_clip_border=False)
]),
dict(
type=TransformBroadcaster,
share_random_params=True,
transforms=[
dict(type=RandomFlip, prob=0.5),
]),
dict(type=PackTrackInputs)
]
test_pipeline = [
dict(
type=TransformBroadcaster,
transforms=[
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=img_scale, keep_ratio=True),
dict(type=LoadTrackAnnotations)
]),
dict(type=PackTrackInputs)
]
# dataloader
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=TrackImgSampler), # image-based sampling
dataset=dict(
type=dataset_type,
data_root=data_root,
visibility_thr=-1,
ann_file='annotations/half-train_cocoformat.json',
data_prefix=dict(img_path='train'),
metainfo=dict(classes=('pedestrian', )),
pipeline=train_pipeline))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
# Now we support two ways to test, image_based and video_based
# if you want to use video_based sampling, you can use as follows
# sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
sampler=dict(type=TrackImgSampler), # image-based sampling
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/half-val_cocoformat.json',
data_prefix=dict(img_path='train'),
test_mode=True,
pipeline=test_pipeline))
test_dataloader = val_dataloader
# evaluator
val_evaluator = dict(
type=MOTChallengeMetric, metric=['HOTA', 'CLEAR', 'Identity'])
test_evaluator = val_evaluator
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
LoggerHook, ParamSchedulerHook)
from mmengine.runner import LogProcessor
from mmengine.visualization import LocalVisBackend
from mmdet.engine.hooks import DetVisualizationHook
from mmdet.visualization import DetLocalVisualizer
default_scope = None
default_hooks = dict(
timer=dict(type=IterTimerHook),
logger=dict(type=LoggerHook, interval=50),
param_scheduler=dict(type=ParamSchedulerHook),
checkpoint=dict(type=CheckpointHook, interval=1),
sampler_seed=dict(type=DistSamplerSeedHook),
visualization=dict(type=DetVisualizationHook))
env_cfg = dict(
cudnn_benchmark=False,
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
dist_cfg=dict(backend='nccl'),
)
vis_backends = [dict(type=LocalVisBackend)]
visualizer = dict(
type=DetLocalVisualizer, vis_backends=vis_backends, name='visualizer')
log_processor = dict(type=LogProcessor, window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = None
resume = False
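# To additionally log scalars to TensorBoard, a derived config could extend
# ``vis_backends`` (sketch; assumes mmengine's ``TensorboardVisBackend``):
#
#   from mmengine.visualization import TensorboardVisBackend
#
#   vis_backends = [dict(type=LocalVisBackend), dict(type=TensorboardVisBackend)]
#   visualizer.update(vis_backends=vis_backends)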
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.cascade_rcnn import CascadeRCNN
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import SmoothL1Loss
from mmdet.models.necks.fpn import FPN
from mmdet.models.roi_heads.bbox_heads.convfc_bbox_head import \
Shared2FCBBoxHead
from mmdet.models.roi_heads.cascade_roi_head import CascadeRoIHead
from mmdet.models.roi_heads.mask_heads.fcn_mask_head import FCNMaskHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
model = dict(
type=CascadeRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_mask=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type=FPN,
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type=RPNHead,
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type=AnchorGenerator,
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0 / 9.0, loss_weight=1.0)),
roi_head=dict(
type=CascadeRoIHead,
num_stages=3,
stage_loss_weights=[1, 0.5, 0.25],
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=[
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0)),
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0)),
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.033, 0.033, 0.067, 0.067]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0))
],
mask_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=14, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
mask_head=dict(
type=FCNMaskHead,
num_convs=4,
in_channels=256,
conv_out_channels=256,
num_classes=80,
loss_mask=dict(
type=CrossEntropyLoss, use_mask=True, loss_weight=1.0))),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=2000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.6,
neg_iou_thr=0.6,
min_pos_iou=0.6,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.7,
min_pos_iou=0.7,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False)
]),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100,
mask_thr_binary=0.5)))
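# When fine-tuning on a custom dataset, a derived config usually only needs to
# override ``num_classes`` in each cascade stage (sketch; 3 is an arbitrary
# example class count):
#
#   for bbox_head in model['roi_head']['bbox_head']:
#       bbox_head['num_classes'] = 3
#   model['roi_head']['mask_head']['num_classes'] = 3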
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.cascade_rcnn import CascadeRCNN
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import SmoothL1Loss
from mmdet.models.necks.fpn import FPN
from mmdet.models.roi_heads.bbox_heads.convfc_bbox_head import \
Shared2FCBBoxHead
from mmdet.models.roi_heads.cascade_roi_head import CascadeRoIHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
model = dict(
type=CascadeRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type=FPN,
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type=RPNHead,
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type=AnchorGenerator,
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0 / 9.0, loss_weight=1.0)),
roi_head=dict(
type=CascadeRoIHead,
num_stages=3,
stage_loss_weights=[1, 0.5, 0.25],
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=[
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0)),
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0)),
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.033, 0.033, 0.067, 0.067]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0))
]),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=2000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.6,
neg_iou_thr=0.6,
min_pos_iou=0.6,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.7,
min_pos_iou=0.7,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)
]),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100)))
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.faster_rcnn import FasterRCNN
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import L1Loss
from mmdet.models.necks.fpn import FPN
from mmdet.models.roi_heads.bbox_heads.convfc_bbox_head import \
Shared2FCBBoxHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
model = dict(
type=FasterRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type=FPN,
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type=RPNHead,
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type=AnchorGenerator,
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
roi_head=dict(
type=StandardRoIHead,
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0))),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100)
# soft-nms is also supported for rcnn testing
# e.g., nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05)
))
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from mmengine.model.weight_init import PretrainedInit
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.mask_rcnn import MaskRCNN
from mmdet.models.layers import ResLayer
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import L1Loss
from mmdet.models.roi_heads.bbox_heads.bbox_head import BBoxHead
from mmdet.models.roi_heads.mask_heads.fcn_mask_head import FCNMaskHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
norm_cfg = dict(type=BatchNorm2d, requires_grad=False)
# model settings
model = dict(
type=MaskRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[103.530, 116.280, 123.675],
std=[1.0, 1.0, 1.0],
bgr_to_rgb=False,
pad_mask=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=3,
strides=(1, 2, 2),
dilations=(1, 1, 1),
out_indices=(2, ),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='caffe',
init_cfg=dict(
type=PretrainedInit,
checkpoint='open-mmlab://detectron2/resnet50_caffe')),
rpn_head=dict(
type=RPNHead,
in_channels=1024,
feat_channels=1024,
anchor_generator=dict(
type=AnchorGenerator,
scales=[2, 4, 8, 16, 32],
ratios=[0.5, 1.0, 2.0],
strides=[16]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
roi_head=dict(
type=StandardRoIHead,
shared_head=dict(
type=ResLayer,
depth=50,
stage=3,
stride=2,
dilation=1,
style='caffe',
norm_cfg=norm_cfg,
norm_eval=True),
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=14, sampling_ratio=0),
out_channels=1024,
featmap_strides=[16]),
bbox_head=dict(
type=BBoxHead,
with_avg_pool=True,
roi_feat_size=7,
in_channels=2048,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
mask_roi_extractor=None,
mask_head=dict(
type=FCNMaskHead,
num_convs=0,
in_channels=2048,
conv_out_channels=256,
num_classes=80,
loss_mask=dict(
type=CrossEntropyLoss, use_mask=True, loss_weight=1.0))),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=12000,
max_per_img=2000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=14,
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_pre=6000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100,
mask_thr_binary=0.5)))