# Test existing models on standard datasets
To evaluate a model's accuracy, one usually tests the model on some standard datasets. Please refer to the [dataset prepare guide](dataset_prepare.md) to prepare the dataset.
This section will show how to test existing models on supported datasets.
## Test existing models
We provide testing scripts for evaluating an existing model on the whole dataset (COCO, PASCAL VOC, Cityscapes, etc.).
The following testing environments are supported:
- single GPU
- CPU
- single node multiple GPUs
- multiple nodes
Choose the proper script to perform testing depending on the testing environment.
```shell
# Single-gpu testing
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--out ${RESULT_FILE}] \
[--show]
# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--out ${RESULT_FILE}] \
[--show]
# Multi-gpu testing
bash tools/dist_test.sh \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
${GPU_NUM} \
[--out ${RESULT_FILE}]
```
`tools/dist_test.sh` also supports multi-node testing, but relies on PyTorch's [launch utility](https://pytorch.org/docs/stable/distributed.html#launch-utility).
Optional arguments:
- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file.
- `--show`: If specified, detection results will be plotted on the images and shown in a new window. It is only applicable to single GPU testing and used for debugging and visualization. Please make sure that GUI is available in your environment. Otherwise, you may encounter an error like `cannot connect to X server`.
- `--show-dir`: If specified, detection results will be plotted on the images and saved to the specified directory. It is only applicable to single GPU testing and used for debugging and visualization. You do NOT need a GUI available in your environment for using this option.
- `--work-dir`: If specified, detection results containing evaluation metrics will be saved to the specified directory.
- `--cfg-options`: If specified, the given key-value pairs will be merged into the config file, as shown in the example below.
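For instance, a hypothetical single-key override at test time might look like the following (the key path `model.test_cfg.score_thr` is only illustrative and depends on the config you use):
```shell
python tools/test.py \
    configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
    checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
    --cfg-options model.test_cfg.score_thr=0.3
```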
## Examples
Assuming that you have already downloaded the checkpoints to the directory `checkpoints/`.
1. Test RTMDet and visualize the results. Press any key for the next image.
Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet).
```shell
python tools/test.py \
configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--show
```
2. Test RTMDet and save the painted images for future visualization.
Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet).
```shell
python tools/test.py \
configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--show-dir rtmdet_l_8xb32-300e_coco_results
```
3. Test Faster R-CNN on PASCAL VOC (without saving the test results).
Config and checkpoint files are available [here](../../../configs/pascal_voc).
```shell
python tools/test.py \
configs/pascal_voc/faster-rcnn_r50_fpn_1x_voc0712.py \
checkpoints/faster_rcnn_r50_fpn_1x_voc0712_20200624-c9895d40.pth
```
4. Test Mask R-CNN with 8 GPUs, and evaluate.
Config and checkpoint files are available [here](../../../configs/mask_rcnn).
```shell
./tools/dist_test.sh \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth \
8 \
--out results.pkl
```
5. Test Mask R-CNN with 8 GPUs, and evaluate the metric **class-wise**.
Config and checkpoint files are available [here](../../../configs/mask_rcnn).
```shell
./tools/dist_test.sh \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth \
8 \
--out results.pkl \
--cfg-options test_evaluator.classwise=True
```
6. Test Mask R-CNN on COCO test-dev with 8 GPUs, and generate JSON files for submitting to the official evaluation server.
Config and checkpoint files are available [here](../../../configs/mask_rcnn).
Replace the original `test_evaluator` and `test_dataloader` with the commented-out `test_evaluator` and `test_dataloader` in the [config](../../../configs/_base_/datasets/coco_instance.py) and run:
```shell
./tools/dist_test.sh \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth \
8
```
This command generates two JSON files `./work_dirs/coco_instance/test.bbox.json` and `./work_dirs/coco_instance/test.segm.json`.
7. Test Mask R-CNN on Cityscapes test with 8 GPUs, and generate txt and png files for submitting to the official evaluation server.
Config and checkpoint files are available [here](../../../configs/cityscapes).
Replace the original `test_evaluator` and `test_dataloader` with the commented-out `test_evaluator` and `test_dataloader` in the [config](../../../configs/_base_/datasets/cityscapes_instance.py) and run:
```shell
./tools/dist_test.sh \
configs/cityscapes/mask-rcnn_r50_fpn_1x_cityscapes.py \
checkpoints/mask_rcnn_r50_fpn_1x_cityscapes_20200227-afe51d5a.pth \
8
```
The generated png and txt files will be under the `./work_dirs/cityscapes_metric/` directory.
## Test without Ground Truth Annotations
MMDetection supports testing models without ground-truth annotations using `CocoDataset`. If your dataset is not in COCO format, please convert it to COCO format. For example, if your dataset is in VOC format, you can convert it directly with the [script in tools](../../../tools/dataset_converters/pascal_voc.py). If it is in Cityscapes format, use the [script in tools](../../../tools/dataset_converters/cityscapes.py). Other formats can be converted using [this script](../../../tools/dataset_converters/images2coco.py).
```shell
python tools/dataset_converters/images2coco.py \
${IMG_PATH} \
${CLASSES} \
${OUT} \
[--exclude-extensions]
```
Arguments:
- `IMG_PATH`: The root path of images.
- `CLASSES`: The text file with a list of categories.
- `OUT`: The output annotation JSON file name. It is saved in the same directory as `IMG_PATH`.
- `--exclude-extensions`: The suffixes of images to be excluded, such as 'png' and 'bmp' (see the example invocation below).
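A hypothetical invocation (all paths below are placeholders for your own data):
```shell
python tools/dataset_converters/images2coco.py \
    data/my_images \
    data/my_classes.txt \
    my_test_annotation.json
```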
After the conversion is complete, replace the original `test_evaluator` and `test_dataloader` with the commented-out `test_evaluator` and `test_dataloader` in the [config](../../../configs/_base_/datasets/coco_detection.py) (find which dataset in `configs/_base_/datasets` the current config corresponds to) and run:
```shell
# Single-gpu testing
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--show]
# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--out ${RESULT_FILE}] \
[--show]
# Multi-gpu testing
bash tools/dist_test.sh \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
${GPU_NUM} \
[--show]
```
Assuming that the checkpoints in the [model zoo](https://mmdetection.readthedocs.io/en/latest/modelzoo_statistics.html) have been downloaded to the directory `checkpoints/`, we can test Mask R-CNN on COCO test-dev with 8 GPUs, and generate JSON files using the following command.
```sh
./tools/dist_test.sh \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth \
8
```
This command generates two JSON files `./work_dirs/coco_instance/test.bbox.json` and `./work_dirs/coco_instance/test.segm.json`.
## Batch Inference
MMDetection supports inference with a single image or batched images in test mode. By default, we use single-image inference; you can enable batch inference by modifying `batch_size` in the test dataloader config. You can do that by modifying the config as below.
```python
data = dict(train_dataloader=dict(...), val_dataloader=dict(...), test_dataloader=dict(batch_size=2, ...))
```
Or you can set it through `--cfg-options` as `--cfg-options test_dataloader.batch_size=2`.
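For example, a complete test command with this override might look like the following sketch:
```shell
python tools/test.py \
    ${CONFIG_FILE} \
    ${CHECKPOINT_FILE} \
    --cfg-options test_dataloader.batch_size=2
```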
## Test Time Augmentation (TTA)
Test time augmentation (TTA) is a data augmentation strategy used during the test phase. It applies different augmentations, such as flipping and scaling, to the same image for model inference, and then merges the predictions of each augmented image to obtain more accurate predictions. To make it easier for users to use TTA, MMEngine provides [BaseTTAModel](https://mmengine.readthedocs.io/en/latest/api/generated/mmengine.model.BaseTTAModel.html#mmengine.model.BaseTTAModel) class, which allows users to implement different TTA strategies by simply extending the BaseTTAModel class according to their needs.
In MMDetection, we provide the [DetTTAModel](../../../mmdet/models/test_time_augs/det_tta.py) class, which inherits from BaseTTAModel.
### Use case
Using TTA requires two steps. First, you need to add `tta_model` and `tta_pipeline` in the configuration file:
```python
tta_model = dict(
    type='DetTTAModel',
    tta_cfg=dict(nms=dict(
                     type='nms',
                     iou_threshold=0.5),
                 max_per_img=100))
tta_pipeline = [
    dict(type='LoadImageFromFile',
         backend_args=None),
    dict(
        type='TestTimeAug',
        transforms=[[
            dict(type='Resize', scale=(1333, 800), keep_ratio=True)
        ], [  # It uses 2 flipping transformations (flipping and not flipping).
            dict(type='RandomFlip', prob=1.),
            dict(type='RandomFlip', prob=0.)
        ], [
            dict(
                type='PackDetInputs',
                meta_keys=('img_id', 'img_path', 'ori_shape',
                           'img_shape', 'scale_factor', 'flip',
                           'flip_direction'))
        ]])
]
```
Second, set `--tta` when running the test scripts as examples below:
```shell
# Single-gpu testing
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--tta]
# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--out ${RESULT_FILE}] \
[--tta]
# Multi-gpu testing
bash tools/dist_test.sh \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
${GPU_NUM} \
[--tta]
```
You can also modify the TTA config by yourself, such as adding scaling enhancement:
```python
tta_model = dict(
    type='DetTTAModel',
    tta_cfg=dict(nms=dict(
                     type='nms',
                     iou_threshold=0.5),
                 max_per_img=100))
img_scales = [(1333, 800), (666, 400), (2000, 1200)]
tta_pipeline = [
    dict(type='LoadImageFromFile',
         backend_args=None),
    dict(
        type='TestTimeAug',
        transforms=[[
            dict(type='Resize', scale=s, keep_ratio=True) for s in img_scales
        ], [
            dict(type='RandomFlip', prob=1.),
            dict(type='RandomFlip', prob=0.)
        ], [
            dict(
                type='PackDetInputs',
                meta_keys=('img_id', 'img_path', 'ori_shape',
                           'img_shape', 'scale_factor', 'flip',
                           'flip_direction'))
        ]])
]
```
The above data augmentation pipeline will first perform 3 multi-scaling transformations on the image, followed by 2 flipping transformations (flipping and not flipping). Finally, the image is packaged into the final result using PackDetInputs.
Here are more TTA use cases for your reference:
- [RetinaNet](../../../configs/retinanet/retinanet_tta.py)
- [CenterNet](../../../configs/centernet/centernet_tta.py)
- [YOLOX](../../../configs/yolox/yolox_tta.py)
- [RTMDet](../../../configs/rtmdet/rtmdet_tta.py)
For more advanced usage and the data flow of TTA, please refer to [MMEngine](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/test_time_augmentation.html#data-flow). We will support instance segmentation TTA later.
# Test Results Submission
## Panoptic segmentation test results submission
The following sections introduce how to produce the prediction results of panoptic segmentation models on the COCO test-dev set and submit the predictions to [COCO evaluation server](https://competitions.codalab.org/competitions/19507).
### Prerequisites
- Download [COCO test dataset images](http://images.cocodataset.org/zips/test2017.zip), [testing image info](http://images.cocodataset.org/annotations/image_info_test2017.zip), and [panoptic train/val annotations](http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip), then unzip them, put 'test2017' into `data/coco/`, and put the JSON files and annotation files into `data/coco/annotations/`.
```shell
# suppose data/coco/ does not exist
mkdir -pv data/coco/
# download test2017
wget -P data/coco/ http://images.cocodataset.org/zips/test2017.zip
wget -P data/coco/ http://images.cocodataset.org/annotations/image_info_test2017.zip
wget -P data/coco/ http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
# unzip them
unzip data/coco/test2017.zip -d data/coco/
unzip data/coco/image_info_test2017.zip -d data/coco/
unzip data/coco/panoptic_annotations_trainval2017.zip -d data/coco/
# remove zip files (optional)
rm -rf data/coco/test2017.zip data/coco/image_info_test2017.zip data/coco/panoptic_annotations_trainval2017.zip
```
- Run the following code to update category information in testing image info. Since the attribute `isthing` is missing in category information of 'image_info_test-dev2017.json', we need to update it with the category information in 'panoptic_val2017.json'.
```shell
python tools/misc/gen_coco_panoptic_test_info.py data/coco/annotations
```
After completing the above preparations, your directory structure of `data` should be like this:
```text
data
`-- coco
|-- annotations
| |-- image_info_test-dev2017.json
| |-- image_info_test2017.json
| |-- panoptic_image_info_test-dev2017.json
| |-- panoptic_train2017.json
| |-- panoptic_train2017.zip
| |-- panoptic_val2017.json
| `-- panoptic_val2017.zip
`-- test2017
```
### Inference on coco test-dev
To do inference on coco test-dev, we should update the settings of `test_dataloader` and `test_evaluator` first. There are two ways to do this: 1. update them in the config file; 2. update them on the command line.
#### Update them in config file
The relevant settings are provided at the end of `configs/_base_/datasets/coco_panoptic.py`, as below.
```python
test_dataloader = dict(
    batch_size=1,
    num_workers=1,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='annotations/panoptic_image_info_test-dev2017.json',
        data_prefix=dict(img='test2017/'),
        test_mode=True,
        pipeline=test_pipeline))
test_evaluator = dict(
    type='CocoPanopticMetric',
    format_only=True,
    ann_file=data_root + 'annotations/panoptic_image_info_test-dev2017.json',
    outfile_prefix='./work_dirs/coco_panoptic/test')
```
Either of the following ways can be used to update the settings for inference on the coco test-dev set.
Case 1: Directly uncomment the setting in `configs/_base_/datasets/coco_panoptic.py`.
Case 2: Copy the following setting to the config file you used now.
```python
test_dataloader = dict(
    dataset=dict(
        ann_file='annotations/panoptic_image_info_test-dev2017.json',
        data_prefix=dict(img='test2017/', _delete_=True)))
test_evaluator = dict(
    format_only=True,
    ann_file=data_root + 'annotations/panoptic_image_info_test-dev2017.json',
    outfile_prefix='./work_dirs/coco_panoptic/test')
```
Then infer on the coco test-dev set with the following command.
```shell
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE}
```
#### Update them in command line
The commands to update the related settings and run inference on coco test-dev are as below.
```shell
# test with single gpu
CUDA_VISIBLE_DEVICES=0 python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
--cfg-options \
test_dataloader.dataset.ann_file=annotations/panoptic_image_info_test-dev2017.json \
test_dataloader.dataset.data_prefix.img=test2017 \
test_dataloader.dataset.data_prefix._delete_=True \
test_evaluator.format_only=True \
test_evaluator.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json \
test_evaluator.outfile_prefix=${WORK_DIR}/results
# test with four gpus
CUDA_VISIBLE_DEVICES=0,1,3,4 bash tools/dist_test.sh \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
4 \
--cfg-options \
test_dataloader.dataset.ann_file=annotations/panoptic_image_info_test-dev2017.json \
test_dataloader.dataset.data_prefix.img=test2017 \
test_dataloader.dataset.data_prefix._delete_=True \
test_evaluator.format_only=True \
test_evaluator.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json \
test_evaluator.outfile_prefix=${WORK_DIR}/results
# test with slurm
GPUS=8 tools/slurm_test.sh \
${Partition} \
${JOB_NAME} \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
--cfg-options \
test_dataloader.dataset.ann_file=annotations/panoptic_image_info_test-dev2017.json \
test_dataloader.dataset.data_prefix.img=test2017 \
test_dataloader.dataset.data_prefix._delete_=True \
test_evaluator.format_only=True \
test_evaluator.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json \
test_evaluator.outfile_prefix=${WORK_DIR}/results
```
**Example:**
Suppose we perform inference on `test2017` using pretrained MaskFormer with ResNet-50 backbone.
```shell
# test with single gpu
CUDA_VISIBLE_DEVICES=0 python tools/test.py \
configs/maskformer/maskformer_r50_mstrain_16x1_75e_coco.py \
checkpoints/maskformer_r50_mstrain_16x1_75e_coco_20220221_141956-bc2699cb.pth \
--cfg-options \
test_dataloader.dataset.ann_file=annotations/panoptic_image_info_test-dev2017.json \
test_dataloader.dataset.data_prefix.img=test2017 \
test_dataloader.dataset.data_prefix._delete_=True \
test_evaluator.format_only=True \
test_evaluator.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json \
test_evaluator.outfile_prefix=work_dirs/maskformer/results
```
### Rename files and zip results
After inference, the panoptic segmentation results (a json file and a directory where the masks are stored) will be in `WORK_DIR`. We should rename them according to the naming convention described on [COCO's Website](https://cocodataset.org/#upload). Finally, we need to compress the json and the directory where the masks are stored into a zip file, and rename the zip file according to the naming convention. Note that the zip file should **directly** contain the above two files.
The commands to rename files and zip results:
```shell
# In WORK_DIR, we have panoptic segmentation results: 'panoptic' and 'results.panoptic.json'.
cd ${WORK_DIR}
# replace '[algorithm_name]' with the name of algorithm you used.
mv ./panoptic ./panoptic_test-dev2017_[algorithm_name]_results
mv ./results.panoptic.json ./panoptic_test-dev2017_[algorithm_name]_results.json
zip panoptic_test-dev2017_[algorithm_name]_results.zip -ur panoptic_test-dev2017_[algorithm_name]_results panoptic_test-dev2017_[algorithm_name]_results.json
```
**We provide lots of useful tools under the `tools/` directory.**
## MOT Test-time Parameter Search
`tools/analysis_tools/mot/mot_param_search.py` can search the parameters of the `tracker` in MOT models.
It is used in the same manner as `tools/test.py` but **different** in the configs.
Here is an example that shows how to modify the configs:
1. Define the desirable evaluation metrics to record.
For example, you can define the `evaluator` as
```python
test_evaluator=dict(type='MOTChallengeMetrics', metric=['HOTA', 'CLEAR', 'Identity'])
```
Of course, you can also customize the contents of `metric` in `test_evaluator`. You are free to choose one or more of `['HOTA', 'CLEAR', 'Identity']`.
2. Define the parameters and the values to search.
Assume you have a tracker like
```python
model = dict(
    tracker=dict(
        type='BaseTracker',
        obj_score_thr=0.5,
        match_iou_thr=0.5
    )
)
```
If you want to search the parameters of the tracker, just change the values to lists as follows:
```python
model = dict(
    tracker=dict(
        type='BaseTracker',
        obj_score_thr=[0.4, 0.5, 0.6],
        match_iou_thr=[0.4, 0.5, 0.6, 0.7]
    )
)
```
Then the script will test all 12 (3 x 4) parameter combinations and log the results.
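Since the script is used in the same manner as `tools/test.py`, a run might look like the following sketch (the exact optional arguments depend on how your model loads weights, e.g. `--checkpoint` or `--detector`):
```shell
python tools/analysis_tools/mot/mot_param_search.py ${CONFIG_FILE} [optional arguments]
```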
## MOT Error Visualize
`tools/analysis_tools/mot/mot_error_visualize.py` can visualize errors for multiple object tracking.
This script needs the result of inference. By default, the **red** bounding box denotes a false positive, the **yellow** bounding box denotes a false negative, and the **blue** bounding box denotes an ID switch.
```shell
python tools/analysis_tools/mot/mot_error_visualize.py \
${CONFIG_FILE} \
--input ${INPUT} \
--result-dir ${RESULT_DIR} \
[--output-dir ${OUTPUT}] \
[--fps ${FPS}] \
[--show] \
[--backend ${BACKEND}]
```
The `RESULT_DIR` contains the inference results of all videos and the inference result is a `txt` file.
Optional arguments:
- `OUTPUT`: Output of the visualized demo. If not specified, `--show` must be specified to show the video on the fly.
- `FPS`: FPS of the output video.
- `--show`: Whether to show the video on the fly.
- `BACKEND`: The backend to visualize the boxes. Options are `cv2` and `plt`.
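A hypothetical invocation (the input sequence path, result directory and output directory below are placeholders):
```shell
python tools/analysis_tools/mot/mot_error_visualize.py \
    configs/sort/sort_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
    --input data/MOT17/train/MOT17-02-FRCNN \
    --result-dir results/sort_mot17 \
    --output-dir mot_error_vis \
    --fps 30 \
    --backend cv2
```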
## Browse dataset
`tools/analysis_tools/mot/browse_dataset.py` can visualize the training dataset to check whether the dataset configuration is correct.
**Examples:**
```shell
python tools/analysis_tools/mot/browse_dataset.py ${CONFIG_FILE} [--show-interval ${SHOW_INTERVAL}]
```
Optional arguments:
- `SHOW_INTERVAL`: The display interval (in seconds).
- `--show`: Whether to show the images on the fly.
# Learn about Configs
We use python files as our config system. You can find all the provided configs under `$MMDetection/configs`.
We incorporate modular and inheritance design into our config system,
which is convenient to conduct various experiments.
If you wish to inspect the config file,
you may run `python tools/misc/print_config.py /PATH/TO/CONFIG` to see the complete config.
## A brief description of a complete config
A complete config usually contains the following primary fields (a short illustrative fragment follows the list):
- `model`: the basic config of the model, which may contain `data_preprocessor`, modules (e.g., `detector`, `motion`), `train_cfg`, `test_cfg`, etc.
- `train_dataloader`: the config of the training dataloader, which usually contains `batch_size`, `num_workers`, `sampler`, `dataset`, etc.
- `val_dataloader`: the config of the validation dataloader, which is similar to `train_dataloader`.
- `test_dataloader`: the config of the testing dataloader, which is similar to `train_dataloader`.
- `val_evaluator`: the config of the validation evaluator. For example, `type='MOTChallengeMetrics'` for the MOT task on the MOTChallenge benchmarks.
- `test_evaluator`: the config of the testing evaluator, which is similar to `val_evaluator`.
- `train_cfg`: the config of training loop. For example, `type='EpochBasedTrainLoop'`.
- `val_cfg`: the config of validation loop. For example, `type='VideoValLoop'`.
- `test_cfg`: the config of testing loop. For example, `type='VideoTestLoop'`.
- `default_hooks`: the config of default hooks, which may include hooks for timer, logger, param_scheduler, checkpoint, sampler_seed, visualization, etc.
- `vis_backends`: the config of visualization backends, which uses `type='LocalVisBackend'` as default.
- `visualizer`: the config of visualizer. `type='TrackLocalVisualizer'` for MOT tasks.
- `param_scheduler`: the config of parameter scheduler, which usually sets the learning rate scheduler.
- `optim_wrapper`: the config of optimizer wrapper, which contains optimization-related information, for example optimizer, gradient clipping, etc.
- `load_from`: load a model as a pre-trained model from a given path.
- `resume`: If `True`, resume from the checkpoint specified by `load_from`, and training will continue from the epoch at which the checkpoint was saved.
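As a hypothetical fragment, a few of these fields might look like this in a tracking config (the values are illustrative; see the provided configs for real settings):
```python
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=4, val_interval=1)
val_cfg = dict(type='VideoValLoop')
test_cfg = dict(type='VideoTestLoop')
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(type='TrackLocalVisualizer', vis_backends=vis_backends)
load_from = None
resume = False
```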
## Modify config through script arguments
When submitting jobs using `tools/train.py` or `tools/test_tracking.py`,
you may specify `--cfg-options` to in-place modify the config.
We present several examples below; a command that combines them follows the list.
For more details, please refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/config.md).
- **Update config keys of dict chains.**
The config options can be specified following the order of the dict keys in the original config.
For example, `--cfg-options model.detector.backbone.norm_eval=False` changes all the BN modules in the model backbone to `train` mode.
- **Update keys inside a list of configs.**
Some config dicts are composed as a list in your config.
For example, the testing pipeline `test_dataloader.dataset.pipeline` is normally a list e.g. `[dict(type='LoadImageFromFile'), ...]`.
If you want to change `LoadImageFromFile` to `LoadImageFromWebcam` in the pipeline,
you may specify `--cfg-options test_dataloader.dataset.pipeline.0.type=LoadImageFromWebcam`.
- **Update values of list/tuples.**
Maybe the value to be updated is a list or a tuple.
For example, you can change the key `mean` of `data_preprocessor` by specifying `--cfg-options model.data_preprocessor.mean=[0,0,0]`.
Note that **NO** white space is allowed inside the specified value.
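Putting these together, a hypothetical command that combines several overrides (checkpoint arguments omitted) could look like this:
```shell
python tools/test_tracking.py ${CONFIG_FILE} \
    --cfg-options model.detector.backbone.norm_eval=False \
    test_dataloader.dataset.pipeline.0.type=LoadImageFromWebcam \
    model.data_preprocessor.mean=[0,0,0]
```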
## Config File Structure
There are 3 basic component types under `configs/_base_`, i.e., dataset, model and default_runtime.
Many methods, such as SORT and DeepSORT, can be easily constructed with one of each.
The configs that are composed by components from `_base_` are called *primitive*.
For all configs under the same folder, it is recommended to have only **one** *primitive* config.
All other configs should inherit from the *primitive* config.
In this way, the maximum inheritance level is 3.
For easy understanding, we recommend contributors inherit from existing methods.
For example, if some modification is made based on Faster R-CNN,
users may first inherit the basic Faster R-CNN structure
by specifying `_base_ = ../_base_/models/faster-rcnn_r50-dc5.py`,
then modify the necessary fields in the config files.
If you are building an entirely new method that does not share the structure with any of the existing methods,
you may create a folder `method_name` under `configs`.
Please refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/config.md) for detailed documentation.
## Config Name Style
We follow the style below to name config files; contributors are advised to follow the same style. A concrete example follows the component list.
```shell
{method}_{module}_{train_cfg}_{train_data}_{test_data}
```
- `{method}`: method name, like `sort`.
- `{module}`: basic modules of the method, like `faster-rcnn_r50_fpn`.
- `{train_cfg}`: training config which usually contains batch size, epochs, etc, like `8xb4-80e`.
- `{train_data}`: training data, like `mot17halftrain`.
- `{test_data}`: testing data, like `test-mot17halfval`.
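For instance, the SORT config used elsewhere in this documentation breaks down as follows:
```shell
sort_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py
# {method}     = sort
# {module}     = faster-rcnn_r50_fpn
# {train_cfg}  = 8xb2-4e
# {train_data} = mot17halftrain
# {test_data}  = test-mot17halfval
```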
## FAQ
**Ignore some fields in the base configs**
Sometimes, you may set `_delete_=True` to ignore some of the fields in the base configs.
You may refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/config.md) for simple illustration.
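As a minimal sketch (reusing the panoptic example from earlier in this document), `_delete_=True` replaces the whole base dict instead of merging new keys into it:
```python
# Replace the entire data_prefix dict inherited from the base config.
test_dataloader = dict(
    dataset=dict(
        data_prefix=dict(img='test2017/', _delete_=True)))
```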
## Tracking Data Structure Introduction
### Advantages and new features
In the mmdetection tracking task, we employ videos to organize the dataset and use
TrackDataSample to describe the dataset info.
- Based on the video organization, we provide the transform `UniformRefFrameSample` to sample key frames and reference frames, and use `TransformBroadcaster` for clip training.
- TrackDataSample can be viewed as a wrapper of multiple DetDataSample to some extent. It contains a property `video_data_samples`, which is a list of DetDataSample, each of which corresponds to a single frame. In addition, its metainfo includes `key_frames_inds` and `ref_frames_inds` to support the clip training style.
- Thanks to the video-based data organization, the entire video can be tested directly. This way is more concise and intuitive. We also provide an image-based test method if your GPU memory cannot fit the entire video.
### TODO
- Some algorithms like StrongSORT and Mask2Former cannot support video_based testing. These algorithms pose a challenge to GPU memory. We will optimize this problem in the future.
- Currently, we do not support joint training of a video_based dataset like the MOT Challenge dataset and an image_based dataset like CrowdHuman for the algorithm QDTrack. We will optimize this problem in the future.
## Dataset Preparation
This page provides instructions for dataset preparation on existing benchmarks, including
- Multiple Object Tracking
- [MOT Challenge](https://motchallenge.net/)
- [CrowdHuman](https://www.crowdhuman.org/)
- Video Instance Segmentation
- [YouTube-VIS](https://youtube-vos.org/dataset/vis/)
### 1. Download Datasets
Please download the datasets from the official websites. It is recommended to symlink the root of the datasets to `$MMDETECTION/data`.
#### 1.1 Multiple Object Tracking
- For the training and testing of the multi-object tracking task, one of the MOT Challenge datasets (e.g. MOT17, MOT20) is needed; CrowdHuman can serve as a complementary dataset.
- For users in China, the following datasets can be downloaded from [OpenDataLab](https://opendatalab.com/) with high speed:
- [MOT17](https://opendatalab.com/MOT17/download)
- [MOT20](https://opendatalab.com/MOT20/download)
- [CrowdHuman](https://opendatalab.com/CrowdHuman/download)
#### 1.2 Video Instance Segmentation
- For the training and testing of the video instance segmentation task, only one of the YouTube-VIS datasets (e.g. YouTube-VIS 2019, YouTube-VIS 2021) is needed.
- The YouTube-VIS 2019 dataset can be downloaded from [YouTubeVOS](https://codalab.lisn.upsaclay.fr/competitions/6064)
- The YouTube-VIS 2021 dataset can be downloaded from [YouTubeVOS](https://codalab.lisn.upsaclay.fr/competitions/7680)
#### 1.3 Data Structure
If your folder structure is different from the following, you may need to change the corresponding paths in config files.
```
mmdetection
├── mmdet
├── tools
├── configs
├── data
│ ├── coco
│ │ ├── train2017
│ │ ├── val2017
│ │ ├── test2017
│ │ ├── annotations
│ │
| ├── MOT15/MOT16/MOT17/MOT20
| | ├── train
| | | ├── MOT17-02-DPM
| | | | ├── det
| │ │ │ ├── gt
| │ │ │ ├── img1
| │ │ │ ├── seqinfo.ini
│ │ │ ├── ......
| | ├── test
| | | ├── MOT17-01-DPM
| | | | ├── det
| │ │ │ ├── img1
| │ │ │ ├── seqinfo.ini
│ │ │ ├── ......
│ │
│ ├── crowdhuman
│ │ ├── annotation_train.odgt
│ │ ├── annotation_val.odgt
│ │ ├── train
│ │ │ ├── Images
│ │ │ ├── CrowdHuman_train01.zip
│ │ │ ├── CrowdHuman_train02.zip
│ │ │ ├── CrowdHuman_train03.zip
│ │ ├── val
│ │ │ ├── Images
│ │ │ ├── CrowdHuman_val.zip
│ │
```
### 2. Convert Annotations
In this case, you need to convert the official annotations to COCO style. We provide scripts and their usage is as follows:
```shell
# MOT17
# The processing of other MOT Challenge dataset is the same as MOT17
python ./tools/dataset_converters/mot2coco.py -i ./data/MOT17/ -o ./data/MOT17/annotations --split-train --convert-det
python ./tools/dataset_converters/mot2reid.py -i ./data/MOT17/ -o ./data/MOT17/reid --val-split 0.2 --vis-threshold 0.3
# CrowdHuman
python ./tools/dataset_converters/crowdhuman2coco.py -i ./data/crowdhuman -o ./data/crowdhuman/annotations
# YouTube-VIS 2019
python ./tools/dataset_converters/youtubevis2coco.py -i ./data/youtube_vis_2019 -o ./data/youtube_vis_2019/annotations --version 2019
# YouTube-VIS 2021
python ./tools/dataset_converters/youtubevis2coco.py -i ./data/youtube_vis_2021 -o ./data/youtube_vis_2021/annotations --version 2021
```
The folder structure will be as follows after you run these scripts:
```
mmdetection
├── mmdet
├── tools
├── configs
├── data
│ ├── coco
│ │ ├── train2017
│ │ ├── val2017
│ │ ├── test2017
│ │ ├── annotations
│ │
| ├── MOT15/MOT16/MOT17/MOT20
| | ├── train
| | | ├── MOT17-02-DPM
| | | | ├── det
| │ │ │ ├── gt
| │ │ │ ├── img1
| │ │ │ ├── seqinfo.ini
│ │ │ ├── ......
| | ├── test
| | | ├── MOT17-01-DPM
| | | | ├── det
| │ │ │ ├── img1
| │ │ │ ├── seqinfo.ini
│ │ │ ├── ......
| | ├── annotations
| | ├── reid
│ │ │ ├── imgs
│ │ │ ├── meta
│ │
│ ├── crowdhuman
│ │ ├── annotation_train.odgt
│ │ ├── annotation_val.odgt
│ │ ├── train
│ │ │ ├── Images
│ │ │ ├── CrowdHuman_train01.zip
│ │ │ ├── CrowdHuman_train02.zip
│ │ │ ├── CrowdHuman_train03.zip
│ │ ├── val
│ │ │ ├── Images
│ │ │ ├── CrowdHuman_val.zip
│ │ ├── annotations
│ │ │ ├── crowdhuman_train.json
│ │ │ ├── crowdhuman_val.json
│ │
│ ├── youtube_vis_2019
│ │ │── train
│ │ │ │── JPEGImages
│ │ │ │── ......
│ │ │── valid
│ │ │ │── JPEGImages
│ │ │ │── ......
│ │ │── test
│ │ │ │── JPEGImages
│ │ │ │── ......
│ │ │── train.json (the official annotation files)
│ │ │── valid.json (the official annotation files)
│ │ │── test.json (the official annotation files)
│ │ │── annotations (the converted annotation file)
│ │
│ ├── youtube_vis_2021
│ │ │── train
│ │ │ │── JPEGImages
│ │ │ │── instances.json (the official annotation files)
│ │ │ │── ......
│ │ │── valid
│ │ │ │── JPEGImages
│ │ │ │── instances.json (the official annotation files)
│ │ │ │── ......
│ │ │── test
│ │ │ │── JPEGImages
│ │ │ │── instances.json (the official annotation files)
│ │ │ │── ......
│ │ │── annotations (the converted annotation file)
```
#### The folder of annotations and reid in MOT15/MOT16/MOT17/MOT20
We take the MOT17 dataset as an example; the other datasets share a similar structure.
There are 8 JSON files in `data/MOT17/annotations`:
`train_cocoformat.json`: JSON file containing the annotation information of the training set in the MOT17 dataset.
`train_detections.pkl`: Pickle file containing the public detections of the training set in the MOT17 dataset.
`test_cocoformat.json`: JSON file containing the annotation information of the testing set in the MOT17 dataset.
`test_detections.pkl`: Pickle file containing the public detections of the testing set in the MOT17 dataset.
`half-train_cocoformat.json`, `half-train_detections.pkl`, `half-val_cocoformat.json` and `half-val_detections.pkl` share similar meanings with `train_cocoformat.json` and `train_detections.pkl`. The `half` means we split each video in the training set into two halves. The first half of each video is denoted as the `half-train` set, and the second half is denoted as the `half-val` set.
The structure of `data/MOT17/reid` is as follows:
```
reid
├── imgs
│ ├── MOT17-02-FRCNN_000002
│ │ ├── 000000.jpg
│ │ ├── 000001.jpg
│ │ ├── ...
│ ├── MOT17-02-FRCNN_000003
│ │ ├── 000000.jpg
│ │ ├── 000001.jpg
│ │ ├── ...
├── meta
│ ├── train_80.txt
│ ├── val_20.txt
```
The `80` in `train_80.txt` means the proportion of the training dataset to the whole ReID dataset is 80%, while the proportion of the validation dataset is 20%.
For training, we provide an annotation list `train_80.txt`. Each line of the list contains a filename and its corresponding ground-truth label. The format is as follows:
```
MOT17-05-FRCNN_000110/000018.jpg 0
MOT17-13-FRCNN_000146/000014.jpg 1
MOT17-05-FRCNN_000088/000004.jpg 2
MOT17-02-FRCNN_000009/000081.jpg 3
```
`MOT17-05-FRCNN_000110` denotes the 110th person in the `MOT17-05-FRCNN` video.
For validation, the annotation list `val_20.txt` follows the same format as above.
Images in `reid/imgs` are cropped from raw images in `MOT17/train` by the corresponding `gt.txt`. The value of ground-truth labels should fall in range `[0, num_classes - 1]`.
#### The folder of annotations in crowdhuman
There are 2 JSON files in `data/crowdhuman/annotations`:
`crowdhuman_train.json`: JSON file containing the annotation information of the training set in the CrowdHuman dataset.
`crowdhuman_val.json`: JSON file containing the annotation information of the validation set in the CrowdHuman dataset.
#### The folder of annotations in youtube_vis_2019/youtube_vis_2021
There are 3 JSON files in `data/youtube_vis_2019/annotations` or `data/youtube_vis_2021/annotations`:
`youtube_vis_2019_train.json`/`youtube_vis_2021_train.json`: JSON file containing the annotation information of the training set in the youtube_vis_2019/youtube_vis_2021 dataset.
`youtube_vis_2019_valid.json`/`youtube_vis_2021_valid.json`: JSON file containing the annotation information of the validation set in the youtube_vis_2019/youtube_vis_2021 dataset.
`youtube_vis_2019_test.json`/`youtube_vis_2021_test.json`: JSON file containing the annotation information of the testing set in the youtube_vis_2019/youtube_vis_2021 dataset.
# Inference
We provide demo scripts to run inference on a given video or a folder that contains consecutive images. The source code is available [here](https://github.com/open-mmlab/mmdetection/tree/tracking/demo).
Note that if you use a folder as the input, the image names must be **sortable**, which means we can re-order the images according to the numbers contained in the filenames. We currently only support reading images whose filenames end with `.jpg`, `.jpeg` and `.png`.
## Inference MOT models
This script runs inference on an input video or a folder of images with a multiple object tracking or video instance segmentation model.
```shell
python demo/mot_demo.py \
${INPUTS} \
${CONFIG_FILE} \
[--checkpoint ${CHECKPOINT_FILE}] \
[--detector ${DETECTOR_FILE}] \
[--reid ${REID_FILE}] \
[--score-thr ${SCORE_THR}] \
[--device ${DEVICE}] \
[--out ${OUTPUT}] \
[--show]
```
The `INPUTS` and `OUTPUT` support both the _mp4 video_ format and the _folder_ format.
**Important:** `DeepSORT`, `SORT` and `StrongSORT` need to load the weights of the `reid` and the `detector` separately. Therefore, we use `--detector` and `--reid` to load their weights. Other algorithms such as `ByteTrack`, `OCSORT`, `QDTrack`, `MaskTrackRCNN` and `Mask2Former` use `--checkpoint` to load weights.
Optional arguments:
- `CHECKPOINT_FILE`: The checkpoint is optional.
- `DETECTOR_FILE`: The detector is optional.
- `REID_FILE`: The reid is optional.
- `SCORE_THR`: The threshold of score to filter bboxes.
- `DEVICE`: The device for inference. Options are `cpu` or `cuda:0`, etc.
- `OUTPUT`: Output of the visualized demo. If not specified, `--show` must be specified to show the video on the fly.
- `--show`: Whether to show the video on the fly.
**Examples of running mot model:**
```shell
# Example 1: do not specify --checkpoint to use --detector
python demo/mot_demo.py \
demo/demo_mot.mp4 \
configs/sort/sort_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
--detector \
https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-half-64ee2ed4.pth \
--out mot.mp4
# Example 2: use --checkpoint
python demo/mot_demo.py \
demo/demo_mot.mp4 \
configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
--checkpoint https://download.openmmlab.com/mmtracking/mot/qdtrack/mot_dataset/qdtrack_faster-rcnn_r50_fpn_4e_mot17_20220315_145635-76f295ef.pth \
--out mot.mp4
```
# Learn to train and test
## Train
This section will show how to train existing models on supported datasets.
The following training environments are supported:
- CPU
- single GPU
- single node multiple GPUs
- multiple nodes
You can also manage jobs with Slurm.
Important:
- You can change the evaluation interval during training by modifying the `train_cfg` as
`train_cfg = dict(val_interval=10)`. That means evaluating the model every 10 epochs.
- The default learning rate in all config files is for 8 GPUs.
According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677),
you need to set the learning rate proportional to the batch size if you use different GPUs or images per GPU,
e.g., `lr=0.01` for 8 GPUs * 1 img/gpu and `lr=0.04` for 16 GPUs * 2 imgs/gpu.
- During training, log files and checkpoints will be saved to the working directory,
which is specified by CLI argument `--work-dir`. It uses `./work_dirs/CONFIG_NAME` as default.
- If you want mixed precision training, simply specify the CLI argument `--amp`. An example command combining some of these options follows this list.
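For example, a hypothetical single-GPU training command combining `--work-dir` and `--amp` (the work directory name is just a placeholder):
```shell
python tools/train.py \
    configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
    --work-dir ./work_dirs/qdtrack_mot17half \
    --amp
```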
#### 1. Train on CPU
By default, the model is put on a CUDA device; it falls back to the CPU only when no CUDA devices are available.
So if you want to train the model on the CPU, you need to `export CUDA_VISIBLE_DEVICES=-1` to disable GPU visibility first.
More details in [MMEngine](https://github.com/open-mmlab/mmengine/blob/ca282aee9e402104b644494ca491f73d93a9544f/mmengine/runner/runner.py#L849-L850).
```shell script
CUDA_VISIBLE_DEVICES=-1 python tools/train.py ${CONFIG_FILE} [optional arguments]
```
An example of training the MOT model QDTrack on CPU:
```shell script
CUDA_VISIBLE_DEVICES=-1 python tools/train.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py
```
#### 2. Train on single GPU
If you want to train the model on a single GPU, you can directly use `tools/train.py` as follows.
```shell script
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
You can use `export CUDA_VISIBLE_DEVICES=$GPU_ID` to select the GPU.
An example of training the MOT model QDTrack on single GPU:
```shell script
CUDA_VISIBLE_DEVICES=2 python tools/train.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py
```
#### 3. Train on single node multiple GPUs
We provide `tools/dist_train.sh` to launch training on multiple GPUs.
The basic usage is as follows.
```shell script
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
If you would like to launch multiple jobs on a single machine,
e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs,
you need to specify different ports (29500 by default) for each job to avoid communication conflict.
For example, you can set the port in commands as follows.
```shell script
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 ./tools/dist_train.sh ${CONFIG_FILE} 4
```
An example of training the MOT model QDTrack on single node multiple GPUs:
```shell script
bash ./tools/dist_train.sh configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py 8
```
#### 4. Train on multiple nodes
If you launch with multiple machines simply connected with ethernet, you can simply run the following commands:
On the first machine:
```shell script
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
```
On the second machine:
```shell script
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
```
Usually it is slow if you do not have high speed networking like InfiniBand.
#### 5. Train with Slurm
[Slurm](https://slurm.schedmd.com/) is a good job scheduling system for computing clusters.
On a cluster managed by Slurm, you can use `slurm_train.sh` to spawn training jobs.
It supports both single-node and multi-node training.
The basic usage is as follows.
```shell script
bash ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR} ${GPUS}
```
An example of training the MOT model QDTrack with Slurm:
```shell script
PORT=29501 \
GPUS_PER_NODE=8 \
SRUN_ARGS="--quotatype=reserved" \
bash ./tools/slurm_train.sh \
mypartition \
mottrack \
configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
./work_dirs/QDTrack \
8
```
## Test
This section will show how to test existing models on supported datasets.
The following testing environments are supported:
- CPU
- single GPU
- single node multiple GPUs
- multiple nodes
You can also manage jobs with Slurm.
Important:
- In MOT, some algorithms like `DeepSORT`, `SORT` and `StrongSORT` need to load the weights of the `reid` and the `detector` separately.
Other algorithms such as `ByteTrack`, `OCSORT` and `QDTrack` don't. So we provide `--checkpoint`, `--detector` and `--reid` to load weights.
- We provide two ways to evaluate and test models: video_based test and image_based test. Some algorithms like `StrongSORT` and `Mask2Former` only support
the video_based test. If your GPU memory can't fit the entire video, you can switch the test mode by setting the sampler type, as shown in the config sketch after this list.
For example:
video_based test: `sampler=dict(type='DefaultSampler', shuffle=False, round_up=False)`
image_based test: `sampler=dict(type='TrackImgSampler')`
- You can set the results saving path by modifying the key `outfile_prefix` in the evaluator.
For example, `val_evaluator = dict(outfile_prefix='results/sort_mot17')`.
Otherwise, a temporary file will be created and removed after evaluation.
- If you just want the formatted results without evaluation, you can set `format_only=True`.
For example, `test_evaluator = dict(type='MOTChallengeMetric', metric=['HOTA', 'CLEAR', 'Identity'], outfile_prefix='sort_mot17_results', format_only=True)`
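A minimal sketch of such an override, combining the settings mentioned above (the metric list and output prefix are taken from the examples in this list):
```python
# Switch to image-based testing and only write formatted results without evaluating.
test_dataloader = dict(sampler=dict(type='TrackImgSampler'))
test_evaluator = dict(
    type='MOTChallengeMetric',
    metric=['HOTA', 'CLEAR', 'Identity'],
    outfile_prefix='results/sort_mot17',
    format_only=True)
```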
#### 1. Test on CPU
By default, the model is put on a CUDA device; it falls back to the CPU only when no CUDA devices are available.
So if you want to test the model on the CPU, you need to `export CUDA_VISIBLE_DEVICES=-1` to disable GPU visibility first.
More details in [MMEngine](https://github.com/open-mmlab/mmengine/blob/ca282aee9e402104b644494ca491f73d93a9544f/mmengine/runner/runner.py#L849-L850).
```shell script
CUDA_VISIBLE_DEVICES=-1 python tools/test_tracking.py ${CONFIG_FILE} [optional arguments]
```
An example of testing the MOT model SORT on CPU:
```shell script
CUDA_VISIBLE_DEVICES=-1 python tools/test_tracking.py configs/sort/sort_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py --detector ${CHECKPOINT_FILE}
```
#### 2. Test on single GPU
If you want to test the model on a single GPU, you can directly use `tools/test_tracking.py` as follows.
```shell script
python tools/test_tracking.py ${CONFIG_FILE} [optional arguments]
```
You can use `export CUDA_VISIBLE_DEVICES=$GPU_ID` to select the GPU.
An example of testing the MOT model QDTrack on single GPU:
```shell script
CUDA_VISIBLE_DEVICES=2 python tools/test_tracking.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py --detector ${CHECKPOINT_FILE}
```
#### 3. Test on single node multiple GPUs
We provide `tools/dist_test_tracking.sh` to launch testing on multiple GPUs.
The basic usage is as follows.
```shell script
bash ./tools/dist_test_tracking.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
An example of testing the MOT model DeepSORT on single node multiple GPUs:
```shell script
bash ./tools/dist_test_tracking.sh configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py 8 --detector ${CHECKPOINT_FILE} --reid ${CHECKPOINT_FILE}
```
#### 4. Test on multiple nodes
You can test on multiple nodes, which is similar with "Train on multiple nodes".
#### 5. Test with Slurm
On a cluster managed by Slurm, you can use `slurm_test_tracking.sh` to spawn testing jobs.
It supports both single-node and multi-node testing.
The basic usage is as follows.
```shell script
[GPUS=${GPUS}] bash tools/slurm_test_tracking.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} [optional arguments]
```
An example of testing the VIS model Mask2former with Slurm:
```shell script
GPUS=8 \
bash tools/slurm_test_tracking.sh \
mypartition \
vis \
configs/mask2former_vis/mask2former_r50_8xb2-8e_youtubevis2021.py \
--checkpoint ${CHECKPOINT_FILE}
```
# Learn about Visualization
## Local Visualization
This section will present how to visualize the detection/tracking results with local visualizer.
If you want to draw prediction results, you can turn this feature on by setting `draw=True` in `TrackVisualizationHook` as follows.
```python
default_hooks = dict(visualization=dict(type='TrackVisualizationHook', draw=True))
```
Specifically, the `TrackVisualizationHook` has the following arguments:
- `draw`: whether to draw prediction results. If it is False, it means that no drawing will be done. Defaults to False.
- `interval`: The interval of visualization. Defaults to 30.
- `score_thr`: The threshold to visualize the bboxes and masks. Defaults to 0.3.
- `show`: Whether to display the drawn image. Defaults to False.
- `wait_time`: The interval of show (s). Defaults to 0.
- `test_out_dir`: The directory where painted images will be saved during testing.
- `backend_args`: Arguments to instantiate a file client. Defaults to `None`.
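For instance, a hypothetical hook configuration that draws every 10th frame and saves the painted images during testing could look like this:
```python
default_hooks = dict(
    visualization=dict(
        type='TrackVisualizationHook',
        draw=True,
        interval=10,
        score_thr=0.3,
        test_out_dir='vis_results'))
```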
In the `TrackVisualizationHook`, `TrackLocalVisualizer` will be called to implement visualization for MOT and VIS tasks.
We will present the details below.
You can refer to MMEngine for more details about [Visualization](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/visualization.md) and [Hook](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/hook.md).
#### Tracking Visualization
We realize the tracking visualization with class `TrackLocalVisualizer`.
You can call it as follows.
```python
visualizer = dict(type='TrackLocalVisualizer')
```
It has the following arguments (a sample configuration follows the list):
- `name`: Name of the instance. Defaults to 'visualizer'.
- `image`: The original image to draw. The format should be RGB. Defaults to None.
- `vis_backends`: Visual backend config list. Defaults to None.
- `save_dir`: Save file dir for all storage backends. If it is None, the backend storage will not save any data.
- `line_width`: The linewidth of lines. Defaults to 3.
- `alpha`: The transparency of bboxes or mask. Defaults to 0.8.
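As a hypothetical configuration that also stores the drawn images via a local backend:
```python
visualizer = dict(
    type='TrackLocalVisualizer',
    vis_backends=[dict(type='LocalVisBackend')],
    save_dir='vis_data',
    line_width=3,
    alpha=0.8)
```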
Here is a visualization example of DeepSORT:
![test_img_89](https://user-images.githubusercontent.com/99722489/186062929-6d0e4663-0d8e-4045-9ec8-67e0e41da876.png)
# Train predefined models on standard datasets
MMDetection also provides out-of-the-box tools for training detection models.
This section will show how to train _predefined_ models (under [configs](../../../configs)) on standard datasets i.e. COCO.
## Prepare datasets
Preparing datasets is also necessary for training. See section [Prepare datasets](#prepare-datasets) above for details.
**Note**:
Currently, the config files under `configs/cityscapes` use COCO pre-trained weights to initialize.
If your network connection is slow or unavailable, it's advisable to download existing models before beginning training to avoid errors.
## Learning rate auto scaling
**Important**: The default learning rate in config files is for 8 GPUs and 2 samples per GPU (batch size = 8 * 2 = 16). This value has been set as `auto_scale_lr.base_batch_size` in `configs/_base_/schedules/schedule_1x.py`. The learning rate will be automatically scaled based on this base batch size of 16. Meanwhile, to avoid affecting other codebases that use mmdet, the `auto_scale_lr.enable` flag defaults to `False`.
If you want to enable this feature, you need to add the argument `--auto-scale-lr`. You also need to check the name of the config you want to use before running the command, because the config name indicates the default batch size.
By default it is `8 x 2 = 16`, as in `faster_rcnn_r50_caffe_fpn_90k_coco.py` or `pisa_faster_rcnn_x101_32x4d_fpn_1x_coco.py`. In other cases, the config file name contains `_NxM_`, indicating the batch size: for example, `cornernet_hourglass104_mstest_32x3_210e_coco.py` has a batch size of `32 x 3 = 96`, and `scnet_x101_64x4d_fpn_8x1_20e_coco.py` has a batch size of `8 x 1 = 8`.
**Please remember to check the bottom of the specific config file you want to use; it will contain `auto_scale_lr.base_batch_size` if the batch size is not `16`. If you can't find this value, check the config files listed in `_base_=[xxx]` and you will find it. Please do not modify this value if you want to automatically scale the LR.**
The basic usage of learning rate auto scaling is as follows.
```shell
python tools/train.py \
${CONFIG_FILE} \
--auto-scale-lr \
[optional arguments]
```
If you enable this feature, the learning rate will be automatically scaled according to the number of GPUs on the machine and the training batch size. See the [linear scaling rule](https://arxiv.org/abs/1706.02677) for details. For example, if there are 4 GPUs with 2 images per GPU and `lr = 0.01`, then with 16 GPUs and 4 images per GPU the learning rate will automatically scale to `lr = 0.08`.
If you don't want to use it, you need to calculate the learning rate according to the [linear scaling rule](https://arxiv.org/abs/1706.02677) manually and then change `optimizer.lr` in the specific config file.
## Training on a single GPU
We provide `tools/train.py` to launch training jobs on a single GPU.
The basic usage is as follows.
```shell
python tools/train.py \
${CONFIG_FILE} \
[optional arguments]
```
During training, log files and checkpoints will be saved to the working directory, which is specified by `work_dir` in the config file or via CLI argument `--work-dir`.
By default, the model is evaluated on the validation set every epoch, the evaluation interval can be specified in the config file as shown below.
```python
# evaluate the model every 12 epochs.
train_cfg = dict(val_interval=12)
```
This tool accepts several optional arguments, including:
- `--work-dir ${WORK_DIR}`: Override the working directory.
- `--resume`: resume from the latest checkpoint in the work_dir automatically.
- `--resume ${CHECKPOINT_FILE}`: resume from the specific checkpoint.
- `--cfg-options 'Key=value'`: Overrides other settings in the used config.
**Note:**
There is a difference between `resume` and `load_from`:
`resume` loads both the weights of the model and the state of the optimizer, and it inherits the iteration number from the specified checkpoint, so training does not start again from scratch. `load_from`, on the other hand, only loads the weights of the model, and training starts from scratch; it is often used for fine-tuning a model. `load_from` needs to be set in the config file, while `resume` is passed as a command-line argument. A minimal fine-tuning sketch follows.
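For example, a hypothetical fine-tuning setup (the checkpoint path below reuses the Mask R-CNN checkpoint from the earlier examples and is only illustrative):
```python
# In your config: initialize from pre-trained weights without resuming optimizer state.
load_from = 'checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth'
resume = False
```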
## Training on CPU
The process of training on the CPU is consistent with single GPU training. We just need to disable GPUs before the training process.
```shell
export CUDA_VISIBLE_DEVICES=-1
```
And then run the script [above](#training-on-a-single-GPU).
**Note**:
We do not recommend users to use the CPU for training because it is too slow. We support this feature to allow users to debug on machines without GPU for convenience.
## Training on multiple GPUs
We provide `tools/dist_train.sh` to launch training on multiple GPUs.
The basic usage is as follows.
```shell
bash ./tools/dist_train.sh \
${CONFIG_FILE} \
${GPU_NUM} \
[optional arguments]
```
Optional arguments remain the same as stated [above](#training-on-a-single-GPU).
### Launch multiple jobs simultaneously
If you would like to launch multiple jobs on a single machine, e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs,
you need to specify different ports (29500 by default) for each job to avoid communication conflict.
If you use `dist_train.sh` to launch training jobs, you can set the port in the commands.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 ./tools/dist_train.sh ${CONFIG_FILE} 4
```
## Train with multiple machines
If you launch with multiple machines simply connected with ethernet, you can simply run the following commands:
On the first machine:
```shell
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
On the second machine:
```shell
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
Usually, it is slow if you do not have high-speed networking like InfiniBand.
## Manage jobs with Slurm
[Slurm](https://slurm.schedmd.com/) is a good job scheduling system for computing clusters.
On a cluster managed by Slurm, you can use `slurm_train.sh` to spawn training jobs. It supports both single-node and multi-node training.
The basic usage is as follows.
```shell
[GPUS=${GPUS}] ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR}
```
Below is an example of using 16 GPUs to train Mask R-CNN on a Slurm partition named _dev_ and setting the work-dir to a shared file system.
```shell
GPUS=16 ./tools/slurm_train.sh dev mask_r50_1x configs/mask-rcnn_r50_fpn_1x_coco.py /nfs/xxxx/mask_rcnn_r50_fpn_1x
```
You can check [the source code](../../../tools/slurm_train.sh) to review full arguments and environment variables.
When using Slurm, the port option needs to be set in one of the following ways:
1. Set the port through `--cfg-options`. This is recommended since it does not change the original configs.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR} --cfg-options 'dist_params.port=29500'
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR} --cfg-options 'dist_params.port=29501'
```
2. Modify the config files to set different communication ports.
In `config1.py`, set
```python
dist_params = dict(backend='nccl', port=29500)
```
In `config2.py`, set
```python
dist_params = dict(backend='nccl', port=29501)
```
Then you can launch two jobs with `config1.py` and `config2.py`.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR}
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR}
```
# Train with customized datasets
In this part, you will learn how to train predefined models with customized datasets and then test them. We use the [balloon dataset](https://github.com/matterport/Mask_RCNN/tree/master/samples/balloon) as an example to describe the whole process.
The basic steps are as below:
1. Prepare the customized dataset
2. Prepare a config
3. Train, test, and infer models on the customized dataset.
## Prepare the customized dataset
There are three ways to support a new dataset in MMDetection:
1. Reorganize the dataset into COCO format.
2. Reorganize the dataset into a middle format.
3. Implement a new dataset.
We recommend the first two methods, which are usually easier than the third.
In this note, we give an example of converting the data into COCO format.
**Note**: Since MMDetection 3.0, datasets and metrics have been decoupled (except for CityScapes). Therefore, users can use any kind of evaluation metric with any format of dataset during validation, for example: evaluate a COCO dataset with the VOC metric, or evaluate an OpenImages dataset with both the VOC and COCO metrics.
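As a hedged illustration of this decoupling (not a complete config), the snippet below pairs a COCO-format dataset with the VOC-style mAP metric; the dataset paths are placeholders and the exact `VOCMetric` options may differ in your version.
```python
# A minimal sketch, assuming placeholder COCO paths: a COCO-format dataset
# evaluated with the VOC-style mAP metric instead of CocoMetric.
val_dataloader = dict(
    dataset=dict(
        type='CocoDataset',
        data_root='data/coco/',
        ann_file='annotations/instances_val2017.json',
        data_prefix=dict(img='val2017/')))
val_evaluator = dict(type='VOCMetric', metric='mAP')
```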
### COCO annotation format
The necessary keys of the COCO format for instance segmentation are listed below; for complete details, please see [here](https://cocodataset.org/#format-data).
```json
{
"images": [image],
"annotations": [annotation],
"categories": [category]
}
image = {
"id": int,
"width": int,
"height": int,
"file_name": str,
}
annotation = {
"id": int,
"image_id": int,
"category_id": int,
"segmentation": RLE or [polygon],
"area": float,
"bbox": [x,y,width,height], # (x, y) are the coordinates of the upper left corner of the bbox
"iscrowd": 0 or 1,
}
categories = [{
"id": int,
"name": str,
"supercategory": str,
}]
```
Assume we use the balloon dataset.
After downloading the data, we need to implement a function to convert the annotation format into the COCO format. Then we can use the implemented `CocoDataset` to load the data and perform training and evaluation.
If you take a look at the dataset, you will find the dataset format is as below:
```json
{'base64_img_data': '',
'file_attributes': {},
'filename': '34020010494_e5cb88e1c4_k.jpg',
'fileref': '',
'regions': {'0': {'region_attributes': {},
'shape_attributes': {'all_points_x': [1020,
1000,
994,
1003,
1023,
1050,
1089,
1134,
1190,
1265,
1321,
1361,
1403,
1428,
1442,
1445,
1441,
1427,
1400,
1361,
1316,
1269,
1228,
1198,
1207,
1210,
1190,
1177,
1172,
1174,
1170,
1153,
1127,
1104,
1061,
1032,
1020],
'all_points_y': [963,
899,
841,
787,
738,
700,
663,
638,
621,
619,
643,
672,
720,
765,
800,
860,
896,
942,
990,
1035,
1079,
1112,
1129,
1134,
1144,
1153,
1166,
1166,
1150,
1136,
1129,
1122,
1112,
1084,
1037,
989,
963],
'name': 'polygon'}}},
'size': 1115004}
```
The annotation is a JSON file in which each key corresponds to all the annotations of one image.
The code to convert the balloon dataset into COCO format is as below.
```python
import os.path as osp
import mmcv
from mmengine.fileio import dump, load
from mmengine.utils import track_iter_progress
def convert_balloon_to_coco(ann_file, out_file, image_prefix):
data_infos = load(ann_file)
annotations = []
images = []
obj_count = 0
for idx, v in enumerate(track_iter_progress(data_infos.values())):
filename = v['filename']
img_path = osp.join(image_prefix, filename)
height, width = mmcv.imread(img_path).shape[:2]
images.append(
dict(id=idx, file_name=filename, height=height, width=width))
for _, obj in v['regions'].items():
assert not obj['region_attributes']
obj = obj['shape_attributes']
px = obj['all_points_x']
py = obj['all_points_y']
poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
poly = [p for x in poly for p in x]
x_min, y_min, x_max, y_max = (min(px), min(py), max(px), max(py))
data_anno = dict(
image_id=idx,
id=obj_count,
category_id=0,
bbox=[x_min, y_min, x_max - x_min, y_max - y_min],
area=(x_max - x_min) * (y_max - y_min),
segmentation=[poly],
iscrowd=0)
annotations.append(data_anno)
obj_count += 1
coco_format_json = dict(
images=images,
annotations=annotations,
categories=[{
'id': 0,
'name': 'balloon'
}])
dump(coco_format_json, out_file)
if __name__ == '__main__':
convert_balloon_to_coco(ann_file='data/balloon/train/via_region_data.json',
out_file='data/balloon/train/annotation_coco.json',
image_prefix='data/balloon/train')
convert_balloon_to_coco(ann_file='data/balloon/val/via_region_data.json',
out_file='data/balloon/val/annotation_coco.json',
image_prefix='data/balloon/val')
```
Using the function above, users can convert the annotation files into COCO JSON format; we can then use `CocoDataset` together with `CocoMetric` to train and evaluate the model.
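If you want a quick sanity check that the conversion produced a loadable COCO file, a small sketch using the `pycocotools` COCO API (assuming it is installed) could look like this:
```python
# Loads the converted file with the COCO API and prints basic counts plus the
# single 'balloon' category. Purely an optional check, not part of training.
from pycocotools.coco import COCO

coco = COCO('data/balloon/train/annotation_coco.json')
print(len(coco.imgs), 'images,', len(coco.anns), 'annotations')
print(coco.loadCats(coco.getCatIds()))  # expected: [{'id': 0, 'name': 'balloon'}]
```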
## Prepare a config
The second step is to prepare a config so that the dataset can be loaded successfully. Assume that we want to use Mask R-CNN with FPN; the config to train the detector on the balloon dataset is shown below. Assume the config lives under the directory `configs/balloon/` and is named `mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py`. Please refer to [Learn about Configs - MMDetection 3.0.0 documentation](https://mmdetection.readthedocs.io/en/latest/user_guides/config.html) for detailed information about config files.
```python
# The new config inherits a base config to highlight the necessary modification
_base_ = '../mask_rcnn/mask-rcnn_r50-caffe_fpn_ms-poly-1x_coco.py'
# We also need to change the num_classes in head to match the dataset's annotation
model = dict(
roi_head=dict(
bbox_head=dict(num_classes=1), mask_head=dict(num_classes=1)))
# Modify dataset related settings
data_root = 'data/balloon/'
metainfo = {
'classes': ('balloon', ),
'palette': [
(220, 20, 60),
]
}
train_dataloader = dict(
batch_size=1,
dataset=dict(
data_root=data_root,
metainfo=metainfo,
ann_file='train/annotation_coco.json',
data_prefix=dict(img='train/')))
val_dataloader = dict(
dataset=dict(
data_root=data_root,
metainfo=metainfo,
ann_file='val/annotation_coco.json',
data_prefix=dict(img='val/')))
test_dataloader = val_dataloader
# Modify metric related settings
val_evaluator = dict(ann_file=data_root + 'val/annotation_coco.json')
test_evaluator = val_evaluator
# We can use the pre-trained Mask RCNN model to obtain higher performance
load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth'
```
## Train a new model
To train a model with the new config, you can simply run
```shell
python tools/train.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py
```
For more detailed usages, please refer to the [training guide](https://mmdetection.readthedocs.io/en/latest/user_guides/train.html#train-predefined-models-on-standard-datasets).
## Test and inference
To test the trained model, you can simply run
```shell
python tools/test.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py work_dirs/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon/epoch_12.pth
```
For more detailed usages, please refer to the [testing guide](https://mmdetection.readthedocs.io/en/latest/user_guides/test.html).
# Useful Hooks
MMDetection and MMEngine provide users with various useful hooks including log hooks, `NumClassCheckHook`, etc. This tutorial introduces the functionalities and usages of hooks implemented in MMDetection. For using hooks in MMEngine, please read the [API documentation in MMEngine](https://github.com/open-mmlab/mmengine/tree/main/docs/en/tutorials/hook.md).
## CheckInvalidLossHook
## NumClassCheckHook
## MemoryProfilerHook
[Memory profiler hook](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/memory_profiler_hook.py) records memory information including virtual memory, swap memory, and the memory of the current process. This hook helps grasp the memory usage of the system and discover potential memory leak bugs. To use this hook, users should install `memory_profiler` and `psutil` by `pip install memory_profiler psutil` first.
### Usage
To use this hook, users should add the following code to the config file.
```python
custom_hooks = [
dict(type='MemoryProfilerHook', interval=50)
]
```
### Result
During training, you can see the messages in the log recorded by `MemoryProfilerHook` as below.
```text
The system has 250 GB (246360 MB + 9407 MB) of memory and 8 GB (5740 MB + 2452 MB) of swap memory in total. Currently 9407 MB (4.4%) of memory and 5740 MB (29.9%) of swap memory were consumed. And the current training process consumed 5434 MB of memory.
```
```text
2022-04-21 08:49:56,881 - mmengine - INFO - Memory information available_memory: 246360 MB, used_memory: 9407 MB, memory_utilization: 4.4 %, available_swap_memory: 5740 MB, used_swap_memory: 2452 MB, swap_memory_utilization: 29.9 %, current_process_memory: 5434 MB
```
## SetEpochInfoHook
## SyncNormHook
## SyncRandomSizeHook
## YOLOXLrUpdaterHook
## YOLOXModeSwitchHook
## How to implement a custom hook
In general, there are 22 points where hooks can be inserted from the beginning to the end of model training. Users can implement custom hooks and insert them at different points in the training process to do what they want.
- global points: `before_run`, `after_run`
- points in training: `before_train`, `before_train_epoch`, `before_train_iter`, `after_train_iter`, `after_train_epoch`, `after_train`
- points in validation: `before_val`, `before_val_epoch`, `before_val_iter`, `after_val_iter`, `after_val_epoch`, `after_val`
- points at testing: `before_test`, `before_test_epoch`, `before_test_iter`, `after_test_iter`, `after_test_epoch`, `after_test`
- other points: `before_save_checkpoint`, `after_save_checkpoint`
For example, users can implement a hook to check loss and terminate training when loss goes NaN. To achieve that, there are three steps to go:
1. Implement a new hook that inherits the `Hook` class in MMEngine, and implement `after_train_iter` method which checks whether loss goes NaN after every `n` training iterations.
2. The implemented hook should be registered in `HOOKS` by `@HOOKS.register_module()` as shown in the code below.
3. Add `custom_hooks = [dict(type='CheckInvalidLossHook', interval=50)]` in the config file.
```python
from typing import Optional
import torch
from mmengine.hooks import Hook
from mmengine.runner import Runner
from mmdet.registry import HOOKS
@HOOKS.register_module()
class CheckInvalidLossHook(Hook):
"""Check invalid loss hook.
This hook will regularly check whether the loss is valid
during training.
Args:
interval (int): Checking interval (every k iterations).
Default: 50.
"""
def __init__(self, interval: int = 50) -> None:
self.interval = interval
def after_train_iter(self,
runner: Runner,
batch_idx: int,
data_batch: Optional[dict] = None,
outputs: Optional[dict] = None) -> None:
"""Regularly check whether the loss is valid every n iterations.
Args:
runner (:obj:`Runner`): The runner of the training process.
batch_idx (int): The index of the current batch in the train loop.
data_batch (dict, Optional): Data from dataloader.
Defaults to None.
outputs (dict, Optional): Outputs from model. Defaults to None.
"""
if self.every_n_train_iters(runner, self.interval):
assert torch.isfinite(outputs['loss']), \
runner.logger.info('loss become infinite or NaN!')
```
Please read [customize_runtime](../advanced_guides/customize_runtime.md) for more about implementing a custom hook.
Apart from training/testing scripts, we provide lots of useful tools under the
`tools/` directory.
## Log Analysis
`tools/analysis_tools/analyze_logs.py` plots loss/mAP curves given a training
log file. Run `pip install seaborn` first to install the dependency.
```shell
python tools/analysis_tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--eval-interval ${EVALUATION_INTERVAL}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```
![loss curve image](../../../resources/loss_curve.png)
Examples:
- Plot the classification loss of some run.
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
```
- Plot the classification and regression loss of some run, and save the figure to a pdf.
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
```
- Compare the bbox mAP of two runs in the same figure.
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log1.json log2.json --keys bbox_mAP --legend run1 run2
```
- Compute the average training speed.
```shell
python tools/analysis_tools/analyze_logs.py cal_train_time log.json [--include-outliers]
```
The output is expected to be like the following.
```text
-----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
slowest epoch 11, average time is 1.2024
fastest epoch 1, average time is 1.1909
time std over epochs is 0.0028
average iter time: 1.1959 s/iter
```
## Result Analysis
`tools/analysis_tools/analyze_results.py` calculates single image mAP and saves or shows the topk images with the highest and lowest scores based on prediction results.
**Usage**
```shell
python tools/analysis_tools/analyze_results.py \
${CONFIG} \
${PREDICTION_PATH} \
${SHOW_DIR} \
[--show] \
[--wait-time ${WAIT_TIME}] \
[--topk ${TOPK}] \
[--show-score-thr ${SHOW_SCORE_THR}] \
[--cfg-options ${CFG_OPTIONS}]
```
Description of all arguments:
- `config` : The path of a model config file.
- `prediction_path`: Output result file in pickle format from `tools/test.py`
- `show_dir`: Directory where painted GT and detection images will be saved
- `--show`: Determines whether to show painted images. If not specified, it will be set to `False`.
- `--wait-time`: The interval (in seconds) between showing two images; `0` means blocking until a key is pressed.
- `--topk`: The number of saved images that have the highest and lowest `topk` scores after sorting. If not specified, it will be set to `20`.
- `--show-score-thr`: Show score threshold. If not specified, it will be set to `0`.
- `--cfg-options`: If specified, the key-value pair optional cfg will be merged into config file
**Examples**:
Assume that you have a result file in pickle format from `tools/test.py` at the path './result.pkl'.
1. Test Faster R-CNN and visualize the results, saving the images to the directory `results/`
```shell
python tools/analysis_tools/analyze_results.py \
configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
result.pkl \
results \
--show
```
2. Test Faster R-CNN with `topk` set to 50, saving the images to the directory `results/`
```shell
python tools/analysis_tools/analyze_results.py \
configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
result.pkl \
results \
--topk 50
```
3. If you want to filter out low-score prediction results, you can specify the `--show-score-thr` parameter
```shell
python tools/analysis_tools/analyze_results.py \
configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
result.pkl \
results \
--show-score-thr 0.3
```
## Fusing results from multiple models
`tools/analysis_tools/fuse_results.py` can fuse predictions from different object detection models using Weighted Boxes Fusion (WBF). (Currently only the COCO format is supported.)
**Usage**
```shell
python tools/analysis_tools/fuse_results.py \
${PRED_RESULTS} \
[--annotation ${ANNOTATION}] \
[--weights ${WEIGHTS}] \
[--fusion-iou-thr ${FUSION_IOU_THR}] \
[--skip-box-thr ${SKIP_BOX_THR}] \
[--conf-type ${CONF_TYPE}] \
[--eval-single ${EVAL_SINGLE}] \
[--save-fusion-results ${SAVE_FUSION_RESULTS}] \
[--out-dir ${OUT_DIR}]
```
Description of all arguments:
- `pred-results`: Paths of detection results from different models. (Currently only the COCO format is supported.)
- `--annotation`: Path of ground-truth.
- `--weights`: List of weights for each model. Default: `None`, which means weight == 1 for each model.
- `--fusion-iou-thr`: IoU threshold for two boxes to be considered a match. Default: `0.55`.
- `--skip-box-thr`: The confidence threshold used by the WBF algorithm; bboxes whose confidence is lower than this value are excluded. Default: `0`.
- `--conf-type`: How to calculate the confidence of the weighted boxes.
- `avg`: average value (default).
- `max`: maximum value.
- `box_and_model_avg`: box and model wise hybrid weighted average.
- `absent_model_aware_avg`: weighted average that takes into account the absent model.
- `--eval-single`: Whether to evaluate each single model. Default: `False`.
- `--save-fusion-results`: Whether to save the fusion results. Default: `False`.
- `--out-dir`: Path of fusion results.
**Examples**:
Assume that you have obtained 3 result files from the corresponding models through `tools/test.py`, whose paths are './faster-rcnn_r50-caffe_fpn_1x_coco.json', './retinanet_r50-caffe_fpn_1x_coco.json', and './cascade-rcnn_r50-caffe_fpn_1x_coco.json' respectively. The ground-truth file path is './annotation.json'.
1. Fuse the predictions of the three models and evaluate their effectiveness
```shell
python tools/analysis_tools/fuse_results.py \
./faster-rcnn_r50-caffe_fpn_1x_coco.json \
./retinanet_r50-caffe_fpn_1x_coco.json \
./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
--annotation ./annotation.json \
--weights 1 2 3
```
2. Simultaneously evaluate each single model and fusion results
```shell
python tools/analysis_tools/fuse_results.py \
./faster-rcnn_r50-caffe_fpn_1x_coco.json \
./retinanet_r50-caffe_fpn_1x_coco.json \
./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
--annotation ./annotation.json \
--weights 1 2 3 \
--eval-single
```
3. Fuse the prediction results of the three models and save them
```shell
python tools/analysis_tools/fuse_results.py \
./faster-rcnn_r50-caffe_fpn_1x_coco.json \
./retinanet_r50-caffe_fpn_1x_coco.json \
./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
--annotation ./annotation.json \
--weights 1 2 3 \
--save-fusion-results \
--out-dir outputs/fusion
```
## Visualization
### Visualize Datasets
`tools/analysis_tools/browse_dataset.py` helps the user to browse a detection dataset (both
images and bounding box annotations) visually, or save the image to a
designated directory.
```shell
python tools/analysis_tools/browse_dataset.py ${CONFIG} [-h] [--skip-type ${SKIP_TYPE[SKIP_TYPE...]}] [--output-dir ${OUTPUT_DIR}] [--not-show] [--show-interval ${SHOW_INTERVAL}]
```
### Visualize Models
First, convert the model to ONNX as described
[here](#convert-mmdetection-model-to-onnx-experimental).
Note that currently only RetinaNet is supported, support for other models
will be coming in later versions.
The converted model could be visualized by tools like [Netron](https://github.com/lutzroeder/netron).
### Visualize Predictions
If you need a lightweight GUI for visualizing the detection results, you can refer [DetVisGUI project](https://github.com/Chien-Hung/DetVisGUI/tree/mmdetection).
## Error Analysis
`tools/analysis_tools/coco_error_analysis.py` analyzes COCO results per category and by
different criteria. It can also make a plot to provide useful information.
```shell
python tools/analysis_tools/coco_error_analysis.py ${RESULT} ${OUT_DIR} [-h] [--ann ${ANN}] [--types ${TYPES[TYPES...]}]
```
Example:
Assume that you have got [Mask R-CNN checkpoint file](https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth) in the path 'checkpoint'. For other checkpoints, please refer to our [model zoo](./model_zoo.md).
You can modify the `test_evaluator` to save the result bboxes as follows:
1. Find which dataset in 'configs/_base_/datasets' the current config corresponds to.
2. Replace the original `test_evaluator` and `test_dataloader` with the commented-out `test_evaluator` and `test_dataloader` in that dataset config.
3. Use the following command to get the bbox and segmentation result JSON files.
```shell
python tools/test.py \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoint/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth
```
1. Get the COCO bbox error results per category and save the analysis result images to a directory (in the [config](../../../configs/_base_/datasets/coco_instance.py), the default directory is './work_dirs/coco_instance/test').
```shell
python tools/analysis_tools/coco_error_analysis.py \
results.bbox.json \
results \
--ann=data/coco/annotations/instances_val2017.json
```
2. Get the COCO segmentation error results per category and save the analysis result images to a directory.
```shell
python tools/analysis_tools/coco_error_analysis.py \
results.segm.json \
results \
--ann=data/coco/annotations/instances_val2017.json \
--types='segm'
```
## Model Serving
In order to serve an `MMDetection` model with [`TorchServe`](https://pytorch.org/serve/), you can follow the steps:
### 1. Install TorchServe
Suppose you have a `Python` environment with `PyTorch` and `MMDetection` successfully installed;
then you can run the following command to install `TorchServe` and its dependencies.
For other installation options, please refer to the [quick start](https://github.com/pytorch/serve/blob/master/README.md#serve-a-model).
```shell
python -m pip install torchserve torch-model-archiver torch-workflow-archiver nvgpu
```
**Note**: Please refer to [torchserve docker](https://github.com/pytorch/serve/blob/master/docker/README.md) if you want to use `TorchServe` in docker.
### 2. Convert model from MMDetection to TorchServe
```shell
python tools/deployment/mmdet2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
--output-folder ${MODEL_STORE} \
--model-name ${MODEL_NAME}
```
### 3. Start `TorchServe`
```shell
torchserve --start --ncs \
--model-store ${MODEL_STORE} \
--models ${MODEL_NAME}.mar
```
### 4. Test deployment
```shell
curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/3dogs.jpg
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T 3dogs.jpg
```
You should obtain a response similar to:
```json
[
{
"class_label": 16,
"class_name": "dog",
"bbox": [
294.63409423828125,
203.99111938476562,
417.048583984375,
281.62744140625
],
"score": 0.9987992644309998
},
{
"class_label": 16,
"class_name": "dog",
"bbox": [
404.26019287109375,
126.0080795288086,
574.5091552734375,
293.6662292480469
],
"score": 0.9979367256164551
},
{
"class_label": 16,
"class_name": "dog",
"bbox": [
197.2144775390625,
93.3067855834961,
307.8505554199219,
276.7560119628906
],
"score": 0.993338406085968
}
]
```
#### Compare results
You can use `test_torchserver.py` to compare the results of `TorchServe` and `PyTorch`, and visualize them.
```shell
python tools/deployment/test_torchserver.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME}
[--inference-addr ${INFERENCE_ADDR}] [--device ${DEVICE}] [--score-thr ${SCORE_THR}] [--work-dir ${WORK_DIR}]
```
Example:
```shell
python tools/deployment/test_torchserver.py \
demo/demo.jpg \
configs/yolo/yolov3_d53_8xb8-320-273e_coco.py \
checkpoint/yolov3_d53_320_273e_coco-421362b6.pth \
yolov3 \
--work-dir ./work-dir
```
### 5. Stop `TorchServe`
```shell
torchserve --stop
```
## Model Complexity
`tools/analysis_tools/get_flops.py` is a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) to compute the FLOPs and params of a given model.
```shell
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```
You will get the results like this.
```text
==============================
Input shape: (3, 1280, 800)
Flops: 239.32 GFLOPs
Params: 37.74 M
==============================
```
**Note**: This tool is still experimental and we do not guarantee that the
number is absolutely correct. You may well use the result for simple
comparisons, but double check it before you adopt it in technical reports or papers.
1. FLOPs are related to the input shape while parameters are not. The default
input shape is (1, 3, 1280, 800).
2. Some operators are not counted into FLOPs like GN and custom operators. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/2.x/mmcv/cnn/utils/flops_counter.py) for details.
3. The FLOPs of two-stage detectors are dependent on the number of proposals.
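For a quick programmatic check, a minimal sketch is shown below; it assumes that the `get_model_complexity_info()` helper linked above is importable from `mmcv.cnn` in your installed mmcv version, and it uses a toy module rather than a real detector.
```python
# A toy sanity check, assuming mmcv.cnn still exposes
# get_model_complexity_info() (see the link above). Returns human-readable
# strings for FLOPs and params by default.
import torch.nn as nn
from mmcv.cnn import get_model_complexity_info

model = nn.Sequential(          # a small stand-in module, not a detector
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10))
flops, params = get_model_complexity_info(model, (3, 1280, 800))
print(f'Flops: {flops}, Params: {params}')
```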
## Model conversion
### MMDetection model to ONNX
We provide a script to convert models to the [ONNX](https://github.com/onnx/onnx) format. We also support comparing the output results between the PyTorch and ONNX models for verification. For more details, please refer to [mmdeploy](https://github.com/open-mmlab/mmdeploy).
### MMDetection 1.x model to MMDetection 2.x
`tools/model_converters/upgrade_model_version.py` upgrades a previous MMDetection checkpoint
to the new version. Note that this script is not guaranteed to work as some
breaking changes are introduced in the new version. It is recommended to
directly use the new checkpoints.
```shell
python tools/model_converters/upgrade_model_version.py ${IN_FILE} ${OUT_FILE} [-h] [--num-classes NUM_CLASSES]
```
### RegNet model to MMDetection
`tools/model_converters/regnet2mmdet.py` converts keys in pycls pretrained RegNet models to
MMDetection style.
```shell
python tools/model_converters/regnet2mmdet.py ${SRC} ${DST} [-h]
```
### Detectron ResNet to Pytorch
`tools/model_converters/detectron2pytorch.py` converts keys in the original detectron pretrained
ResNet models to PyTorch style.
```shell
python tools/model_converters/detectron2pytorch.py ${SRC} ${DST} ${DEPTH} [-h]
```
### Prepare a model for publishing
`tools/model_converters/publish_model.py` helps users to prepare their model for publishing.
Before you upload a model to AWS, you may want to
1. convert model weights to CPU tensors
2. delete the optimizer states and
3. compute the hash of the checkpoint file and append the hash id to the
filename.
```shell
python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```
E.g.,
```shell
python tools/model_converters/publish_model.py work_dirs/faster_rcnn/latest.pth faster_rcnn_r50_fpn_1x_20190801.pth
```
The final output filename will be `faster_rcnn_r50_fpn_1x_20190801-{hash id}.pth`.
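As a rough sketch of this naming convention (assuming the hash id is the first eight characters of the checkpoint's SHA-256 digest; check `tools/model_converters/publish_model.py` for the authoritative logic):
```python
# Assumed convention: hash id = first 8 hex chars of the checkpoint's SHA-256.
import hashlib

def published_name(checkpoint_path: str, out_stem: str) -> str:
    with open(checkpoint_path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return f'{out_stem}-{digest[:8]}.pth'

# e.g. published_name('faster_rcnn_r50_fpn_1x_20190801.pth',
#                     'faster_rcnn_r50_fpn_1x_20190801')
```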
## Dataset Conversion
`tools/dataset_converters/` contains tools to convert the Cityscapes dataset
and Pascal VOC dataset to the COCO format.
```shell
python tools/dataset_converters/cityscapes.py ${CITYSCAPES_PATH} [-h] [--img-dir ${IMG_DIR}] [--gt-dir ${GT_DIR}] [-o ${OUT_DIR}] [--nproc ${NPROC}]
python tools/dataset_converters/pascal_voc.py ${DEVKIT_PATH} [-h] [-o ${OUT_DIR}]
```
## Dataset Download
`tools/misc/download_dataset.py` supports downloading datasets such as COCO, VOC, and LVIS.
```shell
python tools/misc/download_dataset.py --dataset-name coco2017
python tools/misc/download_dataset.py --dataset-name voc2007
python tools/misc/download_dataset.py --dataset-name lvis
```
For users in China, these datasets can also be downloaded from [OpenDataLab](https://opendatalab.com/?source=OpenMMLab%20GitHub) with high speed:
- [COCO2017](https://opendatalab.com/COCO_2017/download?source=OpenMMLab%20GitHub)
- [VOC2007](https://opendatalab.com/PASCAL_VOC2007/download?source=OpenMMLab%20GitHub)
- [VOC2012](https://opendatalab.com/PASCAL_VOC2012/download?source=OpenMMLab%20GitHub)
- [LVIS](https://opendatalab.com/LVIS/download?source=OpenMMLab%20GitHub)
## Benchmark
### Robust Detection Benchmark
`tools/analysis_tools/test_robustness.py` and `tools/analysis_tools/robustness_eval.py` help users to evaluate model robustness. The core idea comes from [Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming](https://arxiv.org/abs/1907.07484). For more information on how to evaluate models on corrupted images and for results on a set of standard models, please refer to [robustness_benchmarking.md](robustness_benchmarking.md).
### FPS Benchmark
`tools/analysis_tools/benchmark.py` helps users to calculate FPS. The FPS value includes model forward and post-processing. In order to get a more accurate value, currently only the single-GPU distributed startup mode is supported.
```shell
python -m torch.distributed.launch --nproc_per_node=1 --master_port=${PORT} tools/analysis_tools/benchmark.py \
${CONFIG} \
[--checkpoint ${CHECKPOINT}] \
[--repeat-num ${REPEAT_NUM}] \
[--max-iter ${MAX_ITER}] \
[--log-interval ${LOG_INTERVAL}] \
--launcher pytorch
```
Examples: Assuming that you have already downloaded the `Faster R-CNN` model checkpoint to the directory `checkpoints/`.
```shell
python -m torch.distributed.launch --nproc_per_node=1 --master_port=29500 tools/analysis_tools/benchmark.py \
configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
--launcher pytorch
```
## Miscellaneous
### Evaluating a metric
`tools/analysis_tools/eval_metric.py` evaluates certain metrics of a pkl result file
according to a config file.
```shell
python tools/analysis_tools/eval_metric.py ${CONFIG} ${PKL_RESULTS} [-h] [--format-only] [--eval ${EVAL[EVAL ...]}]
[--cfg-options ${CFG_OPTIONS [CFG_OPTIONS ...]}]
[--eval-options ${EVAL_OPTIONS [EVAL_OPTIONS ...]}]
```
### Print the entire config
`tools/misc/print_config.py` prints the whole config verbatim, expanding all its
imports.
```shell
python tools/misc/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]
```
## Hyper-parameter Optimization
### YOLO Anchor Optimization
`tools/analysis_tools/optimize_anchors.py` provides two methods to optimize YOLO anchors.
One is k-means anchor clustering, which is adapted from [darknet](https://github.com/AlexeyAB/darknet/blob/master/src/detector.c#L1421).
```shell
python tools/analysis_tools/optimize_anchors.py ${CONFIG} --algorithm k-means --input-shape ${INPUT_SHAPE [WIDTH HEIGHT]} --output-dir ${OUTPUT_DIR}
```
The other uses differential evolution to optimize anchors.
```shell
python tools/analysis_tools/optimize_anchors.py ${CONFIG} --algorithm differential_evolution --input-shape ${INPUT_SHAPE [WIDTH HEIGHT]} --output-dir ${OUTPUT_DIR}
```
E.g.,
```shell
python tools/analysis_tools/optimize_anchors.py configs/yolo/yolov3_d53_8xb8-320-273e_coco.py --algorithm differential_evolution --input-shape 608 608 --device cuda --output-dir work_dirs
```
You will get:
```
loading annotations into memory...
Done (t=9.70s)
creating index...
index created!
2021-07-19 19:37:20,951 - mmdet - INFO - Collecting bboxes from annotation...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 117266/117266, 15874.5 task/s, elapsed: 7s, ETA: 0s
2021-07-19 19:37:28,753 - mmdet - INFO - Collected 849902 bboxes.
differential_evolution step 1: f(x)= 0.506055
differential_evolution step 2: f(x)= 0.506055
......
differential_evolution step 489: f(x)= 0.386625
2021-07-19 19:46:40,775 - mmdet - INFO Anchor evolution finish. Average IOU: 0.6133754253387451
2021-07-19 19:46:40,776 - mmdet - INFO Anchor differential evolution result:[[10, 12], [15, 30], [32, 22], [29, 59], [61, 46], [57, 116], [112, 89], [154, 198], [349, 336]]
2021-07-19 19:46:40,798 - mmdet - INFO Result saved in work_dirs/anchor_optimize_result.json
```
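One possible way to feed the nine optimized anchors above back into a YOLOv3 config is sketched below, grouping them three per feature level from coarse to fine (the largest anchors on the stride-32 level); the grouping is an assumption and should be checked against your head's layout.
```python
# A sketch, not a full config: override the anchor generator of the bbox head
# with the anchors reported in the log above, grouped per feature level.
model = dict(
    bbox_head=dict(
        anchor_generator=dict(
            type='YOLOAnchorGenerator',
            base_sizes=[[(112, 89), (154, 198), (349, 336)],  # stride 32
                        [(29, 59), (61, 46), (57, 116)],      # stride 16
                        [(10, 12), (15, 30), (32, 22)]],      # stride 8
            strides=[32, 16, 8])))
```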
## Confusion Matrix
A confusion matrix is a summary of prediction results.
`tools/analysis_tools/confusion_matrix.py` can analyze the prediction results and plot a confusion matrix table.
First, run `tools/test.py` to save the `.pkl` detection results.
Then, run
```
python tools/analysis_tools/confusion_matrix.py ${CONFIG} ${DETECTION_RESULTS} ${SAVE_DIR} --show
```
And you will get a confusion matrix like this:
![confusion_matrix_example](https://user-images.githubusercontent.com/12907710/140513068-994cdbf4-3a4a-48f0-8fd8-2830d93fd963.png)
## COCO Separated & Occluded Mask Metric
Detecting occluded objects still remains a challenge for state-of-the-art object detectors.
We implemented the metric presented in the paper [A Tri-Layer Plugin to Improve Occluded Detection](https://arxiv.org/abs/2210.10046) to calculate the recall of separated and occluded masks.
There are two ways to use this metric:
### Offline evaluation
We provide a script to calculate the metric with a dumped prediction file.
First, use the `tools/test.py` script to dump the detection results:
```shell
python tools/test.py ${CONFIG} ${MODEL_PATH} --out results.pkl
```
Then, run the `tools/analysis_tools/coco_occluded_separated_recall.py` script to get the recall of separated and occluded masks:
```shell
python tools/analysis_tools/coco_occluded_separated_recall.py results.pkl --out occluded_separated_recall.json
```
The output should be like this:
```
loading annotations into memory...
Done (t=0.51s)
creating index...
index created!
processing detection results...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5000/5000, 109.3 task/s, elapsed: 46s, ETA: 0s
computing occluded mask recall...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5550/5550, 780.5 task/s, elapsed: 7s, ETA: 0s
COCO occluded mask recall: 58.79%
COCO occluded mask success num: 3263
computing separated mask recall...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3522/3522, 778.3 task/s, elapsed: 5s, ETA: 0s
COCO separated mask recall: 31.94%
COCO separated mask success num: 1125
+-----------+--------+-------------+
| mask type | recall | num correct |
+-----------+--------+-------------+
| occluded | 58.79% | 3263 |
| separated | 31.94% | 1125 |
+-----------+--------+-------------+
Evaluation results have been saved to occluded_separated_recall.json.
```
### Online evaluation
We implement `CocoOccludedSeparatedMetric`, which inherits from `CocoMetric`.
To evaluate the recall of separated and occluded masks during training, just replace the evaluator metric type with `'CocoOccludedSeparatedMetric'` in your config:
```python
val_evaluator = dict(
type='CocoOccludedSeparatedMetric', # modify this
ann_file=data_root + 'annotations/instances_val2017.json',
metric=['bbox', 'segm'],
format_only=False)
test_evaluator = val_evaluator
```
Please cite the paper if you use this metric:
```latex
@article{zhan2022triocc,
title={A Tri-Layer Plugin to Improve Occluded Detection},
author={Zhan, Guanqi and Xie, Weidi and Zisserman, Andrew},
journal={British Machine Vision Conference},
year={2022}
}
```
# Visualization
Before reading this tutorial, it is recommended to read MMEngine's [Visualization](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/visualization.md) documentation to get a first glimpse of the `Visualizer` definition and usage.
In brief, the [`Visualizer`](mmengine.visualization.Visualizer) is implemented in MMEngine to meet the daily visualization needs, and contains three main functions:
- Implement common drawing APIs, such as [`draw_bboxes`](mmengine.visualization.Visualizer.draw_bboxes) for drawing bounding boxes and [`draw_lines`](mmengine.visualization.Visualizer.draw_lines) for drawing lines; a minimal usage sketch follows this list.
- Support writing visualization results, learning rate curves, loss function curves, and validation accuracy curves to various backends, including local disks and common deep learning training logging tools such as [TensorBoard](https://www.tensorflow.org/tensorboard) and [Wandb](https://wandb.ai/site).
- Support calling anywhere in the code to visualize or record intermediate states of the model during training or testing, such as feature maps and validation results.
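The following is a minimal, self-contained sketch of these drawing APIs on a blank image; it only assumes the `mmengine.visualization.Visualizer` class mentioned above.
```python
# A minimal sketch of the drawing APIs on a blank image (no dataset needed).
import numpy as np
import torch
from mmengine.visualization import Visualizer

image = np.zeros((224, 224, 3), dtype=np.uint8)
visualizer = Visualizer(image=image)
# Draw one bounding box in (x1, y1, x2, y2) format.
visualizer.draw_bboxes(torch.tensor([[30., 30., 180., 160.]]), edge_colors='g')
# Draw one line from (10, 10) to (200, 200).
visualizer.draw_lines(torch.tensor([10., 200.]), torch.tensor([10., 200.]))
drawn = visualizer.get_image()  # HxWx3 array with the drawings rendered
```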
Based on MMEngine's Visualizer, MMDet comes with a variety of pre-built visualization tools that can be used by simply modifying the following configuration files.
- The `tools/analysis_tools/browse_dataset.py` script provides a dataset visualization function that draws images and corresponding annotations after Data Transforms, as described in [`browse_dataset.py`](useful_tools.md#Visualization).
- MMEngine implements `LoggerHook`, which uses `Visualizer` to write the learning rate, losses, and evaluation results to the backends set by `Visualizer`. Therefore, by modifying the `Visualizer` backends in the configuration file, for example to `TensorboardVisBackend` or `WandbVisBackend`, you can log to common training logging tools such as `TensorBoard` or `WandB`, making it easy to analyze and monitor the training process with these tools.
- `DetVisualizationHook` is implemented in MMDet, which uses `Visualizer` to visualize or store the prediction results of the validation or test phase into the backends set by `Visualizer`. So by modifying the `Visualizer` backends in the configuration file, for example to `TensorboardVisBackend` or `WandbVisBackend`, you can store the predicted images to `TensorBoard` or `Wandb`.
## Configuration
Thanks to the use of the registration mechanism, in MMDet we can set the behavior of the `Visualizer` by modifying the configuration file. Usually, we define the default configuration for the visualizer in `configs/_base_/default_runtime.py`, see [configuration tutorial](config.md) for details.
```Python
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
type='DetLocalVisualizer',
vis_backends=vis_backends,
name='visualizer')
```
Based on the above example, we can see that the configuration of `Visualizer` consists of two main parts, namely, the type of `Visualizer` and the visualization backend `vis_backends` it uses.
- Users can directly use `DetLocalVisualizer` to visualize labels or predictions for supported tasks.
- MMDet sets the visualization backend `vis_backend` to the local visualization backend `LocalVisBackend` by default, saving all visualization results and other training information in a local folder.
## Storage
MMDet uses the local visualization backend [`LocalVisBackend`](mmengine.visualization.LocalVisBackend) by default. The information stored by `VisualizerHook` and `LoggerHook`, including losses, learning rate, and evaluation accuracy, is saved to the `{work_dir}/{config_name}/{time}/{vis_data}` folder by default. In addition, MMDet also supports other common visualization backends, such as `TensorboardVisBackend` and `WandbVisBackend`; you only need to change the `vis_backends` type in the configuration file to the corresponding visualization backend. For example, you can store data to `TensorBoard` and `Wandb` by simply inserting the following code block into the configuration file.
```Python
# https://mmengine.readthedocs.io/en/latest/api/visualization.html
_base_.visualizer.vis_backends = [
dict(type='LocalVisBackend'), #
dict(type='TensorboardVisBackend'),
dict(type='WandbVisBackend'),]
```
## Plot
### Plot the prediction results
MMDet mainly uses [`DetVisualizationHook`](mmdet.engine.hooks.DetVisualizationHook) to plot the prediction results of validation and test. By default, `DetVisualizationHook` is off, and its default configuration is as follows.
```Python
visualization=dict( # user visualization of validation and test results
type='DetVisualizationHook',
draw=False,
interval=1,
show=False)
```
The following table shows the parameters supported by `DetVisualizationHook`.
| Parameters | Description |
| :--------: | :-----------------------------------------------------------------------------------------------------------: |
| draw | Turns `DetVisualizationHook` on or off; it is off by default. |
| interval | Controls the interval, in iterations, at which the results of val or test are stored or displayed when the hook is enabled. |
| show | Controls whether to visualize the results of val or test. |
If you want to enable `DetVisualizationHook` related functions and configurations during training or testing, you only need to modify the configuration. Taking `configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py` as an example, to draw annotations and predictions at the same time and display the images, the configuration can be modified as follows:
```Python
visualization = _base_.default_hooks.visualization
visualization.update(dict(draw=True, show=True))
```
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/224883427-1294a7ba-14ab-4d93-9152-55a7b270b1f1.png" height="300"/>
</div>
The `test.py` procedure is further simplified by providing the `--show` and `--show-dir` parameters to visualize the annotation and prediction results during the test without modifying the configuration.
```Shell
# Show test results
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show
# Specify where to store the prediction results
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show-dir imgs/
```
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/224883427-1294a7ba-14ab-4d93-9152-55a7b270b1f1.png" height="300"/>
</div>
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.header-logo {
background-image: url("../image/mmdet-logo.png");
background-size: 156px 40px;
height: 40px;
width: 156px;
}
# Default conventions
If you want to modify MMDetection for your own project, please follow the conventions below.
## A note on the order of image shapes
In OpenMMLab 2.0, to stay consistent with OpenCV's input arguments, the image shape arguments in the image processing pipelines are always in `(width, height)` order.
In contrast, for computational convenience, the shape fields that flow through the pipeline and the model are in `(height, width)` order. Specifically, in the results processed by each data pipeline, the fields and their values are as follows:
- img_shape: (height, width)
- ori_shape: (height, width)
- pad_shape: (height, width)
- batch_input_shape: (height, width)
Taking `Mosaic` as an example, its initialization arguments are as follows:
```python
@TRANSFORMS.register_module()
class Mosaic(BaseTransform):
def __init__(self,
img_scale: Tuple[int, int] = (640, 640),
center_ratio_range: Tuple[float, float] = (0.5, 1.5),
bbox_clip_border: bool = True,
pad_val: float = 114.0,
prob: float = 1.0) -> None:
...
# The order of img_scale should be (width, height)
self.img_scale = img_scale
def transform(self, results: dict) -> dict:
...
results['img'] = mosaic_img
# (height, width)
results['img_shape'] = mosaic_img.shape[:2]
```
## Losses
In MMDetection, the return value of `model(**data)` is a dict that contains all the losses and metrics.
For example, in a bbox head,
```python
class BBoxHead(nn.Module):
...
def loss(self, ...):
losses = dict()
# classification loss
losses['loss_cls'] = self.loss_cls(...)
# classification accuracy
losses['acc'] = accuracy(...)
# bbox regression loss
losses['loss_bbox'] = self.loss_bbox(...)
return losses
```
`bbox_head.loss()` is called during the model's forward stage. The returned dict contains `'loss_bbox'`, `'loss_cls'`, and `'acc'`. Only `'loss_bbox'` and `'loss_cls'` are used for back-propagation; `'acc'` is only used as a metric to monitor the training process.
By default, only the values whose keys contain `'loss'` are used for back-propagation. This behavior can be changed by modifying `BaseDetector.train_step()`. A minimal sketch of this convention is shown below.
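The following sketch only illustrates the key-name convention; it is not the actual `train_step()` implementation.
```python
# Only entries whose key contains 'loss' contribute to the total loss that is
# back-propagated; other entries (e.g. 'acc') are only logged.
import torch

losses = dict(
    loss_cls=torch.tensor(0.8),
    loss_bbox=torch.tensor(0.3),
    acc=torch.tensor(92.5))
total_loss = sum(v for k, v in losses.items() if 'loss' in k)
print(total_loss)  # tensor(1.1000); 'acc' is excluded
```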
## Empty proposals
In MMDetection, we have added special handling and unit tests for empty proposals in two-stage methods. We need to deal with empty proposals both for the whole batch and for a single image. For example, in `CascadeRoIHead`,
```python
# simple test
...
# There are no proposals in the whole batch
if rois.shape[0] == 0:
bbox_results = [[
np.zeros((0, 5), dtype=np.float32)
for _ in range(self.bbox_head[-1].num_classes)
]] * num_imgs
if self.with_mask:
mask_classes = self.mask_head[-1].num_classes
segm_results = [[[] for _ in range(mask_classes)]
for _ in range(num_imgs)]
results = list(zip(bbox_results, segm_results))
else:
results = bbox_results
return results
...
# There are no proposals in a single image
for i in range(self.num_stages):
...
if i < self.num_stages - 1:
for j in range(num_imgs):
# Handle empty proposals
if rois[j].shape[0] > 0:
bbox_label = cls_score[j][:, :-1].argmax(dim=1)
refine_roi = self.bbox_head[i].regress_by_class(
rois[j], bbox_label[j], bbox_pred[j], img_metas[j])
refine_roi_list.append(refine_roi)
```
If you have a customized `RoIHead`, you can refer to the method above to handle empty proposals.
## Panoptic segmentation dataset
In MMDetection, we support the COCO panoptic segmentation dataset `CocoPanopticDataset`. We declare some default conventions for its implementation here.
1. In mmdet\<=2.16.0, the label ranges of foreground and background in the semantic segmentation annotations differ from the default convention in MMDetection: the label `0` stands for the `VOID` label.
Since mmdet=2.17.0, to stay consistent with the class labels of boxes, the class labels of the semantic segmentation annotations also start from `0`, and the label `255` stands for the `VOID` class.
To achieve this, the `Pad` transform supports setting the padding value for `seg`.
2. During evaluation, the panoptic result must be an image of the same size as the original image. Each pixel value in the result image has the form `instance_id * INSTANCE_OFFSET + category_id`, as sketched below.
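A small sketch of this pixel encoding; `INSTANCE_OFFSET` is hard-coded to 1000 here for illustration, while real code should import the constant from mmdet.
```python
# Assumed value for illustration only; import the constant from mmdet in
# real code instead of hard-coding it.
INSTANCE_OFFSET = 1000

def encode(instance_id: int, category_id: int) -> int:
    # Pack instance id and category id into a single pixel value.
    return instance_id * INSTANCE_OFFSET + category_id

def decode(pixel_value: int) -> tuple:
    # Recover (instance_id, category_id) from a pixel value.
    return pixel_value // INSTANCE_OFFSET, pixel_value % INSTANCE_OFFSET

assert decode(encode(instance_id=3, category_id=17)) == (3, 17)
```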
# Customize datasets
## Support a new data format
To support a new data format, you can either convert it into an existing format (COCO or PASCAL) or convert it into the middle format. The conversion can be done offline (with a script before training) or online (by implementing a new dataset that converts the data during training).
In MMDetection, we recommend converting the data into COCO format offline, so that after the conversion you only need to modify the annotation paths and the classes in the config file.
### Convert a new data format into an existing format
The simplest way is to convert your dataset into an existing data format (COCO or PASCAL VOC).
The COCO-format JSON annotation file has the following necessary keys:
```python
'images': [
{
'file_name': 'COCO_val2014_000000001268.jpg',
'height': 427,
'width': 640,
'id': 1268
},
...
],
'annotations': [
{
'segmentation': [[192.81,
247.09,
...
219.03,
249.06]], # if there are mask labels in polygon XY-point format, at least 3 point coordinates are required, otherwise the polygon is invalid
'area': 1035.749,
'iscrowd': 0,
'image_id': 1268,
'bbox': [192.81, 224.8, 74.73, 33.43],
'category_id': 16,
'id': 42986
},
...
],
'categories': [
{'id': 0, 'name': 'car'},
]
```
There are three necessary keys in the JSON file:
- `images`: an array of images and their information, such as `file_name`, `height`, `width`, and `id`.
- `annotations`: an array of instance annotations.
- `categories`: an array of category names and IDs.
After the data preprocessing, there are two steps to train a custom new dataset with an existing data format (taking COCO as an example):
1. Modify the config file for the custom dataset.
2. Check the annotations of the custom dataset.
Here is an example that shows the two steps above: it uses a COCO-format dataset with 5 classes to train an existing Cascade Mask R-CNN R50-FPN detector.
#### 1. Modify the config file for the custom dataset
Modifying the config file involves two parts:
1. The `dataloader` part: add `metainfo=dict(classes=classes)` to `train_dataloader.dataset`, `val_dataloader.dataset`, and `test_dataloader.dataset`, where `classes` must be a tuple.
2. The `num_classes` in the `model` part: change the default value (80 for the COCO dataset) to the number of classes in your custom dataset.
The content of `configs/my_custom_config.py` is as follows:
```python
# The new config inherits a base config to highlight the necessary modifications
_base_ = './cascade_mask_rcnn_r50_fpn_1x_coco.py'
# 1. dataset settings
dataset_type = 'CocoDataset'
classes = ('a', 'b', 'c', 'd', 'e')
data_root='path/to/your/'
train_dataloader = dict(
batch_size=2,
num_workers=2,
dataset=dict(
type=dataset_type,
# Add the class names to the `metainfo` field
metainfo=dict(classes=classes),
data_root=data_root,
ann_file='train/annotation_data',
data_prefix=dict(img='train/image_data')
)
)
val_dataloader = dict(
batch_size=1,
num_workers=2,
dataset=dict(
type=dataset_type,
test_mode=True,
# Add the class names to the `metainfo` field
metainfo=dict(classes=classes),
data_root=data_root,
ann_file='val/annotation_data',
data_prefix=dict(img='val/image_data')
)
)
test_dataloader = dict(
batch_size=1,
num_workers=2,
dataset=dict(
type=dataset_type,
test_mode=True,
# Add the class names to the `metainfo` field
metainfo=dict(classes=classes),
data_root=data_root,
ann_file='test/annotation_data',
data_prefix=dict(img='test/image_data')
)
)
# 2. model settings
# Change all the default `num_classes` from 80 to 5
model = dict(
roi_head=dict(
bbox_head=[
dict(
type='Shared2FCBBoxHead',
# Change the default `num_classes` from 80 to 5
num_classes=5),
dict(
type='Shared2FCBBoxHead',
# Change the default `num_classes` from 80 to 5
num_classes=5),
dict(
type='Shared2FCBBoxHead',
# Change the default `num_classes` from 80 to 5
num_classes=5)],
# Change the default `num_classes` from 80 to 5
mask_head=dict(num_classes=5)))
```
#### 2. Check the annotations of the custom dataset
Assuming your dataset is in COCO format, make sure the annotations are valid:
1. The length of `categories` in the annotation file must match the length of the `classes` tuple in the config; both indicate the number of classes (5 in this example).
2. The `classes` field in the config should contain exactly the same elements, in the same order, as the `name` entries under `categories` in the annotation file. MMDetection automatically maps the discontinuous `id`s in `categories` to continuous indices, so the string order of `name` under `categories` affects the label indices. Meanwhile, the string order of `classes` in the config also affects the labels shown when visualizing predicted boxes.
3. The `category_id` values in `annotations` must be valid, i.e. every `category_id` should belong to an `id` in `categories`.
Below is an example of valid annotations (a small programmatic check of these three points follows the example):
```python
'annotations': [
{
'segmentation': [[192.81,
247.09,
...
219.03,
249.06]], # if there are mask labels.
'area': 1035.749,
'iscrowd': 0,
'image_id': 1268,
'bbox': [192.81, 224.8, 74.73, 33.43],
'category_id': 16,
'id': 42986
},
...
],
# MMDetection automatically maps the discontinuous `id`s in `categories` to continuous indices.
'categories': [
{'id': 1, 'name': 'a'}, {'id': 3, 'name': 'b'}, {'id': 4, 'name': 'c'}, {'id': 16, 'name': 'd'}, {'id': 17, 'name': 'e'},
]
```
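A hedged sketch for checking the three points above programmatically before training; the annotation path is a placeholder and `classes` is the tuple from your config.
```python
# Simple consistency checks on a COCO-style annotation file. The path and the
# classes tuple below are placeholders for your own dataset.
import json

classes = ('a', 'b', 'c', 'd', 'e')
with open('path/to/your/annotation.json') as f:
    ann = json.load(f)

cat_names = [c['name'] for c in ann['categories']]
cat_ids = {c['id'] for c in ann['categories']}
assert len(cat_names) == len(classes)            # check 1: same number of classes
assert tuple(cat_names) == classes               # check 2: same names, same order
assert all(a['category_id'] in cat_ids           # check 3: every category_id is valid
           for a in ann['annotations'])
```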
We use this approach to support the CityScapes dataset. The script is in [cityscapes.py](https://github.com/open-mmlab/mmdetection/blob/main/tools/dataset_converters/cityscapes.py) and we also provide fine-tuning [configs](https://github.com/open-mmlab/mmdetection/blob/main/configs/cityscapes).
**Note**
1. For instance segmentation datasets, **MMDetection currently only supports evaluating the mask AP of COCO-format datasets**.
2. It is recommended to convert the data offline before training, so that you can keep using `CocoDataset` and only need to modify the annotation file path and the training classes.
### Adjust a new data format to the middle format
It is also fine not to convert the annotations into COCO or PASCAL format. Actually, we define a simple annotation format in MMEngine's [BaseDataset](https://github.com/open-mmlab/mmengine/blob/main/mmengine/dataset/base_dataset.py#L116) that is compatible with all existing data formats and can be converted either offline or online.
The dataset annotation must be a `json`, `yaml`/`yml`, or `pickle`/`pkl` file; the dict stored in the annotation file must contain the two fields `metainfo` and `data_list`. `metainfo` is a dict containing the meta information of the dataset, such as the class information; `data_list` is a list in which each element is a dict that defines one raw data item, and each raw data item contains one or several training/test samples.
Below is an example of a JSON annotation file:
```json
{
'metainfo':
{
'classes': ('person', 'bicycle', 'car', 'motorcycle'),
...
},
'data_list':
[
{
"img_path": "xxx/xxx_1.jpg",
"height": 604,
"width": 640,
"instances":
[
{
"bbox": [0, 0, 10, 20],
"bbox_label": 1,
"ignore_flag": 0
},
{
"bbox": [10, 10, 110, 120],
"bbox_label": 2,
"ignore_flag": 0
}
]
},
{
"img_path": "xxx/xxx_2.jpg",
"height": 320,
"width": 460,
"instances":
[
{
"bbox": [10, 0, 20, 20],
"bbox_label": 3,
"ignore_flag": 1
}
]
},
...
]
}
```
Some datasets may provide annotations such as crowd/difficult/ignored bboxes; we use `ignore_flag` to cover them.
After obtaining the standard annotation format above, you can directly use MMDetection's [BaseDetDataset](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/datasets/base_det_dataset.py#L13) in the config without any further conversion.
### An example of a customized dataset
Assume the text files use a totally new annotation format. The bounding box annotations are stored in `annotation.txt` with the following content:
```
#
000001.jpg
1280 720
2
10 20 40 60 1
20 40 50 60 2
#
000002.jpg
1280 720
3
50 20 40 60 2
20 40 30 45 2
30 40 50 60 3
```
We can create a new dataset in `mmdet/datasets/my_dataset.py` to load the data.
```python
import mmengine
from mmdet.datasets.base_det_dataset import BaseDetDataset
from mmdet.registry import DATASETS
@DATASETS.register_module()
class MyDataset(BaseDetDataset):
METAINFO = {
'classes': ('person', 'bicycle', 'car', 'motorcycle'),
'palette': [(220, 20, 60), (119, 11, 32), (0, 0, 142), (0, 0, 230)]
}
def load_data_list(self, ann_file):
ann_list = mmengine.list_from_file(ann_file)
data_infos = []
for i, ann_line in enumerate(ann_list):
if ann_line != '#':
continue
img_shape = ann_list[i + 2].split(' ')
width = int(img_shape[0])
height = int(img_shape[1])
bbox_number = int(ann_list[i + 3])
instances = []
for anns in ann_list[i + 4:i + 4 + bbox_number]:
instance = {}
instance['bbox'] = [float(ann) for ann in anns.split(' ')[:4]]
instance['bbox_label'] = int(anns.split(' ')[4])
instances.append(instance)
data_infos.append(
dict(
img_path=ann_list[i + 1],
img_id=i,
width=width,
height=height,
instances=instances
))
return data_infos
```
In the config file, you can use `MyDataset` as follows:
```python
dataset_A_train = dict(
type='MyDataset',
ann_file = 'image_list.txt',
pipeline=train_pipeline
)
```
## Customize datasets with dataset wrappers
MMEngine also supports many dataset wrappers to mix datasets or modify the dataset distribution during training. It supports the following three dataset wrappers:
- `RepeatDataset`: simply repeat the whole dataset.
- `ClassBalancedDataset`: repeat the dataset in a class-balanced manner.
- `ConcatDataset`: concatenate datasets.
For detailed usage, see the [MMEngine dataset wrapper documentation](#TODO). A minimal `RepeatDataset` sketch is shown below.
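The following sketch wraps the training set with `RepeatDataset` in the dataloader config; the dataset paths are placeholders and `train_pipeline` is assumed to be defined elsewhere in the config.
```python
# Placeholder paths; train_pipeline is assumed to be defined earlier in the
# config. times=3 makes one epoch iterate over the dataset three times.
train_dataloader = dict(
    dataset=dict(
        type='RepeatDataset',
        times=3,
        dataset=dict(
            type='CocoDataset',
            data_root='data/coco/',
            ann_file='annotations/instances_train2017.json',
            data_prefix=dict(img='train2017/'),
            pipeline=train_pipeline)))
```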
## Modify the classes of a dataset
Based on an existing dataset, we can modify its class names to train a subset of its annotations.
For example, if you only want to train on three classes of the current dataset, you can modify the `metainfo` dict of the dataset, and the dataset will automatically filter out the ground-truth boxes of the other classes.
```python
classes = ('person', 'bicycle', 'car')
train_dataloader = dict(
dataset=dict(
metainfo=dict(classes=classes))
)
val_dataloader = dict(
dataset=dict(
metainfo=dict(classes=classes))
)
test_dataloader = dict(
dataset=dict(
metainfo=dict(classes=classes))
)
```
**Note**
- Before MMDetection v2.5.0, once the classes were set, the dataset automatically filtered out images without GT, and there was no way to disable this through the config. This was an undesirable and confusing behavior, because when the classes were not set, the dataset only filtered out images without GT when `filter_empty_gt=True` and `test_mode=False`. After MMDetection v2.5.0, we decoupled the image filtering from the class modification: the dataset only filters out images without GT when `filter_cfg=dict(filter_empty_gt=True)` and `test_mode=False`, regardless of whether the classes are set. Setting the classes only affects the categories used for training, and users can decide whether to filter images without GT by themselves.
- When directly using `BaseDataset` in MMEngine or `BaseDetDataset` in MMDetection, users cannot filter images without GT through the config, but this can be handled offline.
- Remember to modify `num_classes` when setting `classes` in the dataset. Since v2.9.0 (PR#4508), [NumClassCheckHook](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/num_class_check_hook.py) checks whether the two numbers of classes are consistent.
## COCO panoptic segmentation dataset
We now also support the COCO Panoptic Dataset. The panoptic annotation format differs from the COCO format in that both foreground and background are present in the annotation file. A COCO Panoptic annotation JSON file has the following necessary keys:
```python
'images': [
{
'file_name': '000000001268.jpg',
'height': 427,
'width': 640,
'id': 1268
},
...
]
'annotations': [
{
'filename': '000000001268.jpg',
'image_id': 1268,
'segments_info': [
{
'id':8345037, # One-to-one correspondence with the id in the annotation map.
'category_id': 51,
'iscrowd': 0,
'bbox': (x1, y1, w, h), # The bbox of the background is the outer rectangle of its mask.
'area': 24315
},
...
]
},
...
]
'categories': [ # including both foreground categories and background categories
{'id': 0, 'name': 'person'},
...
]
```
In addition, `seg` must be set to the path of the panoptic annotation images.
```python
dataset_type = 'CocoPanopticDataset'
data_root='path/to/your/'
train_dataloader = dict(
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img='train/image_data/', seg='train/panoptic/image_annotation_data/')
)
)
val_dataloader = dict(
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img='val/image_data/', seg='val/panoptic/image_annotation_data/')
)
)
test_dataloader = dict(
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img='test/image_data/', seg='test/panoptic/image_annotation_data/')
)
)
```
# Customize losses
MMDetection provides users with different loss functions. However, the default configuration may not fit different data and models, so users may want to modify a specific loss to adapt to a new situation.
This tutorial first explains the computation pipeline of a loss in detail and then gives some guidance on how to modify each step. The modifications can be categorized into tweaking and weighting.
## Computation pipeline of a loss
Given the input (including predictions, targets, and weights), a loss function maps the input tensors to the final loss scalar. The mapping can be divided into the following five steps (a minimal sketch of steps 2-5 follows the list):
1. Set a sampling method to sample positive and negative samples.
2. Get the **element-wise** or **sample-wise** loss through the loss kernel function.
3. Weight the loss **element-wise** with a weight tensor.
4. Reduce the loss tensor to a **scalar**.
5. Weight the loss with a **scalar** loss weight.
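Below is a minimal, framework-agnostic sketch of steps 2-5 (step 1 happens before the loss is called); it is not MMDetection's actual implementation, just an illustration of the pipeline.
```python
# A toy weighted loss illustrating the pipeline: element-wise loss,
# element-wise weighting, reduction to a scalar, and loss_weight scaling.
import torch
import torch.nn.functional as F

def weighted_smooth_l1(pred, target, weight, loss_weight=1.0, reduction='mean'):
    loss = F.smooth_l1_loss(pred, target, reduction='none')  # step 2: element-wise loss
    loss = loss * weight                                     # step 3: element-wise weights
    if reduction == 'mean':                                  # step 4: reduce to a scalar
        loss = loss.sum() / weight.sum().clamp(min=1)
    elif reduction == 'sum':
        loss = loss.sum()
    return loss_weight * loss                                # step 5: scale by loss_weight

pred = torch.rand(8, 4)
target = torch.rand(8, 4)
weight = torch.ones(8, 4)
print(weighted_smooth_l1(pred, target, weight, loss_weight=0.5))
```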
## Set the sampling method (step 1)
For some loss computations, a sampling strategy is needed to avoid an imbalance between positive and negative samples.
For example, when using `CrossEntropyLoss` in the RPN head, we need to set `RandomSampler` in `train_cfg`:
```python
train_cfg=dict(
rpn=dict(
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False)))
```
For some other losses that have their own mechanism to balance positive and negative samples, such as Focal Loss, GHMC, and QualityFocalLoss, sampling is no longer necessary.
## Tweak a loss
Tweaking a loss mainly concerns steps 2, 4, and 5, and most modifications can be specified in the config file. Here we take [Focal Loss (FL)](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/losses/focal_loss.py) as an example.
The following code blocks show the constructor of FL and its config respectively; they correspond to each other one-to-one.
```python
@LOSSES.register_module()
class FocalLoss(nn.Module):
def __init__(self,
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
reduction='mean',
loss_weight=1.0):
```
```python
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0)
```
### Tweak hyperparameters (step 2)
`gamma` and `alpha` are two hyperparameters of the Focal Loss. If we want to set `gamma` to 1.5 and `alpha` to 0.5, we can specify them in the config file as follows:
```python
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=1.5,
alpha=0.5,
loss_weight=1.0)
```
### Tweak the reduction method (step 4)
The default reduction of the Focal Loss is `mean`. If we want to change the reduction from `mean` to `sum`, we can specify it in the config file as follows:
```python
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0,
reduction='sum')
```
### Tweak the loss weight (step 5)
The loss weight here is a scalar that controls the relative importance of different losses in multi-task learning, e.g., the classification loss and the regression loss. If we want to set the weight of the classification loss to 0.5, we can specify it in the config file as follows:
```python
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=0.5)
```
## Weight a loss (step 3)
Weighting a loss means re-weighting it element-wise. More specifically, we multiply the loss tensor by a weight tensor of the same shape, so that different elements of the loss can be scaled differently, hence the name element-wise. The loss weights vary a lot across models and are highly context-dependent, but overall there are two kinds of loss weights: `label_weights` for the classification loss and `bbox_weights` for the bounding box regression loss. You can find them in the `get_targets` method of the corresponding head. Here we take [ATSSHead](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/dense_heads/atss_head.py#L322) as an example. It inherits from [AnchorHead](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/dense_heads/anchor_head.py) but overrides the
`get_targets` method to produce different `label_weights` and `bbox_weights`.
```
class ATSSHead(AnchorHead):
...
def get_targets(self,
anchor_list,
valid_flag_list,
gt_bboxes_list,
img_metas,
gt_bboxes_ignore_list=None,
gt_labels_list=None,
label_channels=1,
unmap_outputs=True):
```
# Customize models
We roughly categorize the model components into five types:
- backbone: usually an FCN network that extracts feature maps, e.g., ResNet, MobileNet.
- neck: the component between the backbone and the heads, e.g., FPN, PAFPN.
- head: the component for specific tasks, e.g., bbox prediction and mask prediction.
- roi extractor: the part that extracts RoI features from the feature maps, e.g., RoI Align.
- loss: the component in the head that computes the losses, e.g., FocalLoss, L1Loss, GHMLoss.
## Develop new components
### Add a new backbone
Here we show how to develop a new component, using MobileNet as an example.
#### 1. Define a new backbone (e.g., MobileNet)
Create a new file `mmdet/models/backbones/mobilenet.py`:
```python
import torch.nn as nn
from mmdet.registry import MODELS
@MODELS.register_module()
class MobileNet(nn.Module):
def __init__(self, arg1, arg2):
pass
def forward(self, x): # should return a tuple
pass
```
#### 2. Import the module
You can either add the following code to `mmdet/models/backbones/__init__.py`:
```python
from .mobilenet import MobileNet
```
or add:
```python
custom_imports = dict(
imports=['mmdet.models.backbones.mobilenet'],
allow_failed_imports=False)
```
to the config file to avoid modifying the original code.
#### 3. Use the backbone in your config file
```python
model = dict(
...
backbone=dict(
type='MobileNet',
arg1=xxx,
arg2=xxx),
...
```
### Add a new neck
#### 1. Define a neck (e.g., PAFPN)
Create a new file `mmdet/models/necks/pafpn.py`:
```python
import torch.nn as nn
from mmdet.registry import MODELS
@MODELS.register_module()
class PAFPN(nn.Module):
def __init__(self,
in_channels,
out_channels,
num_outs,
start_level=0,
end_level=-1,
add_extra_convs=False):
pass
def forward(self, inputs):
# implementation is ignored
pass
```
#### 2. Import the module
You can either add the following code to `mmdet/models/necks/__init__.py`:
```python
from .pafpn import PAFPN
```
or add:
```python
custom_imports = dict(
imports=['mmdet.models.necks.pafpn'],
allow_failed_imports=False)
```
to the config file to avoid modifying the original code.
#### 3. Modify the config file
```python
neck=dict(
type='PAFPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5)
```
### Add a new head
Here we show how to develop a new head with the example of [Double Head R-CNN](https://arxiv.org/abs/1904.06493).
First, add a new bbox head in `mmdet/models/roi_heads/bbox_heads/double_bbox_head.py`.
Double Head R-CNN implements a new bbox head for object detection. To implement a bbox head, we basically need to implement three functions of the new module as follows.
```python
from typing import Tuple
import torch.nn as nn
from mmcv.cnn import ConvModule
from mmengine.model import BaseModule, ModuleList
from torch import Tensor
from mmdet.models.backbones.resnet import Bottleneck
from mmdet.registry import MODELS
from mmdet.utils import ConfigType, MultiConfig, OptConfigType, OptMultiConfig
from .bbox_head import BBoxHead
@MODELS.register_module()
class DoubleConvFCBBoxHead(BBoxHead):
r"""Bbox head used in Double-Head R-CNN
.. code-block:: none
/-> cls
/-> shared convs ->
\-> reg
roi features
/-> cls
\-> shared fc ->
\-> reg
""" # noqa: W605
def __init__(self,
num_convs: int = 0,
num_fcs: int = 0,
conv_out_channels: int = 1024,
fc_out_channels: int = 1024,
conv_cfg: OptConfigType = None,
norm_cfg: ConfigType = dict(type='BN'),
init_cfg: MultiConfig = dict(
type='Normal',
override=[
dict(type='Normal', name='fc_cls', std=0.01),
dict(type='Normal', name='fc_reg', std=0.001),
dict(
type='Xavier',
name='fc_branch',
distribution='uniform')
]),
**kwargs) -> None:
kwargs.setdefault('with_avg_pool', True)
super().__init__(init_cfg=init_cfg, **kwargs)
def forward(self, x_cls: Tensor, x_reg: Tensor) -> Tuple[Tensor]:
```
Second, implement a new RoI head if necessary. We plan to inherit the new `DoubleHeadRoIHead` from `StandardRoIHead`. We can find that `StandardRoIHead` already implements the following functions.
```python
from typing import List, Optional, Tuple
import torch
from torch import Tensor
from mmdet.registry import MODELS, TASK_UTILS
from mmdet.structures import DetDataSample
from mmdet.structures.bbox import bbox2roi
from mmdet.utils import ConfigType, InstanceList
from ..task_modules.samplers import SamplingResult
from ..utils import empty_instances, unpack_gt_instances
from .base_roi_head import BaseRoIHead
@MODELS.register_module()
class StandardRoIHead(BaseRoIHead):
"""Simplest base roi head including one bbox head and one mask head."""
def init_assigner_sampler(self) -> None:
def init_bbox_head(self, bbox_roi_extractor: ConfigType,
bbox_head: ConfigType) -> None:
def init_mask_head(self, mask_roi_extractor: ConfigType,
mask_head: ConfigType) -> None:
def forward(self, x: Tuple[Tensor],
rpn_results_list: InstanceList) -> tuple:
def loss(self, x: Tuple[Tensor], rpn_results_list: InstanceList,
batch_data_samples: List[DetDataSample]) -> dict:
def _bbox_forward(self, x: Tuple[Tensor], rois: Tensor) -> dict:
def bbox_loss(self, x: Tuple[Tensor],
sampling_results: List[SamplingResult]) -> dict:
def mask_loss(self, x: Tuple[Tensor],
sampling_results: List[SamplingResult], bbox_feats: Tensor,
batch_gt_instances: InstanceList) -> dict:
def _mask_forward(self,
x: Tuple[Tensor],
rois: Tensor = None,
pos_inds: Optional[Tensor] = None,
bbox_feats: Optional[Tensor] = None) -> dict:
def predict_bbox(self,
x: Tuple[Tensor],
batch_img_metas: List[dict],
rpn_results_list: InstanceList,
rcnn_test_cfg: ConfigType,
rescale: bool = False) -> InstanceList:
def predict_mask(self,
x: Tuple[Tensor],
batch_img_metas: List[dict],
results_list: InstanceList,
rescale: bool = False) -> InstanceList:
```
The modification of Double Head mainly lies in the `_bbox_forward` logic, and it inherits the other logics from `StandardRoIHead`. In `mmdet/models/roi_heads/double_roi_head.py`, we implement the new RoI head with the following code:
```python
from typing import Tuple
from torch import Tensor
from mmdet.registry import MODELS
from .standard_roi_head import StandardRoIHead
@MODELS.register_module()
class DoubleHeadRoIHead(StandardRoIHead):
"""RoI head for `Double Head RCNN <https://arxiv.org/abs/1904.06493>`_.
Args:
reg_roi_scale_factor (float): The scale factor to extend the rois
used to extract the regression features.
"""
def __init__(self, reg_roi_scale_factor: float, **kwargs):
super().__init__(**kwargs)
self.reg_roi_scale_factor = reg_roi_scale_factor
def _bbox_forward(self, x: Tuple[Tensor], rois: Tensor) -> dict:
"""Box head forward function used in both training and testing.
Args:
x (tuple[Tensor]): List of multi-level img features.
rois (Tensor): RoIs with the shape (n, 5) where the first
column indicates batch id of each RoI.
Returns:
dict[str, Tensor]: Usually returns a dictionary with keys:
- `cls_score` (Tensor): Classification scores.
- `bbox_pred` (Tensor): Box energies / deltas.
- `bbox_feats` (Tensor): Extract bbox RoI features.
"""
bbox_cls_feats = self.bbox_roi_extractor(
x[:self.bbox_roi_extractor.num_inputs], rois)
bbox_reg_feats = self.bbox_roi_extractor(
x[:self.bbox_roi_extractor.num_inputs],
rois,
roi_scale_factor=self.reg_roi_scale_factor)
if self.with_shared_head:
bbox_cls_feats = self.shared_head(bbox_cls_feats)
bbox_reg_feats = self.shared_head(bbox_reg_feats)
cls_score, bbox_pred = self.bbox_head(bbox_cls_feats, bbox_reg_feats)
bbox_results = dict(
cls_score=cls_score,
bbox_pred=bbox_pred,
bbox_feats=bbox_cls_feats)
return bbox_results
```
Last, the users need to add the modules to `mmdet/models/roi_heads/bbox_heads/__init__.py` and `mmdet/models/roi_heads/__init__.py` so that the corresponding registry can find and load them.
Alternatively, the users can add
```python
custom_imports=dict(
imports=['mmdet.models.roi_heads.double_roi_head', 'mmdet.models.roi_heads.bbox_heads.double_bbox_head'])
```
to the config file to achieve the same goal.
The config file of Double Head R-CNN is as follows:
```python
_base_ = '../faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py'
model = dict(
roi_head=dict(
type='DoubleHeadRoIHead',
reg_roi_scale_factor=1.3,
bbox_head=dict(
_delete_=True,
type='DoubleConvFCBBoxHead',
num_convs=4,
num_fcs=2,
in_channels=256,
conv_out_channels=1024,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=2.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=2.0))))
```
Since MMDetection 2.0, the config system supports inheriting configs so that users can focus on the modification.
Double Head R-CNN mainly uses a new `DoubleHeadRoIHead` and a new `DoubleConvFCBBoxHead`; the arguments need to be set according to the `__init__` function of each module.
### Add a new loss
Assume you want to add a new loss `MyLoss` for bounding box regression.
To add a new loss function, the users need to implement it in `mmdet/models/losses/my_loss.py`.
The decorator `weighted_loss` enables the loss to be weighted for each element.
```python
import torch
import torch.nn as nn
from mmdet.registry import LOSSES
from .utils import weighted_loss
@weighted_loss
def my_loss(pred, target):
assert pred.size() == target.size() and target.numel() > 0
loss = torch.abs(pred - target)
return loss
@LOSSES.register_module()
class MyLoss(nn.Module):
def __init__(self, reduction='mean', loss_weight=1.0):
super(MyLoss, self).__init__()
self.reduction = reduction
self.loss_weight = loss_weight
def forward(self,
pred,
target,
weight=None,
avg_factor=None,
reduction_override=None):
assert reduction_override in (None, 'none', 'mean', 'sum')
reduction = (
reduction_override if reduction_override else self.reduction)
loss_bbox = self.loss_weight * my_loss(
pred, target, weight, reduction=reduction, avg_factor=avg_factor)
return loss_bbox
```
Then the users need to add it in `mmdet/models/losses/__init__.py`:
```python
from .my_loss import MyLoss, my_loss
```
Alternatively, you can add
```python
custom_imports=dict(
imports=['mmdet.models.losses.my_loss'])
```
to the config file to achieve the same goal.
To use it, modify the `loss_xxx` field.
Since `MyLoss` is for regression, you need to modify the `loss_bbox` field in the head:
```python
loss_bbox=dict(type='MyLoss', loss_weight=1.0))
```
# Customize training settings
## Customize optimization settings
Optimization-related settings are now all wrapped in `optim_wrapper`, which usually has three fields: `optimizer`, `paramwise_cfg`, and `clip_grad`. See [OptimWrapper](https://mmengine.readthedocs.io/en/latest/tutorials/optim_wrapper.md) for more details. In the example below, `AdamW` is used as the optimizer, the learning rate of the backbone is reduced by a factor of 10, and gradient clipping is added.
```python
optim_wrapper = dict(
type='OptimWrapper',
    # optimizer
optimizer=dict(
type='AdamW',
lr=0.0001,
weight_decay=0.05,
eps=1e-8,
betas=(0.9, 0.999)),
    # parameter-level learning rate and weight decay settings
paramwise_cfg=dict(
custom_keys={
'backbone': dict(lr_mult=0.1, decay_mult=1.0),
},
norm_decay_mult=0.0),
    # gradient clipping
clip_grad=dict(max_norm=0.01, norm_type=2))
```
### Customize optimizer supported by PyTorch
We already support all the optimizers implemented by PyTorch, and to use them the only modification is to change the `optimizer` field inside the `optim_wrapper` field of the config file. For example, if you want to use `Adam` (note that the performance could drop a lot), the modification could be as follows.
```python
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='Adam', lr=0.0003, weight_decay=0.0001))
```
To modify the learning rate of the model, the users only need to modify the `lr` in the `optimizer` field. The users can directly set arguments following the [API doc](https://pytorch.org/docs/stable/optim.html?highlight=optim#module-torch.optim) of PyTorch.
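For instance, assuming the inherited base config trains with SGD at `lr=0.02`, halving the learning rate only needs the following override (the values here are illustrative):
```python
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001))
```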
### Customize self-implemented optimizer
#### 1. Define a new optimizer
A customized optimizer could be defined as follows.
Assume you want to add an optimizer named `MyOptimizer`, which has arguments `a`, `b`, and `c`. You need to create a new directory named
`mmdet/engine/optimizers`, and then implement the new optimizer in a file, e.g., `mmdet/engine/optimizers/my_optimizer.py`:
```python
from mmdet.registry import OPTIMIZERS
from torch.optim import Optimizer
@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):
    def __init__(self, a, b, c):
```
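The stub above only shows the constructor signature. A hedged, fleshed-out sketch that treats `a` as the step size of plain gradient descent (while `b` and `c` are kept only to mirror the stub) could look like the following; it is an illustration, not an optimizer shipped with MMDetection:
```python
import torch
from torch.optim import Optimizer

from mmdet.registry import OPTIMIZERS


@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):
    """Toy optimizer: plain gradient descent with step size ``a``."""

    def __init__(self, params, a=0.01, b=0.0, c=0.0):
        defaults = dict(a=a, b=b, c=c)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is not None:
                    # p <- p - a * grad
                    p.add_(p.grad, alpha=-group['a'])
        return loss
```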
#### 2. Add the optimizer to the registry
To find the module defined above, it should be imported into the main namespace first. There are two ways to achieve it.
- Modify `mmdet/engine/optimizers/__init__.py` to import it.
The newly defined module should be imported in `mmdet/engine/optimizers/__init__.py` so that the registry will find the new module and add it:
```python
from .my_optimizer import MyOptimizer
```
- Use `custom_imports` in the config to manually import it.
```python
custom_imports = dict(imports=['mmdet.engine.optimizers.my_optimizer'], allow_failed_imports=False)
```
The module `mmdet.engine.optimizers.my_optimizer` will be imported at the beginning of the program and the class `MyOptimizer` is then automatically registered. Note that only the package containing the class `MyOptimizer` should be imported, i.e. `mmdet.engine.optimizers.my_optimizer`, not `mmdet.engine.optimizers.my_optimizer.MyOptimizer`.
Actually, users can use a totally different file directory structure with this importing method, as long as the module root can be located in `PYTHONPATH`.
#### 3. Specify the optimizer in the config file
Then you can use `MyOptimizer` in the `optimizer` field inside the `optim_wrapper` field of the config file. In the configs, the optimizers are defined by the field `optimizer` like the following:
```python
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
```
To use your own optimizer, the field can be changed to:
```python
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value))
```
### Customize optimizer wrapper constructor
Some models may have parameter-specific settings for optimization, e.g., weight decay for BatchNorm layers. The users can do such fine-grained parameter tuning by customizing the optimizer wrapper constructor.
```python
from typing import Optional

import torch.nn as nn
from mmengine.optim import DefaultOptimWrapperConstructor, OptimWrapper

from mmdet.registry import OPTIM_WRAPPER_CONSTRUCTORS
from .my_optimizer import MyOptimizer
@OPTIM_WRAPPER_CONSTRUCTORS.register_module()
class MyOptimizerWrapperConstructor(DefaultOptimWrapperConstructor):
def __init__(self,
optim_wrapper_cfg: dict,
paramwise_cfg: Optional[dict] = None):
def __call__(self, model: nn.Module) -> OptimWrapper:
return optim_wrapper
```
The default optimizer wrapper constructor is implemented [here](https://github.com/open-mmlab/mmengine/blob/main/mmengine/optim/optimizer/default_constructor.py#L18), which could also serve as a template for the new optimizer wrapper constructor.
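Assuming the constructor above honors `paramwise_cfg` like the default one, it would then be selected through the `constructor` key of `optim_wrapper`; a sketch of such a config could be:
```python
optim_wrapper = dict(
    type='OptimWrapper',
    constructor='MyOptimizerWrapperConstructor',
    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001),
    paramwise_cfg=dict(norm_decay_mult=0.))
```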
### Additional settings
Tricks that are not implemented by the optimizer should be implemented through the optimizer wrapper constructor (e.g., parameter-wise learning rates) or hooks. We list some common settings that could stabilize or accelerate training. Feel free to create PRs and issues for more settings.
- __Use gradient clip to stabilize training__:
Some models need gradient clipping to stabilize the training process, for example:
```python
optim_wrapper = dict(
_delete_=True, clip_grad=dict(max_norm=35, norm_type=2))
```
If your config inherits a base config that already sets `optim_wrapper`, you might need `_delete_=True` to override the unnecessary settings. See the [config documentation](https://mmdetection.readthedocs.io/en/latest/tutorials/config.html) for more details.
- __Use a momentum schedule to accelerate model convergence__:
We support a momentum scheduler that modifies the model's momentum according to the learning rate, which can make the model converge faster. Momentum schedulers are usually used together with LR schedulers; for example, the following config is used in [3D detection](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/configs/_base_/schedules/cyclic-20e.py) to accelerate convergence.
For more details, please refer to the implementation of [CosineAnnealingLR](https://github.com/open-mmlab/mmengine/blob/main/mmengine/optim/scheduler/lr_scheduler.py#L43) and [CosineAnnealingMomentum](https://github.com/open-mmlab/mmengine/blob/main/mmengine/optim/scheduler/momentum_scheduler.py#L71).
```python
param_scheduler = [
    # learning rate scheduler
    # During the first 8 epochs, the learning rate increases from 0 to lr * 10
    # During the next 12 epochs, the learning rate decreases from lr * 10 to lr * 1e-4
dict(
type='CosineAnnealingLR',
T_max=8,
eta_min=lr * 10,
begin=0,
end=8,
by_epoch=True,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=12,
eta_min=lr * 1e-4,
begin=8,
end=20,
by_epoch=True,
convert_to_iter_based=True),
    # momentum scheduler
    # During the first 8 epochs, the momentum increases from 0 to 0.85 / 0.95
    # During the next 12 epochs, the momentum increases from 0.85 / 0.95 to 1
dict(
type='CosineAnnealingMomentum',
T_max=8,
eta_min=0.85 / 0.95,
begin=0,
end=8,
by_epoch=True,
convert_to_iter_based=True),
dict(
type='CosineAnnealingMomentum',
T_max=12,
eta_min=1,
begin=8,
end=20,
by_epoch=True,
convert_to_iter_based=True)
]
```
## Customize training schedules
By default we use the 1x learning rate schedule, which calls [MultiStepLR](https://github.com/open-mmlab/mmengine/blob/main/mmengine/optim/scheduler/lr_scheduler.py#L139) in MMEngine.
We support many other learning rate schedules [here](https://github.com/open-mmlab/mmengine/blob/main/mmengine/optim/scheduler/lr_scheduler.py), such as the `CosineAnnealingLR` and `PolyLR` schedules. Here are some examples:
- Poly schedule:
```python
param_scheduler = [
dict(
type='PolyLR',
power=0.9,
eta_min=1e-4,
begin=0,
end=8,
by_epoch=True)]
```
- CosineAnnealing schedule:
```python
param_scheduler = [
dict(
type='CosineAnnealingLR',
T_max=8,
eta_min=lr * 1e-5,
begin=0,
end=8,
by_epoch=True)]
```
## Customize the train loop
By default, `EpochBasedTrainLoop` is used in `train_cfg`, and validation is done after every training epoch, as follows:
```python
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_begin=1, val_interval=1)
```
Actually, both [`IterBasedTrainLoop`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L183) and [`EpochBasedTrainLoop`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L18) support dynamic validation intervals, see the following example.
```python
# Before the 365001st iteration, we do evaluation every 5000 iterations.
# After the 365000th iteration, we do evaluation every 368750 iterations,
# which means we do evaluation at the end of training.
interval = 5000
max_iters = 368750
dynamic_intervals = [(max_iters // interval * interval + 1, max_iters)]
train_cfg = dict(
type='IterBasedTrainLoop',
max_iters=max_iters,
val_interval=interval,
dynamic_intervals=dynamic_intervals)
```
## Customize hooks
### Customize self-implemented hooks
#### 1. Implement a new hook
MMEngine provides many useful [hooks](https://mmdetection.readthedocs.io/en/latest/tutorials/hooks.html), but there are some occasions when the users might need to implement a new hook. MMDetection supports customized hooks in training since v3.0, so the users can implement a hook directly in mmdet or their mmdet-based codebase and use it by only modifying the training config.
Here we give an example of creating a new hook in mmdet and using it in training.
```python
from typing import Optional, Union

from mmengine.hooks import Hook

from mmdet.registry import HOOKS

# type alias matching the one used in MMEngine's hook signatures
DATA_BATCH = Optional[Union[dict, tuple, list]]
@HOOKS.register_module()
class MyHook(Hook):
def __init__(self, a, b):
def before_run(self, runner) -> None:
def after_run(self, runner) -> None:
def before_train(self, runner) -> None:
def after_train(self, runner) -> None:
def before_train_epoch(self, runner) -> None:
def after_train_epoch(self, runner) -> None:
def before_train_iter(self,
runner,
batch_idx: int,
data_batch: DATA_BATCH = None) -> None:
def after_train_iter(self,
runner,
batch_idx: int,
data_batch: DATA_BATCH = None,
outputs: Optional[dict] = None) -> None:
```
Depending on the functionality of the hook, the users need to specify what the hook will do at each stage of training in `before_run`, `after_run`, `before_train`, `after_train`, `before_train_epoch`, `after_train_epoch`, `before_train_iter`, and `after_train_iter`. There are more points where hooks can be inserted; please refer to the [base hook class](https://github.com/open-mmlab/mmengine/blob/main/mmengine/hooks/hook.py#L9) for more details.
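For example, a hedged sketch of a hook that only implements `after_train_iter` (the class name, the `interval` argument, and the finiteness check are made up for illustration) could look like this:
```python
import torch
from mmengine.hooks import Hook

from mmdet.registry import HOOKS


@HOOKS.register_module()
class MyLossCheckHook(Hook):
    """Hypothetical hook: periodically assert that training losses are finite."""

    def __init__(self, interval=50):
        self.interval = interval

    def after_train_iter(self, runner, batch_idx, data_batch=None, outputs=None):
        # `outputs` is whatever the model's train_step returned for this iteration
        if outputs is None or (batch_idx + 1) % self.interval != 0:
            return
        for name, value in outputs.items():
            if isinstance(value, torch.Tensor):
                assert torch.isfinite(value).all(), f'{name} became non-finite!'
```
Enabling it would then only require `custom_hooks = [dict(type='MyLossCheckHook', interval=50)]`, following the registration steps below.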
#### 2. Register the new hook
Then we need to import `MyHook`. Assuming the file is located in `mmdet/engine/hooks/my_hook.py`, there are two ways to do that:
- Modify `mmdet/engine/hooks/__init__.py` to import it.
The newly defined module should be imported in `mmdet/engine/hooks/__init__.py` so that the registry will find the new module and add it:
```python
from .my_hook import MyHook
```
- Use `custom_imports` in the config to manually import it:
```python
custom_imports = dict(imports=['mmdet.engine.hooks.my_hook'], allow_failed_imports=False)
```
#### 3. Modify the config
```python
custom_hooks = [
dict(type='MyHook', a=a_value, b=b_value)
]
```
You can also set the priority of the hook by setting the key `priority` to `'NORMAL'` or `'HIGHEST'`, as follows:
```python
custom_hooks = [
dict(type='MyHook', a=a_value, b=b_value, priority='NORMAL')
]
```
By default, the hook's priority is set to `NORMAL` during registration.
### Use hooks implemented in MMDetection
If the hook is already implemented in MMDetection, you can directly modify the config to use it, as follows.
#### Example: `NumClassCheckHook`
We implement a customized hook named [NumClassCheckHook](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/num_class_check_hook.py) to check whether the `num_classes` in heads matches the length of `classes` in the `dataset`.
We set it in [default_runtime.py](https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/default_runtime.py).
```python
custom_hooks = [dict(type='NumClassCheckHook')]
```
### Modify default runtime hooks
There are some common hooks that are registered through `default_hooks`, they are:
- `IterTimerHook`: a hook that logs 'data_time' for loading data and 'time' for a model training step.
- `LoggerHook`: a hook that collects logs from different components of the `Runner` and writes them to the terminal, a JSON file, TensorBoard, wandb, etc.
- `ParamSchedulerHook`: a hook that updates some hyper-parameters of the optimizer, e.g., learning rate and momentum.
- `CheckpointHook`: a hook that saves checkpoints periodically.
- `DistSamplerSeedHook`: a hook that sets the seed for the sampler and batch sampler.
- `DetVisualizationHook`: a hook used to visualize the prediction results of validation and testing.
`IterTimerHook`, `ParamSchedulerHook`, and `DistSamplerSeedHook` are simple and usually do not need to be modified, so here we reveal how to use `LoggerHook`, `CheckpointHook`, and `DetVisualizationHook`; a sketch of the overall `default_hooks` structure follows.
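For orientation, the `default_hooks` field in the base runtime config typically looks roughly like the sketch below; the exact intervals and extra arguments may differ, so treat `configs/_base_/default_runtime.py` as authoritative.
```python
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', interval=1),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='DetVisualizationHook'))
```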
#### CheckpointHook
Except for saving checkpoints periodically, [`CheckpointHook`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/hooks/checkpoint_hook.py#L19) provides other options such as `max_keep_ckpts`, `save_optimizer`, etc. The users can set `max_keep_ckpts` to keep only a small number of checkpoints, or decide whether to store the state dict of the optimizer via `save_optimizer`. More details of the arguments can be found [here](https://github.com/open-mmlab/mmengine/blob/main/mmengine/hooks/checkpoint_hook.py#L19).
```python
default_hooks = dict(
checkpoint=dict(
type='CheckpointHook',
interval=1,
max_keep_ckpts=3,
save_optimizer=True))
```
#### LoggerHook
The `LoggerHook` enables setting the logging interval. Detailed usage can be found in the [docstring](https://github.com/open-mmlab/mmengine/blob/main/mmengine/hooks/logger_hook.py#L18).
```python
default_hooks = dict(logger=dict(type='LoggerHook', interval=50))
```
#### DetVisualizationHook
`DetVisualizationHook` uses `DetLocalVisualizer` to visualize prediction results, and `DetLocalVisualizer` supports different backends, e.g., `TensorboardVisBackend` and `WandbVisBackend` (see the [docstring](https://github.com/open-mmlab/mmengine/blob/main/mmengine/visualization/vis_backend.py) for more details). The users can add multiple backends for visualization, as follows.
```python
default_hooks = dict(
visualization=dict(type='DetVisualizationHook', draw=True))
vis_backends = [dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend')]
visualizer = dict(
type='DetLocalVisualizer', vis_backends=vis_backends, name='visualizer')
```