"web/extensions/vscode:/vscode.git/clone" did not exist on "dd35588b29bb1c4a8bd1fd97be78108b63ded3d5"
Commit 26b83c4a authored by dengjb's avatar dengjb
Browse files

update codes

parent 2f6baaee
Pipeline #1045 failed with stages
in 0 seconds
# Inference with existing models
MMDetection provides hundreds of pre-trained detection models in [Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html).
This note will show how to run inference, which means using trained models to detect objects on images.
In MMDetection, a model is defined by a [configuration file](https://mmdetection.readthedocs.io/en/latest/user_guides/config.html) and existing model parameters are saved in a checkpoint file.
To start with, we recommend [RTMDet](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet) with this [configuration file](https://github.com/open-mmlab/mmdetection/blob/main/configs/rtmdet/rtmdet_l_8xb32-300e_coco.py) and this [checkpoint file](https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_l_8xb32-300e_coco/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth). It is recommended to download the checkpoint file to the `checkpoints` directory.
## High-level APIs for inference - `Inferencer`
In OpenMMLab, all the inference operations are unified into a new interface - Inferencer. Inferencer is designed to expose a neat and simple API to users, and shares a very similar interface across different OpenMMLab libraries.
A notebook demo can be found in [demo/inference_demo.ipynb](https://github.com/open-mmlab/mmdetection/blob/main/demo/inference_demo.ipynb).
### Basic Usage
You can get inference results for an image with only 3 lines of code.
```python
from mmdet.apis import DetInferencer
# Initialize the DetInferencer
inferencer = DetInferencer('rtmdet_tiny_8xb32-300e_coco')
# Perform inference
inferencer('demo/demo.jpg', show=True)
```
The resulting output will be displayed in a new window:
<div align="center">
<img src='https://github.com/open-mmlab/mmdetection/assets/27466624/311df42d-640a-4a5b-9ad9-9ba7f3ec3a2f' />
</div>
```{note}
If you are running MMDetection on a server without GUI or via SSH tunnel with X11 forwarding disabled, the `show` option will not work. However, you can still save visualizations to files by setting `out_dir` arguments. Read [Dumping Results](#dumping-results) for details.
```
### Initialization
Each Inferencer must be initialized with a model. You can also choose the inference device during initialization.
#### Model Initialization
- To infer with one of MMDetection's pre-trained models, pass its name to the argument `model`. The weights will be automatically downloaded and loaded from OpenMMLab's model zoo.
```python
inferencer = DetInferencer(model='rtmdet_tiny_8xb32-300e_coco')
```
There is a very easy way to list all model names in MMDetection.
```python
# models is a list of model names, and they will print automatically
models = DetInferencer.list_models('mmdet')
```
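As a small usage sketch (the `'rtmdet'` substring filter below is just an illustration), you can narrow the returned list down to a model family:
```python
# a usage sketch: filter the model list down to a family of interest
rtmdet_models = [
    name for name in DetInferencer.list_models('mmdet') if 'rtmdet' in name
]
print(rtmdet_models[:5])
```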
You can load another weight by passing its path/url to `weights`.
```python
inferencer = DetInferencer(model='rtmdet_tiny_8xb32-300e_coco', weights='path/to/rtmdet.pth')
```
- To load custom config and weight, you can pass the path to the config file to `model` and the path to the weight to `weights`.
```python
inferencer = DetInferencer(model='path/to/rtmdet_config.py', weights='path/to/rtmdet.pth')
```
- By default, [MMEngine](https://github.com/open-mmlab/mmengine/) dumps the config into the checkpoint. If you have a checkpoint trained with MMEngine, you can also pass the path to the checkpoint file to `weights` without specifying `model`:
```python
# It will raise an error if the config file cannot be found in the weight. Currently, within the MMDetection model repository, only the weights of ddq-detr-4scale_r50 can be loaded in this manner.
inferencer = DetInferencer(weights='https://download.openmmlab.com/mmdetection/v3.0/ddq/ddq-detr-4scale_r50_8xb2-12e_coco/ddq-detr-4scale_r50_8xb2-12e_coco_20230809_170711-42528127.pth')
```
- Passing a config file to `model` without specifying `weights` will result in a randomly initialized model.
#### Device
Each Inferencer instance is bound to a device.
By default, the best device is automatically decided by [MMEngine](https://github.com/open-mmlab/mmengine/). You can also alter the device by specifying the `device` argument. For example, you can use the following code to create an Inferencer on GPU 1.
```python
inferencer = DetInferencer(model='rtmdet_tiny_8xb32-300e_coco', device='cuda:1')
```
To create an Inferencer on CPU:
```python
inferencer = DetInferencer(model='rtmdet_tiny_8xb32-300e_coco', device='cpu')
```
Refer to [torch.device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) for all the supported forms.
### Inference
Once the Inferencer is initialized, you can directly pass in the raw data to be inferred and get the inference results from return values.
#### Input
Input can be either of these types:
- str: Path/URL to the image.
```python
inferencer('demo/demo.jpg')
```
- array: Image in numpy array. It should be in BGR order.
```python
import mmcv
array = mmcv.imread('demo/demo.jpg')
inferencer(array)
```
- list: A list of basic types above. Each element in the list will be processed separately.
```python
inferencer(['img_1.jpg', 'img_2.jpg'])
# You can even mix the types
inferencer(['img_1.jpg', array])
```
- str: Path to a directory. All images in the directory will be processed.
```python
inferencer('path/to/your_imgs/')
```
#### Output
By default, each `Inferencer` returns the prediction results in a dictionary format.
- `visualization` contains the visualized predictions. It is an empty list by default unless `return_vis=True`.
- `predictions` contains the prediction results in a JSON-serializable format.
```python
{
'predictions' : [
# Each instance corresponds to an input image
{
'labels': [...], # int list of length (N, )
'scores': [...], # float list of length (N, )
'bboxes': [...], # 2d list of shape (N, 4), format: [min_x, min_y, max_x, max_y]
},
...
],
'visualization' : [
array(..., dtype=uint8),
]
}
```
If you wish to get the raw outputs from the model, you can set `return_datasamples` to `True` to get the original [DataSample](advanced_guides/structures.md), which will be stored in `predictions`.
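For example, here is a minimal sketch of reading the returned dictionary, using only the fields documented above:
```python
# a minimal sketch: inspect the highest-scoring detection of the first image
result = inferencer('demo/demo.jpg')
pred = result['predictions'][0]  # one dict per input image
top = max(range(len(pred['scores'])), key=lambda i: pred['scores'][i])
print(pred['labels'][top], pred['scores'][top], pred['bboxes'][top])
```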
#### Dumping Results
Apart from obtaining predictions from the return value, you can also export the predictions/visualizations to files by setting `out_dir` and `no_save_pred`/`no_save_vis` arguments.
```python
inferencer('demo/demo.jpg', out_dir='outputs/', no_save_pred=False)
```
This results in a directory structure like:
```text
outputs
├── preds
│   └── demo.json
└── vis
    └── demo.jpg
```
The filename of each file is the same as the corresponding input image filename. If the input image is an array, the filename will be a number starting from 0.
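For example, a sketch of dumping results for an array input, which is saved under the numeric filename scheme just described:
```python
import mmcv

# a sketch: array inputs are dumped as 0.jpg / 0.json, 1.jpg / 1.json, ...
array = mmcv.imread('demo/demo.jpg')
inferencer(array, out_dir='outputs/', no_save_pred=False)
```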
#### Batch Inference
You can customize the batch size by setting `batch_size`. The default batch size is 1.
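For instance, a sketch of batched inference over a folder (`path/to/your_imgs/` is a placeholder path):
```python
# a sketch: process a folder of images in batches of 4
results = inferencer('path/to/your_imgs/', batch_size=4)
print(len(results['predictions']))  # one entry per image in the folder
```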
### API
Here is an extensive list of the parameters that you can use.
- **DetInferencer.\_\_init\_\_():**
| Arguments | Type | Default | Description |
| --------------- | ------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `model` | str, optional | None | Path to the config file or the model name defined in metafile. For example, it could be 'rtmdet-s' or 'rtmdet_s_8xb32-300e_coco' or 'configs/rtmdet/rtmdet_s_8xb32-300e_coco.py'. If the model is not specified, the user must provide the `weights` saved by MMEngine which contains the config string. |
| `weights` | str, optional | None | Path to the checkpoint. If it is not specified and `model` is a model name of metafile, the weights will be loaded from metafile. |
| `device` | str, optional | None | Device used for inference, accepting all allowed strings by `torch.device`. E.g., 'cuda:0' or 'cpu'. If None, the available device will be automatically used. |
| `scope` | str, optional | 'mmdet' | The scope of the model. |
| `palette` | str | 'none' | Color palette used for visualization. The order of priority is palette -> config -> checkpoint. |
| `show_progress` | bool | True | Control whether to display the progress bar during the inference process. |
- **DetInferencer.\_\_call\_\_()**
| Arguments | Type | Default | Description |
| -------------------- | ------------------------- | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inputs` | str/list/tuple/np.array | **required** | It can be a path to an image/a folder, an np array or a list/tuple (with img paths or np arrays) |
| `batch_size` | int | 1 | Inference batch size. |
| `print_result` | bool | False | Whether to print the inference result to the console. |
| `show` | bool | False | Whether to display the visualization results in a popup window. |
| `wait_time` | float | 0 | The interval of show, in seconds. |
| `no_save_vis` | bool | False | Whether to force not to save prediction vis results. |
| `draw_pred` | bool | True | Whether to draw predicted bounding boxes. |
| `pred_score_thr` | float | 0.3 | Minimum score of bboxes to draw. |
| `return_datasamples` | bool | False | Whether to return results as DataSamples. If False, the results will be packed into a dict. |
| `no_save_pred` | bool | True | Whether to force not to save prediction results. |
| `out_dir` | str | '' | Output directory of results. |
| `texts` | str/list\[str\], optional | None | Text prompts. |
| `stuff_texts` | str/list\[str\], optional | None | Stuff text prompts of open panoptic task. |
| `custom_entities` | bool | False | Whether to use custom entities. Only used in GLIP. |
| \*\*kwargs | | | Other keyword arguments passed to :meth:`preprocess`, :meth:`forward`, :meth:`visualize` and :meth:`postprocess`. Each key in kwargs should be in the corresponding set of `preprocess_kwargs`, `forward_kwargs`, `visualize_kwargs` and `postprocess_kwargs`. |
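As a usage sketch combining several of the documented arguments:
```python
# a sketch: visualize, dump, and print results in one call
results = inferencer(
    'demo/demo.jpg',
    out_dir='outputs/',
    no_save_pred=False,   # also dump predictions as JSON
    pred_score_thr=0.5,   # only draw boxes with score >= 0.5
    print_result=True,    # echo the result dict to the console
)
```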
## Demos
We also provide four demo scripts, implemented with the high-level APIs and supporting functionality code.
Source codes are available [here](https://github.com/open-mmlab/mmdetection/blob/main/demo).
### Image demo
This script performs inference on a single image.
```shell
python demo/image_demo.py \
${IMAGE_FILE} \
${CONFIG_FILE} \
[--weights ${WEIGHTS}] \
[--device ${GPU_ID}] \
[--pred-score-thr ${SCORE_THR}]
```
Examples:
```shell
python demo/image_demo.py demo/demo.jpg \
configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
--weights checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--device cpu
```
### Webcam demo
This is a live demo from a webcam.
```shell
python demo/webcam_demo.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--device ${GPU_ID}] \
[--camera-id ${CAMERA-ID}] \
[--score-thr ${SCORE_THR}]
```
Examples:
```shell
python demo/webcam_demo.py \
configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth
```
### Video demo
This script performs inference on a video.
```shell
python demo/video_demo.py \
${VIDEO_FILE} \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--device ${GPU_ID}] \
[--score-thr ${SCORE_THR}] \
[--out ${OUT_FILE}] \
[--show] \
[--wait-time ${WAIT_TIME}]
```
Examples:
```shell
python demo/video_demo.py demo/demo.mp4 \
configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--out result.mp4
```
#### Video demo with GPU acceleration
This script performs inference on a video with GPU acceleration.
```shell
python demo/video_gpuaccel_demo.py \
${VIDEO_FILE} \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--device ${GPU_ID}] \
[--score-thr ${SCORE_THR}] \
[--nvdecode] \
[--out ${OUT_FILE}] \
[--show] \
[--wait-time ${WAIT_TIME}]
```
Examples:
```shell
python demo/video_gpuaccel_demo.py demo/demo.mp4 \
configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--nvdecode --out result.mp4
```
### Large-image inference demo
This is a script for sliced inference on large images.
```shell
python demo/large_image_demo.py \
${IMG_PATH} \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
--device ${GPU_ID} \
--show \
--tta \
--score-thr ${SCORE_THR} \
--patch-size ${PATCH_SIZE} \
--patch-overlap-ratio ${PATCH_OVERLAP_RATIO} \
--merge-iou-thr ${MERGE_IOU_THR} \
--merge-nms-type ${MERGE_NMS_TYPE} \
--batch-size ${BATCH_SIZE} \
--debug \
--save-patch
```
Examples:
```shell
# inference without tta
wget -P checkpoint https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_2x_coco/faster_rcnn_r101_fpn_2x_coco_bbox_mAP-0.398_20200504_210455-1d2dac9c.pth
python demo/large_image_demo.py \
demo/large_image.jpg \
configs/faster_rcnn/faster-rcnn_r101_fpn_2x_coco.py \
checkpoint/faster_rcnn_r101_fpn_2x_coco_bbox_mAP-0.398_20200504_210455-1d2dac9c.pth
# inference with tta
wget -P checkpoint https://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_fpn_1x_coco/retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth
python demo/large_image_demo.py \
demo/large_image.jpg \
configs/retinanet/retinanet_r50_fpn_1x_coco.py \
checkpoint/retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth --tta
```
## Multi-modal algorithm inference demo and evaluation
As multimodal vision algorithms continue to evolve, MMDetection has also come to support such algorithms. This section demonstrates how to use the demo and eval scripts corresponding to multimodal algorithms, using the GLIP algorithm and model as an example. Moreover, MMDetection integrates a [gradio_demo project](../../../projects/gradio_demo/), which allows developers to quickly play with all image input tasks in MMDetection on their local devices. Check the [document](../../../projects/gradio_demo/README.md) for more details.
### Preparation
Please first make sure that you have the correct dependencies installed:
```shell
# if installed from source
pip install -r requirements/multimodal.txt
# if installed as a wheel
mim install mmdet[multimodal]
```
MMDetection has already implemented GLIP and provides the weights; you can download them directly:
```shell
cd mmdetection
wget https://download.openmmlab.com/mmdetection/v3.0/glip/glip_tiny_a_mmdet-b3654169.pth
```
### Inference
Once the model is successfully downloaded, you can use the `demo/image_demo.py` script to run the inference.
```shell
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts bench
```
The demo result will be similar to this:
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/234547841-266476c8-f987-4832-8642-34357be621c6.png" height="300"/>
</div>
If users would like to detect multiple targets, please declare them in the format of `xx. xx` after the `--texts`.
```shell
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts 'bench. car'
```
And the result will be like this one:
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/234548156-ef9bbc2e-7605-4867-abe6-048b8578893d.png" height="300"/>
</div>
You can also use a sentence as the input prompt for the `--texts` field, for example:
```shell
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts 'There are a lot of cars here.'
```
The result will be similar to this:
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/234548490-d2e0a16d-1aad-4708-aea0-c829634fabbd.png" height="300"/>
</div>
### Evaluation
The GLIP implementation in MMDetection does not have any performance degradation; our benchmark is as follows:
| Model | official mAP | mmdet mAP |
| ----------------------- | :----------: | :-------: |
| glip_A_Swin_T_O365.yaml | 42.9 | 43.0 |
| glip_Swin_T_O365.yaml | 44.9 | 44.9 |
| glip_Swin_L.yaml | 51.4 | 51.3 |
Users can use the test script we provided to run evaluation as well. Here is a basic example:
```shell
# 1 GPU
python tools/test.py configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py glip_tiny_a_mmdet-b3654169.pth
# 8 GPUs
./tools/dist_test.sh configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py glip_tiny_a_mmdet-b3654169.pth 8
```
# Weight initialization
During training, a proper initialization strategy is beneficial for speeding up training or obtaining higher performance. [MMCV](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/weight_init.py) provides some commonly used methods for initializing modules like `nn.Conv2d`. Model initialization in MMDetection mainly uses `init_cfg`. Users can initialize models in the following two steps:
1. Define `init_cfg` for a model or its components in `model_cfg`. Note that the `init_cfg` of child components has higher priority and will override the `init_cfg` of parent modules.
2. Build the model as usual, then call `model.init_weights()` explicitly, and the model parameters will be initialized according to the configuration.
The high-level workflow of initialization in MMDetection is:
model_cfg(init_cfg) -> build_from_cfg -> model -> init_weights() -> initialize(self, self.init_cfg) -> children's init_weights()
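To make the two steps concrete, here is a minimal sketch; the toy module and its single conv layer are illustrative only:
```python
import torch.nn as nn
from mmcv.runner import BaseModule


class ToyModel(BaseModule):
    """A toy module whose initialization is governed by init_cfg."""

    def __init__(self, init_cfg=None):
        super(ToyModel, self).__init__(init_cfg)
        self.conv = nn.Conv2d(3, 8, 3)


# Step 1: define init_cfg; Step 2: build, then call init_weights() explicitly
model = ToyModel(init_cfg=dict(type='Constant', layer='Conv2d', val=1, bias=0))
model.init_weights()  # parameters are initialized here, not in __init__
```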
### Description
`init_cfg` is a dict or list\[dict\], and contains the following keys and values:
- `type` (str), containing the initializer name in `INITIALIZERS`, followed by the arguments of the initializer.
- `layer` (str or list\[str\]), containing the names of basic layers in PyTorch or MMCV with learnable parameters that will be initialized, e.g. `'Conv2d'`, `'DeformConv2d'`.
- `override` (dict or list\[dict\]), containing the sub-modules that do not inherit from `BaseModule` and whose initialization configuration differs from that of the other layers listed in the `layer` key. The initializer defined in `type` applies to all layers defined in `layer`, so if a sub-module is not a derived class of `BaseModule` but can be initialized in the same way as the layers in `layer`, `override` is not needed. `override` contains:
  - `type` followed by the arguments of the initializer;
  - `name` to indicate the sub-module that will be initialized.
### Initialize parameters
Inherit a new model from `mmcv.runner.BaseModule` or `mmdet.models`. Here we show an example with FooModel.
```python
import torch.nn as nn
from mmcv.runner import BaseModule
class FooModel(BaseModule):
    def __init__(self,
                 arg1,
                 arg2,
                 init_cfg=None):
        super(FooModel, self).__init__(init_cfg)
        ...
```
- Initialize model by using `init_cfg` directly in code
```python
import torch.nn as nn
from mmcv.runner import BaseModule
# or directly inherit mmdet models
class FooModel(BaseModule):
    def __init__(self,
                 arg1,
                 arg2,
                 init_cfg=XXX):
        super(FooModel, self).__init__(init_cfg)
        ...
```
- Initialize model by using `init_cfg` directly in `mmcv.Sequential` or `mmcv.ModuleList` code
```python
from mmcv.runner import BaseModule, ModuleList
class FooModel(BaseModule):
    def __init__(self,
                 arg1,
                 arg2,
                 init_cfg=None):
        super(FooModel, self).__init__(init_cfg)
        ...
        self.conv1 = ModuleList(init_cfg=XXX)
```
- Initialize model by using `init_cfg` in config file
```python
model = dict(
    ...
    model=dict(
        type='FooModel',
        arg1=XXX,
        arg2=XXX,
        init_cfg=XXX),
    ...)
```
### Usage of init_cfg
1. Initialize model by `layer` key
If we only define `layer`, it just initializes the layers listed in the `layer` key.
NOTE: The value of the `layer` key is the class name of a PyTorch or MMCV layer with `weight` and `bias` attributes, so layers such as `MultiheadAttention` are not supported.
- Define `layer` key for initializing module with same configuration.
```python
init_cfg = dict(type='Constant', layer=['Conv1d', 'Conv2d', 'Linear'], val=1)
# initialize whole module with same configuration
```
- Define `layer` key for initializing layer with different configurations.
```python
init_cfg = [dict(type='Constant', layer='Conv1d', val=1),
dict(type='Constant', layer='Conv2d', val=2),
dict(type='Constant', layer='Linear', val=3)]
# nn.Conv1d will be initialized with dict(type='Constant', val=1)
# nn.Conv2d will be initialized with dict(type='Constant', val=2)
# nn.Linear will be initialized with dict(type='Constant', val=3)
```
2. Initialize model by `override` key
- When initializing some specific part of a module by its attribute name, we can use the `override` key, and the values in `override` will override those in `init_cfg`.
```python
# layers:
# self.feat = nn.Conv1d(3, 1, 3)
# self.reg = nn.Conv2d(3, 3, 3)
# self.cls = nn.Linear(1,2)
init_cfg = dict(type='Constant',
layer=['Conv1d','Conv2d'], val=1, bias=2,
override=dict(type='Constant', name='reg', val=3, bias=4))
# self.feat and self.cls will be initialized with dict(type='Constant', val=1, bias=2)
# The module called 'reg' will be initialized with dict(type='Constant', val=3, bias=4)
```
- If `layer` is None in `init_cfg`, only the sub-module named in `override` will be initialized, and `type` and other args in `override` can be omitted.
```python
# layers:
# self.feat = nn.Conv1d(3, 1, 3)
# self.reg = nn.Conv2d(3, 3, 3)
# self.cls = nn.Linear(1,2)
init_cfg = dict(type='Constant', val=1, bias=2, override=dict(name='reg'))
# self.feat and self.cls will be initialized by PyTorch's default initialization
# The module called 'reg' will be initialized with dict(type='Constant', val=1, bias=2)
```
- If we define neither the `layer` key nor the `override` key, nothing will be initialized.
- Invalid usage
```python
# It is invalid if `override` does not contain the `name` key
init_cfg = dict(type='Constant', layer=['Conv1d','Conv2d'], val=1, bias=2,
override=dict(type='Constant', val=3, bias=4))
# It is also invalid if `override` contains `name` and other args but not `type`
init_cfg = dict(type='Constant', layer=['Conv1d','Conv2d'], val=1, bias=2,
override=dict(name='reg', val=3, bias=4))
```
3. Initialize model with a pretrained model
```python
init_cfg = dict(type='Pretrained',
checkpoint='torchvision://resnet50')
```
For more details, refer to the documentation in [MMEngine](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/initialize.html).
# Semi-automatic Object Detection Annotation with MMDetection and Label-Studio
Annotating data is a time-consuming and laborious task. This article introduces how to perform semi-automatic annotation using the RTMDet algorithm in MMDetection in conjunction with the Label-Studio software: RTMDet predicts the image annotations, which are then refined in Label-Studio. Community users can refer to this process and methodology and apply it to other fields.
- RTMDet: RTMDet is a high-precision single-stage object detection algorithm developed by OpenMMLab, open-sourced in the MMDetection object detection toolbox. Its open-source license is Apache 2.0, and it can be used freely without restrictions by industrial users.
- [Label Studio](https://github.com/heartexlabs/label-studio) is an excellent annotation software covering the functionality of dataset annotation in areas such as image classification, object detection, and segmentation.
In this article, we will use [cat](https://download.openmmlab.com/mmyolo/data/cat_dataset.zip) images for semi-automatic annotation.
## Environment Configuration
To begin with, you need to create a virtual environment and then install PyTorch and MMCV. In this article, we will specify the versions of PyTorch and MMCV. Next, you can install MMDetection, Label-Studio, and label-studio-ml-backend using the following steps:
Create a virtual environment:
```shell
conda create -n rtmdet python=3.9 -y
conda activate rtmdet
```
Install PyTorch:
```shell
# Linux and Windows CPU only
pip install torch==1.10.1+cpu torchvision==0.11.2+cpu torchaudio==0.10.1 -f https://download.pytorch.org/whl/cpu/torch_stable.html
# Linux and Windows CUDA 11.3
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu113/torch_stable.html
# OSX
pip install torch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1
```
Install MMCV:
```shell
pip install -U openmim
mim install "mmcv>=2.0.0"
# Installing mmcv will automatically install mmengine
```
Install MMDetection:
```shell
git clone https://github.com/open-mmlab/mmdetection
cd mmdetection
pip install -v -e .
```
Install Label-Studio and label-studio-ml-backend:
```shell
# Installing Label-Studio may take some time, if the version is not found, please use the official source
pip install label-studio==1.7.2
pip install label-studio-ml==1.0.9
```
Download the RTMDet weights:
```shell
cd path/to/mmdetection
mkdir work_dirs
cd work_dirs
wget https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_m_8xb32-300e_coco/rtmdet_m_8xb32-300e_coco_20220719_112220-229f527c.pth
```
## Start the Service
Start the RTMDet backend inference service:
```shell
cd path/to/mmdetection
label-studio-ml start projects/LabelStudio/backend_template --with \
config_file=configs/rtmdet/rtmdet_m_8xb32-300e_coco.py \
checkpoint_file=./work_dirs/rtmdet_m_8xb32-300e_coco_20220719_112220-229f527c.pth \
device=cpu \
--port 8003
# Set device=cpu to use CPU inference, and replace cpu with cuda:0 to use GPU inference.
```
![](https://cdn.vansin.top/picgo20230330131601.png)
The RTMDet backend inference service has now been started. To configure it in the Label-Studio web system, use http://localhost:8003 as the backend inference service.
Now, start the Label-Studio web service:
```shell
label-studio start
```
![](https://cdn.vansin.top/picgo20230330132913.png)
Open your web browser and go to http://localhost:8080/ to see the Label-Studio interface.
![](https://cdn.vansin.top/picgo20230330133118.png)
Register a user and then create an RTMDet-Semiautomatic-Label project.
![](https://cdn.vansin.top/picgo20230330133333.png)
Download the example cat images by running the following command and import them using the Data Import button:
```shell
cd path/to/mmdetection
mkdir data && cd data
wget https://download.openmmlab.com/mmyolo/data/cat_dataset.zip && unzip cat_dataset.zip
```
![](https://cdn.vansin.top/picgo20230330133628.png)
![](https://cdn.vansin.top/picgo20230330133715.png)
Then, select the Object Detection With Bounding Boxes template.
![](https://cdn.vansin.top/picgo20230330133807.png)
```shell
airplane
apple
backpack
banana
baseball_bat
baseball_glove
bear
bed
bench
bicycle
bird
boat
book
bottle
bowl
broccoli
bus
cake
car
carrot
cat
cell_phone
chair
clock
couch
cow
cup
dining_table
dog
donut
elephant
fire_hydrant
fork
frisbee
giraffe
hair_drier
handbag
horse
hot_dog
keyboard
kite
knife
laptop
microwave
motorcycle
mouse
orange
oven
parking_meter
person
pizza
potted_plant
refrigerator
remote
sandwich
scissors
sheep
sink
skateboard
skis
snowboard
spoon
sports_ball
stop_sign
suitcase
surfboard
teddy_bear
tennis_racket
tie
toaster
toilet
toothbrush
traffic_light
train
truck
tv
umbrella
vase
wine_glass
zebra
```
Then, copy and add the above categories to Label-Studio and click Save.
![](https://cdn.vansin.top/picgo20230330134027.png)
In the Settings, click Add Model to add the RTMDet backend inference service.
![](https://cdn.vansin.top/picgo20230330134320.png)
Click Validate and Save, and then click Start Labeling.
![](https://cdn.vansin.top/picgo20230330134424.png)
If you see Connected as shown below, the backend inference service has been successfully added.
![](https://cdn.vansin.top/picgo20230330134554.png)
## Start Semi-Automatic Labeling
Click on Label to start labeling.
![](https://cdn.vansin.top/picgo20230330134804.png)
We can see that the RTMDet backend inference service has successfully returned the predicted results and displayed them on the image. However, we noticed that the predicted bounding boxes for the cats are a bit too large and not very accurate.
![](https://cdn.vansin.top/picgo20230403104419.png)
We manually adjust the position of the cat bounding box, and then click Submit to complete the annotation of this image.
![](https://cdn.vansin.top/picgo/20230403105923.png)
After submitting all images, click export to export the labeled dataset in COCO format.
![](https://cdn.vansin.top/picgo20230330135921.png)
Use VS Code to open the unzipped folder to see the labeled dataset, which includes the images and the annotation files in JSON format.
![](https://cdn.vansin.top/picgo20230330140321.png)
At this point, the semi-automatic labeling is complete. We can use this dataset to train a more accurate model in MMDetection and then continue semi-automatic labeling on newly collected images with this model. This way, we can iteratively expand the high-quality dataset and improve the accuracy of the model.
## Use MMYOLO as the Backend Inference Service
If you want to use Label-Studio with MMYOLO, you can follow the same process, replacing `config_file` and `checkpoint_file` with the MMYOLO configuration file and weight file when starting the backend inference service.
```shell
cd path/to/mmdetection
label-studio-ml start projects/LabelStudio/backend_template --with \
config_file=path/to/mmyolo_config.py \
checkpoint_file=path/to/mmyolo_weights.pth \
device=cpu \
--port 8003
# device=cpu is for using CPU inference. If using GPU inference, replace cpu with cuda:0.
```
Rotated object detection and instance segmentation are still under development; please stay tuned.
# Train with customized models and standard datasets
In this note, you will learn how to train, test, and run inference with your own customized models under standard datasets. We use the cityscapes dataset to train a customized Cascade Mask R-CNN R50 model as an example to demonstrate the whole process, which uses [`AugFPN`](https://github.com/Gus-Guo/AugFPN) to replace the default `FPN` as the neck and adds `Rotate` or `TranslateX` as training-time auto augmentation.
The basic steps are as below:
1. Prepare the standard dataset
2. Prepare your own customized model
3. Prepare a config
4. Train, test, and run inference with models on the standard dataset.
## Prepare the standard dataset
In this note, we use the standard cityscapes dataset as an example.
It is recommended to symlink the dataset root to `$MMDETECTION/data`.
If your folder structure is different, you may need to change the corresponding paths in config files.
```none
mmdetection
├── mmdet
├── tools
├── configs
├── data
│ ├── coco
│ │ ├── annotations
│ │ ├── train2017
│ │ ├── val2017
│ │ ├── test2017
│ ├── cityscapes
│ │ ├── annotations
│ │ ├── leftImg8bit
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── gtFine
│ │ │ ├── train
│ │ │ ├── val
│ ├── VOCdevkit
│ │ ├── VOC2007
│ │ ├── VOC2012
```
Or you can set your dataset root through
```bash
export MMDET_DATASETS=$data_root
```
We will replace dataset root with `$MMDET_DATASETS`, so you don't have to modify the corresponding path in config files.
The cityscapes annotations have to be converted into the coco format using `tools/dataset_converters/cityscapes.py`:
```shell
pip install cityscapesscripts
python tools/dataset_converters/cityscapes.py ./data/cityscapes --nproc 8 --out-dir ./data/cityscapes/annotations
```
Currently, the config files in `cityscapes` use COCO pre-trained weights to initialize.
You could download the pre-trained models in advance if the network is unavailable or slow; otherwise, it would cause errors at the beginning of training.
## Prepare your own customized model
The second step is to use your own module or training setting. Assume that we want to implement a new neck called `AugFPN` to replace the default `FPN` under the existing detector Cascade Mask R-CNN R50. The following implements `AugFPN` under MMDetection.
### 1. Define a new neck (e.g. AugFPN)
Firstly create a new file `mmdet/models/necks/augfpn.py`.
```python
import torch.nn as nn
from mmdet.registry import MODELS
@MODELS.register_module()
class AugFPN(nn.Module):
def __init__(self,
in_channels,
out_channels,
num_outs,
start_level=0,
end_level=-1,
add_extra_convs=False):
pass
def forward(self, inputs):
# implementation is ignored
pass
```
### 2. Import the module
You can either add the following line to `mmdet/models/necks/__init__.py`,
```python
from .augfpn import AugFPN
```
or alternatively add
```python
custom_imports = dict(
imports=['mmdet.models.necks.augfpn'],
allow_failed_imports=False)
```
to the config file and avoid modifying the original code.
### 3. Modify the config file
```python
neck=dict(
type='AugFPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5)
```
For more detailed usages about customizing your own models (e.g. implement a new backbone, head, loss, etc) and runtime training settings (e.g. define a new optimizer, use gradient clip, customize training schedules and hooks, etc), please refer to the guideline [Customize Models](../advanced_guides/customize_models.md) and [Customize Runtime Settings](../advanced_guides/customize_runtime.md) respectively.
## Prepare a config
The third step is to prepare a config for your own training setting. Assume that we want to add `AugFPN` and `Rotate` or `Translate` augmentation to the existing Cascade Mask R-CNN R50 to train on the cityscapes dataset, and assume the config is under the directory `configs/cityscapes/` and named `cascade-mask-rcnn_r50_augfpn_autoaug-10e_cityscapes.py`. The config is as below.
```python
# The new config inherits the base configs to highlight the necessary modification
_base_ = [
'../_base_/models/cascade-mask-rcnn_r50_fpn.py',
'../_base_/datasets/cityscapes_instance.py', '../_base_/default_runtime.py'
]
model = dict(
# set None to avoid loading ImageNet pre-trained backbone,
# instead here we set `load_from` to load from COCO pre-trained detectors.
backbone=dict(init_cfg=None),
# replace the default `FPN` neck with our newly implemented module `AugFPN`
neck=dict(
type='AugFPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
# We also need to change the num_classes in head from 80 to 8, to match the
# cityscapes dataset's annotation. This modification involves `bbox_head` and `mask_head`.
roi_head=dict(
bbox_head=[
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
# change the number of classes from the COCO default to cityscapes
num_classes=8,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
loss_weight=1.0)),
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
# change the number of classes from the COCO default to cityscapes
num_classes=8,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
loss_weight=1.0)),
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
# change the number of classes from the COCO default to cityscapes
num_classes=8,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0., 0., 0., 0.],
target_stds=[0.033, 0.033, 0.067, 0.067]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
],
mask_head=dict(
type='FCNMaskHead',
num_convs=4,
in_channels=256,
conv_out_channels=256,
# change the number of classes from default COCO to cityscapes
num_classes=8,
loss_mask=dict(
type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))))
# override `train_pipeline` with the newly added `AutoAugment` training setting
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
dict(
type='AutoAugment',
policies=[
[dict(
type='Rotate',
level=5,
img_border_value=(124, 116, 104),
prob=0.5)
],
[dict(type='Rotate', level=7, img_border_value=(124, 116, 104)),
dict(
type='TranslateX',
level=5,
prob=0.5,
img_border_value=(124, 116, 104))
],
]),
dict(
type='RandomResize',
scale=[(2048, 800), (2048, 1024)],
keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(type='PackDetInputs'),
]
# set batch_size per gpu, and set new training pipeline
train_dataloader = dict(
batch_size=1,
num_workers=3,
# over-write `pipeline` with new training pipeline setting
dataset=dict(pipeline=train_pipeline))
# Set optimizer
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001))
# Set customized learning policy
param_scheduler = [
dict(
type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
dict(
type='MultiStepLR',
begin=0,
end=10,
by_epoch=True,
milestones=[8],
gamma=0.1)
]
# train, val, test loop config
train_cfg = dict(max_epochs=10, val_interval=1)
# We can use the COCO pre-trained Cascade Mask R-CNN R50 model for a more stable performance initialization
load_from = 'https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco/cascade_mask_rcnn_r50_fpn_1x_coco_20200203-9d4dcb24.pth'
```
## Train a new model
To train a model with the new config, you can simply run
```shell
python tools/train.py configs/cityscapes/cascade-mask-rcnn_r50_augfpn_autoaug-10e_cityscapes.py
```
For more detailed usages, please refer to the [training guide](train.md).
## Test and inference
To test the trained model, you can simply run
```shell
python tools/test.py configs/cityscapes/cascade-mask-rcnn_r50_augfpn_autoaug-10e_cityscapes.py work_dirs/cascade-mask-rcnn_r50_augfpn_autoaug-10e_cityscapes/epoch_10.pth
```
For more detailed usages, please refer to the [testing guide](test.md).
# Corruption Benchmarking
## Introduction
We provide tools to test object detection and instance segmentation models on the image corruption benchmark defined in [Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming](https://arxiv.org/abs/1907.07484).
This page provides basic tutorials on how to use the benchmark.
```latex
@article{michaelis2019winter,
title={Benchmarking Robustness in Object Detection:
Autonomous Driving when Winter is Coming},
author={Michaelis, Claudio and Mitzkus, Benjamin and
Geirhos, Robert and Rusak, Evgenia and
Bringmann, Oliver and Ecker, Alexander S. and
Bethge, Matthias and Brendel, Wieland},
journal={arXiv:1907.07484},
year={2019}
}
```
![image corruption example](../../../resources/corruptions_sev_3.png)
## About the benchmark
To submit results to the benchmark, please visit the [benchmark homepage](https://github.com/bethgelab/robust-detection-benchmark).
The benchmark is modelled after the [imagenet-c benchmark](https://github.com/hendrycks/robustness) which was originally
published in [Benchmarking Neural Network Robustness to Common Corruptions and Perturbations](https://arxiv.org/abs/1903.12261) (ICLR 2019) by Dan Hendrycks and Thomas Dietterich.
The image corruption functions are included in this library but can be installed separately using:
```shell
pip install imagecorruptions
```
Compared to imagenet-c, a few changes had to be made to handle images of arbitrary size and greyscale images.
We also modified the 'motion blur' and 'snow' corruptions to remove the dependency on a Linux-specific library,
which would otherwise have to be installed separately. For details, please refer to the [imagecorruptions repository](https://github.com/bethgelab/imagecorruptions).
## Inference with pretrained models
We provide a testing script to evaluate a model's performance on any combination of the corruptions provided in the benchmark.
### Test a dataset
- [x] single GPU testing
- [ ] multiple GPU testing
- [ ] visualize detection results
You can use the following commands to test a model's performance under the 15 corruptions used in the benchmark.
```shell
# single-gpu testing
python tools/analysis_tools/test_robustness.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}]
```
Alternatively, different groups of corruptions can be selected.
```shell
# noise
python tools/analysis_tools/test_robustness.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] --corruptions noise
# blur
python tools/analysis_tools/test_robustness.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] --corruptions blur
# weather
python tools/analysis_tools/test_robustness.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] --corruptions weather
# digital
python tools/analysis_tools/test_robustness.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] --corruptions digital
```
Or a custom set of corruptions, e.g.:
```shell
# gaussian noise, zoom blur and snow
python tools/analysis_tools/test_robustness.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] --corruptions gaussian_noise zoom_blur snow
```
Finally, the corruption severities to evaluate can be chosen.
Severity 0 corresponds to clean data and the effect increases from 1 to 5.
```shell
# severity 1
python tools/analysis_tools/test_robustness.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] --severities 1
# severities 0,2,4
python tools/analysis_tools/test_robustness.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] --severities 0 2 4
```
## Results for modelzoo models
The results on COCO 2017val are shown in the below table.
| Model | Backbone | Style | Lr schd | box AP clean | box AP corr. | box % | mask AP clean | mask AP corr. | mask % |
| :-----------------: | :-----------------: | :-----: | :-----: | :----------: | :----------: | :---: | :-----------: | :-----------: | :----: |
| Faster R-CNN | R-50-FPN | pytorch | 1x | 36.3 | 18.2 | 50.2 | - | - | - |
| Faster R-CNN | R-101-FPN | pytorch | 1x | 38.5 | 20.9 | 54.2 | - | - | - |
| Faster R-CNN | X-101-32x4d-FPN | pytorch | 1x | 40.1 | 22.3 | 55.5 | - | - | - |
| Faster R-CNN | X-101-64x4d-FPN | pytorch | 1x | 41.3 | 23.4 | 56.6 | - | - | - |
| Faster R-CNN | R-50-FPN-DCN | pytorch | 1x | 40.0 | 22.4 | 56.1 | - | - | - |
| Faster R-CNN | X-101-32x4d-FPN-DCN | pytorch | 1x | 43.4 | 26.7 | 61.6 | - | - | - |
| Mask R-CNN | R-50-FPN | pytorch | 1x | 37.3 | 18.7 | 50.1 | 34.2 | 16.8 | 49.1 |
| Mask R-CNN | R-50-FPN-DCN | pytorch | 1x | 41.1 | 23.3 | 56.7 | 37.2 | 20.7 | 55.7 |
| Cascade R-CNN | R-50-FPN | pytorch | 1x | 40.4 | 20.1 | 49.7 | - | - | - |
| Cascade Mask R-CNN | R-50-FPN | pytorch | 1x | 41.2 | 20.7 | 50.2 | 35.7 | 17.6 | 49.3 |
| RetinaNet | R-50-FPN | pytorch | 1x | 35.6 | 17.8 | 50.1 | - | - | - |
| Hybrid Task Cascade | X-101-64x4d-FPN-DCN | pytorch | 1x | 50.6 | 32.7 | 64.7 | 43.8 | 28.1 | 64.0 |
Results may vary slightly due to the stochastic application of the corruptions.
# Semi-supervised Object Detection
Semi-supervised object detection uses both labeled data and unlabeled data for training. It not only reduces the annotation burden for training high-performance object detectors but also further improves the object detector by using a large amount of unlabeled data.
A typical procedure to train a semi-supervised object detector is as below:
- [Semi-supervised Object Detection](#semi-supervised-object-detection)
- [Prepare and split dataset](#prepare-and-split-dataset)
- [Configure multi-branch pipeline](#configure-multi-branch-pipeline)
- [Configure semi-supervised dataloader](#configure-semi-supervised-dataloader)
- [Configure semi-supervised model](#configure-semi-supervised-model)
- [Configure MeanTeacherHook](#configure-meanteacherhook)
- [Configure TeacherStudentValLoop](#configure-teacherstudentvalloop)
## Prepare and split dataset
We provide a dataset download script, which downloads the coco2017 dataset by default and decompresses it automatically.
```shell
python tools/misc/download_dataset.py
```
The decompressed dataset directory structure is as below:
```plain
mmdetection
├── data
│ ├── coco
│ │ ├── annotations
│ │ │ ├── image_info_unlabeled2017.json
│ │ │ ├── instances_train2017.json
│ │ │ ├── instances_val2017.json
│ │ ├── test2017
│ │ ├── train2017
│ │ ├── unlabeled2017
│ │ ├── val2017
```
There are two common experimental settings for semi-supervised object detection on the coco2017 dataset:
(1) Split `train2017` according to a fixed percentage (1%, 2%, 5% and 10%) as the labeled dataset, and use the rest of `train2017` as the unlabeled dataset. Because different splits of `train2017` as the labeled dataset cause significant fluctuation in the accuracy of semi-supervised detectors, five-fold cross-validation is used in practice to evaluate the algorithm. We provide the dataset split script:
```shell
python tools/misc/split_coco.py
```
By default, the script will split `train2017` according to the labeled data ratio 1%, 2%, 5% and 10%, and each split will be randomly repeated 5 times for cross-validation. The generated semi-supervised annotation file name format is as below:
- the name format of labeled dataset: `instances_train2017.{fold}@{percent}.json`
- the name format of unlabeled dataset: `instances_train2017.{fold}@{percent}-unlabeled.json`
Here, `fold` is used for cross-validation, and `percent` represents the ratio of labeled data. The directory structure of the divided dataset is as below:
```plain
mmdetection
├── data
│ ├── coco
│ │ ├── annotations
│ │ │ ├── image_info_unlabeled2017.json
│ │ │ ├── instances_train2017.json
│ │ │ ├── instances_val2017.json
│ │ ├── semi_anns
│ │ │ ├── instances_train2017.1@1.json
│ │ │ ├── instances_train2017.1@1-unlabeled.json
│ │ │ ├── instances_train2017.1@2.json
│ │ │ ├── instances_train2017.1@2-unlabeled.json
│ │ │ ├── instances_train2017.1@5.json
│ │ │ ├── instances_train2017.1@5-unlabeled.json
│ │ │ ├── instances_train2017.1@10.json
│ │ │ ├── instances_train2017.1@10-unlabeled.json
│ │ │ ├── instances_train2017.2@1.json
│ │ │ ├── instances_train2017.2@1-unlabeled.json
│ │ ├── test2017
│ │ ├── train2017
│ │ ├── unlabeled2017
│ │ ├── val2017
```
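Given a fold and a labeled-data percentage, the corresponding annotation file names follow directly from the format above (a minimal sketch):
```python
# a minimal sketch: construct the expected annotation file names for one split
fold, percent = 1, 10
labeled_ann = f'instances_train2017.{fold}@{percent}.json'
unlabeled_ann = f'instances_train2017.{fold}@{percent}-unlabeled.json'
print(labeled_ann, unlabeled_ann)
# instances_train2017.1@10.json instances_train2017.1@10-unlabeled.json
```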
(2) Use `train2017` as the labeled dataset and `unlabeled2017` as the unlabeled dataset. Since `image_info_unlabeled2017.json` does not contain `categories` information, the `CocoDataset` cannot be initialized, so you need to write the `categories` of `instances_train2017.json` into `image_info_unlabeled2017.json` and save it as `instances_unlabeled2017.json`. The relevant script is as below:
```python
from mmengine.fileio import load, dump
anns_train = load('instances_train2017.json')
anns_unlabeled = load('image_info_unlabeled2017.json')
anns_unlabeled['categories'] = anns_train['categories']
dump(anns_unlabeled, 'instances_unlabeled2017.json')
```
The processed dataset directory is as below:
```plain
mmdetection
├── data
│ ├── coco
│ │ ├── annotations
│ │ │ ├── image_info_unlabeled2017.json
│ │ │ ├── instances_train2017.json
│ │ │ ├── instances_unlabeled2017.json
│ │ │ ├── instances_val2017.json
│ │ ├── test2017
│ │ ├── train2017
│ │ ├── unlabeled2017
│ │ ├── val2017
```
## Configure multi-branch pipeline
There are two main approaches to semi-supervised learning,
[consistency regularization](https://research.nvidia.com/sites/default/files/publications/laine2017iclr_paper.pdf)
and [pseudo label](https://www.researchgate.net/profile/Dong-Hyun-Lee/publication/280581078_Pseudo-Label_The_Simple_and_Efficient_Semi-Supervised_Learning_Method_for_Deep_Neural_Networks/links/55bc4ada08ae092e9660b776/Pseudo-Label-The-Simple-and-Efficient-Semi-Supervised-Learning-Method-for-Deep-Neural-Networks.pdf).
Consistency regularization often requires some careful design, while pseudo labels have a simpler form and are easier to extend to downstream tasks.
We adopt a teacher-student joint training semi-supervised object detection framework based on pseudo labels, so labeled data and unlabeled data need different data pipelines:
(1) Pipeline for labeled data:
```python
# pipeline used to augment labeled data,
# which will be sent to student model for supervised training.
sup_pipeline = [
dict(type='LoadImageFromFile', backend_args=backend_args),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='RandomResize', scale=scale, keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(type='RandAugment', aug_space=color_space, aug_num=1),
dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
dict(type='MultiBranch', sup=dict(type='PackDetInputs'))
]
```
(2) Pipeline for unlabeled data:
```python
# pipeline used to augment unlabeled data weakly,
# which will be sent to teacher model for predicting pseudo instances.
weak_pipeline = [
dict(type='RandomResize', scale=scale, keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor', 'flip', 'flip_direction',
'homography_matrix')),
]
# pipeline used to augment unlabeled data strongly,
# which will be sent to student model for unsupervised training.
strong_pipeline = [
dict(type='RandomResize', scale=scale, keep_ratio=True),
dict(type='RandomFlip', prob=0.5),
dict(
type='RandomOrder',
transforms=[
dict(type='RandAugment', aug_space=color_space, aug_num=1),
dict(type='RandAugment', aug_space=geometric, aug_num=1),
]),
dict(type='RandomErasing', n_patches=(1, 5), ratio=(0, 0.2)),
dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor', 'flip', 'flip_direction',
'homography_matrix')),
]
# pipeline used to augment unlabeled data into different views
unsup_pipeline = [
dict(type='LoadImageFromFile', backend_args=backend_args),
dict(type='LoadEmptyAnnotations'),
dict(
type='MultiBranch',
unsup_teacher=weak_pipeline,
unsup_student=strong_pipeline,
)
]
```
## Configure semi-supervised dataloader
(1) Build a semi-supervised dataset. Use `ConcatDataset` to concatenate labeled and unlabeled datasets.
```python
labeled_dataset = dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=sup_pipeline)
unlabeled_dataset = dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_unlabeled2017.json',
data_prefix=dict(img='unlabeled2017/'),
filter_cfg=dict(filter_empty_gt=False),
pipeline=unsup_pipeline)
train_dataloader = dict(
batch_size=batch_size,
num_workers=num_workers,
persistent_workers=True,
sampler=dict(
type='GroupMultiSourceSampler',
batch_size=batch_size,
source_ratio=[1, 4]),
dataset=dict(
type='ConcatDataset', datasets=[labeled_dataset, unlabeled_dataset]))
```
(2) Use a multi-source dataset sampler. Use `GroupMultiSourceSampler` to sample data and form batches from `labeled_dataset` and `unlabeled_dataset`; `source_ratio` controls the proportion of labeled data and unlabeled data in the batch. `GroupMultiSourceSampler` also ensures that the images in the same batch have similar aspect ratios. If you don't need to guarantee the aspect ratios of the images in a batch, you can use `MultiSourceSampler`. The sampling diagram of `GroupMultiSourceSampler` is as below:
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/186149261-8cf28e92-de5c-4c8c-96e1-13558b2e27f7.jpg"/>
</div>
`sup=1000` indicates that the labeled dataset contains 1000 images, `sup_h=200` indicates that 200 of those images have an aspect ratio greater than or equal to 1, and `sup_w=800` indicates that 800 of them have an aspect ratio less than 1.
`unsup=9000` indicates that the unlabeled dataset contains 9000 images, `unsup_h=1800` indicates that 1800 of those images have an aspect ratio greater than or equal to 1, and `unsup_w=7200` indicates that 7200 of them have an aspect ratio less than 1.
`GroupMultiSourceSampler` randomly selects a group according to the overall aspect ratio distribution of the images in the labeled and unlabeled datasets, then samples data from the two datasets according to `source_ratio` to form batches, so the labeled and unlabeled datasets have different numbers of repetitions.
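For example, a sketch of the same dataloader with `MultiSourceSampler` swapped in, keeping all other fields as in the config above:
```python
# a sketch: MultiSourceSampler samples by source_ratio without aspect-ratio grouping
train_dataloader = dict(
    batch_size=batch_size,
    num_workers=num_workers,
    persistent_workers=True,
    sampler=dict(
        type='MultiSourceSampler',
        batch_size=batch_size,
        source_ratio=[1, 4]),
    dataset=dict(
        type='ConcatDataset', datasets=[labeled_dataset, unlabeled_dataset]))
```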
## Configure semi-supervised model
We choose `Faster R-CNN` as `detector` for semi-supervised training. Take the semi-supervised object detection algorithm `SoftTeacher` as an example,
the model configuration can be inherited from `_base_/models/faster-rcnn_r50_fpn.py`, replacing the backbone network of the detector with `caffe` style.
Note that, unlike the supervised training configs, `Faster R-CNN` serves as the `detector` attribute of `model` rather than being `model` itself.
In addition, `data_preprocessor` needs to be set to `MultiBranchDataPreprocessor`, which is used to pad and normalize images from different pipelines.
Finally, parameters required for semi-supervised training and testing can be configured via `semi_train_cfg` and `semi_test_cfg`.
```python
_base_ = [
'../_base_/models/faster-rcnn_r50_fpn.py', '../_base_/default_runtime.py',
'../_base_/datasets/semi_coco_detection.py'
]
detector = _base_.model
detector.data_preprocessor = dict(
type='DetDataPreprocessor',
mean=[103.530, 116.280, 123.675],
std=[1.0, 1.0, 1.0],
bgr_to_rgb=False,
pad_size_divisor=32)
detector.backbone = dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
norm_eval=True,
style='caffe',
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_caffe'))
model = dict(
_delete_=True,
type='SoftTeacher',
detector=detector,
data_preprocessor=dict(
type='MultiBranchDataPreprocessor',
data_preprocessor=detector.data_preprocessor),
semi_train_cfg=dict(
freeze_teacher=True,
sup_weight=1.0,
unsup_weight=4.0,
pseudo_label_initial_score_thr=0.5,
rpn_pseudo_thr=0.9,
cls_pseudo_thr=0.9,
reg_pseudo_thr=0.02,
jitter_times=10,
jitter_scale=0.06,
min_pseudo_bbox_wh=(1e-2, 1e-2)),
semi_test_cfg=dict(predict_on='teacher'))
```
In addition, we also support semi-supervised training for other detection models, such as `RetinaNet` and `Cascade R-CNN`. Since `SoftTeacher` only supports `Faster R-CNN`, it needs to be replaced with `SemiBaseDetector`; an example is as below:
```python
_base_ = [
'../_base_/models/retinanet_r50_fpn.py', '../_base_/default_runtime.py',
'../_base_/datasets/semi_coco_detection.py'
]
detector = _base_.model
model = dict(
_delete_=True,
type='SemiBaseDetector',
detector=detector,
data_preprocessor=dict(
type='MultiBranchDataPreprocessor',
data_preprocessor=detector.data_preprocessor),
semi_train_cfg=dict(
freeze_teacher=True,
sup_weight=1.0,
unsup_weight=1.0,
cls_pseudo_thr=0.9,
min_pseudo_bbox_wh=(1e-2, 1e-2)),
semi_test_cfg=dict(predict_on='teacher'))
```
Following the semi-supervised training configuration of `SoftTeacher`, we change `batch_size` to 2 and `source_ratio` to `[1, 1]`. The experimental results of supervised and semi-supervised training of `RetinaNet`, `Faster R-CNN`, `Cascade R-CNN` and `SoftTeacher` on 10% of coco `train2017` are as below:
| Model | Detector | BackBone | Style | sup-0.1-coco mAP | semi-0.1-coco mAP |
| :--------------: | :-----------: | :------: | :---: | :--------------: | :---------------: |
| SemiBaseDetector | RetinaNet | R-50-FPN | caffe | 23.5 | 27.7 |
| SemiBaseDetector | Faster R-CNN | R-50-FPN | caffe | 26.7 | 28.4 |
| SemiBaseDetector | Cascade R-CNN | R-50-FPN | caffe | 28.0 | 29.7 |
| SoftTeacher | Faster R-CNN | R-50-FPN | caffe | 26.7 | 31.1 |
## Configure MeanTeacherHook
Usually, the teacher model is updated as the Exponential Moving Average (EMA) of the student model, so the teacher model is optimized along with the optimization of the student model. This can be achieved by configuring `custom_hooks`:
```python
custom_hooks = [dict(type='MeanTeacherHook')]
```
## Configure TeacherStudentValLoop
Since there are two models in the teacher-student joint training framework, we can replace `ValLoop` with `TeacherStudentValLoop` to test the accuracy of both models during the training process.
```python
val_cfg = dict(type='TeacherStudentValLoop')
```
# Use a single stage detector as RPN
Region proposal network (RPN) is a submodule in [Faster R-CNN](https://arxiv.org/abs/1506.01497), which generates proposals for the second stage of Faster R-CNN. Most two-stage detectors in MMDetection use [`RPNHead`](../../../mmdet/models/dense_heads/rpn_head.py) to generate proposals as RPN. However, any single-stage detector can serve as an RPN since their bounding box predictions can also be regarded as region proposals and thus be refined in the R-CNN. Therefore, MMDetection v3.0 supports that.
To illustrate the whole process, here we give an example of how to use an anchor-free single-stage model [FCOS](../../../configs/fcos/fcos_r50-caffe_fpn_gn-head_1x_coco.py) as an RPN in [Faster R-CNN](../../../configs/faster_rcnn/faster-rcnn_r50_fpn_fcos-rpn_1x_coco.py).
The outline of this tutorial is as below:
1. Use `FCOSHead` as an `RPNHead` in Faster R-CNN
2. Evaluate proposals
3. Train the customized Faster R-CNN with pre-trained FCOS
## Use `FCOSHead` as an `RPNHead` in Faster R-CNN
To set `FCOSHead` as an `RPNHead` in Faster R-CNN, we should create a new config file named `configs/faster_rcnn/faster-rcnn_r50_fpn_fcos-rpn_1x_coco.py`, and replace the setting of `rpn_head` with the setting of `bbox_head` in `configs/fcos/fcos_r50-caffe_fpn_gn-head_1x_coco.py`. Besides, we still use the neck setting of FCOS with strides of `[8, 16, 32, 64, 128]`, and update `featmap_strides` of `bbox_roi_extractor` to `[8, 16, 32, 64, 128]`. To prevent the loss from going to NaN, we apply warmup during the first 1000 iterations instead of the first 500, which means the learning rate increases more slowly. The config is as follows:
```python
_base_ = [
'../_base_/models/faster-rcnn_r50_fpn.py',
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
# copied from configs/fcos/fcos_r50-caffe_fpn_gn-head_1x_coco.py
neck=dict(
start_level=1,
add_extra_convs='on_output', # use P5
relu_before_extra_convs=True),
rpn_head=dict(
_delete_=True, # ignore the unused old settings
type='FCOSHead',
num_classes=1, # num_classes = 1 for rpn, if num_classes > 1, it will be set to 1 in TwoStageDetector automatically
in_channels=256,
stacked_convs=4,
feat_channels=256,
strides=[8, 16, 32, 64, 128],
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0),
loss_bbox=dict(type='IoULoss', loss_weight=1.0),
loss_centerness=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
roi_head=dict( # update featmap_strides due to the strides in neck
bbox_roi_extractor=dict(featmap_strides=[8, 16, 32, 64, 128])))
# learning rate
param_scheduler = [
dict(
type='LinearLR', start_factor=0.001, by_epoch=False, begin=0,
end=1000), # Slowly increase lr, otherwise loss becomes NAN
dict(
type='MultiStepLR',
begin=0,
end=12,
by_epoch=True,
milestones=[8, 11],
gamma=0.1)
]
```
Then, we could use the following command to train our customized model. For more training commands, please refer to [here](train.md).
```shell
# training with 8 GPUS
bash tools/dist_train.sh configs/faster_rcnn/faster-rcnn_r50_fpn_fcos-rpn_1x_coco.py \
8 \
--work-dir ./work_dirs/faster-rcnn_r50_fpn_fcos-rpn_1x_coco
```
## Evaluate proposals
The quality of proposals is of great importance to the performance of a detector; therefore, we also provide a way to evaluate proposals. Same as above, create a new config file named `configs/rpn/fcos-rpn_r50_fpn_1x_coco.py`, and replace the setting of `rpn_head` with the setting of `bbox_head` in `configs/fcos/fcos_r50-caffe_fpn_gn-head_1x_coco.py`.
```python
_base_ = [
'../_base_/models/rpn_r50_fpn.py', '../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
val_evaluator = dict(metric='proposal_fast')
test_evaluator = val_evaluator
model = dict(
# copied from configs/fcos/fcos_r50-caffe_fpn_gn-head_1x_coco.py
neck=dict(
start_level=1,
add_extra_convs='on_output', # use P5
relu_before_extra_convs=True),
rpn_head=dict(
_delete_=True, # ignore the unused old settings
type='FCOSHead',
num_classes=1, # num_classes = 1 for rpn, if num_classes > 1, it will be set to 1 in RPN automatically
in_channels=256,
stacked_convs=4,
feat_channels=256,
strides=[8, 16, 32, 64, 128],
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0),
loss_bbox=dict(type='IoULoss', loss_weight=1.0),
loss_centerness=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)))
```
Suppose we have the checkpoint `./work_dirs/faster-rcnn_r50_fpn_fcos-rpn_1x_coco/epoch_12.pth` after training, then we can evaluate the quality of proposals with the following command.
```shell
# testing with 8 GPUs
bash tools/dist_test.sh \
configs/rpn/fcos-rpn_r50_fpn_1x_coco.py \
./work_dirs/faster-rcnn_r50_fpn_fcos-rpn_1x_coco/epoch_12.pth \
8
```
## Train the customized Faster R-CNN with pre-trained FCOS
Pre-training not only speeds up convergence of training, but also improves the performance of the detector. Therefore, here we give an example to illustrate how to use a pre-trained FCOS as an RPN to accelerate training and improve the accuracy. Suppose we want to use `FCOSHead` as the RPN head in Faster R-CNN and train with the pre-trained [`fcos_r50-caffe_fpn_gn-head_1x_coco`](https://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco/fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth). The content of the config file named `configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_fcos-rpn_1x_coco.py` is as follows. Note that `fcos_r50-caffe_fpn_gn-head_1x_coco` uses a caffe version of ResNet50, so the pixel mean and std in `data_preprocessor` need to be updated accordingly.
```python
_base_ = [
'../_base_/models/faster-rcnn_r50_fpn.py',
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
data_preprocessor=dict(
mean=[103.530, 116.280, 123.675],
std=[1.0, 1.0, 1.0],
bgr_to_rgb=False),
backbone=dict(
norm_cfg=dict(type='BN', requires_grad=False),
style='caffe',
init_cfg=None), # the checkpoint in ``load_from`` contains the weights of backbone
neck=dict(
start_level=1,
add_extra_convs='on_output', # use P5
relu_before_extra_convs=True),
rpn_head=dict(
_delete_=True, # ignore the unused old settings
type='FCOSHead',
num_classes=1, # num_classes = 1 for rpn, if num_classes > 1, it will be set to 1 in TwoStageDetector automatically
in_channels=256,
stacked_convs=4,
feat_channels=256,
strides=[8, 16, 32, 64, 128],
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0),
loss_bbox=dict(type='IoULoss', loss_weight=1.0),
loss_centerness=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
roi_head=dict( # update featmap_strides due to the strides in neck
bbox_roi_extractor=dict(featmap_strides=[8, 16, 32, 64, 128])))
load_from = 'https://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco/fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth'
```
The command for training is as below.
```shell
bash tools/dist_train.sh \
configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_fcos-rpn_1x_coco.py \
8 \
--work-dir ./work_dirs/faster-rcnn_r50-caffe_fpn_fcos-rpn_1x_coco
```
# Test existing models on standard datasets
To evaluate a model's accuracy, one usually tests the model on some standard datasets. Please refer to the [dataset prepare guide](dataset_prepare.md) to prepare the dataset.
This section will show how to test existing models on supported datasets.
## Test existing models
We provide testing scripts for evaluating an existing model on the whole dataset (COCO, PASCAL VOC, Cityscapes, etc.).
The following testing environments are supported:
- single GPU
- CPU
- single node multiple GPUs
- multiple nodes
Choose the proper script to perform testing depending on the testing environment.
```shell
# Single-gpu testing
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--out ${RESULT_FILE}] \
[--show]
# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--out ${RESULT_FILE}] \
[--show]
# Multi-gpu testing
bash tools/dist_test.sh \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
${GPU_NUM} \
[--out ${RESULT_FILE}]
```
`tools/dist_test.sh` also supports multi-node testing, but relies on PyTorch's [launch utility](https://pytorch.org/docs/stable/distributed.html#launch-utility).
Optional arguments:
- `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file (see the loading sketch after this list).
- `--show`: If specified, detection results will be plotted on the images and shown in a new window. It is only applicable to single GPU testing and used for debugging and visualization. Please make sure that GUI is available in your environment. Otherwise, you may encounter an error like `cannot connect to X server`.
- `--show-dir`: If specified, detection results will be plotted on the images and saved to the specified directory. It is only applicable to single GPU testing and used for debugging and visualization. You do NOT need a GUI available in your environment for using this option.
- `--work-dir`: If specified, detection results containing evaluation metrics will be saved to the specified directory.
- `--cfg-options`: If specified, the key-value pair config options will be merged into the config file.
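As an illustration, the pickle file written by `--out` can be inspected offline. A minimal sketch, assuming `results.pkl` exists and contains one entry per test image (the exact structure depends on the model):
```python
import pickle

# load the results dumped by `tools/test.py ... --out results.pkl`
with open('results.pkl', 'rb') as f:
    results = pickle.load(f)

print(type(results), len(results))  # typically a list with one entry per image
```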
## Examples
Assuming that you have already downloaded the checkpoints to the directory `checkpoints/`.
1. Test RTMDet and visualize the results. Press any key for the next image.
Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet).
```shell
python tools/test.py \
configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
--show
```
2. Test RTMDet and save the painted images for future visualization.
Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet).
```shell
python tools/test.py \
configs/rtmdet/rtmdet_l_8xb32-300e_coco.py \
checkpoints/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth \
    --show-dir rtmdet_l_8xb32-300e_coco_results
```
3. Test Faster R-CNN on PASCAL VOC (without saving the test results).
Config and checkpoint files are available [here](../../../configs/pascal_voc).
```shell
python tools/test.py \
configs/pascal_voc/faster-rcnn_r50_fpn_1x_voc0712.py \
checkpoints/faster_rcnn_r50_fpn_1x_voc0712_20200624-c9895d40.pth
```
4. Test Mask R-CNN with 8 GPUs, and evaluate.
Config and checkpoint files are available [here](../../../configs/mask_rcnn).
```shell
./tools/dist_test.sh \
   configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth \
8 \
--out results.pkl
```
5. Test Mask R-CNN with 8 GPUs, and evaluate the metric **class-wise**.
Config and checkpoint files are available [here](../../../configs/mask_rcnn).
```shell
./tools/dist_test.sh \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth \
8 \
--out results.pkl \
--cfg-options test_evaluator.classwise=True
```
6. Test Mask R-CNN on COCO test-dev with 8 GPUs, and generate JSON files for submitting to the official evaluation server.
Config and checkpoint files are available [here](../../../configs/mask_rcnn).
Replace the original `test_evaluator` and `test_dataloader` with the `test_evaluator` and `test_dataloader` provided in the comments of the [config](../../../configs/_base_/datasets/coco_instance.py) and run:
```shell
./tools/dist_test.sh \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth \
8
```
This command generates two JSON files `./work_dirs/coco_instance/test.bbox.json` and `./work_dirs/coco_instance/test.segm.json`.
7. Test Mask R-CNN on Cityscapes test with 8 GPUs, and generate txt and png files for submitting to the official evaluation server.
Config and checkpoint files are available [here](../../../configs/cityscapes).
Replace the original `test_evaluator` and `test_dataloader` with the `test_evaluator` and `test_dataloader` provided in the comments of the [config](../../../configs/_base_/datasets/cityscapes_instance.py) and run:
```shell
./tools/dist_test.sh \
configs/cityscapes/mask-rcnn_r50_fpn_1x_cityscapes.py \
checkpoints/mask_rcnn_r50_fpn_1x_cityscapes_20200227-afe51d5a.pth \
8
```
The generated png and txt would be under `./work_dirs/cityscapes_metric/` directory.
## Test without Ground Truth Annotations
MMDetection supports testing models without ground-truth annotations using `CocoDataset`. If your dataset is not in COCO format, please convert it to COCO format. For example, if your dataset is in VOC format, you can directly convert it to COCO format with the [script in tools](../../../tools/dataset_converters/pascal_voc.py). If your dataset is in Cityscapes format, you can directly convert it to COCO format with the [script in tools](../../../tools/dataset_converters/cityscapes.py). The rest of the formats can be converted using [this script](../../../tools/dataset_converters/images2coco.py).
```shell
python tools/dataset_converters/images2coco.py \
${IMG_PATH} \
${CLASSES} \
${OUT} \
[--exclude-extensions]
```
Arguments:
- `IMG_PATH`: The root path of images.
- `CLASSES`: The text file with a list of categories.
- `OUT`: The output annotation json file name. The save dir is in the same directory as `IMG_PATH`.
- `--exclude-extensions`: The suffix of images to be excluded, such as 'png' and 'bmp'.
After the conversion is complete, you need to replace the original `test_evaluator` and `test_dataloader` with the `test_evaluator` and `test_dataloader` provided in the comments of the [config](../../../configs/_base_/datasets/coco_detection.py) (find which dataset in `configs/_base_/datasets` the current config corresponds to) and run:
```shell
# Single-gpu testing
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--show]
# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--out ${RESULT_FILE}] \
[--show]
# Multi-gpu testing
bash tools/dist_test.sh \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
${GPU_NUM} \
[--show]
```
Assuming that the checkpoints in the [model zoo](https://mmdetection.readthedocs.io/en/latest/modelzoo_statistics.html) have been downloaded to the directory `checkpoints/`, we can test Mask R-CNN on COCO test-dev with 8 GPUs, and generate JSON files using the following command.
```sh
./tools/dist_test.sh \
configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
checkpoints/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth \
8
```
This command generates two JSON files `./work_dirs/coco_instance/test.bbox.json` and `./work_dirs/coco_instance/test.segm.json`.
## Batch Inference
MMDetection supports inference with a single image or batched images in test mode. By default, we use single-image inference; you can enable batch inference by modifying `batch_size` in the config of the test dataloader. You can do that by modifying the config as below.
```python
test_dataloader = dict(batch_size=2)
```
Or you can set it through `--cfg-options` as `--cfg-options test_dataloader.batch_size=2`.
## Test Time Augmentation (TTA)
Test time augmentation (TTA) is a data augmentation strategy used during the test phase. It applies different augmentations, such as flipping and scaling, to the same image for model inference, and then merges the predictions of each augmented image to obtain more accurate predictions. To make it easier for users to use TTA, MMEngine provides [BaseTTAModel](https://mmengine.readthedocs.io/en/latest/api/generated/mmengine.model.BaseTTAModel.html#mmengine.model.BaseTTAModel) class, which allows users to implement different TTA strategies by simply extending the BaseTTAModel class according to their needs.
In MMDetection, we provide the [DetTTAModel](../../../mmdet/models/test_time_augs/det_tta.py) class, which inherits from BaseTTAModel.
### Use case
Using TTA requires two steps. First, you need to add `tta_model` and `tta_pipeline` in the configuration file:
```python
tta_model = dict(
type='DetTTAModel',
tta_cfg=dict(nms=dict(
type='nms',
iou_threshold=0.5),
max_per_img=100))
tta_pipeline = [
dict(type='LoadImageFromFile',
backend_args=None),
dict(
type='TestTimeAug',
transforms=[[
dict(type='Resize', scale=(1333, 800), keep_ratio=True)
], [ # It uses 2 flipping transformations (flipping and not flipping).
dict(type='RandomFlip', prob=1.),
dict(type='RandomFlip', prob=0.)
], [
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape',
'img_shape', 'scale_factor', 'flip',
'flip_direction'))
]])]
```
Second, set `--tta` when running the test scripts as examples below:
```shell
# Single-gpu testing
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--tta]
# CPU: disable GPUs and run single-gpu testing script
export CUDA_VISIBLE_DEVICES=-1
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
[--out ${RESULT_FILE}] \
[--tta]
# Multi-gpu testing
bash tools/dist_test.sh \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
${GPU_NUM} \
[--tta]
```
You can also modify the TTA config by yourself, such as adding scaling enhancement:
```python
tta_model = dict(
type='DetTTAModel',
tta_cfg=dict(nms=dict(
type='nms',
iou_threshold=0.5),
max_per_img=100))
img_scales = [(1333, 800), (666, 400), (2000, 1200)]
tta_pipeline = [
dict(type='LoadImageFromFile',
backend_args=None),
dict(
type='TestTimeAug',
transforms=[[
dict(type='Resize', scale=s, keep_ratio=True) for s in img_scales
], [
dict(type='RandomFlip', prob=1.),
dict(type='RandomFlip', prob=0.)
], [
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape',
'img_shape', 'scale_factor', 'flip',
'flip_direction'))
]])]
```
The above data augmentation pipeline first performs 3 multi-scale transformations on the image, followed by 2 flipping transformations (flipping and not flipping), yielding 6 augmented views in total. Finally, each view is packed into the final result using `PackDetInputs`.
Here are more TTA use cases for your reference:
- [RetinaNet](../../../configs/retinanet/retinanet_tta.py)
- [CenterNet](../../../configs/centernet/centernet_tta.py)
- [YOLOX](../../../configs/yolox/yolox_tta.py)
- [RTMDet](../../../configs/rtmdet/rtmdet_tta.py)
For more advanced usage and the data flow of TTA, please refer to [MMEngine](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/test_time_augmentation.html#data-flow). We will support instance segmentation TTA later.
# Test Results Submission
## Panoptic segmentation test results submission
The following sections introduce how to produce the prediction results of panoptic segmentation models on the COCO test-dev set and submit the predictions to [COCO evaluation server](https://competitions.codalab.org/competitions/19507).
### Prerequisites
- Download [COCO test dataset images](http://images.cocodataset.org/zips/test2017.zip), [testing image info](http://images.cocodataset.org/annotations/image_info_test2017.zip), and [panoptic train/val annotations](http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip), then unzip them, put 'test2017' into `data/coco/`, and put the JSON files and annotation files into `data/coco/annotations/`.
```shell
# suppose data/coco/ does not exist
mkdir -pv data/coco/
# download test2017
wget -P data/coco/ http://images.cocodataset.org/zips/test2017.zip
wget -P data/coco/ http://images.cocodataset.org/annotations/image_info_test2017.zip
wget -P data/coco/ http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
# unzip them
unzip data/coco/test2017.zip -d data/coco/
unzip data/coco/image_info_test2017.zip -d data/coco/
unzip data/coco/panoptic_annotations_trainval2017.zip -d data/coco/
# remove zip files (optional)
rm -rf data/coco/test2017.zip data/coco/image_info_test2017.zip data/coco/panoptic_annotations_trainval2017.zip
```
- Run the following code to update the category information in the testing image info. Since the attribute `isthing` is missing in the category information of 'image_info_test-dev2017.json', we need to update it with the category information in 'panoptic_val2017.json'.
```shell
python tools/misc/gen_coco_panoptic_test_info.py data/coco/annotations
```
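Conceptually, the script copies the category list (which carries `isthing`) from 'panoptic_val2017.json' into the test image info. A rough sketch of the idea, not the actual tool:
```python
import json
import os.path as osp

ann_dir = 'data/coco/annotations'

# categories in the panoptic val annotations include the `isthing` attribute
with open(osp.join(ann_dir, 'panoptic_val2017.json')) as f:
    categories = json.load(f)['categories']

with open(osp.join(ann_dir, 'image_info_test-dev2017.json')) as f:
    test_info = json.load(f)
test_info['categories'] = categories

with open(osp.join(ann_dir, 'panoptic_image_info_test-dev2017.json'), 'w') as f:
    json.dump(test_info, f)
```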
After completing the above preparations, your directory structure of `data` should be like this:
```text
data
`-- coco
|-- annotations
| |-- image_info_test-dev2017.json
| |-- image_info_test2017.json
| |-- panoptic_image_info_test-dev2017.json
| |-- panoptic_train2017.json
| |-- panoptic_train2017.zip
| |-- panoptic_val2017.json
| `-- panoptic_val2017.zip
`-- test2017
```
### Inference on coco test-dev
To do inference on coco test-dev, we should update the settings of `test_dataloader` and `test_evaluator` first. There are two ways to do this: 1. update them in the config file; 2. update them in the command line.
#### Update them in config file
The relevant settings are provided at the end of `configs/_base_/datasets/coco_panoptic.py`, as below.
```python
test_dataloader = dict(
batch_size=1,
num_workers=1,
persistent_workers=True,
drop_last=False,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/panoptic_image_info_test-dev2017.json',
data_prefix=dict(img='test2017/'),
test_mode=True,
pipeline=test_pipeline))
test_evaluator = dict(
type='CocoPanopticMetric',
format_only=True,
ann_file=data_root + 'annotations/panoptic_image_info_test-dev2017.json',
outfile_prefix='./work_dirs/coco_panoptic/test')
```
Either of the following ways can be used to update the settings for inference on the coco test-dev set.
Case 1: Directly uncomment the setting in `configs/_base_/datasets/coco_panoptic.py`.
Case 2: Copy the following setting to the config file you used now.
```python
test_dataloader = dict(
dataset=dict(
ann_file='annotations/panoptic_image_info_test-dev2017.json',
data_prefix=dict(img='test2017/', _delete_=True)))
test_evaluator = dict(
format_only=True,
ann_file=data_root + 'annotations/panoptic_image_info_test-dev2017.json',
outfile_prefix='./work_dirs/coco_panoptic/test')
```
Then infer on the coco test-dev set with the following command.
```shell
python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE}
```
#### Update them in command line
The commands to update the related settings and run inference on coco test-dev are as below.
```shell
# test with single gpu
CUDA_VISIBLE_DEVICES=0 python tools/test.py \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
--cfg-options \
test_dataloader.dataset.ann_file=annotations/panoptic_image_info_test-dev2017.json \
test_dataloader.dataset.data_prefix.img=test2017 \
test_dataloader.dataset.data_prefix._delete_=True \
test_evaluator.format_only=True \
test_evaluator.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json \
test_evaluator.outfile_prefix=${WORK_DIR}/results
# test with four gpus
CUDA_VISIBLE_DEVICES=0,1,3,4 bash tools/dist_test.sh \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
4 \
--cfg-options \
test_dataloader.dataset.ann_file=annotations/panoptic_image_info_test-dev2017.json \
test_dataloader.dataset.data_prefix.img=test2017 \
test_dataloader.dataset.data_prefix._delete_=True \
test_evaluator.format_only=True \
test_evaluator.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json \
test_evaluator.outfile_prefix=${WORK_DIR}/results
# test with slurm
GPUS=8 tools/slurm_test.sh \
${PARTITION} \
${JOB_NAME} \
${CONFIG_FILE} \
${CHECKPOINT_FILE} \
--cfg-options \
test_dataloader.dataset.ann_file=annotations/panoptic_image_info_test-dev2017.json \
test_dataloader.dataset.data_prefix.img=test2017 \
test_dataloader.dataset.data_prefix._delete_=True \
test_evaluator.format_only=True \
test_evaluator.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json \
test_evaluator.outfile_prefix=${WORK_DIR}/results
```
**Example:**
Suppose we perform inference on `test2017` using the pretrained MaskFormer with a ResNet-50 backbone.
```shell
# test with single gpu
CUDA_VISIBLE_DEVICES=0 python tools/test.py \
configs/maskformer/maskformer_r50_mstrain_16x1_75e_coco.py \
checkpoints/maskformer_r50_mstrain_16x1_75e_coco_20220221_141956-bc2699cb.pth \
--cfg-options \
test_dataloader.dataset.ann_file=annotations/panoptic_image_info_test-dev2017.json \
test_dataloader.dataset.data_prefix.img=test2017 \
test_dataloader.dataset.data_prefix._delete_=True \
test_evaluator.format_only=True \
test_evaluator.ann_file=data/coco/annotations/panoptic_image_info_test-dev2017.json \
test_evaluator.outfile_prefix=work_dirs/maskformer/results
```
### Rename files and zip results
After inference, the panoptic segmentation results (a JSON file and a directory where the masks are stored) will be in `WORK_DIR`. We should rename them according to the naming convention described on [COCO's Website](https://cocodataset.org/#upload). Finally, we need to compress the JSON file and the directory where the masks are stored into a zip file, and rename the zip file according to the naming convention. Note that the zip file should **directly** contain the above two files.
The commands to rename files and zip results:
```shell
# In WORK_DIR, we have panoptic segmentation results: 'panoptic' and 'results.panoptic.json'.
cd ${WORK_DIR}
# replace '[algorithm_name]' with the name of algorithm you used.
mv ./panoptic ./panoptic_test-dev2017_[algorithm_name]_results
mv ./results.panoptic.json ./panoptic_test-dev2017_[algorithm_name]_results.json
zip panoptic_test-dev2017_[algorithm_name]_results.zip -ur panoptic_test-dev2017_[algorithm_name]_results panoptic_test-dev2017_[algorithm_name]_results.json
```
**We provide lots of useful tools under the `tools/` directory.**
## MOT Test-time Parameter Search
`tools/analysis_tools/mot/mot_param_search.py` can search the parameters of the `tracker` in MOT models.
It is used in the same manner as `tools/test.py` but **differs** in the configs.
Here is an example that shows how to modify the configs:
1. Define the desired evaluation metrics to record.
For example, you can define the `evaluator` as
```python
test_evaluator=dict(type='MOTChallengeMetrics', metric=['HOTA', 'CLEAR', 'Identity'])
```
Of course, you can also customize the content of `metric` in `test_evaluator`. You are free to choose one or more of `['HOTA', 'CLEAR', 'Identity']`.
2. Define the parameters and the values to search.
Assume you have a tracker like
```python
model=dict(
tracker=dict(
type='BaseTracker',
obj_score_thr=0.5,
match_iou_thr=0.5
)
)
```
If you want to search the parameters of the tracker, just change the values to lists as follows
```python
model=dict(
tracker=dict(
type='BaseTracker',
obj_score_thr=[0.4, 0.5, 0.6],
match_iou_thr=[0.4, 0.5, 0.6, 0.7]
)
)
```
Then the script will test all 3 × 4 = 12 parameter combinations and log the results.
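The search is conceptually a Cartesian product over the listed values, as in this minimal sketch:
```python
from itertools import product

# the value lists from the tracker config above
obj_score_thrs = [0.4, 0.5, 0.6]
match_iou_thrs = [0.4, 0.5, 0.6, 0.7]

# 3 x 4 = 12 combinations are tested and logged
for obj_thr, iou_thr in product(obj_score_thrs, match_iou_thrs):
    print(f'obj_score_thr={obj_thr}, match_iou_thr={iou_thr}')
```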
## MOT Error Visualize
`tools/analysis_tools/mot/mot_error_visualize.py` can visualize errors for multiple object tracking.
This script needs the result of inference. By default, the **red** bounding box denotes a false positive, the **yellow** bounding box denotes a false negative, and the **blue** bounding box denotes an ID switch.
```shell
python tools/analysis_tools/mot/mot_error_visualize.py \
${CONFIG_FILE}\
--input ${INPUT} \
--result-dir ${RESULT_DIR} \
[--output-dir ${OUTPUT}] \
[--fps ${FPS}] \
[--show] \
[--backend ${BACKEND}]
```
The `RESULT_DIR` contains the inference results of all the videos; each inference result is a `txt` file.
Optional arguments:
- `OUTPUT`: Output of the visualized demo. If not specified, `--show` must be set to show the video on the fly.
- `FPS`: FPS of the output video.
- `--show`: Whether to show the video on the fly.
- `BACKEND`: The backend to visualize the boxes. Options are `cv2` and `plt`.
## Browse dataset
`tools/analysis_tools/mot/browse_dataset.py` can visualize the training dataset to check whether the dataset configuration is correct.
**Examples:**
```shell
python tools/analysis_tools/mot/browse_dataset.py ${CONFIG_FILE} [--show-interval ${SHOW_INTERVAL}]
```
Optional arguments:
- `SHOW_INTERVAL`: The interval (in seconds) between displayed images.
- `--show`: Whether to show the images on the fly.
# Learn about Configs
We use python files as our config system. You can find all the provided configs under `$MMDETECTION/configs`.
We incorporate modular and inheritance design into our config system,
which is convenient to conduct various experiments.
If you wish to inspect the config file,
you may run `python tools/misc/print_config.py /PATH/TO/CONFIG` to see the complete config.
## A brief description of a complete config
A complete config usually contains the following primary fields:
- `model`: the basic config of the model, which may contain `data_preprocessor`, modules (e.g., `detector`, `motion`), `train_cfg`, `test_cfg`, etc.
- `train_dataloader`: the config of training dataloader, which usually contains `batch_size`, `num_workers`, `sampler`, `dataset`, etc.
- `val_dataloader`: the config of validation dataloader, which is similar with `train_dataloader`.
- `test_dataloader`: the config of testing dataloader, which is similar with `train_dataloader`.
- `val_evaluator`: the config of the validation evaluator. For example, `type='MOTChallengeMetrics'` for the MOT task on the MOTChallenge benchmarks.
- `test_evaluator`: the config of testing evaluator, which is similar with `val_evaluator`.
- `train_cfg`: the config of training loop. For example, `type='EpochBasedTrainLoop'`.
- `val_cfg`: the config of validation loop. For example, `type='VideoValLoop'`.
- `test_cfg`: the config of testing loop. For example, `type='VideoTestLoop'`.
- `default_hooks`: the config of default hooks, which may include hooks for timer, logger, param_scheduler, checkpoint, sampler_seed, visualization, etc.
- `vis_backends`: the config of visualization backends, which uses `type='LocalVisBackend'` as default.
- `visualizer`: the config of visualizer. `type='TrackLocalVisualizer'` for MOT tasks.
- `param_scheduler`: the config of parameter scheduler, which usually sets the learning rate scheduler.
- `optim_wrapper`: the config of optimizer wrapper, which contains optimization-related information, for example optimizer, gradient clipping, etc.
- `load_from`: load models as a pre-trained model from a given path.
- `resume`: If `True`, resume checkpoints from `load_from`, and the training will be resumed from the epoch when the checkpoint is saved.
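For example, these two fields usually sit at the bottom of a config file; a minimal sketch with a hypothetical checkpoint path:
```python
# load pre-trained weights only (fine-tuning)
load_from = 'checkpoints/qdtrack_mot17_pretrained.pth'  # hypothetical path
# set to True to also restore the optimizer state and resume from the saved epoch
resume = False
```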
## Modify config through script arguments
When submitting jobs using `tools/train.py` or `tools/test_tracking.py`,
you may specify `--cfg-options` to in-place modify the config.
We present several examples as follows.
For more details, please refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/config.md).
- **Update config keys of dict chains.**
The config options can be specified following the order of the dict keys in the original config.
For example, `--cfg-options model.detector.backbone.norm_eval=False` changes all BN modules in the model backbone to train mode.
- **Update keys inside a list of configs.**
Some config dicts are composed as a list in your config.
For example, the testing pipeline `test_dataloader.dataset.pipeline` is normally a list e.g. `[dict(type='LoadImageFromFile'), ...]`.
If you want to change `LoadImageFromFile` to `LoadImageFromWebcam` in the pipeline,
you may specify `--cfg-options test_dataloader.dataset.pipeline.0.type=LoadImageFromWebcam`.
- **Update values of list/tuples.**
Sometimes the value to be updated is a list or a tuple.
For example, you can change the key `mean` of `data_preprocessor` by specifying `--cfg-options model.data_preprocessor.mean=[0,0,0]`.
Note that **NO** white space is allowed inside the specified value.
## Config File Structure
There are 3 basic component types under `configs/_base_`, i.e., dataset, model and default_runtime.
Many methods, such as SORT and DeepSORT, can be easily constructed with one component of each type.
The configs that are composed by components from `_base_` are called *primitive*.
For all configs under the same folder, it is recommended to have only **one** *primitive* config.
All other configs should inherit from the *primitive* config.
In this way, the maximum inheritance level is 3.
For easy understanding, we recommend contributors inherit from existing methods.
For example, if some modification is made based on Faster R-CNN,
users may first inherit the basic Faster R-CNN structure
by specifying `_base_ = ../_base_/models/faster-rcnn_r50-dc5.py`,
then modify the necessary fields in the config files.
If you are building an entirely new method that does not share the structure with any of the existing methods,
you may create a folder `method_name` under `configs`.
Please refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/config.md) for detailed documentation.
## Config Name Style
We follow the below style to name config files. Contributors are advised to follow the same style.
```shell
{method}_{module}_{train_cfg}_{train_data}_{test_data}
```
- `{method}`: method name, like `sort`.
- `{module}`: basic modules of the method, like `faster-rcnn_r50_fpn`.
- `{train_cfg}`: training config which usually contains batch size, epochs, etc, like `8xb4-80e`.
- `{train_data}`: training data, like `mot17halftrain`.
- `{test_data}`: testing data, like `test-mot17halfval`.
## FAQ
**Ignore some fields in the base configs**
Sometimes, you may set `_delete_=True` to ignore some of the fields in the base configs.
You may refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/config.md) for simple illustration.
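As a short illustration (the base file name is hypothetical), `_delete_=True` discards every key of that dict inherited from the base before applying the new ones:
```python
_base_ = './faster-rcnn_r50_fpn.py'  # hypothetical base config

model = dict(
    rpn_head=dict(
        _delete_=True,    # drop all rpn_head settings inherited from the base
        type='FCOSHead',  # then define the head from scratch
        num_classes=1,
        in_channels=256))
```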
## Tracking Data Structure Introduction
### Advantages and new features
In the mmdetection tracking task, we employ videos to organize the dataset and use
TrackDataSample to describe dataset info.
- Based on the video organization, we provide the transform `UniformRefFrameSample` to sample key frames and ref frames, and use `TransformBroadcaster` for clip training.
- TrackDataSample can be viewed as a wrapper of multiple DetDataSamples to some extent. It contains a property `video_data_samples`, which is a list of DetDataSamples, each of which corresponds to a single frame. In addition, its metainfo includes `key_frames_inds` and `ref_frames_inds` to support the clip training manner.
- Thanks to the video-based data organization, the entire video can be directly tested. This way is more concise and intuitive. We also provide an image_based test method, in case your GPU memory cannot fit the entire video.
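As a rough sketch of the structure described above (attribute names follow the description; exact APIs may differ across versions):
```python
# iterate over the per-frame DetDataSamples wrapped by a TrackDataSample
def print_frame_predictions(track_data_sample):
    for frame_idx, det_sample in enumerate(track_data_sample.video_data_samples):
        # each DetDataSample corresponds to a single frame of the video
        print(frame_idx, len(det_sample.pred_instances.bboxes))
```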
### TODO
- Some algorithms like StrongSORT and Mask2Former cannot support video_based testing, since these algorithms pose a challenge to GPU memory. We will optimize this problem in the future.
- For now, we do not support joint training on a video_based dataset like the MOT Challenge dataset and an image_based dataset like CrowdHuman for the algorithm QDTrack. We will optimize this problem in the future.
## Dataset Preparation
This page provides instructions for dataset preparation on existing benchmarks, including
- Multiple Object Tracking
- [MOT Challenge](https://motchallenge.net/)
- [CrowdHuman](https://www.crowdhuman.org/)
- Video Instance Segmentation
- [YouTube-VIS](https://youtube-vos.org/dataset/vis/)
### 1. Download Datasets
Please download the datasets from the official websites. It is recommended to symlink the root of the datasets to `$MMDETECTION/data`.
#### 1.1 Multiple Object Tracking
- For training and testing of the multiple object tracking task, one of the MOT Challenge datasets (e.g. MOT17, MOT20) is needed, and CrowdHuman can serve as a complementary dataset.
- For users in China, the following datasets can be downloaded from [OpenDataLab](https://opendatalab.com/) with high speed:
- [MOT17](https://opendatalab.com/MOT17/download)
- [MOT20](https://opendatalab.com/MOT20/download)
- [CrowdHuman](https://opendatalab.com/CrowdHuman/download)
#### 1.2 Video Instance Segmentation
- For training and testing of the video instance segmentation task, only one of the YouTube-VIS datasets (e.g. YouTube-VIS 2019, YouTube-VIS 2021) is needed.
- The YouTube-VIS 2019 dataset can be downloaded from [YouTubeVOS](https://codalab.lisn.upsaclay.fr/competitions/6064).
- The YouTube-VIS 2021 dataset can be downloaded from [YouTubeVOS](https://codalab.lisn.upsaclay.fr/competitions/7680).
#### 1.3 Data Structure
If your folder structure is different from the following, you may need to change the corresponding paths in config files.
```
mmdetection
├── mmdet
├── tools
├── configs
├── data
│ ├── coco
│ │ ├── train2017
│ │ ├── val2017
│ │ ├── test2017
│ │ ├── annotations
│ │
| ├── MOT15/MOT16/MOT17/MOT20
| | ├── train
| | | ├── MOT17-02-DPM
| | | | ├── det
| │ │ │ ├── gt
| │ │ │ ├── img1
| │ │ │ ├── seqinfo.ini
│ │ │ ├── ......
| | ├── test
| | | ├── MOT17-01-DPM
| | | | ├── det
| │ │ │ ├── img1
| │ │ │ ├── seqinfo.ini
│ │ │ ├── ......
│ │
│ ├── crowdhuman
│ │ ├── annotation_train.odgt
│ │ ├── annotation_val.odgt
│ │ ├── train
│ │ │ ├── Images
│ │ │ ├── CrowdHuman_train01.zip
│ │ │ ├── CrowdHuman_train02.zip
│ │ │ ├── CrowdHuman_train03.zip
│ │ ├── val
│ │ │ ├── Images
│ │ │ ├── CrowdHuman_val.zip
│ │
```
### 2. Convert Annotations
In this case, you need to convert the official annotations to coco style. We provide scripts and their usage is as follows:
```shell
# MOT17
# The processing of other MOT Challenge dataset is the same as MOT17
python ./tools/dataset_converters/mot2coco.py -i ./data/MOT17/ -o ./data/MOT17/annotations --split-train --convert-det
python ./tools/dataset_converters/mot2reid.py -i ./data/MOT17/ -o ./data/MOT17/reid --val-split 0.2 --vis-threshold 0.3
# CrowdHuman
python ./tools/dataset_converters/crowdhuman2coco.py -i ./data/crowdhuman -o ./data/crowdhuman/annotations
# YouTube-VIS 2019
python ./tools/dataset_converters/youtubevis2coco.py -i ./data/youtube_vis_2019 -o ./data/youtube_vis_2019/annotations --version 2019
# YouTube-VIS 2021
python ./tools/dataset_converters/youtubevis2coco.py -i ./data/youtube_vis_2021 -o ./data/youtube_vis_2021/annotations --version 2021
```
The folder structure will be as follows after you run these scripts:
```
mmdetection
├── mmdet
├── tools
├── configs
├── data
│ ├── coco
│ │ ├── train2017
│ │ ├── val2017
│ │ ├── test2017
│ │ ├── annotations
│ │
| ├── MOT15/MOT16/MOT17/MOT20
| | ├── train
| | | ├── MOT17-02-DPM
| | | | ├── det
| │ │ │ ├── gt
| │ │ │ ├── img1
| │ │ │ ├── seqinfo.ini
│ │ │ ├── ......
| | ├── test
| | | ├── MOT17-01-DPM
| | | | ├── det
| │ │ │ ├── img1
| │ │ │ ├── seqinfo.ini
│ │ │ ├── ......
| | ├── annotations
| | ├── reid
│ │ │ ├── imgs
│ │ │ ├── meta
│ │
│ ├── crowdhuman
│ │ ├── annotation_train.odgt
│ │ ├── annotation_val.odgt
│ │ ├── train
│ │ │ ├── Images
│ │ │ ├── CrowdHuman_train01.zip
│ │ │ ├── CrowdHuman_train02.zip
│ │ │ ├── CrowdHuman_train03.zip
│ │ ├── val
│ │ │ ├── Images
│ │ │ ├── CrowdHuman_val.zip
│ │ ├── annotations
│ │ │ ├── crowdhuman_train.json
│ │ │ ├── crowdhuman_val.json
│ │
│ ├── youtube_vis_2019
│ │ │── train
│ │ │ │── JPEGImages
│ │ │ │── ......
│ │ │── valid
│ │ │ │── JPEGImages
│ │ │ │── ......
│ │ │── test
│ │ │ │── JPEGImages
│ │ │ │── ......
│ │ │── train.json (the official annotation files)
│ │ │── valid.json (the official annotation files)
│ │ │── test.json (the official annotation files)
│ │ │── annotations (the converted annotation file)
│ │
│ ├── youtube_vis_2021
│ │ │── train
│ │ │ │── JPEGImages
│ │ │ │── instances.json (the official annotation files)
│ │ │ │── ......
│ │ │── valid
│ │ │ │── JPEGImages
│ │ │ │── instances.json (the official annotation files)
│ │ │ │── ......
│ │ │── test
│ │ │ │── JPEGImages
│ │ │ │── instances.json (the official annotation files)
│ │ │ │── ......
│ │ │── annotations (the converted annotation file)
```
#### The folder of annotations and reid in MOT15/MOT16/MOT17/MOT20
We take the MOT17 dataset as an example; the other datasets share a similar structure.
There are 8 annotation files in `data/MOT17/annotations`:
- `train_cocoformat.json`: JSON file containing the annotation information of the training set in the MOT17 dataset.
- `train_detections.pkl`: Pickle file containing the public detections of the training set in the MOT17 dataset.
- `test_cocoformat.json`: JSON file containing the annotation information of the testing set in the MOT17 dataset.
- `test_detections.pkl`: Pickle file containing the public detections of the testing set in the MOT17 dataset.
- `half-train_cocoformat.json`, `half-train_detections.pkl`, `half-val_cocoformat.json` and `half-val_detections.pkl` share similar meanings with `train_cocoformat.json` and `train_detections.pkl`. The `half` means we split each video in the training set into two halves: the first half of each video is denoted as the `half-train` set, and the second half as the `half-val` set.
The structure of `data/MOT17/reid` is as follows:
```
reid
├── imgs
│ ├── MOT17-02-FRCNN_000002
│ │ ├── 000000.jpg
│ │ ├── 000001.jpg
│ │ ├── ...
│ ├── MOT17-02-FRCNN_000003
│ │ ├── 000000.jpg
│ │ ├── 000001.jpg
│ │ ├── ...
├── meta
│ ├── train_80.txt
│ ├── val_20.txt
```
The `80` in `train_80.txt` means that the proportion of the training dataset to the whole ReID dataset is 80%, while the proportion of the validation dataset is 20%.
For training, we provide an annotation list `train_80.txt`. Each line of the list contains a filename and its corresponding ground-truth label. The format is as follows:
```
MOT17-05-FRCNN_000110/000018.jpg 0
MOT17-13-FRCNN_000146/000014.jpg 1
MOT17-05-FRCNN_000088/000004.jpg 2
MOT17-02-FRCNN_000009/000081.jpg 3
```
`MOT17-05-FRCNN_000110` denotes the 110th person in the `MOT17-05-FRCNN` video.
For validation, the annotation list `val_20.txt` follows the same format as above.
Images in `reid/imgs` are cropped from raw images in `MOT17/train` by the corresponding `gt.txt`. The value of ground-truth labels should fall in range `[0, num_classes - 1]`.
#### The folder of annotations in crowdhuman
There are 2 JSON files in `data/crowdhuman/annotations`:
- `crowdhuman_train.json`: JSON file containing the annotation information of the training set in the CrowdHuman dataset.
- `crowdhuman_val.json`: JSON file containing the annotation information of the validation set in the CrowdHuman dataset.
#### The folder of annotations in youtube_vis_2019/youtube_vis_2021
There are 3 JSON files in `data/youtube_vis_2019/annotations` or `data/youtube_vis_2021/annotations`:
- `youtube_vis_2019_train.json`/`youtube_vis_2021_train.json`: JSON file containing the annotation information of the training set in the youtube_vis_2019/youtube_vis_2021 dataset.
- `youtube_vis_2019_valid.json`/`youtube_vis_2021_valid.json`: JSON file containing the annotation information of the validation set in the youtube_vis_2019/youtube_vis_2021 dataset.
- `youtube_vis_2019_test.json`/`youtube_vis_2021_test.json`: JSON file containing the annotation information of the testing set in the youtube_vis_2019/youtube_vis_2021 dataset.
# Inference
We provide demo scripts to run inference on a given video or a folder that contains consecutive images. The source code is available [here](https://github.com/open-mmlab/mmdetection/tree/tracking/demo).
Note that if you use a folder as the input, the image names there must be **sortable**, which means we can re-order the images according to the numbers contained in the filenames. We currently only support reading images whose filenames end with `.jpg`, `.jpeg` and `.png`.
## Inference MOT models
This script can run inference on an input video or images with a multiple object tracking or video instance segmentation model.
```shell
python demo/mot_demo.py \
    ${INPUTS} \
${CONFIG_FILE} \
[--checkpoint ${CHECKPOINT_FILE}] \
[--detector ${DETECTOR_FILE}] \
[--reid ${REID_FILE}] \
[--score-thr ${SCORE_THR}] \
[--device ${DEVICE}] \
[--out ${OUTPUT}] \
[--show]
```
The `INPUTS` and `OUTPUT` support both _mp4 video_ format and the _folder_ format.
**Important:** For `DeepSORT`, `SORT` and `StrongSORT`, the weights of the `reid` and the `detector` need to be loaded separately. Therefore, we use `--detector` and `--reid` to load weights. Other algorithms such as `ByteTrack`, `OCSORT`, `QDTrack`, `MaskTrackRCNN` and `Mask2Former` use `--checkpoint` to load weights.
Optional arguments:
- `CHECKPOINT_FILE`: The checkpoint is optional.
- `DETECTOR_FILE`: The detector is optional.
- `REID_FILE`: The reid is optional.
- `SCORE_THR`: The threshold of score to filter bboxes.
- `DEVICE`: The device for inference. Options are `cpu` or `cuda:0`, etc.
- `OUTPUT`: Output of the visualized demo. If not specified, `--show` must be set to show the video on the fly.
- `--show`: Whether to show the video on the fly.
**Examples of running mot model:**
```shell
# Example 1: do not specify --checkpoint to use --detector
python demo/mot_demo.py \
demo/demo_mot.mp4 \
configs/sort/sort_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
--detector \
https://download.openmmlab.com/mmtracking/mot/faster_rcnn/faster-rcnn_r50_fpn_4e_mot17-half-64ee2ed4.pth \
--out mot.mp4
# Example 2: use --checkpoint
python demo/mot_demo.py \
demo/demo_mot.mp4 \
configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
--checkpoint https://download.openmmlab.com/mmtracking/mot/qdtrack/mot_dataset/qdtrack_faster-rcnn_r50_fpn_4e_mot17_20220315_145635-76f295ef.pth \
--out mot.mp4
```
# Learn to train and test
## Train
This section will show how to train existing models on supported datasets.
The following training environments are supported:
- CPU
- single GPU
- single node multiple GPUs
- multiple nodes
You can also manage jobs with Slurm.
Important:
- You can change the evaluation interval during training by modifying the `train_cfg` as
`train_cfg = dict(val_interval=10)`. That means evaluating the model every 10 epochs.
- The default learning rate in all config files is for 8 GPUs.
According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677),
you need to set the learning rate proportional to the batch size if you use a different number of GPUs or images per GPU,
e.g., `lr=0.01` for 8 GPUs * 1 img/gpu and `lr=0.04` for 16 GPUs * 2 imgs/gpu (see the sketch after this list).
- During training, log files and checkpoints will be saved to the working directory,
which is specified by CLI argument `--work-dir`. It uses `./work_dirs/CONFIG_NAME` as default.
- If you want the mixed precision training, simply specify CLI argument `--amp`.
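A hedged helper that reproduces the linear scaling arithmetic above:
```python
# linear scaling rule: lr grows proportionally with the total batch size
def scaled_lr(base_lr=0.01, base_gpus=8, base_imgs_per_gpu=1,
              gpus=16, imgs_per_gpu=2):
    base_batch = base_gpus * base_imgs_per_gpu
    return base_lr * (gpus * imgs_per_gpu) / base_batch

print(scaled_lr())  # 0.04 for 16 GPUs * 2 imgs/gpu
```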
#### 1. Train on CPU
By default, the model is put on the CUDA device.
Only if there are no CUDA devices will the model be put on the CPU.
So if you want to train the model on CPU, you need to `export CUDA_VISIBLE_DEVICES=-1` to disable GPU visibility first.
More details in [MMEngine](https://github.com/open-mmlab/mmengine/blob/ca282aee9e402104b644494ca491f73d93a9544f/mmengine/runner/runner.py#L849-L850).
```shell script
CUDA_VISIBLE_DEVICES=-1 python tools/train.py ${CONFIG_FILE} [optional arguments]
```
An example of training the MOT model QDTrack on CPU:
```shell script
CUDA_VISIBLE_DEVICES=-1 python tools/train.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py
```
#### 2. Train on single GPU
If you want to train the model on single GPU, you can directly use the `tools/train.py` as follows.
```shell script
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
You can use `export CUDA_VISIBLE_DEVICES=$GPU_ID` to select the GPU.
An example of training the MOT model QDTrack on single GPU:
```shell script
CUDA_VISIBLE_DEVICES=2 python tools/train.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py
```
#### 3. Train on single node multiple GPUs
We provide `tools/dist_train.sh` to launch training on multiple GPUs.
The basic usage is as follows.
```shell script
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
If you would like to launch multiple jobs on a single machine,
e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs,
you need to specify different ports (29500 by default) for each job to avoid communication conflict.
For example, you can set the port in commands as follows.
```shell script
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 ./tools/dist_train.sh ${CONFIG_FILE} 4
```
An example of training the MOT model QDTrack on single node multiple GPUs:
```shell script
bash ./tools/dist_train.sh configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py 8
```
#### 4. Train on multiple nodes
If you launch with multiple machines simply connected with ethernet, you can simply run the following commands:
On the first machine:
```shell script
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
```
On the second machine:
```shell script
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
```
Usually it is slow if you do not have high speed networking like InfiniBand.
#### 5. Train with Slurm
[Slurm](https://slurm.schedmd.com/) is a good job scheduling system for computing clusters.
On a cluster managed by Slurm, you can use `slurm_train.sh` to spawn training jobs.
It supports both single-node and multi-node training.
The basic usage is as follows.
```shell script
bash ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR} ${GPUS}
```
An example of training the MOT model QDTrack with Slurm:
```shell script
PORT=29501 \
GPUS_PER_NODE=8 \
SRUN_ARGS="--quotatype=reserved" \
bash ./tools/slurm_train.sh \
mypartition \
mottrack \
configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py \
./work_dirs/QDTrack \
8
```
## Test
This section will show how to test existing models on supported datasets.
The following testing environments are supported:
- CPU
- single GPU
- single node multiple GPUs
- multiple nodes
You can also manage jobs with Slurm.
Important:
- In MOT, some algorithms like `DeepSORT`, `SORT` and `StrongSORT` need to load the weights of the `reid` and the `detector` separately.
Other algorithms such as `ByteTrack`, `OCSORT` and `QDTrack` don't. So we provide `--checkpoint`, `--detector` and `--reid` to load weights.
- We provide two ways to evaluate and test models: video_based test and image_based test. Some algorithms like `StrongSORT` and `Mask2Former` only support
the video_based test. If your GPU memory can't fit the entire video, you can switch the test mode by setting the sampler type (see the snippet after this list).
For example:
video_based test: `sampler=dict(type='DefaultSampler', shuffle=False, round_up=False)`
image_based test: `sampler=dict(type='TrackImgSampler')`
- You can set the results saving path by modifying the key `outfile_prefix` in evaluator.
For example, `val_evaluator = dict(outfile_prefix='results/sort_mot17')`.
Otherwise, a temporary file will be created and removed after evaluation.
- If you just want the formatted results without evaluation, you can set `format_only=True`.
For example, `test_evaluator = dict(type='MOTChallengeMetric', metric=['HOTA', 'CLEAR', 'Identity'], outfile_prefix='sort_mot17_results', format_only=True)`
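Putting the last three points together, a minimal sketch of the relevant test-time settings (the metric type follows the example above; the output prefix is hypothetical):
```python
# switch to image_based test when GPU memory cannot fit a whole video
test_dataloader = dict(sampler=dict(type='TrackImgSampler'))

# dump formatted results without running evaluation
test_evaluator = dict(
    type='MOTChallengeMetric',
    metric=['HOTA', 'CLEAR', 'Identity'],
    outfile_prefix='results/sort_mot17',  # hypothetical saving path
    format_only=True)
```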
#### 1. Test on CPU
By default, the model is put on the CUDA device.
Only if there are no CUDA devices will the model be put on the CPU.
So if you want to test the model on CPU, you need to `export CUDA_VISIBLE_DEVICES=-1` to disable GPU visibility first.
More details in [MMEngine](https://github.com/open-mmlab/mmengine/blob/ca282aee9e402104b644494ca491f73d93a9544f/mmengine/runner/runner.py#L849-L850).
```shell script
CUDA_VISIBLE_DEVICES=-1 python tools/test_tracking.py ${CONFIG_FILE} [optional arguments]
```
An example of testing the MOT model SORT on CPU:
```shell script
CUDA_VISIBLE_DEVICES=-1 python tools/test_tracking.py configs/sort/sort_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py --detector ${CHECKPOINT_FILE}
```
#### 2. Test on single GPU
If you want to test the model on single GPU, you can directly use the `tools/test_tracking.py` as follows.
```shell script
python tools/test_tracking.py ${CONFIG_FILE} [optional arguments]
```
You can use `export CUDA_VISIBLE_DEVICES=$GPU_ID` to select the GPU.
An example of testing the MOT model QDTrack on single GPU:
```shell script
CUDA_VISIBLE_DEVICES=2 python tools/test_tracking.py configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py --detector ${CHECKPOINT_FILE}
```
#### 3. Test on single node multiple GPUs
We provide `tools/dist_test_tracking.sh` to launch testing on multiple GPUs.
The basic usage is as follows.
```shell script
bash ./tools/dist_test_tracking.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```
An example of testing the MOT model DeepSORT on a single node with multiple GPUs:
```shell script
bash ./tools/dist_test_tracking.sh configs/qdtrack/qdtrack_faster-rcnn_r50_fpn_8xb2-4e_mot17halftrain_test-mot17halfval.py 8 --detector ${CHECKPOINT_FILE} --reid ${CHECKPOINT_FILE}
```
#### 4. Test on multiple nodes
You can test on multiple nodes, which is similar with "Train on multiple nodes".
#### 5. Test with Slurm
On a cluster managed by Slurm, you can use `slurm_test_tracking.sh` to spawn testing jobs.
It supports both single-node and multi-node testing.
The basic usage is as follows.
```shell script
[GPUS=${GPUS}] bash tools/slurm_test_tracking.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} [optional arguments]
```
An example of testing the VIS model Mask2Former with Slurm:
```shell script
GPUS=8 \
bash tools/slurm_test_tracking.sh \
mypartition \
vis \
configs/mask2former_vis/mask2former_r50_8xb2-8e_youtubevis2021.py \
--checkpoint ${CHECKPOINT_FILE}
```
# Learn about Visualization
## Local Visualization
This section will present how to visualize the detection/tracking results with the local visualizer.
If you want to draw prediction results, you can turn this feature on by setting `draw=True` in `TrackVisualizationHook` as follows.
```python
default_hooks = dict(visualization=dict(type='TrackVisualizationHook', draw=True))
```
Specifically, the `TrackVisualizationHook` has the following arguments:
- `draw`: whether to draw prediction results. If it is False, it means that no drawing will be done. Defaults to False.
- `interval`: The interval of visualization. Defaults to 30.
- `score_thr`: The threshold to visualize the bboxes and masks. Defaults to 0.3.
- `show`: Whether to display the drawn image. Defaults to False.
- `wait_time`: The interval of show (s). Defaults to 0.
- `test_out_dir`: Directory where painted images will be saved during testing.
- `backend_args`: Arguments to instantiate a file client. Defaults to `None`.
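Putting these options together, a hedged example configuration (the output directory is hypothetical):
```python
default_hooks = dict(
    visualization=dict(
        type='TrackVisualizationHook',
        draw=True,         # enable drawing of prediction results
        interval=30,       # visualize every 30 images
        score_thr=0.3,     # only draw bboxes/masks above this score
        show=False,        # do not pop up a window
        test_out_dir='vis_results'))  # hypothetical directory for painted images
```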
In the `TrackVisualizationHook`, `TrackLocalVisualizer` will be called to implement visualization for MOT and VIS tasks.
We will present the details below.
You can refer to MMEngine for more details about [Visualization](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/visualization.md) and [Hook](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/hook.md).
#### Tracking Visualization
We realize the tracking visualization with class `TrackLocalVisualizer`.
You can call it as follows.
```python
visualizer = dict(type='TrackLocalVisualizer')
```
It has the following arguments:
- `name`: Name of the instance. Defaults to 'visualizer'.
- `image`: The origin image to draw. The format should be RGB. Defaults to None.
- `vis_backends`: Visual backend config list. Defaults to None.
- `save_dir`: Save file dir for all storage backends. If it is None, the backend storage will not save any data.
- `line_width`: The linewidth of lines. Defaults to 3.
- `alpha`: The transparency of bboxes or mask. Defaults to 0.8.
Here is a visualization example of DeepSORT:
![test_img_89](https://user-images.githubusercontent.com/99722489/186062929-6d0e4663-0d8e-4045-9ec8-67e0e41da876.png)
# Train predefined models on standard datasets
MMDetection also provides out-of-the-box tools for training detection models.
This section will show how to train _predefined_ models (under [configs](../../../configs)) on standard datasets i.e. COCO.
## Prepare datasets
Preparing datasets is also necessary for training. See section [Prepare datasets](#prepare-datasets) above for details.
**Note**:
Currently, the config files under `configs/cityscapes` use COCO pre-trained weights to initialize.
If your network connection is slow or unavailable, it's advisable to download existing models before beginning training to avoid errors.
## Learning rate auto scaling
**Important**: The default learning rate in config files is for 8 GPUs and 2 samples per GPU (batch size = 8 * 2 = 16). This value is recorded as `auto_scale_lr.base_batch_size` in `configs/_base_/schedules/schedule_1x.py`. The learning rate will be automatically scaled relative to this base batch size of 16. Meanwhile, to avoid affecting other codebases that use mmdet, the `auto_scale_lr.enable` flag defaults to `False`.
If you want to enable this feature, you need to add argument `--auto-scale-lr`. And you need to check the config name which you want to use before you process the command, because the config name indicates the default batch size.
By default, the batch size is `8 x 2 = 16`, as in `faster_rcnn_r50_caffe_fpn_90k_coco.py` or `pisa_faster_rcnn_x101_32x4d_fpn_1x_coco.py`. In other cases, you will see `_NxM_` in the config file name, like `cornernet_hourglass104_mstest_32x3_210e_coco.py` whose batch size is `32 x 3 = 96`, or `scnet_x101_64x4d_fpn_8x1_20e_coco.py` whose batch size is `8 x 1 = 8`.
**Please remember to check the bottom of the specific config file you want to use; it will have `auto_scale_lr.base_batch_size` if the batch size is not `16`. If you can't find those values, check the config files listed in `_base_=[xxx]` and you will find them there. Please do not modify these values if you want the LR to be scaled automatically.**
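For reference, the flag itself looks like the following at the bottom of the base schedule config:
```python
# from configs/_base_/schedules/schedule_1x.py
auto_scale_lr = dict(enable=False, base_batch_size=16)
```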
The basic usage of learning rate auto scaling is as follows.
```shell
python tools/train.py \
    ${CONFIG_FILE} \
    --auto-scale-lr \
    [optional arguments]
```
If you enable this feature, the learning rate will be automatically scaled according to the number of GPUs on the machine and the training batch size. See [linear scaling rule](https://arxiv.org/abs/1706.02677) for details. For example, with 4 GPUs and 2 images per GPU and `lr = 0.01`, moving to 16 GPUs with 4 images per GPU automatically scales the learning rate to `lr = 0.08`.
If you don't want to use it, you need to calculate the learning rate according to the [linear scaling rule](https://arxiv.org/abs/1706.02677) manually, then change `optimizer.lr` in the specific config file.
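As a sketch of the manual calculation, assume the default SGD optimizer with `lr=0.02` at the base batch size of 16 (in MMDetection 3.x configs the optimizer lives under `optim_wrapper`). Training on 4 GPUs with 2 images each (batch size 8) would halve the learning rate:
```python
# manual linear scaling: 0.02 * (4 GPUs * 2 imgs) / 16 = 0.01
optim_wrapper = dict(optimizer=dict(lr=0.01))
```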
## Training on a single GPU
We provide `tools/train.py` to launch training jobs on a single GPU.
The basic usage is as follows.
```shell
python tools/train.py \
    ${CONFIG_FILE} \
    [optional arguments]
```
During training, log files and checkpoints will be saved to the working directory, which is specified by `work_dir` in the config file or via CLI argument `--work-dir`.
By default, the model is evaluated on the validation set every epoch; the evaluation interval can be specified in the config file as shown below.
```python
# evaluate the model every 12 epochs.
train_cfg = dict(val_interval=12)
```
This tool accepts several optional arguments, including:
- `--work-dir ${WORK_DIR}`: Override the working directory.
- `--resume`: Resume from the latest checkpoint in the work_dir automatically.
- `--resume ${CHECKPOINT_FILE}`: Resume from a specific checkpoint.
- `--cfg-options 'Key=value'`: Overrides other settings in the used config.
**Note:**
There is a difference between `resume` and `load-from`:
`resume` loads both the model weights and the optimizer state, and it inherits the iteration number from the specified checkpoint, so training does not start again from scratch. `load-from`, on the other hand, only loads the model weights, and training starts from scratch; it is often used for fine-tuning a model. `load_from` needs to be set in the config file, while `resume` is passed as a command-line argument.
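For example, fine-tuning typically sets `load_from` in the config, while resuming is done from the command line (the checkpoint path below is a placeholder):
```python
# load model weights only; the epoch counter and optimizer state start fresh
load_from = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'  # placeholder path
```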
## Training on CPU
The process of training on the CPU is consistent with single-GPU training. We just need to disable GPUs before training.
```shell
export CUDA_VISIBLE_DEVICES=-1
```
And then run the script [above](#training-on-a-single-GPU).
**Note**:
We do not recommend training on the CPU because it is too slow. This feature is supported so that users can conveniently debug on machines without a GPU.
## Training on multiple GPUs
We provide `tools/dist_train.sh` to launch training on multiple GPUs.
The basic usage is as follows.
```shell
bash ./tools/dist_train.sh \
    ${CONFIG_FILE} \
    ${GPU_NUM} \
    [optional arguments]
```
Optional arguments remain the same as stated [above](#training-on-a-single-GPU).
### Launch multiple jobs simultaneously
If you would like to launch multiple jobs on a single machine, e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs,
you need to specify different ports (29500 by default) for each job to avoid communication conflict.
If you use `dist_train.sh` to launch training jobs, you can set the port in the commands.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh ${CONFIG_FILE} 4
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 ./tools/dist_train.sh ${CONFIG_FILE} 4
```
## Train with multiple machines
If you launch with multiple machines connected via Ethernet, you can run the following commands:
On the first machine:
```shell
NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
On the second machine:
```shell
NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
```
Usually, it is slow if you do not have high-speed networking like InfiniBand.
## Manage jobs with Slurm
[Slurm](https://slurm.schedmd.com/) is a good job scheduling system for computing clusters.
On a cluster managed by Slurm, you can use `slurm_train.sh` to spawn training jobs. It supports both single-node and multi-node training.
The basic usage is as follows.
```shell
[GPUS=${GPUS}] ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR}
```
Below is an example of using 16 GPUs to train Mask R-CNN on a Slurm partition named _dev_, setting the work-dir to a shared file system.
```shell
GPUS=16 ./tools/slurm_train.sh dev mask_r50_1x configs/mask-rcnn_r50_fpn_1x_coco.py /nfs/xxxx/mask_rcnn_r50_fpn_1x
```
You can check [the source code](../../../tools/slurm_train.sh) to review full arguments and environment variables.
When using Slurm, the port option needs to be set in one of the following ways:
1. Set the port through `--cfg-options`. This is recommended since it does not change the original configs.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR} --cfg-options 'dist_params.port=29500'
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR} --cfg-options 'dist_params.port=29501'
```
2. Modify the config files to set different communication ports.
In `config1.py`, set
```python
dist_params = dict(backend='nccl', port=29500)
```
In `config2.py`, set
```python
dist_params = dict(backend='nccl', port=29501)
```
Then you can launch two jobs with `config1.py` and `config2.py`.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR}
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR}
```
# Train with customized datasets
In this part, you will learn how to train predefined models with customized datasets and then test them. We use the [balloon dataset](https://github.com/matterport/Mask_RCNN/tree/master/samples/balloon) as an example to describe the whole process.
The basic steps are as below:
1. Prepare the customized dataset
2. Prepare a config
3. Train, test, and infer models on the customized dataset.
## Prepare the customized dataset
There are three ways to support a new dataset in MMDetection:
1. Reorganize the dataset into COCO format.
2. Reorganize the dataset into a middle format.
3. Implement a new dataset.
We recommend the first two methods, which are usually easier than the third.
In this note, we give an example of converting the data into COCO format.
**Note**: Since MMDetection 3.0, datasets and metrics have been decoupled (except for CityScapes). Therefore, users can use any kind of evaluation metric for any format of dataset during validation, for example, evaluating a COCO-format dataset with the VOC metric, or an OpenImages dataset with both VOC and COCO metrics.
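As a minimal sketch of this decoupling (assuming the dataset is already loaded in COCO format), switching the metric is a small config change:
```python
# evaluate a COCO-format dataset with Pascal VOC mAP instead of COCO-style AP
val_evaluator = dict(type='VOCMetric', metric='mAP')
test_evaluator = val_evaluator
```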
### COCO annotation format
The necessary keys of the COCO format for instance segmentation are listed below; for complete details, please refer to [the official documentation](https://cocodataset.org/#format-data).
```json
{
    "images": [image],
    "annotations": [annotation],
    "categories": [category]
}

image = {
    "id": int,
    "width": int,
    "height": int,
    "file_name": str,
}

annotation = {
    "id": int,
    "image_id": int,
    "category_id": int,
    "segmentation": RLE or [polygon],
    "area": float,
    "bbox": [x,y,width,height], # (x, y) are the coordinates of the upper left corner of the bbox
    "iscrowd": 0 or 1,
}

categories = [{
    "id": int,
    "name": str,
    "supercategory": str,
}]
```
Assume we use the balloon dataset.
After downloading the data, we need to implement a function to convert the annotation format into the COCO format. Then we can use the implemented `CocoDataset` to load the data and perform training and evaluation.
If you take a look at the dataset, you will find the dataset format is as below:
```json
{'base64_img_data': '',
 'file_attributes': {},
 'filename': '34020010494_e5cb88e1c4_k.jpg',
 'fileref': '',
 'regions': {'0': {'region_attributes': {},
                   'shape_attributes': {'all_points_x': [1020, 1000, 994, 1003, 1023, 1050, 1089, 1134,
                                                         1190, 1265, 1321, 1361, 1403, 1428, 1442, 1445,
                                                         1441, 1427, 1400, 1361, 1316, 1269, 1228, 1198,
                                                         1207, 1210, 1190, 1177, 1172, 1174, 1170, 1153,
                                                         1127, 1104, 1061, 1032, 1020],
                                        'all_points_y': [963, 899, 841, 787, 738, 700, 663, 638, 621,
                                                         619, 643, 672, 720, 765, 800, 860, 896, 942,
                                                         990, 1035, 1079, 1112, 1129, 1134, 1144, 1153,
                                                         1166, 1166, 1150, 1136, 1129, 1122, 1112, 1084,
                                                         1037, 989, 963],
                                        'name': 'polygon'}}},
 'size': 1115004}
```
The annotation is a JSON file where each key corresponds to all the annotations of one image.
The code to convert the balloon dataset into coco format is as below.
```python
import os.path as osp

import mmcv
from mmengine.fileio import dump, load
from mmengine.utils import track_iter_progress


def convert_balloon_to_coco(ann_file, out_file, image_prefix):
    data_infos = load(ann_file)

    annotations = []
    images = []
    obj_count = 0
    for idx, v in enumerate(track_iter_progress(data_infos.values())):
        filename = v['filename']
        img_path = osp.join(image_prefix, filename)
        height, width = mmcv.imread(img_path).shape[:2]

        images.append(
            dict(id=idx, file_name=filename, height=height, width=width))

        for _, obj in v['regions'].items():
            assert not obj['region_attributes']
            obj = obj['shape_attributes']
            px = obj['all_points_x']
            py = obj['all_points_y']
            poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
            # flatten [(x1, y1), (x2, y2), ...] into [x1, y1, x2, y2, ...]
            poly = [p for x in poly for p in x]

            x_min, y_min, x_max, y_max = (min(px), min(py), max(px), max(py))

            data_anno = dict(
                image_id=idx,
                id=obj_count,
                category_id=0,
                bbox=[x_min, y_min, x_max - x_min, y_max - y_min],
                area=(x_max - x_min) * (y_max - y_min),
                segmentation=[poly],
                iscrowd=0)
            annotations.append(data_anno)
            obj_count += 1

    coco_format_json = dict(
        images=images,
        annotations=annotations,
        categories=[{
            'id': 0,
            'name': 'balloon'
        }])
    dump(coco_format_json, out_file)


if __name__ == '__main__':
    convert_balloon_to_coco(ann_file='data/balloon/train/via_region_data.json',
                            out_file='data/balloon/train/annotation_coco.json',
                            image_prefix='data/balloon/train')
    convert_balloon_to_coco(ann_file='data/balloon/val/via_region_data.json',
                            out_file='data/balloon/val/annotation_coco.json',
                            image_prefix='data/balloon/val')
```
Using the function above, users can convert the annotation files into COCO format; then we can use `CocoDataset` to train and evaluate the model with `CocoMetric`.
## Prepare a config
The second step is to prepare a config so that the dataset can be successfully loaded. Assume that we want to use Mask R-CNN with FPN; the config to train the detector on the balloon dataset is as below. Assume the config is under the directory `configs/balloon/` and named `mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py`. Please refer to [Learn about Configs - MMDetection documentation](https://mmdetection.readthedocs.io/en/latest/user_guides/config.html) for detailed information about config files.
```python
# The new config inherits a base config to highlight the necessary modification
_base_ = '../mask_rcnn/mask-rcnn_r50-caffe_fpn_ms-poly-1x_coco.py'

# We also need to change the num_classes in head to match the dataset's annotation
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=1), mask_head=dict(num_classes=1)))

# Modify dataset related settings
data_root = 'data/balloon/'
metainfo = {
    'classes': ('balloon', ),
    'palette': [
        (220, 20, 60),
    ]
}
train_dataloader = dict(
    batch_size=1,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='train/annotation_coco.json',
        data_prefix=dict(img='train/')))
val_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='val/annotation_coco.json',
        data_prefix=dict(img='val/')))
test_dataloader = val_dataloader

# Modify metric related settings
val_evaluator = dict(ann_file=data_root + 'val/annotation_coco.json')
test_evaluator = val_evaluator

# We can use the pre-trained Mask R-CNN model to obtain higher performance
load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco_bbox_mAP-0.408__segm_mAP-0.37_20200504_163245-42aa3d00.pth'
```
## Train a new model
To train a model with the new config, you can simply run
```shell
python tools/train.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py
```
For more detailed usages, please refer to the [training guide](https://mmdetection.readthedocs.io/en/latest/user_guides/train.html#train-predefined-models-on-standard-datasets).
## Test and inference
To test the trained model, you can simply run
```shell
python tools/test.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py work_dirs/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon/epoch_12.pth
```
For more detailed usages, please refer to the [testing guide](https://mmdetection.readthedocs.io/en/latest/user_guides/test.html).
# Useful Hooks
MMDetection and MMEngine provide users with various useful hooks including log hooks, `NumClassCheckHook`, etc. This tutorial introduces the functionalities and usages of hooks implemented in MMDetection. For using hooks in MMEngine, please read the [API documentation in MMEngine](https://github.com/open-mmlab/mmengine/tree/main/docs/en/tutorials/hook.md).
## CheckInvalidLossHook
## NumClassCheckHook
## MemoryProfilerHook
[Memory profiler hook](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/memory_profiler_hook.py) records memory information including virtual memory, swap memory, and the memory of the current process. This hook helps grasp the memory usage of the system and discover potential memory leak bugs. To use this hook, users should install `memory_profiler` and `psutil` by `pip install memory_profiler psutil` first.
### Usage
To use this hook, users should add the following code to the config file.
```python
custom_hooks = [
    dict(type='MemoryProfilerHook', interval=50)
]
```
### Result
During training, you can see messages recorded by `MemoryProfilerHook` in the log, as below.
```text
The system has 250 GB (246360 MB + 9407 MB) of memory and 8 GB (5740 MB + 2452 MB) of swap memory in total. Currently 9407 MB (4.4%) of memory and 5740 MB (29.9%) of swap memory were consumed. And the current training process consumed 5434 MB of memory.
```
```text
2022-04-21 08:49:56,881 - mmengine - INFO - Memory information available_memory: 246360 MB, used_memory: 9407 MB, memory_utilization: 4.4 %, available_swap_memory: 5740 MB, used_swap_memory: 2452 MB, swap_memory_utilization: 29.9 %, current_process_memory: 5434 MB
```
## SetEpochInfoHook
## SyncNormHook
## SyncRandomSizeHook
## YOLOXLrUpdaterHook
## YOLOXModeSwitchHook
## How to implement a custom hook
In general, there are 22 points where hooks can be inserted, from the beginning to the end of model training. Users can implement custom hooks and insert them at different points in the training process to do what they want.
- global points: `before_run`, `after_run`
- points in training: `before_train`, `before_train_epoch`, `before_train_iter`, `after_train_iter`, `after_train_epoch`, `after_train`
- points in validation: `before_val`, `before_val_epoch`, `before_val_iter`, `after_val_iter`, `after_val_epoch`, `after_val`
- points at testing: `before_test`, `before_test_epoch`, `before_test_iter`, `after_test_iter`, `after_test_epoch`, `after_test`
- other points: `before_save_checkpoint`, `after_save_checkpoint`
For example, users can implement a hook to check the loss and terminate training when the loss goes NaN. To achieve that, there are three steps:
1. Implement a new hook that inherits the `Hook` class in MMEngine, and implement `after_train_iter` method which checks whether loss goes NaN after every `n` training iterations.
2. The implemented hook should be registered in `HOOKS` by `@HOOKS.register_module()` as shown in the code below.
3. Add `custom_hooks = [dict(type='CheckInvalidLossHook', interval=50)]` in the config file.
```python
from typing import Optional

import torch
from mmengine.hooks import Hook
from mmengine.runner import Runner

from mmdet.registry import HOOKS


@HOOKS.register_module()
class CheckInvalidLossHook(Hook):
    """Check invalid loss hook.

    This hook will regularly check whether the loss is valid
    during training.

    Args:
        interval (int): Checking interval (every k iterations).
            Default: 50.
    """

    def __init__(self, interval: int = 50) -> None:
        self.interval = interval

    def after_train_iter(self,
                         runner: Runner,
                         batch_idx: int,
                         data_batch: Optional[dict] = None,
                         outputs: Optional[dict] = None) -> None:
        """Regularly check whether the loss is valid every n iterations.

        Args:
            runner (:obj:`Runner`): The runner of the training process.
            batch_idx (int): The index of the current batch in the train loop.
            data_batch (dict, Optional): Data from dataloader.
                Defaults to None.
            outputs (dict, Optional): Outputs from model. Defaults to None.
        """
        if self.every_n_train_iters(runner, self.interval):
            assert torch.isfinite(outputs['loss']), \
                'loss becomes infinite or NaN!'
```
Please read [customize_runtime](../advanced_guides/customize_runtime.md) for more about implementing a custom hook.
# Useful Tools
Apart from training/testing scripts, we provide lots of useful tools under the
`tools/` directory.
## Log Analysis
`tools/analysis_tools/analyze_logs.py` plots loss/mAP curves given a training
log file. Run `pip install seaborn` first to install the dependency.
```shell
python tools/analysis_tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--eval-interval ${EVALUATION_INTERVAL}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```
![loss curve image](../../../resources/loss_curve.png)
Examples:
- Plot the classification loss of some run.
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
```
- Plot the classification and regression loss of some run, and save the figure to a pdf.
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
```
- Compare the bbox mAP of two runs in the same figure.
```shell
python tools/analysis_tools/analyze_logs.py plot_curve log1.json log2.json --keys bbox_mAP --legend run1 run2
```
- Compute the average training speed.
```shell
python tools/analysis_tools/analyze_logs.py cal_train_time log.json [--include-outliers]
```
The output is expected to be like the following.
```text
-----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
slowest epoch 11, average time is 1.2024
fastest epoch 1, average time is 1.1909
time std over epochs is 0.0028
average iter time: 1.1959 s/iter
```
## Result Analysis
`tools/analysis_tools/analyze_results.py` calculates single image mAP and saves or shows the topk images with the highest and lowest scores based on prediction results.
**Usage**
```shell
python tools/analysis_tools/analyze_results.py \
    ${CONFIG} \
    ${PREDICTION_PATH} \
    ${SHOW_DIR} \
    [--show] \
    [--wait-time ${WAIT_TIME}] \
    [--topk ${TOPK}] \
    [--show-score-thr ${SHOW_SCORE_THR}] \
    [--cfg-options ${CFG_OPTIONS}]
```
Description of all arguments:
- `config`: The path of a model config file.
- `prediction_path`: Output result file in pickle format from `tools/test.py`.
- `show_dir`: Directory where painted GT and detection images will be saved.
- `--show`: Determines whether to show painted images. If not specified, it will be set to `False`.
- `--wait-time`: The display interval in seconds; 0 means blocking.
- `--topk`: The number of saved images that have the highest and lowest `topk` scores after sorting. If not specified, it will be set to `20`.
- `--show-score-thr`: Show score threshold. If not specified, it will be set to `0`.
- `--cfg-options`: If specified, the key-value pair config options will be merged into the config file.
**Examples**:
Assume that you have a result file in pickle format from `tools/test.py` at the path './result.pkl'.
1. Test Faster R-CNN and visualize the results, saving images to the directory `results/`
```shell
python tools/analysis_tools/analyze_results.py \
    configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    result.pkl \
    results \
    --show
```
2. Test Faster R-CNN and specify topk as 50, saving images to the directory `results/`
```shell
python tools/analysis_tools/analyze_results.py \
    configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    result.pkl \
    results \
    --topk 50
```
3. If you want to filter out low-score predictions, you can specify the `show-score-thr` parameter
```shell
python tools/analysis_tools/analyze_results.py \
    configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    result.pkl \
    results \
    --show-score-thr 0.3
```
## Fusing results from multiple models
`tools/analysis_tools/fuse_results.py` can fuse predictions from different object detection models using Weighted Boxes Fusion (WBF). (Currently only the COCO format is supported.)
**Usage**
```shell
python tools/analysis_tools/fuse_results.py \
    ${PRED_RESULTS} \
    [--annotation ${ANNOTATION}] \
    [--weights ${WEIGHTS}] \
    [--fusion-iou-thr ${FUSION_IOU_THR}] \
    [--skip-box-thr ${SKIP_BOX_THR}] \
    [--conf-type ${CONF_TYPE}] \
    [--eval-single ${EVAL_SINGLE}] \
    [--save-fusion-results ${SAVE_FUSION_RESULTS}] \
    [--out-dir ${OUT_DIR}]
```
Description of all arguments:
- `pred-results`: Paths of detection results from different models. (Currently only the COCO format is supported.)
- `--annotation`: Path of the ground-truth annotation file.
- `--weights`: List of weights for each model. Default: `None`, which means weight == 1 for each model.
- `--fusion-iou-thr`: IoU threshold for boxes to be considered a match. Default: `0.55`.
- `--skip-box-thr`: The confidence threshold below which bboxes are excluded from the WBF algorithm. Default: `0`.
- `--conf-type`: How to calculate the confidence of weighted boxes.
  - `avg`: average value (default).
  - `max`: maximum value.
  - `box_and_model_avg`: box and model wise hybrid weighted average.
  - `absent_model_aware_avg`: weighted average that takes into account the absent model.
- `--eval-single`: Whether to evaluate each single model. Default: `False`.
- `--save-fusion-results`: Whether to save the fusion results. Default: `False`.
- `--out-dir`: Output path of the fusion results.
**Examples**:
Assume that you have obtained 3 result files from the corresponding models through `tools/test.py`, whose paths are './faster-rcnn_r50-caffe_fpn_1x_coco.json', './retinanet_r50-caffe_fpn_1x_coco.json', and './cascade-rcnn_r50-caffe_fpn_1x_coco.json' respectively. The ground-truth file path is './annotation.json'.
1. Fusion of predictions from three models and evaluation of their effectiveness
```shell
python tools/analysis_tools/fuse_results.py \
    ./faster-rcnn_r50-caffe_fpn_1x_coco.json \
    ./retinanet_r50-caffe_fpn_1x_coco.json \
    ./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
    --annotation ./annotation.json \
    --weights 1 2 3
```
2. Simultaneously evaluate each single model and fusion results
```shell
python tools/analysis_tools/fuse_results.py \
    ./faster-rcnn_r50-caffe_fpn_1x_coco.json \
    ./retinanet_r50-caffe_fpn_1x_coco.json \
    ./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
    --annotation ./annotation.json \
    --weights 1 2 3 \
    --eval-single
```
3. Fusion of prediction results from three models and save
```shell
python tools/analysis_tools/fuse_results.py \
    ./faster-rcnn_r50-caffe_fpn_1x_coco.json \
    ./retinanet_r50-caffe_fpn_1x_coco.json \
    ./cascade-rcnn_r50-caffe_fpn_1x_coco.json \
    --annotation ./annotation.json \
    --weights 1 2 3 \
    --save-fusion-results \
    --out-dir outputs/fusion
```
## Visualization
### Visualize Datasets
`tools/analysis_tools/browse_dataset.py` helps the user browse a detection dataset (both
images and bounding box annotations) visually, or save the images to a
designated directory.
```shell
python tools/analysis_tools/browse_dataset.py ${CONFIG} [-h] [--skip-type ${SKIP_TYPE[SKIP_TYPE...]}] [--output-dir ${OUTPUT_DIR}] [--not-show] [--show-interval ${SHOW_INTERVAL}]
```
### Visualize Models
First, convert the model to ONNX as described
[here](#convert-mmdetection-model-to-onnx-experimental).
Note that currently only RetinaNet is supported; support for other models
will come in later versions.
The converted model could be visualized by tools like [Netron](https://github.com/lutzroeder/netron).
### Visualize Predictions
If you need a lightweight GUI for visualizing the detection results, you can refer [DetVisGUI project](https://github.com/Chien-Hung/DetVisGUI/tree/mmdetection).
## Error Analysis
`tools/analysis_tools/coco_error_analysis.py` analyzes COCO results per category and by
different criteria. It can also make a plot to provide useful information.
```shell
python tools/analysis_tools/coco_error_analysis.py ${RESULT} ${OUT_DIR} [-h] [--ann ${ANN}] [--types ${TYPES[TYPES...]}]
```
Example:
Assume that you have the [Mask R-CNN checkpoint file](https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth) in the path 'checkpoint'. For other checkpoints, please refer to our [model zoo](./model_zoo.md).
You can modify the test_evaluator to save the result bboxes as follows:
1. Find which dataset config in `configs/_base_/datasets` the current config corresponds to.
2. Replace the original `test_evaluator` and `test_dataloader` with the commented-out `test_evaluator` and `test_dataloader` in that dataset config.
3. Use the following command to get the bbox and segmentation results in JSON format.
```shell
python tools/test.py \
    configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py \
    checkpoint/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth
```
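For reference, the commented-out evaluator in the dataset config typically looks like the sketch below (paths follow the default `coco_instance.py` and may differ in your setup):
```python
test_evaluator = dict(
    type='CocoMetric',
    metric=['bbox', 'segm'],
    format_only=True,  # only dump the result json files, no evaluation
    ann_file=data_root + 'annotations/instances_val2017.json',
    outfile_prefix='./work_dirs/coco_instance/test')
```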
1. Get COCO bbox error results per category and save the analysis result images to the directory `results/` (in the [config](../../../configs/_base_/datasets/coco_instance.py) the default output directory is './work_dirs/coco_instance/test')
```shell
python tools/analysis_tools/coco_error_analysis.py \
    results.bbox.json \
    results \
    --ann=data/coco/annotations/instances_val2017.json
```
2. Get COCO segmentation error results per category and save the analysis result images to the directory `results/`
```shell
python tools/analysis_tools/coco_error_analysis.py \
    results.segm.json \
    results \
    --ann=data/coco/annotations/instances_val2017.json \
    --types='segm'
```
## Model Serving
In order to serve an `MMDetection` model with [`TorchServe`](https://pytorch.org/serve/), follow these steps:
### 1. Install TorchServe
Suppose you have a `Python` environment with `PyTorch` and `MMDetection` successfully installed;
then you can run the following command to install `TorchServe` and its dependencies.
For other installation options, please refer to the [quick start](https://github.com/pytorch/serve/blob/master/README.md#serve-a-model).
```shell
python -m pip install torchserve torch-model-archiver torch-workflow-archiver nvgpu
```
**Note**: Please refer to [torchserve docker](https://github.com/pytorch/serve/blob/master/docker/README.md) if you want to use `TorchServe` in docker.
### 2. Convert model from MMDetection to TorchServe
```shell
python tools/deployment/mmdet2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
    --output-folder ${MODEL_STORE} \
    --model-name ${MODEL_NAME}
```
### 3. Start `TorchServe`
```shell
torchserve --start --ncs \
    --model-store ${MODEL_STORE} \
    --models ${MODEL_NAME}.mar
```
### 4. Test deployment
```shell
curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/3dogs.jpg
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T 3dogs.jpg
```
You should obtain a response similar to:
```json
[
  {
    "class_label": 16,
    "class_name": "dog",
    "bbox": [
      294.63409423828125,
      203.99111938476562,
      417.048583984375,
      281.62744140625
    ],
    "score": 0.9987992644309998
  },
  {
    "class_label": 16,
    "class_name": "dog",
    "bbox": [
      404.26019287109375,
      126.0080795288086,
      574.5091552734375,
      293.6662292480469
    ],
    "score": 0.9979367256164551
  },
  {
    "class_label": 16,
    "class_name": "dog",
    "bbox": [
      197.2144775390625,
      93.3067855834961,
      307.8505554199219,
      276.7560119628906
    ],
    "score": 0.993338406085968
  }
]
```
#### Compare results
You can use `test_torchserver.py` to compare the results of `TorchServe` and `PyTorch`, and visualize them.
```shell
python tools/deployment/test_torchserver.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME} \
    [--inference-addr ${INFERENCE_ADDR}] [--device ${DEVICE}] [--score-thr ${SCORE_THR}] [--work-dir ${WORK_DIR}]
```
Example:
```shell
python tools/deployment/test_torchserver.py \
    demo/demo.jpg \
    configs/yolo/yolov3_d53_8xb8-320-273e_coco.py \
    checkpoint/yolov3_d53_320_273e_coco-421362b6.pth \
    yolov3 \
    --work-dir ./work-dir
```
### 5. Stop `TorchServe`
```shell
torchserve --stop
```
## Model Complexity
`tools/analysis_tools/get_flops.py` is a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) to compute the FLOPs and params of a given model.
```shell
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```
You will get the results like this.
```text
==============================
Input shape: (3, 1280, 800)
Flops: 239.32 GFLOPs
Params: 37.74 M
==============================
```
**Note**: This tool is still experimental and we do not guarantee that the
number is absolutely correct. You may use the result for simple
comparisons, but double-check it before adopting it in technical reports or papers.
1. FLOPs are related to the input shape while parameters are not. The default
input shape is (1, 3, 1280, 800).
2. Some operators are not counted into FLOPs like GN and custom operators. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/2.x/mmcv/cnn/utils/flops_counter.py) for details.
3. The FLOPs of two-stage detectors depend on the number of proposals.
## Model conversion
### MMDetection model to ONNX
We provide a script to convert models to the [ONNX](https://github.com/onnx/onnx) format. We also support comparing the output results between the PyTorch and ONNX models for verification. For more details, please refer to [mmdeploy](https://github.com/open-mmlab/mmdeploy).
### MMDetection 1.x model to MMDetection 2.x
`tools/model_converters/upgrade_model_version.py` upgrades a previous MMDetection checkpoint
to the new version. Note that this script is not guaranteed to work, as some
breaking changes were introduced in the new version. It is recommended to
use the new checkpoints directly.
```shell
python tools/model_converters/upgrade_model_version.py ${IN_FILE} ${OUT_FILE} [-h] [--num-classes NUM_CLASSES]
```
### RegNet model to MMDetection
`tools/model_converters/regnet2mmdet.py` converts keys in pycls pretrained RegNet models to
MMDetection style.
```shell
python tools/model_converters/regnet2mmdet.py ${SRC} ${DST} [-h]
```
### Detectron ResNet to Pytorch
`tools/model_converters/detectron2pytorch.py` converts keys in the original Detectron pretrained
ResNet models to PyTorch style.
```shell
python tools/model_converters/detectron2pytorch.py ${SRC} ${DST} ${DEPTH} [-h]
```
### Prepare a model for publishing
`tools/model_converters/publish_model.py` helps users prepare their model for publishing.
Before you upload a model to AWS, you may want to
1. convert the model weights to CPU tensors,
2. delete the optimizer states, and
3. compute the hash of the checkpoint file and append the hash id to the
   filename.
```shell
python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```
E.g.,
```shell
python tools/model_converters/publish_model.py work_dirs/faster_rcnn/latest.pth faster_rcnn_r50_fpn_1x_20190801.pth
```
The final output filename will be `faster_rcnn_r50_fpn_1x_20190801-{hash id}.pth`.
## Dataset Conversion
`tools/dataset_converters/` contains tools to convert the Cityscapes dataset
and Pascal VOC dataset to the COCO format.
```shell
python tools/dataset_converters/cityscapes.py ${CITYSCAPES_PATH} [-h] [--img-dir ${IMG_DIR}] [--gt-dir ${GT_DIR}] [-o ${OUT_DIR}] [--nproc ${NPROC}]
python tools/dataset_converters/pascal_voc.py ${DEVKIT_PATH} [-h] [-o ${OUT_DIR}]
```
## Dataset Download
`tools/misc/download_dataset.py` supports downloading datasets such as COCO, VOC, and LVIS.
```shell
python tools/misc/download_dataset.py --dataset-name coco2017
python tools/misc/download_dataset.py --dataset-name voc2007
python tools/misc/download_dataset.py --dataset-name lvis
```
For users in China, these datasets can also be downloaded from [OpenDataLab](https://opendatalab.com/?source=OpenMMLab%20GitHub) with high speed:
- [COCO2017](https://opendatalab.com/COCO_2017/download?source=OpenMMLab%20GitHub)
- [VOC2007](https://opendatalab.com/PASCAL_VOC2007/download?source=OpenMMLab%20GitHub)
- [VOC2012](https://opendatalab.com/PASCAL_VOC2012/download?source=OpenMMLab%20GitHub)
- [LVIS](https://opendatalab.com/LVIS/download?source=OpenMMLab%20GitHub)
## Benchmark
### Robust Detection Benchmark
`tools/analysis_tools/test_robustness.py` and `tools/analysis_tools/robustness_eval.py` help users evaluate model robustness. The core idea comes from [Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming](https://arxiv.org/abs/1907.07484). For more information on how to evaluate models on corrupted images, along with results for a set of standard models, please refer to [robustness_benchmarking.md](robustness_benchmarking.md).
### FPS Benchmark
`tools/analysis_tools/benchmark.py` helps users calculate FPS. The FPS value includes model forward and post-processing. To get a more accurate value, it currently only supports the single-GPU distributed startup mode.
```shell
python -m torch.distributed.launch --nproc_per_node=1 --master_port=${PORT} tools/analysis_tools/benchmark.py \
    ${CONFIG} \
    [--checkpoint ${CHECKPOINT}] \
    [--repeat-num ${REPEAT_NUM}] \
    [--max-iter ${MAX_ITER}] \
    [--log-interval ${LOG_INTERVAL}] \
    --launcher pytorch
```
Example: Assuming that you have already downloaded the `Faster R-CNN` model checkpoint to the directory `checkpoints/`:
```shell
python -m torch.distributed.launch --nproc_per_node=1 --master_port=29500 tools/analysis_tools/benchmark.py \
    configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py \
    checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    --launcher pytorch
```
## Miscellaneous
### Evaluating a metric
`tools/analysis_tools/eval_metric.py` evaluates certain metrics of a pkl result file
according to a config file.
```shell
python tools/analysis_tools/eval_metric.py ${CONFIG} ${PKL_RESULTS} [-h] [--format-only] [--eval ${EVAL[EVAL ...]}]
                                          [--cfg-options ${CFG_OPTIONS [CFG_OPTIONS ...]}]
                                          [--eval-options ${EVAL_OPTIONS [EVAL_OPTIONS ...]}]
```
### Print the entire config
`tools/misc/print_config.py` prints the whole config verbatim, expanding all its
imports.
```shell
python tools/misc/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]
```
## Hyper-parameter Optimization
### YOLO Anchor Optimization
`tools/analysis_tools/optimize_anchors.py` provides two methods to optimize YOLO anchors.
One is k-means anchor clustering, which refers to [darknet](https://github.com/AlexeyAB/darknet/blob/master/src/detector.c#L1421).
```shell
python tools/analysis_tools/optimize_anchors.py ${CONFIG} --algorithm k-means --input-shape ${INPUT_SHAPE [WIDTH HEIGHT]} --output-dir ${OUTPUT_DIR}
```
The other uses differential evolution to optimize anchors.
```shell
python tools/analysis_tools/optimize_anchors.py ${CONFIG} --algorithm differential_evolution --input-shape ${INPUT_SHAPE [WIDTH HEIGHT]} --output-dir ${OUTPUT_DIR}
```
E.g.,
```shell
python tools/analysis_tools/optimize_anchors.py configs/yolo/yolov3_d53_8xb8-320-273e_coco.py --algorithm differential_evolution --input-shape 608 608 --device cuda --output-dir work_dirs
```
You will get:
```text
loading annotations into memory...
Done (t=9.70s)
creating index...
index created!
2021-07-19 19:37:20,951 - mmdet - INFO - Collecting bboxes from annotation...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 117266/117266, 15874.5 task/s, elapsed: 7s, ETA: 0s
2021-07-19 19:37:28,753 - mmdet - INFO - Collected 849902 bboxes.
differential_evolution step 1: f(x)= 0.506055
differential_evolution step 2: f(x)= 0.506055
......
differential_evolution step 489: f(x)= 0.386625
2021-07-19 19:46:40,775 - mmdet - INFO Anchor evolution finish. Average IOU: 0.6133754253387451
2021-07-19 19:46:40,776 - mmdet - INFO Anchor differential evolution result:[[10, 12], [15, 30], [32, 22], [29, 59], [61, 46], [57, 116], [112, 89], [154, 198], [349, 336]]
2021-07-19 19:46:40,798 - mmdet - INFO Result saved in work_dirs/anchor_optimize_result.json
```
## Confusion Matrix
A confusion matrix is a summary of prediction results.
`tools/analysis_tools/confusion_matrix.py` can analyze the prediction results and plot a confusion matrix table.
First, run `tools/test.py` to save the `.pkl` detection results.
Then, run
```shell
python tools/analysis_tools/confusion_matrix.py ${CONFIG} ${DETECTION_RESULTS} ${SAVE_DIR} --show
```
And you will get a confusion matrix like this:
![confusion_matrix_example](https://user-images.githubusercontent.com/12907710/140513068-994cdbf4-3a4a-48f0-8fd8-2830d93fd963.png)
## COCO Separated & Occluded Mask Metric
Detecting occluded objects still remains a challenge for state-of-the-art object detectors.
We implemented the metric presented in paper [A Tri-Layer Plugin to Improve Occluded Detection](https://arxiv.org/abs/2210.10046) to calculate the recall of separated and occluded masks.
There are two ways to use this metric:
### Offline evaluation
We provide a script to calculate the metric with a dumped prediction file.
First, use the `tools/test.py` script to dump the detection results:
```shell
python tools/test.py ${CONFIG} ${MODEL_PATH} --out results.pkl
```
Then, run the `tools/analysis_tools/coco_occluded_separated_recall.py` script to get the recall of separated and occluded masks:
```shell
python tools/analysis_tools/coco_occluded_separated_recall.py results.pkl --out occluded_separated_recall.json
```
The output should be like this:
```text
loading annotations into memory...
Done (t=0.51s)
creating index...
index created!
processing detection results...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5000/5000, 109.3 task/s, elapsed: 46s, ETA: 0s
computing occluded mask recall...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 5550/5550, 780.5 task/s, elapsed: 7s, ETA: 0s
COCO occluded mask recall: 58.79%
COCO occluded mask success num: 3263
computing separated mask recall...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3522/3522, 778.3 task/s, elapsed: 5s, ETA: 0s
COCO separated mask recall: 31.94%
COCO separated mask success num: 1125
+-----------+--------+-------------+
| mask type | recall | num correct |
+-----------+--------+-------------+
| occluded | 58.79% | 3263 |
| separated | 31.94% | 1125 |
+-----------+--------+-------------+
Evaluation results have been saved to occluded_separated_recall.json.
```
### Online evaluation
We implement `CocoOccludedSeparatedMetric`, which inherits from `CocoMetric`.
To evaluate the recall of separated and occluded masks during training, just replace the evaluator metric type with `'CocoOccludedSeparatedMetric'` in your config:
```python
val_evaluator = dict(
    type='CocoOccludedSeparatedMetric',  # modify this
    ann_file=data_root + 'annotations/instances_val2017.json',
    metric=['bbox', 'segm'],
    format_only=False)
test_evaluator = val_evaluator
```
Please cite the paper if you use this metric:
```latex
@article{zhan2022triocc,
  title={A Tri-Layer Plugin to Improve Occluded Detection},
  author={Zhan, Guanqi and Xie, Weidi and Zisserman, Andrew},
  journal={British Machine Vision Conference},
  year={2022}
}
```
# Visualization
Before reading this tutorial, it is recommended to read MMEngine's [Visualization](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/visualization.md) documentation to get a first glimpse of the `Visualizer` definition and usage.
In brief, the [`Visualizer`](mmengine.visualization.Visualizer) is implemented in MMEngine to meet the daily visualization needs, and contains three main functions:
- Implement common drawing APIs, such as [`draw_bboxes`](mmengine.visualization.Visualizer.draw_bboxes) which implements bounding box drawing, and [`draw_lines`](mmengine.visualization.Visualizer.draw_lines) which implements line drawing.
- Support writing visualization results, learning rate curves, loss function curves, and validation accuracy curves to various backends, including local disks and common deep learning training logging tools such as [TensorBoard](https://www.tensorflow.org/tensorboard) and [Wandb](https://wandb.ai/site).
- Support calling anywhere in the code to visualize or record intermediate states of the model during training or testing, such as feature maps and validation results.
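As a quick, self-contained illustration of the drawing APIs (using a random image rather than any MMDet model output):
```python
import numpy as np
from mmengine.visualization import Visualizer

# a dummy RGB image
image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
visualizer = Visualizer(image=image)
# draw one bounding box in (x1, y1, x2, y2) format
visualizer.draw_bboxes(np.array([[50, 50, 150, 150]]))
drawn_image = visualizer.get_image()  # ndarray with the box drawn on it
```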
Based on MMEngine's Visualizer, MMDet comes with a variety of pre-built visualization tools that can be used by simply modifying the following configuration files.
- The `tools/analysis_tools/browse_dataset.py` script provides a dataset visualization function that draws images and corresponding annotations after Data Transforms, as described in [`browse_dataset.py`](useful_tools.md#Visualization).
- MMEngine implements `LoggerHook`, which uses `Visualizer` to write the learning rate, loss and evaluation results to the backend set by `Visualizer`. Therefore, by modifying the `Visualizer` backend in the configuration file, for example to `TensorboardVisBackend` or `WandbVisBackend`, you can log to common training tools such as `TensorBoard` or `WandB`, making it easy to analyze and monitor the training process.
- MMDet implements `DetVisualizationHook`, which uses `Visualizer` to visualize or store the prediction results of the validation or prediction phase into the backend set by `Visualizer`. By modifying the `Visualizer` backend in the configuration file, for example to `TensorboardVisBackend` or `WandbVisBackend`, you can store the predicted images to `TensorBoard` or `Wandb`.
## Configuration
Thanks to the registration mechanism, in MMDet we can set the behavior of the `Visualizer` by modifying the configuration file. Usually, we define the default configuration for the visualizer in `configs/_base_/default_runtime.py`; see the [configuration tutorial](config.md) for details.
```Python
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='DetLocalVisualizer',
    vis_backends=vis_backends,
    name='visualizer')
```
Based on the above example, we can see that the configuration of `Visualizer` consists of two main parts, namely, the type of `Visualizer` and the visualization backend `vis_backends` it uses.
- Users can directly use `DetLocalVisualizer` to visualize labels or predictions for supported tasks.
- MMDet sets the visualization backend `vis_backends` to the local visualization backend `LocalVisBackend` by default, saving all visualization results and other training information in a local folder.
## Storage
MMDet uses the local visualization backend [`LocalVisBackend`](mmengine.visualization.LocalVisBackend) by default. The information stored by `VisualizerHook` and `LoggerHook`, including loss, learning rate, and evaluation accuracy, is saved to the `{work_dir}/{config_name}/{time}/{vis_data}` folder by default. In addition, MMDet also supports other common visualization backends, such as `TensorboardVisBackend` and `WandbVisBackend`; you only need to change the `vis_backends` type in the configuration file to the corresponding visualization backend. For example, you can store data to `TensorBoard` and `Wandb` by simply inserting the following code block into the configuration file.
```Python
# https://mmengine.readthedocs.io/en/latest/api/visualization.html
_base_.visualizer.vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='TensorboardVisBackend'),
    dict(type='WandbVisBackend'),
]
```
## Plot
### Plot the prediction results
MMDet mainly uses [`DetVisualizationHook`](mmdet.engine.hooks.DetVisualizationHook) to plot the prediction results of validation and testing. By default, `DetVisualizationHook` is off, and the default configuration is as follows.
```Python
visualization=dict(  # user visualization of validation and test results
    type='DetVisualizationHook',
    draw=False,
    interval=1,
    show=False)
```
The following table shows the parameters supported by `DetVisualizationHook`.
| Parameters | Description |
| :--------: | :-----------------------------------------------------------------------------------------------------------: |
| draw | Controls whether `DetVisualizationHook` is enabled; it is off by default. |
| interval | Controls the interval (in iterations) at which val or test results are stored or displayed, taking effect only when `DetVisualizationHook` is enabled. |
| show | Controls whether to visualize the results of val or test. |
If you want to enable `DetVisualizationHook` related functions and configurations during training or testing, you only need to modify the configuration. Taking `configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py` as an example, to draw annotations and predictions at the same time and display the images, the configuration can be modified as follows:
```Python
visualization = _base_.default_hooks.visualization
visualization.update(dict(draw=True, show=True))
```
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/224883427-1294a7ba-14ab-4d93-9152-55a7b270b1f1.png" height="300"/>
</div>
The `test.py` script further simplifies this by providing the `--show` and `--show-dir` arguments, which visualize the annotation and prediction results during testing without modifying the configuration.
```Shell
# Show test results
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show
# Specify where to store the prediction results
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show-dir imgs/
```
```makefile
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
```