Commit 4a950602 authored by dengjb

update codes

parent 3e89e256
# Visualization

Before reading this tutorial, it is recommended to first read MMEngine's [Visualization](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/visualization.md) documentation to get a basic understanding of what `Visualizer` is and how it is used.

In short, `Visualizer` is implemented in MMEngine to meet everyday visualization needs and provides three main capabilities:

- Common drawing APIs, for example [`draw_bboxes`](mmengine.visualization.Visualizer.draw_bboxes) for drawing bounding boxes and [`draw_lines`](mmengine.visualization.Visualizer.draw_lines) for drawing lines.
- Writing visualization results, learning-rate curves, loss curves, and validation-accuracy curves to various backends, including the local disk and common deep learning logging tools such as [TensorBoard](https://www.tensorflow.org/tensorboard) and [Wandb](https://wandb.ai/site).
- Being callable anywhere in the code to visualize or record intermediate states of the model during training or testing, such as feature maps and validation results.

Based on MMEngine's `Visualizer`, MMDet provides a variety of pre-built visualization tools that users can use by simply modifying the configuration files:

- The `tools/analysis_tools/browse_dataset.py` script provides a dataset visualization feature that draws images and their annotations after the data transforms have been applied; see [`browse_dataset.py`](useful_tools.md#Visualization) for details and the usage example right after this list.
- MMEngine implements `LoggerHook`, which uses `Visualizer` to write the learning rate, losses, and evaluation results to the backends configured for `Visualizer`. Therefore, by changing the `Visualizer` backend in the config file, for example to `TensorboardVisBackend` or `WandbVisBackend`, logs can be written to common training log tools such as TensorBoard or WandB, making it convenient to analyze and monitor the training process.
- MMDet implements `VisualizerHook`, which uses `Visualizer` to visualize or store the predictions of the validation or prediction phase in the backends configured for `Visualizer`. Therefore, by changing the `Visualizer` backend in the config file, for example to `TensorboardVisBackend` or `WandbVisBackend`, prediction images can be stored in TensorBoard or Wandb.
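For example, a minimal way to browse the transformed training samples of an RTMDet config might look like the following. This is a hedged sketch: the `--output-dir` flag is assumed from the script's common usage, so check `python tools/analysis_tools/browse_dataset.py --help` for the exact arguments.

```Shell
# Draw images with their annotations after the data transforms and save them
# to a local folder (the output directory is an illustrative placeholder).
python tools/analysis_tools/browse_dataset.py \
    configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py \
    --output-dir work_dirs/browse_dataset
```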
## Configuration

Thanks to the registry mechanism, MMDet lets us control the behavior of `Visualizer` by modifying configuration files. Usually, the default configuration of the visualizer is defined in `configs/_base_/default_runtime.py`; see the [configuration tutorial](config.md) for details.
```Python
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='DetLocalVisualizer',
    vis_backends=vis_backends,
    name='visualizer')
```
From the example above, we can see that the `Visualizer` configuration consists of two main parts: the `Visualizer` type and the visualization backends `vis_backends` it uses.

- Users can directly use `DetLocalVisualizer` to visualize the labels or predictions of supported tasks.
- MMDet sets the visualization backend `vis_backends` to the local backend `LocalVisBackend` by default, which saves all visualization results and other training information to a local folder.
## Storage

MMDet uses the local visualization backend [`LocalVisBackend`](mmengine.visualization.LocalVisBackend) by default. The model losses, learning rates, and evaluation accuracies recorded by `LoggerHook`, together with the visualization results stored by `VisualizerHook`, are saved to the `{work_dir}/{config_name}/{time}/{vis_data}` folder by default. In addition, MMDet supports other common visualization backends such as `TensorboardVisBackend` and `WandbVisBackend`; you only need to change the `vis_backends` type in the config file to the corresponding backend. For example, inserting the following block into the config file stores the data in TensorBoard and Wandb.
```Python
# https://mmengine.readthedocs.io/en/latest/api/visualization.html
_base_.visualizer.vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='TensorboardVisBackend'),
    dict(type='WandbVisBackend'),
]
```
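If you want to customize the Wandb run, `WandbVisBackend` also accepts an `init_kwargs` dict that is forwarded to `wandb.init`. The project and run names below are illustrative placeholders, not MMDet defaults; a minimal sketch:

```Python
_base_.visualizer.vis_backends = [
    dict(type='LocalVisBackend'),
    dict(
        type='WandbVisBackend',
        # Forwarded to wandb.init; the values below are placeholders.
        init_kwargs=dict(project='mmdet-demo', name='rtmdet-tiny-run')),
]
```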
## Plotting

### Plot prediction results

MMDet mainly uses [`DetVisualizationHook`](mmdet.engine.hooks.DetVisualizationHook) to plot the prediction results of validation and testing. `DetVisualizationHook` is disabled by default, and its default configuration is as follows.
```Python
visualization=dict(  # user visualization of validation and test results
    type='DetVisualizationHook',
    draw=False,
    interval=1,
    show=False)
```
The following table shows the parameters supported by `DetVisualizationHook`.

| Parameter | Description |
| :------: | :------------------------------------------------------------------------------: |
| draw | Turns `DetVisualizationHook` on or off; it is disabled by default. |
| interval | Controls how often validation or test results are stored or displayed while `DetVisualizationHook` is enabled, in units of iterations. |
| show | Controls whether to display the validation or test results in a window. |
If you want to enable `DetVisualizationHook` and its related features during training or testing, you only need to modify the config. Taking `configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py` as an example, to draw annotations and predictions at the same time and display the images, the config can be modified as follows:
```Python
visualization = _base_.default_hooks.visualization
visualization.update(dict(draw=True, show=True))
```
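Equivalently, if you prefer not to edit the config file, the same overrides can usually be passed on the command line via `--cfg-options`, which MMEngine-based tools such as `tools/test.py` support. A hedged sketch; the checkpoint path is an illustrative placeholder:

```Shell
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py \
    checkpoints/rtmdet_tiny_8xb32-300e_coco.pth \
    --cfg-options default_hooks.visualization.draw=True default_hooks.visualization.show=True
```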
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/224883427-1294a7ba-14ab-4d93-9152-55a7b270b1f1.png" height="300"/>
</div>
The `test.py` script provides the `--show` and `--show-dir` arguments, which visualize annotations and prediction results during testing without modifying the config file, further simplifying the testing process.
```Shell
# Show the test results
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show
# Specify where to save the prediction results
python tools/test.py configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --show-dir imgs/
```
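Besides `tools/test.py`, prediction results can also be visualized programmatically through the `DetInferencer` API included in this commit (see `mmdet/apis/det_inferencer.py` below). A minimal sketch, assuming the RTMDet-tiny metafile name is available and using a placeholder input image:

```Python
from mmdet.apis import DetInferencer

# Build the inferencer from a model name defined in the metafile; the weights
# are resolved automatically. 'demo.jpg' is a placeholder image path.
inferencer = DetInferencer(model='rtmdet_tiny_8xb32-300e_coco', device='cuda:0')

# Run inference, render predictions above the 0.3 score threshold, and save
# the drawn images under outputs/vis and the JSON results under outputs/preds.
results = inferencer('demo.jpg', out_dir='outputs/', no_save_pred=False,
                     pred_score_thr=0.3)
```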
# Copyright (c) OpenMMLab. All rights reserved.
import mmcv
import mmengine
from mmengine.utils import digit_version
from .version import __version__, version_info
mmcv_minimum_version = '2.0.0rc4'
mmcv_maximum_version = '2.2.0'
mmcv_version = digit_version(mmcv.__version__)
mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)
assert (mmcv_version >= digit_version(mmcv_minimum_version)
and mmcv_version < digit_version(mmcv_maximum_version)), \
f'MMCV=={mmcv.__version__} is used but incompatible. ' \
f'Please install mmcv>={mmcv_minimum_version}, <{mmcv_maximum_version}.'
assert (mmengine_version >= digit_version(mmengine_minimum_version)
and mmengine_version < digit_version(mmengine_maximum_version)), \
f'MMEngine=={mmengine.__version__} is used but incompatible. ' \
f'Please install mmengine>={mmengine_minimum_version}, ' \
f'<{mmengine_maximum_version}.'
__all__ = ['__version__', 'version_info', 'digit_version']
# Copyright (c) OpenMMLab. All rights reserved.
from .det_inferencer import DetInferencer
from .inference import (async_inference_detector, inference_detector,
inference_mot, init_detector, init_track_model)
__all__ = [
'init_detector', 'async_inference_detector', 'inference_detector',
'DetInferencer', 'inference_mot', 'init_track_model'
]
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import os.path as osp
import warnings
from typing import Dict, Iterable, List, Optional, Sequence, Tuple, Union
import mmcv
import mmengine
import numpy as np
import torch.nn as nn
from mmcv.transforms import LoadImageFromFile
from mmengine.dataset import Compose
from mmengine.fileio import (get_file_backend, isdir, join_path,
list_dir_or_file)
from mmengine.infer.infer import BaseInferencer, ModelType
from mmengine.model.utils import revert_sync_batchnorm
from mmengine.registry import init_default_scope
from mmengine.runner.checkpoint import _load_checkpoint_to_model
from mmengine.visualization import Visualizer
from rich.progress import track
from mmdet.evaluation import INSTANCE_OFFSET
from mmdet.registry import DATASETS
from mmdet.structures import DetDataSample
from mmdet.structures.mask import encode_mask_results, mask2bbox
from mmdet.utils import ConfigType
from ..evaluation import get_classes
try:
from panopticapi.evaluation import VOID
from panopticapi.utils import id2rgb
except ImportError:
id2rgb = None
VOID = None
InputType = Union[str, np.ndarray]
InputsType = Union[InputType, Sequence[InputType]]
PredType = List[DetDataSample]
ImgType = Union[np.ndarray, Sequence[np.ndarray]]
IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.ppm', '.bmp', '.pgm', '.tif',
'.tiff', '.webp')
class DetInferencer(BaseInferencer):
"""Object Detection Inferencer.
Args:
model (str, optional): Path to the config file or the model name
defined in metafile. For example, it could be
"rtmdet-s" or 'rtmdet_s_8xb32-300e_coco' or
"configs/rtmdet/rtmdet_s_8xb32-300e_coco.py".
If model is not specified, user must provide the
`weights` saved by MMEngine which contains the config string.
Defaults to None.
weights (str, optional): Path to the checkpoint. If it is not specified
and model is a model name of metafile, the weights will be loaded
from metafile. Defaults to None.
device (str, optional): Device to run inference. If None, the available
device will be automatically used. Defaults to None.
scope (str, optional): The scope of the model. Defaults to mmdet.
palette (str): Color palette used for visualization. The order of
priority is palette -> config -> checkpoint. Defaults to 'none'.
show_progress (bool): Control whether to display the progress
bar during the inference process. Defaults to True.
"""
preprocess_kwargs: set = set()
forward_kwargs: set = set()
visualize_kwargs: set = {
'return_vis',
'show',
'wait_time',
'draw_pred',
'pred_score_thr',
'img_out_dir',
'no_save_vis',
}
postprocess_kwargs: set = {
'print_result',
'pred_out_dir',
'return_datasamples',
'no_save_pred',
}
def __init__(self,
model: Optional[Union[ModelType, str]] = None,
weights: Optional[str] = None,
device: Optional[str] = None,
scope: Optional[str] = 'mmdet',
palette: str = 'none',
show_progress: bool = True) -> None:
# A global counter tracking the number of images processed, for
# naming of the output images
self.num_visualized_imgs = 0
self.num_predicted_imgs = 0
self.palette = palette
init_default_scope(scope)
super().__init__(
model=model, weights=weights, device=device, scope=scope)
self.model = revert_sync_batchnorm(self.model)
self.show_progress = show_progress
def _load_weights_to_model(self, model: nn.Module,
checkpoint: Optional[dict],
cfg: Optional[ConfigType]) -> None:
"""Loading model weights and meta information from cfg and checkpoint.
Args:
model (nn.Module): Model to load weights and meta information.
checkpoint (dict, optional): The loaded checkpoint.
cfg (Config or ConfigDict, optional): The loaded config.
"""
if checkpoint is not None:
_load_checkpoint_to_model(model, checkpoint)
checkpoint_meta = checkpoint.get('meta', {})
# save the dataset_meta in the model for convenience
if 'dataset_meta' in checkpoint_meta:
# mmdet 3.x, all keys should be lowercase
model.dataset_meta = {
k.lower(): v
for k, v in checkpoint_meta['dataset_meta'].items()
}
elif 'CLASSES' in checkpoint_meta:
# < mmdet 3.x
classes = checkpoint_meta['CLASSES']
model.dataset_meta = {'classes': classes}
else:
warnings.warn(
'dataset_meta or class names are not saved in the '
'checkpoint\'s meta data, use COCO classes by default.')
model.dataset_meta = {'classes': get_classes('coco')}
else:
warnings.warn('Checkpoint is not loaded, and the inference '
'result is calculated by the randomly initialized '
'model!')
warnings.warn('weights is None, use COCO classes by default.')
model.dataset_meta = {'classes': get_classes('coco')}
# Priority: args.palette -> config -> checkpoint
if self.palette != 'none':
model.dataset_meta['palette'] = self.palette
else:
test_dataset_cfg = copy.deepcopy(cfg.test_dataloader.dataset)
# lazy init. We only need the metainfo.
test_dataset_cfg['lazy_init'] = True
metainfo = DATASETS.build(test_dataset_cfg).metainfo
cfg_palette = metainfo.get('palette', None)
if cfg_palette is not None:
model.dataset_meta['palette'] = cfg_palette
else:
if 'palette' not in model.dataset_meta:
warnings.warn(
'palette does not exist, random is used by default. '
'You can also set the palette to customize.')
model.dataset_meta['palette'] = 'random'
def _init_pipeline(self, cfg: ConfigType) -> Compose:
"""Initialize the test pipeline."""
pipeline_cfg = cfg.test_dataloader.dataset.pipeline
# For inference, the key of ``img_id`` is not used.
if 'meta_keys' in pipeline_cfg[-1]:
pipeline_cfg[-1]['meta_keys'] = tuple(
meta_key for meta_key in pipeline_cfg[-1]['meta_keys']
if meta_key != 'img_id')
load_img_idx = self._get_transform_idx(
pipeline_cfg, ('LoadImageFromFile', LoadImageFromFile))
if load_img_idx == -1:
raise ValueError(
'LoadImageFromFile is not found in the test pipeline')
pipeline_cfg[load_img_idx]['type'] = 'mmdet.InferencerLoader'
return Compose(pipeline_cfg)
def _get_transform_idx(self, pipeline_cfg: ConfigType,
name: Union[str, Tuple[str, type]]) -> int:
"""Returns the index of the transform in a pipeline.
If the transform is not found, returns -1.
"""
for i, transform in enumerate(pipeline_cfg):
if transform['type'] in name:
return i
return -1
def _init_visualizer(self, cfg: ConfigType) -> Optional[Visualizer]:
"""Initialize visualizers.
Args:
cfg (ConfigType): Config containing the visualizer information.
Returns:
Visualizer or None: Visualizer initialized with config.
"""
visualizer = super()._init_visualizer(cfg)
visualizer.dataset_meta = self.model.dataset_meta
return visualizer
def _inputs_to_list(self, inputs: InputsType) -> list:
"""Preprocess the inputs to a list.
Preprocess inputs to a list according to its type:
- list or tuple: return inputs
- str:
- Directory path: return all files in the directory
- other cases: return a list containing the string. The string
could be a path to file, a url or other types of string according
to the task.
Args:
inputs (InputsType): Inputs for the inferencer.
Returns:
list: List of input for the :meth:`preprocess`.
"""
if isinstance(inputs, str):
backend = get_file_backend(inputs)
if hasattr(backend, 'isdir') and isdir(inputs):
# Backends like HttpsBackend do not implement `isdir`, so only
# those backends that implement `isdir` could accept the inputs
# as a directory
filename_list = list_dir_or_file(
inputs, list_dir=False, suffix=IMG_EXTENSIONS)
inputs = [
join_path(inputs, filename) for filename in filename_list
]
if not isinstance(inputs, (list, tuple)):
inputs = [inputs]
return list(inputs)
def preprocess(self, inputs: InputsType, batch_size: int = 1, **kwargs):
"""Process the inputs into a model-feedable format.
Customize your preprocess by overriding this method. Preprocess should
return an iterable object, of which each item will be used as the
input of ``model.test_step``.
``BaseInferencer.preprocess`` will return an iterable chunked data,
which will be used in __call__ like this:
.. code-block:: python
def __call__(self, inputs, batch_size=1, **kwargs):
chunked_data = self.preprocess(inputs, batch_size, **kwargs)
for batch in chunked_data:
preds = self.forward(batch, **kwargs)
Args:
inputs (InputsType): Inputs given by user.
batch_size (int): batch size. Defaults to 1.
Yields:
Any: Data processed by the ``pipeline`` and ``collate_fn``.
"""
chunked_data = self._get_chunk_data(inputs, batch_size)
yield from map(self.collate_fn, chunked_data)
def _get_chunk_data(self, inputs: Iterable, chunk_size: int):
"""Get batch data from inputs.
Args:
inputs (Iterable): An iterable dataset.
chunk_size (int): Equivalent to batch size.
Yields:
list: batch data.
"""
inputs_iter = iter(inputs)
while True:
try:
chunk_data = []
for _ in range(chunk_size):
inputs_ = next(inputs_iter)
if isinstance(inputs_, dict):
if 'img' in inputs_:
ori_inputs_ = inputs_['img']
else:
ori_inputs_ = inputs_['img_path']
chunk_data.append(
(ori_inputs_,
self.pipeline(copy.deepcopy(inputs_))))
else:
chunk_data.append((inputs_, self.pipeline(inputs_)))
yield chunk_data
except StopIteration:
if chunk_data:
yield chunk_data
break
# TODO: Video and Webcam are currently not supported and
# may consume too much memory if your input folder has a lot of images.
    # This will be optimized later.
def __call__(
self,
inputs: InputsType,
batch_size: int = 1,
return_vis: bool = False,
show: bool = False,
wait_time: int = 0,
no_save_vis: bool = False,
draw_pred: bool = True,
pred_score_thr: float = 0.3,
return_datasamples: bool = False,
print_result: bool = False,
no_save_pred: bool = True,
out_dir: str = '',
# by open image task
texts: Optional[Union[str, list]] = None,
# by open panoptic task
stuff_texts: Optional[Union[str, list]] = None,
# by GLIP and Grounding DINO
custom_entities: bool = False,
# by Grounding DINO
tokens_positive: Optional[Union[int, list]] = None,
**kwargs) -> dict:
"""Call the inferencer.
Args:
inputs (InputsType): Inputs for the inferencer.
batch_size (int): Inference batch size. Defaults to 1.
show (bool): Whether to display the visualization results in a
popup window. Defaults to False.
wait_time (float): The interval of show (s). Defaults to 0.
no_save_vis (bool): Whether to force not to save prediction
vis results. Defaults to False.
draw_pred (bool): Whether to draw predicted bounding boxes.
Defaults to True.
pred_score_thr (float): Minimum score of bboxes to draw.
Defaults to 0.3.
return_datasamples (bool): Whether to return results as
:obj:`DetDataSample`. Defaults to False.
print_result (bool): Whether to print the inference result w/o
visualization to the console. Defaults to False.
no_save_pred (bool): Whether to force not to save prediction
results. Defaults to True.
out_dir: Dir to save the inference results or
visualization. If left as empty, no file will be saved.
Defaults to ''.
texts (str | list[str]): Text prompts. Defaults to None.
stuff_texts (str | list[str]): Stuff text prompts of open
panoptic task. Defaults to None.
custom_entities (bool): Whether to use custom entities.
Defaults to False. Only used in GLIP and Grounding DINO.
**kwargs: Other keyword arguments passed to :meth:`preprocess`,
:meth:`forward`, :meth:`visualize` and :meth:`postprocess`.
Each key in kwargs should be in the corresponding set of
``preprocess_kwargs``, ``forward_kwargs``, ``visualize_kwargs``
and ``postprocess_kwargs``.
Returns:
dict: Inference and visualization results.
"""
(
preprocess_kwargs,
forward_kwargs,
visualize_kwargs,
postprocess_kwargs,
) = self._dispatch_kwargs(**kwargs)
ori_inputs = self._inputs_to_list(inputs)
if texts is not None and isinstance(texts, str):
texts = [texts] * len(ori_inputs)
if stuff_texts is not None and isinstance(stuff_texts, str):
stuff_texts = [stuff_texts] * len(ori_inputs)
# Currently only supports bs=1
tokens_positive = [tokens_positive] * len(ori_inputs)
if texts is not None:
assert len(texts) == len(ori_inputs)
for i in range(len(texts)):
if isinstance(ori_inputs[i], str):
ori_inputs[i] = {
'text': texts[i],
'img_path': ori_inputs[i],
'custom_entities': custom_entities,
'tokens_positive': tokens_positive[i]
}
else:
ori_inputs[i] = {
'text': texts[i],
'img': ori_inputs[i],
'custom_entities': custom_entities,
'tokens_positive': tokens_positive[i]
}
if stuff_texts is not None:
assert len(stuff_texts) == len(ori_inputs)
for i in range(len(stuff_texts)):
ori_inputs[i]['stuff_text'] = stuff_texts[i]
inputs = self.preprocess(
ori_inputs, batch_size=batch_size, **preprocess_kwargs)
results_dict = {'predictions': [], 'visualization': []}
for ori_imgs, data in (track(inputs, description='Inference')
if self.show_progress else inputs):
preds = self.forward(data, **forward_kwargs)
visualization = self.visualize(
ori_imgs,
preds,
return_vis=return_vis,
show=show,
wait_time=wait_time,
draw_pred=draw_pred,
pred_score_thr=pred_score_thr,
no_save_vis=no_save_vis,
img_out_dir=out_dir,
**visualize_kwargs)
results = self.postprocess(
preds,
visualization,
return_datasamples=return_datasamples,
print_result=print_result,
no_save_pred=no_save_pred,
pred_out_dir=out_dir,
**postprocess_kwargs)
results_dict['predictions'].extend(results['predictions'])
if results['visualization'] is not None:
results_dict['visualization'].extend(results['visualization'])
return results_dict
def visualize(self,
inputs: InputsType,
preds: PredType,
return_vis: bool = False,
show: bool = False,
wait_time: int = 0,
draw_pred: bool = True,
pred_score_thr: float = 0.3,
no_save_vis: bool = False,
img_out_dir: str = '',
**kwargs) -> Union[List[np.ndarray], None]:
"""Visualize predictions.
Args:
inputs (List[Union[str, np.ndarray]]): Inputs for the inferencer.
preds (List[:obj:`DetDataSample`]): Predictions of the model.
return_vis (bool): Whether to return the visualization result.
Defaults to False.
show (bool): Whether to display the image in a popup window.
Defaults to False.
wait_time (float): The interval of show (s). Defaults to 0.
draw_pred (bool): Whether to draw predicted bounding boxes.
Defaults to True.
pred_score_thr (float): Minimum score of bboxes to draw.
Defaults to 0.3.
no_save_vis (bool): Whether to force not to save prediction
vis results. Defaults to False.
img_out_dir (str): Output directory of visualization results.
If left as empty, no file will be saved. Defaults to ''.
Returns:
List[np.ndarray] or None: Returns visualization results only if
applicable.
"""
if no_save_vis is True:
img_out_dir = ''
if not show and img_out_dir == '' and not return_vis:
return None
if self.visualizer is None:
            raise ValueError('Visualization needs the "visualizer" term '
                             'defined in the config, but got None.')
results = []
for single_input, pred in zip(inputs, preds):
if isinstance(single_input, str):
img_bytes = mmengine.fileio.get(single_input)
img = mmcv.imfrombytes(img_bytes)
img = img[:, :, ::-1]
img_name = osp.basename(single_input)
elif isinstance(single_input, np.ndarray):
img = single_input.copy()
img_num = str(self.num_visualized_imgs).zfill(8)
img_name = f'{img_num}.jpg'
else:
raise ValueError('Unsupported input type: '
f'{type(single_input)}')
out_file = osp.join(img_out_dir, 'vis',
img_name) if img_out_dir != '' else None
self.visualizer.add_datasample(
img_name,
img,
pred,
show=show,
wait_time=wait_time,
draw_gt=False,
draw_pred=draw_pred,
pred_score_thr=pred_score_thr,
out_file=out_file,
)
results.append(self.visualizer.get_image())
self.num_visualized_imgs += 1
return results
def postprocess(
self,
preds: PredType,
visualization: Optional[List[np.ndarray]] = None,
return_datasamples: bool = False,
print_result: bool = False,
no_save_pred: bool = False,
pred_out_dir: str = '',
**kwargs,
) -> Dict:
"""Process the predictions and visualization results from ``forward``
and ``visualize``.
This method should be responsible for the following tasks:
1. Convert datasamples into a json-serializable dict if needed.
2. Pack the predictions and visualization results and return them.
3. Dump or log the predictions.
Args:
preds (List[:obj:`DetDataSample`]): Predictions of the model.
visualization (Optional[np.ndarray]): Visualized predictions.
return_datasamples (bool): Whether to use Datasample to store
inference results. If False, dict will be used.
print_result (bool): Whether to print the inference result w/o
visualization to the console. Defaults to False.
no_save_pred (bool): Whether to force not to save prediction
results. Defaults to False.
pred_out_dir: Dir to save the inference results w/o
visualization. If left as empty, no file will be saved.
Defaults to ''.
Returns:
dict: Inference and visualization results with key ``predictions``
and ``visualization``.
- ``visualization`` (Any): Returned by :meth:`visualize`.
- ``predictions`` (dict or DataSample): Returned by
:meth:`forward` and processed in :meth:`postprocess`.
If ``return_datasamples=False``, it usually should be a
json-serializable dict containing only basic data elements such
as strings and numbers.
"""
if no_save_pred is True:
pred_out_dir = ''
result_dict = {}
results = preds
if not return_datasamples:
results = []
for pred in preds:
result = self.pred2dict(pred, pred_out_dir)
results.append(result)
elif pred_out_dir != '':
warnings.warn('Currently does not support saving datasample '
'when return_datasamples is set to True. '
'Prediction results are not saved!')
# Add img to the results after printing and dumping
result_dict['predictions'] = results
if print_result:
print(result_dict)
result_dict['visualization'] = visualization
return result_dict
# TODO: The data format and fields saved in json need further discussion.
# Maybe should include model name, timestamp, filename, image info etc.
def pred2dict(self,
data_sample: DetDataSample,
pred_out_dir: str = '') -> Dict:
"""Extract elements necessary to represent a prediction into a
dictionary.
It's better to contain only basic data elements such as strings and
numbers in order to guarantee it's json-serializable.
Args:
data_sample (:obj:`DetDataSample`): Predictions of the model.
pred_out_dir: Dir to save the inference results w/o
visualization. If left as empty, no file will be saved.
Defaults to ''.
Returns:
dict: Prediction results.
"""
is_save_pred = True
if pred_out_dir == '':
is_save_pred = False
if is_save_pred and 'img_path' in data_sample:
img_path = osp.basename(data_sample.img_path)
img_path = osp.splitext(img_path)[0]
out_img_path = osp.join(pred_out_dir, 'preds',
img_path + '_panoptic_seg.png')
out_json_path = osp.join(pred_out_dir, 'preds', img_path + '.json')
elif is_save_pred:
out_img_path = osp.join(
pred_out_dir, 'preds',
f'{self.num_predicted_imgs}_panoptic_seg.png')
out_json_path = osp.join(pred_out_dir, 'preds',
f'{self.num_predicted_imgs}.json')
self.num_predicted_imgs += 1
result = {}
if 'pred_instances' in data_sample:
masks = data_sample.pred_instances.get('masks')
pred_instances = data_sample.pred_instances.numpy()
result = {
'labels': pred_instances.labels.tolist(),
'scores': pred_instances.scores.tolist()
}
if 'bboxes' in pred_instances:
result['bboxes'] = pred_instances.bboxes.tolist()
if masks is not None:
if 'bboxes' not in pred_instances or pred_instances.bboxes.sum(
) == 0:
# Fake bbox, such as the SOLO.
bboxes = mask2bbox(masks.cpu()).numpy().tolist()
result['bboxes'] = bboxes
encode_masks = encode_mask_results(pred_instances.masks)
for encode_mask in encode_masks:
if isinstance(encode_mask['counts'], bytes):
encode_mask['counts'] = encode_mask['counts'].decode()
result['masks'] = encode_masks
if 'pred_panoptic_seg' in data_sample:
if VOID is None:
raise RuntimeError(
'panopticapi is not installed, please install it by: '
'pip install git+https://github.com/cocodataset/'
'panopticapi.git.')
pan = data_sample.pred_panoptic_seg.sem_seg.cpu().numpy()[0]
pan[pan % INSTANCE_OFFSET == len(
self.model.dataset_meta['classes'])] = VOID
pan = id2rgb(pan).astype(np.uint8)
if is_save_pred:
mmcv.imwrite(pan[:, :, ::-1], out_img_path)
result['panoptic_seg_path'] = out_img_path
else:
result['panoptic_seg'] = pan
if is_save_pred:
mmengine.dump(result, out_json_path)
return result
# Copyright (c) OpenMMLab. All rights reserved.
import copy
import warnings
from pathlib import Path
from typing import Optional, Sequence, Union
import numpy as np
import torch
import torch.nn as nn
from mmcv.ops import RoIPool
from mmcv.transforms import Compose
from mmengine.config import Config
from mmengine.dataset import default_collate
from mmengine.model.utils import revert_sync_batchnorm
from mmengine.registry import init_default_scope
from mmengine.runner import load_checkpoint
from mmdet.registry import DATASETS
from mmdet.utils import ConfigType
from ..evaluation import get_classes
from ..registry import MODELS
from ..structures import DetDataSample, SampleList
from ..utils import get_test_pipeline_cfg
def init_detector(
config: Union[str, Path, Config],
checkpoint: Optional[str] = None,
palette: str = 'none',
device: str = 'cuda:0',
cfg_options: Optional[dict] = None,
) -> nn.Module:
"""Initialize a detector from config file.
Args:
config (str, :obj:`Path`, or :obj:`mmengine.Config`): Config file path,
:obj:`Path`, or the config object.
checkpoint (str, optional): Checkpoint path. If left as None, the model
will not load any weights.
palette (str): Color palette used for visualization. If palette
is stored in checkpoint, use checkpoint's palette first, otherwise
use externally passed palette. Currently, supports 'coco', 'voc',
'citys' and 'random'. Defaults to none.
device (str): The device where the anchors will be put on.
Defaults to cuda:0.
cfg_options (dict, optional): Options to override some settings in
the used config.
Returns:
nn.Module: The constructed detector.
"""
if isinstance(config, (str, Path)):
config = Config.fromfile(config)
elif not isinstance(config, Config):
raise TypeError('config must be a filename or Config object, '
f'but got {type(config)}')
if cfg_options is not None:
config.merge_from_dict(cfg_options)
elif 'init_cfg' in config.model.backbone:
config.model.backbone.init_cfg = None
scope = config.get('default_scope', 'mmdet')
if scope is not None:
init_default_scope(config.get('default_scope', 'mmdet'))
model = MODELS.build(config.model)
model = revert_sync_batchnorm(model)
if checkpoint is None:
warnings.simplefilter('once')
warnings.warn('checkpoint is None, use COCO classes by default.')
model.dataset_meta = {'classes': get_classes('coco')}
else:
checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
# Weights converted from elsewhere may not have meta fields.
checkpoint_meta = checkpoint.get('meta', {})
# save the dataset_meta in the model for convenience
if 'dataset_meta' in checkpoint_meta:
# mmdet 3.x, all keys should be lowercase
model.dataset_meta = {
k.lower(): v
for k, v in checkpoint_meta['dataset_meta'].items()
}
elif 'CLASSES' in checkpoint_meta:
# < mmdet 3.x
classes = checkpoint_meta['CLASSES']
model.dataset_meta = {'classes': classes}
else:
warnings.simplefilter('once')
warnings.warn(
'dataset_meta or class names are not saved in the '
'checkpoint\'s meta data, use COCO classes by default.')
model.dataset_meta = {'classes': get_classes('coco')}
# Priority: args.palette -> config -> checkpoint
if palette != 'none':
model.dataset_meta['palette'] = palette
else:
test_dataset_cfg = copy.deepcopy(config.test_dataloader.dataset)
# lazy init. We only need the metainfo.
test_dataset_cfg['lazy_init'] = True
metainfo = DATASETS.build(test_dataset_cfg).metainfo
cfg_palette = metainfo.get('palette', None)
if cfg_palette is not None:
model.dataset_meta['palette'] = cfg_palette
else:
if 'palette' not in model.dataset_meta:
warnings.warn(
'palette does not exist, random is used by default. '
'You can also set the palette to customize.')
model.dataset_meta['palette'] = 'random'
model.cfg = config # save the config in the model for convenience
model.to(device)
model.eval()
return model
ImagesType = Union[str, np.ndarray, Sequence[str], Sequence[np.ndarray]]
def inference_detector(
model: nn.Module,
imgs: ImagesType,
test_pipeline: Optional[Compose] = None,
text_prompt: Optional[str] = None,
custom_entities: bool = False,
) -> Union[DetDataSample, SampleList]:
"""Inference image(s) with the detector.
Args:
model (nn.Module): The loaded detector.
imgs (str, ndarray, Sequence[str/ndarray]):
Either image files or loaded images.
test_pipeline (:obj:`Compose`): Test pipeline.
Returns:
:obj:`DetDataSample` or list[:obj:`DetDataSample`]:
If imgs is a list or tuple, the same length list type results
will be returned, otherwise return the detection results directly.
"""
if isinstance(imgs, (list, tuple)):
is_batch = True
else:
imgs = [imgs]
is_batch = False
cfg = model.cfg
if test_pipeline is None:
cfg = cfg.copy()
test_pipeline = get_test_pipeline_cfg(cfg)
if isinstance(imgs[0], np.ndarray):
# Calling this method across libraries will result
# in module unregistered error if not prefixed with mmdet.
test_pipeline[0].type = 'mmdet.LoadImageFromNDArray'
test_pipeline = Compose(test_pipeline)
if model.data_preprocessor.device.type == 'cpu':
for m in model.modules():
assert not isinstance(
m, RoIPool
), 'CPU inference with RoIPool is not supported currently.'
result_list = []
for i, img in enumerate(imgs):
# prepare data
if isinstance(img, np.ndarray):
# TODO: remove img_id.
data_ = dict(img=img, img_id=0)
else:
# TODO: remove img_id.
data_ = dict(img_path=img, img_id=0)
if text_prompt:
data_['text'] = text_prompt
data_['custom_entities'] = custom_entities
# build the data pipeline
data_ = test_pipeline(data_)
data_['inputs'] = [data_['inputs']]
data_['data_samples'] = [data_['data_samples']]
# forward the model
with torch.no_grad():
results = model.test_step(data_)[0]
result_list.append(results)
if not is_batch:
return result_list[0]
else:
return result_list
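# A hedged usage sketch for the high-level APIs above (the config, checkpoint,
# and image paths are illustrative placeholders, not files in this commit):
#   from mmdet.apis import init_detector, inference_detector
#   model = init_detector('configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py',
#                         'checkpoints/rtmdet_tiny.pth', device='cuda:0')
#   result = inference_detector(model, 'demo.jpg')  # returns a DetDataSample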
# TODO: Awaiting refactoring
async def async_inference_detector(model, imgs):
"""Async inference image(s) with the detector.
Args:
model (nn.Module): The loaded detector.
img (str | ndarray): Either image files or loaded images.
Returns:
Awaitable detection results.
"""
if not isinstance(imgs, (list, tuple)):
imgs = [imgs]
cfg = model.cfg
if isinstance(imgs[0], np.ndarray):
cfg = cfg.copy()
# set loading pipeline type
cfg.data.test.pipeline[0].type = 'LoadImageFromNDArray'
# cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
test_pipeline = Compose(cfg.data.test.pipeline)
datas = []
for img in imgs:
# prepare data
if isinstance(img, np.ndarray):
# directly add img
data = dict(img=img)
else:
# add information into dict
data = dict(img_info=dict(filename=img), img_prefix=None)
# build the data pipeline
data = test_pipeline(data)
datas.append(data)
for m in model.modules():
assert not isinstance(
m,
RoIPool), 'CPU inference with RoIPool is not supported currently.'
# We don't restore `torch.is_grad_enabled()` value during concurrent
# inference since execution can overlap
torch.set_grad_enabled(False)
results = await model.aforward_test(data, rescale=True)
return results
def build_test_pipeline(cfg: ConfigType) -> ConfigType:
"""Build test_pipeline for mot/vis demo. In mot/vis infer, original
test_pipeline should remove the "LoadImageFromFile" and
"LoadTrackAnnotations".
Args:
cfg (ConfigDict): The loaded config.
Returns:
ConfigType: new test_pipeline
"""
# remove the "LoadImageFromFile" and "LoadTrackAnnotations" in pipeline
transform_broadcaster = cfg.test_dataloader.dataset.pipeline[0].copy()
for transform in transform_broadcaster['transforms']:
if transform['type'] == 'Resize':
transform_broadcaster['transforms'] = transform
pack_track_inputs = cfg.test_dataloader.dataset.pipeline[-1].copy()
test_pipeline = Compose([transform_broadcaster, pack_track_inputs])
return test_pipeline
def inference_mot(model: nn.Module, img: np.ndarray, frame_id: int,
video_len: int) -> SampleList:
"""Inference image(s) with the mot model.
Args:
model (nn.Module): The loaded mot model.
img (np.ndarray): Loaded image.
frame_id (int): frame id.
video_len (int): demo video length
Returns:
SampleList: The tracking data samples.
"""
cfg = model.cfg
data = dict(
img=[img.astype(np.float32)],
frame_id=[frame_id],
ori_shape=[img.shape[:2]],
img_id=[frame_id + 1],
ori_video_length=[video_len])
test_pipeline = build_test_pipeline(cfg)
data = test_pipeline(data)
if not next(model.parameters()).is_cuda:
for m in model.modules():
assert not isinstance(
m, RoIPool
), 'CPU inference with RoIPool is not supported currently.'
# forward the model
with torch.no_grad():
data = default_collate([data])
result = model.test_step(data)[0]
return result
def init_track_model(config: Union[str, Config],
checkpoint: Optional[str] = None,
detector: Optional[str] = None,
reid: Optional[str] = None,
device: str = 'cuda:0',
cfg_options: Optional[dict] = None) -> nn.Module:
"""Initialize a model from config file.
Args:
config (str or :obj:`mmengine.Config`): Config file path or the config
object.
checkpoint (Optional[str], optional): Checkpoint path. Defaults to
None.
detector (Optional[str], optional): Detector Checkpoint path, use in
some tracking algorithms like sort. Defaults to None.
reid (Optional[str], optional): Reid checkpoint path. use in
some tracking algorithms like sort. Defaults to None.
device (str, optional): The device that the model inferences on.
Defaults to `cuda:0`.
cfg_options (Optional[dict], optional): Options to override some
settings in the used config. Defaults to None.
Returns:
nn.Module: The constructed model.
"""
if isinstance(config, str):
config = Config.fromfile(config)
elif not isinstance(config, Config):
raise TypeError('config must be a filename or Config object, '
f'but got {type(config)}')
if cfg_options is not None:
config.merge_from_dict(cfg_options)
model = MODELS.build(config.model)
if checkpoint is not None:
checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
# Weights converted from elsewhere may not have meta fields.
checkpoint_meta = checkpoint.get('meta', {})
# save the dataset_meta in the model for convenience
if 'dataset_meta' in checkpoint_meta:
if 'CLASSES' in checkpoint_meta['dataset_meta']:
value = checkpoint_meta['dataset_meta'].pop('CLASSES')
checkpoint_meta['dataset_meta']['classes'] = value
model.dataset_meta = checkpoint_meta['dataset_meta']
if detector is not None:
assert not (checkpoint and detector), \
'Error: checkpoint and detector checkpoint cannot both exist'
load_checkpoint(model.detector, detector, map_location='cpu')
if reid is not None:
assert not (checkpoint and reid), \
'Error: checkpoint and reid checkpoint cannot both exist'
load_checkpoint(model.reid, reid, map_location='cpu')
# Some methods don't load checkpoints or checkpoints don't contain
# 'dataset_meta'
# VIS need dataset_meta, MOT don't need dataset_meta
if not hasattr(model, 'dataset_meta'):
warnings.warn('dataset_meta or class names are missed, '
'use None by default.')
model.dataset_meta = {'classes': None}
model.cfg = config # save the config in the model for convenience
model.to(device)
model.eval()
return model
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms import LoadImageFromFile
from mmengine.dataset.sampler import DefaultSampler
from mmdet.datasets import AspectRatioBatchSampler, CocoDataset
from mmdet.datasets.transforms import (LoadAnnotations, PackDetInputs,
RandomFlip, Resize)
from mmdet.evaluation import CocoMetric
# dataset settings
dataset_type = CocoDataset
data_root = 'data/coco/'
# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/coco/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
backend_args = None
train_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadAnnotations, with_bbox=True),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=RandomFlip, prob=0.5),
dict(type=PackDetInputs)
]
test_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type=LoadAnnotations, with_bbox=True),
dict(
type=PackDetInputs,
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=DefaultSampler, shuffle=True),
batch_sampler=dict(type=AspectRatioBatchSampler),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type=DefaultSampler, shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args))
test_dataloader = val_dataloader
val_evaluator = dict(
type=CocoMetric,
ann_file=data_root + 'annotations/instances_val2017.json',
metric='bbox',
format_only=False,
backend_args=backend_args)
test_evaluator = val_evaluator
# inference on test dataset and
# format the output results for submission.
# test_dataloader = dict(
# batch_size=1,
# num_workers=2,
# persistent_workers=True,
# drop_last=False,
# sampler=dict(type=DefaultSampler, shuffle=False),
# dataset=dict(
# type=dataset_type,
# data_root=data_root,
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# data_prefix=dict(img='test2017/'),
# test_mode=True,
# pipeline=test_pipeline))
# test_evaluator = dict(
# type=CocoMetric,
# metric='bbox',
# format_only=True,
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# outfile_prefix='./work_dirs/coco_detection/test')
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms.loading import LoadImageFromFile
from mmengine.dataset.sampler import DefaultSampler
from mmdet.datasets.coco import CocoDataset
from mmdet.datasets.samplers.batch_sampler import AspectRatioBatchSampler
from mmdet.datasets.transforms.formatting import PackDetInputs
from mmdet.datasets.transforms.loading import LoadAnnotations
from mmdet.datasets.transforms.transforms import RandomFlip, Resize
from mmdet.evaluation.metrics.coco_metric import CocoMetric
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/coco/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
backend_args = None
train_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadAnnotations, with_bbox=True, with_mask=True),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=RandomFlip, prob=0.5),
dict(type=PackDetInputs)
]
test_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type=LoadAnnotations, with_bbox=True, with_mask=True),
dict(
type=PackDetInputs,
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=DefaultSampler, shuffle=True),
batch_sampler=dict(type=AspectRatioBatchSampler),
dataset=dict(
type=CocoDataset,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type=DefaultSampler, shuffle=False),
dataset=dict(
type=CocoDataset,
data_root=data_root,
ann_file='annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args))
test_dataloader = val_dataloader
val_evaluator = dict(
type=CocoMetric,
ann_file=data_root + 'annotations/instances_val2017.json',
metric=['bbox', 'segm'],
format_only=False,
backend_args=backend_args)
test_evaluator = val_evaluator
# inference on test dataset and
# format the output results for submission.
# test_dataloader = dict(
# batch_size=1,
# num_workers=2,
# persistent_workers=True,
# drop_last=False,
# sampler=dict(type=DefaultSampler, shuffle=False),
# dataset=dict(
# type=CocoDataset,
# data_root=data_root,
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# data_prefix=dict(img='test2017/'),
# test_mode=True,
# pipeline=test_pipeline))
# test_evaluator = dict(
# type=CocoMetric,
# metric=['bbox', 'segm'],
# format_only=True,
# ann_file=data_root + 'annotations/image_info_test-dev2017.json',
# outfile_prefix='./work_dirs/coco_instance/test')
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms.loading import LoadImageFromFile
from mmengine.dataset.sampler import DefaultSampler
from mmdet.datasets.coco import CocoDataset
from mmdet.datasets.samplers.batch_sampler import AspectRatioBatchSampler
from mmdet.datasets.transforms.formatting import PackDetInputs
from mmdet.datasets.transforms.loading import LoadAnnotations
from mmdet.datasets.transforms.transforms import RandomFlip, Resize
from mmdet.evaluation.metrics.coco_metric import CocoMetric
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/coco/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
backend_args = None
train_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadAnnotations, with_bbox=True, with_mask=True, with_seg=True),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=RandomFlip, prob=0.5),
dict(type=PackDetInputs)
]
test_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
# If you don't have a gt annotation, delete the pipeline
dict(type=LoadAnnotations, with_bbox=True, with_mask=True, with_seg=True),
dict(
type=PackDetInputs,
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=DefaultSampler, shuffle=True),
batch_sampler=dict(type=AspectRatioBatchSampler),
dataset=dict(
type=CocoDataset,
data_root=data_root,
ann_file='annotations/instances_train2017.json',
data_prefix=dict(img='train2017/', seg='stuffthingmaps/train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type=DefaultSampler, shuffle=False),
dataset=dict(
type=CocoDataset,
data_root=data_root,
ann_file='annotations/instances_val2017.json',
data_prefix=dict(img='val2017/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args))
test_dataloader = val_dataloader
val_evaluator = dict(
type=CocoMetric,
ann_file=data_root + 'annotations/instances_val2017.json',
metric=['bbox', 'segm'],
format_only=False,
backend_args=backend_args)
test_evaluator = val_evaluator
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms.loading import LoadImageFromFile
from mmengine.dataset.sampler import DefaultSampler
from mmdet.datasets.coco_panoptic import CocoPanopticDataset
from mmdet.datasets.samplers.batch_sampler import AspectRatioBatchSampler
from mmdet.datasets.transforms.formatting import PackDetInputs
from mmdet.datasets.transforms.loading import LoadPanopticAnnotations
from mmdet.datasets.transforms.transforms import RandomFlip, Resize
from mmdet.evaluation.metrics.coco_panoptic_metric import CocoPanopticMetric
# dataset settings
dataset_type = 'CocoPanopticDataset'
data_root = 'data/coco/'
# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)
# data_root = 's3://openmmlab/datasets/detection/coco/'
# Method 2: Use `backend_args`, `file_client_args` in versions before 3.0.0rc6
# backend_args = dict(
# backend='petrel',
# path_mapping=dict({
# './data/': 's3://openmmlab/datasets/detection/',
# 'data/': 's3://openmmlab/datasets/detection/'
# }))
backend_args = None
train_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadPanopticAnnotations, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=RandomFlip, prob=0.5),
dict(type=PackDetInputs)
]
test_pipeline = [
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=(1333, 800), keep_ratio=True),
dict(type=LoadPanopticAnnotations, backend_args=backend_args),
dict(
type=PackDetInputs,
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=DefaultSampler, shuffle=True),
batch_sampler=dict(type=AspectRatioBatchSampler),
dataset=dict(
type=CocoPanopticDataset,
data_root=data_root,
ann_file='annotations/panoptic_train2017.json',
data_prefix=dict(
img='train2017/', seg='annotations/panoptic_train2017/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=backend_args))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type=DefaultSampler, shuffle=False),
dataset=dict(
type=CocoPanopticDataset,
data_root=data_root,
ann_file='annotations/panoptic_val2017.json',
data_prefix=dict(img='val2017/', seg='annotations/panoptic_val2017/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=backend_args))
test_dataloader = val_dataloader
val_evaluator = dict(
type=CocoPanopticMetric,
ann_file=data_root + 'annotations/panoptic_val2017.json',
seg_prefix=data_root + 'annotations/panoptic_val2017/',
backend_args=backend_args)
test_evaluator = val_evaluator
# inference on test dataset and
# format the output results for submission.
# test_dataloader = dict(
# batch_size=1,
# num_workers=1,
# persistent_workers=True,
# drop_last=False,
# sampler=dict(type=DefaultSampler, shuffle=False),
# dataset=dict(
# type=CocoPanopticDataset,
# data_root=data_root,
# ann_file='annotations/panoptic_image_info_test-dev2017.json',
# data_prefix=dict(img='test2017/'),
# test_mode=True,
# pipeline=test_pipeline))
# test_evaluator = dict(
# type=CocoPanopticMetric,
# format_only=True,
# ann_file=data_root + 'annotations/panoptic_image_info_test-dev2017.json',
# outfile_prefix='./work_dirs/coco_panoptic/test')
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.transforms import (LoadImageFromFile, RandomResize,
TransformBroadcaster)
from mmdet.datasets import MOTChallengeDataset
from mmdet.datasets.samplers import TrackImgSampler
from mmdet.datasets.transforms import (LoadTrackAnnotations, PackTrackInputs,
PhotoMetricDistortion, RandomCrop,
RandomFlip, Resize,
UniformRefFrameSample)
from mmdet.evaluation import MOTChallengeMetric
# dataset settings
dataset_type = MOTChallengeDataset
data_root = 'data/MOT17/'
img_scale = (1088, 1088)
backend_args = None
# data pipeline
train_pipeline = [
dict(
type=UniformRefFrameSample,
num_ref_imgs=1,
frame_range=10,
filter_key_img=True),
dict(
type=TransformBroadcaster,
share_random_params=True,
transforms=[
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=LoadTrackAnnotations),
dict(
type=RandomResize,
scale=img_scale,
ratio_range=(0.8, 1.2),
keep_ratio=True,
clip_object_border=False),
dict(type=PhotoMetricDistortion)
]),
dict(
type=TransformBroadcaster,
# different cropped positions for different frames
share_random_params=False,
transforms=[
dict(type=RandomCrop, crop_size=img_scale, bbox_clip_border=False)
]),
dict(
type=TransformBroadcaster,
share_random_params=True,
transforms=[
dict(type=RandomFlip, prob=0.5),
]),
dict(type=PackTrackInputs)
]
test_pipeline = [
dict(
type=TransformBroadcaster,
transforms=[
dict(type=LoadImageFromFile, backend_args=backend_args),
dict(type=Resize, scale=img_scale, keep_ratio=True),
dict(type=LoadTrackAnnotations)
]),
dict(type=PackTrackInputs)
]
# dataloader
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type=TrackImgSampler), # image-based sampling
dataset=dict(
type=dataset_type,
data_root=data_root,
visibility_thr=-1,
ann_file='annotations/half-train_cocoformat.json',
data_prefix=dict(img_path='train'),
metainfo=dict(classes=('pedestrian', )),
pipeline=train_pipeline))
val_dataloader = dict(
batch_size=1,
num_workers=2,
persistent_workers=True,
# Now we support two ways to test, image_based and video_based
# if you want to use video_based sampling, you can use as follows
# sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
sampler=dict(type=TrackImgSampler), # image-based sampling
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='annotations/half-val_cocoformat.json',
data_prefix=dict(img_path='train'),
test_mode=True,
pipeline=test_pipeline))
test_dataloader = val_dataloader
# evaluator
val_evaluator = dict(
type=MOTChallengeMetric, metric=['HOTA', 'CLEAR', 'Identity'])
test_evaluator = val_evaluator
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
LoggerHook, ParamSchedulerHook)
from mmengine.runner import LogProcessor
from mmengine.visualization import LocalVisBackend
from mmdet.engine.hooks import DetVisualizationHook
from mmdet.visualization import DetLocalVisualizer
default_scope = None
default_hooks = dict(
timer=dict(type=IterTimerHook),
logger=dict(type=LoggerHook, interval=50),
param_scheduler=dict(type=ParamSchedulerHook),
checkpoint=dict(type=CheckpointHook, interval=1),
sampler_seed=dict(type=DistSamplerSeedHook),
visualization=dict(type=DetVisualizationHook))
env_cfg = dict(
cudnn_benchmark=False,
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
dist_cfg=dict(backend='nccl'),
)
vis_backends = [dict(type=LocalVisBackend)]
visualizer = dict(
type=DetLocalVisualizer, vis_backends=vis_backends, name='visualizer')
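# A sketch of how to additionally log to TensorBoard with this new-style
# config, assuming mmengine provides TensorboardVisBackend (uncomment to use):
# from mmengine.visualization import TensorboardVisBackend
# vis_backends = [dict(type=LocalVisBackend), dict(type=TensorboardVisBackend)]
# visualizer = dict(
#     type=DetLocalVisualizer, vis_backends=vis_backends, name='visualizer')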
log_processor = dict(type=LogProcessor, window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = None
resume = False
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.cascade_rcnn import CascadeRCNN
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import SmoothL1Loss
from mmdet.models.necks.fpn import FPN
from mmdet.models.roi_heads.bbox_heads.convfc_bbox_head import \
Shared2FCBBoxHead
from mmdet.models.roi_heads.cascade_roi_head import CascadeRoIHead
from mmdet.models.roi_heads.mask_heads.fcn_mask_head import FCNMaskHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
model = dict(
type=CascadeRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_mask=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type=FPN,
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type=RPNHead,
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type=AnchorGenerator,
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0 / 9.0, loss_weight=1.0)),
roi_head=dict(
type=CascadeRoIHead,
num_stages=3,
stage_loss_weights=[1, 0.5, 0.25],
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=[
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0)),
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0)),
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.033, 0.033, 0.067, 0.067]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0))
],
mask_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=14, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
mask_head=dict(
type=FCNMaskHead,
num_convs=4,
in_channels=256,
conv_out_channels=256,
num_classes=80,
loss_mask=dict(
type=CrossEntropyLoss, use_mask=True, loss_weight=1.0))),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=2000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=[
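            # One training config per cascade stage; only the assigner IoU
            # thresholds differ (0.5 -> 0.6 -> 0.7).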
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.6,
neg_iou_thr=0.6,
min_pos_iou=0.6,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.7,
min_pos_iou=0.7,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False)
]),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100,
mask_thr_binary=0.5)))
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.cascade_rcnn import CascadeRCNN
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import SmoothL1Loss
from mmdet.models.necks.fpn import FPN
from mmdet.models.roi_heads.bbox_heads.convfc_bbox_head import \
Shared2FCBBoxHead
from mmdet.models.roi_heads.cascade_roi_head import CascadeRoIHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
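# Cascade R-CNN, ResNet-50 + FPN: the same three-stage cascade design as above
# but without the mask branch (box detection only).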
model = dict(
type=CascadeRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type=FPN,
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type=RPNHead,
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type=AnchorGenerator,
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0 / 9.0, loss_weight=1.0)),
roi_head=dict(
type=CascadeRoIHead,
num_stages=3,
stage_loss_weights=[1, 0.5, 0.25],
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=[
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0)),
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0)),
dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.033, 0.033, 0.067, 0.067]),
reg_class_agnostic=True,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=SmoothL1Loss, beta=1.0, loss_weight=1.0))
]),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=2000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.6,
neg_iou_thr=0.6,
min_pos_iou=0.6,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.7,
min_pos_iou=0.7,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)
]),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100)))
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.faster_rcnn import FasterRCNN
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import L1Loss
from mmdet.models.necks.fpn import FPN
from mmdet.models.roi_heads.bbox_heads.convfc_bbox_head import \
Shared2FCBBoxHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
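# Faster R-CNN, ResNet-50 + FPN: a single StandardRoIHead with a shared
# two-FC bbox head; both the RPN and the RoI head regress boxes with L1 loss.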
model = dict(
type=FasterRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type=FPN,
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type=RPNHead,
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type=AnchorGenerator,
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
roi_head=dict(
type=StandardRoIHead,
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0))),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100)
# soft-nms is also supported for rcnn testing
# e.g., nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05)
))
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from mmengine.model.weight_init import PretrainedInit
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.mask_rcnn import MaskRCNN
from mmdet.models.layers import ResLayer
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import L1Loss
from mmdet.models.roi_heads.bbox_heads.bbox_head import BBoxHead
from mmdet.models.roi_heads.mask_heads.fcn_mask_head import FCNMaskHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
norm_cfg = dict(type=BatchNorm2d, requires_grad=False)
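# Mask R-CNN with a caffe-style ResNet-50-C4 backbone: only the first three
# ResNet stages are used (a single stride-16 feature map, no FPN); the last
# ResNet stage is reused as a shared RoI head on 14x14 RoI features before
# the bbox and mask heads.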
model = dict(
type=MaskRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[103.530, 116.280, 123.675],
std=[1.0, 1.0, 1.0],
bgr_to_rgb=False,
pad_mask=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=3,
strides=(1, 2, 2),
dilations=(1, 1, 1),
out_indices=(2, ),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='caffe',
init_cfg=dict(
type=PretrainedInit,
checkpoint='open-mmlab://detectron2/resnet50_caffe')),
rpn_head=dict(
type=RPNHead,
in_channels=1024,
feat_channels=1024,
anchor_generator=dict(
type=AnchorGenerator,
scales=[2, 4, 8, 16, 32],
ratios=[0.5, 1.0, 2.0],
strides=[16]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
roi_head=dict(
type=StandardRoIHead,
shared_head=dict(
type=ResLayer,
depth=50,
stage=3,
stride=2,
dilation=1,
style='caffe',
norm_cfg=norm_cfg,
norm_eval=True),
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=14, sampling_ratio=0),
out_channels=1024,
featmap_strides=[16]),
bbox_head=dict(
type=BBoxHead,
with_avg_pool=True,
roi_feat_size=7,
in_channels=2048,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
mask_roi_extractor=None,
mask_head=dict(
type=FCNMaskHead,
num_convs=0,
in_channels=2048,
conv_out_channels=256,
num_classes=80,
loss_mask=dict(
type=CrossEntropyLoss, use_mask=True, loss_weight=1.0))),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=12000,
max_per_img=2000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=False,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=14,
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_pre=6000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100,
mask_thr_binary=0.5)))
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import RoIAlign, nms
from mmengine.model.weight_init import PretrainedInit
from torch.nn import BatchNorm2d
from mmdet.models.backbones.resnet import ResNet
from mmdet.models.data_preprocessors.data_preprocessor import \
DetDataPreprocessor
from mmdet.models.dense_heads.rpn_head import RPNHead
from mmdet.models.detectors.mask_rcnn import MaskRCNN
from mmdet.models.losses.cross_entropy_loss import CrossEntropyLoss
from mmdet.models.losses.smooth_l1_loss import L1Loss
from mmdet.models.necks.fpn import FPN
from mmdet.models.roi_heads.bbox_heads.convfc_bbox_head import \
Shared2FCBBoxHead
from mmdet.models.roi_heads.mask_heads.fcn_mask_head import FCNMaskHead
from mmdet.models.roi_heads.roi_extractors.single_level_roi_extractor import \
SingleRoIExtractor
from mmdet.models.roi_heads.standard_roi_head import StandardRoIHead
from mmdet.models.task_modules.assigners.max_iou_assigner import MaxIoUAssigner
from mmdet.models.task_modules.coders.delta_xywh_bbox_coder import \
DeltaXYWHBBoxCoder
from mmdet.models.task_modules.prior_generators.anchor_generator import \
AnchorGenerator
from mmdet.models.task_modules.samplers.random_sampler import RandomSampler
# model settings
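# Mask R-CNN, ResNet-50 + FPN: the standard Faster R-CNN box branch plus a
# parallel mask branch (14x14 RoIAlign followed by a 4-conv FCN mask head).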
model = dict(
type=MaskRCNN,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_mask=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(
type=PretrainedInit, checkpoint='torchvision://resnet50')),
neck=dict(
type=FPN,
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type=RPNHead,
in_channels=256,
feat_channels=256,
anchor_generator=dict(
type=AnchorGenerator,
scales=[8],
ratios=[0.5, 1.0, 2.0],
strides=[4, 8, 16, 32, 64]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
roi_head=dict(
type=StandardRoIHead,
bbox_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=7, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type=Shared2FCBBoxHead,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=80,
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type=CrossEntropyLoss, use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
mask_roi_extractor=dict(
type=SingleRoIExtractor,
roi_layer=dict(type=RoIAlign, output_size=14, sampling_ratio=0),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
mask_head=dict(
type=FCNMaskHead,
num_convs=4,
in_channels=256,
conv_out_channels=256,
num_classes=80,
loss_mask=dict(
type=CrossEntropyLoss, use_mask=True, loss_weight=1.0))),
# model training and testing settings
train_cfg=dict(
rpn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=-1,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=2000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
match_low_quality=True,
ignore_iof_thr=-1),
sampler=dict(
type=RandomSampler,
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_pre=1000,
max_per_img=1000,
nms=dict(type=nms, iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100,
mask_thr_binary=0.5)))
# Copyright (c) OpenMMLab. All rights reserved.
from mmcv.ops import nms
from torch.nn import BatchNorm2d
from mmdet.models import (FPN, DetDataPreprocessor, FocalLoss, L1Loss, ResNet,
RetinaHead, RetinaNet)
from mmdet.models.task_modules import (AnchorGenerator, DeltaXYWHBBoxCoder,
MaxIoUAssigner, PseudoSampler)
# model settings
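# RetinaNet, ResNet-50 + FPN: a single-stage detector whose dense head uses
# Focal Loss; all anchors contribute to training, so only a PseudoSampler is
# needed (no positive/negative sampling).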
model = dict(
type=RetinaNet,
data_preprocessor=dict(
type=DetDataPreprocessor,
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
bgr_to_rgb=True,
pad_size_divisor=32),
backbone=dict(
type=ResNet,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type=BatchNorm2d, requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
neck=dict(
type=FPN,
in_channels=[256, 512, 1024, 2048],
out_channels=256,
start_level=1,
add_extra_convs='on_input',
num_outs=5),
bbox_head=dict(
type=RetinaHead,
num_classes=80,
in_channels=256,
stacked_convs=4,
feat_channels=256,
anchor_generator=dict(
type=AnchorGenerator,
octave_base_scale=4,
scales_per_octave=3,
ratios=[0.5, 1.0, 2.0],
strides=[8, 16, 32, 64, 128]),
bbox_coder=dict(
type=DeltaXYWHBBoxCoder,
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type=FocalLoss,
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0),
loss_bbox=dict(type=L1Loss, loss_weight=1.0)),
# model training and testing settings
train_cfg=dict(
assigner=dict(
type=MaxIoUAssigner,
pos_iou_thr=0.5,
neg_iou_thr=0.4,
min_pos_iou=0,
ignore_iof_thr=-1),
sampler=dict(
type=PseudoSampler), # Focal loss should use PseudoSampler
allowed_border=-1,
pos_weight=-1,
debug=False),
test_cfg=dict(
nms_pre=1000,
min_bbox_size=0,
score_thr=0.05,
nms=dict(type=nms, iou_threshold=0.5),
max_per_img=100))
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.optim.optimizer.optimizer_wrapper import OptimWrapper
from mmengine.optim.scheduler.lr_scheduler import LinearLR, MultiStepLR
from mmengine.runner.loops import EpochBasedTrainLoop, TestLoop, ValLoop
from torch.optim.sgd import SGD
# training schedule for 1x
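# 1x schedule: 12 epochs with a 500-iteration linear warm-up; the LR is
# decayed by 10x after epochs 8 and 11 (SGD, lr=0.02, see auto_scale_lr below
# for the assumed total batch size of 16).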
train_cfg = dict(type=EpochBasedTrainLoop, max_epochs=12, val_interval=1)
val_cfg = dict(type=ValLoop)
test_cfg = dict(type=TestLoop)
# learning rate
param_scheduler = [
dict(type=LinearLR, start_factor=0.001, by_epoch=False, begin=0, end=500),
dict(
type=MultiStepLR,
begin=0,
end=12,
by_epoch=True,
milestones=[8, 11],
gamma=0.1)
]
# optimizer
optim_wrapper = dict(
type=OptimWrapper,
optimizer=dict(type=SGD, lr=0.02, momentum=0.9, weight_decay=0.0001))
# Default setting for scaling LR automatically
# - `enable`: whether to enable automatic LR scaling (off by default).
# - `base_batch_size` = (8 GPUs) x (2 samples per GPU).
auto_scale_lr = dict(enable=False, base_batch_size=16)
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.optim.optimizer.optimizer_wrapper import OptimWrapper
from mmengine.optim.scheduler.lr_scheduler import LinearLR, MultiStepLR
from mmengine.runner.loops import EpochBasedTrainLoop, TestLoop, ValLoop
from torch.optim.sgd import SGD
# training schedule for 2x
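# 2x schedule: same as the 1x schedule except it trains for 24 epochs and
# decays the LR after epochs 16 and 22.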
train_cfg = dict(type=EpochBasedTrainLoop, max_epochs=24, val_interval=1)
val_cfg = dict(type=ValLoop)
test_cfg = dict(type=TestLoop)
# learning rate
param_scheduler = [
dict(type=LinearLR, start_factor=0.001, by_epoch=False, begin=0, end=500),
dict(
type=MultiStepLR,
begin=0,
end=24,
by_epoch=True,
milestones=[16, 22],
gamma=0.1)
]
# optimizer
optim_wrapper = dict(
type=OptimWrapper,
optimizer=dict(type=SGD, lr=0.02, momentum=0.9, weight_decay=0.0001))
# Default setting for scaling LR automatically
# - `enable`: whether to enable automatic LR scaling (off by default).
# - `base_batch_size` = (8 GPUs) x (2 samples per GPU).
auto_scale_lr = dict(enable=False, base_batch_size=16)
# Copyright (c) OpenMMLab. All rights reserved.
# Please refer to https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta for more details. # noqa
# mmcv >= 2.0.1
# mmengine >= 0.8.0
from mmengine.config import read_base
with read_base():
from .._base_.datasets.coco_instance import *
from .._base_.default_runtime import *
from .._base_.models.cascade_mask_rcnn_r50_fpn import *
from .._base_.schedules.schedule_1x import *
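
Pure-Python configs such as the one above are loaded the same way as dictionary-style configs. Below is a minimal sketch of building and running a training job from it with MMEngine's `Runner`; the config path and `work_dir` are illustrative assumptions, and mmengine >= 0.8.0 / mmcv >= 2.0.1 are required, as noted in the header comment.

```Python
# Minimal usage sketch; the paths below are assumptions for illustration only.
from mmengine.config import Config
from mmengine.runner import Runner

# Reading the file resolves the read_base() imports into a single config.
cfg = Config.fromfile(
    'mmdet/configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py')
cfg.work_dir = './work_dirs/cascade_mask_rcnn_r50_fpn_1x_coco'

runner = Runner.from_cfg(cfg)  # builds the model, dataloaders and hooks
runner.train()
```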