@@ -24,7 +24,7 @@ Please refer to [mmcv](https://mmcv.readthedocs.io/en/latest/utils.html#config)
We follow the style below to name config files. Contributors are advised to follow the same style.
```
{model}_[model setting]_{backbone}_[neck]_[norm setting]_[misc]_[batch_per_gpu x gpu]_{schedule}_{dataset}
```
`{xxx}` is a required field and `[yyy]` is optional.
@@ -32,7 +32,7 @@ We follow the below style to name config files. Contributors are advised to foll
- `{model}`: model type like `hv_pointpillars` (Hard Voxelization PointPillars), `VoteNet`, etc.
- `[model setting]`: specific setting for some models.
- `{backbone}`: backbone type like `regnet-400mf`, `regnet-1.6gf`.
- `[neck]`: neck type like `fpn`, `secfpn`.
- `[norm_setting]`: `bn` (Batch Normalization) is used unless specified; other norm layer types could be `gn` (Group Normalization) or `sbn` (Synchronized Batch Normalization).
`gn-head`/`gn-neck` indicates GN is applied in the head/neck only, while `gn-all` means GN is applied in the entire model, e.g. backbone, neck, head.
- `[misc]`: miscellaneous settings/plugins of the model, e.g. `strong-aug` means using stronger augmentation strategies for training.
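For instance, a config name following this convention, such as `hv_pointpillars_regnet-400mf_fpn_sbn-all_4x8_2x_nus-3d.py`, would decompose as:
```
hv_pointpillars -> {model}
regnet-400mf    -> {backbone}
fpn             -> [neck]
sbn-all         -> [norm setting] (synchronized BN in the entire model)
4x8             -> [batch_per_gpu x gpu] (4 samples per GPU on 8 GPUs)
2x              -> {schedule}
nus-3d          -> {dataset}
```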
@@ -383,14 +383,14 @@ evaluation = dict(pipeline=[ # Pipeline is passed by eval_pipeline created befo
])
lr = 0.008  # Learning rate of optimizers
optimizer = dict(  # Config used to build optimizer; supports all the optimizers in PyTorch whose arguments are the same as those in PyTorch
    type='Adam',  # Type of optimizers, refer to https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/optimizer/default_constructor.py#L12 for more details
    lr=0.008)  # Learning rate of optimizers, see detailed usages of the parameters in the documentation of PyTorch
optimizer_config = dict(  # Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/optimizer.py#L22 for implementation details.
    grad_clip=dict(  # Config of gradient clipping
        max_norm=10,  # max norm of the gradients
        norm_type=2))  # Type of the used p-norm. Can be 'inf' for infinity norm.
lr_config = dict(  # Learning rate scheduler config used to register LrUpdater hook
    policy='step',  # The policy of scheduler, also supports CosineAnnealing, Cyclic, etc. Refer to details of supported LrUpdater from https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/lr_updater.py#L9.
    warmup=None,  # The warmup policy, also supports `exp` and `constant`.
    step=[24, 32])  # Steps to decay the learning rate
checkpoint_config = dict(  # Config to set the checkpoint hook, refer to https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/checkpoint.py for implementation.
- Before MMDetection v2.5.0, the dataset filtered out empty GT images automatically if the classes were set, and there was no way to disable that through the config. This was undesirable and introduced confusion, because if the classes were not set, the dataset only filtered the empty GT images when `filter_empty_gt=True` and `test_mode=False`. Since MMDetection v2.5.0, we decouple the image filtering process from the classes modification, i.e., the dataset only filters empty GT images when `filter_empty_gt=True` and `test_mode=False`, no matter whether the classes are set. Thus, setting the classes only influences the annotations of the classes used for training, and users can decide whether to filter empty GT images by themselves (see the sketch after this list).
- Since the middle format only has box labels and does not contain the class names, when using `CustomDataset`, users cannot filter out empty GT images through configs but can only do this offline.
- The features for setting dataset classes and dataset filtering will be refactored to be more user-friendly in the future (depending on the progress).
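A minimal sketch of the keys involved (the dataset type, path and class names below are illustrative):

```python
data = dict(
    train=dict(
        type='KittiDataset',  # illustrative dataset type
        ann_file='data/kitti/kitti_infos_train.pkl',  # illustrative path
        classes=['Pedestrian', 'Cyclist', 'Car'],  # only annotations of these classes are used
        filter_empty_gt=True))  # explicitly filter images without GT boxes during training
```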
@@ -108,7 +108,7 @@ class MyOptimizerConstructor(object):
```
The default optimizer constructor is implemented [here](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/optimizer/default_constructor.py#L11), which could also serve as a template for a new optimizer constructor.
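Once registered, the constructor is selected through the `constructor` field of the optimizer config; a sketch (the `paramwise_cfg` option is a hypothetical setting your constructor would interpret):

```python
optimizer = dict(
    constructor='MyOptimizerConstructor',
    type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001,
    paramwise_cfg=dict(bias_lr_mult=2.))  # interpreted by the custom constructor
```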
### Additional settings
@@ -122,12 +122,12 @@ Tricks not implemented by the optimizer should be implemented through optimizer
If your config inherits a base config which already sets the `optimizer_config`, you might need `_delete_=True` to override the unnecessary settings. See the [config documentation](https://mmdetection.readthedocs.io/en/latest/tutorials/config.html) for more details.
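For instance, a sketch (the clipping values are illustrative):

```python
optimizer_config = dict(_delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
```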
- __Use momentum schedule to accelerate model convergence__:
We support a momentum scheduler that modifies the model's momentum according to the learning rate, which can make the model converge faster.
Momentum scheduler is usually used together with the LR scheduler; for example, the following config is used in 3D detection to accelerate convergence.
For more details, please refer to the implementation of [CyclicLrUpdater](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/lr_updater.py#L358) and [CyclicMomentumUpdater](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/momentum_updater.py#L225).
```python
lr_config = dict(
...
@@ -146,8 +146,8 @@ Tricks not implemented by the optimizer should be implemented through optimizer
## Customize training schedules
By default we use a step learning rate with the 1x schedule; this calls [`StepLrUpdaterHook`](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/lr_updater.py#L167) in MMCV.
We support many other learning rate schedules [here](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/lr_updater.py), such as the `CosineAnnealing` and `Poly` schedules. Here are some examples:
- Poly schedule:
...
@@ -248,7 +248,7 @@ __all__ = [..., 'MyHook']
```
Or use `custom_imports` in the config to manually import it.
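For example, assuming `MyHook` is implemented in `mmdet3d/core/utils/my_hook.py`:

```python
custom_imports = dict(imports=['mmdet3d.core.utils.my_hook'], allow_failed_imports=False)
```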
@@ -295,17 +295,17 @@ There are some common hooks that are not registerd through `custom_hooks`, they
In those hooks, only the logger hook has `VERY_LOW` priority; the others' priority is `NORMAL`.
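For reference, a custom hook's priority can be set when it is registered (a sketch reusing the `MyHook` example above):

```python
custom_hooks = [dict(type='MyHook', priority='NORMAL')]
```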
The above-mentioned tutorials already cover how to modify `optimizer_config`, `momentum_config`, and `lr_config`.
Here we reveal what we can do with `log_config`, `checkpoint_config`, and `evaluation`.
#### Checkpoint config
The MMCV runner will use `checkpoint_config` to initialize [`CheckpointHook`](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/checkpoint.py#L9).
```python
checkpoint_config = dict(interval=1)
```
Users can set `max_keep_ckpts` to save only a small number of checkpoints, or decide whether to store the state dict of the optimizer by `save_optimizer`. More details of the arguments are [here](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.CheckpointHook).
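For example, a sketch that keeps at most 3 checkpoints and skips the optimizer state dict (the values are illustrative):

```python
checkpoint_config = dict(interval=1, max_keep_ckpts=3, save_optimizer=False)
```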
#### Log config
...
@@ -323,7 +323,7 @@ log_config = dict(
#### Evaluation config
The config of `evaluation` will be used to initialize the [`EvalHook`](https://github.com/open-mmlab/mmdetection/blob/v2.13.0/mmdet/core/evaluation/eval_hooks.py#L9).
Except for the key `interval`, other arguments such as `metric` will be passed to `dataset.evaluate()`.
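For example (the metric name is illustrative and depends on the dataset):

```python
evaluation = dict(interval=1, metric='bbox')
```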
@@ -15,7 +15,7 @@ defines how to process the annotations and a data pipeline defines all the steps
A pipeline consists of a sequence of operations. Each operation takes a dict as input and also outputs a dict for the next transform.
We present a classical pipeline in the following figure. The blue blocks are pipeline operations. As the pipeline proceeds, each operator can add new keys (marked as green) to the result dict or update the existing keys (marked as orange).
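A minimal sketch of such an operation (a hypothetical transform, not one from the codebase):

```python
import numpy as np

class NormalizePoints:
    """Takes the result dict, updates an existing key and adds a new one."""

    def __call__(self, results):
        pts = results['points']
        results['points'] = (pts - pts.mean(0)) / (pts.std(0) + 1e-6)  # update existing key
        results['points_normalized'] = True  # add a new key for later transforms
        return results

results = dict(points=np.random.rand(100, 3))
results = NormalizePoints()(results)  # dict in, dict out
```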
This page provides specific tutorials about the usage of MMDetection3D for the Waymo dataset.
## Prepare dataset
Before preparing the Waymo dataset, if you only installed the requirements in `requirements/build.txt` and `requirements/runtime.txt` before, please first install the official package for this dataset by running
...
@@ -43,7 +43,7 @@ mmdetection3d
```
You can download Waymo open dataset V1.2 [HERE](https://waymo.com/open/download/) and its data split [HERE](https://drive.google.com/drive/folders/18BVuF_RYJF0NjZpt8SnfzANiakoRMf0o?usp=sharing). Then put the tfrecord files into the corresponding folders in `data/waymo/waymo_format/` and put the data split txt files into `data/waymo/kitti_format/ImageSets`. Download the ground truth bin files for the validation set [HERE](https://console.cloud.google.com/storage/browser/waymo_open_dataset_v_1_2_0/validation/ground_truth_objects) and put them into `data/waymo/waymo_format/`. A tip is that you can use `gsutil` to download the large-scale dataset with commands. You can take this [tool](https://github.com/RalphMao/Waymo-Dataset-Tool) as an example for more details. Subsequently, prepare Waymo data by running
@@ -101,7 +101,7 @@ Considering there are many similar frames in the original dataset, we can basica
## Evaluation
For evaluation on Waymo, please follow the [instructions](https://github.com/waymo-research/waymo-open-dataset/blob/master/docs/quick_start.md) to build the binary file `compute_detection_metrics_main` for metrics computation and put it into `mmdet3d/core/evaluation/waymo_utils/`. Basically, you can follow the commands below to install bazel and build the file.
`pklfile_prefix` should be given in `--eval-options` if the bin file needs to be generated. For metrics, `waymo` is the recommended official evaluation prototype. Currently, evaluating with the choice `kitti` is adapted from KITTI, and the results for each difficulty are not exactly the same as in the definition of KITTI. Instead, most objects are currently marked with difficulty 0, which will be fixed in the future. The reasons for its instability include the heavy computation required for evaluation, the lack of occlusion and truncation in the converted data, different definitions of difficulty, and different methods of computing average precision.
**Notice**:
...
@@ -150,7 +150,7 @@ An example to test PointPillars on waymo with 8 GPUs, generate the bin files and
After generating the bin file, you can simply build the binary file `create_submission` and use it to create a submission file by following the [instructions](https://github.com/waymo-research/waymo-open-dataset/blob/master/docs/quick_start.md). Basically, here are some example commands.