Unverified commit 5053add3 authored by twang, committed by GitHub

[Fix] Fix relative paths/links in documentation (#271)

* Replace original links by absolute md/html links

* Use html links to replace relative md links

* Fix relative links

* Fix relative paths/links

* Fix relative paths

* Fix relative paths
parent bd44491f
@@ -4,7 +4,7 @@
Here we provide testing scripts to evaluate a whole dataset (SUNRGBD, ScanNet, KITTI, etc.).
For high-level apis easier to integrated into other projects and basic demos, please refer to Verification/Demo under [Get Started](./getting_started.md). For high-level apis easier to integrated into other projects and basic demos, please refer to Verification/Demo under [Get Started](https://mmdetection3d.readthedocs.io/en/latest/getting_started.html).
### Test existing models on standard datasets
@@ -71,15 +71,15 @@ Specific annotation format is described in the official object development [kit]
Assume we use the Waymo dataset.
After downloading the data, we need to implement a function to convert both the input data and annotation format into the KITTI style. Then we can implement WaymoDataset inherited from KittiDataset to load the data and perform training and evaluation.
Specifically, we implement a Waymo [converter](https://github.com/open-mmlab/mmdetection3d/blob/master/tools/data_converter/waymo_converter.py) to convert Waymo data into KITTI format and a Waymo dataset [class](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/waymo_dataset.py) to process it. Because we preprocess the raw data and reorganize it like KITTI, the dataset class can be implemented more easily by inheriting from KittiDataset. The last thing to note is the evaluation protocol you would like to use. Because Waymo has its own evaluation approach, we further incorporate it into our dataset class. Afterwards, users can successfully convert the data format and use `WaymoDataset` to train and evaluate the model.
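As a rough illustration of the KITTI-style target format such a converter produces, the sketch below serializes a simplified annotation dict into one line of a KITTI label file. The input schema and field names here are hypothetical, not the actual converter's API; they only mirror the 15-field KITTI label layout.

```python
def to_kitti_line(ann):
    """Serialize a simplified annotation dict into one KITTI label line.

    The input keys are illustrative; the real converter lives in
    tools/data_converter/waymo_converter.py and has its own schema.
    """
    return ' '.join([
        ann['type'],                                      # class name, e.g. 'Car'
        f"{ann['truncated']:.2f}",                        # truncation in [0, 1]
        str(ann['occluded']),                             # occlusion state 0-3
        f"{ann['alpha']:.2f}",                            # observation angle
        ' '.join(f'{v:.2f}' for v in ann['bbox']),        # 2D box: x1 y1 x2 y2
        ' '.join(f'{v:.2f}' for v in ann['dimensions']),  # 3D size: h w l
        ' '.join(f'{v:.2f}' for v in ann['location']),    # x y z in camera coords
        f"{ann['rotation_y']:.2f}",                       # yaw around camera y-axis
    ])
```

One such line is written per object into the per-frame label file.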
For more details about the intermediate results of preprocessing of the Waymo dataset, please refer to its [tutorial](https://mmdetection3d.readthedocs.io/en/latest/tutorials/waymo.html).
## Prepare a config
The second step is to prepare configs such that the dataset could be successfully loaded. In addition, adjusting hyperparameters is usually necessary to obtain decent performance in 3D detection.
Suppose we would like to train PointPillars on Waymo to achieve 3D detection for 3 classes, vehicle, cyclist and pedestrian; we need to prepare a dataset config like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/waymoD5-3d-3class.py), a model config like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/models/hv_pointpillars_secfpn_waymo.py) and combine them like [this](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py), compared to the KITTI [dataset config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/datasets/kitti-3d-3class.py), [model config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/_base_/models/hv_pointpillars_secfpn_kitti.py) and [overall config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py).
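The combined config mostly just lists its parts in `_base_`. A sketch of what such a top-level config might contain is below; the exact `_base_` entries and overrides in the repository's file may differ, so treat this as a shape, not the actual file.

```python
# Hypothetical sketch of a combined PointPillars-on-Waymo config.
# Paths are relative to the config file itself, following mmcv-style
# config inheritance.
_base_ = [
    '../_base_/models/hv_pointpillars_secfpn_waymo.py',
    '../_base_/datasets/waymoD5-3d-3class.py',
    '../_base_/schedules/schedule_2x.py',
    '../_base_/default_runtime.py',
]
# Dataset-, model- or schedule-specific overrides would follow here.
```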
## Train a new model
@@ -89,7 +89,7 @@ To train a model with the new config, you can simply run
```
python tools/train.py configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py
```
For more detailed usages, please refer to [Case 1](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html).
## Test and inference
@@ -99,6 +99,6 @@ To test the trained model, you can simply run
```
python tools/test.py configs/pointpillars/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class.py work_dirs/hv_pointpillars_secfpn_sbn_2x16_2x_waymoD5-3d-3class/latest.pth --eval waymo
```
**Note**: To use the Waymo evaluation protocol, you need to follow the [tutorial](https://mmdetection3d.readthedocs.io/en/latest/tutorials/waymo.html) and prepare the files related to metrics computation as per the official instructions.
For more detailed usages for test and inference, please refer to [Case 1](https://mmdetection3d.readthedocs.io/en/latest/1_exist_data_model.html).
@@ -115,10 +115,10 @@ Note that we follow the original folder names for clear organization. Please ren
### ScanNet and SUN RGB-D
To prepare ScanNet data, please see [scannet](https://github.com/open-mmlab/mmdetection3d/blob/master/data/scannet/README.md).
To prepare SUN RGB-D data, please see [sunrgbd](https://github.com/open-mmlab/mmdetection3d/blob/master/data/sunrgbd/README.md).
### Customized Datasets
For using custom datasets, please refer to [Tutorials 2: Customize Datasets](https://mmdetection3d.readthedocs.io/en/latest/tutorials/customize_dataset.html).
@@ -9,7 +9,7 @@ For data sharing similar format with existing datasets, like Lyft compared to nu
For data that is inconvenient to read directly online, the simplest way is to convert your dataset to existing dataset formats.
Typically we need a data converter to reorganize the raw data and convert the annotation format into KITTI style. Then a new dataset class inherited from existing ones is sometimes necessary for dealing with specific differences between datasets. Finally, the users need to further modify the config files to use the dataset. An [example](https://mmdetection3d.readthedocs.io/en/latest/2_new_data_model.html) of training predefined models on the Waymo dataset by converting it into KITTI style can be taken for reference.
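A minimal sketch of what such a converter does is below: it walks the raw samples and emits the kind of per-sample info dicts the KITTI-style pipeline consumes. The input schema (`raw_items`, `lidar_path`, `objects`, etc.) and the output keys are hypothetical stand-ins; real converters live under `tools/data_converter/` and define their own schemas.

```python
def convert_to_kitti_style(raw_items):
    """Reorganize hypothetical raw annotations into a KITTI-style info list.

    This is an illustration of the converter's role, not the repository's
    actual converter; field names are invented for the example.
    """
    infos = []
    for idx, item in enumerate(raw_items):
        infos.append({
            'sample_idx': idx,                 # sequential index within the split
            'lidar_path': item['lidar_path'],  # path to the point cloud file
            'annos': {
                'name': [obj['label'] for obj in item['objects']],
                'location': [obj['xyz'] for obj in item['objects']],
            },
        })
    return infos
```

The resulting list would then be dumped (e.g. to a pickle file) for the dataset class to load.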
### Reorganize new data format to middle format
@@ -60,7 +60,7 @@ With this design, we provide an alternative choice for customizing datasets.
```
On top of this you can write a new Dataset class inherited from `Custom3DDataset`, and override related methods,
like [KittiDataset](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/kitti_dataset.py) and [ScanNetDataset](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/scannet_dataset.py).
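Schematically, the overriding pattern looks like the sketch below. A stub stands in for `Custom3DDataset` (the real base class lives in `mmdet3d.datasets` and requires more methods and registry decoration), and the method body is invented for illustration.

```python
class Custom3DDatasetStub:
    """Stand-in for mmdet3d's Custom3DDataset, for illustration only."""

    def __init__(self, data_infos):
        self.data_infos = data_infos

    def get_ann_info(self, index):
        raise NotImplementedError


class MyDataset(Custom3DDatasetStub):
    """A hypothetical subclass overriding annotation loading."""

    CLASSES = ('Car', 'Pedestrian')

    def get_ann_info(self, index):
        # Translate the stored info dict into names/labels the pipeline expects.
        info = self.data_infos[index]
        names = info['annos']['name']
        return {
            'gt_names': names,
            'gt_labels': [self.CLASSES.index(n) for n in names],
        }
```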
### An example of customized dataset
@@ -374,7 +374,7 @@ class PartAggregationROIHead(Base3DRoIHead):
return bbox_results
```
Here we omit more details related to other functions. Please see the [code](https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/models/roi_heads/part_aggregation_roi_head.py) for more details.
Last, the users need to add the module in
`mmdet3d/models/bbox_heads/__init__.py` and `mmdet3d/models/roi_heads/__init__.py` so that the corresponding registry can find and load them.
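For example, the `roi_heads` package's `__init__.py` would gain an import and an `__all__` entry along these lines (a sketch only; the surrounding existing entries are omitted):

```python
# mmdet3d/models/roi_heads/__init__.py -- sketch of the additions
from .part_aggregation_roi_head import PartAggregationROIHead

__all__ = [
    # ...existing entries...
    'PartAggregationROIHead',
]
```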
@@ -153,7 +153,7 @@ python -u tools/data_converter/nuimage_converter.py --data-root ${DATA_ROOT} --v
- `--nproc`: number of workers for data preparation, defaults to `4`. A larger number can reduce the preparation time as images are processed in parallel.
- `--extra-tag`: extra tag of the annotations, defaults to `nuimages`. This can be used to separate different annotations processed at different times for study.
More details can be found in the [doc](https://mmdetection3d.readthedocs.io/en/latest/data_preparation.html) for dataset preparation and the [README](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/nuimages/README.md) for the nuImages dataset.
# Miscellaneous