Unverified Commit 6942cf9e authored by Ziyi Wu, committed by GitHub

fix markdown linting errors (#468)

parent a588943e
......@@ -18,9 +18,11 @@ A clear and concise description of what the bug is.
**Reproduction**
1. What command or script did you run?
```
A placeholder for the command.
```
2. Did you make any modifications to the code or config? Do you understand what you have modified?
3. What dataset did you use?
......@@ -33,6 +35,7 @@ A placeholder for the command.
**Error traceback**
If applicable, paste the error traceback here.
```
A placeholder for the traceback.
```
......
......@@ -30,13 +30,17 @@ A clear and concise description of what the problem you meet and what have you d
**Reproduction**
1. What command or script did you run?
```
A placeholder for the command.
```
2. What config dir did you run?
```
A placeholder for the config.
```
3. Did you make any modifications to the code or config? Do you understand what you have modified?
4. What dataset did you use?
......@@ -50,6 +54,7 @@ A placeholder for the config.
**Results**
If applicable, paste the related results here, e.g., what you expect and what you get.
```
A placeholder for results comparison
```
......
......@@ -16,6 +16,7 @@ We implement 3DSSD and provide the results and checkpoints on KITTI datasets.
```
### Experiment details on KITTI datasets
Some settings in our implementation differ from the [official implementation](https://github.com/Jia-Research-Lab/3DSSD), which brings marginal differences in performance on the KITTI dataset in our experiments. To simplify and unify our models, we skip these settings in our implementation. The differences are listed below:
1. We keep the scenes without any object, while the official code skips them during training. In the official implementation, only 3229 and 3394 samples are used as the training and validation sets, respectively. In our implementation, we keep using 3712 and 3769 samples as the training and validation sets, respectively, the same splits used for all the other models in our implementation on the KITTI dataset.
2. We do not modify the decay of `batch normalization` during training.
......@@ -25,6 +26,7 @@ Some settings in our implementation are different from the [official implementat
## Results
### KITTI
| Backbone |Class| Lr schd | Mem (GB) | Inf time (fps) | mAP |Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: |
| [PointNet2SAMSG](./3dssd_kitti-3d-car.py)| Car |72e|4.7||78.39(81.00)<sup>1</sup>|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/3dssd/3dssd_kitti-3d-car_20210324_122002-07e9a19b.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/3dssd/3dssd_kitti-3d-car_20210324_122002.log.json)|
......
......@@ -26,6 +26,7 @@ We follow the below style to name config files. Contributors are advised to foll
`{schedule}`: training schedule, options are 1x, 2x, 20e, etc. 1x and 2x mean 12 and 24 epochs, respectively. 20e is adopted in cascade models and denotes 20 epochs. For 1x/2x, the initial learning rate decays by a factor of 10 at the 8th/16th and 11th/22nd epochs. For 20e, the initial learning rate decays by a factor of 10 at the 16th and 19th epochs.
`{dataset}`: dataset, like nus-3d, kitti-3d, lyft-3d, scannet-3d, sunrgbd-3d. We also indicate the number of classes we are using if there exist multiple settings, e.g., kitti-3d-3class and kitti-3d-car mean training on the KITTI dataset with 3 classes and a single class, respectively.
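For concreteness, here is a small sketch of how these name fields read in practice. The file name below is only illustrative of the convention, and the `lr_config` snippet mirrors the 1x decay described above in the usual mmdet-style config syntax; neither is copied from a specific config in the repo.

```python
# Illustrative config name, broken down by the naming convention:
#   hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py
#   {schedule} = 160e            -> trained for 160 epochs
#   {dataset}  = kitti-3d-3class -> KITTI dataset, 3-class setting

# A 1x schedule decays the initial learning rate by a factor of 10 at
# epochs 8 and 11, typically written as a step policy in the config:
lr_config = dict(policy='step', step=[8, 11])
```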
```
@article{yin2021center,
title={Center-based 3D Object Detection and Tracking},
......
......@@ -5,6 +5,7 @@
[ALGORITHM]
We implement Dynamic Voxelization, proposed in the paper cited below, and provide its results and models on the KITTI dataset.
```
@article{zhou2019endtoend,
title={End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds},
......
......@@ -13,12 +13,14 @@ Mixed precision training for PointNet-based methods will be supported in the fut
## Results
### SECOND on KITTI dataset
| Backbone |Class| Lr schd | FP32 Mem (GB) | FP16 Mem (GB) | FP32 mAP | FP16 mAP |Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: | :------: |
| [SECFPN](./hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py)| Car |cyclic 80e|5.4|2.9|79.07|78.72|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301-1f5ad833.pth)&#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301.log.json)|
| [SECFPN](./hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py)| 3 Class |cyclic 80e|5.4|2.9|64.41|67.4|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059-05f67bdf.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059.log.json)|
### PointPillars on nuScenes dataset
| Backbone | Lr schd | FP32 Mem (GB) | FP16 Mem (GB) | FP32 mAP | FP32 NDS| FP16 mAP | FP16 NDS| Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :----: |:----: | :------: |
|[SECFPN](./hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py)|2x|16.4|8.37|35.17|49.7|35.19|50.27|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626-c3f0483e.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626.log.json)|
......
......@@ -5,6 +5,7 @@
[ALGORITHM]
We implement H3DNet and provide the results and checkpoints on the ScanNet dataset.
```
@inproceedings{zhang2020h3dnet,
author = {Zhang, Zaiwei and Sun, Bo and Yang, Haitao and Huang, Qixing},
......@@ -17,6 +18,7 @@ We implement H3DNet and provide the result and checkpoints on ScanNet datasets.
## Results
### ScanNet
| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 |AP@0.5| Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: |
| [MultiBackbone](./h3dnet_3x8_scannet-3d-18class.py) | 3x |7.9||66.43|48.01|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_scannet-3d-18class_20200830_000136-02e36246.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_scannet-3d-18class_20200830_000136.log.json) |
......@@ -5,6 +5,7 @@
[ALGORITHM]
We implement MVX-Net and provide its results and models on the KITTI dataset.
```
@inproceedings{sindagi2019mvx,
title={MVX-Net: Multimodal voxelnet for 3D object detection},
......
......@@ -14,8 +14,8 @@ We implement Part-A^2 and provide its results and checkpoints on KITTI dataset.
year={2020},
publisher={IEEE}
}
```
## Results
### KITTI
......
......@@ -5,6 +5,7 @@
[ALGORITHM]
We implement SECOND and provide the results and checkpoints on the KITTI dataset.
```
@article{yan2018second,
title={Second: Sparsely embedded convolutional detection},
......@@ -13,11 +14,12 @@ We implement SECOND and provide the results and checkpoints on KITTI dataset.
year={2018},
publisher={Multidisciplinary Digital Publishing Institute}
}
```
## Results
### KITTI
| Backbone |Class| Lr schd | Mem (GB) | Inf time (fps) | mAP |Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: |
| [SECFPN](./hv_second_secfpn_6x8_80e_kitti-3d-car.py)| Car |cyclic 80e|5.4||79.07|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238.log.json)|
......
......@@ -13,7 +13,6 @@ We implement PointPillars with Shape-aware grouping heads used in the SSN and pr
booktitle={Proceedings of the European Conference on Computer Vision},
year={2020}
}
```
## Results
......
### Prepare ScanNet Data for Indoor Detection or Segmentation Task
We follow the procedure in [votenet](https://github.com/facebookresearch/votenet/).
1. Download ScanNet v2 data [HERE](https://github.com/ScanNet/ScanNet). Link or move the 'scans' folder to this directory. If you are performing segmentation tasks and want to upload the results to the official [benchmark](http://kaldir.vc.in.tum.de/scannet_benchmark/), please also link or move the 'scans_test' folder to this directory.
......@@ -6,11 +7,13 @@ We follow the procedure in [votenet](https://github.com/facebookresearch/votenet
2. In this directory, extract point clouds and annotations by running `python batch_load_scannet_data.py`. Add the `--max_num_point 50000` flag if you only use the ScanNet data for the detection task; it downsamples the scenes to fewer points.
3. Enter the project root directory and generate the training data by running
```bash
python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
```
The overall process can be achieved with the following script:
```bash
python batch_load_scannet_data.py
cd ../..
......@@ -18,6 +21,7 @@ python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/
```
The directory structure after pre-processing should be as follows:
```
scannet
├── scannet_utils.py
......
### Prepare SUN RGB-D Data
We follow the procedure in [votenet](https://github.com/facebookresearch/votenet/).
1. Download SUNRGBD v2 data [HERE](http://rgbd.cs.princeton.edu/data/). Then, move SUNRGBD.zip, SUNRGBDMeta2DBB_v2.mat, SUNRGBDMeta3DBB_v2.mat and SUNRGBDtoolbox.zip to the OFFICIAL_SUNRGBD folder and unzip the zip files.
......@@ -6,11 +7,13 @@ We follow the procedure in [votenet](https://github.com/facebookresearch/votenet
2. Enter the `matlab` folder and extract point clouds and annotations by running `extract_split.m`, `extract_rgbd_data_v2.m` and `extract_rgbd_data_v1.m`.
3. Enter the project root directory and generate the training data by running
```bash
python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data/sunrgbd --extra-tag sunrgbd
```
The overall process can be achieved with the following script:
```bash
cd matlab
matlab -nosplash -nodesktop -r 'extract_split;quit;'
......@@ -23,11 +26,13 @@ python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data
NOTE: SUNRGBDtoolbox.zip should have MD5 hash `18d22e1761d36352f37232cba102f91f` (you can check the hash with `md5 SUNRGBDtoolbox.zip` on macOS or `md5sum SUNRGBDtoolbox.zip` on Linux).
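If `md5`/`md5sum` is not available, a small Python check works as well; this is a minimal sketch that assumes the downloaded zip sits in the current directory:

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# Should print the hash quoted above for an intact download.
print(md5_of('SUNRGBDtoolbox.zip'))
```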
NOTE: If you would like to play around with [ImVoteNet](../../configs/imvotenet/README.md), the image data (`./data/sunrgbd/sunrgbd_trainval/image`) are required. If you pre-processed the data before mmdet3d version 0.12.0, please pre-process the data again due to some updates in data pre-processing:
```bash
python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data/sunrgbd --extra-tag sunrgbd
```
The directory structure after pre-processing should be as follows:
```
sunrgbd
├── README.md
......
......@@ -127,6 +127,7 @@ All outputs (log files and checkpoints) will be saved to the working directory,
which is specified by `work_dir` in the config file.
By default, we evaluate the model on the validation set after each epoch; you can change the evaluation interval by adding the `interval` argument to the training config.
```python
evaluation = dict(interval=12)  # This evaluates the model every 12 epochs.
```
......
......@@ -64,7 +64,7 @@ you can use more CUDA versions such as 9.0.
`e.g.` The pre-built *mmcv-full* can be installed by running (available versions can be found [here](https://mmcv.readthedocs.io/en/latest/#install-with-pip)):
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
......@@ -253,6 +253,7 @@ More demos about single/multi-modality and indoor/outdoor 3D detection can be fo
## High-level APIs for testing point clouds
### Synchronous interface
Here is an example of building the model and testing given point clouds.
```python
......
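The Python block above is truncated in this diff view. As a rough sketch of what such a synchronous call usually looks like, assuming the `init_model` and `inference_detector` helpers from `mmdet3d.apis` and with all paths below being placeholders:

```python
from mmdet3d.apis import inference_detector, init_model

# Placeholder paths -- substitute your own config, checkpoint and point cloud.
config_file = 'configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py'
checkpoint_file = 'checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car.pth'
pcd_file = 'demo/data/kitti/kitti_000008.bin'

# Build the model from the config and load the trained weights onto the GPU.
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single point cloud; `result` holds the predicted boxes.
result, data = inference_detector(model, pcd_file)
```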