Here we compare our implementations with some other popular open source 3D detection codebases.
### VoteNet
We compare our implementation of VoteNet with the original [votenet](https://github.com/facebookresearch/votenet/) implementation and report the performance on the SUNRGB-D v2 dataset under the AP@0.5 metric.
Since [Det3D](https://github.com/poodarchu/Det3D/) only provides PointPillars on the car class while [PCDet](https://github.com/sshaoshuai/PCDet) only provides PointPillars on 3 classes, we compare with them separately. For performance on a single class, we report the AP under the moderate condition following the KITTI benchmark; for the 3-class setting, we compare the AP averaged over all classes under the moderate condition.
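For context, these numbers come from the standard evaluation entry point of this codebase; a minimal sketch (the config and checkpoint paths below are placeholders, not actual files shipped with the repo) is:

```shell
# Evaluate a trained VoteNet checkpoint on SUNRGB-D and report the AP metrics (placeholder paths)
python tools/test.py configs/votenet/my_votenet_sunrgbd.py work_dirs/votenet_sunrgbd/latest.pth --eval mAP
```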
f. Install build requirements and then install mmdetection3d.
```shell
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
```
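As a quick sanity check (not part of the official steps), you can confirm that the package and its commit-tagged version are importable:

```shell
# Print the installed mmdet3d version; the suffix after '+' is the git commit id mentioned in the note below
python -c "import mmdet3d; print(mmdet3d.__version__)"
```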
Note:
1. The git commit id will be written to the version number with step f, e.g. 0.6.0+2e7045c. The version will also be saved in trained models.
...
...
It is recommended that you run step f each time you pull some updates from GitHub.
> Important: Be sure to remove the `./build` folder if you reinstall mmdet3d with a different CUDA/PyTorch version.
```shell
pip uninstall mmdet3d
rm -rf ./build
find . -name "*.so" | xargs rm
```
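Putting the note and the cleanup together, a full clean reinstall (for example after switching CUDA or PyTorch versions) might look like the following sketch, assuming you are in the mmdetection3d source directory:

```shell
# Remove the previous installation and stale build artifacts, then rebuild in develop mode
pip uninstall -y mmdet3d
rm -rf ./build
find . -name "*.so" | xargs rm
pip install -r requirements/build.txt
pip install -v -e .
```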
...
...
4. Some dependencies are optional. Simply running `pip install -v -e .` will only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions` either install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.
5. The code cannot be built for a CPU-only environment (where CUDA isn't available) for now.
### Another option: Docker Image
We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection/blob/master/docker/Dockerfile) to build an image.
```shell
# build an image with PyTorch 1.5, CUDA 10.1
docker build -t mmdetection docker/
```
Run it with
```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection/data mmdetection
```
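To check that the container actually sees your GPUs (this assumes the NVIDIA container toolkit is configured on the host; the check itself is not part of the original instructions):

```shell
# Run a one-off command in the image to confirm CUDA is visible inside the container
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection/data mmdetection \
    python -c "import torch; print(torch.cuda.is_available())"
```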
The train and test scripts already modify the `PYTHONPATH` to ensure that they use the MMDetection3D in the current directory.
To use the default MMDetection3D installed in the environment rather than the one you are working with, you can remove the following line in those scripts:
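For reference, in the distributed train/test scripts that line typically looks like the excerpt below (illustrative; check the actual scripts in your checkout before editing):

```shell
# tools/dist_train.sh (illustrative excerpt): the PYTHONPATH override makes Python
# import the mmdet3d package from the working copy instead of the installed one
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
python -m torch.distributed.launch --nproc_per_node=$GPUS \
    $(dirname "$0")/train.py $CONFIG --launcher pytorch ${@:3}
```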