# Prerequisites

- Linux or macOS (Windows is not currently officially supported)
- Python 3.6+
- PyTorch 1.3+
- CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
- GCC 5+
- [MMCV](https://mmcv.readthedocs.io/en/latest/#installation)
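
Before installation, you can quickly check whether your environment meets these requirements. The commands below are a minimal sketch; `nvcc` is only available if a CUDA toolkit is installed, and the PyTorch check only works once PyTorch is installed:

```shell
python --version   # expect 3.6+
gcc --version      # expect 5+
nvcc --version     # expect CUDA 9.2+ (9.0 is fine if you build PyTorch from source)
python -c "import torch; print(torch.__version__)"  # expect 1.3+
```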


The versions of MMCV, MMDetection and MMSegmentation required by each version of MMDetection3D are listed below. Please install the correct versions to avoid installation issues.

| MMDetection3D version | MMDetection version | MMSegmentation version |    MMCV version     |
|:-------------------:|:-------------------:|:-------------------:|:-------------------:|
| master              | mmdet>=2.19.0, <=3.0.0| mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.3.8, <=1.5.0|
| 0.18.1              | mmdet>=2.19.0, <=3.0.0| mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.3.8, <=1.5.0|
| 0.18.0              | mmdet>=2.19.0, <=3.0.0| mmseg>=0.20.0, <=1.0.0 | mmcv-full>=1.3.8, <=1.5.0|
| 0.17.3              | mmdet>=2.14.0, <=3.0.0| mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0|
| 0.17.2              | mmdet>=2.14.0, <=3.0.0| mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0|
| 0.17.1              | mmdet>=2.14.0, <=3.0.0| mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0|
| 0.17.0              | mmdet>=2.14.0, <=3.0.0| mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0|
| 0.16.0              | mmdet>=2.14.0, <=3.0.0| mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0|
| 0.15.0              | mmdet>=2.14.0, <=3.0.0| mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4.0|
| 0.14.0              | mmdet>=2.10.0, <=2.11.0| mmseg==0.14.0 | mmcv-full>=1.3.1, <=1.4.0|
| 0.13.0              | mmdet>=2.10.0, <=2.11.0| Not required  | mmcv-full>=1.2.4, <=1.4.0|
| 0.12.0              | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.2.4, <=1.4.0|
| 0.11.0              | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.2.4, <=1.3.0|
| 0.10.0              | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.2.4, <=1.3.0|
| 0.9.0               | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.2.4, <=1.3.0|
| 0.8.0               | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.1.5, <=1.3.0|
| 0.7.0               | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.1.5, <=1.3.0|
| 0.6.0               | mmdet>=2.4.0, <=2.11.0 | Not required  | mmcv-full>=1.1.3, <=1.2.0|
| 0.5.0               | 2.3.0                  | Not required  | mmcv-full==1.0.5|
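
Once everything is installed, you can cross-check the versions actually present in your environment against this table. A simple way to list them (a sketch using pip on Linux/macOS) is:

```shell
pip list | grep -E "mmcv|mmdet|mmseg"
```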

# Installation

## Install MMDetection3D

**a. Create a conda virtual environment and activate it.**

```shell
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
```

**b. Install PyTorch and torchvision following the [official instructions](https://pytorch.org/).**

```shell
conda install pytorch torchvision -c pytorch
```

Note: Make sure that your compilation CUDA version and runtime CUDA version match.
You can check the supported CUDA version for precompiled packages on the [PyTorch website](https://pytorch.org/).
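
For example, you can compare the CUDA version that PyTorch was compiled with against the toolkit installed on your machine; a minimal check could look like this:

```shell
# CUDA version PyTorch was compiled with
python -c "import torch; print(torch.__version__, torch.version.cuda)"
# CUDA toolkit version installed locally (if any)
nvcc --version
```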

`E.g. 1` If you have CUDA 10.1 installed under `/usr/local/cuda` and would like to install
PyTorch 1.5, you need to install the prebuilt PyTorch with CUDA 10.1.

```shell
conda install pytorch==1.5.0 cudatoolkit=10.1 torchvision==0.6.0 -c pytorch
```

`E.g. 2` If you have CUDA 9.2 installed under `/usr/local/cuda` and would like to install
PyTorch 1.3.1, you need to install the prebuilt PyTorch with CUDA 9.2.

```shell
conda install pytorch=1.3.1 cudatoolkit=9.2 torchvision=0.4.2 -c pytorch
```

If you build PyTorch from source instead of installing the prebuilt package,
you can use more CUDA versions such as 9.0.

**c. Install [MMCV](https://mmcv.readthedocs.io/en/latest/).**
*mmcv-full* is necessary because MMDetection3D relies on MMDetection, and the CUDA ops in *mmcv-full* are required.

`e.g.` The pre-built *mmcv-full* can be installed by running the following command (available versions can be found [here](https://mmcv.readthedocs.io/en/latest/#install-with-pip)):

```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```

Please replace `{cu_version}` and `{torch_version}` in the URL with your desired versions. For example, to install the latest `mmcv-full` with `CUDA 11` and `PyTorch 1.7.0`, use the following command:

```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html
```

mmcv-full is only compiled on PyTorch 1.x.0 because the compatibility usually holds between 1.x.0 and 1.x.1. If your PyTorch version is 1.x.1, you can install mmcv-full compiled with PyTorch 1.x.0 and it usually works well.

```shell
# We can ignore the micro version of PyTorch
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7/index.html
```

See [here](https://github.com/open-mmlab/mmcv#install-with-pip) for the versions of MMCV compatible with different PyTorch and CUDA versions.
Optionally, you could also build the full version from source:

```shell
git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
MMCV_WITH_OPS=1 pip install -e .  # package mmcv-full will be installed after this step
cd ..
```

Or directly run

```shell
pip install mmcv-full
```

**d. Install [MMDetection](https://github.com/open-mmlab/mmdetection).**

```shell
pip install mmdet==2.14.0
```

Optionally, you could also build MMDetection from source in case you want to modify the code:

```shell
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
git checkout v2.14.0  # switch to v2.14.0 branch
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
```

**e. Install [MMSegmentation](https://github.com/open-mmlab/mmsegmentation).**

```shell
pip install mmsegmentation==0.14.1
```

Optionally, you could also build MMSegmentation from source in case you want to modify the code:

```shell
git clone https://github.com/open-mmlab/mmsegmentation.git
cd mmsegmentation
git checkout v0.14.1  # switch to v0.14.1 branch
pip install -e .  # or "python setup.py develop"
```

**f. Clone the MMDetection3D repository.**

```shell
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
```

**g. Install build requirements and then install MMDetection3D.**

```shell
pip install -v -e .  # or "python setup.py develop"
```
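
To quickly confirm that the installation succeeded, you can import the package and print its version. This is only a sanity check, not part of the official instructions:

```shell
python -c "import mmdet3d; print(mmdet3d.__version__)"
```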

Note:

1. The git commit id will be written to the version number in step g, e.g. 0.6.0+2e7045c. The version will also be saved in trained models.
It is recommended that you run step g each time you pull some updates from GitHub. If C++/CUDA code is modified, this step is compulsory.

    > Important: Be sure to remove the `./build` folder if you reinstall mmdet3d with a different CUDA/PyTorch version.

    ```shell
    pip uninstall mmdet3d
    rm -rf ./build
    find . -name "*.so" | xargs rm
    ```

2. Following the above instructions, MMDetection3D is installed in `dev` mode; any local modifications made to the code will take effect without reinstalling it (unless you submit some commits and want to update the version number).

3. If you would like to use `opencv-python-headless` instead of `opencv-python`,
you can install it before installing MMCV.

4. Some dependencies are optional. Simply running `pip install -v -e .` will only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions` either install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.

5. The code cannot currently be built in a CPU-only environment (where CUDA isn't available).
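
    Before building, you can verify that CUDA is visible to PyTorch with a quick check (a sketch):

    ```shell
    python -c "import torch; print(torch.cuda.is_available())"  # should print True
    ```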

## Another option: Docker Image

We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection3d/blob/master/docker/Dockerfile) to build an image.

```shell
# build an image with PyTorch 1.6, CUDA 10.1
docker build -t mmdetection3d -f docker/Dockerfile .
```

Run it with

```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```
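
For example, assuming your datasets live under `/data/mmdetection3d` on the host (a hypothetical path), the command could look like:

```shell
docker run --gpus all --shm-size=8g -it -v /data/mmdetection3d:/mmdetection3d/data mmdetection3d
```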

## A from-scratch setup script

Here is a full script for setting up MMDetection3D with conda.

```shell
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab

# install latest PyTorch prebuilt with the default prebuilt CUDA version (usually the latest)
conda install -c pytorch pytorch torchvision -y

# install mmcv
pip install mmcv-full

# install mmdetection
pip install git+https://github.com/open-mmlab/mmdetection.git

# install mmsegmentation
pip install git+https://github.com/open-mmlab/mmsegmentation.git

# install mmdetection3d
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
pip install -v -e .
```

## Using multiple MMDetection3D versions

The train and test scripts already modify the `PYTHONPATH` to ensure that the scripts use the MMDetection3D in the current directory.

To use the default MMDetection3D installed in the environment rather than the one you are working with, you can remove the following line from those scripts:

```shell
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
```

# Verification

## Verify with point cloud demo

We provide several demo scripts to test a single sample. Pre-trained models can be downloaded from the [model zoo](model_zoo.md). To test single-modality 3D detection on point cloud scenes:

```shell
python demo/pcd_demo.py ${PCD_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} [--device ${GPU_ID}] [--score-thr ${SCORE_THR}] [--out-dir ${OUT_DIR}]
```

Examples:

```shell
python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth
```

If you want to input a `ply` file, you can use the following function to convert it to `bin` format. Then you can use the converted `bin` file to run the demo.
Note that you need to install `pandas` and `plyfile` before using this script. This function can also be used for data preprocessing when training on `ply` data.
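
For example, both dependencies can be installed with pip:

```shell
pip install pandas plyfile
```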

```python
import numpy as np
import pandas as pd
from plyfile import PlyData

def convert_ply(input_path, output_path):
    plydata = PlyData.read(input_path)  # read file
    data = plydata.elements[0].data  # read data
    data_pd = pd.DataFrame(data)  # convert to DataFrame
    data_np = np.zeros(data_pd.shape, dtype=np.float64)  # initialize array to store data
    property_names = data[0].dtype.names  # read names of properties
    for i, name in enumerate(
            property_names):  # read data by property
        data_np[:, i] = data_pd[name]
    data_np.astype(np.float32).tofile(output_path)
```

Examples:

```python
convert_ply('./test.ply', './test.bin')
```

If you have point clouds in other formats (`off`, `obj`, etc.), you can use `trimesh` to convert them into `ply`.

```python
import trimesh

def to_ply(input_path, output_path, original_type):
    mesh = trimesh.load(input_path, file_type=original_type)  # read file
    mesh.export(output_path, file_type='ply')  # convert to ply
```

Examples:

```python
to_ply('./test.obj', './test.ply', 'obj')
```

More demos about single/multi-modality and indoor/outdoor 3D detection can be found in [demo](demo.md).

## High-level APIs for testing point clouds

### Synchronous interface

Here is an example of building the model and testing it on given point clouds.

```python
from mmdet3d.apis import init_model, inference_detector

config_file = 'configs/votenet/votenet_8x8_scannet-3d-18class.py'
checkpoint_file = 'checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth'

# build the model from a config file and a checkpoint file
model = init_model(config_file, checkpoint_file, device='cuda:0')

# test a single point cloud and show the results
point_cloud = 'test.bin'
result, data = inference_detector(model, point_cloud)
# visualize the results and save the results in 'results' folder
model.show_results(data, result, out_dir='results')
```