# Prerequisites

- Linux or macOS (Windows support is experimental)
- Python 3.6+
- PyTorch 1.3+
- CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
- GCC 5+
- [MMCV](https://mmcv.readthedocs.io/en/latest/#installation)

The required versions of MMCV, MMDetection and MMSegmentation for different versions of MMDetection3D are as below. Please install the correct version of MMCV, MMDetection and MMSegmentation to avoid installation issues.

| MMDetection3D version |   MMDetection version    | MMSegmentation version  |        MMCV version         |
| :-------------------: | :----------------------: | :---------------------: | :-------------------------: |
|        master         | mmdet>=2.24.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.4.8, \<=1.6.0  |
|       v1.0.0rc2       | mmdet>=2.24.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.4.8, \<=1.6.0  |
|       v1.0.0rc1       | mmdet>=2.19.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.4.8, \<=1.5.0  |
|       v1.0.0rc0       | mmdet>=2.19.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.3.17, \<=1.5.0 |
|        0.18.1         | mmdet>=2.19.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.3.17, \<=1.5.0 |
|        0.18.0         | mmdet>=2.19.0, \<=3.0.0  | mmseg>=0.20.0, \<=1.0.0 | mmcv-full>=1.3.17, \<=1.5.0 |
|        0.17.3         | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
|        0.17.2         | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
|        0.17.1         | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
|        0.17.0         | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
|        0.16.0         | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
|        0.15.0         | mmdet>=2.14.0, \<=3.0.0  | mmseg>=0.14.1, \<=1.0.0 | mmcv-full>=1.3.8, \<=1.4.0  |
|        0.14.0         | mmdet>=2.10.0, \<=2.11.0 |      mmseg==0.14.0      | mmcv-full>=1.3.1, \<=1.4.0  |
|        0.13.0         | mmdet>=2.10.0, \<=2.11.0 |      Not required       | mmcv-full>=1.2.4, \<=1.4.0  |
|        0.12.0         | mmdet>=2.5.0, \<=2.11.0  |      Not required       | mmcv-full>=1.2.4, \<=1.4.0  |
|        0.11.0         | mmdet>=2.5.0, \<=2.11.0  |      Not required       | mmcv-full>=1.2.4, \<=1.3.0  |
|        0.10.0         | mmdet>=2.5.0, \<=2.11.0  |      Not required       | mmcv-full>=1.2.4, \<=1.3.0  |
|         0.9.0         | mmdet>=2.5.0, \<=2.11.0  |      Not required       | mmcv-full>=1.2.4, \<=1.3.0  |
|         0.8.0         | mmdet>=2.5.0, \<=2.11.0  |      Not required       | mmcv-full>=1.1.5, \<=1.3.0  |
|         0.7.0         | mmdet>=2.5.0, \<=2.11.0  |      Not required       | mmcv-full>=1.1.5, \<=1.3.0  |
|         0.6.0         | mmdet>=2.4.0, \<=2.11.0  |      Not required       | mmcv-full>=1.1.3, \<=1.2.0  |
|         0.5.0         |          2.3.0           |      Not required       |      mmcv-full==1.0.5       |
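
If you are unsure which versions are installed in your environment, a quick check (a minimal sketch, assuming the packages are already importable) is:

```shell
python -c "import mmcv, mmdet, mmseg; print(mmcv.__version__, mmdet.__version__, mmseg.__version__)"
```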

# Installation

## Install MMDetection3D

### Quick installation script

Assuming that you already have CUDA 11.0 installed, here is a full script for quick installation of MMDetection3D with conda.
Otherwise, you should refer to the step-by-step installation instructions in the next section.

```shell
# create a conda environment with PyTorch 1.9 built against CUDA 11.0
conda create -n open-mmlab python=3.7 pytorch=1.9 cudatoolkit=11.0 torchvision -c pytorch -y
conda activate open-mmlab
# install the OpenMMLab dependencies via MIM
pip3 install openmim
mim install mmcv-full
mim install mmdet
mim install mmsegmentation
# install MMDetection3D from source in editable mode
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
pip3 install -e .
```

### Step-by-step installation instructions

**a. Create a conda virtual environment and activate it.**

```shell
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
```

**b. Install PyTorch and torchvision following the [official instructions](https://pytorch.org/).**

```shell
conda install pytorch torchvision -c pytorch
```

Note: Make sure that your compilation CUDA version and runtime CUDA version match.
You can check the supported CUDA version for precompiled packages on the [PyTorch website](https://pytorch.org/).
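
A minimal sanity check (assuming `nvcc` is on your `PATH` and PyTorch is already installed) is to compare the toolkit version with the CUDA version PyTorch was built with:

```shell
nvcc --version  # CUDA toolkit used for compilation
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"  # CUDA version PyTorch was built with
```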

`E.g. 1` If you have CUDA 10.1 installed under `/usr/local/cuda` and would like to install
PyTorch 1.5, you need to install the prebuilt PyTorch with CUDA 10.1.

```shell
conda install pytorch==1.5.0 cudatoolkit=10.1 torchvision==0.6.0 -c pytorch
```

`E.g. 2` If you have CUDA 9.2 installed under `/usr/local/cuda` and would like to install
PyTorch 1.3.1, you need to install the prebuilt PyTorch with CUDA 9.2.

```shell
conda install pytorch=1.3.1 cudatoolkit=9.2 torchvision=0.4.2 -c pytorch
```

If you build PyTorch from source instead of installing the prebuilt package,
you can use more CUDA versions such as 9.0.

**c. Install [MMCV](https://mmcv.readthedocs.io/en/latest/).**
*mmcv-full* is necessary since MMDetection3D relies on MMDetection, and the CUDA ops in *mmcv-full* are required.

`e.g.` The pre-built *mmcv-full* can be installed by running the following command (available versions can be found [here](https://mmcv.readthedocs.io/en/latest/#install-with-pip)):

```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```

Please replace `{cu_version}` and `{torch_version}` in the URL with your desired versions. For example, to install the latest `mmcv-full` with `CUDA 11` and `PyTorch 1.7.0`, use the following command:

```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html
```

mmcv-full is only compiled on PyTorch 1.x.0 because the compatibility usually holds between 1.x.0 and 1.x.1. If your PyTorch version is 1.x.1, you can install mmcv-full compiled with PyTorch 1.x.0 and it usually works well.

```shell
# We can ignore the micro version of PyTorch
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7/index.html
```

See [here](https://github.com/open-mmlab/mmcv#install-with-pip) for different versions of MMCV compatible with different PyTorch and CUDA versions.
Optionally, you could also build the full version from source:

```shell
git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
MMCV_WITH_OPS=1 pip install -e .  # package mmcv-full will be installed after this step
cd ..
```

Or directly run

```shell
pip install mmcv-full
```

**d. Install [MMDetection](https://github.com/open-mmlab/mmdetection).**

```shell
pip install mmdet
```

Optionally, you could also build MMDetection from source in case you want to modify the code:

```shell
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
git checkout v2.19.0  # switch to v2.19.0 branch
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
```

**e. Install [MMSegmentation](https://github.com/open-mmlab/mmsegmentation).**

```shell
pip install mmsegmentation
```

Optionally, you could also build MMSegmentation from source in case you want to modify the code:

```shell
git clone https://github.com/open-mmlab/mmsegmentation.git
cd mmsegmentation
git checkout v0.20.0  # switch to v0.20.0 branch
pip install -e .  # or "python setup.py develop"
```

**f. Clone the MMDetection3D repository.**

```shell
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
```

**g. Install build requirements and then install MMDetection3D.**

```shell
pip install -v -e .  # or "python setup.py develop"
```

Note:

1. The git commit id will be written to the version number with step g, e.g. 0.6.0+2e7045c. The version will also be saved in trained models.
   It is recommended that you run step g each time you pull some updates from GitHub. If C++/CUDA code is modified, then this step is compulsory.
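
   A quick way to confirm which commit is baked into the installed package (a minimal check, assuming MMDetection3D is already installed) is:

   ```shell
   python -c "import mmdet3d; print(mmdet3d.__version__)"  # e.g. 0.6.0+2e7045c
   ```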

   > Important: Be sure to remove the `./build` folder if you reinstall mmdet3d with a different CUDA/PyTorch version.

   ```shell
   pip uninstall mmdet3d
   rm -rf ./build
   find . -name "*.so" | xargs rm
   ```

2. Following the above instructions, MMDetection3D is installed in `dev` mode, so any local modifications made to the code take effect without reinstalling it (unless you submit some commits and want to update the version number).

3. If you would like to use `opencv-python-headless` instead of `opencv-python`,
   you can install it before installing MMCV.
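
   For example (a minimal sketch following the order described above, reusing the MMCV install command from step c):

   ```shell
   pip install opencv-python-headless  # install before MMCV
   pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
   ```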

4. Some dependencies are optional. Simply running `pip install -v -e .` will only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions` either install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.
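
   For example, either of the two approaches mentioned above works:

   ```shell
   pip install -r requirements/optional.txt  # install the optional dependencies manually
   # or install MMDetection3D together with the `optional` extra
   pip install -v -e .[optional]
   ```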

   We support spconv 2.0. If spconv 2.0 is installed, the code will use it first, which takes up less GPU memory than the default mmcv spconv. Users can use the following commands to install spconv 2.0:

   ```bash
   pip install cumm-cuxxx
   pip install spconv-cuxxx
   ```

   Where xxx is the CUDA version in the environment.

   For example, using CUDA 10.2, the command will be `pip install cumm-cu102 && pip install spconv-cu102`.

   Supported CUDA versions include 10.2, 11.1, 11.3, and 11.4. Users can also install it by building from source. For more details, please refer to [spconv v2.x](https://github.com/traveller59/spconv).

   We also support Minkowski Engine as a sparse convolution backend. If necessary, please follow the original [installation guide](https://github.com/NVIDIA/MinkowskiEngine#installation) or use `pip`:

   ```shell
   conda install openblas-devel -c anaconda
   pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=/opt/conda/include" --install-option="--blas=openblas"
   ```

5. The code cannot be built in a CPU-only environment (where CUDA is not available) for now.

## Another option: Docker Image

We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection3d/blob/master/docker/Dockerfile) to build an image.

```shell
# build an image with PyTorch 1.6, CUDA 10.1
docker build -t mmdetection3d -f docker/Dockerfile .
```

Run it with

```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```

## A from-scratch setup script

Here is a full script for setting up MMDetection3D with conda.

```shell
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab

# install latest PyTorch prebuilt with the default prebuilt CUDA version (usually the latest)
conda install -c pytorch pytorch torchvision -y

# install mmcv
pip install mmcv-full

# install mmdetection
pip install git+https://github.com/open-mmlab/mmdetection.git

# install mmsegmentation
pip install git+https://github.com/open-mmlab/mmsegmentation.git

# install mmdetection3d
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
pip install -v -e .
```

## Using multiple MMDetection3D versions

The train and test scripts already modify the `PYTHONPATH` to ensure the scripts use the MMDetection3D in the current directory.

To use the default MMDetection3D installed in the environment rather than the one you are working with, you can remove the following line from those scripts:

```shell
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
```
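
If you want to confirm which MMDetection3D installation Python actually picks up, a simple check is to print the package location:

```shell
python -c "import mmdet3d; print(mmdet3d.__file__)"
```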

# Verification

## Verify with point cloud demo

We provide several demo scripts to test a single sample. Pre-trained models can be downloaded from the [model zoo](model_zoo.md). To test single-modality 3D detection on point cloud scenes:

```shell
python demo/pcd_demo.py ${PCD_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} [--device ${GPU_ID}] [--score-thr ${SCORE_THR}] [--out-dir ${OUT_DIR}]
```

Examples:

```shell
python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth
```

If you want to use a `ply` file as input, you can use the following function to convert it to `bin` format. Then you can use the converted `bin` file to run the demo.
Note that you need to install `pandas` and `plyfile` before using this script. This function can also be used for preprocessing `ply` data for training.

```python
import numpy as np
import pandas as pd
from plyfile import PlyData

def convert_ply(input_path, output_path):
    plydata = PlyData.read(input_path)  # read file
    data = plydata.elements[0].data  # read data
    data_pd = pd.DataFrame(data)  # convert to DataFrame
    data_np = np.zeros(data_pd.shape, dtype=np.float64)  # initialize array to store data
    property_names = data[0].dtype.names  # read names of properties
    for i, name in enumerate(
            property_names):  # read data by property
        data_np[:, i] = data_pd[name]
    data_np.astype(np.float32).tofile(output_path)
```

Examples:

```python
convert_ply('./test.ply', './test.bin')
```

If you have point clouds in other formats (`off`, `obj`, etc.), you can use `trimesh` to convert them into `ply`.

```python
import trimesh

def to_ply(input_path, output_path, original_type):
    mesh = trimesh.load(input_path, file_type=original_type)  # read file
    mesh.export(output_path, file_type='ply')  # convert to ply
```

Examples:

```python
to_ply('./test.obj', './test.ply', 'obj')
```

More demos about single/multi-modality and indoor/outdoor 3D detection can be found in [demo](demo.md).

## High-level APIs for testing point clouds

### Synchronous interface

Here is an example of building the model and testing given point clouds.

```python
from mmdet3d.apis import init_model, inference_detector

config_file = 'configs/votenet/votenet_8x8_scannet-3d-18class.py'
checkpoint_file = 'checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth'

# build the model from a config file and a checkpoint file
model = init_model(config_file, checkpoint_file, device='cuda:0')

# test a single point cloud and show the results
point_cloud = 'test.bin'
result, data = inference_detector(model, point_cloud)
# visualize the results and save the results in 'results' folder
model.show_results(data, result, out_dir='results')
```