# Prerequisites

- Linux or macOS (Windows is not currently officially supported)
- Python 3.6+
- PyTorch 1.3+
- CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
- GCC 5+
- [MMCV](https://mmcv.readthedocs.io/en/latest/#installation)
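
As a quick sanity check, you can verify the local toolchain versions (assuming `python`, `gcc` and `nvcc` are already on your `PATH`) with:

```shell
python --version
gcc --version
nvcc --version  # reports the installed CUDA toolkit version
```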


The required versions of MMCV, MMDetection and MMSegmentation for different versions of MMDetection3D are shown below. Please install the correct versions of MMCV, MMDetection and MMSegmentation to avoid installation issues.

| MMDetection3D version | MMDetection version | MMSegmentation version |    MMCV version     |
|:-------------------:|:-------------------:|:-------------------:|:-------------------:|
| master              | mmdet>=2.14.0, <=3.0.0| mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4|
| 0.15.0              | mmdet>=2.14.0, <=3.0.0| mmseg>=0.14.1, <=1.0.0 | mmcv-full>=1.3.8, <=1.4|
| 0.14.0              | mmdet>=2.10.0, <=2.11.0| mmseg==0.14.0 | mmcv-full>=1.3.1, <=1.4|
| 0.13.0              | mmdet>=2.10.0, <=2.11.0| Not required  | mmcv-full>=1.2.4, <=1.4|
| 0.12.0              | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.2.4, <=1.4|
| 0.11.0              | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.2.4, <=1.4|
| 0.10.0              | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.2.4, <=1.4|
| 0.9.0               | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.2.4, <=1.4|
| 0.8.0               | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.1.5, <=1.4|
| 0.7.0               | mmdet>=2.5.0, <=2.11.0 | Not required  | mmcv-full>=1.1.5, <=1.4|
| 0.6.0               | mmdet>=2.4.0, <=2.11.0 | Not required  | mmcv-full>=1.1.3, <=1.2|
| 0.5.0               | 2.3.0                  | Not required  | mmcv-full==1.0.5|
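
To check which of these packages are currently installed in your environment, you can run, for example:

```shell
pip list | grep -E "mmcv|mmdet|mmseg"
```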

# Installation

## Install MMDetection3D

**a. Create a conda virtual environment and activate it.**

```shell
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
```

**b. Install PyTorch and torchvision following the [official instructions](https://pytorch.org/).**

```shell
conda install pytorch torchvision -c pytorch
```

Note: Make sure that your compilation CUDA version and runtime CUDA version match.
You can check the supported CUDA version for precompiled packages on the [PyTorch website](https://pytorch.org/).
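
One quick way to inspect both versions (assuming `nvcc` is on your `PATH`) is:

```shell
nvcc --version                                       # CUDA toolkit used to compile CUDA ops
python -c "import torch; print(torch.version.cuda)"  # CUDA version PyTorch was built with
```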

`E.g. 1` If you have CUDA 10.1 installed under `/usr/local/cuda` and would like to install
PyTorch 1.5, you need to install the prebuilt PyTorch with CUDA 10.1.

```shell
conda install pytorch==1.5.0 cudatoolkit=10.1 torchvision==0.6.0 -c pytorch
```

`E.g. 2` If you have CUDA 9.2 installed under `/usr/local/cuda` and would like to install
PyTorch 1.3.1, you need to install the prebuilt PyTorch with CUDA 9.2.

```shell
conda install pytorch=1.3.1 cudatoolkit=9.2 torchvision=0.4.2 -c pytorch
```

If you build PyTorch from source instead of installing the prebuilt package,
you can use more CUDA versions such as 9.0.

**c. Install [MMCV](https://mmcv.readthedocs.io/en/latest/).**
Installing *mmcv-full* is necessary because MMDetection3D relies on MMDetection, which requires the CUDA ops provided by *mmcv-full*.

`e.g.` The pre-built *mmcv-full* can be installed by running the following command (available versions can be found [here](https://mmcv.readthedocs.io/en/latest/#install-with-pip)):

```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```

Please replace `{cu_version}` and `{torch_version}` in the URL with your desired versions. For example, to install the latest `mmcv-full` with `CUDA 11` and `PyTorch 1.7.0`, use the following command:

```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html
```

See [here](https://github.com/open-mmlab/mmcv#install-with-pip) for different versions of MMCV compatible with different PyTorch and CUDA versions.
Optionally, you could also build the full version from source:

```shell
git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
MMCV_WITH_OPS=1 pip install -e .  # package mmcv-full will be installed after this step
cd ..
```

Or directly run

```shell
pip install mmcv-full
```

**d. Install [MMDetection](https://github.com/open-mmlab/mmdetection).**

```shell
pip install mmdet==2.14.0
```

Optionally, you could also build MMDetection from source in case you want to modify the code:

```shell
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
git checkout v2.14.0  # switch to v2.14.0 branch
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
```

**e. Install [MMSegmentation](https://github.com/open-mmlab/mmsegmentation).**

```shell
pip install mmsegmentation==0.14.1
```

Optionally, you could also build MMSegmentation from source in case you want to modify the code:

```shell
git clone https://github.com/open-mmlab/mmsegmentation.git
cd mmsegmentation
git checkout v0.14.1  # switch to v0.14.1 branch
pip install -e .  # or "python setup.py develop"
```

**f. Clone the MMDetection3D repository.**

```shell
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
```

**g. Install build requirements and then install MMDetection3D.**

```shell
pip install -v -e .  # or "python setup.py develop"
```

Note:

1. The git commit id will be written to the version number in step g, e.g. 0.6.0+2e7045c (see the snippet after these notes for a quick way to check the installed version). The version will also be saved in trained models.
It is recommended that you run step g each time you pull updates from GitHub. If C++/CUDA code is modified, then this step is compulsory.

    > Important: Be sure to remove the `./build` folder if you reinstall mmdet3d with a different CUDA/PyTorch version.

    ```shell
    pip uninstall mmdet3d
    rm -rf ./build
    find . -name "*.so" | xargs rm
    ```

2. Following the above instructions, MMDetection3D is installed in `dev` mode, so any local modifications made to the code will take effect without reinstalling it (unless you submit some commits and want to update the version number).

3. If you would like to use `opencv-python-headless` instead of `opencv-python`,
you can install it before installing MMCV.

4. Some dependencies are optional. Simply running `pip install -v -e .` will only install the minimum runtime requirements. To use optional dependencies like `albumentations` and `imagecorruptions` either install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -v -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, and `optional`.

5. The code cannot be built in a CPU-only environment (where CUDA is not available) for now.
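
To confirm the installation and print the version string (including the commit id mentioned in note 1), a quick check is:

```shell
python -c "import mmdet3d; print(mmdet3d.__version__)"  # e.g. 0.15.0+<commit id>
```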

## Another option: Docker Image

We provide a [Dockerfile](https://github.com/open-mmlab/mmdetection3d/blob/master/docker/Dockerfile) to build an image.

```shell
# build an image with PyTorch 1.6, CUDA 10.1
docker build -t mmdetection3d docker/
```

Run it with

```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```

## A from-scratch setup script

Here is a full script for setting up MMDetection3D with conda.

```shell
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab

# install latest pytorch prebuilt with the default prebuilt CUDA version (usually the latest)
conda install -c pytorch pytorch torchvision -y

# install mmcv
pip install mmcv-full

# install mmdetection
pip install git+https://github.com/open-mmlab/mmdetection.git

# install mmsegmentation
pip install git+https://github.com/open-mmlab/mmsegmentation.git

# install mmdetection3d
git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d
pip install -v -e .
```

## Using multiple MMDetection3D versions

The train and test scripts already modify the `PYTHONPATH` to ensure that the scripts use the MMDetection3D in the current directory.

To use the default MMDetection3D installed in the environment rather than the one you are working with, you can remove the following line from those scripts:

```shell
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
```
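
To double-check which copy of MMDetection3D is actually being imported (a quick sanity check, not part of the provided scripts), you can run:

```shell
python -c "import mmdet3d; print(mmdet3d.__file__)"
```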

# Verification

## Verify with point cloud demo

We provide several demo scripts to test a single sample. Pre-trained models can be downloaded from the [model zoo](model_zoo.md). To test single-modality 3D detection on point cloud scenes:

```shell
python demo/pcd_demo.py ${PCD_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} [--device ${GPU_ID}] [--score-thr ${SCORE_THR}] [--out-dir ${OUT_DIR}]
```

Examples:

```shell
python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth
```

If you want to input a `ply` file, you can use the following function to convert it to `bin` format. Then you can use the converted `bin` file to run the demo.
Note that you need to install `pandas` and `plyfile` before using this script. This function can also be used for data preprocessing when training on `ply` data.

```python
import numpy as np
import pandas as pd
from plyfile import PlyData

def convert_ply(input_path, output_path):
    plydata = PlyData.read(input_path)  # read file
    data = plydata.elements[0].data  # read data
    data_pd = pd.DataFrame(data)  # convert to DataFrame
    data_np = np.zeros(data_pd.shape, dtype=np.float64)  # initialize array to store data
    property_names = data[0].dtype.names  # read names of properties
    for i, name in enumerate(
            property_names):  # read data by property
        data_np[:, i] = data_pd[name]
    data_np.astype(np.float32).tofile(output_path)
```

Examples:

```python
convert_ply('./test.ply', './test.bin')
```

If you have point clouds in other formats (`off`, `obj`, etc.), you can use trimesh to convert them into `ply`.

```python
import trimesh

def to_ply(input_path, output_path, original_type):
    mesh = trimesh.load(input_path, file_type=original_type)  # read file
    mesh.export(output_path, file_type='ply')  # convert to ply
```

Examples:

```python
to_ply('./test.obj', './test.ply', 'obj')
```

More demos about single/multi-modality and indoor/outdoor 3D detection can be found in [demo](demo.md).

## High-level APIs for testing point clouds

### Synchronous interface

Here is an example of building the model and testing given point clouds.

```python
from mmdet3d.apis import init_model, inference_detector

config_file = 'configs/votenet/votenet_8x8_scannet-3d-18class.py'
checkpoint_file = 'checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth'

# build the model from a config file and a checkpoint file
model = init_model(config_file, checkpoint_file, device='cuda:0')

# test a single point cloud sample and show the results
point_cloud = 'test.bin'
result, data = inference_detector(model, point_cloud)
# visualize the results and save the results in 'results' folder
model.show_results(data, result, out_dir='results')
```