# SOLO: Segmenting Objects by Locations

This project hosts the code for implementing the SOLO algorithms for instance segmentation.

> [**SOLO: Segmenting Objects by Locations**](https://arxiv.org/abs/1912.04488),            
> Xinlong Wang, Tao Kong, Chunhua Shen, Yuning Jiang, Lei Li        
> *arXiv preprint ([arXiv 1912.04488](https://arxiv.org/abs/1912.04488))*   

> [**SOLOv2: Dynamic, Faster and Stronger**](https://arxiv.org/abs/2003.10152),            
> Xinlong Wang, Rufeng Zhang, Tao Kong, Lei Li, Chunhua Shen        
> *arXiv preprint ([arXiv 2003.10152](https://arxiv.org/abs/2003.10152))*  

More code and models will be released soon. Stay tuned.

## Highlights
- **Totally box-free:** SOLO is completely box-free, so it is not restricted by (anchor) box locations and scales, and it naturally benefits from the inherent advantages of FCNs.
- **Direct instance segmentation:** Our method takes an image as input and directly outputs instance masks with their corresponding class probabilities, in a fully convolutional, box-free and grouping-free paradigm.
- **State-of-the-art performance:** Our best single model, based on ResNet-101 and deformable convolutions, achieves **41.7%** AP on COCO test-dev (without multi-scale testing). A light-weight version of SOLOv2 runs at **31.3** FPS on a single V100 GPU and yields **37.1%** AP.

## Updates
   - Light-weight models and R101-based models are available. (31/03/2020) 
   - SOLOv1 is available. Code and trained models of SOLO and Decoupled SOLO are released. (28/03/2020)

## Installation
This implementation is based on [mmdetection](https://github.com/open-mmlab/mmdetection) (v1.0.0). Please refer to [INSTALL.md](docs/INSTALL.md) for installation and dataset preparation.

## Models
For your convenience, we provide the following trained models on COCO (more models are coming soon).

Model | Multi-scale training | Testing time / im | AP (minival) | Link
--- |:---:|:---:|:---:|:---:
SOLO_R50_1x | No | 77ms | 32.9 | [download](https://cloudstor.aarnet.edu.au/plus/s/nTOgDldI4dvDrPs/download)
SOLO_R50_3x | Yes | 77ms |  35.8 | [download](https://cloudstor.aarnet.edu.au/plus/s/x4Fb4XQ0OmkBvaQ/download)
SOLO_R101_3x | Yes | 86ms |  37.1 | [download](https://cloudstor.aarnet.edu.au/plus/s/WxOFQzHhhKQGxDG/download)
Decoupled_SOLO_R50_1x | No | 85ms | 33.9 | [download](https://cloudstor.aarnet.edu.au/plus/s/RcQyLrZQeeS6JIy/download)
Decoupled_SOLO_R50_3x | Yes | 85ms | 36.4 | [download](https://cloudstor.aarnet.edu.au/plus/s/dXz11J672ax0Z1Q/download)
Decoupled_SOLO_R101_3x | Yes | 92ms | 37.9 | [download](https://cloudstor.aarnet.edu.au/plus/s/BRhKBimVmdFDI9o/download)

**Light-weight models:**

Model | Multi-scale training | Testing time / im | AP (minival) | Link
--- |:---:|:---:|:---:|:---:
DECOUPLED_SOLO_LIGHT_R50_3x | Yes | 29ms | 33.0 | [download](https://cloudstor.aarnet.edu.au/plus/s/d0zuZgCnAjeYvod/download)
DECOUPLED_SOLO_LIGHT_DCN_R50_3x | Yes | 36ms | 35.0 | [download](https://cloudstor.aarnet.edu.au/plus/s/QvWhOTmCA5pFj6E/download)

## Usage

### A quick demo

Once the installation is done, you can download the provided models and use [inference_demo.py](demo/inference_demo.py) to run a quick demo.
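The sketch below shows what such a demo typically looks like with mmdetection v1.0's high-level inference API. The config/checkpoint paths, the test image name and the `show_result_ins` mask-visualization helper are assumptions here; [inference_demo.py](demo/inference_demo.py) is the authoritative version.

```
from mmdet.apis import init_detector, inference_detector, show_result_ins

# Placeholder paths: point these at your own config and a downloaded checkpoint.
config_file = 'configs/solo/solo_r50_fpn_8gpu_1x.py'
checkpoint_file = 'SOLO_R50_FPN_1x.pth'

# Build the model from the config file and load the trained weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single image and save the predicted instance masks.
result = inference_detector(model, 'demo.jpg')
show_result_ins('demo.jpg', result, model.CLASSES, score_thr=0.25,
                out_file='demo_out.jpg')
```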

### Train with multiple GPUs
    ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM}

    Example: 
    ./tools/dist_train.sh configs/solo/solo_r50_fpn_8gpu_1x.py  8

### Train with single GPU
    python tools/train.py ${CONFIG_FILE}
    
    Example:
    python tools/train.py configs/solo/solo_r50_fpn_8gpu_1x.py

### Testing
    python tools/test_ins.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --show --out  ${OUTPUT_FILE} --eval segm
    
    Example: 
    python tools/test_ins.py configs/solo/solo_r50_fpn_8gpu_1x.py  SOLO_R50_FPN_1x.pth --show --out  results_solo.pkl --eval segm
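If you want to inspect the raw predictions saved in `${OUTPUT_FILE}`, they can be loaded back with mmcv. The following is a minimal sketch, assuming the file was produced by the testing command above; the exact per-image layout depends on the SOLO head, so it is easiest to explore it interactively.

```
import mmcv

# Load the pickled results written by tools/test_ins.py.
results = mmcv.load('results_solo.pkl')
# One entry per test image (assumption based on mmdetection's testing convention).
print('images evaluated:', len(results))
print(type(results[0]))
```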

### Visualization

    python tools/test_ins_vis.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --show --save_dir  ${SAVE_DIR}
    
    Example: 
    python tools/test_ins_vis.py configs/solo/solo_r50_fpn_8gpu_1x.py  SOLO_R50_FPN_1x.pth --show --save_dir  work_dirs/vis_solo

## Contributing to the project
Any pull requests or issues are welcome.

## Citations
Please consider citing our papers in your publications if this project helps your research. The BibTeX references are as follows.
```
@article{wang2019solo,
  title={SOLO: Segmenting Objects by Locations},
  author={Wang, Xinlong and Kong, Tao and Shen, Chunhua and Jiang, Yuning and Li, Lei},
  journal={arXiv preprint arXiv:1912.04488},
  year={2019}
}
```

```
@article{wang2020solov2,
  title={SOLOv2: Dynamic, Faster and Stronger},
  author={Wang, Xinlong and Zhang, Rufeng and  Kong, Tao and Li, Lei and Shen, Chunhua},
  journal={arXiv preprint arXiv:2003.10152},
  year={2020}
}
```