# Submanifold Sparse Convolutional Networks

This is the PyTorch library for training Submanifold Sparse Convolutional Networks.

## Spatial sparsity

This library brings [Spatially-sparse convolutional networks](https://github.com/btgraham/SparseConvNet) to PyTorch. Moreover, it introduces **Submanifold Sparse Convolutions**, which can be used to build computationally efficient sparse VGG/ResNet/DenseNet-style networks.

With regular 3x3 convolutions, the set of active (non-zero) sites grows rapidly:<br />
![submanifold](img/i.gif) <br />
With **Submanifold Sparse Convolutions**, the set of active sites is unchanged. Active sites look at their active neighbors (green); non-active sites (red) have no computational overhead: <br />
![submanifold](img/img.gif) <br />
Stacking Submanifold Sparse Convolutions to build VGG- and ResNet-style ConvNets lets information flow along lines or surfaces of active points.<br />
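This behaviour can be sketched in a few lines of NumPy (an illustrative toy, not the library's data representation): repeated regular 3x3 convolutions dilate the active set of a line, while submanifold convolutions leave it untouched.

```python
import numpy as np

def dilate(active):
    """Active set after one regular 3x3 convolution: every site
    within Chebyshev distance 1 of an active site becomes active."""
    grid = np.zeros((40, 40), dtype=bool)
    for x, y in active:
        grid[x - 1:x + 2, y - 1:y + 2] = True
    return {(int(x), int(y)) for x, y in np.argwhere(grid)}

line = {(i, i) for i in range(10, 20)}   # a 1D curve in 2D space

regular = line
for _ in range(3):                       # three regular 3x3 conv layers
    regular = dilate(regular)

# A stack of submanifold convolutions keeps the active set equal to `line`.
print(len(line), len(regular))           # → 10 166
```

After only three layers, the ten active sites of the curve have dilated into a band of 166 sites; the submanifold variant still touches only the original ten.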

Disconnected components don't communicate at first, although they will merge as strided operations, either pooling or strided convolutions, are applied. Additionally, adding ConvolutionWithStride2-SubmanifoldConvolution-DeconvolutionWithStride2 paths to the network allows disjoint active sites to communicate; see the 'VGG+' networks in the paper.<br />
![Strided Convolution, convolution, deconvolution](img/img_stridedConv_conv_deconv.gif) <br />
![Strided Convolution, convolution, deconvolution](img/img_stridedConv_conv_deconv.png) <br />
From left: **(i)** an active point is highlighted; a convolution with stride 2 sees the green active sites **(ii)** and produces output **(iii)**, in which the 'children' of the highlighted active point from (i) are highlighted; a submanifold sparse convolution sees the green active sites **(iv)** and produces output **(v)**; a deconvolution operation sees the green active sites **(vi)** and produces output **(vii)**.
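The merging effect of strided operations can be sketched the same way (illustrative only): two components separated by one inactive site at full resolution become neighbours after a single stride-2 downsample, so a submanifold convolution at the coarser scale lets information flow between them.

```python
# Two active components separated by an inactive site (y = 4).
a = {(2, 2), (2, 3)}
b = {(2, 5), (2, 6)}

def downsample(active):
    # Coordinate map of a stride-2 operation: each output site
    # covers a 2x2 block of input sites.
    return {(x // 2, y // 2) for (x, y) in active}

def adjacent(p, q):
    # 3x3-neighbourhood adjacency (Chebyshev distance 1).
    return max(abs(p[0] - q[0]), abs(p[1] - q[1])) == 1

# At full resolution the components do not touch...
fine_touch = any(adjacent(p, q) for p in a for q in b)
# ...but after one stride-2 downsample they do.
coarse_touch = any(adjacent(p, q) for p in downsample(a) for q in downsample(b))
print(fine_touch, coarse_touch)   # → False True
```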

## Dimensionality and 'submanifolds'

SparseConvNet supports input with different numbers of spatial/temporal dimensions.
Higher-dimensional input is more likely to be sparse because of the 'curse of dimensionality'. <br />

Dimension|Name in 'torch.nn'|Use cases
:--:|:--:|:--:
1|TemporalConvolution|Text, audio
2|SpatialConvolution|Lines in 2D space, e.g. handwriting
3|VolumetricConvolution|Lines and surfaces in 3D space or (2+1)D space-time
4| - |Lines, etc., in (3+1)D space-time

We use the term 'submanifold' to refer to input data that is sparse because it has a lower effective dimension than the space in which it lives, for example a one-dimensional curve in 2+ dimensional space, or a two-dimensional surface in 3+ dimensional space.

In theory, the library supports up to 10 dimensions. In practice, ConvNets with size-3 convolutions in dimension 5+ may be impractical, as the number of parameters per convolution grows exponentially with the dimension. Possible solutions include factorizing the convolutions (e.g. 3x1x1x..., 1x3x1x..., etc.) or switching to a hyper-tetrahedral lattice (see [Sparse 3D convolutional neural networks](http://arxiv.org/abs/1505.02890)).
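For illustration, the parameter counts behind this claim: a full size-3 filter mapping 16 to 16 feature planes has 3^d spatial weights, while a hypothetical factorization into d one-axis filters (3x1x1..., 1x3x1..., etc.) grows only linearly in d.

```python
in_planes = out_planes = 16

# (dense 3^d filter, factorized d one-axis size-3 filters) per dimension d
counts = {d: (3 ** d * in_planes * out_planes,
              d * 3 * in_planes * out_planes)
          for d in range(1, 7)}

for d, (dense, factorized) in counts.items():
    print(d, dense, factorized)
# e.g. d=3: 6912 vs 2304 weights; d=6: 186624 vs 4608
```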

## Hello World
SparseConvNets can be built either by [defining a function that inherits from torch.nn.Module](examples/Assamese_handwriting/VGGplus.py) or by stacking modules in a [sparseconvnet.Sequential](PyTorch/sparseconvnet/sequential.py):
```python
import torch
import sparseconvnet as scn

# Use the GPU if there is one, otherwise CPU
use_gpu = torch.cuda.is_available()

model = scn.Sequential().add(
    scn.SparseVggNet(2, 1,
                     [['C',  8], ['C',  8], ['MP', 3, 2],
                      ['C', 16], ['C', 16], ['MP', 3, 2],
                      ['C', 24], ['C', 24], ['MP', 3, 2]])
).add(
    scn.SubmanifoldConvolution(2, 24, 32, 3, False)
).add(
    scn.BatchNormReLU(32)
).add(
    scn.SparseToDense(2, 32)
)
if use_gpu:
    model.cuda()

# output will be 10x10
inputSpatialSize = model.input_spatial_size(torch.LongTensor([10, 10]))
input = scn.InputBatch(2, inputSpatialSize)

msg = [
    " X   X  XXX  X    X    XX     X       X   XX   XXX   X    XXX   ",
    " X   X  X    X    X   X  X    X       X  X  X  X  X  X    X  X  ",
    " XXXXX  XX   X    X   X  X    X   X   X  X  X  XXX   X    X   X ",
    " X   X  X    X    X   X  X     X X X X   X  X  X  X  X    X  X  ",
    " X   X  XXX  XXX  XXX  XX       X   X     XX   X  X  XXX  XXX   "]

# Add a sample using set_location
input.add_sample()
for y, line in enumerate(msg):
    for x, c in enumerate(line):
        if c == 'X':
            location = torch.LongTensor([x, y])
            featureVector = torch.FloatTensor([1])
            input.set_location(location, featureVector, 0)

# Add a sample using set_locations
input.add_sample()
locations = []
features = []
for y, line in enumerate(msg):
    for x, c in enumerate(line):
        if c == 'X':
            locations.append([x, y])
            features.append([1])
locations = torch.LongTensor(locations)
features = torch.FloatTensor(features)
input.set_locations(locations, features, 0)

model.train()
if use_gpu:
    input.cuda()
output = model.forward(input)

# Output is 2x32x10x10: our minibatch has 2 samples, the network has 32 output
# feature planes, and 10x10 is the spatial size of the output.
print(output.size(), output.type())
```
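For intuition, here is a minimal pure-Python sketch of the rule a 2D submanifold convolution applies: active sites live in a hash map from coordinates to feature vectors, and outputs are computed only at already-active sites from their active neighbours. This is not the library's implementation, and `submanifold_conv2d` is a hypothetical helper, not part of the scn API.

```python
import numpy as np

def submanifold_conv2d(active_features, weights):
    """A 3x3 submanifold convolution over a hash map of active sites.

    active_features: dict {(x, y): np.ndarray of shape [C_in]}
    weights: np.ndarray of shape [3, 3, C_in, C_out]

    Output is computed only at the already-active sites; inactive
    neighbours are simply absent, so they cost nothing and the
    active set never grows."""
    out = {}
    for (x, y) in active_features:
        acc = np.zeros(weights.shape[-1])
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nb = active_features.get((x + dx, y + dy))
                if nb is not None:          # gather active neighbours only
                    acc += nb @ weights[dx + 1, dy + 1]
        out[(x, y)] = acc
    return out

rng = np.random.default_rng(0)
sites = {(i, i): rng.standard_normal(4) for i in range(5)}  # a diagonal line
w = rng.standard_normal((3, 3, 4, 8))
out = submanifold_conv2d(sites, w)
print(set(out) == set(sites))   # active set unchanged → True
```

Because output sites coincide with input sites, stacking such layers never densifies the input, which is what makes deep sparse VGG/ResNet-style networks affordable.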


## Examples

Examples in the examples folder include:
* [Assamese handwriting recognition](https://archive.ics.uci.edu/ml/datasets/Online+Handwritten+Assamese+Characters+Dataset#)
* [Chinese handwriting recognition](http://www.nlpr.ia.ac.cn/databases/handwriting/Online_database.html)
* [3D Segmentation](https://shapenet.cs.stanford.edu/iccv17/) using ShapeNet Core-55

Data is downloaded and preprocessed on the first run, e.g.
```shell
cd examples/Assamese_handwriting
python VGGplus.py
```

## Setup

Tested with Ubuntu 16.04, Python 3.6 in [Miniconda](https://conda.io/miniconda.html) and PyTorch 1.0.

```shell
conda install pytorch-nightly -c pytorch # See https://pytorch.org/get-started/locally/
conda install google-sparsehash -c bioconda   # OR apt-get install libsparsehash-dev
conda install -c anaconda pillow
git clone git@github.com:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash develop.sh
```
To run the examples you may also need to install unrar:
```shell
apt-get install unrar
```

## License
SparseConvNet is licensed under the Attribution-NonCommercial 4.0 International license, as found in the LICENSE file.

## Links
1. [ICDAR 2013 Chinese Handwriting Recognition Competition](http://www.nlpr.ia.ac.cn/events/CHRcompetition2013/competition/Home.html) First place in task 3, with a test error of 2.61%. Human performance on the test set was 4.81%. [Report](http://www.nlpr.ia.ac.cn/events/CHRcompetition2013/competition/ICDAR%202013%20CHR%20competition.pdf)
2. [Spatially-sparse convolutional neural networks, 2014](http://arxiv.org/abs/1409.6070) SparseConvNets for Chinese handwriting recognition
3. [Fractional max-pooling, 2014](http://arxiv.org/abs/1412.6071) A SparseConvNet with fractional max-pooling achieves an error rate of 3.47% for CIFAR-10.
4. [Sparse 3D convolutional neural networks, BMVC 2015](http://arxiv.org/abs/1505.02890) SparseConvNets for 3D object recognition and (2+1)D video action recognition.
5. [Kaggle plankton recognition competition, 2015](https://www.kaggle.com/c/datasciencebowl) Third place. The competition solution is being adapted for research purposes in [EcoTaxa](http://ecotaxa.obs-vlfr.fr/).
6. [Kaggle Diabetic Retinopathy Detection, 2015](https://www.kaggle.com/c/diabetic-retinopathy-detection/) First place in the Kaggle Diabetic Retinopathy Detection competition.
7. [Submanifold Sparse Convolutional Networks, 2017](https://arxiv.org/abs/1706.01307) Introduces deep 'submanifold' SparseConvNets.
8. [Workshop on Learning to See from 3D Data, 2017](https://shapenet.cs.stanford.edu/iccv17workshop/) First place in the [semantic segmentation](https://shapenet.cs.stanford.edu/iccv17/) competition. [Report](https://arxiv.org/pdf/1710.06104)
9. [3D Semantic Segmentation with Submanifold Sparse Convolutional Networks, 2017](https://arxiv.org/abs/1711.10275) Semantic segmentation for the ShapeNet Core55 and NYU-DepthV2 datasets, CVPR 2018
10. [ScanNet 3D semantic label benchmark 2018](http://kaldir.vc.in.tum.de/scannet_benchmark/semantic_label_3d) 0.726 average IOU.

## Citations

If you find this code useful in your research then please cite:

**[3D Semantic Segmentation with Submanifold Sparse Convolutional Networks, CVPR 2018](https://arxiv.org/abs/1711.10275)** <br />
[Benjamin Graham](https://research.fb.com/people/graham-benjamin/), <br />
[Martin Engelcke](http://ori.ox.ac.uk/mrg_people/martin-engelcke/), <br />
[Laurens van der Maaten](https://lvdmaaten.github.io/) <br />

```
@article{3DSemanticSegmentationWithSubmanifoldSparseConvNet,
  title={3D Semantic Segmentation with Submanifold Sparse Convolutional Networks},
  author={Graham, Benjamin and Engelcke, Martin and van der Maaten, Laurens},
  journal={CVPR},
  year={2018}
}
```

and/or

**[Submanifold Sparse Convolutional Networks, 2017](https://arxiv.org/abs/1706.01307)** <br />
[Benjamin Graham](https://research.fb.com/people/graham-benjamin/), <br />
[Laurens van der Maaten](https://lvdmaaten.github.io/) <br />

```
@article{SubmanifoldSparseConvNet,
  title={Submanifold Sparse Convolutional Networks},
  author={Graham, Benjamin and van der Maaten, Laurens},
  journal={arXiv preprint arXiv:1706.01307},
  year={2017}
}
```