OpenDAS / vision
Commit face20bd (Unverified)
Authored May 21, 2019 by Francisco Massa; committed by GitHub on May 21, 2019
Parent: 041b8ba1

Add documentation for ShuffleNet plus minor doc fixes (#932)

Showing 4 changed files with 47 additions and 4 deletions:
docs/source/models.rst                +4  -1
torchvision/datasets/cityscapes.py    +3  -0
torchvision/models/shufflenetv2.py    +36 -0
torchvision/transforms/transforms.py  +4  -3
docs/source/models.rst
@@ -171,7 +171,10 @@ GoogLeNet

 ShuffleNet v2
 -------------

-.. autofunction:: shufflenet
+.. autofunction:: shufflenet_v2_x0_5
+.. autofunction:: shufflenet_v2_x1_0
+.. autofunction:: shufflenet_v2_x1_5
+.. autofunction:: shufflenet_v2_x2_0

 MobileNet v2
 -------------
torchvision/datasets/cityscapes.py
@@ -27,6 +27,7 @@ class Cityscapes(VisionDataset):

    Get semantic segmentation target

    .. code-block:: python

        dataset = Cityscapes('./data/cityscapes', split='train', mode='fine',
                             target_type='semantic')

@@ -35,6 +36,7 @@ class Cityscapes(VisionDataset):

    Get multiple targets

    .. code-block:: python

        dataset = Cityscapes('./data/cityscapes', split='train', mode='fine',
                             target_type=['instance', 'color', 'polygon'])

@@ -43,6 +45,7 @@ class Cityscapes(VisionDataset):

    Validate on the "coarse" set

    .. code-block:: python

        dataset = Cityscapes('./data/cityscapes', split='val', mode='coarse',
                             target_type='semantic')
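The docstring examples above show that `target_type` accepts either a single string or a list of strings (in which case `__getitem__` returns a tuple of targets). As an illustrative pure-Python sketch of that documented behavior, not torchvision's actual implementation, the dispatch can be mimicked like this (`resolve_targets` and `targets_by_type` are hypothetical names):

```python
def resolve_targets(target_type, targets_by_type):
    """Mimic the documented Cityscapes behavior: a single string
    selects one target; a list of strings yields a tuple of the
    corresponding targets, in the same order as requested."""
    if isinstance(target_type, str):
        return targets_by_type[target_type]
    return tuple(targets_by_type[t] for t in target_type)

# Single target, as in target_type='semantic'
single = resolve_targets('semantic', {'semantic': 'seg_mask', 'color': 'rgb'})

# Multiple targets, as in target_type=['instance', 'color']
multi = resolve_targets(['instance', 'color'],
                        {'instance': 'inst_mask', 'color': 'rgb'})
```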
torchvision/models/shufflenetv2.py
@@ -146,20 +146,56 @@ def _shufflenetv2(arch, pretrained, progress, *args, **kwargs):

def shufflenet_v2_x0_5(pretrained=False, progress=True, **kwargs):
    """
    Constructs a ShuffleNetV2 with 0.5x output channels, as described in
    `"ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"
    <https://arxiv.org/abs/1807.11164>`_.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _shufflenetv2('shufflenetv2_x0.5', pretrained, progress,
                         [4, 8, 4], [24, 48, 96, 192, 1024], **kwargs)


def shufflenet_v2_x1_0(pretrained=False, progress=True, **kwargs):
    """
    Constructs a ShuffleNetV2 with 1.0x output channels, as described in
    `"ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"
    <https://arxiv.org/abs/1807.11164>`_.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _shufflenetv2('shufflenetv2_x1.0', pretrained, progress,
                         [4, 8, 4], [24, 116, 232, 464, 1024], **kwargs)


def shufflenet_v2_x1_5(pretrained=False, progress=True, **kwargs):
    """
    Constructs a ShuffleNetV2 with 1.5x output channels, as described in
    `"ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"
    <https://arxiv.org/abs/1807.11164>`_.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _shufflenetv2('shufflenetv2_x1.5', pretrained, progress,
                         [4, 8, 4], [24, 176, 352, 704, 1024], **kwargs)


def shufflenet_v2_x2_0(pretrained=False, progress=True, **kwargs):
    """
    Constructs a ShuffleNetV2 with 2.0x output channels, as described in
    `"ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design"
    <https://arxiv.org/abs/1807.11164>`_.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _shufflenetv2('shufflenetv2_x2.0', pretrained, progress,
                         [4, 8, 4], [24, 244, 488, 976, 2048], **kwargs)
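All four constructors delegate to `_shufflenetv2` with the same stage repeats `[4, 8, 4]` but different per-stage channel widths. The operation that gives the architecture its name is the channel shuffle, which reshapes the channel dimension into (groups, channels_per_group) and reads it back transposed so that information mixes across groups. As an index-only sketch of that permutation, not torchvision's tensor implementation:

```python
def channel_shuffle_order(channels, groups):
    """Return channel identifiers in shuffled order: view the flat
    list as a (groups, channels_per_group) grid, then read it
    column-major, i.e. transpose and flatten."""
    n = len(channels)
    assert n % groups == 0, "channel count must be divisible by groups"
    per_group = n // groups
    return [channels[g * per_group + c]
            for c in range(per_group)   # walk positions within a group
            for g in range(groups)]     # interleave across groups

# Six channels in two groups: [0,1,2 | 3,4,5] interleaves to [0,3,1,4,2,5]
shuffled = channel_shuffle_order([0, 1, 2, 3, 4, 5], groups=2)
```

Applying the shuffle twice with the same group count does not generally restore the original order; in ShuffleNetV2 a single shuffle after each unit's channel split is enough to mix the two branches.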
torchvision/transforms/transforms.py
@@ -787,6 +787,7 @@ class LinearTransformation(object):

    whitening transformation: Suppose X is a column vector zero-centered data.
    Then compute the data covariance matrix [D x D] with torch.mm(X.t(), X),
    perform SVD on this matrix and pass it as transformation_matrix.

    Args:
        transformation_matrix (Tensor): tensor [D x D], D = C x H x W
        mean_vector (Tensor): tensor [D], D = C x H x W
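Per the docstring, `LinearTransformation` flattens the image to a length-D vector, subtracts `mean_vector`, and multiplies by `transformation_matrix`. A minimal pure-Python sketch of that application step (assuming the centered row vector is multiplied on the right by the matrix; this is an illustration, not torchvision's tensor code):

```python
def apply_linear_transformation(flat, mean_vector, matrix):
    """Apply out = (flat - mean_vector) @ matrix to plain Python
    lists, where matrix is D x D given as a list of rows."""
    centered = [x - m for x, m in zip(flat, mean_vector)]
    d = len(centered)
    # Row vector times matrix: out[j] = sum_i centered[i] * matrix[i][j]
    return [sum(centered[i] * matrix[i][j] for i in range(d))
            for j in range(d)]

# With the identity matrix, the transform just centers the data.
identity = [[1.0, 0.0], [0.0, 1.0]]
out = apply_linear_transformation([2.0, 4.0], [1.0, 1.0], identity)
```

For actual whitening, the matrix would come from the SVD of the covariance matrix described above; the identity matrix here only demonstrates the mechanics.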