Commit 6f3c5f1c authored by limm

support v1.4.0

parent 6f674c7e
fileio
-------
.. automodule:: mmcv.fileio
    :members:

image
------
.. automodule:: mmcv.image
    :members:

video
------
.. automodule:: mmcv.video
    :members:

arraymisc
---------
.. automodule:: mmcv.arraymisc
    :members:

visualization
--------------
.. automodule:: mmcv.visualization
    :members:

utils
-----
.. automodule:: mmcv.utils
    :members:

cnn
----
.. automodule:: mmcv.cnn
    :members:

runner
------
.. automodule:: mmcv.runner
    :members:

ops
------
.. automodule:: mmcv.ops
    :members:
../../CONTRIBUTING.md
\ No newline at end of file
## Pull Request (PR)
### What is PR
`PR` is the abbreviation of `Pull Request`. Here's the definition of `PR` in the [official documentation](https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests) of GitHub.
> Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch.
### Basic Workflow
1. Get the most recent codebase
2. Checkout a new branch from the master branch
3. Commit your changes
4. Push your changes and create a PR
5. Discuss and review your code
6. Merge your branch to the master branch
### Procedures in detail
1. Get the most recent codebase
+ When you work on your first PR
- Fork the OpenMMLab repository: click the **fork** button at the top right corner of the GitHub page
![avatar](../_static/community/1.png)
- Clone forked repository to local
```bash
git clone git@github.com:XXX/mmcv.git
```
- Add source repository to upstream
```bash
git remote add upstream git@github.com:open-mmlab/mmcv
```
+ After your first PR
- Checkout master branch of the local repository and pull the latest master branch of the source repository
```bash
git checkout master
git pull upstream master
```
2. Checkout a new branch from the master branch
```bash
git checkout -b branchname
```
```{tip}
To keep the commit history clean, we strongly recommend you check out the master branch before creating a new branch.
```
3. Commit your changes
```bash
# coding
git add [files]
git commit -m 'messages'
```
4. Push your changes to the forked repository and create a PR
+ Push the branch to your forked remote repository
```bash
git push origin branchname
```
+ Create a PR
![avatar](../_static/community/2.png)
+ Revise the PR message template to describe your motivation and the modifications made in this PR. You can also link the related issue to the PR manually in the PR message (for more information, check out the [official guidance](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)).
5. Discuss and review your code
+ After creating a pull request, you can ask a specific person to review the changes you've proposed
![avatar](../_static/community/3.png)
+ Modify your code according to the reviewers' suggestions and then push your changes
6. Merge your branch to the master branch and delete the branch
```bash
git branch -d branchname # delete local branch
git push origin --delete branchname # delete remote branch
```
### PR Specs
1. Use the [pre-commit](https://pre-commit.com) hook to avoid code style issues
2. One short-lived branch should be matched with only one PR
3. Accomplish a detailed change in one PR and avoid large PRs
>- Bad: Support Faster R-CNN
>- Acceptable: Add a box head to Faster R-CNN
>- Good: Add a parameter to box head to support custom conv-layer number
4. Provide clear and meaningful commit messages
5. Provide a clear and meaningful PR description
>- The task name should be clarified in the title. The general format is: [Prefix] Short description of the PR (Suffix)
>- Prefix: new feature [Feature], bug fix [Fix], documentation-related [Docs], work in progress [WIP] (which will not be reviewed for the time being)
>- Introduce the main changes, results, and influences on other modules in the short description
>- Associate related issues and pull requests with a milestone
@@ -15,19 +15,21 @@ import os
 import sys
 import pytorch_sphinx_theme
+from m2r import MdInclude
+from recommonmark.transform import AutoStructify
 from sphinx.builders.html import StandaloneHTMLBuilder
-sys.path.insert(0, os.path.abspath('../..'))
+sys.path.insert(0, os.path.abspath('..'))
-version_file = '../../mmcv/version.py'
+version_file = '../mmcv/version.py'
-with open(version_file) as f:
+with open(version_file, 'r') as f:
     exec(compile(f.read(), version_file, 'exec'))
 __version__ = locals()['__version__']
 # -- Project information -----------------------------------------------------
 project = 'mmcv'
-copyright = '2018-2022, OpenMMLab'
+copyright = '2018-2021, OpenMMLab'
 author = 'MMCV Authors'
 # The short X.Y version
@@ -47,8 +49,6 @@ release = __version__
 extensions = [
     'sphinx.ext.autodoc',
-    'sphinx.ext.autosummary',
-    'sphinx.ext.intersphinx',
     'sphinx.ext.napoleon',
     'sphinx.ext.viewcode',
     'sphinx.ext.autosectionlabel',
@@ -57,18 +57,6 @@ extensions = [
     'sphinx_copybutton',
 ] # yapf: disable
-myst_heading_anchors = 4
-myst_enable_extensions = ['colon_fence']
-# Configuration for intersphinx
-intersphinx_mapping = {
-    'python': ('https://docs.python.org/3', None),
-    'numpy': ('https://numpy.org/doc/stable', None),
-    'torch': ('https://pytorch.org/docs/stable/', None),
-    'mmengine': ('https://mmengine.readthedocs.io/en/latest', None),
-}
 autodoc_mock_imports = ['mmcv._ext', 'mmcv.utils.ext_loader', 'torchvision']
 autosectionlabel_prefix_document = True
@@ -91,7 +79,7 @@ master_doc = 'index'
 #
 # This is also used if you do content translation via gettext catalogs.
 # Usually you set "language" from the command line for these cases.
-language = 'zh_CN'
+language = None
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
@@ -120,9 +108,92 @@ html_theme_options = {
             'name': 'GitHub',
             'url': 'https://github.com/open-mmlab/mmcv'
         },
-    ],
-    # Specify the language of shared menu
-    'menu_lang': 'cn',
+        {
+            'name':
+            'Docs',
+            'children': [
+                {
+                    'name': 'MMCV',
+                    'url': 'https://mmcv.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MIM',
+                    'url': 'https://openmim.readthedocs.io/en/latest/'
+                },
+                {
+                    'name': 'MMAction2',
+                    'url': 'https://mmaction2.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMClassification',
+                    'url':
+                    'https://mmclassification.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMDetection',
+                    'url': 'https://mmdetection.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMDetection3D',
+                    'url': 'https://mmdetection3d.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMEditing',
+                    'url': 'https://mmediting.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMGeneration',
+                    'url': 'https://mmgeneration.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMOCR',
+                    'url': 'https://mmocr.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMPose',
+                    'url': 'https://mmpose.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMSegmentation',
+                    'url': 'https://mmsegmentation.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMTracking',
+                    'url': 'https://mmtracking.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMFlow',
+                    'url': 'https://mmflow.readthedocs.io/en/latest/',
+                },
+                {
+                    'name': 'MMFewShot',
+                    'url': 'https://mmfewshot.readthedocs.io/en/latest/',
+                },
+            ]
+        },
+        {
+            'name':
+            'OpenMMLab',
+            'children': [
+                {
+                    'name': 'Homepage',
+                    'url': 'https://openmmlab.com/'
+                },
+                {
+                    'name': 'GitHub',
+                    'url': 'https://github.com/open-mmlab/'
+                },
+                {
+                    'name': 'Twitter',
+                    'url': 'https://twitter.com/OpenMMLab'
+                },
+                {
+                    'name': 'Zhihu',
+                    'url': 'https://zhihu.com/people/openmmlab'
+                },
+            ]
+        },
+    ]
 }
 # Add any paths that contain custom static files (such as style sheets) here,
@@ -215,3 +286,16 @@ StandaloneHTMLBuilder.supported_image_types = [
 # Ignore >>> when copying code
 copybutton_prompt_text = r'>>> |\.\.\. '
 copybutton_prompt_is_regexp = True
+def setup(app):
+    app.add_config_value('no_underscore_emphasis', False, 'env')
+    app.add_config_value('m2r_parse_relative_links', False, 'env')
+    app.add_config_value('m2r_anonymous_references', False, 'env')
+    app.add_config_value('m2r_disable_inline_math', False, 'env')
+    app.add_directive('mdinclude', MdInclude)
+    app.add_config_value('recommonmark_config', {
+        'auto_toc_tree_section': 'Contents',
+        'enable_eval_rst': True,
+    }, True)
+    app.add_transform(AutoStructify)
-# MMCV Operators
+# Definition of custom operators in MMCV
-To make custom operators in MMCV more standard, precise definitions of each operator are listed in this document.
 <!-- TOC -->
+- [Definition of custom operators in MMCV](#definition-of-custom-operators-in-mmcv)
-- [MMCV Operators](#mmcv-operators)
 - [MMCVBorderAlign](#mmcvborderalign)
   - [Description](#description)
   - [Parameters](#parameters)
@@ -83,26 +80,25 @@ To make custom operators in MMCV more standard, precise definitions of each oper
   - [Inputs](#inputs-12)
   - [Outputs](#outputs-12)
   - [Type Constraints](#type-constraints-12)
-- [grid_sampler\*](#grid_sampler)
+- [torch](#torch)
+- [grid_sampler](#grid_sampler)
   - [Description](#description-13)
   - [Parameters](#parameters-13)
   - [Inputs](#inputs-13)
   - [Outputs](#outputs-13)
   - [Type Constraints](#type-constraints-13)
-- [cummax\*](#cummax)
+- [cummax](#cummax)
   - [Description](#description-14)
   - [Parameters](#parameters-14)
   - [Inputs](#inputs-14)
   - [Outputs](#outputs-14)
   - [Type Constraints](#type-constraints-14)
-- [cummin\*](#cummin)
+- [cummin](#cummin)
   - [Description](#description-15)
   - [Parameters](#parameters-15)
   - [Inputs](#inputs-15)
   - [Outputs](#outputs-15)
   - [Type Constraints](#type-constraints-15)
-- [Reminders](#reminders)
 <!-- TOC -->
 ## MMCVBorderAlign
@@ -122,9 +118,9 @@ Read [BorderDet: Border Feature for Dense Object Detection](ttps://arxiv.org/abs
 ### Parameters
 | Type | Parameter | Description |
-| ----- | ----------- | ----------------------------------------------------------------------------------- |
+| ------- | --------------- | -------------------------------------------------------------- |
 | `int` | `pool_size` | number of positions sampled over the boxes' borders(e.g. top, bottom, left, right). |
 ### Inputs
@@ -156,11 +152,11 @@ Read [CARAFE: Content-Aware ReAssembly of FEatures](https://arxiv.org/abs/1905.0
 ### Parameters
 | Type | Parameter | Description |
-| ------- | -------------- | --------------------------------------------- |
+| ------- | --------------- | -------------------------------------------------------------- |
 | `int` | `kernel_size` | reassemble kernel size, should be odd integer |
 | `int` | `group_size` | reassemble group size |
 | `float` | `scale_factor` | upsample ratio(>=1) |
 ### Inputs
@@ -191,7 +187,8 @@ Read [CCNet: Criss-Cross Attention for SemanticSegmentation](https://arxiv.org/p
 ### Parameters
-None
+| Type | Parameter | Description |
+| ------- | --------------- | -------------------------------------------------------------- |
 ### Inputs
@@ -222,7 +219,8 @@ Read [CCNet: Criss-Cross Attention for SemanticSegmentation](https://arxiv.org/p
 ### Parameters
-None
+| Type | Parameter | Description |
+| ------- | --------------- | -------------------------------------------------------------- |
 ### Inputs
@@ -244,6 +242,7 @@ None
 - T:tensor(float32)
 ## MMCVCornerPool
 ### Description
@@ -252,9 +251,9 @@ Perform CornerPool on `input` features. Read [CornerNet -- Detecting Objects as
 ### Parameters
 | Type | Parameter | Description |
-| ----- | --------- | ---------------------------------------------------------------- |
+| ------- | --------------- | ---------------------------------------------------------------- |
 | `int` | `mode` | corner pool mode, (0: `top`, 1: `bottom`, 2: `left`, 3: `right`) |
 ### Inputs
@@ -284,15 +283,15 @@ Read [Deformable Convolutional Networks](https://arxiv.org/pdf/1703.06211.pdf) f
 ### Parameters
 | Type | Parameter | Description |
-| -------------- | ------------------- | ----------------------------------------------------------------------------------------------------------------- |
+| -------------- | ------------------ | ------------------------------------------------------------------------------------- |
 | `list of ints` | `stride` | The stride of the convolving kernel, (sH, sW). Defaults to `(1, 1)`. |
 | `list of ints` | `padding` | Paddings on both sides of the input, (padH, padW). Defaults to `(0, 0)`. |
 | `list of ints` | `dilation` | The spacing between kernel elements (dH, dW). Defaults to `(1, 1)`. |
 | `int` | `groups` | Split input into groups. `input_channel` should be divisible by the number of groups. Defaults to `1`. |
 | `int` | `deformable_groups` | Groups of deformable offset. Defaults to `1`. |
 | `int` | `bias` | Whether to add a learnable bias to the output. `0` stands for `False` and `1` stands for `True`. Defaults to `0`. |
 | `int` | `im2col_step` | Groups of deformable offset. Defaults to `32`. |
 ### Inputs
@@ -324,11 +323,11 @@ Perform Modulated Deformable Convolution on input feature, read [Deformable Conv
 ### Parameters
 | Type | Parameter | Description |
-| -------------- | ------------------- | ------------------------------------------------------------------------------------- |
+| -------------- | ------------------ | ------------------------------------------------------------------------------------- |
 | `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) |
 | `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) |
 | `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) |
 | `int` | `deformable_groups` | Groups of deformable offset. |
 | `int` | `groups` | Split input into groups. `input_channel` should be divisible by the number of groups. |
@@ -366,13 +365,13 @@ Deformable roi pooling layer
 ### Parameters
 | Type | Parameter | Description |
-| ------- | ---------------- | ------------------------------------------------------------------------------------------------------------- |
+| ------- | --------------- | -------------------------------------------------------------- |
 | `int` | `output_height` | height of output roi |
 | `int` | `output_width` | width of output roi |
 | `float` | `spatial_scale` | used to scale the input boxes |
 | `int` | `sampling_ratio` | number of input samples to take for each output sample. `0` means to take samples densely for current models. |
 | `float` | `gamma` | gamma |
 ### Inputs
@@ -405,10 +404,10 @@ Read [Pixel Recurrent Neural Networks](https://arxiv.org/abs/1601.06759) for mor
 ### Parameters
 | Type | Parameter | Description |
-| -------------- | --------- | -------------------------------------------------------------------------------- |
+| ------- | --------------- | -------------------------------------------------------------- |
 | `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW). **Only support stride=1 in mmcv** |
 | `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW). Defaults to `(0, 0)`. |
 ### Inputs
@@ -444,10 +443,10 @@ Read [PSANet: Point-wise Spatial Attention Network for Scene Parsing](https://hs
 ### Parameters
 | Type | Parameter | Description |
-| -------------- | ----------- | -------------------------------------------- |
+| ------- | --------------- | -------------------------------------------------------------- |
 | `int` | `psa_type` | `0` means collect and `1` means `distribute` |
 | `list of ints` | `mask_size` | The size of mask |
 ### Inputs
@@ -479,9 +478,9 @@ Note this definition is slightly different with [onnx: NonMaxSuppression](https:
 | Type | Parameter | Description |
 | ------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
-| `int` | `center_point_box` | 0 - the box data is supplied as \[y1, x1, y2, x2\], 1-the box data is supplied as \[x_center, y_center, width, height\]. |
+| `int` | `center_point_box` | 0 - the box data is supplied as [y1, x1, y2, x2], 1-the box data is supplied as [x_center, y_center, width, height]. |
 | `int` | `max_output_boxes_per_class` | The maximum number of boxes to be selected per batch per class. Default to 0, number of output boxes equal to number of input boxes. |
-| `float` | `iou_threshold` | The threshold for deciding whether boxes overlap too much with respect to IoU. Value range \[0, 1\]. Default to 0. |
+| `float` | `iou_threshold` | The threshold for deciding whether boxes overlap too much with respect to IoU. Value range [0, 1]. Default to 0. |
 | `float` | `score_threshold` | The threshold for deciding when to remove boxes based on score. |
 | `int` | `offset` | 0 or 1, boxes' width or height is (x2 - x1 + offset). |
@@ -544,6 +543,7 @@ Perform RoIAlign on output feature, used in bbox_head of most two-stage detector
 - T:tensor(float32)
 ## MMCVRoIAlignRotated
 ### Description
@@ -552,15 +552,15 @@ Perform RoI align pooling for rotated proposals
 ### Parameters
 | Type | Parameter | Description |
-| ------- | ---------------- | ------------------------------------------------------------------------------------------------------------- |
+| ------- | --------------- | -------------------------------------------------------------- |
 | `int` | `output_height` | height of output roi |
 | `int` | `output_width` | width of output roi |
 | `float` | `spatial_scale` | used to scale the input boxes |
 | `int` | `sampling_ratio` | number of input samples to take for each output sample. `0` means to take samples densely for current models. |
 | `str` | `mode` | pooling mode in each bin. `avg` or `max` |
 | `int` | `aligned` | If `aligned=0`, use the legacy implementation in MMDetection. Else, align the results more perfectly. |
 | `int` | `clockwise` | If `aligned=0`, use the legacy implementation in MMDetection. Else, align the results more perfectly. |
 ### Inputs
@@ -581,7 +581,9 @@ Perform RoI align pooling for rotated proposals
 - T:tensor(float32)
-## grid_sampler\*
+# torch
+## grid_sampler
 ### Description
@@ -617,7 +619,7 @@ Check [torch.nn.functional.grid_sample](https://pytorch.org/docs/stable/generate
 - T:tensor(float32, Linear)
-## cummax\*
+## cummax
 ### Description
@@ -625,9 +627,9 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative maximum e
 ### Parameters
 | Type | Parameter | Description |
-| ----- | --------- | -------------------------------------- |
+| ------- | --------------- | ---------------------------------------------------------------- |
 | `int` | `dim` | the dimension to do the operation over |
 ### Inputs
@@ -649,7 +651,7 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative maximum e
 - T:tensor(float32)
-## cummin\*
+## cummin
 ### Description
@@ -657,9 +659,9 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative minimum e
 ### Parameters
 | Type | Parameter | Description |
-| ----- | --------- | -------------------------------------- |
+| ------- | --------------- | ---------------------------------------------------------------- |
 | `int` | `dim` | the dimension to do the operation over |
 ### Inputs
@@ -680,7 +682,3 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative minimum e
 ### Type Constraints
 - T:tensor(float32)
-## Reminders
-- Operators endwith `*` are defined in Torch and are included here for the conversion to ONNX.
## Introduction of onnx module in MMCV (Experimental)
### register_extra_symbolics
Some extra symbolic functions need to be registered before exporting a PyTorch model to ONNX.
#### Example
```python
import mmcv
from mmcv.onnx import register_extra_symbolics
opset_version = 11
register_extra_symbolics(opset_version)
```
#### FAQs
- None
## Onnxruntime Custom Ops
<!-- TOC -->
- [Onnxruntime Custom Ops](#onnxruntime-custom-ops)
- [SoftNMS](#softnms)
- [Description](#description)
- [Parameters](#parameters)
- [Inputs](#inputs)
- [Outputs](#outputs)
- [Type Constraints](#type-constraints)
- [RoIAlign](#roialign)
- [Description](#description-1)
- [Parameters](#parameters-1)
- [Inputs](#inputs-1)
- [Outputs](#outputs-1)
- [Type Constraints](#type-constraints-1)
- [NMS](#nms)
- [Description](#description-2)
- [Parameters](#parameters-2)
- [Inputs](#inputs-2)
- [Outputs](#outputs-2)
- [Type Constraints](#type-constraints-2)
- [grid_sampler](#grid_sampler)
- [Description](#description-3)
- [Parameters](#parameters-3)
- [Inputs](#inputs-3)
- [Outputs](#outputs-3)
- [Type Constraints](#type-constraints-3)
- [CornerPool](#cornerpool)
- [Description](#description-4)
- [Parameters](#parameters-4)
- [Inputs](#inputs-4)
- [Outputs](#outputs-4)
- [Type Constraints](#type-constraints-4)
- [cummax](#cummax)
- [Description](#description-5)
- [Parameters](#parameters-5)
- [Inputs](#inputs-5)
- [Outputs](#outputs-5)
- [Type Constraints](#type-constraints-5)
- [cummin](#cummin)
- [Description](#description-6)
- [Parameters](#parameters-6)
- [Inputs](#inputs-6)
- [Outputs](#outputs-6)
- [Type Constraints](#type-constraints-6)
- [MMCVModulatedDeformConv2d](#mmcvmodulateddeformconv2d)
- [Description](#description-7)
- [Parameters](#parameters-7)
- [Inputs](#inputs-7)
- [Outputs](#outputs-7)
- [Type Constraints](#type-constraints-7)
- [MMCVDeformConv2d](#mmcvdeformconv2d)
- [Description](#description-8)
- [Parameters](#parameters-8)
- [Inputs](#inputs-8)
- [Outputs](#outputs-8)
- [Type Constraints](#type-constraints-8)
<!-- TOC -->
### SoftNMS
#### Description
Perform soft NMS on `boxes` with `scores`. Read [Soft-NMS -- Improving Object Detection With One Line of Code](https://arxiv.org/abs/1704.04503) for details.
#### Parameters
| Type | Parameter | Description |
| ------- | --------------- | -------------------------------------------------------------- |
| `float` | `iou_threshold` | IoU threshold for NMS |
| `float` | `sigma` | hyperparameter for gaussian method |
| `float` | `min_score` | score filter threshold |
| `int` | `method` | method to do the nms, (0: `naive`, 1: `linear`, 2: `gaussian`) |
| `int` | `offset` | `boxes` width or height is (x2 - x1 + offset). (0 or 1) |
#### Inputs
<dl>
<dt><tt>boxes</tt>: T</dt>
<dd>Input boxes. 2-D tensor of shape (N, 4). N is the number of boxes.</dd>
<dt><tt>scores</tt>: T</dt>
<dd>Input scores. 1-D tensor of shape (N, ).</dd>
</dl>
#### Outputs
<dl>
<dt><tt>dets</tt>: T</dt>
<dd>Output boxes and scores. 2-D tensor of shape (num_valid_boxes, 5), [[x1, y1, x2, y2, score], ...]. num_valid_boxes is the number of valid boxes.</dd>
<dt><tt>indices</tt>: tensor(int64)</dt>
<dd>Output indices. 1-D tensor of shape (num_valid_boxes, ).</dd>
</dl>
#### Type Constraints
- T:tensor(float32)
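The decay logic described by the parameter table can be sketched in pure Python. This is only an illustration of the algorithm, not the compiled ONNX Runtime kernel; `soft_nms` is a hypothetical helper name and the default values here are assumptions, not the op's documented defaults.

```python
import math


def soft_nms(boxes, scores, iou_threshold=0.3, sigma=0.5,
             min_score=1e-3, method=1, offset=0):
    """Greedy soft NMS. method: 0 `naive`, 1 `linear`, 2 `gaussian`.

    Returns (dets, indices): dets are [x1, y1, x2, y2, score] rows for the
    kept boxes, indices their positions in the input, best score first.
    """
    def area(r):
        return (r[2] - r[0] + offset) * (r[3] - r[1] + offset)

    def iou(a, b):
        w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]) + offset)
        h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]) + offset)
        inter = w * h
        return inter / (area(a) + area(b) - inter)

    scores = list(scores)          # decayed in place as boxes are selected
    order = list(range(len(boxes)))
    dets, keep = [], []
    while order:
        i = max(order, key=lambda k: scores[k])
        if scores[i] < min_score:  # score filter threshold
            break
        order.remove(i)
        dets.append(list(boxes[i]) + [scores[i]])
        keep.append(i)
        for j in order:            # decay the scores of overlapping boxes
            ov = iou(boxes[i], boxes[j])
            if method == 0 and ov >= iou_threshold:
                scores[j] = 0.0    # naive NMS: hard suppression
            elif method == 1 and ov >= iou_threshold:
                scores[j] *= 1 - ov
            elif method == 2:
                scores[j] *= math.exp(-(ov * ov) / sigma)
    return dets, keep
```

With `method=1` (linear), a duplicate box is decayed below `min_score` and dropped, while a distant box survives untouched.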
### RoIAlign
#### Description
Perform RoIAlign on output feature, used in bbox_head of most two-stage detectors.
#### Parameters
| Type | Parameter | Description |
| ------- | ---------------- | ------------------------------------------------------------------------------------------------------------- |
| `int` | `output_height` | height of output roi |
| `int` | `output_width` | width of output roi |
| `float` | `spatial_scale` | used to scale the input boxes |
| `int` | `sampling_ratio` | number of input samples to take for each output sample. `0` means to take samples densely for current models. |
| `str` | `mode` | pooling mode in each bin. `avg` or `max` |
| `int` | `aligned` | If `aligned=0`, use the legacy implementation in MMDetection. Else, align the results more perfectly. |
#### Inputs
<dl>
<dt><tt>input</tt>: T</dt>
<dd>Input feature map; 4D tensor of shape (N, C, H, W), where N is the batch size, C is the numbers of channels, H and W are the height and width of the data.</dd>
<dt><tt>rois</tt>: T</dt>
<dd>RoIs (Regions of Interest) to pool over; 2-D tensor of shape (num_rois, 5) given as [[batch_index, x1, y1, x2, y2], ...]. The RoIs' coordinates are in the coordinate system of input.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>feat</tt>: T</dt>
<dd>RoI pooled output, 4-D tensor of shape (num_rois, C, output_height, output_width). The r-th batch element feat[r-1] is a pooled feature map corresponding to the r-th RoI RoIs[r-1].</dd>
</dl>
#### Type Constraints
- T:tensor(float32)
### NMS
#### Description
Filter out boxes that have high IoU overlap with previously selected boxes.
#### Parameters
| Type | Parameter | Description |
| ------- | --------------- | ---------------------------------------------------------------------------------------------------------------- |
| `float` | `iou_threshold` | The threshold for deciding whether boxes overlap too much with respect to IoU. Value range [0, 1]. Default to 0. |
| `int` | `offset` | 0 or 1, boxes' width or height is (x2 - x1 + offset). |
#### Inputs
<dl>
<dt><tt>bboxes</tt>: T</dt>
<dd>Input boxes. 2-D tensor of shape (num_boxes, 4). num_boxes is the number of input boxes.</dd>
<dt><tt>scores</tt>: T</dt>
<dd>Input scores. 1-D tensor of shape (num_boxes, ).</dd>
</dl>
#### Outputs
<dl>
<dt><tt>indices</tt>: tensor(int64)</dt>
<dd>Selected indices. 1-D tensor of shape (num_valid_boxes, ). num_valid_boxes is the number of valid boxes.</dd>
</dl>
#### Type Constraints
- T:tensor(float32)
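The greedy selection described above can be sketched in a few lines of NumPy. This is an illustration of the algorithm (returning kept indices in descending score order), not the mmcv kernel:

```python
import numpy as np

def nms(bboxes, scores, iou_threshold=0.5, offset=0):
    """Greedy NMS; returns the indices of kept boxes, highest score first."""
    x1, y1, x2, y2 = bboxes.T
    areas = (x2 - x1 + offset) * (y2 - y1 + offset)
    order = scores.argsort()[::-1]            # candidates by descending score
    keep = []
    while order.size > 0:
        i = order[0]                          # keep the best remaining box
        keep.append(int(i))
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1 + offset) * np.maximum(0.0, yy2 - yy1 + offset)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]   # drop heavily overlapping boxes
    return np.array(keep, dtype=np.int64)
```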
### grid_sampler
#### Description
Sample `input` at pixel locations specified by `grid`.
#### Parameters
| Type | Parameter | Description |
| ----- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `int` | `interpolation_mode` | Interpolation mode to calculate output values. (0: `bilinear` , 1: `nearest`) |
| `int` | `padding_mode` | Padding mode for outside grid values. (0: `zeros`, 1: `border`, 2: `reflection`) |
| `int` | `align_corners` | If `align_corners=1`, the extrema (`-1` and `1`) are considered as referring to the center points of the input's corner pixels. If `align_corners=0`, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic. |
#### Inputs
<dl>
<dt><tt>input</tt>: T</dt>
<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, and inH and inW are the height and width of the data.</dd>
<dt><tt>grid</tt>: T</dt>
<dd>Input grid; 4-D tensor of shape (N, outH, outW, 2), where outH and outW are the height and width of the grid and output.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>output</tt>: T</dt>
<dd>Output feature; 4-D tensor of shape (N, C, outH, outW).</dd>
</dl>
#### Type Constraints
- T:tensor(float32)
### CornerPool
#### Description
Perform CornerPool on `input` features. Read [CornerNet -- Detecting Objects as Paired Keypoints](https://arxiv.org/abs/1808.01244) for more details.
#### Parameters
| Type | Parameter | Description |
| ----- | --------- | ---------------------------------------------------------------- |
| `int` | `mode` | corner pool mode, (0: `top`, 1: `bottom`, 2: `left`, 3: `right`) |
#### Inputs
<dl>
<dt><tt>input</tt>: T</dt>
<dd>Input features. 4-D tensor of shape (N, C, H, W). N is the batch size.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>output</tt>: T</dt>
<dd>Output the pooled features. 4-D tensor of shape (N, C, H, W).</dd>
</dl>
#### Type Constraints
- T:tensor(float32)
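Each corner pool mode reduces to a directional cumulative maximum over H or W. A NumPy sketch, with the mode numbering following the table above (an illustration, not the mmcv kernel):

```python
import numpy as np

def corner_pool(x, mode):
    """x: (N, C, H, W); mode: 0 top, 1 bottom, 2 left, 3 right."""
    if mode == 0:   # top: max over rows below, i.e. scan H from bottom to top
        return np.maximum.accumulate(x[:, :, ::-1, :], axis=2)[:, :, ::-1, :]
    if mode == 1:   # bottom: max over rows above
        return np.maximum.accumulate(x, axis=2)
    if mode == 2:   # left: max over columns to the right
        return np.maximum.accumulate(x[:, :, :, ::-1], axis=3)[:, :, :, ::-1]
    return np.maximum.accumulate(x, axis=3)   # right: max over columns to the left
```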
### cummax
#### Description
Returns a tuple (`values`, `indices`) where `values` gives the cumulative maximum of the elements of `input` in the dimension `dim`, and `indices` gives the index location of each maximum value found in the dimension `dim`. Read [torch.cummax](https://pytorch.org/docs/stable/generated/torch.cummax.html) for more details.
#### Parameters
| Type | Parameter | Description |
| ----- | --------- | -------------------------------------- |
| `int` | `dim` | the dimension to do the operation over |
#### Inputs
<dl>
<dt><tt>input</tt>: T</dt>
<dd>The input tensor with various shapes. Tensors with empty elements are also supported.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>output</tt>: T</dt>
<dd>Output the cumulative maximum elements of `input` in the dimension `dim`, with the same shape and dtype as `input`.</dd>
<dt><tt>indices</tt>: tensor(int64)</dt>
<dd>Output the index location of each cumulative maximum value found in the dimension `dim`, with the same shape as `input`.</dd>
</dl>
#### Type Constraints
- T:tensor(float32)
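The `values`/`indices` pair can be computed with two running maxima in NumPy. A sketch of the semantics (tie-breaking for repeated maxima may differ from `torch.cummax`; this version records the latest position at which the running maximum is attained):

```python
import numpy as np

def cummax(x, dim):
    x = np.asarray(x)
    values = np.maximum.accumulate(x, axis=dim)   # running maximum along dim
    # position index along `dim`, broadcastable against x
    shape = [1] * x.ndim
    shape[dim] = -1
    pos = np.arange(x.shape[dim]).reshape(shape)
    # wherever the running maximum is (re)attained, record the position,
    # then carry the latest such position forward
    attained = np.where(x >= values, pos, 0)
    indices = np.maximum.accumulate(attained, axis=dim)
    return values, indices.astype(np.int64)
```

`cummin` is the mirror image: replace `np.maximum.accumulate` with `np.minimum.accumulate` for the values and use `x <= values` for the attained positions.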
### cummin
#### Description
Returns a tuple (`values`, `indices`) where `values` gives the cumulative minimum of the elements of `input` in the dimension `dim`, and `indices` gives the index location of each minimum value found in the dimension `dim`. Read [torch.cummin](https://pytorch.org/docs/stable/generated/torch.cummin.html) for more details.
#### Parameters
| Type | Parameter | Description |
| ----- | --------- | -------------------------------------- |
| `int` | `dim` | the dimension to do the operation over |
#### Inputs
<dl>
<dt><tt>input</tt>: T</dt>
<dd>The input tensor with various shapes. Tensors with empty elements are also supported.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>output</tt>: T</dt>
<dd>Output the cumulative minimum elements of `input` in the dimension `dim`, with the same shape and dtype as `input`.</dd>
<dt><tt>indices</tt>: tensor(int64)</dt>
<dd>Output the index location of each cumulative minimum value found in the dimension `dim`, with the same shape as `input`.</dd>
</dl>
#### Type Constraints
- T:tensor(float32)
### MMCVModulatedDeformConv2d
#### Description
Perform Modulated Deformable Convolution on the input feature; read [Deformable ConvNets v2: More Deformable, Better Results](https://arxiv.org/abs/1811.11168?from=timeline) for details.
#### Parameters
| Type | Parameter | Description |
| -------------- | ------------------- | ------------------------------------------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) |
| `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) |
| `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) |
| `int` | `deformable_groups` | Groups of deformable offset. |
| `int` | `groups` | Split input into groups. `input_channel` should be divisible by the number of groups. |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, inH and inW are the height and width of the data.</dd>
<dt><tt>inputs[1]</tt>: T</dt>
<dd>Input offset; 4-D tensor of shape (N, deformable_group * 2 * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
<dt><tt>inputs[2]</tt>: T</dt>
<dd>Input mask; 4-D tensor of shape (N, deformable_group * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
<dt><tt>inputs[3]</tt>: T</dt>
<dd>Input weight; 4-D tensor of shape (output_channel, input_channel, kH, kW).</dd>
<dt><tt>inputs[4]</tt>: T, optional</dt>
<dd>Input bias; 1-D tensor of shape (output_channel).</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Output feature; 4-D tensor of shape (N, output_channel, outH, outW).</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
### MMCVDeformConv2d
#### Description
Perform Deformable Convolution on the input feature; read [Deformable Convolutional Network](https://arxiv.org/abs/1703.06211) for details.
#### Parameters
| Type | Parameter | Description |
| -------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) |
| `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) |
| `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) |
| `int` | `deformable_group` | Groups of deformable offset. |
| `int` | `group` | Split input into groups. `input_channel` should be divisible by the number of groups. |
| `int`          | `im2col_step`      | DeformableConv2d uses im2col to compute the convolution. `im2col_step` is used to split the input and offset, reducing the memory usage of the column buffer. |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, and inH and inW are the height and width of the data.</dd>
<dt><tt>inputs[1]</tt>: T</dt>
<dd>Input offset; 4-D tensor of shape (N, deformable_group * 2 * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
<dt><tt>inputs[2]</tt>: T</dt>
<dd>Input weight; 4-D tensor of shape (output_channel, input_channel, kH, kW).</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Output feature; 4-D tensor of shape (N, output_channel, outH, outW).</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
## Custom operators for ONNX Runtime in MMCV
### Introduction of ONNX Runtime
**ONNX Runtime** is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks. Check its [GitHub repository](https://github.com/microsoft/onnxruntime) for more information.
### Introduction of ONNX
**ONNX** stands for **Open Neural Network Exchange**, which acts as an *Intermediate Representation (IR)* for ML/DNN models from many frameworks. Check its [GitHub repository](https://github.com/onnx/onnx) for more information.
### Why include custom operators for ONNX Runtime in MMCV
- To verify the correctness of exported ONNX models in ONNX Runtime.
- To ease the deployment of ONNX models with custom operators from `mmcv.ops` in ONNX Runtime.
### List of operators for ONNX Runtime supported in MMCV
| Operator | CPU | GPU | MMCV Releases |
| :----------------------------------------------------: | :---: | :---: | :-----------: |
| [SoftNMS](onnxruntime_custom_ops.md#softnms) | Y | N | 1.2.3 |
| [RoIAlign](onnxruntime_custom_ops.md#roialign) | Y | N | 1.2.5 |
| [NMS](onnxruntime_custom_ops.md#nms) | Y | N | 1.2.7 |
| [grid_sampler](onnxruntime_custom_ops.md#grid_sampler) | Y | N | 1.3.1 |
| [CornerPool](onnxruntime_custom_ops.md#cornerpool) | Y | N | 1.3.4 |
| [cummax](onnxruntime_custom_ops.md#cummax) | Y | N | master |
| [cummin](onnxruntime_custom_ops.md#cummin) | Y | N | master |
### How to build custom operators for ONNX Runtime
*Please note that only the CPU version of **onnxruntime>=1.8.1** on Linux has been tested so far.*
#### Prerequisite
- Clone repository
```bash
git clone https://github.com/open-mmlab/mmcv.git
```
- Download `onnxruntime-linux` from ONNX Runtime [releases](https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1), extract it, expose `ONNXRUNTIME_DIR` and finally add the lib path to `LD_LIBRARY_PATH` as below:
```bash
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```
#### Build on Linux
```bash
cd mmcv  # to MMCV root directory
MMCV_WITH_OPS=1 MMCV_WITH_ORT=1 python setup.py develop
```
### How to do inference using exported ONNX models with custom operators in ONNX Runtime in python
Install ONNX Runtime with `pip`
```bash
pip install onnxruntime==1.8.1
```
Inference Demo
```python
import os
import numpy as np
import onnxruntime as ort
from mmcv.ops import get_onnxruntime_op_path
ort_custom_op_path = get_onnxruntime_op_path()
assert os.path.exists(ort_custom_op_path)
session_options = ort.SessionOptions()
session_options.register_custom_ops_library(ort_custom_op_path)
# exported ONNX model with custom operators
onnx_file = 'sample.onnx'
input_data = np.random.randn(1, 3, 224, 224).astype(np.float32)
sess = ort.InferenceSession(onnx_file, session_options)
onnx_results = sess.run(None, {'input' : input_data})
```
### How to add a new custom operator for ONNX Runtime in MMCV
#### Reminder
- The custom operator is not included in the [supported operator list](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md) of ONNX Runtime.
- The custom operator should be able to be exported to ONNX.
#### Main procedures
Take the custom operator `soft_nms` as an example.
1. Add header `soft_nms.h` to ONNX Runtime include directory `mmcv/ops/csrc/onnxruntime/`
2. Add source `soft_nms.cpp` to ONNX Runtime source directory `mmcv/ops/csrc/onnxruntime/cpu/`
3. Register `soft_nms` operator in [onnxruntime_register.cpp](../../mmcv/ops/csrc/onnxruntime/cpu/onnxruntime_register.cpp)
```c++
#include "soft_nms.h"
SoftNmsOp c_SoftNmsOp;
if (auto status = ortApi->CustomOpDomain_Add(domain, &c_SoftNmsOp)) {
return status;
}
```
4. Add unit test into `tests/test_ops/test_onnx.py`
Check [here](../../tests/test_ops/test_onnx.py) for examples.
**Finally, you are welcome to send us a PR adding custom operators for ONNX Runtime in MMCV.** :nerd_face:
### Known Issues
- "RuntimeError: tuple appears in op that does not forward tuples, unsupported kind: `prim::PythonOp`."
  1. In general, `cummax` and `cummin` are exportable to ONNX as long as torch >= 1.5.0, since `torch.cummax` is only available from torch 1.5.0 onward. However, when `cummax` or `cummin` serves as an intermediate component whose outputs are used as inputs to other modules, torch >= 1.7.0 is required. Otherwise the above error may arise when running the exported ONNX model with onnxruntime.
2. Solution: update the torch version to 1.7.0 or higher.
### References
- [How to export Pytorch model with custom op to ONNX and run it in ONNX Runtime](https://github.com/onnx/tutorials/blob/master/PyTorchCustomOperator/README.md)
- [How to add a custom operator/kernel in ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/docs/AddingCustomOp.md)
## TensorRT Custom Ops
<!-- TOC -->
- [TensorRT Custom Ops](#tensorrt-custom-ops)
- [MMCVRoIAlign](#mmcvroialign)
- [Description](#description)
- [Parameters](#parameters)
- [Inputs](#inputs)
- [Outputs](#outputs)
- [Type Constraints](#type-constraints)
- [ScatterND](#scatternd)
- [Description](#description-1)
- [Parameters](#parameters-1)
- [Inputs](#inputs-1)
- [Outputs](#outputs-1)
- [Type Constraints](#type-constraints-1)
- [NonMaxSuppression](#nonmaxsuppression)
- [Description](#description-2)
- [Parameters](#parameters-2)
- [Inputs](#inputs-2)
- [Outputs](#outputs-2)
- [Type Constraints](#type-constraints-2)
- [MMCVDeformConv2d](#mmcvdeformconv2d)
- [Description](#description-3)
- [Parameters](#parameters-3)
- [Inputs](#inputs-3)
- [Outputs](#outputs-3)
- [Type Constraints](#type-constraints-3)
- [grid_sampler](#grid_sampler)
- [Description](#description-4)
- [Parameters](#parameters-4)
- [Inputs](#inputs-4)
- [Outputs](#outputs-4)
- [Type Constraints](#type-constraints-4)
- [cummax](#cummax)
- [Description](#description-5)
- [Parameters](#parameters-5)
- [Inputs](#inputs-5)
- [Outputs](#outputs-5)
- [Type Constraints](#type-constraints-5)
- [cummin](#cummin)
- [Description](#description-6)
- [Parameters](#parameters-6)
- [Inputs](#inputs-6)
- [Outputs](#outputs-6)
- [Type Constraints](#type-constraints-6)
- [MMCVInstanceNormalization](#mmcvinstancenormalization)
- [Description](#description-7)
- [Parameters](#parameters-7)
- [Inputs](#inputs-7)
- [Outputs](#outputs-7)
- [Type Constraints](#type-constraints-7)
- [MMCVModulatedDeformConv2d](#mmcvmodulateddeformconv2d)
- [Description](#description-8)
- [Parameters](#parameters-8)
- [Inputs](#inputs-8)
- [Outputs](#outputs-8)
- [Type Constraints](#type-constraints-8)
<!-- TOC -->
### MMCVRoIAlign
#### Description
Perform RoIAlign on the output feature map, as used in the bbox_head of most two-stage detectors.
#### Parameters
| Type | Parameter | Description |
| ------- | ---------------- | ------------------------------------------------------------------------------------------------------------- |
| `int` | `output_height` | height of output roi |
| `int` | `output_width` | width of output roi |
| `float` | `spatial_scale` | used to scale the input boxes |
| `int` | `sampling_ratio` | number of input samples to take for each output sample. `0` means to take samples densely for current models. |
| `str` | `mode` | pooling mode in each bin. `avg` or `max` |
| `int`   | `aligned`        | If `aligned=0`, use the legacy implementation in MMDetection. Otherwise, align the results more precisely.    |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Input feature map; 4-D tensor of shape (N, C, H, W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data.</dd>
<dt><tt>inputs[1]</tt>: T</dt>
<dd>RoIs (Regions of Interest) to pool over; 2-D tensor of shape (num_rois, 5) given as [[batch_index, x1, y1, x2, y2], ...]. The RoIs' coordinates are in the coordinate system of inputs[0].</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>RoI pooled output, 4-D tensor of shape (num_rois, C, output_height, output_width). The r-th batch element output[0][r-1] is a pooled feature map corresponding to the r-th RoI inputs[1][r-1].</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
### ScatterND
#### Description
ScatterND takes three inputs: a `data` tensor of rank r >= 1, an `indices` tensor of rank q >= 1, and an `updates` tensor of rank q + r - indices.shape[-1] - 1. The output of the operation is produced by creating a copy of the input `data`, and then updating its values to the values specified by `updates` at the index positions specified by `indices`. Its output shape is the same as the shape of `data`. Note that `indices` should not have duplicate entries; that is, two or more updates for the same index location are not supported.
The `output` is calculated via the following equation:
```python
output = np.copy(data)
update_indices = indices.shape[:-1]
for idx in np.ndindex(update_indices):
    output[indices[idx]] = updates[idx]
```
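A runnable NumPy version of this update rule, with `indices[idx]` converted to a tuple so that it addresses a single element or slice of `output` (the helper name `scatter_nd` is ours):

```python
import numpy as np

def scatter_nd(data, indices, updates):
    output = np.copy(data)
    # iterate over all partial indices of shape indices.shape[:-1]
    for idx in np.ndindex(indices.shape[:-1]):
        # indices[idx] is a 1-D index of length indices.shape[-1]
        output[tuple(indices[idx])] = updates[idx]
    return output

data = np.zeros(4, dtype=np.float32)
indices = np.array([[1], [3]])
updates = np.array([9.0, 8.0], dtype=np.float32)
result = scatter_nd(data, indices, updates)  # [0., 9., 0., 8.]
```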
#### Parameters
None
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Tensor of rank r>=1.</dd>
<dt><tt>inputs[1]</tt>: tensor(int32, Linear)</dt>
<dd>Tensor of rank q>=1.</dd>
<dt><tt>inputs[2]</tt>: T</dt>
<dd>Tensor of rank q + r - indices_shape[-1] - 1.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Tensor of rank r >= 1.</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear), tensor(int32, Linear)
### NonMaxSuppression
#### Description
Filter out boxes that have high IoU overlap with previously selected boxes or low scores. Output the indices of valid boxes. Indices of invalid boxes will be filled with -1.
#### Parameters
| Type | Parameter | Description |
| ------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `int`   | `center_point_box`           | 0 - the box data is supplied as [y1, x1, y2, x2]; 1 - the box data is supplied as [x_center, y_center, width, height].                |
| `int`   | `max_output_boxes_per_class` | The maximum number of boxes to be selected per batch per class. Defaults to 0, in which case the number of output boxes equals the number of input boxes. |
| `float` | `iou_threshold` | The threshold for deciding whether boxes overlap too much with respect to IoU. Value range [0, 1]. Default to 0. |
| `float` | `score_threshold` | The threshold for deciding when to remove boxes based on score. |
| `int` | `offset` | 0 or 1, boxes' width or height is (x2 - x1 + offset). |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Input boxes. 3-D tensor of shape (num_batches, spatial_dimension, 4).</dd>
<dt><tt>inputs[1]</tt>: T</dt>
<dd>Input scores. 3-D tensor of shape (num_batches, num_classes, spatial_dimension).</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: tensor(int32, Linear)</dt>
<dd>Selected indices. 2-D tensor of shape (num_selected_indices, 3) as [[batch_index, class_index, box_index], ...].</dd>
<dd>num_selected_indices = num_batches * num_classes * min(max_output_boxes_per_class, spatial_dimension).</dd>
<dd>All invalid indices will be filled with -1.</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
### MMCVDeformConv2d
#### Description
Perform Deformable Convolution on the input feature; read [Deformable Convolutional Network](https://arxiv.org/abs/1703.06211) for details.
#### Parameters
| Type | Parameter | Description |
| -------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) |
| `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) |
| `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) |
| `int` | `deformable_group` | Groups of deformable offset. |
| `int` | `group` | Split input into groups. `input_channel` should be divisible by the number of groups. |
| `int`          | `im2col_step`      | DeformableConv2d uses im2col to compute the convolution. `im2col_step` is used to split the input and offset, reducing the memory usage of the column buffer. |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, and inH and inW are the height and width of the data.</dd>
<dt><tt>inputs[1]</tt>: T</dt>
<dd>Input offset; 4-D tensor of shape (N, deformable_group * 2 * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
<dt><tt>inputs[2]</tt>: T</dt>
<dd>Input weight; 4-D tensor of shape (output_channel, input_channel, kH, kW).</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Output feature; 4-D tensor of shape (N, output_channel, outH, outW).</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
### grid_sampler
#### Description
Sample `input` at pixel locations specified by `grid`.
#### Parameters
| Type | Parameter | Description |
| ----- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `int` | `interpolation_mode` | Interpolation mode to calculate output values. (0: `bilinear` , 1: `nearest`) |
| `int` | `padding_mode` | Padding mode for outside grid values. (0: `zeros`, 1: `border`, 2: `reflection`) |
| `int` | `align_corners` | If `align_corners=1`, the extrema (`-1` and `1`) are considered as referring to the center points of the input's corner pixels. If `align_corners=0`, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic. |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, and inH and inW are the height and width of the data.</dd>
<dt><tt>inputs[1]</tt>: T</dt>
<dd>Input grid; 4-D tensor of shape (N, outH, outW, 2), where outH and outW are the height and width of the grid and output.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Output feature; 4-D tensor of shape (N, C, outH, outW).</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
### cummax
#### Description
Returns a namedtuple (`values`, `indices`) where `values` gives the cumulative maximum of the elements of `input` in the dimension `dim`, and `indices` gives the index location of each maximum value found in the dimension `dim`.
#### Parameters
| Type | Parameter | Description |
| ----- | --------- | --------------------------------------- |
| `int` | `dim` | The dimension to do the operation over. |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>The input tensor.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Output values.</dd>
<dt><tt>outputs[1]</tt>: tensor(int32, Linear)</dt>
<dd>Output indices.</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
### cummin
#### Description
Returns a namedtuple (`values`, `indices`) where `values` gives the cumulative minimum of the elements of `input` in the dimension `dim`, and `indices` gives the index location of each minimum value found in the dimension `dim`.
#### Parameters
| Type | Parameter | Description |
| ----- | --------- | --------------------------------------- |
| `int` | `dim` | The dimension to do the operation over. |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>The input tensor.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Output values.</dd>
<dt><tt>outputs[1]</tt>: tensor(int32, Linear)</dt>
<dd>Output indices.</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
### MMCVInstanceNormalization
#### Description
Carries out instance normalization as described in the paper [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022):
`y = scale * (x - mean) / sqrt(variance + epsilon) + B`, where the mean and variance are computed per instance per channel.
#### Parameters
| Type | Parameter | Description |
| ------- | --------- | -------------------------------------------------------------------- |
| `float` | `epsilon` | The epsilon value to use to avoid division by zero. Default is 1e-05 |
#### Inputs
<dl>
<dt><tt>input</tt>: T</dt>
<dd>Input data tensor from the previous operator; dimensions for image case are (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and the width of the data. For non image case, the dimensions are in the form of (N x C x D1 x D2 ... Dn), where N is the batch size.</dd>
<dt><tt>scale</tt>: T</dt>
<dd>The input 1-dimensional scale tensor of size C.</dd>
<dt><tt>B</tt>: T</dt>
<dd>The input 1-dimensional bias tensor of size C.</dd>
</dl>
#### Outputs
<dl>
<dt><tt>output</tt>: T</dt>
<dd>The output tensor of the same shape as input.</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
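The per-instance, per-channel statistics in the formula above can be sketched in NumPy (an illustration of the normalization, not the TensorRT plugin):

```python
import numpy as np

def instance_norm(x, scale, B, epsilon=1e-5):
    """x: (N, C, d1, ..., dn); scale, B: (C,)."""
    axes = tuple(range(2, x.ndim))             # spatial axes only
    mean = x.mean(axis=axes, keepdims=True)    # per instance, per channel
    var = x.var(axis=axes, keepdims=True)
    shape = (1, -1) + (1,) * (x.ndim - 2)      # broadcast (C,) over x
    return scale.reshape(shape) * (x - mean) / np.sqrt(var + epsilon) + B.reshape(shape)
```

Unlike batch normalization, the statistics never mix different samples in the batch, which is why each output channel of each instance ends up with mean `B[c]` and standard deviation close to `scale[c]`.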
### MMCVModulatedDeformConv2d
#### Description
Perform Modulated Deformable Convolution on the input feature; read [Deformable ConvNets v2: More Deformable, Better Results](https://arxiv.org/abs/1811.11168?from=timeline) for details.
#### Parameters
| Type | Parameter | Description |
| -------------- | ------------------ | ------------------------------------------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) |
| `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) |
| `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) |
| `int` | `deformable_group` | Groups of deformable offset. |
| `int` | `group` | Split input into groups. `input_channel` should be divisible by the number of groups. |
#### Inputs
<dl>
<dt><tt>inputs[0]</tt>: T</dt>
<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, inH and inW are the height and width of the data.</dd>
<dt><tt>inputs[1]</tt>: T</dt>
<dd>Input offset; 4-D tensor of shape (N, deformable_group * 2 * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
<dt><tt>inputs[2]</tt>: T</dt>
<dd>Input mask; 4-D tensor of shape (N, deformable_group * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
<dt><tt>inputs[3]</tt>: T</dt>
<dd>Input weight; 4-D tensor of shape (output_channel, input_channel, kH, kW).</dd>
<dt><tt>inputs[4]</tt>: T, optional</dt>
<dd>Input bias; 1-D tensor of shape (output_channel).</dd>
</dl>
#### Outputs
<dl>
<dt><tt>outputs[0]</tt>: T</dt>
<dd>Output feature; 4-D tensor of shape (N, output_channel, outH, outW).</dd>
</dl>
#### Type Constraints
- T:tensor(float32, Linear)
## TensorRT Plugins for custom operators in MMCV (Experimental)
<!-- TOC -->
- [TensorRT Plugins for custom operators in MMCV (Experimental)](#tensorrt-plugins-for-custom-operators-in-mmcv-experimental)
- [Introduction](#introduction)
- [List of TensorRT plugins supported in MMCV](#list-of-tensorrt-plugins-supported-in-mmcv)
- [How to build TensorRT plugins in MMCV](#how-to-build-tensorrt-plugins-in-mmcv)
- [Prerequisite](#prerequisite)
- [Build on Linux](#build-on-linux)
- [Create TensorRT engine and run inference in python](#create-tensorrt-engine-and-run-inference-in-python)
- [How to add a TensorRT plugin for custom op in MMCV](#how-to-add-a-tensorrt-plugin-for-custom-op-in-mmcv)
- [Main procedures](#main-procedures)
- [Reminders](#reminders)
- [Known Issues](#known-issues)
- [References](#references)
<!-- TOC -->
### Introduction
**NVIDIA TensorRT** is a software development kit (SDK) for high-performance inference of deep learning models. It includes a deep learning inference optimizer and a runtime that delivers low latency and high throughput for deep learning inference applications. Please check its [developer's website](https://developer.nvidia.com/tensorrt) for more information.
To ease the deployment of trained models with custom operators from `mmcv.ops` using TensorRT, a series of TensorRT plugins are included in MMCV.
### List of TensorRT plugins supported in MMCV
| ONNX Operator | TensorRT Plugin | MMCV Releases |
| :-----------------------: | :-----------------------------------------------------------------------------: | :-----------: |
| MMCVRoiAlign | [MMCVRoiAlign](./tensorrt_custom_ops.md#mmcvroialign) | 1.2.6 |
| ScatterND | [ScatterND](./tensorrt_custom_ops.md#scatternd) | 1.2.6 |
| NonMaxSuppression | [NonMaxSuppression](./tensorrt_custom_ops.md#nonmaxsuppression) | 1.3.0 |
| MMCVDeformConv2d | [MMCVDeformConv2d](./tensorrt_custom_ops.md#mmcvdeformconv2d) | 1.3.0 |
| grid_sampler | [grid_sampler](./tensorrt_custom_ops.md#grid-sampler) | 1.3.1 |
| cummax | [cummax](./tensorrt_custom_ops.md#cummax) | 1.3.5 |
| cummin | [cummin](./tensorrt_custom_ops.md#cummin) | 1.3.5 |
| MMCVInstanceNormalization | [MMCVInstanceNormalization](./tensorrt_custom_ops.md#mmcvinstancenormalization) | 1.3.5 |
| MMCVModulatedDeformConv2d | [MMCVModulatedDeformConv2d](./tensorrt_custom_ops.md#mmcvmodulateddeformconv2d) | master |
Notes
- All plugins listed above are developed on TensorRT-7.2.1.6.Ubuntu-16.04.x86_64-gnu.cuda-10.2.cudnn8.0
### How to build TensorRT plugins in MMCV
#### Prerequisite
- Clone repository
```bash
git clone https://github.com/open-mmlab/mmcv.git
```
- Install TensorRT
Download the corresponding TensorRT build from [NVIDIA Developer Zone](https://developer.nvidia.com/nvidia-tensorrt-download).
For example, for Ubuntu 16.04 on x86-64 with cuda-10.2, the downloaded file is `TensorRT-7.2.1.6.Ubuntu-16.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz`.
Then, install as below:
```bash
cd ~/Downloads
tar -xvzf TensorRT-7.2.1.6.Ubuntu-16.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz
export TENSORRT_DIR=`pwd`/TensorRT-7.2.1.6
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$TENSORRT_DIR/lib
```
Install the Python packages `tensorrt`, `graphsurgeon` and `onnx-graphsurgeon`:
```bash
pip install $TENSORRT_DIR/python/tensorrt-7.2.1.6-cp37-none-linux_x86_64.whl
pip install $TENSORRT_DIR/onnx_graphsurgeon/onnx_graphsurgeon-0.2.6-py2.py3-none-any.whl
pip install $TENSORRT_DIR/graphsurgeon/graphsurgeon-0.4.5-py2.py3-none-any.whl
```
For more detailed information on installing TensorRT from a tar file, please refer to [NVIDIA's website](https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-721/install-guide/index.html#installing-tar).
#### Build on Linux
```bash
cd mmcv  # go to the MMCV root directory
MMCV_WITH_OPS=1 MMCV_WITH_TRT=1 pip install -e .
```
### Create TensorRT engine and run inference in python
Here is an example.
```python
import torch
import onnx

from mmcv.tensorrt import (TRTWrapper, onnx2trt, save_trt_engine,
                           is_tensorrt_plugin_loaded)

assert is_tensorrt_plugin_loaded(), 'Requires to compile TensorRT plugins in mmcv'

onnx_file = 'sample.onnx'
trt_file = 'sample.trt'
onnx_model = onnx.load(onnx_file)

# Model input
inputs = torch.rand(1, 3, 224, 224).cuda()
# Model input shape info: [min_shape, opt_shape, max_shape] per input
opt_shape_dict = {
    'input': [list(inputs.shape),
              list(inputs.shape),
              list(inputs.shape)]
}

# Create TensorRT engine
max_workspace_size = 1 << 30  # 1 GiB builder workspace
trt_engine = onnx2trt(
    onnx_model,
    opt_shape_dict,
    max_workspace_size=max_workspace_size)

# Save TensorRT engine
save_trt_engine(trt_engine, trt_file)

# Run inference with TensorRT
trt_model = TRTWrapper(trt_file, ['input'], ['output'])

with torch.no_grad():
    trt_outputs = trt_model({'input': inputs})
    output = trt_outputs['output']
```
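In the example above, `opt_shape_dict` maps each input name to a `[min_shape, opt_shape, max_shape]` triple; since all three entries are identical, the engine accepts only that fixed shape. A small hypothetical helper (`make_opt_shape_dict` is illustrative, not part of mmcv's API) makes the dynamic-batch case explicit:

```python
def make_opt_shape_dict(name, shape, min_batch=1, opt_batch=1, max_batch=1):
    """Return {name: [min_shape, opt_shape, max_shape]}, varying only batch."""
    _, *rest = shape
    return {name: [[min_batch, *rest],
                   [opt_batch, *rest],
                   [max_batch, *rest]]}

# Static engine (all three entries equal), as in the example above:
print(make_opt_shape_dict('input', (1, 3, 224, 224)))
# Dynamic batch between 1 and 8, with the builder optimizing for 4:
print(make_opt_shape_dict('input', (1, 3, 224, 224), 1, 4, 8))
```

The resulting dict can be passed to `onnx2trt` in place of the static one, provided the ONNX model itself was exported with a dynamic batch axis.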
### How to add a TensorRT plugin for custom op in MMCV
#### Main procedures
Below are the main steps:
1. Add a C++ header file
2. Add a C++ source file
3. Add a CUDA kernel file
4. Register the plugin in `trt_plugin.cpp`
5. Add a unit test in `tests/test_ops/test_tensorrt.py`
**Take the RoIAlign plugin `roi_align` as an example.**
1. Add the header `trt_roi_align.hpp` to the TensorRT include directory `mmcv/ops/csrc/tensorrt/`
2. Add the source `trt_roi_align.cpp` to the TensorRT source directory `mmcv/ops/csrc/tensorrt/plugins/`
3. Add the CUDA kernel `trt_roi_align_kernel.cu` to the TensorRT source directory `mmcv/ops/csrc/tensorrt/plugins/`
4. Register the `roi_align` plugin in [trt_plugin.cpp](https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/csrc/tensorrt/plugins/trt_plugin.cpp)
```c++
#include "trt_plugin.hpp"
#include "trt_roi_align.hpp"
REGISTER_TENSORRT_PLUGIN(RoIAlignPluginDynamicCreator);
extern "C" {
bool initLibMMCVInferPlugins() { return true; }
} // extern "C"
```
5. Add a unit test to `tests/test_ops/test_tensorrt.py`
Check [here](https://github.com/open-mmlab/mmcv/blob/master/tests/test_ops/test_tensorrt.py) for examples.
#### Reminders
- Some of the [custom ops](https://mmcv.readthedocs.io/en/latest/ops.html) in `mmcv` already have CUDA implementations, which can serve as references when writing the TensorRT kernel.
### Known Issues
- None
### References
- [Developer guide of Nvidia TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html)
- [TensorRT Open Source Software](https://github.com/NVIDIA/TensorRT)
- [onnx-tensorrt](https://github.com/onnx/onnx-tensorrt)
- [TensorRT python API](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/index.html)
- [TensorRT c++ plugin API](https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_plugin.html)