Unverified commit b326a219, authored by Zaida Zhou and committed by GitHub

[Docs] Replace markdownlint with mdformat for avoiding installing ruby (#1936)

* Use mdformat pre-commit hook

* allows consecutive numbering

* improve .mdformat.toml

* test mdformat

* format markdown

* minor fix

* fix codespell

* fix circleci

* add linkify-it-py dependency for circleci

* add comments

* replace flake8 url

* add mdformat-myst dependency

* remove mdformat-myst dependency

* update contributing.md
parent 8708851e
@@ -5,12 +5,6 @@ jobs:
      - image: cimg/python:3.7.4
    steps:
      - checkout
-      - run:
-          name: Install dependencies
-          command: |
-            sudo apt-add-repository ppa:brightbox/ruby-ng -y
-            sudo apt-get update
-            sudo apt-get install -y ruby2.7
      - run:
          name: Install pre-commit hook
          command: |
...
@@ -4,15 +4,14 @@ about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Describe the feature**
**Motivation**
A clear and concise description of the motivation of the feature.
-Ex1. It is inconvenient when [....].
-Ex2. There is a recent paper [....], which is very helpful for [....].
+Ex1. It is inconvenient when \[....\].
+Ex2. There is a recent paper \[....\], which is very helpful for \[....\].
**Related resources**
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.
...
@@ -4,7 +4,6 @@ about: Ask general questions to get help
title: ''
labels: ''
assignees: ''
---
**Checklist**
...
@@ -4,7 +4,6 @@ about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
Thanks for reporting the unexpected results and we appreciate it a lot.
@@ -32,8 +31,8 @@ A placeholder for the command.
1. Please run `python -c "from mmcv.utils import collect_env; print(collect_env())"` to collect necessary environment information and paste it here.
2. You may add addition that may be helpful for locating the problem, such as
-- How you installed PyTorch [e.g., pip, conda, source]
+- How you installed PyTorch \[e.g., pip, conda, source\]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
If applicable, paste the error traceback here.
...
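The recurring change in these issue templates is mdformat escaping literal square brackets so Markdown renderers do not mistake them for link syntax. A minimal sketch of that escaping rule (an illustrative helper with a made-up name, not mdformat's actual implementation, and ignoring real link syntax):

```python
import re

def escape_brackets(text: str) -> str:
    """Prefix every literal [ or ] with a backslash.
    Hypothetical helper for illustration; mdformat's real logic
    also leaves genuine [text](url) links untouched."""
    return re.sub(r"([\[\]])", r"\\\1", text)

print(escape_brackets("It is inconvenient when [....]."))
```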
exclude: ^tests/data/
repos:
-  - repo: https://gitlab.com/pycqa/flake8.git
+  - repo: https://github.com/PyCQA/flake8
    rev: 3.8.3
    hooks:
      - id: flake8
@@ -25,16 +25,19 @@ repos:
        args: ["--remove"]
      - id: mixed-line-ending
        args: ["--fix=lf"]
-  - repo: https://github.com/markdownlint/markdownlint
-    rev: v0.11.0
-    hooks:
-      - id: markdownlint
-        args: ["-r", "~MD002,~MD013,~MD029,~MD033,~MD034",
-               "-t", "allow_different_nesting"]
  - repo: https://github.com/codespell-project/codespell
    rev: v2.1.0
    hooks:
      - id: codespell
+  - repo: https://github.com/executablebooks/mdformat
+    rev: 0.7.14
+    hooks:
+      - id: mdformat
+        args: ["--number"]
+        additional_dependencies:
+          - mdformat-gfm
+          - mdformat_frontmatter
+          - linkify-it-py
  - repo: https://github.com/myint/docformatter
    rev: v1.3.1
    hooks:
...
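The `--number` flag passed to the mdformat hook above is what the "allows consecutive numbering" commit refers to: ordered lists are renumbered 1, 2, 3, ... even if the source repeats `1.`. A rough stdlib sketch of that effect for flat lists only (our illustration; mdformat itself also handles nesting and other markers):

```python
import re

def renumber(markdown: str) -> str:
    """Renumber a flat ordered list so items count 1, 2, 3, ...
    Illustrative only; not mdformat's implementation."""
    counter = 0
    out = []
    for line in markdown.splitlines():
        m = re.match(r"(\s*)\d+\.\s+(.*)", line)
        if m:
            counter += 1
            out.append(f"{m.group(1)}{counter}. {m.group(2)}")
        else:
            counter = 0  # a non-list line ends the list
            out.append(line)
    return "\n".join(out)

print(renumber("1. first\n1. second\n1. third"))
```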
@@ -16,6 +16,7 @@ All kinds of contributions are welcome, including but not limited to the followi
```{note}
If you plan to add some new features that involve large changes, it is encouraged to open an issue for discussion first.
```
### Code style
#### Python
@@ -24,10 +25,11 @@ We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code
We use the following tools for linting and formatting:
-- [flake8](http://flake8.pycqa.org/en/latest/): A wrapper around some linter tools.
+- [flake8](https://github.com/PyCQA/flake8): A wrapper around some linter tools.
-- [yapf](https://github.com/google/yapf): A formatter for Python files.
- [isort](https://github.com/timothycrosley/isort): A Python utility to sort imports.
-- [markdownlint](https://github.com/markdownlint/markdownlint): A linter to check markdown files and flag style issues.
+- [yapf](https://github.com/google/yapf): A formatter for Python files.
+- [codespell](https://github.com/codespell-project/codespell): A Python utility to fix common misspellings in text files.
+- [mdformat](https://github.com/executablebooks/mdformat): Mdformat is an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files.
- [docformatter](https://github.com/myint/docformatter): A formatter to format docstring.
Style configurations of yapf and isort can be found in [setup.cfg](./setup.cfg).
@@ -48,23 +50,9 @@ From the repository folder
pre-commit install
```
-Try the following steps to install ruby when you encounter an issue on installing markdownlint
-```shell
-# install rvm
-curl -L https://get.rvm.io | bash -s -- --autolibs=read-fail
-[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
-rvm autolibs disable
-# install ruby
-rvm install 2.7.1
-```
-Or refer to [this repo](https://github.com/innerlee/setup) and take [`zzruby.sh`](https://github.com/innerlee/setup/blob/master/zzruby.sh) according its instruction.
After this on every commit check code linters and formatter will be enforced.
->Before you create a PR, make sure that your code lints and is formatted by yapf.
+> Before you create a PR, make sure that your code lints and is formatted by yapf.
#### C++ and CUDA
...
@@ -2,7 +2,7 @@
In this file, we list the operations with other licenses instead of Apache 2.0. Users should be careful about adopting these operations in any commercial matters.
| Operation | Files | License |
-| :--------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------: | :------------: |
+| :--------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------: | :------------: |
| upfirdn2d | [mmcv/ops/csrc/pytorch/cuda/upfirdn2d_kernel.cu](https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/csrc/pytorch/cuda/upfirdn2d_kernel.cu) | NVIDIA License |
| fused_leaky_relu | [mmcv/ops/csrc/pytorch/cuda/fused_bias_leakyrelu_cuda.cu](https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/csrc/pytorch/cuda/fused_bias_leakyrelu_cuda.cu) | NVIDIA License |
@@ -77,7 +77,7 @@ Note: MMCV requires Python 3.6+.
There are two versions of MMCV:
- **mmcv-full**: comprehensive, with full features and various CUDA ops out of box. It takes longer time to build.
-- **mmcv**: lite, without CUDA ops but all other features, similar to mmcv<1.0.0. It is useful when you do not need those CUDA ops.
+- **mmcv**: lite, without CUDA ops but all other features, similar to mmcv\<1.0.0. It is useful when you do not need those CUDA ops.
**Note**: Do not install both versions in the same environment, otherwise you may encounter errors like `ModuleNotFound`. You need to uninstall one before installing the other. `Installing the full version is highly recommended if CUDA is available`.
@@ -89,14 +89,14 @@ We provide pre-built mmcv packages (recommended) with different PyTorch and CUDA
i. Install the latest version.
-The rule for installing the latest ``mmcv-full`` is as follows:
+The rule for installing the latest `mmcv-full` is as follows:
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
-Please replace ``{cu_version}`` and ``{torch_version}`` in the url to your desired one. For example,
-to install the latest ``mmcv-full`` with ``CUDA 11.1`` and ``PyTorch 1.9.0``, use the following command:
+Please replace `{cu_version}` and `{torch_version}` in the url to your desired one. For example,
+to install the latest `mmcv-full` with `CUDA 11.1` and `PyTorch 1.9.0`, use the following command:
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
@@ -108,19 +108,19 @@ pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8.0/index.html
```
-For more details, please refer the the following tables and delete ``=={mmcv_version}``.
+For more details, please refer the the following tables and delete `=={mmcv_version}`.
ii. Install a specified version.
-The rule for installing a specified ``mmcv-full`` is as follows:
+The rule for installing a specified `mmcv-full` is as follows:
```shell
pip install mmcv-full=={mmcv_version} -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
-First of all, please refer to the Releases and replace ``{mmcv_version}`` a specified one. e.g. ``1.3.9``.
-Then replace ``{cu_version}`` and ``{torch_version}`` in the url to your desired versions. For example,
-to install ``mmcv-full==1.3.9`` with ``CUDA 11.1`` and ``PyTorch 1.9.0``, use the following command:
+First of all, please refer to the Releases and replace `{mmcv_version}` a specified one. e.g. `1.3.9`.
+Then replace `{cu_version}` and `{torch_version}` in the url to your desired versions. For example,
+to install `mmcv-full==1.3.9` with `CUDA 11.1` and `PyTorch 1.9.0`, use the following command:
```shell
pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
...
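The install rule in the README hunk above is a string template over `{cu_version}` and `{torch_version}`. A small sketch that fills it in (the helper name is ours, not part of mmcv):

```python
def mmcv_index_url(cu_version: str, torch_version: str) -> str:
    """Build the find-links URL used by `pip install mmcv-full -f ...`.
    Illustrative helper; not part of mmcv itself."""
    return (
        "https://download.openmmlab.com/mmcv/dist/"
        f"{cu_version}/{torch_version}/index.html"
    )

# e.g. the CUDA 11.1 + PyTorch 1.9.0 command shown in the diff:
print("pip install mmcv-full -f " + mmcv_index_url("cu111", "torch1.9.0"))
```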
@@ -88,13 +88,13 @@ a. 安装完整版
i. 安装最新版本
-如下是安装最新版 ``mmcv-full`` 的命令
+如下是安装最新版 `mmcv-full` 的命令
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
-请将链接中的 ``{cu_version}````{torch_version}`` 根据自身需求替换成实际的版本号,例如想安装和 ``CUDA 11.1````PyTorch 1.9.0`` 兼容的最新版 ``mmcv-full``,使用如下替换过的命令
+请将链接中的 `{cu_version}``{torch_version}` 根据自身需求替换成实际的版本号,例如想安装和 `CUDA 11.1``PyTorch 1.9.0` 兼容的最新版 `mmcv-full`,使用如下替换过的命令
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
@@ -106,18 +106,18 @@ pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8.0/index.html
```
-如果想知道更多 CUDA 和 PyTorch 版本的命令,可以参考下面的表格,将链接中的 ``=={mmcv_version}`` 删去即可。
+如果想知道更多 CUDA 和 PyTorch 版本的命令,可以参考下面的表格,将链接中的 `=={mmcv_version}` 删去即可。
ii. 安装特定的版本
-如下是安装特定版本 ``mmcv-full`` 的命令
+如下是安装特定版本 `mmcv-full` 的命令
```shell
pip install mmcv-full=={mmcv_version} -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
-首先请参考版本发布信息找到想要安装的版本号,将 ``{mmcv_version}`` 替换成该版本号,例如 ``1.3.9``
+首先请参考版本发布信息找到想要安装的版本号,将 `{mmcv_version}` 替换成该版本号,例如 `1.3.9`
-然后将链接中的 ``{cu_version}````{torch_version}`` 根据自身需求替换成实际的版本号,例如想安装和 ``CUDA 11.1````PyTorch 1.9.0`` 兼容的 ``mmcv-full`` 1.3.9 版本,使用如下替换过的命令
+然后将链接中的 `{cu_version}``{torch_version}` 根据自身需求替换成实际的版本号,例如想安装和 `CUDA 11.1``PyTorch 1.9.0` 兼容的 `mmcv-full` 1.3.9 版本,使用如下替换过的命令
```shell
pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
...
@@ -255,6 +255,7 @@ c. 安装完整版并且编译 onnxruntime 的自定义算子
## 许可证
`MMCV` 目前以 Apache 2.0 的许可证发布,但是其中有一部分功能并不是使用的 Apache2.0 许可证,我们在 [许可证](LICENSES.md) 中详细地列出了这些功能以及他们对应的许可证,如果您正在从事盈利性活动,请谨慎参考此文档。
## 欢迎加入 OpenMMLab 社区
扫描下方的二维码可关注 OpenMMLab 团队的 [知乎官方账号](https://www.zhihu.com/people/openmmlab),加入 OpenMMLab 团队的 [官方交流 QQ 群](https://jq.qq.com/?_wv=1027&k=3ijNTqfg),或添加微信小助手”OpenMMLabwx“加入官方交流微信群。
...
@@ -4,27 +4,27 @@ This document is used as a reference for English-Chinese terminology translation
该文档用作中英文翻译对照参考。
| English | 中文 |
-| :-----: | :---:|
+| :---------------: | :----: |
| annotation | 标注 |
| backbone | 主干网络 |
| benchmark | 基准测试 |
| checkpoint | 模型权重文件 |
| classifier | 分类器 |
| cls_head | 分类头 |
| decoder | 解码器 |
| detector | 检测器 |
| encoder | 编码器 |
| finetune | 微调 |
| ground truth | 真实标签 |
| hook | 钩子 |
| localizer | 定位器 |
| neck | 模型颈部 |
| pipeline | 流水线 |
| recognizer | 识别器 |
| register | 注册器 |
| schedule | 调整 |
| scheduler | 调度器 |
| segmentor | 分割器 |
| tensor | 张量 |
| training schedule | 训练策略 |
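The divider changes in the tables above come from mdformat's table normalization (via its GFM plugin), which pads every pipe-table column to the width of its longest cell. A rough sketch of that padding step, assuming simple single-line cells with no escaped pipes (our illustration, not mdformat's code):

```python
def pad_table(rows: list[list[str]]) -> list[str]:
    """Pad each column of a pipe table to its widest cell.
    Illustrative only; real formatters also rebuild the alignment
    divider and account for display width of wide characters."""
    widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
    return [
        "| " + " | ".join(cell.ljust(w) for cell, w in zip(row, widths)) + " |"
        for row in rows
    ]

for line in pad_table([["English", "中文"], ["annotation", "标注"]]):
    print(line)
```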
@@ -21,10 +21,10 @@ Pull requests let you tell others about changes you have pushed to a branch in a
#### 1. Get the most recent codebase
-+ When you work on your first PR
+- When you work on your first PR
Fork the OpenMMLab repository: click the **fork** button at the top right corner of Github page
![avatar](../_static/community/1.png)
Clone forked repository to local
@@ -38,14 +38,14 @@ Pull requests let you tell others about changes you have pushed to a branch in a
git remote add upstream git@github.com:open-mmlab/mmcv
```
-+ After your first PR
+- After your first PR
Checkout master branch of the local repository and pull the latest master branch of the source repository
```bash
git checkout master
git pull upstream master
```
#### 2. Checkout a new branch from the master branch
@@ -67,23 +67,23 @@ git commit -m 'messages'
#### 4. Push your changes to the forked repository and create a PR
-+ Push the branch to your forked remote repository
+- Push the branch to your forked remote repository
```bash
git push origin branchname
```
-+ Create a PR
+- Create a PR
![avatar](../_static/community/2.png)
-+ Revise PR message template to describe your motivation and modifications made in this PR. You can also link the related issue to the PR manually in the PR message (For more information, checkout the [official guidance](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)).
+- Revise PR message template to describe your motivation and modifications made in this PR. You can also link the related issue to the PR manually in the PR message (For more information, checkout the [official guidance](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)).
#### 5. Discuss and review your code
-+ After creating a pull request, you can ask a specific person to review the changes you've proposed
+- After creating a pull request, you can ask a specific person to review the changes you've proposed
![avatar](../_static/community/3.png)
-+ Modify your codes according to reviewers' suggestions and then push your changes
+- Modify your codes according to reviewers' suggestions and then push your changes
#### 6. Merge your branch to the master branch and delete the branch
@@ -100,15 +100,15 @@ git push origin --delete branchname # delete remote branch
3. Accomplish a detailed change in one PR. Avoid large PR
-+ Bad: Support Faster R-CNN
-+ Acceptable: Add a box head to Faster R-CNN
-+ Good: Add a parameter to box head to support custom conv-layer number
+- Bad: Support Faster R-CNN
+- Acceptable: Add a box head to Faster R-CNN
+- Good: Add a parameter to box head to support custom conv-layer number
4. Provide clear and significant commit message
5. Provide clear and meaningful PR description
-+ Task name should be clarified in title. The general format is: [Prefix] Short description of the PR (Suffix)
-+ Prefix: add new feature [Feature], fix bug [Fix], related to documents [Docs], in developing [WIP] (which will not be reviewed temporarily)
-+ Introduce main changes, results and influences on other modules in short description
-+ Associate related issues and pull requests with a milestone
+- Task name should be clarified in title. The general format is: \[Prefix\] Short description of the PR (Suffix)
+- Prefix: add new feature \[Feature\], fix bug \[Fix\], related to documents \[Docs\], in developing \[WIP\] (which will not be reviewed temporarily)
+- Introduce main changes, results and influences on other modules in short description
+- Associate related issues and pull requests with a milestone
...
@@ -3,6 +3,7 @@
To make custom operators in MMCV more standard, precise definitions of each operator are listed in this document.
<!-- TOC -->
- [MMCV Operators](#mmcv-operators)
  - [MMCVBorderAlign](#mmcvborderalign)
    - [Description](#description)
@@ -82,25 +83,26 @@ To make custom operators in MMCV more standard, precise definitions of each oper
    - [Inputs](#inputs-12)
    - [Outputs](#outputs-12)
    - [Type Constraints](#type-constraints-12)
-  - [grid_sampler*](#grid_sampler)
+  - [grid_sampler\*](#grid_sampler)
    - [Description](#description-13)
    - [Parameters](#parameters-13)
    - [Inputs](#inputs-13)
    - [Outputs](#outputs-13)
    - [Type Constraints](#type-constraints-13)
-  - [cummax*](#cummax)
+  - [cummax\*](#cummax)
    - [Description](#description-14)
    - [Parameters](#parameters-14)
    - [Inputs](#inputs-14)
    - [Outputs](#outputs-14)
    - [Type Constraints](#type-constraints-14)
-  - [cummin*](#cummin)
+  - [cummin\*](#cummin)
    - [Description](#description-15)
    - [Parameters](#parameters-15)
    - [Inputs](#inputs-15)
    - [Outputs](#outputs-15)
    - [Type Constraints](#type-constraints-15)
  - [Reminders](#reminders)
<!-- TOC -->
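The `\*` escapes added to the TOC entries above do not break the links, because GitHub-style heading anchors drop punctuation such as `*` anyway. A rough sketch of that slug rule (lowercase, spaces to hyphens, strip punctuation other than `-`/`_`; this is our approximation, not GitHub's exact algorithm):

```python
import re

def slugify(heading: str) -> str:
    """Approximate a GitHub-style heading anchor."""
    s = heading.strip().lower().replace(" ", "-")
    return re.sub(r"[^\w\-]", "", s)  # \w keeps letters, digits, and _

print(slugify("grid_sampler*"))   # matches the #grid_sampler anchor above
print(slugify("MMCV Operators"))  # matches the #mmcv-operators anchor above
```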
## MMCVBorderAlign
@@ -121,7 +123,7 @@ Read [BorderDet: Border Feature for Dense Object Detection](ttps://arxiv.org/abs
### Parameters
| Type | Parameter | Description |
-|-------|-------------|-------------------------------------------------------------------------------------|
+| ----- | ----------- | ----------------------------------------------------------------------------------- |
| `int` | `pool_size` | number of positions sampled over the boxes' borders(e.g. top, bottom, left, right). |
### Inputs
@@ -155,7 +157,7 @@ Read [CARAFE: Content-Aware ReAssembly of FEatures](https://arxiv.org/abs/1905.0
### Parameters
| Type | Parameter | Description |
-|---------|----------------|-----------------------------------------------|
+| ------- | -------------- | --------------------------------------------- |
| `int` | `kernel_size` | reassemble kernel size, should be odd integer |
| `int` | `group_size` | reassemble group size |
| `float` | `scale_factor` | upsample ratio(>=1) |
@@ -242,7 +244,6 @@ None
- T:tensor(float32)
## MMCVCornerPool
### Description
@@ -252,7 +253,7 @@ Perform CornerPool on `input` features. Read [CornerNet -- Detecting Objects as
### Parameters
| Type | Parameter | Description |
-|-------|-----------|------------------------------------------------------------------|
+| ----- | --------- | ---------------------------------------------------------------- |
| `int` | `mode` | corner pool mode, (0: `top`, 1: `bottom`, 2: `left`, 3: `right`) |
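As a sanity check on what `mode` selects: corner pooling in the `top` direction replaces each position with the maximum of everything at or below it along that axis. A 1-D sketch under that reading of the op (our illustration, not MMCV's CUDA implementation, which operates on NCHW feature maps):

```python
def corner_pool_top(column: list[float]) -> list[float]:
    """1-D 'top' corner pool: out[i] = max(column[i:]).
    Computed as a reversed running maximum."""
    out = column[:]
    for i in range(len(out) - 2, -1, -1):
        out[i] = max(out[i], out[i + 1])
    return out

print(corner_pool_top([1.0, 3.0, 2.0]))
```

The other three modes are the same running maximum taken from the opposite end or along the other axis.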
### Inputs
@@ -284,7 +285,7 @@ Read [Deformable Convolutional Networks](https://arxiv.org/pdf/1703.06211.pdf) f
### Parameters
| Type | Parameter | Description |
-|----------------|---------------------|-------------------------------------------------------------------------------------------------------------------|
+| -------------- | ------------------- | ----------------------------------------------------------------------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel, (sH, sW). Defaults to `(1, 1)`. |
| `list of ints` | `padding` | Paddings on both sides of the input, (padH, padW). Defaults to `(0, 0)`. |
| `list of ints` | `dilation` | The spacing between kernel elements (dH, dW). Defaults to `(1, 1)`. |
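For reference when choosing `stride`, `padding`, and `dilation`, the output spatial size of a (deformable) convolution follows standard convolution arithmetic, applied independently per axis. A small sketch of that formula (standard conv arithmetic, not taken from mmcv's code):

```python
def conv_out_size(in_size: int, kernel: int, stride: int = 1,
                  padding: int = 0, dilation: int = 1) -> int:
    """out = floor((in + 2*pad - dilation*(kernel-1) - 1) / stride) + 1"""
    return (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# a 3x3 kernel with padding 1, stride 1 preserves the input size:
print(conv_out_size(7, kernel=3, padding=1))
```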
...@@ -324,7 +325,7 @@ Perform Modulated Deformable Convolution on input feature, read [Deformable Conv ...@@ -324,7 +325,7 @@ Perform Modulated Deformable Convolution on input feature, read [Deformable Conv
### Parameters ### Parameters
| Type | Parameter | Description | | Type | Parameter | Description |
|----------------|---------------------|---------------------------------------------------------------------------------------| | -------------- | ------------------- | ------------------------------------------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) | | `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) |
| `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) | | `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) |
| `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) | | `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) |
...@@ -366,7 +367,7 @@ Deformable roi pooling layer ...@@ -366,7 +367,7 @@ Deformable roi pooling layer
### Parameters ### Parameters
| Type | Parameter | Description | | Type | Parameter | Description |
|---------|------------------|---------------------------------------------------------------------------------------------------------------| | ------- | ---------------- | ------------------------------------------------------------------------------------------------------------- |
| `int` | `output_height` | height of output roi | | `int` | `output_height` | height of output roi |
| `int` | `output_width` | width of output roi | | `int` | `output_width` | width of output roi |
| `float` | `spatial_scale` | used to scale the input boxes |

@@ -405,7 +406,7 @@ Read [Pixel Recurrent Neural Networks](https://arxiv.org/abs/1601.06759) for mor

### Parameters

| Type | Parameter | Description |
| -------------- | --------- | -------------------------------------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW). **Only support stride=1 in mmcv** |
| `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW). Defaults to `(0, 0)`. |

@@ -444,7 +445,7 @@ Read [PSANet: Point-wise Spatial Attention Network for Scene Parsing](https://hs

### Parameters

| Type | Parameter | Description |
| -------------- | ----------- | -------------------------------------------- |
| `int` | `psa_type` | `0` means collect and `1` means `distribute` |
| `list of ints` | `mask_size` | The size of mask |

@@ -477,10 +478,10 @@ Note this definition is slightly different with [onnx: NonMaxSuppression](https:

### Parameters

| Type | Parameter | Description |
| ------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `int` | `center_point_box` | 0 - the box data is supplied as \[y1, x1, y2, x2\], 1 - the box data is supplied as \[x_center, y_center, width, height\]. |
| `int` | `max_output_boxes_per_class` | The maximum number of boxes to be selected per batch per class. Default to 0, number of output boxes equal to number of input boxes. |
| `float` | `iou_threshold` | The threshold for deciding whether boxes overlap too much with respect to IoU. Value range \[0, 1\]. Default to 0. |
| `float` | `score_threshold` | The threshold for deciding when to remove boxes based on score. |
| `int` | `offset` | 0 or 1, boxes' width or height is (x2 - x1 + offset). |
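To make the `iou_threshold` and `offset` attributes concrete, here is a minimal NumPy sketch of greedy NMS. This is an illustrative helper, not mmcv's CUDA implementation; the function name and inputs are assumptions.

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5, offset=1):
    """Greedy NMS sketch. boxes: (N, 4) as [x1, y1, x2, y2]."""
    x1, y1, x2, y2 = boxes.T
    # width/height use the `offset` attribute: (x2 - x1 + offset)
    areas = (x2 - x1 + offset) * (y2 - y1 + offset)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of box i with the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + offset)
        h = np.maximum(0.0, yy2 - yy1 + offset)
        inter = w * h
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # drop boxes overlapping box i by more than the threshold
        order = order[1:][iou <= iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores, iou_threshold=0.5, offset=1))  # [0, 2]
```

The second box overlaps the first with IoU ≈ 0.70 > 0.5 and is suppressed; the distant third box survives.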
@@ -515,7 +516,7 @@ Perform RoIAlign on output feature, used in bbox_head of most two-stage detector

### Parameters

| Type | Parameter | Description |
| ------- | ---------------- | ------------------------------------------------------------------------------------------------------------- |
| `int` | `output_height` | height of output roi |
| `int` | `output_width` | width of output roi |
| `float` | `spatial_scale` | used to scale the input boxes |

@@ -552,7 +553,7 @@ Perform RoI align pooling for rotated proposals

### Parameters

| Type | Parameter | Description |
| ------- | ---------------- | ------------------------------------------------------------------------------------------------------------- |
| `int` | `output_height` | height of output roi |
| `int` | `output_width` | width of output roi |
| `float` | `spatial_scale` | used to scale the input boxes |

@@ -580,7 +581,7 @@ Perform RoI align pooling for rotated proposals

- T:tensor(float32)

## grid_sampler\*

### Description

@@ -591,7 +592,7 @@ Check [torch.nn.functional.grid_sample](https://pytorch.org/docs/stable/generate

### Parameters

| Type | Parameter | Description |
| ----- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `int` | `interpolation_mode` | Interpolation mode to calculate output values. (0: `bilinear`, 1: `nearest`) |
| `int` | `padding_mode` | Padding mode for outside grid values. (0: `zeros`, 1: `border`, 2: `reflection`) |
| `int` | `align_corners` | If `align_corners=1`, the extrema (`-1` and `1`) are considered as referring to the center points of the input's corner pixels. If `align_corners=0`, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic. |

@@ -616,7 +617,7 @@ Check [torch.nn.functional.grid_sample](https://pytorch.org/docs/stable/generate

- T:tensor(float32, Linear)
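The `align_corners` behavior above can be illustrated by the coordinate mapping alone. Below is a small sketch assuming the same convention as `torch.nn.functional.grid_sample`; `unnormalize` is a hypothetical helper name, not part of mmcv.

```python
def unnormalize(coord, size, align_corners):
    """Map a grid value in [-1, 1] to a pixel coordinate along one axis."""
    if align_corners:
        # -1 and 1 refer to the centers of the corner pixels
        return (coord + 1) / 2 * (size - 1)
    # -1 and 1 refer to the outer edges of the corner pixels
    return ((coord + 1) * size - 1) / 2

W = 4  # axis length
print(unnormalize(-1.0, W, True))   # 0.0  -> center of the first pixel
print(unnormalize(1.0, W, True))    # 3.0  -> center of the last pixel
print(unnormalize(-1.0, W, False))  # -0.5 -> left edge of the first pixel
print(unnormalize(1.0, W, False))   # 3.5  -> right edge of the last pixel
```

With `align_corners=0`, the extrema land half a pixel outside the corner centers, which is what makes the sampling resolution agnostic.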
## cummax\*

### Description

@@ -625,7 +626,7 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative maximum e

### Parameters

| Type | Parameter | Description |
| ----- | --------- | -------------------------------------- |
| `int` | `dim` | the dimension to do the operation over |
### Inputs

@@ -648,7 +649,7 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative maximum e

- T:tensor(float32)

## cummin\*

### Description

@@ -657,7 +658,7 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative minimum e

### Parameters

| Type | Parameter | Description |
| ----- | --------- | -------------------------------------- |
| `int` | `dim` | the dimension to do the operation over |

### Inputs
...
@@ -69,7 +69,7 @@ Perform soft NMS on `boxes` with `scores`. Read [Soft-NMS -- Improving Object De

#### Parameters

| Type | Parameter | Description |
| ------- | --------------- | -------------------------------------------------------------- |
| `float` | `iou_threshold` | IoU threshold for NMS |
| `float` | `sigma` | hyperparameter for gaussian method |
| `float` | `min_score` | score filter threshold |
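To show how `sigma` and `min_score` interact under the gaussian method, here is a NumPy sketch of soft NMS. It is illustrative only: mmcv's operator is implemented in C++, and the linear method and `offset` handling are simplified here.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, min_score=0.05, offset=1):
    """Gaussian soft-NMS sketch: instead of dropping overlapping boxes,
    decay their scores by exp(-iou^2 / sigma)."""
    scores = scores.astype(float).copy()
    areas = (boxes[:, 2] - boxes[:, 0] + offset) * (boxes[:, 3] - boxes[:, 1] + offset)
    keep, idxs = [], list(range(len(boxes)))
    while idxs:
        # pick the remaining box with the highest (possibly decayed) score
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < min_score:  # everything left is filtered out
            break
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            xx1 = max(boxes[best, 0], boxes[i, 0])
            yy1 = max(boxes[best, 1], boxes[i, 1])
            xx2 = min(boxes[best, 2], boxes[i, 2])
            yy2 = min(boxes[best, 3], boxes[i, 3])
            w = max(0.0, xx2 - xx1 + offset)
            h = max(0.0, yy2 - yy1 + offset)
            inter = w * h
            iou = inter / (areas[best] + areas[i] - inter)
            scores[i] *= np.exp(-iou * iou / sigma)  # gaussian score decay
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores, sigma=0.5, min_score=0.3))  # [0, 2]
```

Here the second box's score decays from 0.8 to ≈ 0.30 because of its ≈ 0.70 IoU with the first box, so `min_score=0.3` removes it.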
@@ -107,7 +107,7 @@ Perform RoIAlign on output feature, used in bbox_head of most two-stage detector

#### Parameters

| Type | Parameter | Description |
| ------- | ---------------- | ------------------------------------------------------------------------------------------------------------- |
| `int` | `output_height` | height of output roi |
| `int` | `output_width` | width of output roi |
| `float` | `spatial_scale` | used to scale the input boxes |

@@ -143,10 +143,10 @@ Filter out boxes has high IoU overlap with previously selected boxes.

#### Parameters

| Type | Parameter | Description |
| ------- | --------------- | ------------------------------------------------------------------------------------------------------------------ |
| `float` | `iou_threshold` | The threshold for deciding whether boxes overlap too much with respect to IoU. Value range \[0, 1\]. Default to 0. |
| `int` | `offset` | 0 or 1, boxes' width or height is (x2 - x1 + offset). |

#### Inputs

@@ -177,7 +177,7 @@ Perform sample from `input` with pixel locations from `grid`.

#### Parameters

| Type | Parameter | Description |
| ----- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `int` | `interpolation_mode` | Interpolation mode to calculate output values. (0: `bilinear`, 1: `nearest`) |
| `int` | `padding_mode` | Padding mode for outside grid values. (0: `zeros`, 1: `border`, 2: `reflection`) |
| `int` | `align_corners` | If `align_corners=1`, the extrema (`-1` and `1`) are considered as referring to the center points of the input's corner pixels. If `align_corners=0`, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic. |

@@ -211,7 +211,7 @@ Perform CornerPool on `input` features. Read [CornerNet -- Detecting Objects as

#### Parameters

| Type | Parameter | Description |
| ----- | --------- | ---------------------------------------------------------------- |
| `int` | `mode` | corner pool mode, (0: `top`, 1: `bottom`, 2: `left`, 3: `right`) |
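The four `mode` values can be sketched as directional running maxima over a 2-D feature map. This is an illustrative NumPy version; the exact scan direction assigned to each mode name is an assumption following the CornerNet description, not taken from this document.

```python
import numpy as np

def corner_pool(x, mode):
    """CornerPool sketch on a 2-D map (H, W): each output element is the
    maximum of the input along a ray toward one border."""
    if mode == 'top':      # out[i, j] = max(x[i:, j])
        return np.maximum.accumulate(x[::-1], axis=0)[::-1]
    if mode == 'bottom':   # out[i, j] = max(x[:i + 1, j])
        return np.maximum.accumulate(x, axis=0)
    if mode == 'left':     # out[i, j] = max(x[i, j:])
        return np.maximum.accumulate(x[:, ::-1], axis=1)[:, ::-1]
    if mode == 'right':    # out[i, j] = max(x[i, :j + 1])
        return np.maximum.accumulate(x, axis=1)
    raise ValueError(mode)

x = np.array([[1, 3, 2],
              [4, 0, 5]])
# 'left' turns each row into its suffix maximum
print(corner_pool(x, 'left').tolist())  # [[3, 3, 2], [5, 5, 5]]
```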
#### Inputs

@@ -241,7 +241,7 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative maximum e

#### Parameters

| Type | Parameter | Description |
| ----- | --------- | -------------------------------------- |
| `int` | `dim` | the dimension to do the operation over |

#### Inputs

@@ -273,7 +273,7 @@ Returns a tuple (`values`, `indices`) where `values` is the cumulative minimum e

#### Parameters

| Type | Parameter | Description |
| ----- | --------- | -------------------------------------- |
| `int` | `dim` | the dimension to do the operation over |

#### Inputs

@@ -305,7 +305,7 @@ Perform Modulated Deformable Convolution on input feature, read [Deformable Conv

#### Parameters

| Type | Parameter | Description |
| -------------- | ---------- | ------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) |
| `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) |
| `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) |

@@ -347,7 +347,7 @@ Perform Deformable Convolution on input feature, read [Deformable Convolutional

#### Parameters

| Type | Parameter | Description |
| -------------- | ---------- | ------------------------------------------------- |
| `list of ints` | `stride` | The stride of the convolving kernel. (sH, sW) |
| `list of ints` | `padding` | Paddings on both sides of the input. (padH, padW) |
| `list of ints` | `dilation` | The spacing between kernel elements. (dH, dW) |
...
@@ -21,7 +21,7 @@ Welcome to use the unified model deployment toolbox MMDeploy: https://github.com

### List of operators for ONNX Runtime supported in MMCV

| Operator | CPU | GPU | MMCV Releases |
| :----------------------------------------------------- | :-: | :-: | :-----------: |
| [SoftNMS](onnxruntime_custom_ops.md#softnms) | Y | N | 1.2.3 |
| [RoIAlign](onnxruntime_custom_ops.md#roialign) | Y | N | 1.2.5 |
| [NMS](onnxruntime_custom_ops.md#nms) | Y | N | 1.2.7 |

@@ -96,6 +97,7 @@ onnx_results = sess.run(None, {'input' : input_data})

- *Please note that this feature is experimental and may change in the future. Strongly suggest users always try with the latest master branch.*
- The custom operator is not included in [supported operator list](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md) in ONNX Runtime.
- The custom operator should be able to be exported to ONNX.

#### Main procedures

@@ -103,18 +104,20 @@ Take custom operator `soft_nms` for example.

1. Add header `soft_nms.h` to ONNX Runtime include directory `mmcv/ops/csrc/onnxruntime/`

2. Add source `soft_nms.cpp` to ONNX Runtime source directory `mmcv/ops/csrc/onnxruntime/cpu/`

3. Register `soft_nms` operator in [onnxruntime_register.cpp](../../mmcv/ops/csrc/onnxruntime/cpu/onnxruntime_register.cpp)

   ```c++
   #include "soft_nms.h"

   SoftNmsOp c_SoftNmsOp;
   if (auto status = ortApi->CustomOpDomain_Add(domain, &c_SoftNmsOp)) {
     return status;
   }
   ```

4. Add unit test into `tests/test_ops/test_onnx.py`

   Check [here](../../tests/test_ops/test_onnx.py) for examples.
@@ -124,8 +127,8 @@ Take custom operator `soft_nms` for example.

### Known Issues

- "RuntimeError: tuple appears in op that does not forward tuples, unsupported kind: `prim::PythonOp`."
  1. Note that in general `cummax` or `cummin` is exportable to ONNX as long as the torch version >= 1.5.0, since `torch.cummax` is only supported with torch >= 1.5.0. But when `cummax` or `cummin` serves as an intermediate component whose outputs are used as inputs to other modules, torch >= 1.7.0 is required. Otherwise the above error may arise when running the exported ONNX model with ONNX Runtime.
  2. Solution: update the torch version to 1.7.0 or higher.

### References
...
@@ -102,7 +102,7 @@ detectors.

#### Description

ScatterND takes three inputs: a `data` tensor of rank r >= 1, an `indices` tensor of rank q >= 1, and an `updates` tensor of rank q + r - indices.shape\[-1\] - 1. The output of the operation is produced by creating a copy of the input `data`, and then updating its value to values specified by `updates` at specific index positions specified by `indices`. Its output shape is the same as the shape of `data`. Note that `indices` should not have duplicate entries. That is, two or more updates for the same index-location is not supported.

The `output` is calculated via the following equation:
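The update can also be sketched in NumPy, following the reference pseudocode in the ONNX specification; `scatter_nd` is a hypothetical helper name, not mmcv's plugin.

```python
import numpy as np

def scatter_nd(data, indices, updates):
    """Reference-style ScatterND sketch: the last axis of `indices` is an
    index tuple into a copy of `data`, which receives the matching update."""
    output = np.copy(data)
    for idx in np.ndindex(indices.shape[:-1]):
        output[tuple(indices[idx])] = updates[idx]
    return output

data = np.zeros(8, dtype=np.int64)
indices = np.array([[4], [3], [1]])
updates = np.array([9, 10, 11])
print(scatter_nd(data, indices, updates).tolist())  # [0, 11, 0, 10, 9, 0, 0, 0]
```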
@@ -151,9 +151,9 @@ Filter out boxes has high IoU overlap with previously selected boxes or low scor

| Type | Parameter | Description |
| ------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `int` | `center_point_box` | 0 - the box data is supplied as \[y1, x1, y2, x2\], 1 - the box data is supplied as \[x_center, y_center, width, height\]. |
| `int` | `max_output_boxes_per_class` | The maximum number of boxes to be selected per batch per class. Default to 0, number of output boxes equal to number of input boxes. |
| `float` | `iou_threshold` | The threshold for deciding whether boxes overlap too much with respect to IoU. Value range \[0, 1\]. Default to 0. |
| `float` | `score_threshold` | The threshold for deciding when to remove boxes based on score. |
| `int` | `offset` | 0 or 1, boxes' width or height is (x2 - x1 + offset). |
...
@@ -4,6 +4,7 @@

TensorRT support will be deprecated in the future.

Welcome to use the unified model deployment toolbox MMDeploy: https://github.com/open-mmlab/mmdeploy

<!-- TOC -->

- [TensorRT Deployment](#tensorrt-deployment)

@@ -30,7 +31,7 @@ To ease the deployment of trained models with custom operators from `mmcv.ops` u

### List of TensorRT plugins supported in MMCV

| ONNX Operator | TensorRT Plugin | MMCV Releases |
| :------------------------ | :------------------------------------------------------------------------------ | :-----------: |
| MMCVRoiAlign | [MMCVRoiAlign](./tensorrt_custom_ops.md#mmcvroialign) | 1.2.6 |
| ScatterND | [ScatterND](./tensorrt_custom_ops.md#scatternd) | 1.2.6 |
| NonMaxSuppression | [NonMaxSuppression](./tensorrt_custom_ops.md#nonmaxsuppression) | 1.3.0 |

@@ -151,21 +152,24 @@ Below are the main steps:

**Take RoIAlign plugin `roi_align` for example.**

1. Add header `trt_roi_align.hpp` to TensorRT include directory `mmcv/ops/csrc/tensorrt/`

2. Add source `trt_roi_align.cpp` to TensorRT source directory `mmcv/ops/csrc/tensorrt/plugins/`

3. Add cuda kernel `trt_roi_align_kernel.cu` to TensorRT source directory `mmcv/ops/csrc/tensorrt/plugins/`

4. Register `roi_align` plugin in [trt_plugin.cpp](https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/csrc/tensorrt/plugins/trt_plugin.cpp)

   ```c++
   #include "trt_plugin.hpp"
   #include "trt_roi_align.hpp"

   REGISTER_TENSORRT_PLUGIN(RoIAlignPluginDynamicCreator);

   extern "C" {
   bool initLibMMCVInferPlugins() { return true; }
   }  // extern "C"
   ```

5. Add unit test into `tests/test_ops/test_tensorrt.py`

   Check [here](https://github.com/open-mmlab/mmcv/blob/master/tests/test_ops/test_tensorrt.py) for examples.
...
...@@ -7,53 +7,53 @@ Feel free to enrich the list if you find any frequent issues and have ways to he ...@@ -7,53 +7,53 @@ Feel free to enrich the list if you find any frequent issues and have ways to he
- KeyError: "xxx: 'yyy is not in the zzz registry'" - KeyError: "xxx: 'yyy is not in the zzz registry'"
The registry mechanism will be triggered only when the file of the module is imported. The registry mechanism will be triggered only when the file of the module is imported.
So you need to import that file somewhere. More details can be found at https://github.com/open-mmlab/mmdetection/issues/5974. So you need to import that file somewhere. More details can be found at https://github.com/open-mmlab/mmdetection/issues/5974.
- "No module named 'mmcv.ops'"; "No module named 'mmcv._ext'" - "No module named 'mmcv.ops'"; "No module named 'mmcv.\_ext'"
1. Uninstall existing mmcv in the environment using `pip uninstall mmcv` 1. Uninstall existing mmcv in the environment using `pip uninstall mmcv`
2. Install mmcv-full following the [installation instruction](https://mmcv.readthedocs.io/en/latest/get_started/installation.html) or [Build MMCV from source](https://mmcv.readthedocs.io/en/latest/get_started/build.html) 2. Install mmcv-full following the [installation instruction](https://mmcv.readthedocs.io/en/latest/get_started/installation.html) or [Build MMCV from source](https://mmcv.readthedocs.io/en/latest/get_started/build.html)
- "invalid device function" or "no kernel image is available for execution" - "invalid device function" or "no kernel image is available for execution"
1. Check the CUDA compute capability of you GPU 1. Check the CUDA compute capability of you GPU
2. Run `python mmdet/utils/collect_env.py` to check whether PyTorch, torchvision, and MMCV are built for the correct GPU architecture. You may need to set `TORCH_CUDA_ARCH_LIST` to reinstall MMCV. The compatibility issue could happen when using old GPUS, e.g., Tesla K80 (3.7) on colab. 2. Run `python mmdet/utils/collect_env.py` to check whether PyTorch, torchvision, and MMCV are built for the correct GPU architecture. You may need to set `TORCH_CUDA_ARCH_LIST` to reinstall MMCV. The compatibility issue could happen when using old GPUS, e.g., Tesla K80 (3.7) on colab.
3. Check whether the running environment is the same as that when mmcv/mmdet is compiled. For example, you may compile mmcv using CUDA 10.0 bug run it on CUDA9.0 environments 3. Check whether the running environment is the same as that when mmcv/mmdet is compiled. For example, you may compile mmcv using CUDA 10.0 bug run it on CUDA9.0 environments
- "undefined symbol" or "cannot open xxx.so" - "undefined symbol" or "cannot open xxx.so"
1. If those symbols are CUDA/C++ symbols (e.g., libcudart.so or GLIBCXX), check 1. If those symbols are CUDA/C++ symbols (e.g., libcudart.so or GLIBCXX), check
whether the CUDA/GCC runtimes are the same as those used for compiling mmcv whether the CUDA/GCC runtimes are the same as those used for compiling mmcv
2. If those symbols are Pytorch symbols (e.g., symbols containing caffe, aten, and TH), check whether the Pytorch version is the same as that used for compiling mmcv 2. If those symbols are Pytorch symbols (e.g., symbols containing caffe, aten, and TH), check whether the Pytorch version is the same as that used for compiling mmcv
3. Run `python mmdet/utils/collect_env.py` to check whether PyTorch, torchvision, and MMCV are built by and running on the same environment 3. Run `python mmdet/utils/collect_env.py` to check whether PyTorch, torchvision, and MMCV are built by and running on the same environment
- "RuntimeError: CUDA error: invalid configuration argument" - "RuntimeError: CUDA error: invalid configuration argument"
This error may be caused by the poor performance of GPU. Try to decrease the value of [THREADS_PER_BLOCK](https://github.com/open-mmlab/mmcv/blob/cac22f8cf5a904477e3b5461b1cc36856c2793da/mmcv/ops/csrc/common_cuda_helper.hpp#L10) This error may be caused by the poor performance of GPU. Try to decrease the value of [THREADS_PER_BLOCK](https://github.com/open-mmlab/mmcv/blob/cac22f8cf5a904477e3b5461b1cc36856c2793da/mmcv/ops/csrc/common_cuda_helper.hpp#L10)
and recompile mmcv. and recompile mmcv.
- "RuntimeError: nms is not compiled with GPU support" - "RuntimeError: nms is not compiled with GPU support"
  This error means your CUDA environment was not installed correctly. You may try to re-install your CUDA environment and then delete the build/ folder before re-compiling mmcv.
- "Segmentation fault"
  1. Check your GCC version and use GCC >= 5.4. This is usually caused by incompatibility between PyTorch and the environment (e.g., GCC < 4.9 for PyTorch). We also recommend avoiding GCC 5.5, because many users report that GCC 5.5 causes a "segmentation fault" and that simply switching to GCC 5.4 solves the problem
  2. Check whether PyTorch is correctly installed and can use CUDA ops, e.g., type the following command in your terminal and see whether it outputs the expected result
     ```shell
     python -c 'import torch; print(torch.cuda.is_available())'
     ```
  3. If PyTorch is correctly installed, check whether MMCV is correctly installed. If MMCV is correctly installed, the following command will raise no error
     ```shell
     python -c 'import mmcv; import mmcv.ops'
     ```
  4. If MMCV and PyTorch are correctly installed, you can use `ipdb` to set breakpoints, or directly add `print` statements, to debug and see which part leads to the `segmentation fault`
- "libtorch_cuda_cu.so: cannot open shared object file"
  `mmcv-full` depends on this shared object but it cannot be found. Check whether the object exists in `~/miniconda3/envs/{environment-name}/lib/python3.7/site-packages/torch/lib` or try to re-install PyTorch.
- "fatal error C1189: #error: -- unsupported Microsoft Visual Studio version!"
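Several of the environment checks above (matching the CUDA version mmcv was built against with the runtime, and verifying that PyTorch is importable at all) can be scripted. A minimal sketch, assuming only an optionally installed PyTorch; `versions_match` is a hypothetical helper, not part of mmcv:

```python
# Compare CUDA versions on major.minor only, since patch-level differences
# (e.g. 10.0 vs 10.0.130) are usually tolerated by the runtime.
def versions_match(build_cuda, runtime_cuda):
    to_pair = lambda v: tuple(int(x) for x in v.split('.')[:2])
    return to_pair(build_cuda) == to_pair(runtime_cuda)

if __name__ == '__main__':
    try:
        import torch
        # torch.version.cuda is the CUDA version PyTorch was compiled with,
        # or None for CPU-only builds.
        build_cuda = torch.version.cuda or 'cpu-only'
        print(f'PyTorch {torch.__version__}, built with CUDA {build_cuda}')
        print(f'CUDA available at runtime: {torch.cuda.is_available()}')
    except ImportError:
        print('PyTorch is not installed in this environment')
```

If the build-time and runtime CUDA versions disagree on major.minor, recompiling mmcv in the current environment is the usual fix.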
Feel free to enrich the list if you find any frequent issues and have ways to help.
- Compatibility issue between MMCV and MMDetection; "ConvWS is already registered in conv layer"
  Please install the correct version of MMCV for your version of MMDetection following the [installation instruction](https://mmdetection.readthedocs.io/en/latest/get_started.html#installation). More details can be found at https://github.com/pytorch/pytorch/pull/45956.
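The compatibility requirement boils down to a version-range check, which can be sketched in a few lines (the bounds used below are illustrative examples, not the real compatibility table; consult the installation instruction for actual ranges):

```python
def version_tuple(version):
    """Turn a plain version string like '1.3.9' into (1, 3, 9)."""
    return tuple(int(p) for p in version.split('.'))

def is_compatible(installed, minimum, maximum):
    """True if minimum <= installed <= maximum, compared component-wise."""
    return version_tuple(minimum) <= version_tuple(installed) <= version_tuple(maximum)

# Example with made-up bounds:
# is_compatible('1.3.9', '1.3.0', '1.4.0') -> True
```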
### Usage
- "RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one"
  1. This error indicates that your module has parameters that were not used in producing the loss. It may be caused by running different branches of your code in DDP mode. More details at https://github.com/pytorch/pytorch/issues/55582
  2. You can set `find_unused_parameters = True` in the config to solve the above problem, or find those unused parameters manually
- "RuntimeError: Trying to backward through the graph a second time"
  Both `GradientCumulativeOptimizerHook` and `OptimizerHook` are set, which causes `loss.backward()` to be called twice, so a `RuntimeError` is raised. Use only one of them. More details at https://github.com/open-mmlab/mmcv/issues/1379.
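Both fixes are config-level changes. A hypothetical fragment in mmcv's config style (the hook names are real mmcv classes; `cumulative_iters=4` is an arbitrary example value):

```python
# find_unused_parameters is forwarded to DistributedDataParallel; enable it
# only when some branches of the model are genuinely skipped in forward().
find_unused_parameters = True

# Configure exactly ONE optimizer hook; keeping both OptimizerHook and
# GradientCumulativeOptimizerHook would call loss.backward() twice.
optimizer_config = dict(
    type='GradientCumulativeOptimizerHook',
    cumulative_iters=4,  # accumulate gradients over 4 iterations
    grad_clip=None)
```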
If you would like to use `opencv-python-headless` instead of `opencv-python`, e.g., in a minimal container environment or on servers without GUI, you can first install it before installing MMCV to skip the installation of `opencv-python`.
### Build on Windows
Building MMCV on Windows is a bit more complicated than on Linux.
You should know how to set up environment variables, especially `Path`, on Windows.
1. Launch Anaconda prompt from Windows Start menu
   Do not use raw `cmd.exe`, as this instruction is based on PowerShell syntax.
2. Create a new conda environment
   ```shell
   conda create --name mmcv python=3.7  # 3.6, 3.7, 3.8 should work too as tested
   conda activate mmcv  # make sure to activate the environment before any operation
   ```
3. Install PyTorch. Choose a version based on your needs.
   ```shell
   conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
   ```
   We have only tested PyTorch >= 1.6.0.
4. Prepare MMCV source code
   ```shell
   git clone https://github.com/open-mmlab/mmcv.git
   cd mmcv
   ```
5. Install required Python packages
   ```shell
   pip3 install -r requirements/runtime.txt
   ```
6. It is recommended to install `ninja` to speed up compilation
   ```bash
   pip install -r requirements/optional.txt
   ```
#### Build and install MMCV
MMCV can be built in three ways:
1. Set up MSVC compiler
   Set the environment variable: add `C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\Hostx86\x64` to `PATH`, so that `cl.exe` is available in the prompt, as shown below.
   ```none
   (base) PS C:\Users\xxx> cl
   Microsoft (R) C/C++ Optimizing Compiler Version 19.27.29111 for x64
   Copyright (C) Microsoft Corporation. All rights reserved.
   usage: cl [ option... ] filename... [ /link linkoption... ]
   ```
   For compatibility, we use the x86-hosted and x64-targeted compiler. Note `Hostx86\x64` in the path.
   You may want to change the system language to English because PyTorch parses text output from `cl.exe` to check its version, and only utf-8 is recognized. Navigate to Control Panel -> Region -> Administrative -> Language for Non-Unicode programs and change it to English.
##### Option 1: Build MMCV (lite version)
##### Option 2: Build MMCV (full version with CPU)
1. Finish the common steps above
2. Set up environment variables
   ```shell
   $env:MMCV_WITH_OPS = 1
   $env:MAX_JOBS = 8  # based on your available number of CPU cores and amount of memory
   ```
3. Follow the build steps of the lite version
   ```shell
   # activate environment
   conda activate mmcv
   # change directory
   cd mmcv
   # build
   python setup.py build_ext  # if successful, cl will be launched to compile ops
   # install
   python setup.py develop
   # check
   pip list
   ```
##### Option 3: Build MMCV (full version with CUDA)
1. Finish the common steps above
2. Make sure `CUDA_PATH` or `CUDA_HOME` is already set in the environment (check via `ls env:`).
If you are compiling against PyTorch 1.6.0, you might meet some errors from PyTorch.
If you meet issues when running or compiling mmcv, we list some common issues in the [Frequently Asked Questions](../faq.html).
## [Optional] Build MMCV on IPU machine
First, you need to apply for an IPU cloud machine, see [here](https://www.graphcore.ai/ipus-in-the-cloud).
### Option 1: Docker
1. Pull the docker image

   ```shell
   docker pull graphcore/pytorch
   ```
2. Build MMCV under the same python environment
There are two versions of MMCV:
- **mmcv-full**: comprehensive, with full features and various CUDA ops out of the box. It takes longer to build.
- **mmcv**: lite, without CUDA ops but with all other features, similar to mmcv<1.0.0. It is useful when you do not need those CUDA ops.
```{warning}
Do not install both versions in the same environment, otherwise you may encounter errors like `ModuleNotFound`. You need to uninstall one before installing the other. Installing the full version is highly recommended if CUDA is available.
```
We provide pre-built mmcv packages (recommended) with different PyTorch and CUDA versions.
i. Install the latest version.
The rule for installing the latest `mmcv-full` is as follows:
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
Please replace `{cu_version}` and `{torch_version}` in the url with your desired versions. For example,
to install the latest `mmcv-full` with `CUDA 11.1` and `PyTorch 1.9.0`, use the following command:
```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
```
For more details, please refer to the following tables and delete `=={mmcv_version}`.
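The placeholders in the URL can also be derived programmatically from a PyTorch install. A sketch under the assumption that the index follows the `cu{version}` / `torch{version}` naming shown above; `build_find_links_url` is a hypothetical helper:

```python
def build_find_links_url(torch_version, cuda_version):
    """E.g. ('1.9.0+cu111', '11.1') ->
    https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
    Pass cuda_version=None for a CPU-only build."""
    torch_version = torch_version.split('+')[0]  # strip local suffix like +cu111
    cu = 'cpu' if cuda_version is None else 'cu' + cuda_version.replace('.', '')
    return ('https://download.openmmlab.com/mmcv/dist/'
            f'{cu}/torch{torch_version}/index.html')

# With a live install, the arguments would come from
# torch.__version__ and torch.version.cuda.
```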
ii. Install a specified version.
The rule for installing a specified `mmcv-full` is as follows:
```shell
pip install mmcv-full=={mmcv_version} -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```
First of all, please refer to the Releases and replace `{mmcv_version}` with a specified one, e.g. `1.3.9`.
Then replace `{cu_version}` and `{torch_version}` in the url with your desired versions. For example,
to install `mmcv-full==1.3.9` with `CUDA 11.1` and `PyTorch 1.9.0`, use the following command:
```shell
pip install mmcv-full==1.3.9 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
```
We no longer provide `mmcv-full` packages compiled under lower versions of `PyTorch`.
### PyTorch 1.4
Compatible MMCV versions: `1.0.0 <= mmcv_version <= 1.2.1`
#### CUDA 10.1
### PyTorch v1.3
Compatible MMCV versions: `1.0.0 <= mmcv_version <= 1.3.16`
#### CUDA 10.1