This folder contains all non-python code for MMCV custom ops. Please follow the same architecture if you want to add new ops.

```
│ │ └── ...
│   └── utils
│ │ └── ...
├── onnxruntime
│ ├── onnxruntime_register.h
│ ├── onnxruntime_session_options_config_keys.h
│ ├── ort_mmcv_utils.h
│ ├── ...
│ ├── onnx_ops.h
│ └── cpu
│ ├── onnxruntime_register.cpp
│ ├── ...
│ └── onnx_ops_impl.cpp
├── parrots
│ ├── ...
│ ├── ops.cpp
│ ├── ops_parrots.cpp
│ └── ops_pytorch.h
├── pytorch
│ ├── info.cpp
│ ├── pybind.cpp
│ ├── ...
│ ├── ops.cpp
│ ├── cuda
│ │ ├── ...
│ │ └── ops_cuda.cu
│ ├── cpu
│ │ ├── ...
│ │ └── ops.cpp
│ ├── mps
│ │ ├── ...
│   │   └── op_mps.mm
│ └── mlu
│ ├── ...
│ └── op_mlu.cpp
└── tensorrt
├── trt_cuda_helper.cuh
├── trt_plugin_helper.hpp
├── trt_plugin.hpp
├── trt_serialize.hpp
├── ...
├── trt_ops.hpp
└── plugins
├── trt_cuda_helper.cu
├── trt_plugin.cpp
├── ...
├── trt_ops.cpp
└── trt_ops_kernel.cu
```
## Components
...
...
  - `mps`: The tools used to support MPS ops. **NOTE** that MPS support is **experimental**.
  - `mlu`: The MLU kernels used to support [Cambricon](https://www.cambricon.com/) devices.
  - `utils`: The kernels and utils of spconv.
- `onnxruntime`: **ONNX Runtime** support for custom ops. This has been deprecated; please try the latest custom ops in [MMDeploy](https://github.com/open-mmlab/mmdeploy).
  - `cpu`: CPU implementations of the supported ops.
- `parrots`: **Parrots** is a deep learning framework for model training and inference. Parrots custom ops are placed in this directory.
- `pytorch`: **PyTorch** custom ops are supported by binding C++ to Python with **pybind11**. The op implementations and binding code are placed in this directory.
  - `cuda`: This directory contains the CUDA kernel launchers, which feed the memory pointers of tensors to the CUDA kernels in `common/cuda`. The launchers provide the C++ interface to the CUDA implementations of the corresponding custom ops.
  - `cpu`: This directory contains the CPU implementations of the corresponding custom ops.
  - `mlu`: This directory contains the launchers of the MLU kernels.
  - `mps`: MPS op implementations and launchers.
- `tensorrt`: **TensorRT** support for custom ops. This has been deprecated; please try the latest custom ops in [MMDeploy](https://github.com/open-mmlab/mmdeploy).
  - `plugins`: This directory contains the implementations of the supported custom ops. Some ops may also use the shared CUDA kernels in `common/cuda`.
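The pybind11 binding pattern used in the `pytorch` directory can be sketched as follows. This is a minimal, hypothetical example: the module name `mmcv_ext_demo` and the op `scale_add` are invented for illustration, and plain `std::vector` stands in for `torch::Tensor`; the real bindings live in `pytorch/pybind.cpp`.

```cpp
// Hypothetical sketch of the pybind11 binding pattern; not actual MMCV code.
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <cstddef>
#include <vector>

// C++ implementation of an op (stands in for code in pytorch/ops.cpp).
std::vector<float> scale_add(const std::vector<float>& x, float scale,
                             float bias) {
  std::vector<float> out(x.size());
  for (std::size_t i = 0; i < x.size(); ++i) out[i] = x[i] * scale + bias;
  return out;
}

// PYBIND11_MODULE exposes the C++ function to Python, so that after the
// extension is built, Python code can call mmcv_ext_demo.scale_add(...).
PYBIND11_MODULE(mmcv_ext_demo, m) {
  m.def("scale_add", &scale_add, "Compute y = x * scale + bias",
        pybind11::arg("x"), pybind11::arg("scale"), pybind11::arg("bias"));
}
```

In the real code, each op registered this way accepts `torch::Tensor` arguments and dispatches to the device-specific implementation under `cuda`, `cpu`, `mlu`, or `mps`.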
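The kernel/launcher split described for `pytorch/cuda` can be sketched in plain C++ as below. This is a hypothetical stand-in, not MMCV code: `relu_kernel` plays the role of a device kernel in `common/cuda` that only ever sees raw memory pointers, and `relu_launcher` plays the role of a launcher in `pytorch/cuda` that unwraps the container and forwards the pointers (a real launcher would call `tensor.data_ptr<float>()` and configure the CUDA grid).

```cpp
#include <cstddef>
#include <vector>

// Stand-in for a kernel in common/cuda: it works only on raw pointers
// and an element count, with no knowledge of the tensor type.
void relu_kernel(const float* in, float* out, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i) out[i] = in[i] > 0.0f ? in[i] : 0.0f;
}

// Stand-in for a launcher in pytorch/cuda: it extracts the memory
// pointers from the container and feeds them to the kernel, providing
// the C++ interface to the op.
std::vector<float> relu_launcher(const std::vector<float>& input) {
  std::vector<float> output(input.size());
  relu_kernel(input.data(), output.data(), input.size());
  return output;
}
```

This separation is what lets the same kernels in `common/cuda` be shared by the PyTorch launchers and, historically, the TensorRT plugins.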