# ONNX Runtime
## Installation
### Install from Packages
C++ package download: https://cancon.hpccube.com:65024/directlink/4/onnxruntime/dtk23.04/onnxruntime-lite-1.14.0+git81e68c5.abi0.dtk2304-linux_x86_64.run
Python package downloads:
- Python 3.7: https://cancon.hpccube.com:65024/directlink/4/onnxruntime/dtk23.04/onnxruntime-lite-1.14.0+git81e68c5.abi0.dtk2304-cp37-cp37m-manylinux2014_x86_64.whl
- Python 3.8: https://cancon.hpccube.com:65024/directlink/4/onnxruntime/dtk23.04/onnxruntime-lite-1.14.0+git81e68c5.abi0.dtk2304-cp38-cp38-manylinux2014_x86_64.whl
- Python 3.9: https://cancon.hpccube.com:65024/directlink/4/onnxruntime/dtk23.04/onnxruntime-lite-1.14.0+git81e68c5.abi0.dtk2304-cp39-cp39-manylinux2014_x86_64.whl

For other Python versions, contact us or build from source.
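As a rough sketch of installing the downloaded packages: the wheel filename below assumes Python 3.8, so pick the one matching your interpreter, and invoking the C++ .run file with `sh` is an assumption based on the usual self-extracting-installer convention:
```shell
# Python: install the wheel matching your Python version (cp38 shown)
pip install ./onnxruntime-lite-1.14.0+git81e68c5.abi0.dtk2304-cp38-cp38-manylinux2014_x86_64.whl

# C++: run the self-extracting installer (assumed invocation)
sh ./onnxruntime-lite-1.14.0+git81e68c5.abi0.dtk2304-linux_x86_64.run
```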
### Build from Source
#### Build Environment Setup
1. Pull the image:
```shell
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:onnxruntime-1.14.0-DTK-22.10.1  # tag pending update; a new image is being uploaded
```
2. Clone the source:
```shell
git clone http://10.0.50.24/dcutoolkit/deeplearing/onnxruntime.git
```
3. Build:
```shell
# activate the DTK environment variables
source /opt/dtk/env.sh
# unpack the bundled external dependencies and build support files
tar -zxvf ./cmake/external.tar.gz -C ./cmake/
tar -zxvf ./cmake/onnxruntimefiles.tar.gz -C /data/
# run the build script
sh OnnxRuntimeinstall.sh
```
4. Install the Python package:
```shell
# install the wheel produced by the build (wheel files use the .whl extension)
pip install ./build/Linux/Release/dist/onnxruntime*.whl
```
5. Build the files required for C++ projects:
```shell
# package the C++ headers and libraries from the build tree
cd ./build/Linux/Release/
make package
```
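After step 4, a quick sanity check confirms that the wheel imports and shows which execution providers this build registers (a minimal sketch; the provider names depend on the build):
```shell
python -c "import onnxruntime as ort; print(ort.get_available_providers())"
```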
## Package Naming
Example: onnxruntime-lite-1.14.0+git81e68c5.abi0.dtk2304-cp39-cp39-manylinux2014_x86_64.whl
- onnxruntime-lite: package name;
- 1.14.0: package version;
- git81e68c5: git commit hash;
- abi0: C++ ABI tag distinguishing CentOS from Rocky builds, abi0 for CentOS 7 and abi1 for Rocky 8; it can be determined dynamically with `echo '#include <string>' | gcc -x c++ -E -dM - | fgrep _GLIBCXX_USE_CXX11_ABI` (see the sketch after this list);
- dtk2304: DTK major version; query it dynamically from `/opt/dtk-23.04/.info/rocm_version` (see the sketch after this list);
- cp39-cp39: Python version;
- manylinux2014_x86_64: platform tag (OS and architecture);
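The two queries referenced above can be run as-is on the target machine (the dtk path assumes a standard dtk-23.04 install):
```shell
# prints _GLIBCXX_USE_CXX11_ABI: 0 -> abi0 (CentOS 7), 1 -> abi1 (Rocky 8)
echo '#include <string>' | gcc -x c++ -E -dM - | fgrep _GLIBCXX_USE_CXX11_ABI

# prints the DTK/ROCm version string behind the dtk2304 tag
cat /opt/dtk-23.04/.info/rocm_version
```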
## Version Query
- onnxruntime.\_\_version__: kept in sync with the upstream release; returns this package's version number, e.g. 1.14.0 (based on the official 1.14.0 with modifications);
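For example, queried from the shell:
```shell
python -c "import onnxruntime; print(onnxruntime.__version__)"  # e.g. 1.14.0
```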
## References
- [README_ORIGIN](README_ORIGIN.md)
<p align="center"><img width="50%" src="docs/images/ONNX_Runtime_logo_dark.png" /></p>
**ONNX Runtime is a cross-platform inference and training machine-learning accelerator**.
**ONNX Runtime inference** can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. [Learn more &rarr;](https://www.onnxruntime.ai/docs/#onnx-runtime-for-inferencing)
**ONNX Runtime training** can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. [Learn more &rarr;](https://www.onnxruntime.ai/docs/#onnx-runtime-for-training)
## Get Started & Resources
* **General Information**: [onnxruntime.ai](https://onnxruntime.ai)
* **Usage documentation and tutorials**: [onnxruntime.ai/docs](https://onnxruntime.ai/docs)
* **YouTube video tutorials**: [youtube.com/@ONNXRuntime](https://www.youtube.com/@ONNXRuntime)
* [**Upcoming Release Roadmap**](https://github.com/microsoft/onnxruntime/wiki/Upcoming-Release-Roadmap)
* **Companion sample repositories**:
- ONNX Runtime Inferencing: [microsoft/onnxruntime-inference-examples](https://github.com/microsoft/onnxruntime-inference-examples)
- ONNX Runtime Training: [microsoft/onnxruntime-training-examples](https://github.com/microsoft/onnxruntime-training-examples)
## Build Pipeline Status
|System|CPU|GPU|EPs|
|---|---|---|---|
|Windows|[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Windows%20CPU%20CI%20Pipeline?label=Windows+CPU)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=9)|[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Windows%20GPU%20CI%20Pipeline?label=Windows+GPU)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=10)|[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Windows%20GPU%20TensorRT%20CI%20Pipeline?label=Windows+GPU+TensorRT)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=47)|
|Linux|[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Linux%20CPU%20CI%20Pipeline?label=Linux+CPU)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=11)<br>[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Linux%20CPU%20Minimal%20Build%20E2E%20CI%20Pipeline?label=Linux+CPU+Minimal+Build)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=64)<br>[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Linux%20CPU%20x64%20NoContribops%20CI%20Pipeline?label=Linux+CPU+x64+No+Contrib+Ops)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=110)<br>[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/centos7_cpu?label=Linux+CentOS7)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=78)<br>[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/orttraining-linux-ci-pipeline?label=Linux+CPU+Training)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=86)|[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Linux%20GPU%20CI%20Pipeline?label=Linux+GPU)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=12)<br>[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Linux%20GPU%20TensorRT%20CI%20Pipeline?label=Linux+GPU+TensorRT)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=45)<br>[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/orttraining-distributed?label=Distributed+Training)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=140)<br>[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/orttraining-linux-gpu-ci-pipeline?label=Linux+GPU+Training)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=84)|[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Linux%20OpenVINO%20CI%20Pipeline?label=Linux+OpenVINO)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=55)|
|Mac|[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/MacOS%20CI%20Pipeline?label=MacOS+CPU)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=13)<br>[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/MacOS%20NoContribops%20CI%20Pipeline?label=MacOS+NoContribops)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=65)|||
|Android|||[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Android%20CI%20Pipeline?label=Android)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=53)|
|iOS|||[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/iOS%20CI%20Pipeline?label=iOS)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=134)|
|WebAssembly|||[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Windows%20WebAssembly%20CI%20Pipeline?label=WASM)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=161)|
## Data/Telemetry
Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the [privacy statement](docs/Privacy.md) for more details.
## Contributions and Feedback
We welcome contributions! Please see the [contribution guidelines](CONTRIBUTING.md).
For feature requests or bug reports, please file a [GitHub Issue](https://github.com/Microsoft/onnxruntime/issues).
For general discussion or questions, please use [GitHub Discussions](https://github.com/microsoft/onnxruntime/discussions).
## Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## License
This project is licensed under the [MIT License](LICENSE).