# ModelZoo / Resnet50_onnxruntime_migraphx

Commit 3e7e503b, authored Jan 07, 2025 by liucong
Commit message: "Update the README and some of the example code"
Parent: e7035bdf

6 changed files, with 36 additions and 28 deletions:

- Doc/Tutorial_Python.md (+8, -9)
- Python/Classifier.py (+1, -1)
- Python/Classifier_io_binding.py (+1, -1)
- Python/Classifier_run_with_ort.py (+1, -1)
- README.md (+24, -15)
- Resource/Configuration.xml (+1, -1)
## Doc/Tutorial_Python.md

@@ -67,20 +67,19 @@ def Preprocessing(pathOfImage):

```diff
 def ort_seg_dcu(model_path,image):
-    # create sess_options
-    sess_options = ort.SessionOptions()
-    # set graph optimization
-    sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_BASIC
-    # whether to enable profiling
-    sess_options.enable_profiling = False
-    dcu_session = ort.InferenceSession(model_path,sess_options,providers=['ROCMExecutionProvider'],)
+    # select the MIGraphX backend for inference
+    provider_options=[]
+    if staticInfer:
+        provider_options=[{'device_id':'0','migraphx_fp16_enable':'true','dynamic_model':'false'}]
+    if dynamicInfer:
+        provider_options=[{'device_id':'0','migraphx_fp16_enable':'true','dynamic_model':'true', 'migraphx_profile_max_shapes':'data:1x3x224x224'}]
+    dcu_session = ort.InferenceSession(model_path, providers=['MIGraphXExecutionProvider'], provider_options=provider_options)
     input_name=dcu_session.get_inputs()[0].name
     results = dcu_session.run(None, input_feed={input_name:image})
     scores=np.array(results[0])
-    # print("ort result.shape:",scores.shape)
     return scores
```
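The updated tutorial code selects MIGraphX provider options from the staticInfer/dynamicInfer flags before creating the session. That flag-to-options step is pure Python and can be factored into a standalone helper; this is a sketch, where `build_provider_options` is a hypothetical name and the option keys are copied verbatim from the diff:

```python
# Hypothetical helper mirroring the diff's flag-to-options logic.
# Option keys ('device_id', 'migraphx_fp16_enable', 'dynamic_model',
# 'migraphx_profile_max_shapes') are taken verbatim from the commit.
def build_provider_options(static_infer: bool, dynamic_infer: bool,
                           max_shape: str = 'data:1x3x224x224') -> list:
    # In the diff, the dynamicInfer branch is checked last and therefore
    # wins when both flags are set; checking it first here is equivalent.
    if dynamic_infer:
        # dynamic-shape inference: give MIGraphX the largest shape to profile
        return [{'device_id': '0',
                 'migraphx_fp16_enable': 'true',
                 'dynamic_model': 'true',
                 'migraphx_profile_max_shapes': max_shape}]
    if static_infer:
        # static-shape inference: input shape is fixed at session creation
        return [{'device_id': '0',
                 'migraphx_fp16_enable': 'true',
                 'dynamic_model': 'false'}]
    return []
```

The returned list would be passed as `provider_options` to `ort.InferenceSession` alongside `providers=['MIGraphXExecutionProvider']`, as the diff does.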
## Python/Classifier.py

@@ -62,7 +62,7 @@ def ort_seg_dcu(model_path,image,staticInfer,dynamicInfer):

```diff
 provider_options=[]
 if staticInfer:
-    provider_options=[{'device_id':'0','migraphx_fp16_enable':'true','dynamic_model':'false'}]
+    provider_options=[{'device_id':'0','migraphx_fp16_enable':'true'}]
 if dynamicInfer:
     provider_options=[{'device_id':'0','migraphx_fp16_enable':'true','dynamic_model':'true','migraphx_profile_max_shapes':'data:1x3x224x224'}]
```
## Python/Classifier_io_binding.py

@@ -60,7 +60,7 @@ def postprocess(scores,pathOfImage):

```diff
 def ort_seg_dcu(model_path,image):
-    provider_options=[{'device_id':'0','migraphx_fp16_enable':'true','dynamic_model':'false'}]
+    provider_options=[{'device_id':'0','migraphx_fp16_enable':'true'}]
     dcu_session = ort.InferenceSession(model_path, providers=['MIGraphXExecutionProvider'], provider_options=provider_options)
     output_data = np.empty(dcu_session.get_outputs()[0].shape).astype(np.float32)
```
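The unchanged context line in Classifier_io_binding.py preallocates a host buffer for the bound output. A minimal sketch of that pattern, with the output shape hard-coded to ResNet50's usual 1x1000 class scores as an assumption (the real script queries `dcu_session.get_outputs()[0].shape` instead):

```python
import numpy as np

# Assumed ResNet50 output shape (batch 1, 1000 classes); the real code
# reads this from the ONNX Runtime session rather than hard-coding it.
output_shape = (1, 1000)

# np.empty skips zero-initialization, which is fine here because IO binding
# overwrites the whole buffer with the inference result.
output_data = np.empty(output_shape).astype(np.float32)
```

As a side note, `np.empty(output_shape, dtype=np.float32)` would allocate the float32 buffer directly and avoid the extra copy that `.astype()` makes.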
## Python/Classifier_run_with_ort.py

@@ -62,7 +62,7 @@ def ort_seg_dcu(model_path,image,staticInfer,dynamicInfer):

```diff
 provider_options=[]
 if staticInfer:
-    provider_options=[{'device_id':'0','migraphx_fp16_enable':'true','dynamic_model':'false'}]
+    provider_options=[{'device_id':'0','migraphx_fp16_enable':'true'}]
 if dynamicInfer:
     provider_options=[{'device_id':'0','migraphx_fp16_enable':'true','dynamic_model':'true','migraphx_profile_max_shapes':'data:1x3x224x224'}]
```
## README.md

@@ -18,22 +18,16 @@ ResNet50 uses multiple residual blocks with residual connections to address vanishing or expl…

Updated text:

### Docker (Method 1)

Pull the image:

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/migraphx:4.3.0-ubuntu20.04-dtk24.04.1-py3.10
```

Create and start the container:

```
docker run --shm-size 16g --network=host --name=resnet50_onnxruntime -v /opt/hyhal:/opt/hyhal:ro --privileged --device=/dev/kfd --device=/dev/dri --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v $PWD/resnet50_onnxruntime:/home/resnet50_onnxruntime -it <Your Image ID> /bin/bash
# activate the dtk environment
source /opt/dtk/env.sh
```

### Dockerfile (Method 2)

```
cd ./docker
docker build --no-cache -t resnet50_onnxruntime:2.0 .
docker run --shm-size 16g --network=host --name=resnet50_onnxruntime -v /opt/hyhal:/opt/hyhal:ro --privileged --device=/dev/kfd --device=/dev/dri --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v $PWD/resnet50_onnxruntime:/home/resnet50_onnxruntime -it <Your Image ID> /bin/bash
```

## Dataset

...

@@ -59,39 +53,54 @@ data

Updated text:

-->

## Inference

### Python Inference

Inference runs on the DCU through the ONNXRuntime framework using the MIGraphX backend.

#### Environment setup

```
# enter the resnet50 onnxruntime project root
cd <path_to_resnet50_onnxruntime_migraphx>
# install dependencies
pip install -r ./Python/requirements.txt
```

#### Run the examples

The example programs cover three inference modes:

```
# enter the resnet50 onnxruntime project root
cd <path_to_resnet50_onnxruntime_migraphx>
# enter the example directory
cd Python/
# static inference, CPU input and output
python Classifier.py --staticInfer
# dynamic inference, CPU input and output
python Classifier.py --dynamicInfer
# static inference, GPU input, CPU output
python Classifier_run_with_ort.py --staticInfer
# dynamic inference, GPU input, CPU output
python Classifier_run_with_ort.py --dynamicInfer
# static inference, GPU input and output
python Classifier_io_binding.py
```

### C++ Inference

Inference runs on the DCU through the ONNXRuntime framework; the steps below build and run the C++ example.

#### Build the project

```
cd <path_to_resnet50_onnxruntime_migraphx>
rbuild build -d depend
```

#### Set environment variables

Add the dependency libraries to the LD_LIBRARY_PATH environment variable by appending the following to ~/.bashrc:

```
export LD_LIBRARY_PATH=<path_to_resnet50_onnxruntime_migraphx>/depend/lib64/:$LD_LIBRARY_PATH
```

Then run:

```
...
```
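The README's run commands pass `--staticInfer` and `--dynamicInfer` flags to the example scripts. How the scripts parse them is not shown in this commit; a plausible stdlib sketch, assuming they are plain boolean flags:

```python
import argparse

# Hypothetical argument setup matching the README's invocations; the real
# scripts' argument handling is not part of this diff.
parser = argparse.ArgumentParser(description='ResNet50 ONNX Runtime demo')
parser.add_argument('--staticInfer', action='store_true',
                    help='run static-shape inference')
parser.add_argument('--dynamicInfer', action='store_true',
                    help='run dynamic-shape inference')

# the equivalent of `python Classifier.py --staticInfer`
args = parser.parse_args(['--staticInfer'])
```

With `action='store_true'`, each flag defaults to False and flips to True only when present, which matches how the diff's code branches on `staticInfer` and `dynamicInfer`.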
## Resource/Configuration.xml

@@ -3,7 +3,7 @@

```diff
 <!-- classifier -->
 <Classifier>
-    <ModelPath>"../Resource/Models/resnet50-v2-7.onnx"</ModelPath>
+    <ModelPath>"../Resource/Models/resnet50_static.onnx"</ModelPath>
     <UseInt8>0</UseInt8> <!-- whether to use int8; not supported -->
     <UseFP16>0</UseFP16> <!-- whether to use FP16 -->
 </Classifier>
```
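The configuration above can be read with Python's stdlib ElementTree. A sketch that parses a Classifier block like the one in the diff; the `<Config>` root element is an assumption, since the diff shows only the `<Classifier>` fragment:

```python
import xml.etree.ElementTree as ET

# Fragment mirroring the post-commit Configuration.xml; the <Config> root
# is assumed -- the diff shows only the <Classifier> element.
xml_text = """
<Config>
  <Classifier>
    <ModelPath>"../Resource/Models/resnet50_static.onnx"</ModelPath>
    <UseInt8>0</UseInt8>
    <UseFP16>0</UseFP16>
  </Classifier>
</Config>
"""

root = ET.fromstring(xml_text)
clf = root.find('Classifier')
# the config stores the path wrapped in literal quotes, so strip them
model_path = clf.find('ModelPath').text.strip().strip('"')
use_fp16 = clf.find('UseFP16').text.strip() == '1'
```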