"vscode:/vscode.git/clone" did not exist on "5c01c25f849fb55359cbb0ae565ff93ee89de0ba"
Commit c0dd2583 authored by liucong

Update the GPT2 example project

parent ef65e64e
@@ -120,6 +120,19 @@ long unsigned int GPT2::Inference(const std::vector<long unsigned int> &input_id
 1. Run inference. The GPT-2 inference result, results, is of type std::vector< migraphx::argument > and contains a single output, so result = results[0]. result holds a total of input_id.size() * 22557 probability values, where input_id.size() is the length of the input sequence and 22557 is the number of words in the vocabulary.
+In addition, if you want to specify the output nodes, you can do so by passing the outputNames parameter to the eval() method:
+```c++
+...
+// Run inference
+std::vector<std::string> outputNames = {"output","304","532"};
+std::vector<migraphx::argument> results = net.eval(inputData, outputNames);
+...
+```
 ## Data post-processing
 After obtaining the model inference result, the data still needs the following post-processing:
......
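For illustration only, and not part of this commit: a minimal sketch of how the flat probability buffer described above (input_id.size() * 22557 values) might be turned into the next token id. It assumes the values have already been copied into a std::vector<float> named probs; the names GreedyNextToken, seqLen and vocabSize are made up for this sketch, and the example project's own post-processing code remains the reference.

```c++
#include <algorithm>
#include <cstddef>
#include <vector>

// The result buffer is laid out as [sequence length x vocabulary size];
// only the last row (the distribution predicted after the final input
// token) is needed to pick the next token greedily.
long unsigned int GreedyNextToken(const std::vector<float> &probs,
                                  std::size_t seqLen,
                                  std::size_t vocabSize = 22557)
{
    const float *lastRow = probs.data() + (seqLen - 1) * vocabSize;
    const float *maxIt = std::max_element(lastRow, lastRow + vocabSize);
    // The offset of the largest probability is the predicted token id.
    return static_cast<long unsigned int>(maxIt - lastRow);
}
```

Greedy argmax is only the simplest choice; top-k or temperature sampling could replace it without changing how the buffer is indexed.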
@@ -14,9 +14,20 @@ maxInput={"input":[1,1000]}
 # Load the model
 print("INFO: Parsing and compiling the model")
 model = migraphx.parse_onnx("../Resource/GPT2_shici.onnx", map_input_dims=maxInput)
-inputName=model.get_parameter_names()[0]
-inputShape=model.get_parameter_shapes()[inputName].lens()
-print("inputName:{0} \ninputShape:{1}".format(inputName,inputShape))
+# Get the model's input/output node information
+print("inputs:")
+inputs = model.get_inputs()
+for key,value in inputs.items():
+    print("{}:{}".format(key,value))
+print("outputs:")
+outputs = model.get_outputs()
+for key,value in outputs.items():
+    print("{}:{}".format(key,value))
+inputName="input"
+inputShape=inputs[inputName].lens()
 # Compile
 model.compile(t=migraphx.get_target("gpu"), device_id=0)
......
@@ -37,12 +37,6 @@ cd Python/
 pip install -r requirements.txt
 ```
-### Set dynamic shape mode
-```python
-export MIGRAPHX_DYNAMIC_SHAPE=1
-```
 ### Run the example
 ```python
@@ -93,12 +87,6 @@ export LD_LIBRARY_PATH=<path_to_gpt2_migraphx>/depend/lib64/:$LD_LIBRARY_PATH
 source ~/.bashrc
 ```
-### Set dynamic shape mode
-```
-export MIGRAPHX_DYNAMIC_SHAPE=1
-```
 ### Run the example
 ```python
......
@@ -40,10 +40,21 @@ ErrorCode GPT2::Initialize()
     net = migraphx::parse_onnx(modelPath, onnx_options);
     LOG_INFO(stdout,"succeed to load model: %s\n",GetFileName(modelPath).c_str());
-    // Get the model input properties
-    std::unordered_map<std::string, migraphx::shape> inputMap=net.get_parameter_shapes();
-    inputName=inputMap.begin()->first;
-    inputShape=inputMap.begin()->second;
+    // Get the model's input/output node information
+    std::cout<<"inputs:"<<std::endl;
+    std::unordered_map<std::string, migraphx::shape> inputs=net.get_inputs();
+    for(auto i:inputs)
+    {
+        std::cout<<i.first<<":"<<i.second<<std::endl;
+    }
+    std::cout<<"outputs:"<<std::endl;
+    std::unordered_map<std::string, migraphx::shape> outputs=net.get_outputs();
+    for(auto i:outputs)
+    {
+        std::cout<<i.first<<":"<<i.second<<std::endl;
+    }
+    inputName=inputs.begin()->first;
+    inputShape=inputs.begin()->second;
     // Set the model to GPU mode
     migraphx::target gpuTarget = migraphx::gpu::target{};
......
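For context, and again not part of this commit: a hedged sketch of how the inputName and inputShape obtained above might be used to feed token ids into the eval() call shown in the documentation hunk. The program reference type, the name-to-argument map, the migraphx::argument{shape, pointer} wrapping, the int64 element type and the 0 padding value are all assumptions modeled on the public MIGraphX C++ API; the example project's own Inference() implementation is the authoritative version.

```c++
#include <algorithm>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>
// MIGraphX headers are assumed to be included as in GPT2.cpp.

// Hypothetical helper: pack token ids into the shape reported by
// get_inputs() and run the program, mirroring the eval() call shown above.
std::vector<migraphx::argument> RunGPT2(migraphx::program &net,
                                        const std::string &inputName,
                                        const migraphx::shape &inputShape,
                                        const std::vector<long unsigned int> &input_ids)
{
    // Host buffer sized from the input shape (e.g. {1, 1000}); 0 is assumed
    // to be the padding id for the unused positions.
    std::vector<int64_t> buffer(inputShape.elements(), 0);
    std::copy(input_ids.begin(), input_ids.end(), buffer.begin());

    // Wrap the host buffer in an argument carrying the input shape and
    // hand it to eval() under the input's parameter name.
    std::unordered_map<std::string, migraphx::argument> inputData;
    inputData[inputName] = migraphx::argument{inputShape, buffer.data()};

    // results[0] holds the input_id.size() * 22557 probabilities
    // described earlier.
    return net.eval(inputData);
}
```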