Commit 7e297b13 authored by Paul

Merge

parents 86ea5e91 aa7ff911
# Natural Language Processing Inference Examples
- [Python BERT-SQuAD](./python_bert_squad)
\ No newline at end of file
@@ -62,7 +62,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "!wget -nc https://github.com/onnx/models/raw/master/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx"
+    "!wget -nc https://github.com/onnx/models/raw/main/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx"
    ]
   },
   {
......
@@ -23,7 +23,7 @@ unzip uncased_L-12_H-768_A-12.zip
 ```
 5) Get BERT ONNX model (bertsquad-10.onnx):
 ```
-wget https://github.com/onnx/models/raw/master/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx
+wget https://github.com/onnx/models/raw/main/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx
 ```
 6) Run the inference, it will compile and run the model on three questions and small data provided in `inputs.json`:
 ```
......
-tensorflow==2.5.0
+tensorflow==2.7.2
 onnxruntime
 tokenizers
\ No newline at end of file
+# Modifications Copyright (C) 2022, Advanced Micro Devices, Inc. All rights reserved
 # Copyright 2018 The Google AI Language Team Authors.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
......
# Vision Inference Examples
- [CPP MNIST](./cpp_mnist)
- [Python Resnet50](./python_resnet50)
- [Python Super Resolution](./python_super_resolution)
- [Python NFNet](./python_nfnet)
- [Python U-Net](./python_unet)
- [Python 3D-UNet](./python_3dunet)
\ No newline at end of file
@@ -60,14 +60,14 @@ migraphx::quantize_int8(prog, targ, quant_opts);
 ## Compilation
 Network graphs saved in e.g. ONNX or protobuf format are not target-specific. In order to run inference, we must compile the graph into a target-specific program.
-Two options may be turned on (default for both is `false`) when compiling:
-- `bool offload_copy`: For targets with offloaded memory (such as the gpu), this will insert instructions during compilation to copy the input parameters to the offloaded memory and to copy the final result from the offloaded memory back to main memory.
-- `bool fast_math`: Optimize math functions to use faster approximate versions. There may be slight accuracy degradation when enabled.
+Two options may be turned on when compiling:
+- `set_offload_copy(bool value)`: For targets with offloaded memory (such as the gpu), this will insert instructions during compilation to copy the input parameters to the offloaded memory and to copy the final result from the offloaded memory back to main memory. Default value is `false` for offload_copy.
+- `set_fast_math(bool value)`: Optimize math functions to use faster approximate versions. There may be slight accuracy degradation when enabled. Default value is `true` for fast_math.
 The following snippet assumes `targ` has been set as "gpu", and will compile the program without the fast_math optimization.
 ```
-migraphx_compile_options comp_opts;
-comp_opts.offload_copy = true;
+migraphx::compile_options comp_opts;
+comp_opts.set_offload_copy();
 prog.compile(targ, comp_opts);
 ```
......
@@ -99,8 +99,8 @@ int main(int argc, char** argv)
     if(GPU)
     {
-        migraphx_compile_options comp_opts;
-        comp_opts.offload_copy = true;
+        migraphx::compile_options comp_opts;
+        comp_opts.set_offload_copy();
         prog.compile(targ, comp_opts);
     }
     else
......
@@ -10,6 +10,16 @@
    "https://github.com/naomifridman/Unet_Brain_tumor_segmentation"
   ]
  },
+ {
+  "cell_type": "code",
+  "execution_count": null,
+  "id": "09ceec31",
+  "metadata": {},
+  "outputs": [],
+  "source": [
+   "!pip install SimpleITK matplotlib scikit-image"
+  ]
+ },
  {
   "cell_type": "code",
   "execution_count": null,
......