Unverified Commit 7ecc003b authored by turneram, committed by GitHub

Added initial examples (#672)

* Added initial examples

* Added python example from wiki

* Edited readme

* Added cpp interface files

* Made changes to readmes

* Added jupyter notebook for tf2 ex, added readme for tf1 ex

* Added dockerfile

* Re-structured driver example

* Removed unnecessary files

* Changed include path

* Removed cpp_interface to rewrite

* Added example of parsing, loading, saving with C++ API

* Updated readme

* Small code change, altered docker invocation, formatting

* Formatting

* Added newline to end of dockerfile

* Formatting

* Formatting

* Added C++ API inference example program

* Formatting

* Added README to cpp inference example

* DeepCode suggested change

* DeepCode suggested change

* Redesign python inference example

* Address review comments

* Address review comments

* Address review comments
parent 4d46cbdb
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get clean && apt-get update && apt-get install -y locales
RUN locale-gen en_US.UTF-8
RUN update-locale LANG=en_US.UTF-8
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
# Install rocm
RUN apt-get update && apt-get install -y gnupg2 --no-install-recommends curl && \
    curl -sL http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | apt-key add - && \
    sh -c 'echo deb [arch=amd64] http://repo.radeon.com/rocm/apt/3.7/ xenial main > /etc/apt/sources.list.d/rocm.list'
RUN apt-get update && \
    apt-get install -y sudo git bash build-essential cmake rocm-dkms libpython3.6-dev python3-pip miopen-hip rocblas half
# Install rbuild
RUN pip3 install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
# Install MIGraphX from source
RUN mkdir -p /migraphx
RUN cd /migraphx && git clone --depth=1 --branch develop https://github.com/ROCmSoftwarePlatform/AMDMIGraphX src
RUN cd /migraphx && rbuild package --cxx /opt/rocm-3.7.0/llvm/bin/clang++ -d /migraphx/deps -B /migraphx/build -S /migraphx/src/ -DPYTHON_EXECUTABLE=/usr/bin/python3
RUN dpkg -i /migraphx/build/*.deb
WORKDIR /migraphx
# AMD MIGraphX Examples
## Description
This directory contains examples of common use cases for MIGraphX.
## Examples:
- [C++ Parse, Load, and Save Graph Programs](./cpp_parse_load_save)
- [C++ MNIST Inference](./cpp_api_inference)
- [Exporting Frozen Graphs in TF1](./export_frozen_graph_tf1)
- [Exporting Frozen Graphs in TF2](./export_frozen_graph_tf2)
- [MIGraphX Docker Container](./migraphx_docker)
- [MIGraphX Driver](./migraphx_driver)
- [Python Resnet50 Inference](./python_api_inference)
cmake_minimum_required(VERSION 3.5)
project (CAI)
set (CMAKE_CXX_STANDARD 14)
set (EXAMPLE mnist_inference)
list (APPEND CMAKE_PREFIX_PATH /opt/rocm/hip /opt/rocm)
find_package (migraphx)
message("source file: " ${EXAMPLE}.cpp " ---> bin: " ${EXAMPLE})
add_executable(${EXAMPLE} ${EXAMPLE}.cpp)
target_link_libraries(${EXAMPLE} migraphx::c)
# Performing Inference Using C++ API
## Description
This example demonstrates how to perform inference using the MIGraphX C++ API. The model used is a convolutional network pre-trained on the MNIST dataset, and inference is performed on a random digit selected from the test set.
## Content
- [Basic Setup](#Basic-Setup)
- [Quantization](#Quantization)
- [Compilation](#Compilation)
- [Preparing Input Data](#Preparing-Input-Data)
- [Evaluating Inputs and Handling Outputs](#Evaluating-Inputs-and-Handling-Outputs)
- [**Running this Example**](#Running-this-Example)
## Basic Setup
Before running inference, we must first instantiate a network graph and select a compilation target. See [this example](../cpp_parse_load_save) for more information about working with MIGraphX program objects.
```
migraphx::program prog;
migraphx::onnx_options onnx_opts;
prog = parse_onnx("../mnist-8.onnx", onnx_opts);

std::string target_str;
if(CPU)
    target_str = "cpu";
else if(GPU)
    target_str = "gpu";
else
    target_str = "ref";
migraphx::target targ = migraphx::target(target_str.c_str());
```
## Quantization
Optionally, graph programs may be quantized to fp16 or int8 precision to improve performance and memory usage.
##### Floating Point 16-bit Precision
To quantize using fp16, we simply add the following line:
```
migraphx::quantize_fp16(prog);
```
##### Integer 8-bit Precision
Int8 quantization requires calibration to accurately map ranges of floating point values onto integer values.
To calibrate prior to inference, one or more inputs can be supplied as follows:
```
std::vector<float> calib_dig;
// ... read in data
migraphx::quantize_int8_options quant_opts;
migraphx::program_parameters quant_params;
auto param_shapes = prog.get_parameter_shapes();
for(auto&& name : param_shapes.names())
{
    quant_params.add(name, migraphx::argument(param_shapes[name], calib_dig.data()));
}
quant_opts.add_calibration_data(quant_params);
migraphx::quantize_int8(prog, targ, quant_opts);
```
## Compilation
Network graphs saved in e.g. ONNX or protobuf format are not target-specific. In order to run inference, we must compile the graph into a target-specific program.
Two options may be turned on (default for both is `false`) when compiling:
- `bool offload_copy`: For targets with offloaded memory (such as the GPU), this inserts instructions during compilation to copy the input parameters to the offloaded memory and to copy the final result from the offloaded memory back to main memory.
- `bool fast_math`: Optimize math functions to use faster approximate versions. There may be slight accuracy degradation when enabled.

The following snippet assumes `targ` has been set to "gpu" and compiles the program with offload copying enabled but without the fast_math optimization.
```
migraphx_compile_options comp_opts;
comp_opts.offload_copy = true;
prog.compile(targ, comp_opts);
```
To compile a program with the default options, we simply call:
```
prog.compile(targ);
```
The targets "ref" and "cpu" both compile the program to run on the CPU. The target "ref" is primarily used for correctness checking. The target "cpu" is under ongoing development and has more optimizations enabled. Additionally, the "cpu" target requires MIGraphX to be built with the `-DMIGRAPHX_ENABLE_CPU=On` flag. Specifically,
```
CXX=/opt/rocm/llvm/bin/clang++ cmake -DMIGRAPHX_ENABLE_CPU=On ..
```
## Preparing Input Data
Now that we have a compiled program, the last step before running inference is to prepare the input data as program parameters.
The first step is to read in the data and store it in a `std::vector<float>`, which in this case we call `digit`.
Next, we create a program parameter for each graph input containing the data stored in `digit`:
```
migraphx::program_parameters prog_params;
auto param_shapes = prog.get_parameter_shapes();
for(auto&& name : param_shapes.names())
{
    prog_params.add(name, migraphx::argument(param_shapes[name], digit.data()));
}
```
## Evaluating Inputs and Handling Outputs
Now that everything is in place, the final step to run inference is to call:
```
auto outputs = prog.eval(prog_params);
```
The output layer(s) will be returned and stored in `outputs`. Our network for this example returns a single output layer with the shape (1, 10). The index of the largest value in this output layer corresponds to the digit that the model has predicted.
```
auto shape = outputs[0].get_shape();
auto lengths = shape.lengths();
auto num_results = std::accumulate(lengths.begin(), lengths.end(), 1, std::multiplies<size_t>());
float* results = reinterpret_cast<float*>(outputs[0].data());
float* max = std::max_element(results, results + num_results);
int answer = max - results;
```
Other networks may require alternative processing of outputs.
## Running this Example
This directory contains everything that is needed to perform inference on an MNIST digit. To create the executable:
```
$ mkdir build
$ cd build
$ cmake ..
$ make
```
There will now be an executable named `mnist_inference` in the `build` directory, which can be run with or without options. Executing it without any options prints the usage message and then runs on the default "ref" target, producing output like the following:
```
Usage: ./mnist_inference [options]
options:
-c, --cpu Compile for CPU
-g, --gpu Compile for GPU
-f, --fp16 FP16 Quantization
-i, --int8 Int8 Quantization
--cal Int8 Calibration ON
-p, --print Print Graph at Each Stage
Parsing ONNX model...
Compiling program for ref...
Model input:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@%=@@@@@@@@@
@@@@@@@@@@@@@0+. +@@@@@@@@
@@@@@@@@@@@0+ .. 0@@@@@@@
@@@@@@@@@@+ .00 #@@@@@@@
@@@@@@@@@% .0@0 #@@@@@@@
@@@@@@@@@- .*0@@% #@@@@@@@
@@@@@@@@@0+#@@@@@% #@@@@@@@
@@@@@@@@@@@@@@@@@* #@@@@@@@
@@@@@@@@@@@@@====- -@@@@@@@@
@@@@@@@@@@@#- .0@@@@@@@@
@@@@@@@@@#. .* =@@@@@@@@
@@@@@@@@% =#@@. %@@@@@@@
@@@@@@@+ -@@@- +* -#00@@@
@@@@@@+ =@@#- .#@@#* .@@@
@@@@@= %@#* =0@@@@@%--0@@@
@@@@@ .. =@@@@@@@@@@@@@@
@@@@@. *=0@@@@@@@@@@@@@@@
@@@@@@%+=@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Model evaluating input...
Inference complete
Inference time: 0.022ms
Randomly chosen digit: 2
Result from inference: 2
CORRECT
```
*Note: the actual digit selected and printed will not necessarily be the same as shown above.*
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <numeric>
#include <algorithm>
#include <chrono>
#include <random>
#include <migraphx/migraphx.hpp>
void read_nth_digit(const int, std::vector<float>&);
int main(int argc, char** argv)
{
    if(argc == 1)
    {
        std::cout << "Usage: " << argv[0] << " [options]" << std::endl
                  << "options:" << std::endl
                  << "\t -c, --cpu Compile for CPU" << std::endl
                  << "\t -g, --gpu Compile for GPU" << std::endl
                  << "\t -f, --fp16 FP16 Quantization" << std::endl
                  << "\t -i, --int8 Int8 Quantization" << std::endl
                  << "\t --cal Int8 Calibration ON" << std::endl
                  << "\t -p, --print Print Graph at Each Stage" << std::endl
                  << std::endl
                  << std::endl;
    }

    char** begin = argv + 1;
    char** end   = argv + argc;
    const bool CPU = (std::find(begin, end, std::string("-c")) != end) ||
                     std::find(begin, end, std::string("--cpu")) != end;
    const bool GPU = std::find(begin, end, std::string("-g")) != end ||
                     std::find(begin, end, std::string("--gpu")) != end;
    const bool FP16 = std::find(begin, end, std::string("-f")) != end ||
                      std::find(begin, end, std::string("--fp16")) != end;
    const bool INT8 = std::find(begin, end, std::string("-i")) != end ||
                      std::find(begin, end, std::string("--int8")) != end;
    const bool CALIB = std::find(begin, end, std::string("--cal")) != end;
    const bool PRINT = std::find(begin, end, std::string("-p")) != end ||
                       std::find(begin, end, std::string("--print")) != end;

    migraphx::program prog;
    migraphx::onnx_options onnx_opts;
    prog = parse_onnx("../mnist-8.onnx", onnx_opts);
    std::cout << "Parsing ONNX model..." << std::endl;
    if(PRINT)
        prog.print();
    std::cout << std::endl;

    std::string target_str;
    if(CPU)
        target_str = "cpu";
    else if(GPU)
        target_str = "gpu";
    else
        target_str = "ref";
    migraphx::target targ = migraphx::target(target_str.c_str());

    if(FP16)
    {
        migraphx::quantize_fp16(prog);
        std::cout << "Quantizing program for FP16..." << std::endl;
        if(PRINT)
            prog.print();
        std::cout << std::endl;
    }
    else if(INT8)
    {
        if(CALIB)
        {
            std::cout << "Calibration data: " << std::endl;
            std::vector<float> calib_dig;
            read_nth_digit(9, calib_dig);
            migraphx::quantize_int8_options quant_opts;
            migraphx::program_parameters quant_params;
            auto param_shapes = prog.get_parameter_shapes();
            for(auto&& name : param_shapes.names())
            {
                quant_params.add(name,
                                 migraphx::argument(param_shapes[name], calib_dig.data()));
            }
            quant_opts.add_calibration_data(quant_params);
            migraphx::quantize_int8(prog, targ, quant_opts);
        }
        else
        {
            migraphx::quantize_int8(prog, targ, migraphx::quantize_int8_options());
        }
        std::cout << "Quantizing program for INT8..." << std::endl;
        if(PRINT)
            prog.print();
        std::cout << std::endl;
    }

    if(GPU)
    {
        migraphx_compile_options comp_opts;
        comp_opts.offload_copy = true;
        prog.compile(targ, comp_opts);
    }
    else
    {
        prog.compile(targ);
    }
    std::cout << "Compiling program for " << target_str << "..." << std::endl;
    if(PRINT)
        prog.print();
    std::cout << std::endl;

    std::vector<float> digit;
    std::random_device rd;
    std::uniform_int_distribution<int> dist(0, 9);
    const int rand_digit = dist(rd);
    std::cout << "Model input: " << std::endl;
    read_nth_digit(rand_digit, digit);

    migraphx::program_parameters prog_params;
    auto param_shapes = prog.get_parameter_shapes();
    for(auto&& name : param_shapes.names())
    {
        prog_params.add(name, migraphx::argument(param_shapes[name], digit.data()));
    }

    std::cout << "Model evaluating input..." << std::endl;
    auto start   = std::chrono::high_resolution_clock::now();
    auto outputs = prog.eval(prog_params);
    auto stop    = std::chrono::high_resolution_clock::now();
    auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
    std::cout << "Inference complete" << std::endl;
    std::cout << "Inference time: " << elapsed.count() * 1e-3 << "ms" << std::endl;

    auto shape   = outputs[0].get_shape();
    auto lengths = shape.lengths();
    auto num_results =
        std::accumulate(lengths.begin(), lengths.end(), 1, std::multiplies<size_t>());
    float* results = reinterpret_cast<float*>(outputs[0].data());
    float* max     = std::max_element(results, results + num_results);
    int answer     = max - results;

    std::cout << std::endl
              << "Randomly chosen digit: " << rand_digit << std::endl
              << "Result from inference: " << answer << std::endl
              << std::endl
              << (answer == rand_digit ? "CORRECT" : "INCORRECT") << std::endl
              << std::endl;
    return 0;
}

void read_nth_digit(const int n, std::vector<float>& digit)
{
    const std::string SYMBOLS = "@0#%=+*-. ";
    std::ifstream file("../digits.txt");
    const int DIGITS = 10;
    const int HEIGHT = 28;
    const int WIDTH  = 28;
    if(!file.is_open())
    {
        return;
    }
    for(int d = 0; d < DIGITS; ++d)
    {
        for(int i = 0; i < HEIGHT * WIDTH; ++i)
        {
            unsigned char temp = 0;
            file.read((char*)&temp, sizeof(temp));
            if(d == n)
            {
                float data = temp / 255.0;
                digit.push_back(data);
                std::cout << SYMBOLS[(int)(data * 10) % 11];
                if((i + 1) % WIDTH == 0)
                    std::cout << std::endl;
            }
        }
    }
    std::cout << std::endl;
}
cmake_minimum_required(VERSION 3.5)
project (PLS)
set (CMAKE_CXX_STANDARD 14)
set (EXAMPLE parse_load_save)
list (APPEND CMAKE_PREFIX_PATH /opt/rocm/hip /opt/rocm)
find_package (migraphx)
message("source file: " ${EXAMPLE}.cpp " ---> bin: " ${EXAMPLE})
add_executable(${EXAMPLE} ${EXAMPLE}.cpp)
target_link_libraries(${EXAMPLE} migraphx::c)
# Parsing, Loading, and Saving MIGraphX Programs
## Description
This example demonstrates how to parse, load, and save a graph program using the MIGraphX C++ API.
## Parsing
Computation graphs that have been saved in a compatible serialized format, such as [ONNX](https://onnx.ai/get-started.html), can be read in by MIGraphX to create a runnable program.
```
migraphx::program p;
unsigned batch = 1; //Or read in as argument
migraphx::onnx_options options;
options.set_default_dim_value(batch);
p = parse_onnx(input_file, options);
```
## Saving
An instantiated `migraphx::program` object can be serialized to MessagePack (.msgpack) format and saved so that it can be loaded for future use.
A program can be saved with either of the following:
```
migraphx::program p = ... <migraphx::program>;
migraphx::save(p, output_file);
```
```
migraphx::program p = ... <migraphx::program>;
migraphx_file_options options;
options.format = "msgpack";
migraphx::save(p, output_file, options);
```
## Loading
Similarly, graphs that were previously parsed (and possibly compiled) and then saved in either MessagePack or JSON format can be loaded at a later time.
MessagePack is the default format, and can be loaded with either:
```
migraphx::program p;
p = migraphx::load(input_file);
```
```
migraphx::program p;
migraphx_file_options options;
options.format = "msgpack";
p = migraphx::load(input_file, options);
```
To load a program that has been saved in JSON format:
```
migraphx::program p;
migraphx_file_options options;
options.format = "json";
p = migraphx::load(input_file, options);
```
## Running the Example
The provided example [`parse_load_save.cpp`](./parse_load_save.cpp) implements each of these operations so that their outputs can be compared.
To compile and run the example from this directory:
```
$ mkdir build
$ cd build
$ cmake ..
$ make
```
There will now be an executable named `parse_load_save` with the following usage:
```
$ ./parse_load_save <input_file> [options]
options:
--parse onnx
--load json/msgpack
--save <output_file>
```
The program will then attempt to parse or load the graph file, print out its internal graph structure if successful, and optionally save the program to a given file name.
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <algorithm>
// MIGraphX C++ API
#include <migraphx/migraphx.hpp>
char* getCmdOption(char**, char**, const std::string&);
bool cmdOptionExists(char**, char**, const std::string&);
int main(int argc, char** argv)
{
    if(argc < 2)
    {
        std::cout << "Usage: " << argv[0] << " <input_file> "
                  << "[options]" << std::endl;
        std::cout << "options:" << std::endl;
        std::cout << "\t--parse onnx" << std::endl;
        std::cout << "\t--load json/msgpack" << std::endl;
        std::cout << "\t--save <output_file>" << std::endl;
        return 0;
    }

    char* parse_arg = getCmdOption(argv + 2, argv + argc, "--parse");
    char* load_arg  = getCmdOption(argv + 2, argv + argc, "--load");
    char* save_arg  = getCmdOption(argv + 2, argv + argc, "--save");

    const char* input_file = argv[1];
    migraphx::program p;
    if(cmdOptionExists(argv + 2, argv + argc, "--parse") ||
       !cmdOptionExists(argv + 2, argv + argc, "--load"))
    {
        std::cout << "Parsing ONNX File" << std::endl;
        migraphx::onnx_options options;
        p = parse_onnx(input_file, options);
    }
    else if(load_arg != nullptr)
    {
        std::cout << "Loading Graph File" << std::endl;
        std::string format = load_arg;
        if(format == "json")
        {
            migraphx_file_options options;
            options.format = "json";
            p = migraphx::load(input_file, options);
        }
        else if(format == "msgpack")
        {
            migraphx_file_options options;
            options.format = "msgpack";
            p = migraphx::load(input_file, options);
        }
        else
            p = migraphx::load(input_file);
    }
    else
    {
        std::cout << "Error: Incorrect Usage" << std::endl;
        std::cout << "Usage: " << argv[0] << " <input_file> "
                  << "[options]" << std::endl;
        std::cout << "options:" << std::endl;
        std::cout << "\t--parse onnx" << std::endl;
        std::cout << "\t--load json/msgpack" << std::endl;
        std::cout << "\t--save <output_file>" << std::endl;
        return 0;
    }

    std::cout << "Input Graph: " << std::endl;
    p.print();
    std::cout << std::endl;

    if(cmdOptionExists(argv + 2, argv + argc, "--save"))
    {
        std::cout << "Saving program..." << std::endl;
        std::string output_file;
        output_file = save_arg == nullptr ? "out" : save_arg;
        output_file.append(".msgpack");
        migraphx_file_options options;
        options.format = "msgpack";
        migraphx::save(p, output_file.c_str(), options);
        std::cout << "Program has been saved as ./" << output_file << std::endl;
    }
    return 0;
}

char* getCmdOption(char** begin, char** end, const std::string& option)
{
    char** itr = std::find(begin, end, option);
    if(itr != end && ++itr != end)
    {
        return *itr;
    }
    return nullptr;
}

bool cmdOptionExists(char** begin, char** end, const std::string& option)
{
    return std::find(begin, end, option) != end;
}
# Exporting Frozen Graphs in Tensorflow 1
## Description
This example demonstrates how to export a frozen graph protobuf in Tensorflow 1.X that can be used as input to MIGraphX. Specifically, this is an example of exporting a frozen protobuf of a tensorflow BERT model.
## How to Use this Example
In order to support BERT from the official [repository](https://github.com/google-research/bert), a `serving_input_fn` for the estimator must be implemented in [run_classifier.py](https://github.com/google-research/bert/blob/master/run_classifier.py). In this script, insert the following function after all libraries have been imported and the flags have been set up:
```
#...
flags.DEFINE_integer(
    "num_tpu_cores", 8,
    "Only used if `use_tpu` is True. Total number of TPU cores to use.")

# insert function here
def serving_input_fn():
    label_ids = tf.placeholder(tf.int32, [None], name='label_ids')
    input_ids = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='input_ids')
    input_mask = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='input_mask')
    segment_ids = tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name='segment_ids')
    input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn({
        'label_ids': label_ids,
        'input_ids': input_ids,
        'input_mask': input_mask,
        'segment_ids': segment_ids,
    }, default_batch_size=1)()
    return input_fn
```
Since we are passing dynamic-shape placeholders in `serving_input_fn`, the `default_batch_size` value determines the input shapes that end up in the exported graph; with `default_batch_size=1`, for example, `input_ids` is exported with shape (1, 128), as the `saved_model_cli` output below confirms.
For inference, we will focus on the "probabilities" layer's output, and we can name this layer by modifying the following [line](https://github.com/google-research/bert/blob/master/run_classifier.py#L608):
```
probabilities = tf.nn.softmax(logits, axis=-1, name="output")
```
Next, we need to export the saved model after training:
```
def main(_):
    # ...
    with tf.gfile.GFile(output_predict_file, "w") as writer:
        num_written_lines = 0
        tf.logging.info("***** Predict results *****")
        for (i, prediction) in enumerate(result):
            probabilities = prediction["probabilities"]
            if i >= num_actual_predict_examples:
                break
            output_line = "\t".join(
                str(class_probability)
                for class_probability in probabilities) + "\n"
            writer.write(output_line)
            num_written_lines += 1
        assert num_written_lines == num_actual_predict_examples

    # insert code here
    if FLAGS.do_train:  # optional to attach export to train flag
        estimator._export_to_tpu = False
        estimator.export_savedmodel('saved_models', serving_input_fn)
    # ...
```
Run BERT with the suggested arguments (change `--train_batch_size` to a size that fits on your GPU):
```
export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12
export GLUE_DIR=/path/to/glue

python run_classifier.py \
  --task_name=MRPC \
  --do_train=true \
  --do_eval=true \
  --data_dir=$GLUE_DIR/MRPC \
  --vocab_file=$BERT_BASE_DIR/vocab.txt \
  --bert_config_file=$BERT_BASE_DIR/bert_config.json \
  --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
  --max_seq_length=128 \
  --train_batch_size=32 \
  --learning_rate=2e-5 \
  --num_train_epochs=3.0 \
  --output_dir=/tmp/mrpc_output/
```
When running, search for the following lines in the output:
```
INFO:tensorflow:Restoring parameters from /tmp/model.ckpt-1603
INFO:tensorflow:SavedModel written to: saved_models/temp-1564086017/saved_model.pb
```
Note the ID that follows "temp-" (in this case, 1564086017). A directory named with this ID should now exist under saved_models/.
We also need to record the name of the output layer in BERT, which can be done by inspecting the saved model:
```
saved_model_cli show --dir saved_models/1564086017 --tag_set serve --signature_def serving_default
```
The output should look like this:
```
The given SavedModel SignatureDef contains the following input(s):
inputs['input_ids'] tensor_info:
dtype: DT_INT32
shape: (1, 128)
name: input_ids_1:0
inputs['input_mask'] tensor_info:
dtype: DT_INT32
shape: (1, 128)
name: input_mask_1:0
inputs['label_ids'] tensor_info:
dtype: DT_INT32
shape: (1)
name: label_ids_1:0
inputs['segment_ids'] tensor_info:
dtype: DT_INT32
shape: (1, 128)
name: segment_ids_1:0
The given SavedModel SignatureDef contains the following output(s):
outputs['probabilities'] tensor_info:
dtype: DT_FLOAT
shape: (1, 2)
name: loss/output:0
Method name is: tensorflow/serving/predict
```
Here the output name is given as "loss/output:0", but we will strip the ":0" from the end, as we are concerned only with the node name.
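As a quick illustration, the node name is everything before the final colon in the tensor name; a minimal snippet (the names here are taken from the `saved_model_cli` output above):
```
# Derive the node name expected by freeze_graph from the tensor name
# reported by saved_model_cli.
tensor_name = "loss/output:0"
node_name = tensor_name.rsplit(":", 1)[0]
print(node_name)  # prints: loss/output
```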
We will use TensorFlow's `freeze_graph` utility script and the information gathered above to create the frozen protobuf file.
```
CKPT_NUM=1603
MODEL_ID=1564086017
OUT_NAME=loss/output

cd /path/to/tensorflow
python tensorflow/python/tools/freeze_graph.py \
  --input_graph=/tmp/mrpc_model/graph.pbtxt \
  --input_binary=false \
  --input_checkpoint=/tmp/mrpc_model/model.ckpt-${CKPT_NUM} \
  --input_saved_model_dir=/path/to/bert/saved_models/${MODEL_ID} \
  --output_graph=/tmp/frozen_bert.pb \
  --output_node_names=${OUT_NAME}
```
The final output should be a frozen protobuf that is compatible with MIGraphX.
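As a sanity check, the frozen protobuf can be read back with the MIGraphX driver, mirroring the verification cell in the TF2 notebook elsewhere in these examples. A minimal sketch, assuming a standard ROCm install location for the driver and the output path used above:
```
import subprocess

# Ask the MIGraphX driver to parse the frozen graph; a successful read
# prints the internal graph representation.
driver = "/opt/rocm/bin/migraphx-driver"
process = subprocess.run([driver, "read", "/tmp/frozen_bert.pb"],
                         stdout=subprocess.PIPE,
                         universal_newlines=True)
print(process.stdout)
```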
{
"cells": [],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 4
}
# Exporting Frozen Graphs in Tensorflow 2
## Description
This example demonstrates how to export a frozen graph protobuf in Tensorflow 2.X that can be used as input for MIGraphX. The method for accomplishing this has changed from Tensorflow 1.X; please refer to [export_frozen_graph_tf1](../export_frozen_graph_tf1) if you are not yet using Tensorflow 2.
## How to Use this Example
If you do not already have Jupyter Notebooks installed, please refer to this [page](https://jupyter.org/install) for instructions.
Once Jupyter Notebooks is installed, you can navigate to this directory and issue the command:
```
$ jupyter notebook
```
From the browser window that is launched, click on `example.ipynb`.
You should now be able to run the notebook from your browser.
To use this with your own model, edit the first cell to include any additional libraries and change `MODEL_NAME` and `model` to the model of your choosing. Training and fine-tuning can also be performed before moving on to cells 2 and beyond.
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exporting Frozen Graphs in Tensorflow 2 \n",
"In order to use a trained model as input to MIGraphX, the model must be first be saved in a frozen graph format. This was accomplished in Tensorflow 1 by launching a graph in a tf.Session and then saving the session. However, Tensorflow has decided to deprecate Sessions in favor of functions and SavedModel format. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After importing the necessary libraries, the next step is to instantiate a model. For simplicity, in this example we will use a resnet50 architecture with pre-trained imagenet weights. These weights may also be trained or fine-tuned before freezing. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"tf.enable_eager_execution() #May not be required depending on tensorflow version\n",
"from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2\n",
"from tensorflow import keras\n",
"from tensorflow.keras import layers\n",
"\n",
"MODEL_NAME = \"resnet50\"\n",
"model = tf.keras.applications.ResNet50(weights=\"imagenet\")\n",
"model.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## SavedModel format\n",
"The simplest way to save a model is through saved\\_model.save()\n",
"\n",
"This will create an equivalent tensorflow program which can later be loaded for fine-tuning or inference, although it is not directly compatible with MIGraphX."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tf.saved_model.save(model, \"./Saved_Models/{}\".format(MODEL_NAME))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Convert to ConcreteFunction\n",
"To begin, we need to get the function equivalent of the model and then concretize the function to avoid retracing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"full_model = tf.function(lambda x: model(x))\n",
"full_model = full_model.get_concrete_function(\n",
" x=tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Freeze ConcreteFunction and Serialize\n",
"Since we are saving the graph for the purpose of inference, all variables can be made constant (i.e. \"frozen\").\n",
"\n",
"Next, we need to obtain a serialized GraphDef representation of the graph. \n",
"\n",
"\n",
"Optionally, the operators can be printed out layer by layer followed by the inputs and outputs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"frozen_func = convert_variables_to_constants_v2(full_model)\n",
"frozen_func.graph.as_graph_def()\n",
"\n",
"layers = [op.name for op in frozen_func.graph.get_operations()]\n",
"print(\"-\" * 50)\n",
"print(\"Frozen model layers: \")\n",
"for layer in layers:\n",
" print(layer)\n",
"\n",
"print(\"-\" * 50)\n",
"print(\"Frozen model inputs: \")\n",
"print(frozen_func.inputs)\n",
"print(\"Frozen model outputs: \")\n",
"print(frozen_func.outputs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Save Frozen Graph as Protobuf\n",
"Finally, we can save to hard drive, and now the frozen graph will be stored as `./frozen_models/<MODEL_NAME>_frozen_graph.pb`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tf.io.write_graph(graph_or_graph_def=frozen_func.graph,\n",
" logdir=\"./frozen_models\",\n",
" name=\"{}_frozen_graph.pb\".format(MODEL_NAME),\n",
" as_text=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Assuming MIGraphX has already been built and installed on your system, the driver can be used to verify that the frozen graph has been correctly exported. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import subprocess\n",
"driver = \"/opt/rocm/bin/migraphx-driver\"\n",
"command = \"read\"\n",
"model_path = \"./frozen_models/{}_frozen_graph.pb\".format(MODEL_NAME)\n",
"process = subprocess.run([driver, command, model_path], \n",
" stdout=subprocess.PIPE, \n",
" universal_newlines=True)\n",
"\n",
"print(process.stdout)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "tensorflow",
"language": "python",
"name": "tensorflow"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
# MIGraphX Dockerfile
Instructions for building and running the MIGraphX docker container can be found [here](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/blob/develop/README.md#using-docker) in this project's top level README.
# MIGraphX Driver
## Description
The MIGraphX driver is a tool that allows you to utilize many of the core functions of MIGraphX without having to write your own program.
## How to Use this Example
The MIGraphX driver is installed with MIGraphX and can be found in `/opt/rocm/bin/migraphx-driver`, or in `AMDMIGraphX/build/bin/migraphx-driver` after building the source code.
See below for a comprehensive list of commands and option arguments, as well as some usage examples.
### Commands
| Command | Description |
| --- | ---|
| op | When followed by the option --list or -l, prints all operators of MIGraphX |
| params | Prints the input and output parameter shapes |
| run | Compiles, allocates parameters, evaluates, and prints input graph |
| read | Loads and prints input graph |
| compile | Compiles and prints input graph |
| verify | Runs reference and GPU implementations and checks outputs for consistency |
| perf | Compiles and runs input graph then prints performance report |
### Options
| Option | Description |
| --- | --- |
| --help \| -h | Show help |
| --model <resnet50\|inceptionv3\|alexnet> | Loads one of the three default models |
| --onnx | Load file as onnx graph |
| --tf | Load file as a tensorflow graph |
| --migraphx | Load file as a migraphx graph |
| --migraphx-json | Load file as a migraphx JSON graph |
| --nhwc | Treat tensorflow format as nhwc |
| --nchw | Treat tensorflow format as nchw |
| --skip-unknown-operators | Skip unknown operators when parsing and continue to parse |
| --trim \| -t | Trim instructions from the end |
| --optimize \| -O | Optimize when reading |
| --graphviz \| -g | Print out a graphviz representation |
| --brief | Make the output brief |
| --cpp | Print out the program as cpp program |
| --json | Print out program as json |
| --text | Print out program in text format |
| --binary | Print out program in binary format |
| --output \| -o | Output to file |
| --fill0 | Fill parameter with 0s |
| --fill1 | Fill parameter with 1s |
| --gpu | Compile on the gpu |
| --cpu | Compile on the cpu |
| --ref | Compile on the reference implementation |
| --enable-offload-copy | Enable implicit offload copying |
| --disable-fast-math | Disable fast math optimization |
| --fp16 | Quantize for fp16 |
| --int8 | Quantize for int8 |
| --tolerance | Tolerance for errors |
| --per-instruction \| -i | Verify each instruction |
| --reduce \| -r | Reduce program and verify |
| --iterations \| -n | Number of iterations to run for perf report |
| --list \| -l | List all the operators of MIGraphX |
## Usage Examples
The examples below supply a simple MNIST ConvNet as the input graph. Models of higher complexity will have considerably larger outputs in most cases.
##### Example: op
```
$ /opt/rocm/bin/migraphx-driver op --list
```
<details>
<summary>View output</summary>
```
@literal
@param
@return
abs
acos
acosh
add
argmax
argmin
as_shape
asin
asinh
atan
atanh
batch_norm_inference
broadcast
capture
ceil
check_context::migraphx::version_1::gpu::context
clip
concat
contiguous
convert
convolution
cos
cosh
deconvolution
div
dot
elu
equal
erf
exp
flatten
floor
gather
gpu::abs
gpu::acos
gpu::acosh
gpu::add
gpu::add_clip
gpu::add_gelu
gpu::add_gelu_new
gpu::add_relu
gpu::add_tanh
gpu::argmax
gpu::argmin
gpu::asin
gpu::asinh
gpu::atan
gpu::atanh
gpu::batch_norm_inference
gpu::ceil
gpu::clip
gpu::concat
gpu::contiguous
gpu::conv_bias
gpu::conv_bias_relu
gpu::convert
gpu::convolution
gpu::cos
gpu::cosh
gpu::deconv
gpu::div
gpu::elu
gpu::equal
gpu::erf
gpu::exp
gpu::floor
gpu::gather
gpu::gelu
gpu::gelu_new
gpu::gemm
gpu::greater
gpu::int8_conv_pack
gpu::int8_gemm_pack_a
gpu::int8_gemm_pack_b
gpu::layernorm
gpu::leaky_relu
gpu::less
gpu::log
gpu::logsoftmax
gpu::lrn
gpu::max
gpu::min
gpu::mul
gpu::mul_add
gpu::mul_add_relu
gpu::pad
gpu::pooling
gpu::pow
gpu::prelu
gpu::quant_convolution
gpu::quant_gemm
gpu::recip
gpu::record_event
gpu::reduce_max
gpu::reduce_mean
gpu::reduce_min
gpu::reduce_prod
gpu::reduce_sum
gpu::relu
gpu::rnn_var_sl_last_output
gpu::rnn_var_sl_shift_output
gpu::rnn_var_sl_shift_sequence
gpu::round
gpu::rsqrt
gpu::set_stream
gpu::sigmoid
gpu::sign
gpu::sin
gpu::sinh
gpu::softmax
gpu::sqdiff
gpu::sqrt
gpu::sub
gpu::tan
gpu::tanh
gpu::triadd
gpu::triadd_clip
gpu::triadd_relu
gpu::triadd_sigmoid
gpu::triadd_tanh
gpu::wait_event
greater
gru
hip::allocate
hip::copy
hip::copy_from_gpu
hip::copy_to_gpu
hip::hip_allocate_memory
hip::hip_copy_literal
hip::sync_device
identity
im2col
leaky_relu
less
load
log
logsoftmax
lrn
lstm
max
min
mul
multibroadcast
neg
outline
pad
pooling
pow
prelu
quant_convolution
quant_dot
recip
reduce_max
reduce_mean
reduce_min
reduce_prod
reduce_sum
ref::batch_norm_inference
ref::convolution
ref::deconvolution
ref::dot
ref::elu
ref::im2col
ref::leaky_relu
ref::logsoftmax
ref::lrn
ref::op
ref::pad
ref::pooling_average
ref::pooling_max
ref::quant_convolution
ref::rnn_var_sl_last_output
ref::softmax
relu
reshape
rnn
rnn_last_cell_output
rnn_last_hs_output
rnn_var_sl_last_output
rnn_var_sl_shift_output
rnn_var_sl_shift_sequence
round
rsqrt
scalar
sigmoid
sign
sin
sinh
slice
softmax
sqdiff
sqrt
squeeze
sub
tan
tanh
transpose
undefined
unknown:
unsqueeze
```
</details>
<br/><br/>
##### Example: params
```
$ /opt/rocm/bin/migraphx-driver params simple_graph.pb
```
<details>
<summary>View output</summary>
```
Reading: simple_graph.pb
x: float_type, {1, 28, 28}, {784, 28, 1}
```
</details>
<br/><br/>
##### Example: run (onnx file input)
```
$ /opt/rocm/bin/migraphx-driver run --onnx simple_graph.onnx
```
<details>
<summary>View output</summary>
```
Compiling ...
Reading: simple_graph.onnx
@0 = check_context::migraphx::version_1::gpu::context -> float_type, {}, {}
@1 = hip::hip_allocate_memory[shape=float_type, {256}, {1},id=scratch] -> float_type, {256}, {1}
@2 = hip::hip_copy_literal[id=@literal:1] -> float_type, {784, 128}, {128, 1}
x:0 = @param:x:0 -> float_type, {1, 28, 28}, {784, 28, 1}
@3 = reshape[dims={-1, 784}](x:0) -> float_type, {1, 784}, {784, 1}
@4 = load[offset=0,end=512](@1) -> float_type, {1, 128}, {128, 1}
@5 = gpu::gemm[alpha=1,beta=0](@3,@2,@4) -> float_type, {1, 128}, {128, 1}
@6 = hip::hip_copy_literal[id=@literal:0] -> float_type, {128}, {1}
@7 = hip::hip_copy_literal[id=@literal:2] -> float_type, {10}, {1}
@8 = hip::hip_copy_literal[id=@literal:3] -> float_type, {128, 10}, {10, 1}
@9 = multibroadcast[output_lens={1, 128}](@6) -> float_type, {1, 128}, {0, 1}
@10 = load[offset=512,end=1024](@1) -> float_type, {1, 128}, {128, 1}
@11 = gpu::add_relu(@5,@9,@10) -> float_type, {1, 128}, {128, 1}
@12 = load[offset=0,end=40](@1) -> float_type, {1, 10}, {10, 1}
@13 = gpu::gemm[alpha=1,beta=0](@11,@8,@12) -> float_type, {1, 10}, {10, 1}
@14 = multibroadcast[output_lens={1, 10}](@7) -> float_type, {1, 10}, {0, 1}
@15 = load[offset=40,end=80](@1) -> float_type, {1, 10}, {10, 1}
@16 = gpu::add(@13,@14,@15) -> float_type, {1, 10}, {10, 1}
#output_0 = @param:#output_0 -> float_type, {1, 10}, {10, 1}
@17 = gpu::softmax[axis=1](@16,#output_0) -> float_type, {1, 10}, {10, 1}
@18 = @return(@17)
Allocating params ...
@0 = check_context::migraphx::version_1::gpu::context -> float_type, {}, {}
@1 = hip::hip_allocate_memory[shape=float_type, {256}, {1},id=scratch] -> float_type, {256}, {1}
@2 = hip::hip_copy_literal[id=@literal:1] -> float_type, {784, 128}, {128, 1}
x:0 = @param:x:0 -> float_type, {1, 28, 28}, {784, 28, 1}
@3 = reshape[dims={-1, 784}](x:0) -> float_type, {1, 784}, {784, 1}
@4 = load[offset=0,end=512](@1) -> float_type, {1, 128}, {128, 1}
@5 = gpu::gemm[alpha=1,beta=0](@3,@2,@4) -> float_type, {1, 128}, {128, 1}
@6 = hip::hip_copy_literal[id=@literal:0] -> float_type, {128}, {1}
@7 = hip::hip_copy_literal[id=@literal:2] -> float_type, {10}, {1}
@8 = hip::hip_copy_literal[id=@literal:3] -> float_type, {128, 10}, {10, 1}
@9 = multibroadcast[output_lens={1, 128}](@6) -> float_type, {1, 128}, {0, 1}
@10 = load[offset=512,end=1024](@1) -> float_type, {1, 128}, {128, 1}
@11 = gpu::add_relu(@5,@9,@10) -> float_type, {1, 128}, {128, 1}
@12 = load[offset=0,end=40](@1) -> float_type, {1, 10}, {10, 1}
@13 = gpu::gemm[alpha=1,beta=0](@11,@8,@12) -> float_type, {1, 10}, {10, 1}
@14 = multibroadcast[output_lens={1, 10}](@7) -> float_type, {1, 10}, {0, 1}
@15 = load[offset=40,end=80](@1) -> float_type, {1, 10}, {10, 1}
@16 = gpu::add(@13,@14,@15) -> float_type, {1, 10}, {10, 1}
#output_0 = @param:#output_0 -> float_type, {1, 10}, {10, 1}
@17 = gpu::softmax[axis=1](@16,#output_0) -> float_type, {1, 10}, {10, 1}
@18 = @return(@17)
```
</details>
<br/><br/>
##### Example: read
```
$ /opt/rocm/bin/migraphx-driver read simple_graph.pb
```
<details>
<summary>View output</summary>
```
Reading: simple_graph.pb
@0 = @literal{0.0136018, -0.0839988, 0.0375392, 0.0613085, -0.125795, 0.176185, 0.0761055, 0.0093384, -0.110057, -0.170587} -> float_type, {10}, {1}
@1 = @literal{ ... } -> float_type, {128, 10}, {10, 1}
@2 = @literal{ ... } -> float_type, {128}, {1}
@3 = @literal{ ... } -> float_type, {784, 128}, {128, 1}
@4 = @literal{-1, 784} -> int32_type, {2}, {1}
x = @param:x -> float_type, {1, 28, 28}, {784, 28, 1}
@5 = reshape[dims={-1, 784}](x) -> float_type, {1, 784}, {784, 1}
@6 = identity(@3) -> float_type, {784, 128}, {128, 1}
@7 = dot[alpha=1,beta=1](@5,@6) -> float_type, {1, 128}, {128, 1}
@8 = identity(@2) -> float_type, {128}, {1}
@9 = broadcast[axis=1,dims={1, 128}](@8) -> float_type, {1, 128}, {0, 1}
@10 = add(@7,@9) -> float_type, {1, 128}, {128, 1}
@11 = relu(@10) -> float_type, {1, 128}, {128, 1}
@12 = identity(@1) -> float_type, {128, 10}, {10, 1}
@13 = dot[alpha=1,beta=1](@11,@12) -> float_type, {1, 10}, {10, 1}
@14 = identity(@0) -> float_type, {10}, {1}
@15 = broadcast[axis=1,dims={1, 10}](@14) -> float_type, {1, 10}, {0, 1}
@16 = add(@13,@15) -> float_type, {1, 10}, {10, 1}
@17 = softmax[axis=1](@16) -> float_type, {1, 10}, {10, 1}
@18 = identity(@17) -> float_type, {1, 10}, {10, 1}
```
</details>
<br/><br/>
##### Example: compile (on GPU, quantized for fp16)
```
$ /opt/rocm/bin/migraphx-driver compile --gpu --fp16 simple_graph.pb
```
<details>
<summary>View output</summary>
```
Compiling ...
Reading: simple_graph.pb
@0 = check_context::migraphx::version_1::gpu::context -> float_type, {}, {}
@1 = hip::hip_allocate_memory[shape=float_type, {456}, {1},id=scratch] -> float_type, {456}, {1}
@2 = hip::hip_copy_literal[id=@literal:0] -> half_type, {784, 128}, {128, 1}
@3 = load[offset=256,end=1824](@1) -> half_type, {1, 28, 28}, {784, 28, 1}
x = @param:x -> float_type, {1, 28, 28}, {784, 28, 1}
@4 = gpu::convert[target_type=1](x,@3) -> half_type, {1, 28, 28}, {784, 28, 1}
@5 = reshape[dims={-1, 784}](@4) -> half_type, {1, 784}, {784, 1}
@6 = load[offset=0,end=256](@1) -> half_type, {1, 128}, {128, 1}
@7 = gpu::gemm[alpha=1,beta=0](@5,@2,@6) -> half_type, {1, 128}, {128, 1}
@8 = hip::hip_copy_literal[id=@literal:2] -> half_type, {128, 10}, {10, 1}
@9 = hip::hip_copy_literal[id=@literal:1] -> half_type, {128}, {1}
@10 = hip::hip_copy_literal[id=@literal:3] -> half_type, {10}, {1}
@11 = load[offset=256,end=512](@1) -> half_type, {1, 128}, {128, 1}
@12 = broadcast[axis=1,dims={1, 128}](@9) -> half_type, {1, 128}, {0, 1}
@13 = gpu::add_relu(@7,@12,@11) -> half_type, {1, 128}, {128, 1}
@14 = load[offset=0,end=20](@1) -> half_type, {1, 10}, {10, 1}
@15 = gpu::gemm[alpha=1,beta=0](@13,@8,@14) -> half_type, {1, 10}, {10, 1}
@16 = broadcast[axis=1,dims={1, 10}](@10) -> half_type, {1, 10}, {0, 1}
@17 = load[offset=20,end=40](@1) -> half_type, {1, 10}, {10, 1}
@18 = gpu::add(@15,@16,@17) -> half_type, {1, 10}, {10, 1}
@19 = load[offset=0,end=20](@1) -> half_type, {1, 10}, {10, 1}
@20 = gpu::softmax[axis=1](@18,@19) -> half_type, {1, 10}, {10, 1}
output = @param:output -> float_type, {1, 10}, {10, 1}
@21 = gpu::convert[target_type=2](@20,output) -> float_type, {1, 10}, {10, 1}
```
</details>
<br/><br/>
##### Example: verify
```
$ /opt/rocm/bin/migraphx-driver verify simple_graph.pb
```
<details>
<summary>View output</summary>
```
Reading: simple_graph.pb
@0 = @literal{0.0136018, -0.0839988, 0.0375392, 0.0613085, -0.125795, 0.176185, 0.0761055, 0.0093384, -0.110057, -0.170587} -> float_type, {10}, {1}
@1 = @literal{ ... } -> float_type, {128, 10}, {10, 1}
@2 = @literal{ ... } -> float_type, {128}, {1}
@3 = @literal{ ... } -> float_type, {784, 128}, {128, 1}
@4 = @literal{-1, 784} -> int32_type, {2}, {1}
x = @param:x -> float_type, {1, 28, 28}, {784, 28, 1}
@5 = reshape[dims={-1, 784}](x) -> float_type, {1, 784}, {784, 1}
@6 = identity(@3) -> float_type, {784, 128}, {128, 1}
@7 = dot[alpha=1,beta=1](@5,@6) -> float_type, {1, 128}, {128, 1}
@8 = identity(@2) -> float_type, {128}, {1}
@9 = broadcast[axis=1,dims={1, 128}](@8) -> float_type, {1, 128}, {0, 1}
@10 = add(@7,@9) -> float_type, {1, 128}, {128, 1}
@11 = relu(@10) -> float_type, {1, 128}, {128, 1}
@12 = identity(@1) -> float_type, {128, 10}, {10, 1}
@13 = dot[alpha=1,beta=1](@11,@12) -> float_type, {1, 10}, {10, 1}
@14 = identity(@0) -> float_type, {10}, {1}
@15 = broadcast[axis=1,dims={1, 10}](@14) -> float_type, {1, 10}, {0, 1}
@16 = add(@13,@15) -> float_type, {1, 10}, {10, 1}
@17 = softmax[axis=1](@16) -> float_type, {1, 10}, {10, 1}
@18 = identity(@17) -> float_type, {1, 10}, {10, 1}
@0 = @literal{0.0136018, -0.0839988, 0.0375392, 0.0613085, -0.125795, 0.176185, 0.0761055, 0.0093384, -0.110057, -0.170587} -> float_type, {10}, {1}
@1 = @literal{ ... } -> float_type, {128, 10}, {10, 1}
@2 = @literal{ ... } -> float_type, {128}, {1}
@3 = @literal{ ... } -> float_type, {784, 128}, {128, 1}
@4 = @literal{-1, 784} -> int32_type, {2}, {1}
x = @param:x -> float_type, {1, 28, 28}, {784, 28, 1}
@5 = reshape[dims={-1, 784}](x) -> float_type, {1, 784}, {784, 1}
@6 = identity(@3) -> float_type, {784, 128}, {128, 1}
@7 = dot[alpha=1,beta=1](@5,@6) -> float_type, {1, 128}, {128, 1}
@8 = identity(@2) -> float_type, {128}, {1}
@9 = broadcast[axis=1,dims={1, 128}](@8) -> float_type, {1, 128}, {0, 1}
@10 = add(@7,@9) -> float_type, {1, 128}, {128, 1}
@11 = relu(@10) -> float_type, {1, 128}, {128, 1}
@12 = identity(@1) -> float_type, {128, 10}, {10, 1}
@13 = dot[alpha=1,beta=1](@11,@12) -> float_type, {1, 10}, {10, 1}
@14 = identity(@0) -> float_type, {10}, {1}
@15 = broadcast[axis=1,dims={1, 10}](@14) -> float_type, {1, 10}, {0, 1}
@16 = add(@13,@15) -> float_type, {1, 10}, {10, 1}
@17 = softmax[axis=1](@16) -> float_type, {1, 10}, {10, 1}
@18 = identity(@17) -> float_type, {1, 10}, {10, 1}
@0 = @literal{0.0136018, -0.0839988, 0.0375392, 0.0613085, -0.125795, 0.176185, 0.0761055, 0.0093384, -0.110057, -0.170587} -> float_type, {10}, {1}
@1 = @literal{ ... } -> float_type, {128, 10}, {10, 1}
@2 = @literal{ ... } -> float_type, {128}, {1}
@3 = @literal{ ... } -> float_type, {784, 128}, {128, 1}
x = @param:x -> float_type, {1, 28, 28}, {784, 28, 1}
@4 = ref::reshape[dims={-1, 784}](x) -> float_type, {1, 784}, {784, 1}
@5 = ref::identity(@3) -> float_type, {784, 128}, {128, 1}
@6 = ref::dot[alpha=1,beta=1](@4,@5) -> float_type, {1, 128}, {128, 1}
@7 = ref::identity(@2) -> float_type, {128}, {1}
@8 = ref::broadcast[axis=1,dims={1, 128}](@7) -> float_type, {1, 128}, {0, 1}
@9 = ref::contiguous(@8) -> float_type, {1, 128}, {128, 1}
@10 = ref::add(@6,@9) -> float_type, {1, 128}, {128, 1}
@11 = ref::relu(@10) -> float_type, {1, 128}, {128, 1}
@12 = ref::identity(@1) -> float_type, {128, 10}, {10, 1}
@13 = ref::dot[alpha=1,beta=1](@11,@12) -> float_type, {1, 10}, {10, 1}
@14 = ref::identity(@0) -> float_type, {10}, {1}
@15 = ref::broadcast[axis=1,dims={1, 10}](@14) -> float_type, {1, 10}, {0, 1}
@16 = ref::contiguous(@15) -> float_type, {1, 10}, {10, 1}
@17 = ref::add(@13,@16) -> float_type, {1, 10}, {10, 1}
@18 = ref::softmax[axis=1](@17) -> float_type, {1, 10}, {10, 1}
@19 = ref::identity(@18) -> float_type, {1, 10}, {10, 1}
@0 = check_context::migraphx::version_1::gpu::context -> float_type, {}, {}
@1 = hip::hip_allocate_memory[shape=float_type, {256}, {1},id=scratch] -> float_type, {256}, {1}
@2 = hip::hip_copy_literal[id=@literal:3] -> float_type, {784, 128}, {128, 1}
x = @param:x -> float_type, {1, 28, 28}, {784, 28, 1}
@3 = load[offset=0,end=512](@1) -> float_type, {1, 128}, {128, 1}
@4 = reshape[dims={-1, 784}](x) -> float_type, {1, 784}, {784, 1}
@5 = gpu::gemm[alpha=1,beta=0](@4,@2,@3) -> float_type, {1, 128}, {128, 1}
@6 = hip::hip_copy_literal[id=@literal:1] -> float_type, {128, 10}, {10, 1}
@7 = hip::hip_copy_literal[id=@literal:2] -> float_type, {128}, {1}
@8 = hip::hip_copy_literal[id=@literal:0] -> float_type, {10}, {1}
@9 = load[offset=512,end=1024](@1) -> float_type, {1, 128}, {128, 1}
@10 = broadcast[axis=1,dims={1, 128}](@7) -> float_type, {1, 128}, {0, 1}
@11 = gpu::add_relu(@5,@10,@9) -> float_type, {1, 128}, {128, 1}
@12 = load[offset=40,end=80](@1) -> float_type, {1, 10}, {10, 1}
@13 = gpu::gemm[alpha=1,beta=0](@11,@6,@12) -> float_type, {1, 10}, {10, 1}
@14 = load[offset=0,end=40](@1) -> float_type, {1, 10}, {10, 1}
@15 = broadcast[axis=1,dims={1, 10}](@8) -> float_type, {1, 10}, {0, 1}
@16 = gpu::add(@13,@15,@14) -> float_type, {1, 10}, {10, 1}
output = @param:output -> float_type, {1, 10}, {10, 1}
@17 = gpu::softmax[axis=1](@16,output) -> float_type, {1, 10}, {10, 1}
```
</details>
<br/><br/>
##### Example: perf
```
$ /opt/rocm/bin/migraphx-driver perf simple_graph.pb
```
<details>
<summary>View output</summary>
```
Compiling ...
Reading: simple_graph.pb
@0 = check_context::migraphx::version_1::gpu::context -> float_type, {}, {}
@1 = hip::hip_allocate_memory[shape=float_type, {256}, {1},id=scratch] -> float_type, {256}, {1}
@2 = hip::hip_copy_literal[id=@literal:3] -> float_type, {784, 128}, {128, 1}
@3 = load[offset=0,end=512](@1) -> float_type, {1, 128}, {128, 1}
x = @param:x -> float_type, {1, 28, 28}, {784, 28, 1}
@4 = reshape[dims={-1, 784}](x) -> float_type, {1, 784}, {784, 1}
@5 = gpu::gemm[alpha=1,beta=0](@4,@2,@3) -> float_type, {1, 128}, {128, 1}
@6 = hip::hip_copy_literal[id=@literal:1] -> float_type, {128, 10}, {10, 1}
@7 = hip::hip_copy_literal[id=@literal:0] -> float_type, {10}, {1}
@8 = hip::hip_copy_literal[id=@literal:2] -> float_type, {128}, {1}
@9 = broadcast[axis=1,dims={1, 128}](@8) -> float_type, {1, 128}, {0, 1}
@10 = load[offset=512,end=1024](@1) -> float_type, {1, 128}, {128, 1}
@11 = gpu::add_relu(@5,@9,@10) -> float_type, {1, 128}, {128, 1}
@12 = load[offset=0,end=40](@1) -> float_type, {1, 10}, {10, 1}
@13 = gpu::gemm[alpha=1,beta=0](@11,@6,@12) -> float_type, {1, 10}, {10, 1}
@14 = broadcast[axis=1,dims={1, 10}](@7) -> float_type, {1, 10}, {0, 1}
@15 = load[offset=40,end=80](@1) -> float_type, {1, 10}, {10, 1}
@16 = gpu::add(@13,@14,@15) -> float_type, {1, 10}, {10, 1}
output = @param:output -> float_type, {1, 10}, {10, 1}
@17 = gpu::softmax[axis=1](@16,output) -> float_type, {1, 10}, {10, 1}
Allocating params ...
Running performance report ...
@0 = check_context::migraphx::version_1::gpu::context -> float_type, {}, {}: 0.00057782ms, 1%
@1 = hip::hip_allocate_memory[shape=float_type, {256}, {1},id=scratch] -> float_type, {256}, {1}: 0.000295ms, 1%
@2 = hip::hip_copy_literal[id=@literal:3] -> float_type, {784, 128}, {128, 1}: 0.00027942ms, 1%
@3 = load[offset=0,end=512](@1) -> float_type, {1, 128}, {128, 1}: 0.000232ms, 1%
x = @param:x -> float_type, {1, 28, 28}, {784, 28, 1}: 0.0003206ms, 1%
@4 = reshape[dims={-1, 784}](x) -> float_type, {1, 784}, {784, 1}: 0.00033842ms, 1%
@5 = gpu::gemm[alpha=1,beta=0](@4,@2,@3) -> float_type, {1, 128}, {128, 1}: 0.212592ms, 52%
@6 = hip::hip_copy_literal[id=@literal:1] -> float_type, {128, 10}, {10, 1}: 0.00085822ms, 1%
@7 = hip::hip_copy_literal[id=@literal:0] -> float_type, {10}, {1}: 0.000382ms, 1%
@8 = hip::hip_copy_literal[id=@literal:2] -> float_type, {128}, {1}: 0.0003486ms, 1%
@9 = broadcast[axis=1,dims={1, 128}](@8) -> float_type, {1, 128}, {0, 1}: 0.000299ms, 1%
@10 = load[offset=512,end=1024](@1) -> float_type, {1, 128}, {128, 1}: 0.000234ms, 1%
@11 = gpu::add_relu(@5,@9,@10) -> float_type, {1, 128}, {128, 1}: 0.0416597ms, 11%
@12 = load[offset=0,end=40](@1) -> float_type, {1, 10}, {10, 1}: 0.0007548ms, 1%
@13 = gpu::gemm[alpha=1,beta=0](@11,@6,@12) -> float_type, {1, 10}, {10, 1}: 0.0733071ms, 18%
@14 = broadcast[axis=1,dims={1, 10}](@7) -> float_type, {1, 10}, {0, 1}: 0.00088142ms, 1%
@15 = load[offset=40,end=80](@1) -> float_type, {1, 10}, {10, 1}: 0.000408ms, 1%
@16 = gpu::add(@13,@14,@15) -> float_type, {1, 10}, {10, 1}: 0.0410144ms, 10%
output = @param:output -> float_type, {1, 10}, {10, 1}: 0.0010222ms, 1%
@17 = gpu::softmax[axis=1](@16,output) -> float_type, {1, 10}, {10, 1}: 0.0385636ms, 10%
Summary:
gpu::gemm: 0.285899ms, 69%
gpu::add_relu: 0.0416597ms, 11%
gpu::add: 0.0410144ms, 10%
gpu::softmax: 0.0385636ms, 10%
hip::hip_copy_literal: 0.00186824ms, 1%
load: 0.0016288ms, 1%
@param: 0.0013428ms, 1%
broadcast: 0.00118042ms, 1%
check_context::migraphx::version_1::gpu::context: 0.00057782ms, 1%
reshape: 0.00033842ms, 1%
hip::hip_allocate_memory: 0.000295ms, 1%
Rate: 2866.1/sec
Total time: 0.348906ms
Total instructions time: 0.414369ms
Overhead time: 0.00348144ms, -0.0654627ms
Overhead: 1%, -19%
```
</details>
<br/><br/>
# Performing Inference using MIGraphX Python Library
## Description
This example uses a pre-trained Resnet50 V2 model to demonstrate how inference can be run in Python using the MIGraphX library.
## How to Use this Example
If you do not already have Jupyter Notebooks installed, please refer to this [page](https://jupyter.org/install) for instructions.
Once Jupyter Notebooks is installed, you can navigate to this directory and issue the command:
```
$ jupyter notebook
```
From the browser window that is launched, click on `resnet50_inference.ipynb`.
You should now be able to run the notebook from your browser.
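For reference, below is a condensed sketch of the inference flow the notebook walks through, using the MIGraphX Python bindings. The model file name and the random input are placeholders (a real run would feed a preprocessed image), and it assumes the bindings were built as in the Dockerfile above:
```
import numpy as np
import migraphx

# Parse the ONNX model and compile it for the GPU; the Python bindings
# handle copying parameters to and from the device.
model = migraphx.parse_onnx("resnet50v2.onnx")
model.compile(migraphx.get_target("gpu"))

# Provide data of the expected shape for every graph input.
params = {}
for name, shape in model.get_parameter_shapes().items():
    params[name] = migraphx.argument(
        np.random.rand(*shape.lens()).astype(np.float32))

# Run inference and take the argmax over the class scores.
result = np.array(model.run(params)[0])
print("Top-1 class index:", int(np.argmax(result)))
```
The predicted index can then be looked up in the accompanying ImageNet labels file to obtain a human-readable class name.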
["tench",
"goldfish",
"great white shark",
"tiger shark",
"hammerhead shark",
"electric ray",
"stingray",
"cock",
"hen",
"ostrich",
"brambling",
"goldfinch",
"house finch",
"junco",
"indigo bunting",
"American robin",
"bulbul",
"jay",
"magpie",
"chickadee",
"American dipper",
"kite",
"bald eagle",
"vulture",
"great grey owl",
"fire salamander",
"smooth newt",
"newt",
"spotted salamander",
"axolotl",
"American bullfrog",
"tree frog",
"tailed frog",
"loggerhead sea turtle",
"leatherback sea turtle",
"mud turtle",
"terrapin",
"box turtle",
"banded gecko",
"green iguana",
"Carolina anole",
"desert grassland whiptail lizard",
"agama",
"frilled-necked lizard",
"alligator lizard",
"Gila monster",
"European green lizard",
"chameleon",
"Komodo dragon",
"Nile crocodile",
"American alligator",
"triceratops",
"worm snake",
"ring-necked snake",
"eastern hog-nosed snake",
"smooth green snake",
"kingsnake",
"garter snake",
"water snake",
"vine snake",
"night snake",
"boa constrictor",
"African rock python",
"Indian cobra",
"green mamba",
"sea snake",
"Saharan horned viper",
"eastern diamondback rattlesnake",
"sidewinder",
"trilobite",
"harvestman",
"scorpion",
"yellow garden spider",
"barn spider",
"European garden spider",
"southern black widow",
"tarantula",
"wolf spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse",
"prairie grouse",
"peacock",
"quail",
"partridge",
"grey parrot",
"macaw",
"sulphur-crested cockatoo",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"duck",
"red-breasted merganser",
"goose",
"black swan",
"tusker",
"echidna",
"platypus",
"wallaby",
"koala",
"wombat",
"jellyfish",
"sea anemone",
"brain coral",
"flatworm",
"nematode",
"conch",
"snail",
"slug",
"sea slug",
"chiton",
"chambered nautilus",
"Dungeness crab",
"rock crab",
"fiddler crab",
"red king crab",
"American lobster",
"spiny lobster",
"crayfish",
"hermit crab",
"isopod",
"white stork",
"black stork",
"spoonbill",
"flamingo",
"little blue heron",
"great egret",
"bittern",
"crane (bird)",
"limpkin",
"common gallinule",
"American coot",
"bustard",
"ruddy turnstone",
"dunlin",
"common redshank",
"dowitcher",
"oystercatcher",
"pelican",
"king penguin",
"albatross",
"grey whale",
"killer whale",
"dugong",
"sea lion",
"Chihuahua",
"Japanese Chin",
"Maltese",
"Pekingese",
"Shih Tzu",
"King Charles Spaniel",
"Papillon",
"toy terrier",
"Rhodesian Ridgeback",
"Afghan Hound",
"Basset Hound",
"Beagle",
"Bloodhound",
"Bluetick Coonhound",
"Black and Tan Coonhound",
"Treeing Walker Coonhound",
"English foxhound",
"Redbone Coonhound",
"borzoi",
"Irish Wolfhound",
"Italian Greyhound",
"Whippet",
"Ibizan Hound",
"Norwegian Elkhound",
"Otterhound",
"Saluki",
"Scottish Deerhound",
"Weimaraner",
"Staffordshire Bull Terrier",
"American Staffordshire Terrier",
"Bedlington Terrier",
"Border Terrier",
"Kerry Blue Terrier",
"Irish Terrier",
"Norfolk Terrier",
"Norwich Terrier",
"Yorkshire Terrier",
"Wire Fox Terrier",
"Lakeland Terrier",
"Sealyham Terrier",
"Airedale Terrier",
"Cairn Terrier",
"Australian Terrier",
"Dandie Dinmont Terrier",
"Boston Terrier",
"Miniature Schnauzer",
"Giant Schnauzer",
"Standard Schnauzer",
"Scottish Terrier",
"Tibetan Terrier",
"Australian Silky Terrier",
"Soft-coated Wheaten Terrier",
"West Highland White Terrier",
"Lhasa Apso",
"Flat-Coated Retriever",
"Curly-coated Retriever",
"Golden Retriever",
"Labrador Retriever",
"Chesapeake Bay Retriever",
"German Shorthaired Pointer",
"Vizsla",
"English Setter",
"Irish Setter",
"Gordon Setter",
"Brittany",
"Clumber Spaniel",
"English Springer Spaniel",
"Welsh Springer Spaniel",
"Cocker Spaniels",
"Sussex Spaniel",
"Irish Water Spaniel",
"Kuvasz",
"Schipperke",
"Groenendael",
"Malinois",
"Briard",
"Australian Kelpie",
"Komondor",
"Old English Sheepdog",
"Shetland Sheepdog",
"collie",
"Border Collie",
"Bouvier des Flandres",
"Rottweiler",
"German Shepherd Dog",
"Dobermann",
"Miniature Pinscher",
"Greater Swiss Mountain Dog",
"Bernese Mountain Dog",
"Appenzeller Sennenhund",
"Entlebucher Sennenhund",
"Boxer",
"Bullmastiff",
"Tibetan Mastiff",
"French Bulldog",
"Great Dane",
"St. Bernard",
"husky",
"Alaskan Malamute",
"Siberian Husky",
"Dalmatian",
"Affenpinscher",
"Basenji",
"pug",
"Leonberger",
"Newfoundland",
"Pyrenean Mountain Dog",
"Samoyed",
"Pomeranian",
"Chow Chow",
"Keeshond",
"Griffon Bruxellois",
"Pembroke Welsh Corgi",
"Cardigan Welsh Corgi",
"Toy Poodle",
"Miniature Poodle",
"Standard Poodle",
"Mexican hairless dog",
"grey wolf",
"Alaskan tundra wolf",
"red wolf",
"coyote",
"dingo",
"dhole",
"African wild dog",
"hyena",
"red fox",
"kit fox",
"Arctic fox",
"grey fox",
"tabby cat",
"tiger cat",
"Persian cat",
"Siamese cat",
"Egyptian Mau",
"cougar",
"lynx",
"leopard",
"snow leopard",
"jaguar",
"lion",
"tiger",
"cheetah",
"brown bear",
"American black bear",
"polar bear",
"sloth bear",
"mongoose",
"meerkat",
"tiger beetle",
"ladybug",
"ground beetle",
"longhorn beetle",
"leaf beetle",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant",
"grasshopper",
"cricket",
"stick insect",
"cockroach",
"mantis",
"cicada",
"leafhopper",
"lacewing",
"dragonfly",
"damselfly",
"red admiral",
"ringlet",
"monarch butterfly",
"small white",
"sulphur butterfly",
"gossamer-winged butterfly",
"starfish",
"sea urchin",
"sea cucumber",
"cottontail rabbit",
"hare",
"Angora rabbit",
"hamster",
"porcupine",
"fox squirrel",
"marmot",
"beaver",
"guinea pig",
"common sorrel",
"zebra",
"pig",
"wild boar",
"warthog",
"hippopotamus",
"ox",
"water buffalo",
"bison",
"ram",
"bighorn sheep",
"Alpine ibex",
"hartebeest",
"impala",
"gazelle",
"dromedary",
"llama",
"weasel",
"mink",
"European polecat",
"black-footed ferret",
"otter",
"skunk",
"badger",
"armadillo",
"three-toed sloth",
"orangutan",
"gorilla",
"chimpanzee",
"gibbon",
"siamang",
"guenon",
"patas monkey",
"baboon",
"macaque",
"langur",
"black-and-white colobus",
"proboscis monkey",
"marmoset",
"white-headed capuchin",
"howler monkey",
"titi",
"Geoffroy's spider monkey",
"common squirrel monkey",
"ring-tailed lemur",
"indri",
"Asian elephant",
"African bush elephant",
"red panda",
"giant panda",
"snoek",
"eel",
"coho salmon",
"rock beauty",
"clownfish",
"sturgeon",
"garfish",
"lionfish",
"pufferfish",
"abacus",
"abaya",
"academic gown",
"accordion",
"acoustic guitar",
"aircraft carrier",
"airliner",
"airship",
"altar",
"ambulance",
"amphibious vehicle",
"analog clock",
"apiary",
"apron",
"waste container",
"assault rifle",
"backpack",
"bakery",
"balance beam",
"balloon",
"ballpoint pen",
"Band-Aid",
"banjo",
"baluster",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel",
"wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"swimming cap",
"bath towel",
"bathtub",
"station wagon",
"lighthouse",
"beaker",
"military cap",
"beer bottle",
"beer glass",
"bell-cot",
"bib",
"tandem bicycle",
"bikini",
"ring binder",
"binoculars",
"birdhouse",
"boathouse",
"bobsleigh",
"bolo tie",
"poke bonnet",
"bookcase",
"bookstore",
"bottle cap",
"bow",
"bow tie",
"brass",
"bra",
"breakwater",
"breastplate",
"broom",
"bucket",
"buckle",
"bulletproof vest",
"high-speed train",
"butcher shop",
"taxicab",
"cauldron",
"candle",
"cannon",
"canoe",
"can opener",
"cardigan",
"car mirror",
"carousel",
"tool kit",
"carton",
"car wheel",
"automated teller machine",
"cassette",
"cassette player",
"castle",
"catamaran",
"CD player",
"cello",
"mobile phone",
"chain",
"chain-link fence",
"chain mail",
"chainsaw",
"chest",
"chiffonier",
"chime",
"china cabinet",
"Christmas stocking",
"church",
"movie theater",
"cleaver",
"cliff dwelling",
"cloak",
"clogs",
"cocktail shaker",
"coffee mug",
"coffeemaker",
"coil",
"combination lock",
"computer keyboard",
"confectionery store",
"container ship",
"convertible",
"corkscrew",
"cornet",
"cowboy boot",
"cowboy hat",
"cradle",
"crane (machine)",
"crash helmet",
"crate",
"infant bed",
"Crock Pot",
"croquet ball",
"crutch",
"cuirass",
"dam",
"desk",
"desktop computer",
"rotary dial telephone",
"diaper",
"digital clock",
"digital watch",
"dining table",
"dishcloth",
"dishwasher",
"disc brake",
"dock",
"dog sled",
"dome",
"doormat",
"drilling rig",
"drum",
"drumstick",
"dumbbell",
"Dutch oven",
"electric fan",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso machine",
"face powder",
"feather boa",
"filing cabinet",
"fireboat",
"fire engine",
"fire screen sheet",
"flagpole",
"flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster bed",
"freight car",
"French horn",
"frying pan",
"fur coat",
"garbage truck",
"gas mask",
"gas pump",
"goblet",
"go-kart",
"golf ball",
"golf cart",
"gondola",
"gong",
"gown",
"grand piano",
"greenhouse",
"grille",
"grocery store",
"guillotine",
"barrette",
"hair spray",
"half-track",
"hammer",
"hamper",
"hair dryer",
"hand-held computer",
"handkerchief",
"hard disk drive",
"harmonica",
"harp",
"harvester",
"hatchet",
"holster",
"home theater",
"honeycomb",
"hook",
"hoop skirt",
"horizontal bar",
"horse-drawn vehicle",
"hourglass",
"iPod",
"clothes iron",
"jack-o'-lantern",
"jeans",
"jeep",
"T-shirt",
"jigsaw puzzle",
"pulled rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat",
"ladle",
"lampshade",
"laptop computer",
"lawn mower",
"lens cap",
"paper knife",
"library",
"lifeboat",
"lighter",
"limousine",
"ocean liner",
"lipstick",
"slip-on shoe",
"lotion",
"speaker",
"loupe",
"sawmill",
"magnetic compass",
"mail bag",
"mailbox",
"tights",
"tank suit",
"manhole cover",
"maraca",
"marimba",
"mask",
"match",
"maypole",
"maze",
"measuring cup",
"medicine chest",
"megalith",
"microphone",
"microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home",
"Model T",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"square academic cap",
"mosque",
"mosquito net",
"scooter",
"mountain bike",
"tent",
"computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook computer",
"obelisk",
"oboe",
"ocarina",
"odometer",
"oil filter",
"organ",
"oscilloscope",
"overskirt",
"bullock cart",
"oxygen mask",
"packet",
"paddle",
"paddle wheel",
"padlock",
"paintbrush",
"pajamas",
"palace",
"pan flute",
"paper towel",
"parachute",
"parallel bars",
"park bench",
"parking meter",
"passenger car",
"patio",
"payphone",
"pedestal",
"pencil case",
"pencil sharpener",
"perfume",
"Petri dish",
"photocopier",
"plectrum",
"Pickelhaube",
"picket fence",
"pickup truck",
"pier",
"piggy bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate ship",
"pitcher",
"hand plane",
"planetarium",
"plastic bag",
"plate rack",
"plow",
"plunger",
"Polaroid camera",
"pole",
"police van",
"poncho",
"billiard table",
"soda bottle",
"pot",
"potter's wheel",
"power drill",
"prayer rug",
"printer",
"prison",
"projectile",
"projector",
"hockey puck",
"punching bag",
"purse",
"quill",
"quilt",
"race car",
"racket",
"radiator",
"radio",
"radio telescope",
"rain barrel",
"recreational vehicle",
"reel",
"reflex camera",
"refrigerator",
"remote control",
"restaurant",
"revolver",
"rifle",
"rocking chair",
"rotisserie",
"eraser",
"rugby ball",
"ruler",
"running shoe",
"safe",
"safety pin",
"salt shaker",
"sandal",
"sarong",
"saxophone",
"scabbard",
"weighing scale",
"school bus",
"schooner",
"scoreboard",
"CRT screen",
"screw",
"screwdriver",
"seat belt",
"sewing machine",
"shield",
"shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule",
"sliding door",
"slot machine",
"snorkel",
"snowmobile",
"snowplow",
"soap dispenser",
"soccer ball",
"sock",
"solar thermal collector",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"motorboat",
"spider web",
"spindle",
"sports car",
"spotlight",
"stage",
"steam locomotive",
"through arch bridge",
"steel drum",
"stethoscope",
"scarf",
"stone wall",
"stopwatch",
"stove",
"strainer",
"tram",
"stretcher",
"couch",
"stupa",
"submarine",
"suit",
"sundial",
"sunglass",
"sunglasses",
"sunscreen",
"suspension bridge",
"mop",
"sweatshirt",
"swimsuit",
"swing",
"switch",
"syringe",
"table lamp",
"tank",
"tape player",
"teapot",
"teddy bear",
"television",
"tennis ball",
"thatched roof",
"front curtain",
"thimble",
"threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop",
"toilet seat",
"torch",
"totem pole",
"tow truck",
"toy store",
"tractor",
"semi-trailer truck",
"tray",
"trench coat",
"tricycle",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus",
"trombone",
"tub",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle",
"upright piano",
"vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin",
"volleyball",
"waffle iron",
"wall clock",
"wallet",
"wardrobe",
"military aircraft",
"sink",
"washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"Windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool",
"split-rail fence",
"shipwreck",
"yawl",
"yurt",
"website",
"comic book",
"crossword",
"traffic sign",
"traffic light",
"dust jacket",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot",
"trifle",
"ice cream",
"ice pop",
"baguette",
"bagel",
"pretzel",
"cheeseburger",
"hot dog",
"mashed potato",
"cabbage",
"broccoli",
"cauliflower",
"zucchini",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber",
"artichoke",
"bell pepper",
"cardoon",
"mushroom",
"Granny Smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple",
"banana",
"jackfruit",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate syrup",
"dough",
"meatloaf",
"pizza",
"pot pie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff",
"coral reef",
"geyser",
"lakeshore",
"promontory",
"shoal",
"seashore",
"valley",
"volcano",
"baseball player",
"bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper",
"corn",
"acorn",
"rose hip",
"horse chestnut seed",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn mushroom",
"earth star",
"hen-of-the-woods",
"bolete",
"ear",
"toilet paper"]