Commit 7e297b13 authored by Paul

Merge

parents 86ea5e91 aa7ff911
Contributor Guide
=================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   dev_intro
   dev/data
   dev/operators
   dev/program
...@@ -13,3 +13,4 @@ Developer Guide
   dev/quantization
   dev/pass
   dev/matchers
   dev/tools
Tools
=====
roctx.py
--------
The MIGraphX driver provides a `roctx` command which can be used with the `rocprof` binary to get marker timing information for each MIGraphX operator.
To help users process this timing information, a rocTX helper script is provided at `tools/roctx.py`.
The `roctx.py` helper script provides two main functions: `run` and `parse`. The available knobs and usage are given below:
::

    Usage: roctx.py [-h] [--json-path json_path] [--out out]
                    [--study-name study-name] [--repeat repeat] [--parse]
                    [--run run] [--debug]
.. option:: --run

   Runs the `migraphx-driver roctx` command with the given `migraphx-driver` knobs, then parses the results, providing GPU kernel timing information.
   MIGraphX knobs can be given as a string to the `--run` knob. See the examples below.
.. option:: --parse

   Given `--json-path`, parses the JSON file and provides GPU kernel timing information.
.. option:: --out

   Output folder.
.. option:: --study-name

   Optional. Allows the user to name a study for easier interpretation. Defaults to a timestamp.
.. option:: --repeat

   Number of iterations. Set to **2** by default.
.. option:: --debug

   Provides additional debug information related to the data. Use for debugging purposes only.
**Examples:**
**Running inference with rocTX for a given ONNX file:**
::

    python roctx.py --run '--onnx --gpu fcn-resnet50-11.onnx' --out output_folder --repeat 5
After a run, output similar to that shown below is expected at the terminal. The output provides `SUM`, `MIN`, `MAX` and `COUNT` information for each kernel executed for a given model.
The average total time is also provided. Three files are provided for reference:

1. `OUTPUT CSV FILE` provides a summary of the run, listing the utilized MIGraphX knobs and the related kernel timing information
2. `KERNEL TIMING DETAILS` provides the hotspot kernel timing information
3. A third file provides all output data related to all iterations executed during the run
An example output:
.. image:: ./roctx1.jpg
Hotspot kernel timing information:
.. image:: ./roctx2.jpg
**Parsing an already existing JSON file:**
::

    python roctx.py --parse --json-path ../trace.json
MIGraphX Fundamentals
======================
MIGraphX provides an optimized execution engine for deep learning neural networks.
We will cover some simple operations in the MIGraphX framework here.
For a quick start guide to using MIGraphX, look in the examples directory: ``https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/tree/develop/examples/migraphx``.
Location of the Examples
-------------------------
The ``ref_dev_examples.cpp`` can be found in the test directory (``/test``).
The executable file ``test_ref_dev_examples`` based on this file will be created in the ``bin/`` of the build directory after running ``make -j$(nproc) test_ref_dev_examples``.
The executable will also be created when running ``make -j$(nproc) check``, alongside all the other tests.
Directions for building MIGraphX from source can be found in the main README file: ``https://github.com/ROCmSoftwarePlatform/AMDMIGraphX#readme``.
Adding Two Literals
--------------------
A program is a collection of modules, which are collections of instructions to be executed when calling `eval <migraphx::program::eval>`.
Each instruction has an associated `operation <migraphx::operation>` which represents the computation to be performed by the instruction.
We start with a snippet of the simple ``add_two_literals()`` function::

    // create the program and get a pointer to the main module
    migraphx::program p;
    auto* mm = p.get_main_module();

    // add two literals to the program
    auto one = mm->add_literal(1);
    auto two = mm->add_literal(2);

    // make the add operation between the two literals and add it to the program
    mm->add_instruction(migraphx::make_op("add"), one, two);

    // compile the program on the reference device
    p.compile(migraphx::ref::target{});

    // evaluate the program and retrieve the result
    auto result = p.eval({}).back();
    std::cout << "add_two_literals: 1 + 2 = " << result << "\n";
We start by creating a simple ``migraphx::program`` object and then getting a pointer to its main module.
The program is a collection of ``modules`` that start executing from the main module, so instructions are added to the modules rather than directly onto the program object.
We then use the `add_literal <migraphx::program::add_literal>` function to add an instruction that stores the literal number ``1`` while returning an `instruction_ref <migraphx::instruction_ref>`.
The returned `instruction_ref <migraphx::instruction_ref>` can be used in another instruction as an input.
We use the same `add_literal <migraphx::program::add_literal>` function to add a ``2`` to the program.
After creating the literals, we create the instruction to add the numbers together.
This is done with the `add_instruction <migraphx::program::add_instruction>` function, using the ``"add"`` `operation <migraphx::operation>` created by `make_op <migraphx::program::make_op>` along with the previous `add_literal` `instruction_ref <migraphx::instruction_ref>` results as the input arguments of the instruction.
Finally, we can run this `program <migraphx::program>` by compiling it for the reference target (CPU) and then running it with `eval <migraphx::program::eval>`.
The result is then retrieved and printed to the console.
We can compile the program for the GPU as well, but the file will have to be moved to the ``test/gpu/`` directory and the correct target must be included::

    #include <migraphx/gpu/target.hpp>
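The relationship between a program, its instructions, and the ``instruction_ref`` values they return can be sketched with a toy Python model. This is a conceptual illustration only, not the MIGraphX API: literals store values directly, and ``eval`` walks instructions in order, so every input is computed before it is used::

```python
# Conceptual model only: a program holds an ordered list of instructions,
# each with an operation and references to earlier instructions.
class Instruction:
    def __init__(self, op, inputs=()):
        self.op = op          # e.g. "literal" or "add"
        self.inputs = inputs  # the "instruction_refs" (here: Instruction objects)
        self.value = None

class Program:
    def __init__(self):
        self.instructions = []

    def add_literal(self, value):
        ins = Instruction("literal")
        ins.value = value
        self.instructions.append(ins)
        return ins  # returned reference can be used as an input later

    def add_instruction(self, op, *args):
        ins = Instruction(op, args)
        self.instructions.append(ins)
        return ins

    def eval(self):
        # walk instructions in order; inputs were computed earlier
        for ins in self.instructions:
            if ins.op == "add":
                ins.value = sum(arg.value for arg in ins.inputs)
        return self.instructions[-1].value

p = Program()
one = p.add_literal(1)
two = p.add_literal(2)
p.add_instruction("add", one, two)
print(p.eval())  # 3
```

The real API adds instructions to modules and dispatches through compiled operations, but the "references compose instructions into a graph" idea is the same.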
Using Parameters
-----------------
The previous program will always produce the same value of adding ``1`` and ``2``.
In the next program we want to pass an input to a program and compute a value based on the input.
We can modify the program to take an input parameter ``x``, as seen in the ``add_parameter()`` function::

    migraphx::program p;
    auto* mm = p.get_main_module();
    migraphx::shape s{migraphx::shape::int32_type, {1}};

    // add a "x" parameter with the shape s
    auto x   = mm->add_parameter("x", s);
    auto two = mm->add_literal(2);

    // add the "add" instruction between the "x" parameter and "two" to the module
    mm->add_instruction(migraphx::make_op("add"), x, two);
    p.compile(migraphx::ref::target{});
This adds a parameter of type ``int32``, and compiles it for the CPU.
To run the program, we need to pass the parameter as a ``parameter_map`` when we call `eval <migraphx::program::eval>`.
We create the ``parameter_map`` by setting the ``x`` key to an `argument <migraphx::argument>` object with an ``int`` data type::

    // create a parameter_map object for passing a value to the "x" parameter
    std::vector<int> data = {4};
    migraphx::parameter_map params;
    params["x"] = migraphx::argument(s, data.data());

    auto result = p.eval(params).back();
    std::cout << "add_parameters: 4 + 2 = " << result << "\n";
    EXPECT(result.at<int>() == 6);
Handling Tensor Data
---------------------
In the previous examples we have only been dealing with scalars, but the `shape <migraphx::shape>` class can describe multi-dimensional tensors.
For example, we can compute a simple convolution::

    migraphx::program p;
    auto* mm = p.get_main_module();

    // create shape objects for the input tensor and weights
    migraphx::shape input_shape{migraphx::shape::float_type, {2, 3, 4, 4}};
    migraphx::shape weights_shape{migraphx::shape::float_type, {3, 3, 3, 3}};

    // create the parameters and add the "convolution" operation to the module
    auto input   = mm->add_parameter("X", input_shape);
    auto weights = mm->add_parameter("W", weights_shape);
    mm->add_instruction(migraphx::make_op("convolution", {{"padding", {1, 1}}, {"stride", {2, 2}}}), input, weights);
Here we create two parameters for both the ``input`` and ``weights``.
In the previous examples we created simple literals; however, most programs will take data from allocated buffers (usually on the GPU).
In this case, we can create `argument <migraphx::argument>` objects directly from the pointers to the buffers::

    // Compile the program
    p.compile(migraphx::ref::target{});

    // Buffers allocated by the user
    std::vector<float> a = ...;
    std::vector<float> c = ...;

    // Solution vector
    std::vector<float> sol = ...;

    // Create the arguments in a parameter_map
    migraphx::parameter_map params;
    params["X"] = migraphx::argument(input_shape, a.data());
    params["W"] = migraphx::argument(weights_shape, c.data());

    // Evaluate and confirm the result
    auto result = p.eval(params).back();
    std::vector<float> results_vector(64);
    result.visit([&](auto output) { results_vector.assign(output.begin(), output.end()); });
    EXPECT(migraphx::verify_range(results_vector, sol));
An `argument <migraphx::argument>` can handle memory buffers from either the GPU or the CPU.
By default, when running the `program <migraphx::program>`, buffers are allocated on the corresponding target: when compiling for the CPU, the buffers are allocated on the CPU, and when compiling for the GPU, they are allocated on the GPU.
With the option ``offload_copy=true`` set while compiling for the GPU, the buffers will be located on the CPU.
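As a rough guide to the output shape the convolution above produces, the standard output-size formula can be computed directly. This is an illustrative sketch of the well-known formula, not MIGraphX code (MIGraphX computes output shapes internally when the instruction is added)::

```python
def conv2d_out_dim(in_dim, kernel, padding, stride, dilation=1):
    """Standard convolution output-size formula for one spatial dimension."""
    effective_kernel = dilation * (kernel - 1) + 1
    return (in_dim + 2 * padding - effective_kernel) // stride + 1

# input {2, 3, 4, 4} (NCHW), weights {3, 3, 3, 3} (OIHW), padding 1, stride 2
n, c_in, h, w = 2, 3, 4, 4
c_out, _, kh, kw = 3, 3, 3, 3
out_shape = (n, c_out, conv2d_out_dim(h, kh, 1, 2), conv2d_out_dim(w, kw, 1, 2))
print(out_shape)  # (2, 3, 2, 2)
```

The batch and output-channel dimensions pass through from the input and the weights, while the spatial dimensions shrink according to the kernel, padding, and stride.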
Importing From ONNX
--------------------
A `program <migraphx::program>` can be built directly from an onnx file using the MIGraphX ONNX parser.
This makes it easier to use neural networks directly from other frameworks.
In this case, there is a ``parse_onnx`` function::

    program p = migraphx::parse_onnx("model.onnx");
    p.compile(migraphx::gpu::target{});
...@@ -61,3 +61,21 @@ Verify each instruction

.. option:: -r, --reduce

   Reduce program and verify
roctx
-----
.. program:: migraphx-driver roctx
Provides marker information for each operation, allowing MIGraphX to be used with `rocprof <https://rocmdocs.amd.com/en/latest/ROCm_Tools/ROCm-Tools.html>`_ for performance analysis.
This allows the user to get GPU-level kernel timing information.
An example command line combined with rocprof for tracing purposes is given below:
.. code-block:: bash

    /opt/rocm/bin/rocprof --hip-trace --roctx-trace --flush-rate 1ms --timestamp on -d <OUTPUT_PATH> --obj-tracking on /opt/rocm/bin/migraphx-driver roctx <ONNX_FILE> <MIGRAPHX_OPTIONS>
After `rocprof` is run, the output directory will contain trace information for HIP, HCC and ROCTX in separate `.txt` files.
To understand the interactions between API calls, it is recommended to use the `roctx.py` helper script as described in the :ref:`dev/tools:rocTX` section.
.. include:: ./driver/compile.rst
...@@ -13,7 +13,7 @@ Welcome to AMD MIGraphX's documentation!

   py_user_guide
   cpp_user_guide
   driver
   contributor_guide

Indices and tables
Overview
========
MIGraphX provides an optimized execution engine for deep learning neural networks.
Building a program
------------------
A program consists of a set of instructions to be executed when calling `eval <migraphx::program::eval>`. Each instruction has an associated `operation <migraphx::operation>` which represents the computation to be performed by the instruction.
We can start by building a simple program to add two numbers together::

    program p;
    instruction_ref one = p.add_literal(1);
    instruction_ref two = p.add_literal(2);
    p.add_instruction(add{}, one, two);
The `add_literal <migraphx::program::add_literal>` function will add an instruction to the program to store a literal number. The `instruction_ref <migraphx::instruction_ref>` is a reference to the instruction in the program, which can be used to compose the output of the instruction with another instruction.
After creating the literals, we then create the instruction to add the numbers together. This is done by using the `add{} <migraphx::add>` operation class along with the `instruction_ref <migraphx::instruction_ref>` for the input arguments of the instruction.
Finally, we can run this `program <migraphx::program>` by compiling it for the CPU and then running it with `eval <migraphx::program::eval>`::

    p.compile(cpu::target{});
    argument result = p.eval({});
The easiest way to see the result is to print it::

    std::cout << result;

This will print ``3``.
We can also compile the program for the GPU.
Adding parameters
-----------------
Of course, this program will always produce the same value, which is quite uninteresting. Instead, we want to pass an input to a program and compute a value based on the input. This can be done with a parameter. For example, we can modify the program to take an input ``x``::

    program p;
    instruction_ref x   = p.add_parameter("x", {shape::int64_type});
    instruction_ref two = p.add_literal(2);
    p.add_instruction(add{}, x, two);
    p.compile(cpu::target{});
This adds a parameter of type ``int64``, and compiles it for the ``cpu``. To run the program, we need to pass the parameter to it when we call `eval <migraphx::program::eval>`::

    argument result = p.eval({
        {"x", literal{1}.get_argument()}
    });
    std::cout << result;
This will print ``3``.
A parameter is given as an `argument <migraphx::argument>`. In this case, the simplest way of creating an `argument <migraphx::argument>` is from a `literal <migraphx::literal>`.
Tensor data
-----------
In this example we are just creating numbers, but the `shape <migraphx::shape>` class can describe multi-dimensional tensors. For example, we can build a simple network with convolution and relu::

    program p;
    instruction_ref input   = p.add_parameter("x", shape{shape::float_type, {1, 3, 32, 32}});
    instruction_ref weights = p.add_parameter("w", shape{shape::float_type, {1, 3, 5, 5}});
    instruction_ref conv    = p.add_instruction(convolution{}, input, weights);
    p.add_instruction(activation{"relu"}, conv);
Here we create two parameters for both the ``input`` and ``weights``. In the previous examples, we just created simple literals; however, most programs will take data from already allocated buffers (usually on the GPU). In this case, we can create `argument <migraphx::argument>` objects directly from the pointers to the buffers::

    // Compile the program
    p.compile(gpu::target{});

    // Buffers allocated by the user
    float* input = ...;
    float* weights = ...;

    // Create the arguments
    argument input_arg{shape{shape::float_type, {1, 3, 32, 32}}, input};
    argument weights_arg{shape{shape::float_type, {1, 3, 5, 5}}, weights};
    p.eval({{"x", input_arg}, {"w", weights_arg}});
An `argument <migraphx::argument>` can handle memory buffers from either the GPU or the CPU, but when running the `program <migraphx::program>`, buffers should be allocated for the corresponding target. That is, when compiling for the CPU, the buffers should be allocated on the CPU, and when compiling for the GPU the buffers should be allocated on the GPU.
Importing from ONNX
-------------------

A `program <migraphx::program>` can be built directly from an ONNX file, which makes it easier to use neural networks directly from other frameworks. In this case, there is a ``parse_onnx`` function::

    program p = parse_onnx("model.onnx");
    p.compile(gpu::target{});
...@@ -12,31 +12,31 @@ shape

.. py:method:: type()

   An integer that represents the type.

   :rtype: int

.. py:method:: lens()

   A list of the lengths of the shape.

   :rtype: list[int]

.. py:method:: strides()

   A list of the strides of the shape.

   :rtype: list[int]

.. py:method:: elements()

   The number of elements in the shape.

   :rtype: int

.. py:method:: bytes()

   The number of bytes the shape uses.

   :rtype: int
...@@ -102,30 +102,73 @@ argument

   Generate an argument with random data.

   :param shape s: Shape of argument to generate.
   :param int seed: The seed used for random number generation.

   :rtype: argument
.. py:function:: fill_argument(s, value)

   Fill an argument of shape ``s`` with ``value``.

   :param shape s: Shape of argument to fill.
   :param int value: Value to fill in the argument.

   :rtype: argument
target
------

.. py:class:: target()

   This represents the compilation target.

.. py:function:: get_target(name)

   Constructs the target.

   :param str name: The name of the target to construct. This can either be 'gpu' or 'ref'.

   :rtype: target
module
------

.. py:method:: print()

   Prints the contents of the module as a list of instructions.

.. py:method:: add_instruction(op, args, mod_args=[])

   Adds an instruction to the module.

   :param operation op: ``migraphx.op`` to be added as an instruction.
   :param list[instruction] args: List of inputs to the op.
   :param list[module] mod_args: Optional list of module arguments to the operator.

   :rtype: instruction

.. py:method:: add_literal(data)

   Adds constant or literal data to the module from a Python buffer, which includes numpy arrays.

   :param py::buffer data: Python buffer or numpy array.

   :rtype: instruction

.. py:method:: add_parameter(name, shape)

   Adds a parameter to the module with the provided name and shape.

   :param str name: Name of the parameter.
   :param shape shape: Shape of the parameter.

   :rtype: instruction

.. py:method:: add_return(args)

   Adds a return instruction to the module.

   :param list[instruction] args: Instruction arguments to be returned from the module.

   :rtype: instruction
program
-------

...@@ -135,21 +178,27 @@ program

.. py:method:: clone()

   Make a copy of the program.

   :rtype: program

.. py:method:: get_parameter_names()

   Get the names of all the input parameters of the program as a list.

   :rtype: list[str]

.. py:method:: get_parameter_shapes()

   Get the shapes of all the input parameters in the program.

   :rtype: dict[str, shape]

.. py:method:: get_output_shapes()

   Get the shapes of the final outputs of the program.

   :rtype: list[shape]

.. py:method:: compile(t, offload_copy=True, fast_math=True)
...@@ -159,6 +208,19 @@ program

   :param bool offload_copy: For targets with offloaded memory (such as the GPU), this will insert instructions during compilation to copy the input parameters to the offloaded memory and to copy the final result from the offloaded memory back to main memory.
   :param bool fast_math: Optimize math functions to use faster approximate versions. There may be slight accuracy degradation when enabled.
.. py:method:: get_main_module()

   Get the main module of the program.

   :rtype: module

.. py:method:: create_module(name)

   Create and add a module with the provided name to the program.

   :param str name: Name of the new module.

   :rtype: module
.. py:method:: run(params)

   Run the program.

...@@ -167,7 +229,11 @@ program

   :type params: dict[str, argument]
   :return: The results of the last instruction.
   :rtype: list[argument]
.. py:method:: sort()

   Sort the modules of the program so that instructions appear in topologically sorted order.
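The topological ordering that ``sort`` produces can be illustrated with a minimal sketch over an instruction dependency graph. This is a conceptual depth-first illustration, not the MIGraphX implementation::

```python
def topo_sort(deps):
    """Order nodes so every node appears after its dependencies.

    `deps` maps each node to the list of nodes it depends on.
    """
    order, visited = [], set()

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        for dep in deps.get(node, []):
            visit(dep)  # emit dependencies first
        order.append(node)

    for node in deps:
        visit(node)
    return order

# an "add" instruction depends on the two literals it consumes
deps = {"add": ["one", "two"], "one": [], "two": []}
print(topo_sort(deps))  # ['one', 'two', 'add']
```

After sorting, evaluating instructions in list order is always safe because every input has already been computed.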
.. py:function:: quantize_fp16(prog, ins_names=["all"])

...@@ -190,10 +256,22 @@ program

   :type ins_names: list[str]
op
--

.. py:class:: op(name, kwargs)

   Construct an operation with a name and arguments.

   :param str name: Name of the operation; must be supported by MIGraphX.
   :param dict[str, any] kwargs: Arguments to the operation.

   :rtype: operation
parse_onnx
----------

.. py:function:: parse_onnx(filename, default_dim_value=1, map_input_dims={}, skip_unknown_operators=false, print_program_on_error=false, max_loop_iterations=10)

   Load and parse an ONNX file.

...@@ -202,20 +280,21 @@ parse_onnx

   :param str map_input_dims: Explicitly specify the dims of an input.
   :param str skip_unknown_operators: Continue parsing the ONNX file if an unknown operator is found.
   :param str print_program_on_error: Print the program if an error occurs.
   :param int max_loop_iterations: Maximum number of iterations for the loop operator.

   :rtype: program
parse_tf
--------

.. py:function:: parse_tf(filename, is_nhwc=True, batch_size=1, map_input_dims=dict(), output_names=[])

   Load and parse a TensorFlow protobuf file.

   :param str filename: Path to file.
   :param bool is_nhwc: Use NHWC as the default format.
   :param str batch_size: Default batch size to use (if not specified in the protobuf).
   :param dict[str, list[int]] map_input_dims: Optional argument to explicitly specify the dimensions of the inputs.
   :param list[str] output_names: Optional argument to specify the names of the output nodes.

   :rtype: program
load
----

...@@ -223,7 +302,7 @@ load

.. py:function:: load(filename, format='msgpack')

   Load a MIGraphX program.

   :param str filename: Path to file.
   :param str format: Format of the file. Valid options are msgpack or json.
...@@ -235,7 +314,7 @@ save

.. py:function:: save(p, filename, format='msgpack')

   Save a MIGraphX program.

   :param program p: Program to save.
   :param str filename: Path to file.
...@@ -4,15 +4,6 @@

This directory contains examples of common use cases for MIGraphX.

## Examples:

- [MIGraphX usage and utilities](./migraphx)
- [Vision inference examples](./vision)
- [Natural language inference examples](./nlp)
# AMD MIGraphX usage and utilities
- [C++ Parse, Load, and Save Graph Programs](./cpp_parse_load_save)
- [Exporting Frozen Graphs in TF1](./export_frozen_graph_tf1)
- [Exporting Frozen Graphs in TF2](./export_frozen_graph_tf2)
- [MIGraphX Docker Container](./migraphx_docker)
- [MIGraphX Driver](./migraphx_driver)
...@@ -15,7 +15,7 @@ p = parse_onnx(input_file, options);

## Saving

An instantiated migraphx::program object can then be serialized to MessagePack (.mxr) format and saved so that it can be loaded for future use.

A program can be saved with either of the following:

...@@ -25,8 +25,8 @@ migraphx::save(p, output_file);

```
migraphx::program p = ... <migraphx::program>;
migraphx::file_options options;
options.set_file_format("msgpack");
migraphx::save(p, output_file, options);
```

...@@ -41,15 +41,15 @@ p = migraphx::load(input_file);

```
migraphx::program p;
migraphx::file_options options;
options.set_file_format("msgpack");
p = migraphx::load(input_file, options);
```

To load a program that has been saved in JSON format:

```
migraphx::program p;
migraphx::file_options options;
options.set_file_format("json");
p = migraphx::load(input_file, options);
```
...@@ -24,7 +24,6 @@ int main(int argc, char** argv)

        return 0;
    }

    char* load_arg = getCmdOption(argv + 2, argv + argc, "--load");
    char* save_arg = getCmdOption(argv + 2, argv + argc, "--save");
    const char* input_file = argv[1];

...@@ -44,14 +43,14 @@ int main(int argc, char** argv)

    std::string format = load_arg;
    if(format == "json")
    {
        migraphx::file_options options;
        options.set_file_format("json");
        p = migraphx::load(input_file, options);
    }
    else if(format == "msgpack")
    {
        migraphx::file_options options;
        options.set_file_format("msgpack");
        p = migraphx::load(input_file, options);
    }
    else

...@@ -78,10 +77,10 @@ int main(int argc, char** argv)

    std::cout << "Saving program..." << std::endl;
    std::string output_file;
    output_file = save_arg == nullptr ? "out" : save_arg;
    output_file.append(".mxr");
    migraphx::file_options options;
    options.set_file_format("msgpack");
    migraphx::save(p, output_file.c_str(), options);
    std::cout << "Program has been saved as ./" << output_file << std::endl;
}