Unverified commit 455ad1f8 authored by guoshzhao, committed by GitHub

revise the term onnx to onnxruntime. (#232)

**Description**
Revise all occurrences of the term `onnx` to `onnxruntime`.
parent 2664850a
@@ -31,7 +31,7 @@ The structure of `benchmarks` package can be divided into layers from the bottom
   4. `DockerBenchmark` is the base class for real workloads based on docker. It also defines the abstract interfaces that need to be implemented by the subclasses.
2. Derived classes for all implemented benchmarks, which need to realize all the abstract interfaces. The benchmarks will be registered into `BenchmarkRegistry`.
3. `BenchmarkRegistry` provides a way of benchmark registration, maintains all the registered benchmarks, and supports benchmark launching by `BenchmarkContext`.
-4. `BenchmarkContext` provides the context to launch one benchmark, including name, parameters, platform(CPU, GPU, etc.), and framework(Pytorch, TF, ONNX, etc.).
+4. `BenchmarkContext` provides the context to launch one benchmark, including name, parameters, platform(CPU, GPU, etc.), and framework(Pytorch, TF, ONNXRuntime, etc.).
5. `BenchmarkResult` defines the structured results for each benchmark in json format, including name, return_code, start_time, end_time, raw_data, summarized metrics, reduce type, etc.
The `Executor` on the uppermost layer is the entrance for all the benchmarks. It launches the benchmark by `BenchmarkRegistry` and fetch `BenchmarkResult`.
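For context on the flow this hunk describes, here is a minimal usage sketch (not part of this diff). The import path, the validity check, the `launch_benchmark` call, and attribute access on the result are assumptions about the surrounding API; the benchmark name and parameters are placeholders.

```python
from superbench.benchmarks import BenchmarkRegistry, Platform, Framework  # assumed import path

# Build the context that selects a registered benchmark, its platform and framework.
context = BenchmarkRegistry.create_benchmark_context(
    'matmul',                          # hypothetical benchmark name
    platform=Platform.CUDA,
    parameters='--num_steps 20',       # hypothetical parameter string
    framework=Framework.ONNXRUNTIME,   # renamed from Framework.ONNX in this commit
)

# Executor-level flow: validate the context, launch, and read the structured result.
if BenchmarkRegistry.is_benchmark_context_valid(context):   # assumed validity check
    result = BenchmarkRegistry.launch_benchmark(context)    # assumed launch entry point
    print(result.name, result.return_code)
```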
@@ -114,7 +114,7 @@ class BenchmarkRegistry:
    name (str): name of benchmark in config file.
    platform (Platform): Platform types like Platform.CPU, Platform.CUDA, Platform.ROCM.
    parameters (str): predefined parameters of benchmark.
-   framework (Framework): Framework types like Framework.PYTORCH, Framework.ONNX.
+   framework (Framework): Framework types like Framework.PYTORCH, Framework.ONNXRUNTIME.
Return:
    benchmark_context (BenchmarkContext): the benchmark context.
"""
...
@@ -28,7 +28,7 @@ class Platform(Enum):
class Framework(Enum):
    """The Enum class representing different frameworks."""
-    ONNX = 'onnx'
+    ONNXRUNTIME = 'onnxruntime'
    PYTORCH = 'pytorch'
    TENSORFLOW1 = 'tf1'
    TENSORFLOW2 = 'tf2'
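The rename above is value-for-value. A small sketch of how a framework string maps onto the new member; the enum body is copied from this hunk, while the snippet itself is only illustrative.

```python
from enum import Enum

class Framework(Enum):
    """The Enum class representing different frameworks (as changed in this hunk)."""
    ONNXRUNTIME = 'onnxruntime'   # previously ONNX = 'onnx'
    PYTORCH = 'pytorch'
    TENSORFLOW1 = 'tf1'
    TENSORFLOW2 = 'tf2'

# A config entry such as `framework: onnxruntime` now resolves to the renamed member.
assert Framework('onnxruntime') is Framework.ONNXRUNTIME
assert Framework.ONNXRUNTIME.value == 'onnxruntime'
```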
@@ -89,7 +89,7 @@ def __init__(self, name, platform, parameters='', framework=Framework.NONE):
    name (str): name of benchmark in config file.
    platform (Platform): Platform types like CUDA, ROCM.
    parameters (str): predefined parameters of benchmark.
-   framework (Framework): Framework types like ONNX, PYTORCH.
+   framework (Framework): Framework types like ONNXRUNTIME, PYTORCH.
"""
self.__name = name
self.__platform = platform
...
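For illustration, constructing the context directly with the renamed framework value; the import path is an assumption, and the benchmark name and parameters are placeholders.

```python
from superbench.benchmarks import BenchmarkContext, Platform, Framework  # assumed import path

context = BenchmarkContext(
    'bert-large',                     # hypothetical benchmark name from a config file
    Platform.CUDA,
    parameters='--batch_size 32',     # hypothetical predefined parameters
    framework=Framework.ONNXRUNTIME,  # was Framework.ONNX before this commit
)
```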
@@ -124,7 +124,7 @@ def create_benchmark_context(cls, name, platform=Platform.CPU, parameters='', fr
    name (str): name of benchmark in config file.
    platform (Platform): Platform types like Platform.CPU, Platform.CUDA, Platform.ROCM.
    parameters (str): predefined parameters of benchmark.
-   framework (Framework): Framework types like Framework.PYTORCH, Framework.ONNX.
+   framework (Framework): Framework types like Framework.PYTORCH, Framework.ONNXRUNTIME.
Return:
    benchmark_context (BenchmarkContext): the benchmark context.
...
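Given the defaults visible in the signature above (`platform=Platform.CPU`, `parameters=''`), two common call shapes might look like the sketch below; the default `framework` value is truncated in the hunk header, so it is not assumed here.

```python
# Minimal call: only the benchmark name, relying on the defaults shown above.
context = BenchmarkRegistry.create_benchmark_context('accumulation')

# Explicit call with the renamed framework member introduced by this commit.
context = BenchmarkRegistry.create_benchmark_context(
    'accumulation', platform=Platform.CPU, framework=Framework.ONNXRUNTIME
)
```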
@@ -88,12 +88,12 @@ def test_is_benchmark_context_valid():
def test_get_benchmark_name():
    """Test interface BenchmarkRegistry.get_benchmark_name()."""
    # Register benchmarks for testing.
-    benchmark_names = ['accumulation', 'pytorch-accumulation', 'tf1-accumulation', 'onnx-accumulation']
+    benchmark_names = ['accumulation', 'pytorch-accumulation', 'tf1-accumulation', 'onnxruntime-accumulation']
    for name in benchmark_names:
        BenchmarkRegistry.register_benchmark(name, AccumulationBenchmark)
    # Test benchmark name for different Frameworks.
-    benchmark_frameworks = [Framework.NONE, Framework.PYTORCH, Framework.TENSORFLOW1, Framework.ONNX]
+    benchmark_frameworks = [Framework.NONE, Framework.PYTORCH, Framework.TENSORFLOW1, Framework.ONNXRUNTIME]
    for i in range(len(benchmark_names)):
        context = BenchmarkRegistry.create_benchmark_context(
            'accumulation', platform=Platform.CPU, framework=benchmark_frameworks[i]
...
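The test above implies a naming convention in which the framework value is prefixed onto the benchmark name. A hedged sketch of what the updated case is expected to resolve to; the exact return shape of `get_benchmark_name` is assumed.

```python
context = BenchmarkRegistry.create_benchmark_context(
    'accumulation', platform=Platform.CPU, framework=Framework.ONNXRUNTIME
)
# After this commit the ONNXRUNTIME context should resolve to the
# 'onnxruntime-accumulation' registration rather than 'onnx-accumulation'.
name = BenchmarkRegistry.get_benchmark_name(context)
assert name == 'onnxruntime-accumulation'
```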