Unverified commit d23ad898, authored by guoshzhao, committed by GitHub

Benchmarks: Doc Revision - Add ReduceType into benchmarks doc. (#150)

Add ReduceType description into benchmarks doc.
parent acf365a8
@@ -32,7 +32,7 @@ The structure of `benchmarks` package can be divided into layers from the bottom
2. Derived classes for all implemented benchmarks, which need to realize all the abstract interfaces. The benchmarks will be registered into `BenchmarkRegistry`.
3. `BenchmarkRegistry` provides a way of benchmark registration, maintains all the registered benchmarks, and supports benchmark launching by `BenchmarkContext`.
4. `BenchmarkContext` provides the context to launch one benchmark, including name, parameters, platform (CPU, GPU, etc.), and framework (PyTorch, TF, ONNX, etc.).
5. `BenchmarkResult` defines the structured results for each benchmark in JSON format, including name, return_code, start_time, end_time, raw_data, summarized metrics, reduce type, etc.
The `Executor` on the uppermost layer is the entrance for all the benchmarks. It launches the benchmark by `BenchmarkRegistry` and fetches the `BenchmarkResult`.
@@ -181,7 +181,7 @@ This chapter will describe the interfaces with the caller (Superbench executor),
The inputs needed by the `benchmarks` package are simple: just the context object of the benchmark to run:
#### Invoke
```py
context = BenchmarkRegistry.create_benchmark_context(
@@ -219,6 +219,11 @@ result = {
        ...
        'metricsM': List[Number],
    },
'reduce': {
'metrics1': ReduceType,
...
'metricsM': ReduceType,
},
}
```
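The new `reduce` section tells a result consumer how to collapse each metric's per-rank list. A minimal sketch of such a consumer, assuming a hypothetical helper (`reduce_metrics` is not part of the package) and the reduce-type names shown in the examples:

```py
# Apply each metric's reduce type to its raw value list; a reduce type of
# None means "keep the raw data as-is". The helper and op table below are
# hypothetical, for illustration only.
from statistics import mean

REDUCE_OPS = {'min': min, 'max': max, 'avg': mean, 'sum': sum, None: None}

def reduce_metrics(result):
    """Reduce each metric's list according to result['reduce']."""
    reduced = {}
    for metric, values in result['result'].items():
        op = REDUCE_OPS.get(result['reduce'].get(metric))
        reduced[metric] = op(values) if op else values
    return reduced

example = {
    'result': {'throughput-train-float32': [100.0, 98.0, 101.0]},
    'reduce': {'throughput-train-float32': 'min'},
}
print(reduce_metrics(example))  # {'throughput-train-float32': 98.0}
```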
@@ -244,6 +249,12 @@ result = {
        'throughput-inference-float32': [avg_throughput1, ..., avg_throughputN],
        'throughput-inference-float16': [avg_throughput1, ..., avg_throughputN],
    },
'reduce': {
'throughput-train-float32': 'min',
'throughput-train-float16': 'min',
'throughput-inference-float32': None,
'throughput-inference-float16': None,
},
}
```
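In the model-benchmark example above, every key under `result` has a matching entry under `reduce`. A consumer could check that invariant with a small validation, sketched here with a hypothetical helper name:

```py
# Sanity check (illustrative, not part of the package) that the 'reduce'
# section mirrors the metric keys in 'result'.
def reduce_keys_match(result):
    return set(result['result']) == set(result['reduce'])

example = {
    'result': {
        'throughput-train-float32': [123.0, 125.0],
        'throughput-inference-float32': [456.0, 460.0],
    },
    'reduce': {
        'throughput-train-float32': 'min',
        'throughput-inference-float32': None,
    },
}
print(reduce_keys_match(example))  # True
```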
@@ -261,5 +272,8 @@ result = {
    'result': {  # Key is metrics
        'overhead': [overhead1, ..., overheadN],
    },
'reduce': {
'overhead': None,
},
}
```