tsoc / superbenchmark · Commits

Unverified commit f15fdf72, authored Nov 09, 2021 by guoshzhao, committed by GitHub Nov 09, 2021

Docs: Update docs to add ORT AMD benchmarks based on docker (#237)

Update docs to add ORT AMD benchmarks based on docker.

parent 008e0fe1
Changes: 3 changed files, with 32 additions and 3 deletions (+32 −3).

| File                                               | Changes |
|----------------------------------------------------|---------|
| docs/superbench-config.mdx                         | +3 −3   |
| docs/user-tutorial/benchmarks/docker-benchmarks.md | +28 −0  |
| website/sidebars.js                                | +1 −0   |
docs/superbench-config.mdx (view file @ f15fdf72)

```diff
@@ -148,8 +148,8 @@ superbench:
 Mappings of `${benchmark_name}: Benchmark`.
-There are two types of benchmarks, micro-benchmark and model-benchmark.
-For micro-benchmark, `${benchmark_name}` should be the exact same as provided micro-benchmarks' name.
+There are three types of benchmarks, micro-benchmark, model-benchmark and docker-benchmark.
+For micro-benchmark and docker-benchmark, `${benchmark_name}` should be the exact same as provided benchmarks' name.
 For model-benchmark, `${benchmark_name}` should be in `${name}_models` format,
 each model-benchmark can have a customized name while ending with `_models`.
```

```diff
@@ -269,7 +269,7 @@ See [`Mode` Schema](#mode-schema) for mode definition.
 A list of frameworks in which the benchmark runs.
 Some benchmarks can support multiple frameworks while others only support one.
-* accepted values: `[ onnx | pytorch | tf1 | tf2 | none ]`
+* accepted values: `[ onnxruntime | pytorch | tf1 | tf2 | none ]`
 * default value: `[ none ]`

 ### `models`
```
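To make the naming rules above concrete, here is a hypothetical config fragment following the schema described in the hunk. The benchmark names and nesting shown are illustrative assumptions, not content from this commit:

```yaml
# Illustrative only: how `${benchmark_name}` keys differ per benchmark type.
superbench:
  benchmarks:
    ort-models:        # docker-benchmark: key matches the provided benchmark's name exactly
      frameworks:
        - onnxruntime  # accepted value after this change (previously `onnx`)
    gpt2_models:       # model-benchmark: customized name, must end with `_models`
      frameworks:
        - pytorch
```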
docs/user-tutorial/benchmarks/docker-benchmarks.md (new file, 0 → 100644, view file @ f15fdf72)

---
id: docker-benchmarks
---

# Docker Benchmarks

## ROCm ONNXRuntime Model Benchmarks

### `ort-models`

#### Introduction

Run the ROCm ONNXRuntime model training benchmarks packaged in the docker image
`superbench/benchmark:rocm4.3.1-onnxruntime1.9.0`,
which includes Bert-large, Distilbert-base, GPT-2, facebook/Bart-large and Roberta-large.

#### Metrics

| Name                                                  | Unit                   | Description                                                |
|-------------------------------------------------------|------------------------|------------------------------------------------------------|
| onnxruntime-ort-models/bert_large_uncased_ngpu_1      | throughput (samples/s) | The throughput of the bert-large-uncased model on 1 GPU.   |
| onnxruntime-ort-models/bert_large_uncased_ngpu_8      | throughput (samples/s) | The throughput of the bert-large-uncased model on 8 GPUs.  |
| onnxruntime-ort-models/distilbert_base_uncased_ngpu_1 | throughput (samples/s) | The throughput of the distilbert-base-uncased model on 1 GPU. |
| onnxruntime-ort-models/distilbert_base_uncased_ngpu_8 | throughput (samples/s) | The throughput of the distilbert-base-uncased model on 8 GPUs. |
| onnxruntime-ort-models/gpt2_ngpu_1                    | throughput (samples/s) | The throughput of the gpt2 model on 1 GPU.                 |
| onnxruntime-ort-models/gpt2_ngpu_8                    | throughput (samples/s) | The throughput of the gpt2 model on 8 GPUs.                |
| onnxruntime-ort-models/facebook_bart_large_ngpu_1     | throughput (samples/s) | The throughput of the facebook/bart-large model on 1 GPU.  |
| onnxruntime-ort-models/facebook_bart_large_ngpu_8     | throughput (samples/s) | The throughput of the facebook/bart-large model on 8 GPUs. |
| onnxruntime-ort-models/roberta_large_ngpu_1           | throughput (samples/s) | The throughput of the roberta-large model on 1 GPU.        |
| onnxruntime-ort-models/roberta_large_ngpu_8           | throughput (samples/s) | The throughput of the roberta-large model on 8 GPUs.       |
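The metric names in the table above follow a fixed `onnxruntime-ort-models/<model>_ngpu_<n>` pattern. As a minimal sketch (my own helper, not part of SuperBench), a result-processing script could split a metric name into its benchmark, model, and GPU count like this:

```python
import re

def parse_metric(name: str) -> tuple[str, str, int]:
    """Split a metric like 'onnxruntime-ort-models/gpt2_ngpu_8'
    into (benchmark, model, gpu_count)."""
    benchmark, _, metric = name.partition("/")
    m = re.fullmatch(r"(?P<model>.+)_ngpu_(?P<n>\d+)", metric)
    if m is None:
        raise ValueError(f"unexpected metric name: {name}")
    return benchmark, m.group("model"), int(m.group("n"))

print(parse_metric("onnxruntime-ort-models/bert_large_uncased_ngpu_8"))
# → ('onnxruntime-ort-models', 'bert_large_uncased', 8)
```

This makes it easy to group the per-GPU-count throughput numbers by model when aggregating results.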
website/sidebars.js (view file @ f15fdf72)

```diff
@@ -27,6 +27,7 @@ module.exports = {
       items: [
         'user-tutorial/benchmarks/micro-benchmarks',
         'user-tutorial/benchmarks/model-benchmarks',
+        'user-tutorial/benchmarks/docker-benchmarks',
       ],
     },
     'user-tutorial/system-config',
```