Unverified commit 43620c3f authored by Yifan Xiong, committed by GitHub

Docs - Update README and version for v0.2.0 release (#111)

Update README and version for v0.2 release.
parent fb7d4a73
# SuperBench
[![Lint](https://github.com/microsoft/superbenchmark/workflows/Lint/badge.svg)](https://github.com/microsoft/superbenchmark/actions?query=workflow%3ALint)
[![Build Image](https://github.com/microsoft/superbenchmark/workflows/Build%20Image/badge.svg)](https://github.com/microsoft/superbenchmark/actions/workflows/build-image.yml)
[![Codecov](https://codecov.io/gh/microsoft/superbenchmark/branch/main/graph/badge.svg?token=DDiDLW7pSd)](https://codecov.io/gh/microsoft/superbenchmark)
[![Website](https://img.shields.io/website?down_color=lightgrey&url=https%3A%2F%2Faka.ms%2Fsuperbench)](https://aka.ms/superbench)
[![Latest Release](https://img.shields.io/github/release/microsoft/superbenchmark.svg)](https://github.com/microsoft/superbenchmark/releases/latest)
[![Docker Pulls](https://img.shields.io/docker/pulls/superbench/superbench.svg)](https://hub.docker.com/r/superbench/superbench/tags)
[![License](https://img.shields.io/github/license/microsoft/superbenchmark.svg)](LICENSE)
| Azure Pipelines | Build Status |
| :---: | :---: |
| cpu-unit-test | [![Build Status](https://dev.azure.com/msrasrg/SuperBenchmark/_apis/build/status/cpu-unit-test?branchName=main)](https://dev.azure.com/msrasrg/SuperBenchmark/_build/latest?definitionId=77&branchName=main) |
| cuda-unit-test | [![Build Status](https://dev.azure.com/msrasrg/SuperBenchmark/_apis/build/status/cuda-unit-test?branchName=main)](https://dev.azure.com/msrasrg/SuperBenchmark/_build/latest?definitionId=80&branchName=main) |
| ansible-integration-test | [![Build Status](https://dev.azure.com/msrasrg/SuperBenchmark/_apis/build/status/ansible-integration-test?branchName=main)](https://dev.azure.com/msrasrg/SuperBenchmark/_build/latest?definitionId=82&branchName=main) |
__SuperBench__ is a validation and profiling tool for AI infrastructure.
📢 [v0.2.0](https://github.com/microsoft/superbenchmark/releases/tag/v0.2.0) has been released!
_Check [aka.ms/superbench](https://aka.ms/superbench) for more details._
## Trademarks
......
@@ -80,14 +80,26 @@ superbench:
      parameters:
        <<: *common_model_config
        batch_size: 128
    resnet_models:
      <<: *default_pytorch_mode
      models:
        - resnet50
        - resnet101
        - resnet152
      parameters:
        <<: *common_model_config
        batch_size: 128
    densenet_models:
      <<: *default_pytorch_mode
      models:
        - densenet169
        - densenet201
      parameters:
        <<: *common_model_config
        batch_size: 128
    vgg_models:
      <<: *default_pytorch_mode
      models:
        - vgg11
        - vgg13
        - vgg16
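The `<<: *…` entries above are YAML merge keys: each model group pulls in shared anchors (`default_pytorch_mode`, `common_model_config`) and then overrides fields locally. A minimal Python sketch of that merge semantics, using plain dicts as stand-ins for the parsed anchors (the anchor contents below are illustrative assumptions, not the real SuperBench defaults):

```python
# Illustrative sketch of YAML merge-key (`<<:`) semantics using plain dicts.
# The anchor values below are stand-ins, not the real SuperBench defaults.

default_pytorch_mode = {"frameworks": ["pytorch"], "modes": [{"name": "local"}]}
common_model_config = {"duration": 0, "num_warmup": 16, "num_steps": 128}

def merge(base, overrides):
    """Emulate `<<: *base` followed by local keys: local keys win."""
    result = dict(base)
    result.update(overrides)
    return result

resnet_models = merge(default_pytorch_mode, {
    "models": ["resnet50", "resnet101", "resnet152"],
    "parameters": merge(common_model_config, {"batch_size": 128}),
})

print(resnet_models["parameters"]["batch_size"])  # local override: 128
print(resnet_models["parameters"]["num_warmup"])  # inherited from the anchor: 16
```

Keys defined next to the merge key shadow the anchor's keys, which is why each group can share one parameter block yet still set its own `batch_size`.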
@@ -99,26 +111,24 @@ superbench:

By default, all benchmarks in the default configuration will run if you don't specify a customized configuration.

If you want to have a quick try, you can modify this config a little bit, for example, to run only the resnet101 model.

1. Copy the default config to a file named `resnet.yaml` in the current path.

   ```bash
   cp superbench/config/default.yaml resnet.yaml
   ```

2. Enable only `resnet_models` in the config and remove all models except resnet101 under `benchmarks.resnet_models.models`.

   ```yaml {3,11} title="resnet.yaml"
   # SuperBench Config
   superbench:
     enable: ['resnet_models']
     var:
       # ...
     # omit the middle part
     # ...
     resnet_models:
       <<: *default_pytorch_mode
       models:
         - resnet101
       parameters:
         <<: *common_model_config
         batch_size: 128
   ```
......
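The manual edit in step 2 boils down to two changes on the parsed config: set `enable` to one benchmark group and trim that group's model list. A hedged sketch of the same transformation on plain Python dicts (the structure mirrors the YAML above; everything else is illustrative):

```python
# Sketch of what the manual edit to resnet.yaml does, expressed on the
# parsed config as plain dicts (structure mirrors the YAML shown above).

config = {
    "superbench": {
        "enable": None,  # null: run every enabled benchmark group
        "benchmarks": {
            "resnet_models": {"models": ["resnet50", "resnet101", "resnet152"]},
            "densenet_models": {"models": ["densenet169", "densenet201"]},
            "vgg_models": {"models": ["vgg11", "vgg13", "vgg16"]},
        },
    }
}

def keep_only(config, benchmark, model):
    """Enable a single benchmark group and keep a single model in it."""
    sb = config["superbench"]
    sb["enable"] = [benchmark]
    sb["benchmarks"][benchmark]["models"] = [model]
    return config

keep_only(config, "resnet_models", "resnet101")
print(config["superbench"]["enable"])  # ['resnet_models']
```

Other groups stay in the file untouched; they are simply skipped because they are not listed in `enable`.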
@@ -54,6 +54,12 @@ deactivate

You can clone the source from GitHub and build it.

:::note Note
You should check out the corresponding tag to use a release version, for example,
`git clone -b v0.2.0 https://github.com/microsoft/superbenchmark`
:::

```bash
git clone https://github.com/microsoft/superbenchmark
cd superbenchmark
```
......
@@ -24,6 +24,12 @@ or your private key requires a passphrase before use, you can do

```bash
sb deploy -f remote.ini --host-password [password]
```

:::note Note
You should deploy the corresponding Docker image to use a release version, for example,
`sb deploy -f local.ini -i superbench/superbench:v0.2.0-cuda11.1.1`
:::

## Run

After deployment, you can start to run the SuperBench benchmarks on all managed nodes using the `sb run` command.
......
@@ -26,4 +26,4 @@ as well as model-benchmark to measure domain-aware end-to-end deep learning work

The following figure shows the capabilities provided by SuperBench core framework and its extension.

![SuperBench Structure](./assets/architecture.svg)
@@ -6,5 +6,5 @@
Provide hardware and software benchmarks for AI systems.
"""
__version__ = '0.2.0'
__author__ = 'Microsoft'
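Downstream scripts sometimes gate behavior on a version string like this one. A minimal, hedged sketch of a safe comparison (pure tuple compare, nothing SuperBench-specific; the helper name is an assumption for illustration):

```python
# Compare dotted version strings numerically, avoiding the classic
# string-comparison trap ('0.10.0' < '0.2.0' when compared lexicographically).

def version_tuple(version):
    """'v0.2.0' -> (0, 2, 0); a leading 'v' is ignored."""
    return tuple(int(part) for part in version.lstrip("v").split("."))

print(version_tuple("v0.2.0"))                              # (0, 2, 0)
print(version_tuple("0.10.0") > version_tuple("0.2.0"))     # True: numeric compare
```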
# SuperBench Config
version: v0.2
superbench:
enable: null
var:
......
# SuperBench Config
version: v0.2
superbench:
enable: null
var:
......
@@ -15,11 +15,11 @@ This blog is to introduce [SuperBench](https://github.com/microsoft/superbenchma

### Easy-to-use CLI

In order to provide a good user experience, SuperBench provides a command line interface to help users deploy and run benchmarks.
Empowered by the SuperBench CLI, users can deploy and run their benchmarks with only one command, which greatly shortens the learning curve
and helps users easily evaluate the performance of AI workloads.

Below is a simple example to show how to deploy and run benchmarks locally. For more information,
please view the [CLI Document](https://microsoft.github.io/superbenchmark/docs/cli).

1. Deploy
@@ -48,9 +48,9 @@ For more information, please view [configuration](https://microsoft.github.io/su

1. Executor Framework

In order to facilitate benchmarking and validation on large-scale clusters, we designed and implemented a modular and extensible framework.
The SuperBench framework includes a runner as the control node, as well as multiple executors as worker nodes.
The runner receives commands from the CLI, distributes them to all worker nodes in the cluster, collects data, and summarizes the results.
Each worker runs an executor to execute the specified benchmark tasks.

![SuperBench Executor Workflow](../../docs/assets/executor_workflow.png)
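The runner/executor control flow described above can be sketched in a few lines of Python. Everything here is illustrative: the real framework distributes work across machines (e.g. via Ansible), and names like `Executor.run` are assumptions for this sketch, not SuperBench's API:

```python
# Illustrative sketch of the runner/executor flow: the runner fans a task
# list out to per-node executors, collects each node's results, and
# summarizes them. Not SuperBench's real API.

class Executor:
    """Stand-in for the per-node worker that runs benchmark tasks."""

    def __init__(self, node):
        self.node = node

    def run(self, task):
        # A real executor would launch the benchmark; we fake a fixed metric.
        return {"node": self.node, "task": task, "metric": 42.0}

class Runner:
    """Stand-in for the control node: distribute, collect, summarize."""

    def __init__(self, nodes):
        self.executors = [Executor(node) for node in nodes]

    def run(self, tasks):
        # Distribute every task to every node, then average per task.
        results = [ex.run(task) for ex in self.executors for task in tasks]
        summary = {
            task: sum(r["metric"] for r in results if r["task"] == task) / len(self.executors)
            for task in tasks
        }
        return results, summary

results, summary = Runner(["node0", "node1"]).run(["kernel-launch", "resnet_models"])
print(summary)  # mean metric per task across nodes
```

The design point the sketch captures is the separation of concerns: the runner owns distribution and aggregation, while each executor only knows how to run a task on its own node.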
@@ -88,7 +88,7 @@ SuperBench supports a set of benchmarks listed as below.

* BERT models
* GPT-2 models

For the details of each benchmark, please view [micro-benchmarks](https://microsoft.github.io/superbenchmark/docs/benchmarks/micro-benchmarks.md)
and [model-benchmarks](https://microsoft.github.io/superbenchmark/docs/benchmarks/model-benchmarks.md).
@@ -96,7 +96,7 @@ and [model-benchmarks](https://microsoft.github.io/superbenchmark/docs/benchmark

We want to extend SuperBench capability to distributed validation and auto-diagnosis, to build a benchmarking eco-system.
The following figure shows the whole picture.

![SuperBench Capabilities and Extension](../../docs/assets/architecture.svg)

With SuperBench and its extensions, we can support:
@@ -111,4 +111,4 @@ With SuperBench and its extensions, we can support:

## Call for Contributor

This project welcomes contributions and suggestions.
@@ -101,6 +101,7 @@ module.exports = {
announcementBar: {
id: 'supportus',
content:
'📢 <a href="https://microsoft.github.io/superbenchmark/blog/release-sb-v0.2">v0.2</a> has been released! ' +
'⭐️ If you like SuperBench, give it a star on <a target="_blank" rel="noopener noreferrer" href="https://github.com/microsoft/superbenchmark">GitHub</a>! ⭐️',
},
prism: {
......
{
"name": "superbench-website",
"version": "0.0.0",
"version": "0.2.0",
"lockfileVersion": 1,
"requires": true,
"dependencies": {
......
{
"name": "superbench-website",
"version": "0.0.0",
"version": "0.2.0",
"private": true,
"scripts": {
"docusaurus": "docusaurus",
......