---
id: micro-benchmarks
---

# Micro Benchmarks

## Computation Benchmarks

### `kernel-launch`

#### Introduction

Measure GPU kernel launch latency,
which is defined as the time from the beginning of the launch API call to the beginning of the kernel execution.

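As a minimal sketch of the two timing modes (assuming PyTorch with CUDA; the benchmark itself launches its own CUDA kernel and accounts for overhead more carefully), a near-empty kernel can be bracketed with GPU events and with CPU wall-clock timestamps:

```python
# Hypothetical illustration: a tiny in-place add stands in for the benchmark's kernel.
import time
import torch

x = torch.ones(1, device='cuda')
torch.cuda.synchronize()

start_evt = torch.cuda.Event(enable_timing=True)
end_evt = torch.cuda.Event(enable_timing=True)

# event_time: bracket the launch with GPU timestamps.
start_evt.record()
x.add_(1)
end_evt.record()
torch.cuda.synchronize()
event_ms = start_evt.elapsed_time(end_evt)

# wall_time: bracket the launch and completion with CPU timestamps.
t0 = time.perf_counter()
x.add_(1)
torch.cuda.synchronize()
wall_ms = (time.perf_counter() - t0) * 1e3

print(f'event_time: {event_ms:.4f} ms, wall_time: {wall_ms:.4f} ms')
```
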
#### Metrics

| Name                     | Unit      | Description                          |
|--------------------------|-----------|--------------------------------------|
| kernel-launch/event_time | time (ms) | Launch latency measured in GPU time. |
| kernel-launch/wall_time  | time (ms) | Launch latency measured in CPU time. |

### `gemm-flops`

#### Introduction

Measure the GPU GEMM FLOPS for different float and int data types, with or without Tensor Core (XDLOPS),
performed by NVIDIA [cutlass](https://github.com/NVIDIA/cutlass/tree/ccb697bac77fcc898e9c897b2c90aa5b60ac72fb)
or AMD [rocblas-bench](https://github.com/ROCmSoftwarePlatform/rocBLAS/tree/develop/clients/benchmarks).

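These metrics boil down to `FLOPS = 2 * m * n * k / time` for an `m`-by-`k` times `k`-by-`n` GEMM. Below is a rough sketch of that arithmetic, using `torch.matmul` as an illustrative stand-in for the cutlass/rocblas-bench kernels (the shapes and iteration counts are assumptions):

```python
import torch

m = n = k = 8192
a = torch.randn(m, k, device='cuda', dtype=torch.float16)
b = torch.randn(k, n, device='cuda', dtype=torch.float16)

for _ in range(10):  # warm up so clocks and caches settle
    torch.matmul(a, b)
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 100
start.record()
for _ in range(iters):
    torch.matmul(a, b)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1e3 / iters
print(f'fp16 GEMM: {2 * m * n * k / seconds / 1e9:.1f} GFLOPS')
```
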
#### Metrics

| Name                         | Unit           | Description                                             |
|------------------------------|----------------|---------------------------------------------------------|
| gemm-flops/fp64_flops        | FLOPS (GFLOPS) | GEMM float64 peak FLOPS.                                |
| gemm-flops/fp32_flops        | FLOPS (GFLOPS) | GEMM float32 peak FLOPS.                                |
| gemm-flops/fp16_flops        | FLOPS (GFLOPS) | GEMM float16 peak FLOPS.                                |
| gemm-flops/fp64_tc_flops     | FLOPS (GFLOPS) | GEMM float64 peak FLOPS with NVIDIA Tensor Core.        |
| gemm-flops/tf32_tc_flops     | FLOPS (GFLOPS) | GEMM tensor-float32 peak FLOPS with NVIDIA Tensor Core. |
| gemm-flops/fp16_tc_flops     | FLOPS (GFLOPS) | GEMM float16 peak FLOPS with NVIDIA Tensor Core.        |
| gemm-flops/bf16_tc_flops     | FLOPS (GFLOPS) | GEMM bfloat16 peak FLOPS with NVIDIA Tensor Core.       |
| gemm-flops/int8_tc_iops      | IOPS (GIOPS)   | GEMM int8 peak IOPS with NVIDIA Tensor Core.            |
| gemm-flops/int4_tc_iops      | IOPS (GIOPS)   | GEMM int4 peak IOPS with NVIDIA Tensor Core.            |
| gemm-flops/fp32_xdlops_flops | FLOPS (GFLOPS) | GEMM tensor-float32 peak FLOPS with AMD XDLOPS.         |
| gemm-flops/fp16_xdlops_flops | FLOPS (GFLOPS) | GEMM float16 peak FLOPS with AMD XDLOPS.                |
| gemm-flops/bf16_xdlops_flops | FLOPS (GFLOPS) | GEMM bfloat16 peak FLOPS with AMD XDLOPS.               |
| gemm-flops/int8_xdlops_iops  | IOPS (GIOPS)   | GEMM int8 peak IOPS with AMD XDLOPS.                    |

### `matmul`

#### Introduction

Large-scale matmul operation using `torch.matmul` on one GPU.

#### Metrics

| Name                           | Unit      | Description                    |
|--------------------------------|-----------|--------------------------------|
| pytorch-matmul/nosharding_time | time (ms) | Time of pure matmul operation. |

### `cublaslt-gemm` / `hipblaslt-gemm`

#### Introduction

Measure the GEMM performance of [`cublasLtMatmul`](https://docs.nvidia.com/cuda/cublas/#cublasltmatmul) or [`hipblasLt-bench`](https://github.com/ROCm/hipBLASLt/blob/develop/clients/benchmarks/README.md).

#### Metrics

| Name                                                      | Unit           | Description                     |
|-----------------------------------------------------------|----------------|---------------------------------|
| cublaslt-gemm/${dtype}\_${batch}\_${m}\_${n}\_${k}_flops  | FLOPS (TFLOPS) | TFLOPS of measured GEMM kernel. |
| hipblaslt-gemm/${dtype}\_${batch}\_${m}\_${n}\_${k}_flops | FLOPS (TFLOPS) | TFLOPS of measured GEMM kernel. |

### `cublas-function`

#### Introduction

Measure the performance of the most common NVIDIA cuBLAS functions, with parameter settings drawn from the training of models including ResNet, VGG, DenseNet, LSTM, BERT, and GPT-2.

The supported functions for cuBLAS are as follows:
 - cublasSgemm
 - cublasSgemmStridedBatched
 - cublasGemmStridedBatchedEx
 - cublasGemmEx
 - cublasCgemm3mStridedBatched
 - cublasCgemm

#### Metrics

| Name                                                              | Unit      | Description                                                                                                                                  |
|-------------------------------------------------------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------------------|
| cublas-function/name\_${function_name}\_${parameters}_time        | time (us) | The mean time to execute the cublas function with the parameters.                                                                            |
| cublas-function/name\_${function_name}\_${parameters}_correctness |           | Whether the calculation results of executing the cublas function with the given parameters pass the correctness check, when the correctness check is enabled. |
| cublas-function/name\_${function_name}\_${parameters}_error       |           | The error ratio of the calculation results of executing the cublas function with the given parameters, when the correctness check is enabled.                 |

### `cudnn-function`

#### Introduction

Measure the performance of the most common NVIDIA cuDNN functions, with parameter settings drawn from the training of models including ResNet, VGG, DenseNet, LSTM, BERT, and GPT-2.

The supported functions for cuDNN are as follows:
 - cudnnConvolutionBackwardFilter
 - cudnnConvolutionBackwardData
 - cudnnConvolutionForward

#### Metrics

| Name                                                      | Unit      | Description                                                      |
|-----------------------------------------------------------|-----------|------------------------------------------------------------------|
| cudnn-function/name\_${function_name}\_${parameters}_time | time (us) | The mean time to execute the cudnn function with the parameters. |

### `tensorrt-inference`

#### Introduction

Inference PyTorch/ONNX models on NVIDIA GPUs with [TensorRT](https://developer.nvidia.com/tensorrt).

Currently the following models are supported:

> alexnet, densenet121, densenet169, densenet201, densenet161, googlenet, inception_v3, mnasnet0_5,
> mnasnet1_0, mobilenet_v2, resnet18, resnet34, resnet50, resnet101, resnet152, resnext50_32x4d,
> resnext101_32x8d, wide_resnet50_2, wide_resnet101_2, shufflenet_v2_x0_5, shufflenet_v2_x1_0,
> squeezenet1_0, squeezenet1_1, vgg11, vgg11_bn, vgg13, vgg13_bn, vgg16, vgg16_bn, vgg19_bn, vgg19

> lstm, bert-base, bert-large, gpt2-small

> Large models such as `gpt2-large` are currently not supported, because models larger than 2 GB (the maximum protobuf size) cannot be exported into a single ONNX file.

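For context, each PyTorch model is exported to a single ONNX file before the TensorRT engine is built; a minimal export sketch (assuming `torch` and `torchvision`, with an illustrative opset and input shape) is:

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

# This single-file export is the step that fails once the serialized protobuf
# would exceed 2 GB, which is why models like gpt2-large are unsupported.
torch.onnx.export(model, dummy, 'resnet50.onnx', opset_version=13,
                  input_names=['input'], output_names=['output'])
```
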
#### Metrics

| Name                                             | Unit      | Description                                                                                              |
|--------------------------------------------------|-----------|----------------------------------------------------------------------------------------------------------|
| tensorrt-inference/${model}_gpu_time_mean        | time (ms) | The mean GPU latency to execute the kernels for a query.                                                 |
| tensorrt-inference/${model}_gpu_time_99          | time (ms) | The 99th percentile GPU latency to execute the kernels for a query.                                      |
| tensorrt-inference/${model}_host_time_mean       | time (ms) | The mean H2D, GPU, and D2H latency to execute the kernels for a query.                                   |
| tensorrt-inference/${model}_host_time_99         | time (ms) | The 99th percentile H2D, GPU, and D2H latency to execute the kernels for a query.                        |
| tensorrt-inference/${model}_end_to_end_time_mean | time (ms) | The mean duration from when the H2D of a query is called to when the D2H of the same query is completed. |
| tensorrt-inference/${model}_end_to_end_time_99   | time (ms) | The 99th percentile duration from when the H2D of a query is called to when the D2H of the same query is completed. |

### `ort-inference`

#### Introduction

Inference performance of the torchvision models using ONNXRuntime. Currently the following models are supported:

> alexnet, densenet121, densenet169, densenet201, densenet161, googlenet, inception_v3, mnasnet0_5,
> mnasnet1_0, mobilenet_v2, resnet18, resnet34, resnet50, resnet101, resnet152, resnext50_32x4d,
> resnext101_32x8d, wide_resnet50_2, wide_resnet101_2, shufflenet_v2_x0_5, shufflenet_v2_x1_0,
> squeezenet1_0, squeezenet1_1, vgg11, vgg11_bn, vgg13, vgg13_bn, vgg16, vgg16_bn, vgg19_bn, vgg19

The supported percentiles are 50, 90, 95, 99, and 99.9.

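A small sketch of how measured per-batch latencies map to the mean and percentile metrics (assuming numpy; `latencies_ms` is placeholder data standing in for real measurements):

```python
import numpy as np

latencies_ms = np.random.lognormal(mean=1.0, sigma=0.1, size=1000)  # placeholder

print(f'time: {latencies_ms.mean():.3f} ms')
for p in (50, 90, 95, 99, 99.9):
    print(f'time_{p}: {np.percentile(latencies_ms, p):.3f} ms')
```
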
#### Metrics

| Name                                                | Unit      | Description                                                              |
|-----------------------------------------------------|-----------|--------------------------------------------------------------------------|
| ort-inference/{precision}_{model}_time              | time (ms) | The mean latency to execute one batch of inference.                      |
| ort-inference/{precision}_{model}_time_{percentile} | time (ms) | The {percentile}th percentile latency to execute one batch of inference. |

### `gpu-burn`

#### Introduction

Multi-GPU CUDA stress test for GPU compute and memory utilization, performed by [gpu-burn](https://github.com/wilicc/gpu-burn).
Supports double-precision data types and the use of Tensor Cores.

#### Metrics

| Name                    | Unit     | Description                                                                        |
|-------------------------|----------|------------------------------------------------------------------------------------|
| gpu-burn/time           | time (s) | The runtime of the gpu-burn test.                                                  |
| gpu-burn/gpu_[0-9]_pass | yes/no   | The result of the gpu-burn test for each GPU (1: pass, 0: fail).                   |
| gpu-burn/abort          | yes/no   | Whether the gpu-burn test aborted before returning GPU results (1: yes, 0: no).    |

### `cpu-hpl`

#### Introduction

HPL (High-Performance Linpack) evaluates compute throughput by solving dense linear systems in double-precision arithmetic,
performed by the [High-Performance Linpack Benchmark for Distributed-Memory Computers](https://netlib.org/benchmark/hpl/).

#### Metrics

| Name               | Unit               | Description                                                               |
|--------------------|--------------------|---------------------------------------------------------------------------|
| cpu-hpl/tests_pass |                    | HPL completed running and correctness test has passed (1: pass, 0: fail). |
| cpu-hpl/throughput | bandwidth (GFlops) | Compute throughput.                                                       |
| cpu-hpl/time       | time (s)           | Time elapsed during HPL run.                                              |

### `gpu-hpcg`

#### Introduction

Measure GPU HPCG performance using `run_rochpcg`, which wraps `rocHPCG` execution and emits rocHPCG-native summary output.

When collecting multiple MPI scales or problem sizes, use separate benchmark section names such as
`gpu-hpcg:ranks8` and `gpu-hpcg:ranks4` instead of placing multiple `modes` under one `gpu-hpcg` section.

#### Metrics

| Name                                  | Unit               | Description                                          |
|---------------------------------------|--------------------|------------------------------------------------------|
| gpu-hpcg/final_gflops                 | FLOPS (GFLOPS)     | Final rocHPCG score.                                 |
| gpu-hpcg/final_bandwidth              | bandwidth (GB/s)   | Final rocHPCG bandwidth.                             |
| gpu-hpcg/final_gflops_per_process     | FLOPS (GFLOPS)     | Final rocHPCG score per process.                     |
| gpu-hpcg/final_bandwidth_per_process  | bandwidth (GB/s)   | Final rocHPCG bandwidth per process.                 |
| gpu-hpcg/ddot_gflops                  | FLOPS (GFLOPS)     | DDOT throughput.                                     |
| gpu-hpcg/ddot_bandwidth               | bandwidth (GB/s)   | DDOT bandwidth.                                      |
| gpu-hpcg/ddot_gflops_per_process      | FLOPS (GFLOPS)     | DDOT throughput per process.                         |
| gpu-hpcg/ddot_bandwidth_per_process   | bandwidth (GB/s)   | DDOT bandwidth per process.                          |
| gpu-hpcg/waxpby_gflops                | FLOPS (GFLOPS)     | WAXPBY throughput.                                   |
| gpu-hpcg/waxpby_bandwidth             | bandwidth (GB/s)   | WAXPBY bandwidth.                                    |
| gpu-hpcg/waxpby_gflops_per_process    | FLOPS (GFLOPS)     | WAXPBY throughput per process.                       |
| gpu-hpcg/waxpby_bandwidth_per_process | bandwidth (GB/s)   | WAXPBY bandwidth per process.                        |
| gpu-hpcg/spmv_gflops                  | FLOPS (GFLOPS)     | SpMV throughput.                                     |
| gpu-hpcg/spmv_bandwidth               | bandwidth (GB/s)   | SpMV bandwidth.                                      |
| gpu-hpcg/spmv_gflops_per_process      | FLOPS (GFLOPS)     | SpMV throughput per process.                         |
| gpu-hpcg/spmv_bandwidth_per_process   | bandwidth (GB/s)   | SpMV bandwidth per process.                          |
| gpu-hpcg/mg_gflops                    | FLOPS (GFLOPS)     | MG throughput.                                       |
| gpu-hpcg/mg_bandwidth                 | bandwidth (GB/s)   | MG bandwidth.                                        |
| gpu-hpcg/mg_gflops_per_process        | FLOPS (GFLOPS)     | MG throughput per process.                           |
| gpu-hpcg/mg_bandwidth_per_process     | bandwidth (GB/s)   | MG bandwidth per process.                            |
| gpu-hpcg/total_gflops                 | FLOPS (GFLOPS)     | Aggregate rocHPCG throughput.                        |
| gpu-hpcg/total_bandwidth              | bandwidth (GB/s)   | Aggregate rocHPCG bandwidth.                         |
| gpu-hpcg/total_gflops_per_process     | FLOPS (GFLOPS)     | Aggregate rocHPCG throughput per process.            |
| gpu-hpcg/total_bandwidth_per_process  | bandwidth (GB/s)   | Aggregate rocHPCG bandwidth per process.             |
| gpu-hpcg/setup_time                   | time (s)           | Setup phase duration.                                |
| gpu-hpcg/optimization_time            | time (s)           | Optimization phase duration.                         |
| gpu-hpcg/total_time                   | time (s)           | Total runtime.                                       |
| gpu-hpcg/is_valid                     |                    | Run validity inferred from rocHPCG invalid markers.  |
| gpu-hpcg/local_domain_x               |                    | Local domain size in x dimension.                    |
| gpu-hpcg/local_domain_y               |                    | Local domain size in y dimension.                    |
| gpu-hpcg/local_domain_z               |                    | Local domain size in z dimension.                    |
| gpu-hpcg/global_domain_x              |                    | Global domain size in x dimension.                   |
| gpu-hpcg/global_domain_y              |                    | Global domain size in y dimension.                   |
| gpu-hpcg/global_domain_z              |                    | Global domain size in z dimension.                   |
| gpu-hpcg/process_domain_x             |                    | Process topology in x dimension.                     |
| gpu-hpcg/process_domain_y             |                    | Process topology in y dimension.                     |
| gpu-hpcg/process_domain_z             |                    | Process topology in z dimension.                     |

### `cpu-stream`

#### Introduction

Measure the memory bandwidth and computation rate for simple vector kernels,
performed by the [University of Virginia STREAM benchmark](https://www.cs.virginia.edu/stream/ref.html).

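An illustrative numpy rendering of the four kernels and the bandwidth arithmetic behind the metrics (the real benchmark runs the C STREAM code with OpenMP; numpy temporaries make this sketch unsuitable for actual measurement):

```python
import time
import numpy as np

n = 10_000_000
a, b, c = np.ones(n), np.full(n, 2.0), np.zeros(n)
scalar = 3.0

# Bytes moved per iteration: copy/scale touch two arrays, add/triad touch three
# (8 bytes per float64 element).
kernels = {
    'copy':  (lambda: np.copyto(c, a),               2 * 8 * n),  # c = a
    'scale': (lambda: np.multiply(c, scalar, out=b), 2 * 8 * n),  # b = scalar*c
    'add':   (lambda: np.add(a, b, out=c),           3 * 8 * n),  # c = a + b
    'triad': (lambda: np.add(b, scalar * c, out=a),  3 * 8 * n),  # a = b + scalar*c
}
for name, (fn, nbytes) in kernels.items():
    t0 = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - t0
    print(f'{name}_throughput: {nbytes / elapsed / 1e6:.0f} MB/s')
```
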
#### Metrics

| Name                                                     | Unit             | Description                                                    |
|----------------------------------------------------------|------------------|----------------------------------------------------------------|
| cpu-stream/threads                                       |                  | Number of threads used for the test, determined by core count. |
| cpu-stream/['copy', 'scale', 'add', 'triad']\_throughput | bandwidth (MB/s) | Memory throughput of the designated kernel operation.          |
| cpu-stream/['copy', 'scale', 'add', 'triad']\_time_avg   | time (s)         | Average elapsed time over all iterations.                      |
| cpu-stream/['copy', 'scale', 'add', 'triad']\_time_min   | time (s)         | Minimum elapsed time over all iterations.                      |
| cpu-stream/['copy', 'scale', 'add', 'triad']\_time_max   | time (s)         | Maximum elapsed time over all iterations.                      |

## Communication Benchmarks

### `cpu-memory-bw-latency`

#### Introduction

Measure the memory copy bandwidth and latency across different CPU NUMA nodes,
performed by the [Intel MLC Tool](https://www.intel.com/content/www/us/en/developer/articles/tool/intelr-memory-latency-checker.html).

#### Metrics

| Name                                                                    | Unit             | Description                                                         |
|-------------------------------------------------------------------------|------------------|---------------------------------------------------------------------|
| cpu-memory-bw-latency/mem\_bandwidth\_matrix\_numa\_[0-9]+\_[0-9]+\_bw  | bandwidth (MB/s) | Memory bandwidth from the first indexed NUMA node to the second.    |
| cpu-memory-bw-latency/mem\_bandwidth\_matrix\_numa\_[0-9]+\_[0-9]+\_lat | time (ns)        | Memory latency from the first indexed NUMA node to the second.      |
| cpu-memory-bw-latency/mem\_max\_bandwidth\_all\_reads\_bw               | bandwidth (MB/s) | Whole-CPU maximum memory bandwidth, full read.                      |
| cpu-memory-bw-latency/mem\_max\_bandwidth\_3_1\_reads-writes\_bw        | bandwidth (MB/s) | Whole-CPU maximum memory bandwidth, read : write = 3 : 1.           |
| cpu-memory-bw-latency/mem\_max\_bandwidth\_2_1\_reads-writes\_bw        | bandwidth (MB/s) | Whole-CPU maximum memory bandwidth, read : write = 2 : 1.           |
| cpu-memory-bw-latency/mem\_max\_bandwidth\_1_1\_reads-writes\_bw        | bandwidth (MB/s) | Whole-CPU maximum memory bandwidth, read : write = 1 : 1.           |
| cpu-memory-bw-latency/mem\_max\_bandwidth\_stream-triad\_like\_bw       | bandwidth (MB/s) | Whole-CPU maximum memory bandwidth, with stream-triad like pattern. |

### `mem-bw`

#### Introduction

Measure the memory copy bandwidth across PCI-e and memory copy bandwidth between GPUs,
performed by [NVIDIA](https://github.com/NVIDIA/cuda-samples/tree/master/Samples/1_Utilities/bandwidthTest)
or [AMD](https://github.com/ROCm-Developer-Tools/HIP/tree/master/samples/1_Utils/hipBusBandwidth) bandwidth test tool.
The bandwidthTest sample was out-of-date and has been removed as of the CUDA Samples 12.9 release. For up-to-date bandwidth measurements, refer instead to the nvbandwidth benchmark.

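A rough sketch of a host-to-device bandwidth measurement in the same spirit (assuming PyTorch with CUDA and pinned host memory; buffer size and iteration count are illustrative):

```python
import torch

nbytes = 256 * 1024 * 1024
host = torch.empty(nbytes, dtype=torch.uint8, pin_memory=True)
dev = torch.empty(nbytes, dtype=torch.uint8, device='cuda')

dev.copy_(host, non_blocking=True)  # warm up
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 20
start.record()
for _ in range(iters):
    dev.copy_(host, non_blocking=True)  # h2d copy
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1e3 / iters
print(f'h2d_bw: {nbytes / seconds / 1e9:.1f} GB/s')
```
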
#### Metrics

| Name          | Unit             | Description                      |
|---------------|------------------|----------------------------------|
| mem-bw/h2d_bw | bandwidth (GB/s) | Host to device copy bandwidth.   |
| mem-bw/d2h_bw | bandwidth (GB/s) | Device to host copy bandwidth.   |
| mem-bw/d2d_bw | bandwidth (GB/s) | Device to device copy bandwidth. |

### `gpu-copy-bw`

#### Introduction

Measure the memory copy bandwidth performed by the GPU SM/DMA engine, including device-to-host, host-to-device and device-to-device copies.
For measurements of peer-to-peer communication performance between AMD GPUs, GPU memory buffers are allocated in `hipDeviceMallocUncached` (previously `hipDeviceMallocFinegrained`) mode to maximize performance.

#### Metrics

| Name                                                        | Unit             | Description                                                                                                                              |
|-------------------------------------------------------------|------------------|------------------------------------------------------------------------------------------------------------------------------------------|
| cpu\_to\_gpu[0-9]+\_by\_(sm\|dma)\_under\_numa[0-9]+\_bw    | bandwidth (GB/s) | The unidirectional bandwidth of one GPU reading one NUMA node's host memory using DMA engine or GPU SM.                                  |
| gpu[0-9]+\_to\_cpu\_by\_(sm\|dma)\_under\_numa[0-9]+\_bw    | bandwidth (GB/s) | The unidirectional bandwidth of one GPU writing one NUMA node's host memory using DMA engine or GPU SM.                                  |
| gpu[0-9]+\_to\_gpu[0-9]+\_by\_(sm\|dma)\_bw                 | bandwidth (GB/s) | The unidirectional bandwidth of one GPU reading or writing its own memory using DMA engine or GPU SM.                                    |
| gpu[0-9]+\_to\_gpu[0-9]+\_(read\|write)\_by\_(sm\|dma)\_bw  | bandwidth (GB/s) | The unidirectional bandwidth of one GPU reading or writing peer GPU's memory using DMA engine or GPU SM with peer communication enabled. |
| cpu\_and\_gpu[0-9]+\_by\_(sm\|dma)\_under\_numa[0-9]+\_bw   | bandwidth (GB/s) | The bidirectional bandwidth of one GPU reading and writing one NUMA node's host memory using DMA engine or GPU SM.                       |
| gpu[0-9]+\_and\_cpu\_by\_(sm\|dma)\_under\_numa[0-9]+\_bw   | bandwidth (GB/s) | Same as above, but generated by --dtoh --bidirectional.                                                                                  |
| gpu[0-9]+\_and\_gpu[0-9]+\_by\_(sm\|dma)\_bw                | bandwidth (GB/s) | The bidirectional bandwidth of one GPU reading and writing its own memory using DMA engine or GPU SM.                                    |
| gpu[0-9]+\_and\_gpu[0-9]+\_(read\|write)\_by\_(sm\|dma)\_bw | bandwidth (GB/s) | The bidirectional bandwidth of one GPU reading and writing peer GPU's memory using DMA engine or GPU SM with peer communication enabled. |
| gpu[0-9]+\_to\_gpu\_all\_write\_by\_sm\_bw                  | bandwidth (GB/s) | The unidirectional bandwidth of one GPU writing all peer GPUs' memory using GPU SM with peer communication enabled.                      |
| gpu\_all\_to\_gpu[0-9]+\_write\_by\_sm\_bw                  | bandwidth (GB/s) | The unidirectional bandwidth of all peer GPUs writing one GPU's memory using GPU SM with peer communication enabled.                     |
| gpu\_all\_to\_gpu\_all\_write\_by\_sm\_bw                   | bandwidth (GB/s) | The unidirectional bandwidth of all peer GPUs writing all peer GPUs' memory using GPU SM with peer communication enabled.                |

### `gpu-stream`

#### Introduction

Measure the memory bandwidth of the GPU using the BabelStream (`hip-stream`) backend.
The benchmark executes copy, scale, add, triad, and dot operations.
The `array_size` parameter represents the number of elements.
Each benchmark run measures the GPU visible to the current process.

#### Metrics

| Metric Name                                                       | Unit             | Description                                                                                    |
|-------------------------------------------------------------------|------------------|------------------------------------------------------------------------------------------------|
| STREAM\_INIT\_[float\|double]\_array\_[0-9]+\_bw                 | bandwidth (GB/s) | Initialization phase bandwidth for the current benchmark run and one array size.              |
| STREAM\_INIT\_[float\|double]\_array\_[0-9]+\_time               | time (s)         | Initialization phase runtime for the current benchmark run and one array size.                |
| STREAM\_READ\_[float\|double]\_array\_[0-9]+\_bw                 | bandwidth (GB/s) | Read phase bandwidth for the current benchmark run and one array size.                        |
| STREAM\_READ\_[float\|double]\_array\_[0-9]+\_time               | time (s)         | Read phase runtime for the current benchmark run and one array size.                          |
| STREAM\_COPY\_[float\|double]\_array\_[0-9]+\_bw                 | bandwidth (GB/s) | Maximum copy bandwidth for the current benchmark run and one array size.                       |
| STREAM\_COPY\_[float\|double]\_array\_[0-9]+\_time\_min          | time (s)         | Minimum copy runtime for the current benchmark run and one array size.                        |
| STREAM\_COPY\_[float\|double]\_array\_[0-9]+\_time\_max          | time (s)         | Maximum copy runtime for the current benchmark run and one array size.                        |
| STREAM\_COPY\_[float\|double]\_array\_[0-9]+\_time\_avg          | time (s)         | Average copy runtime for the current benchmark run and one array size.                        |
| STREAM\_MUL\_[float\|double]\_array\_[0-9]+\_bw                  | bandwidth (GB/s) | Maximum mul bandwidth for the current benchmark run and one array size.                       |
| STREAM\_MUL\_[float\|double]\_array\_[0-9]+\_time\_min           | time (s)         | Minimum mul runtime for the current benchmark run and one array size.                         |
| STREAM\_MUL\_[float\|double]\_array\_[0-9]+\_time\_max           | time (s)         | Maximum mul runtime for the current benchmark run and one array size.                         |
| STREAM\_MUL\_[float\|double]\_array\_[0-9]+\_time\_avg           | time (s)         | Average mul runtime for the current benchmark run and one array size.                         |
| STREAM\_ADD\_[float\|double]\_array\_[0-9]+\_bw                  | bandwidth (GB/s) | Maximum add bandwidth for the current benchmark run and one array size.                        |
| STREAM\_ADD\_[float\|double]\_array\_[0-9]+\_time\_min           | time (s)         | Minimum add runtime for the current benchmark run and one array size.                         |
| STREAM\_ADD\_[float\|double]\_array\_[0-9]+\_time\_max           | time (s)         | Maximum add runtime for the current benchmark run and one array size.                         |
| STREAM\_ADD\_[float\|double]\_array\_[0-9]+\_time\_avg           | time (s)         | Average add runtime for the current benchmark run and one array size.                         |
| STREAM\_TRIAD\_[float\|double]\_array\_[0-9]+\_bw                | bandwidth (GB/s) | Maximum triad bandwidth for the current benchmark run and one array size.                      |
| STREAM\_TRIAD\_[float\|double]\_array\_[0-9]+\_time\_min         | time (s)         | Minimum triad runtime for the current benchmark run and one array size.                       |
| STREAM\_TRIAD\_[float\|double]\_array\_[0-9]+\_time\_max         | time (s)         | Maximum triad runtime for the current benchmark run and one array size.                       |
| STREAM\_TRIAD\_[float\|double]\_array\_[0-9]+\_time\_avg         | time (s)         | Average triad runtime for the current benchmark run and one array size.                       |
| STREAM\_DOT\_[float\|double]\_array\_[0-9]+\_bw                  | bandwidth (GB/s) | Maximum dot bandwidth for the current benchmark run and one array size.                        |
| STREAM\_DOT\_[float\|double]\_array\_[0-9]+\_time\_min           | time (s)         | Minimum dot runtime for the current benchmark run and one array size.                         |
| STREAM\_DOT\_[float\|double]\_array\_[0-9]+\_time\_max           | time (s)         | Maximum dot runtime for the current benchmark run and one array size.                         |
| STREAM\_DOT\_[float\|double]\_array\_[0-9]+\_time\_avg           | time (s)         | Average dot runtime for the current benchmark run and one array size.                         |

`gpu-stream` reports per-phase and per-function metrics; the `_ratio` and `block_*` metrics are no longer reported.
Bandwidth metrics are converted from BabelStream's `max_mbytes_per_sec` using `GB/s = MB/s / 1000`.

### `ib-loopback`

#### Introduction

Measure the InfiniBand loopback verbs bandwidth, performed by
[OFED performance tests](https://github.com/linux-rdma/perftest/tree/7504ce48ac396a02f4d00de359257b2cb8458f06).

#### Metrics

| Name                                | Unit             | Description                                                  |
|-------------------------------------|------------------|--------------------------------------------------------------|
| ib-loopback/ib_write_bw_${msg_size} | bandwidth (GB/s) | InfiniBand loopback write bandwidth with given message size. |
| ib-loopback/ib_read_bw_${msg_size}  | bandwidth (GB/s) | InfiniBand loopback read bandwidth with given message size.  |
| ib-loopback/ib_send_bw_${msg_size}  | bandwidth (GB/s) | InfiniBand loopback send bandwidth with given message size.  |

### `nccl-bw` / `rccl-bw`

#### Introduction

Measure the performance of NCCL/RCCL operations under multi-node traffic patterns,
performed by [nccl-tests](https://github.com/NVIDIA/nccl-tests/tree/44df0bf010dcc95e840ca0fb7466c67cff3f1f0f)
or [rccl-tests](https://github.com/ROCmSoftwarePlatform/rccl-tests/tree/dc1ad4853d7ec738387d42a75a58a98d7af00c7b).
The following operations are currently supported: allreduce, allgather, broadcast, reduce, reducescatter, alltoall.
Both in-place and out-of-place measurements are supported.

The following traffic patterns are supported:
* `all-nodes`, validate the NCCL/RCCL performance across all VM nodes simultaneously.
* `pair-wise`, validate the NCCL/RCCL performance across VM pairs with all possible combinations in parallel.
* `k-batch`, validate the NCCL/RCCL performance across VM groups with a specified batch scale.
* `topo-aware`, validate the NCCL/RCCL performance across VM pairs with different distances/hops as a quick test.

#### Metrics

| Name                                   | Unit             | Description                                                 |
|----------------------------------------|------------------|-------------------------------------------------------------|
| nccl-bw/${operation}_${msg_size}_time  | time (us)        | NCCL operation latency with given message size.             |
| nccl-bw/${operation}_${msg_size}_algbw | bandwidth (GB/s) | NCCL operation algorithm bandwidth with given message size. |
| nccl-bw/${operation}_${msg_size}_busbw | bandwidth (GB/s) | NCCL operation bus bandwidth with given message size.       |
| rccl-bw/${operation}_${msg_size}_time  | time (us)        | RCCL operation latency with given message size.             |
| rccl-bw/${operation}_${msg_size}_algbw | bandwidth (GB/s) | RCCL operation algorithm bandwidth with given message size. |
| rccl-bw/${operation}_${msg_size}_busbw | bandwidth (GB/s) | RCCL operation bus bandwidth with given message size.       |

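The `algbw` and `busbw` values follow the nccl-tests conventions: algorithm bandwidth is the message size divided by the operation time, and bus bandwidth rescales it by a per-operation factor. A sketch of that arithmetic (the factors below follow the nccl-tests performance documentation):

```python
def algbw_gbs(msg_bytes, time_us):
    """Algorithm bandwidth in GB/s: message size over operation time."""
    return msg_bytes / (time_us * 1e3)

def busbw_gbs(msg_bytes, time_us, op, n_ranks):
    """Bus bandwidth in GB/s: algbw scaled by a per-operation factor."""
    factors = {
        'allreduce': 2 * (n_ranks - 1) / n_ranks,
        'allgather': (n_ranks - 1) / n_ranks,
        'reducescatter': (n_ranks - 1) / n_ranks,
        'alltoall': (n_ranks - 1) / n_ranks,
        'broadcast': 1.0,
        'reduce': 1.0,
    }
    return algbw_gbs(msg_bytes, time_us) * factors[op]

# e.g. an 8-rank allreduce of 1 GiB finishing in 20 ms:
print(f'{busbw_gbs(1 << 30, 20_000, "allreduce", 8):.1f} GB/s')
```
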
If mpi mode is enabled and a traffic pattern is specified, the metric pattern changes to `nccl-bw/${operation}_${serial_index}_${parallel_index}:${msg_size}_time`, where
- `serial_index` is the index of the host group in the serial execution sequence.
- `parallel_index` is the index of the host list within the parallel batch.

### `tcp-connectivity`

#### Introduction

Test the TCP connectivity between the current node and the nodes in the hostfile,
performed by [tcping](https://github.com/zhengxiaowai/tcping).

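A minimal sketch of a TCP connect-latency probe in the spirit of tcping (assuming plain sockets; the host and port are placeholders):

```python
import socket
import time

def tcp_ping(host, port=22, timeout=1.0):
    """Return the TCP connect latency in ms, or None if the connection failed."""
    t0 = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - t0) * 1e3
    except OSError:
        return None

times = [t for t in (tcp_ping('10.0.0.2') for _ in range(10)) if t is not None]
if times:
    print(f'success_rate: {len(times) / 10:.2f}, '
          f'time_min: {min(times):.2f} ms, '
          f'time_max: {max(times):.2f} ms, '
          f'time_avg: {sum(times) / len(times):.2f} ms')
```
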
#### Metrics

| Metrics                                         | Unit      | Description                                                                                      |
|-------------------------------------------------|-----------|----------------------------------------------------------------------------------------------------|
| tcp-connectivity/${hostname/ip}_successed_count | count     | Number of successful TCP connections between the current node and the other node.                   |
| tcp-connectivity/${hostname/ip}_failed_count    | count     | Number of failed TCP connections between the current node and the other node.                       |
| tcp-connectivity/${hostname/ip}_success_rate    |           | Success rate (successful/total) of TCP connections between the current node and the other node.     |
| tcp-connectivity/${hostname/ip}_time_min        | time (ms) | Minimum latency of TCP connections between the current node and the other node.                     |
| tcp-connectivity/${hostname/ip}_time_max        | time (ms) | Maximum latency of TCP connections between the current node and the other node.                     |
| tcp-connectivity/${hostname/ip}_time_avg        | time (ms) | Average latency of TCP connections between the current node and the other node.                     |

### `gpcnet-network-test` / `gpcnet-network-load-test`

#### Introduction

Distributed test that measures global network performance and congestion, performed by [GPCNET](https://github.com/netbench/GPCNET).

gpcnet-network-test: full-system network tests in random and natural ring, alltoall, and allreduce patterns; requires at least 2 nodes.

gpcnet-network-load-test: selected full-system network tests run alongside four congestors to measure network congestion or contention; requires at least 10 nodes.

 - Supported network tests: RR Two-sided Lat (8 B), RR Get Lat (8 B), RR Two-sided BW (131072 B), RR Put BW (131072 B), RR Two-sided BW+Sync (131072 B), Nat Two-sided BW (131072 B), Multiple Allreduce (8 B), Multiple Alltoall (4096 B)
 - Supported congestors: Alltoall (4096 B), Two-sided Incast (4096 B), Put Incast (4096 B), Get Bcast (4096 B)

#### Metrics

| Metrics                                                 | Unit                   | Description                                                                                                                                                  |
|---------------------------------------------------------|------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| gpcnet-network-test/rr_two-sided_lat_${stat}            | time (us)              | Statistical values (min, max, avg, 99%, 99.9%) obtained across all nodes for the 'random ring communication pattern two-sided latency' test.                 |
| gpcnet-network-test/rr_two-sided+sync_bw_${stat}        | bandwidth (MiB/s/rank) | Statistical values (min, max, avg, 99%, 99.9%) obtained across all nodes for the 'random ring communication pattern two-sided bandwidth with barrier' test.  |
| gpcnet-network-test/multiple_allreduce_time_${stat}     | time (us)              | Statistical values (min, max, avg, 99%, 99.9%) obtained across all nodes for the 'multiple allreduce bandwidth' test.                                        |
| gpcnet-network-test/rr_get_lat_${stat}                  | time (us)              | Statistical values (min, max, avg, 99%, 99.9%) obtained across all nodes for the 'RR Get Lat (8 B)' test.                                                    |
| gpcnet-network-test/rr_two-sided_bw_${stat}             | bandwidth (MiB/s/rank) | Statistical values (min, max, avg, 99%, 99.9%) obtained across all nodes for the 'RR Two-sided BW (131072 B)' test.                                          |
| gpcnet-network-test/nat_two-sided_bw_${stat}            | bandwidth (MiB/s/rank) | Statistical values (min, max, avg, 99%, 99.9%) obtained across all nodes for the 'Nat Two-sided BW (131072 B)' test.                                         |
| gpcnet-network-test/multiple_alltoall_bw_${stat}        | bandwidth (MiB/s/rank) | Statistical values (min, max, avg, 99%, 99.9%) obtained across all nodes for the 'Multiple Alltoall (4096 B)' test.                                          |
| gpcnet-network-load-test/rr_two-sided_lat_x_${stat}     | factor (x)             | Congestion impact factor of the 'RR Two-sided Lat' network test.                                                                                             |
| gpcnet-network-load-test/rr_two-sided+sync_bw_x_${stat} | factor (x)             | Congestion impact factor of the 'RR Two-sided BW+Sync' network test.                                                                                         |
| gpcnet-network-load-test/multiple_allreduce_x_${stat}   | factor (x)             | Congestion impact factor of the 'Multiple Allreduce' network test.                                                                                           |

### `ib-traffic`

#### Introduction

Measure the InfiniBand performance under multi-node traffic patterns.

The direction between client and server can be 'cpu-to-cpu'/'gpu-to-gpu'/'gpu-to-cpu'/'cpu-to-gpu'.

The traffic pattern is defined in a config file, with pre-defined one-to-many, many-to-one and all-to-all patterns.
Each row in the config is one round, and all pairs of nodes in a row run the IB command simultaneously.

Besides the above three patterns, ib-traffic also supports a topology-aware traffic pattern. To run ib-traffic with the
topology-aware pattern, the user needs to specify 3 required (and 2 optional) parameters in the YAML config file:
   - --pattern	 **topo-aware**
   - --ibstat	 **path to ibstat output**
   - --ibnetdiscover	 **path to ibnetdiscover output**
   - --min_dist	 **minimum distance of VM pairs (optional, default 2)**
   - --max_dist	 **maximum distance of VM pairs (optional, default 6)**

Each row in the config file contains all VM pairs with a fixed distance (number of hops). By default, the 1st, 2nd, and 3rd
rows contain all VM pairs with topology distances of 2, 4, and 6, respectively.

#### Metrics

| Metrics                                                                                     | Unit             | Description                                                                                                                                                                                                                                                                                                                 |
|---------------------------------------------------------------------------------------------|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ib-traffic/ib\_write\_bw\_${msg_size}\_${direction}\_${line}\_${pair}:${server}\_${client}  | bandwidth (GB/s) | The max bandwidth of perftest (ib_write_bw, ib_send_bw, ib_read_bw) using ${msg_size} with ${direction}('cpu-to-cpu'/'gpu-to-gpu'/'gpu-to-cpu'/'cpu-to-gpu') run between the ${pair}<sup>th</sup> node pair in the ${line}<sup>th</sup> line of the config, ${server} and ${client} are the hostname of server and client.  |
| ib-traffic/ib\_write\_lat\_${msg_size}\_${direction}\_${line}\_${pair}:${server}\_${client} | time (us)        | The max latency of perftest (ib_write_lat, ib_send_lat, ib_read_lat) using ${msg_size} with ${direction}('cpu-to-cpu'/'gpu-to-gpu'/'gpu-to-cpu'/'cpu-to-gpu') run between the ${pair}<sup>th</sup> node pair in the ${line}<sup>th</sup> line of the config, ${server} and ${client} are the hostname of server and client. |

### `nvbandwidth`

#### Introduction

Measures bandwidth and latency for various memcpy patterns across different links using copy engine or kernel copy methods,
performed by [nvbandwidth](https://github.com/NVIDIA/nvbandwidth).

#### Metrics

| Metrics                                                 | Unit                   | Description                                                                                                                                                                |
|---------------------------------------------------------|------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| host_to_device_memcpy_ce_cpu[0-9]_gpu[0-9]_bw      | GB/s                | Host to device CE memcpy using cuMemcpyAsync                      |
| host_to_device_memcpy_ce_sum_bw                    | GB/s                | Sum of the output matrix                                           |
| device_to_host_memcpy_ce_cpu[0-9]_gpu[0-9]_bw      | GB/s                | Device to host CE memcpy using cuMemcpyAsync                      |
| device_to_host_memcpy_ce_sum_bw                    | GB/s                | Sum of the output matrix                                           |
| host_to_device_bidirectional_memcpy_ce_cpu[0-9]_gpu[0-9]_bw | GB/s      | A host to device copy is measured while a device to host copy is run simultaneously. Only the host to device copy bandwidth is reported. |
| host_to_device_bidirectional_memcpy_ce_sum_bw      | GB/s                | Sum of the output matrix                                           |
| device_to_host_bidirectional_memcpy_ce_cpu[0-9]_gpu[0-9]_bw | GB/s      | A device to host copy is measured while a host to device copy is run simultaneously. Only the device to host copy bandwidth is reported. |
| device_to_host_bidirectional_memcpy_ce_sum_bw      | GB/s                | Sum of the output matrix                                           |
| device_to_device_memcpy_read_ce_gpu[0-9]_gpu[0-9]_bw | GB/s               | Measures bandwidth of cuMemcpyAsync between each pair of accessible peers. Read tests launch a copy from the peer device to the target using the target's context. |
| device_to_device_memcpy_read_ce_sum_bw             | GB/s                | Sum of the output matrix                                           |
| device_to_device_memcpy_write_ce_gpu[0-9]_gpu[0-9]_bw | GB/s              | Measures bandwidth of cuMemcpyAsync between each pair of accessible peers. Write tests launch a copy from the target device to the peer using the target's context. |
| device_to_device_memcpy_write_ce_sum_bw            | GB/s                | Sum of the output matrix                                           |
| device_to_device_bidirectional_memcpy_read_ce_gpu[0-9]_gpu[0-9]_bw | GB/s | Measures bandwidth of cuMemcpyAsync between each pair of accessible peers. A copy in the opposite direction of the measured copy is run simultaneously but not measured. Read tests launch a copy from the peer device to the target using the target's context. |
| device_to_device_bidirectional_memcpy_read_ce_sum_bw | GB/s               | Sum of the output matrix                                           |
| device_to_device_bidirectional_memcpy_write_ce_gpu[0-9]_gpu[0-9]_bw | GB/s | Measures bandwidth of cuMemcpyAsync between each pair of accessible peers. A copy in the opposite direction of the measured copy is run simultaneously but not measured. Write tests launch a copy from the target device to the peer using the target's context. |
| device_to_device_bidirectional_memcpy_write_ce_sum_bw | GB/s               | Sum of the output matrix                                           |
| all_to_host_memcpy_ce_cpu[0-9]_gpu[0-9]_bw         | GB/s                | Measures bandwidth of cuMemcpyAsync between a single device and the host while simultaneously running copies from all other devices to the host. |
| all_to_host_memcpy_ce_sum_bw                       | GB/s                | Sum of the output matrix                                           |
| all_to_host_bidirectional_memcpy_ce_cpu[0-9]_gpu[0-9]_bw | GB/s              | A device to host copy is measured while a host to device copy is run simultaneously. Only the device to host copy bandwidth is reported. All other devices generate simultaneous host to device and device to host interfering traffic. |
| all_to_host_bidirectional_memcpy_ce_sum_bw         | GB/s                | Sum of the output matrix                                           |
| host_to_all_memcpy_ce_cpu[0-9]_gpu[0-9]_bw         | GB/s                | Measures bandwidth of cuMemcpyAsync between the host to a single device while simultaneously running copies from the host to all other devices. |
| host_to_all_memcpy_ce_sum_bw                       | GB/s                | Sum of the output matrix                                           |
| host_to_all_bidirectional_memcpy_ce_cpu[0-9]_gpu[0-9]_bw | GB/s              | A host to device copy is measured while a device to host copy is run simultaneously. Only the host to device copy bandwidth is reported. All other devices generate simultaneous host to device and device to host interfering traffic. |
| host_to_all_bidirectional_memcpy_ce_sum_bw         | GB/s                | Sum of the output matrix                                           |
| all_to_one_write_ce_gpu[0-9]_gpu[0-9]_bw           | GB/s                | Measures the total bandwidth of copies from all accessible peers to a single device, for each device. Bandwidth is reported as the total inbound bandwidth for each device. Write tests launch a copy from the target device to the peer using the target's context. |
| all_to_one_write_ce_sum_bw                         | GB/s                | Sum of the output matrix                                           |
| all_to_one_read_ce_gpu[0-9]_gpu[0-9]_bw            | GB/s                | Measures the total bandwidth of copies from all accessible peers to a single device, for each device. Bandwidth is reported as the total outbound bandwidth for each device. Read tests launch a copy from the peer device to the target using the target's context. |
| all_to_one_read_ce_sum_bw                          | GB/s                | Sum of the output matrix                                           |
| one_to_all_write_ce_gpu[0-9]_gpu[0-9]_bw           | GB/s                | Measures the total bandwidth of copies from a single device to all accessible peers, for each device. Bandwidth is reported as the total outbound bandwidth for each device. Write tests launch a copy from the target device to the peer using the target's context. |
| one_to_all_write_ce_sum_bw                         | GB/s                | Sum of the output matrix                                           |
| one_to_all_read_ce_gpu[0-9]_gpu[0-9]_bw            | GB/s                | Measures the total bandwidth of copies from a single device to all accessible peers, for each device. Bandwidth is reported as the total inbound bandwidth for each device. Read tests launch a copy from the peer device to the target using the target's context. |
| one_to_all_read_ce_sum_bw                          | GB/s                | Sum of the output matrix                                           |
| host_to_device_memcpy_sm_cpu[0-9]_gpu[0-9]_bw      | GB/s                | Host to device SM memcpy using a copy kernel                      |
| host_to_device_memcpy_sm_sum_bw                    | GB/s                | Sum of the output matrix                                           |
| device_to_host_memcpy_sm_cpu[0-9]_gpu[0-9]_bw      | GB/s                | Device to host SM memcpy using a copy kernel                      |
| device_to_host_memcpy_sm_sum_bw                    | GB/s                | Sum of the output matrix                                           |
| device_to_device_memcpy_read_sm_gpu[0-9]_gpu[0-9]_bw | GB/s               | Measures bandwidth of a copy kernel between each pair of accessible peers. Read tests launch a copy from the peer device to the target using the target's context. |
| device_to_device_memcpy_read_sm_sum_bw             | GB/s                | Sum of the output matrix                                           |
| device_to_device_memcpy_write_sm_gpu[0-9]_gpu[0-9]_bw | GB/s              | Measures bandwidth of a copy kernel between each pair of accessible peers. Write tests launch a copy from the target device to the peer using the target's context. |
| device_to_device_memcpy_write_sm_sum_bw            | GB/s                | Sum of the output matrix                                           |
| device_to_device_bidirectional_memcpy_read_sm_gpu[0-9]_gpu[0-9]_bw | GB/s | Measures bandwidth of a copy kernel between each pair of accessible peers. Copies are run in both directions between each pair, and the sum is reported. Read tests launch a copy from the peer device to the target using the target's context. |
| device_to_device_bidirectional_memcpy_read_sm_sum_bw | GB/s               | Sum of the output matrix                                           |
| device_to_device_bidirectional_memcpy_write_sm_gpu[0-9]_gpu[0-9]_bw | GB/s | Measures bandwidth of a copy kernel between each pair of accessible peers. Copies are run in both directions between each pair, and the sum is reported. Write tests launch a copy from the target device to the peer using the target's context. |
| device_to_device_bidirectional_memcpy_write_sm_sum_bw | GB/s               | Sum of the output matrix                                           |
| all_to_host_memcpy_sm_cpu[0-9]_gpu[0-9]_bw         | GB/s                | Measures bandwidth of a copy kernel between a single device and the host while simultaneously running copies from all other devices to the host. |
| all_to_host_memcpy_sm_sum_bw                       | GB/s                | Sum of the output matrix                                           |
| all_to_host_bidirectional_memcpy_sm_cpu[0-9]_gpu[0-9]_bw | GB/s              | A device to host bandwidth of a copy kernel is measured while a host to device copy is run simultaneously. Only the device to host copy bandwidth is reported. All other devices generate simultaneous host to device and device to host interfering traffic using copy kernels. |
| all_to_host_bidirectional_memcpy_sm_sum_bw         | GB/s                | Sum of the output matrix                                           |
| host_to_all_memcpy_sm_cpu[0-9]_gpu[0-9]_bw         | GB/s                | Measures bandwidth of a copy kernel between the host to a single device while simultaneously running copies from the host to all other devices. |
| host_to_all_memcpy_sm_sum_bw                       | GB/s                | Sum of the output matrix                                           |
| host_to_all_bidirectional_memcpy_sm_cpu[0-9]_gpu[0-9]_bw | GB/s              | A host to device bandwidth of a copy kernel is measured while a device to host copy is run simultaneously. Only the host to device copy bandwidth is reported. All other devices generate simultaneous host to device and device to host interfering traffic using copy kernels. |
| host_to_all_bidirectional_memcpy_sm_sum_bw         | GB/s                | Sum of the output matrix                                           |
| all_to_one_write_sm_gpu[0-9]_gpu[0-9]_bw           | GB/s                | Measures the total bandwidth of copies from all accessible peers to a single device, for each device. Bandwidth is reported as the total inbound bandwidth for each device. Write tests launch a copy from the target device to the peer using the target's context. |
| all_to_one_write_sm_sum_bw                         | GB/s                | Sum of the output matrix                                           |
| all_to_one_read_sm_gpu[0-9]_gpu[0-9]_bw            | GB/s                | Measures the total bandwidth of copies from all accessible peers to a single device, for each device. Bandwidth is reported as the total outbound bandwidth for each device. Read tests launch a copy from the peer device to the target using the target's context. |
| all_to_one_read_sm_sum_bw                          | GB/s                | Sum of the output matrix                                           |
| one_to_all_write_sm_gpu[0-9]_gpu[0-9]_bw           | GB/s                | Measures the total bandwidth of copies from a single device to all accessible peers, for each device. Bandwidth is reported as the total outbound bandwidth for each device. Write tests launch a copy from the target device to the peer using the target's context. |
| one_to_all_write_sm_sum_bw                         | GB/s                | Sum of the output matrix                                           |
| one_to_all_read_sm_gpu[0-9]_gpu[0-9]_bw            | GB/s                | Measures the total bandwidth of copies from a single device to all accessible peers, for each device. Bandwidth is reported as the total inbound bandwidth for each device. Read tests launch a copy from the peer device to the target using the target's context. |
| one_to_all_read_sm_sum_bw                          | GB/s                | Sum of the output matrix                                           |
| host_device_latency_sm_cpu[0-9]_gpu[0-9]_lat       | µs                  | Host - device SM copy latency using a ptr chase kernel            |
| host_device_latency_sm_sum_lat                     | µs                  | Sum of the output matrix                                           |
| device_to_device_latency_sm_gpu[0-9]_gpu[0-9]_lat  | µs                  | Measures latency of a pointer dereference operation between each pair of accessible peers. Memory is allocated on a GPU and is accessed by the peer GPU to determine latency. |
| device_to_device_latency_sm_sum_lat                | µs                  | Sum of the output matrix                                           |

## Computation-communication Benchmarks

### `computation-communication-overlap`

#### Introduction

Test the performance of a single node when communication and computation overlap.

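An illustrative sketch of the overlap pattern on separate CUDA streams (assuming PyTorch; a device-to-device copy stands in for the NCCL communication that the real benchmark issues):

```python
import torch

comm_stream = torch.cuda.Stream()
a = torch.randn(4096, 4096, device='cuda')
src = torch.randn(64 * 1024 * 1024, device='cuda')
dst = torch.empty_like(src)

torch.cuda.synchronize()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for _ in range(100):
    with torch.cuda.stream(comm_stream):
        dst.copy_(src, non_blocking=True)  # "communication" on a side stream
    torch.matmul(a, a)                     # computation on the default stream
end.record()
torch.cuda.synchronize()
print(f'matmul_time: {start.elapsed_time(end) / 100:.3f} ms per overlapped step')
```
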
#### Metrics

| Name                                                  | Unit      | Description                                                  |
|-------------------------------------------------------|-----------|--------------------------------------------------------------|
| pytorch-computation-communication-overlap/mul_time    | time (ms) | Time of communication and mul kernel computation overlap.    |
| pytorch-computation-communication-overlap/matmul_time | time (ms) | Time of communication and matmul kernel computation overlap. |

### `sharding-matmul`

#### Introduction

Test the performance of large-scale matmul operations with multiple GPUs:
* allreduce: each GPU computes part of the matmul, then AllReduce merges the partial results into one tensor.
* allgather: each GPU computes part of the matmul, then AllGather + Concat merges the partial results into one tensor.

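A condensed sketch of the allreduce scheme (assuming `torch.distributed` initialized with one process per GPU on a single node, e.g. via `torchrun`; sizes are illustrative):

```python
import torch
import torch.distributed as dist

dist.init_process_group('nccl')
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank)

n = 8192
x = torch.randn(n, n // world, device='cuda')  # column shard of X
w = torch.randn(n // world, n, device='cuda')  # row shard of W

partial = torch.matmul(x, w)  # each GPU computes a partial n-by-n product
dist.all_reduce(partial)      # sum the partials into the full result
# The allgather variant instead shards the output and concatenates the pieces.

dist.destroy_process_group()
```
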
#### Metrics

| Name                                   | Unit      | Description                              |
|----------------------------------------|-----------|------------------------------------------|
| pytorch-sharding-matmul/allreduce_time | time (ms) | Time of sharding matmul using allreduce. |
| pytorch-sharding-matmul/allgather_time | time (ms) | Time of sharding matmul using allgather. |

### `dist-inference`

#### Introduction

Test the performance of distributed model inference. Both a PyTorch implementation and a C++ implementation are supported.

#### Metrics

| Name                                            | Unit      | Description                                           |
|-------------------------------------------------|-----------|-------------------------------------------------------|
| pytorch-dist-inference/step_times               | time (ms) | Average time of model inference runs.                 |
| pytorch-dist-inference/step_times_${percentile} | time (ms) | Tail (50th, 90th, 95th, 99th, 99.9th percentile) time of model inference runs. |

## Storage Benchmarks

### `disk-benchmark`

#### Introduction

Measure the disk performance through [FIO](https://github.com/axboe/fio/tree/0313e938c9c8bb37d71dade239f1f5326677b079).

#### Metrics

| Name                                                          | Unit         | Description                                                         |
|---------------------------------------------------------------|--------------|----------------------------------------------------------------------|
| disk-benchmark/${disk_name}_rand_read_write_bs                | size (bytes) | Block size of the random read-write workload.                        |
| disk-benchmark/${disk_name}_rand_read_write_read_iops         | IOPS         | Read IOPS of the random read-write workload.                         |
| disk-benchmark/${disk_name}_rand_read_write_read_lat_ns_95.0  | time (ns)    | 95.0th percentile read latency of the random read-write workload.    |
| disk-benchmark/${disk_name}_rand_read_write_read_lat_ns_99.0  | time (ns)    | 99.0th percentile read latency of the random read-write workload.    |
| disk-benchmark/${disk_name}_rand_read_write_read_lat_ns_99.9  | time (ns)    | 99.9th percentile read latency of the random read-write workload.    |
| disk-benchmark/${disk_name}_rand_read_write_write_iops        | IOPS         | Write IOPS of the random read-write workload.                        |
| disk-benchmark/${disk_name}_rand_read_write_write_lat_ns_95.0 | time (ns)    | 95.0th percentile write latency of the random read-write workload.   |
| disk-benchmark/${disk_name}_rand_read_write_write_lat_ns_99.0 | time (ns)    | 99.0th percentile write latency of the random read-write workload.   |
| disk-benchmark/${disk_name}_rand_read_write_write_lat_ns_99.9 | time (ns)    | 99.9th percentile write latency of the random read-write workload.   |