# Changelog for TransferBench

Documentation for TransferBench is available at
[https://rocm.docs.amd.com/projects/TransferBench](https://rocm.docs.amd.com/projects/TransferBench).

## v1.66.01
### Fixed
- Adding support for TheRock
- Fixing parsing issue when using NULL memory type
- Fixing CUDA compilation flags when enabling NIC/MPI
### Modified
- TransferBenchCuda must now be explicitly built via `make TransferBenchCuda`

## v1.66.00
### Added
- Adding multi-node support
  - TransferBench now supports multiple nodes through the use of MPI or sockets
    - In order to utilize MPI, TransferBench must be compiled with MPI support (setting MPI_PATH to where
      an MPI implementation is located).  MPI support can be explicitly disabled by setting DISABLE_MPI_COMM=1
      - TransferBench can be executed with an MPI launcher, such as mpirun
    - In order to utilize sockets, several environment variables need to be provided to each process (see the launch sketch at the end of this item)
      * TB_RANK:        Rank of this process (0-based)
      * TB_NUM_RANKS:   Total number of processes
      * TB_MASTER_ADDR: IP address of rank 0 (Other ranks will connect to rank 0)
      * TB_MASTER_PORT: Port for communication (default: 29500)
    - Additional debug messages can be enabled by setting TB_VERBOSE=1
    - NOTE: It is recommended that one process be launched per node to avoid aliasing of devices
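    - As a minimal illustrative sketch of socket mode across two nodes (the master address and the choice of
      the a2a preset are assumptions for this example, not part of the release note):

          # on node 0 (rank 0)
          TB_RANK=0 TB_NUM_RANKS=2 TB_MASTER_ADDR=10.0.0.1 TB_MASTER_PORT=29500 ./TransferBench a2a
          # on node 1 (rank 1)
          TB_RANK=1 TB_NUM_RANKS=2 TB_MASTER_ADDR=10.0.0.1 TB_MASTER_PORT=29500 ./TransferBench a2a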
- Adding multi-node topology detection
  - When running in multi-node mode, TransferBench will try to collect topology information about each
    rank, then group ranks into homogeneous configurations.
  - This is done by running TransferBench with no arguments (e.g. mpirun -np 2 ./TransferBench)
- Adding multi-node Transfer parsing and wildcard support
  - Memory locations have now been extended to support a rank index
    * R(memRank)?(memIndex)  (where ? is one of the supported memory type characters "CGBFUNMP")
        (e.g. R2G3 is GPU memory location in GPU 3 on rank 2)
    - Rank is optional; if not specified, it will fall back to the "local" rank
  - Executor locations have been extended to support rank indices as well
    * R(exeRank)?(exeIndex){exeSlot}.{exeSubIndex}{exeSubSlot} (where ? is one of the supported executor-types characters "CGDIN")
      - exeSlots are only relevant for the EXE_NIC_NEAREST executor, and allow for distinguishing when multiple NICs are closest to a GPU
      - exeSlots are defined by uppercase letters: 'A' for the closest NIC, 'B' for the 2nd closest NIC, etc.
        - For example, N0B.4C would execute using the 2nd closest NIC to GPU 0, communicating with the 3rd closest NIC to GPU 4
  - Wildcard support:
    - To help quickly define sets of transfers, Transfers can now be specified using wildcards
    - All the fields above may be specified either:
      * directly with a single value:  E.g: R34        -> Rank 34
      * full wildcard:                 E.g: R*         -> Will be replaced by all available ranks
      * Ranged wildcard:               E.g. R[1,5..7]  -> Will be replaced by Rank 1, Rank 5, Rank 6, Rank 7
  - Nearest-NIC wildcard
    - To simplify nearest NIC execution, it is not necessary to specify exeIndex/exeSubIndex for the "N" executor
    - If exeRank/exeIndex/exeSlot/exeSubIndex/exeSubSlot are all left unspecified, the Transfer will be expanded to
      choose the correct values such that a remote write operation will occur, based on the SRC/DST memory locations
      - For example: (R2G4->N->R4G5) will expand to (R2G4->R2N4.5->R4G5)
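  - For illustration, the location tokens described above can be summarized as a short sketch (the comments
    simply restate the examples given in this item):

        R2G3         # GPU memory on GPU 3 of rank 2
        G3           # rank omitted: falls back to the "local" rank
        N0B.4C       # 2nd closest NIC to GPU 0, communicating with the 3rd closest NIC to GPU 4
        R*           # full wildcard: replaced by all available ranks
        R[1,5..7]    # ranged wildcard: replaced by Rank 1, Rank 5, Rank 6, Rank 7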
- Adding dry-run preset
  - This new preset is similar to cmdline, however it only shows the list of transfers that will be executed
  - This new dryrun preset may be useful when using the new wildcard expressions to ensure that the Test
    contains the correct set of Transfers
- Adding nicrings preset
  - This new preset runs parallel transfers forming rings that connect identical NICs across ranks
- Adding NIC_FILTER to allow for filtering which NICs to detect.  NIC_FILTER accepts regular-expression syntax
- Added new memory types based on latest HIP memory allocation flags
  - Supported memory locations are:
    - C:    Pinned host memory              (on NUMA node, indexed from 0 to [# NUMA nodes-1])
    - P:    Pinned host memory              (on the NUMA node closest to a GPU, indexed from 0 to [# GPUs - 1])
    - B:    Coherent pinned host memory     (on NUMA node, indexed from 0 to [# NUMA nodes-1])
    - D:    Non-coherent pinned host memory (on NUMA node, indexed from 0 to [# NUMA nodes-1])
    - K:    Uncached pinned host memory     (on NUMA node, indexed from 0 to [# NUMA nodes-1])
    - H:    Unpinned host memory            (on NUMA node, indexed from 0 to [# NUMA nodes-1])
    - G:    Global device memory            (on GPU device indexed from 0 to [# GPUs - 1])
    - F:    Fine-grain device memory        (on GPU device indexed from 0 to [# GPUs - 1])
    - U:    Uncached device memory          (on GPU device indexed from 0 to [# GPUs - 1])
    - N:    Null memory                     (index ignored)
  - As a result, the a2a preset has deprecated USE_FINE_GRAIN in favor of MEM_TYPE, to allow for selecting between various GPU memory types (see the sketch below)
  - A warning message is issued if USE_FINE_GRAIN is used; however, the previous matching functionality remains for now
  - The p2p preset has also deprecated USE_FINE_GRAIN in favor of CPU_MEM_TYPE and GPU_MEM_TYPE
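  - As an illustrative sketch of the new a2a selection (treating "F" as the MEM_TYPE value for fine-grain
    device memory is an assumption based on the memory-type letters listed above):

        # previously: USE_FINE_GRAIN=1 ./TransferBench a2a
        MEM_TYPE=F ./TransferBench a2a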
### Modified
- Refactored front-end client code to facilitate simpler and more consistent presets.
- Refactored tabular data display to simplify code.  Output result tables now use ASCII box-drawing
  characters for borders, which helps group data visually.  Borders may be disabled by setting SHOW_BORDERS=0
- The All-to-all preset is now multi-rank compatible.  When executed on multiple ranks, it runs
  inter-rank all-to-all and then reports the min/max across all ranks.  The number of extrema
  results shown can be adjusted by NUM_RESULTS

### Fixed
- Added guard for ROCm version when using `__syncwarp()`
- Exiting with a non-zero code on fatal errors

## v1.65.00
### Added
- Added warp-level dispatch support via GFX_SE_TYPE environment variable
  - GFX_SE_TYPE=0 (default): Threadblock-level dispatch, each subexecutor is a threadblock
  - GFX_SE_TYPE=1: Warp-level dispatch, each subexecutor is a single warp
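- Illustrative usage (a sketch; the choice of the a2a preset is an assumption):

      # dispatch each subexecutor as a single warp instead of a threadblock
      GFX_SE_TYPE=1 ./TransferBench a2a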

## v1.64.00
### Added
- Added BLOCKSIZES to the a2asweep preset to also allow sweeping over threadblock sizes
- Added FILL_COMPRESS to allow more control over input data pattern
  - FILL_COMPRESS takes in a comma-separated list of integer percentages (that must add up to 100)
    that set the percentages of 64B lines to be filled by random/1B0/2B0/4B0/32B0 data patterns
    - Bins:
      - 0 - random
      - 1 - 1B0    upper 1 byte of each aligned 2 bytes is 0
      - 2 - 2B0    upper 2 bytes of each aligned 4 bytes are 0
      - 3 - 4B0    upper 4 bytes of each aligned 8 bytes are 0
      - 4 - 32B0   upper 32 bytes of each aligned 64-byte line are 0
  - FILL_PATTERN will be ignored if FILL_COMPRESS is specified
- Additional details about the generated data patterns will be printed if the debug env var DUMP_LINES is
  set to a non-zero value, which also corresponds to how many 64-byte lines will be printed (see the sketch below)
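- Illustrative usage (a sketch; the choice of the a2a preset is an assumption, and the percentages are arbitrary
  values that sum to 100):

      # 60% random lines, 10% each of the 1B0/2B0/4B0/32B0 patterns; also print 4 sample 64-byte lines
      FILL_COMPRESS=60,10,10,10,10 DUMP_LINES=4 ./TransferBench a2a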
### Modified
- Increased GFX_BLOCKSIZE limit from 512 to 1024 (must still be a multiple of 64)

### Fixed
- Fixed bug when using BYTE_OFFSET

## v1.63.00
### Added
- Added `gfx950`, `gfx1150`, and `gfx1151` to default GPU targets list in CMake builds

### Modified
- Removing self-GPU check for DMA engine copies
- Switched to amdclang++ as primary compiler
- The healthcheck preset adds HBM testing and support for more MI3XX variants

### Fixed
- Fixed issue when using "P" memory type and specific DMA subengines
- Fixed issue with subiteration timing reports

## v1.62.00
### Added
- Adding GFX_TEMPORAL to allow for use of non-temporal loads/stores (see the sketch at the end of this section)
  - (0 = none [default], 1 = load, 2 = store, 3 = both)
- Adding "P" memory type which maps to CPU memory but is indexed by closest GPU
  - For example, P4 refers to CPU memory on NUMA node closest to GPU 4
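- Illustrative usage (a sketch; the choice of the a2a preset is an assumption):

      # use non-temporal loads and stores in the GFX kernel
      GFX_TEMPORAL=3 ./TransferBench a2a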
### Modified
- Adding some additional summary details to the a2a preset

## v1.61.00
### Added
- Added a2a_n preset which conducts all-to-all GPU-to-GPU transfers over nearest NIC executors
- Re-implemented GFX_BLOCK_ORDER which allows for control over how threadblocks of multiple transfers are ordered
  - 0 = sequential, 1 = interleaved, 2 = random
- Added a2asweep preset which tries various CU/unroll options for GFX-executed all-to-all
- Rewrote the main GID index detection logic
- Show the GID index and description in the topology table, which is helpful for debugging purposes
- Added GFX_WORD_SIZE to allow for different packed float sizes to use for the GFX kernel.  Must be either 4 (default), 2, or 1


### Fixed
- Avoid build errors for CMake and Makefile if the infiniband/verbs.h header is not present, and disable the NIC executor in that case
- Use a priority list of which GID entry to pick, instead of hardcoding choices based on under-documented user input (such as RoCE version and IP address family)
- Use link-local when it is the only choice (i.e. when routing information is not available beyond the local link)

## v1.60.00
### Modified
- Reverted GFX_SINGLE_TEAM default back to 1

### Fixed
- Fixed bug where peer memory access was not enabled for DMA transfers, which would break specific DMA engine transfers

## v1.59.01
### Added
- The a2a preset A2A_MODE variable has been enhanced to allow for customizing the number of srcs/dsts to use,
  to simulate conditions similar to those normally seen in collective algorithms such as ring-based AllReduce
  (see the sketch below)
  - This is specified by setting A2A_MODE to numSrcs:numDsts.  Extra destinations past 1 will be "local" writes
    (i.e. if one sets A2A_MODE=1:3, then transfers will follow this pattern: Fx Gx FyFxFx)
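  - Illustrative usage (a sketch):

        # 1 source, 3 destinations per GPU, as in the example above
        A2A_MODE=1:3 ./TransferBench a2a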

## v1.59.00
### Added
- Adding support for the NIC executor, which allows for RDMA copies on NICs that support IBVerbs.
  By default, the NIC executor will be enabled if IBVerbs is found in the dynamic linker cache
- The NIC executor can be indexed in two ways (see the sketch at the end of this section)
  - "I"   Ix.y will use NIC x as the source and NIC y as the destination.
          E.g. (G0 I0.5 G4)
  - "N"   Nx.y will use NIC closest to GPU x as source, and NIC closest to GPU y as destination
          E.g. (G0 N0.4 N4)
- The closest NIC can be overridden by the environment variable CLOSEST_NIC, which should be a comma-separated
  list of NIC indices to use for the corresponding GPU
- This feature can be explicitly disabled at compile time by specifying DISABLE_NIC_EXEC=1
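- The two indexing forms can be summarized as a short sketch (the comments restate the rules above; the
  CLOSEST_NIC values are arbitrary placeholders):

      I0.5   # explicit indexing: NIC 0 as the source NIC, NIC 5 as the destination NIC (as in "G0 I0.5 G4")
      N0.4   # nearest indexing: NIC closest to GPU 0 as source, NIC closest to GPU 4 as destination
      # CLOSEST_NIC=2,3,0,1 would override which NIC is considered closest to each GPU (one index per GPU)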

### Modified
- Changing default data size to 256M from 64M
- Adding NUM_QUEUE_PAIRS which enables NIC traffic in A2A.  Each GPU will talk to the next GPU via the closest NIC
- Sweep preset now saves the last sweep run configuration to /tmp/lastSweep.cfg; this location can be changed via SWEEP_FILE

### Fixed
- Fixed bug with reporting when using subiterations
- Fixed bug with per-Transfer data size specification
- Fixed bug when using XCC preferred table


## v1.58.00
### Fixed
- Fixed broken specific DMA-engine copies

## v1.57.01
### Added
- Re-added "scaling" GPU GFX preset benchmark, which tests copies from GPU to other devices using varying
  number of CUs.

## v1.57.00
### Modified
- Removing use of the default spaceship operator / C++20 requirement to enable compilation on more OSs
- Changing how the version is reported.  The client version is now just the last two digits; it increments only if
  no changes are made to the backend header-only library file, and resets to 0 when the header is updated
- GFX_SINGLE_TEAM=0 is set by default

## v1.56
### Fixed
- Fixed bug when using interactive mode.  Interactive mode now starts prior to all warmup iterations

## v1.55
### Fixed
- Fixed missing header error when compiling on CentOS
- Fixed issues when using multi-stream mode for GFX executor

## v1.54
### Modified
- Refactored TransferBench into a header-only library combined with a thin client to facilitate the
  use of TransferBench as the backend for other applications
- Optimized how data validation is handled - this should speed up Tests with many parallel transfers as data is only
  generated once
- Preset benchmarks now no longer take in any extra command line arguments.  Preset settings are only accessed via
  environment variables.  Details for each preset are printed
- The a2a preset benchmark now defaults to using fine-grained memory and GFX unroll of 2
- Refactored how Transfers are launched in parallel which has reduced some CPU-side overheads
- CPU and DMA executor timing now use CPU wall clock timing instead of slowest Transfer time
### Added
- New one2all preset which sweeps over all subsets of parallel transfers from one GPU to others
- Adding new warnings for DMA execution relating to how HIP will default to using agents from the source memory
### Removed
- CU scaling preset has been removed.  Similar functionality already exists in the schmoo preset benchmark
- Preparation of source data via GFX kernel has been removed (USE_PREP_KERNEL)
- Removed GFX block-reordering (BLOCK_ORDER)
- Removed NUM_CPU_DEVICES and NUM_GPU_DEVICES from the common env vars and moved them into only the presets they apply to.
- Removed SHARED_MEM_BYTES option for GFX executor
- Removed USE_PCIE_INDEX, and SHARED_MEM_BYTES
### Fixed
- Fixed a potential timing reporting issue when DMA executed Transfers end up getting serialized.

## v1.53
### Added
- Added ability to specify NULL for sweep preset as source or destination memory type

## v1.52
### Added
- Added USE_HSA_DMA env var to switch to using hsa_amd_memory_async_copy instead of hipMemcpyAsync for DMA execution
- Added ability to set USE_GPU_DMA env var for a2a benchmark
- Adding check for large BAR enablement for GPU devices during topology check
### Fixed
- Fixed potential memory leak if HSA reports 0 hops between GPUs and CPUs

## v1.51

### Modified
- CSV output has been modified slightly to match normal terminal output
- Output for non-single-stream mode has been changed to match single-stream mode (results per Executor)

### Added
- Support for sub-iterations via NUM_SUBITERATIONS.  This allows for additional looping during an iteration.
  If set to 0, this will loop infinitely (which may be useful for some debugging purposes)
- Support for variable number of subexecutors (currently for GPU-GFX executor only).  Setting subExecutors to
  0 will run over a range of CUs to use, and report only the results of the best one found. This can be tuned
  for performance by setting the MIN_VAR_SUBEXEC and MAX_VAR_SUBEXEC environment variables to narrow the
  search space.  The number of CUs used will be identical for all variable subExecutor transfers
- Experimental new "healthcheck" preset config, which currently only supports the MI300 series.  This preset runs
  through CPU-to-GPU bandwidth tests and all-to-all XGMI bandwidth tests and compares against expected values.
  Pass criteria limits can be modified (due to platform differences) via the environment variables
  LIMIT_UDIR (unidirectional), LIMIT_BDIR (bidirectional), and LIMIT_A2A (per GPU-GPU link bandwidth) (see the sketch below)
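- Illustrative usage of the healthcheck preset (a sketch; the limit values below are arbitrary placeholders, not
  recommended thresholds):

      LIMIT_UDIR=40 LIMIT_BDIR=70 LIMIT_A2A=45 ./TransferBench healthcheck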

### Fixed
- Fixed out-of-bounds memory access during topology detection that can happen if the number of
  CPUs is less than the number of NUMA domains
- Fixed CU masking functionality on multi-XCD architectures (e.g. MI300)

## v1.50

### Added
- Adding new parallel copy preset benchmark (pcopy)
  - Usage: ./TransferBench pcopy <numBytes=64M> <#CUs=8> <srcGpu=0> <minGpus=1> <maxGpus=#GPU-1>
### Fixed
- Removed non-copy DMA Transfers (these had previously been using hipMemset)
- Fixed CPU executor when operating on null destination

## v1.49

### Fixes
* Enumerating previously missed DMA engines used only for CPU traffic in the topology display

## v1.48

### Fixes
* Various fixes for TransferBenchCuda

### Additions
* Support for targeting specific DMA engines via executor subindex (e.g. D0.1)
* Printing warnings when executors are overcommitted

### Modifications
* USE_REMOTE_READ supported for rwrite preset benchmark

## v1.47

### Fixes
* Fixing CUDA support

## v1.46

### Fixes
* Fixing GFX_UNROLL set to 13 (past 8) on gfx906 cards

### Modifications
* GFX_SINGLE_TEAM=1 by default
* Adding field showing summation of individual Transfer bandwidths for Executors

## v1.45

### Additions
* Adding A2A_MODE to a2a preset (0 = copy, 1 = read-only, 2 = write-only)
* Adding GFX_UNROLL to modify GFX kernel's unroll factor
* Adding GFX_WAVE_ORDER to modify order in which wavefronts process data

### Modifications
* Rewrote the GFX reduction kernel to support new wave ordering

## v1.44

### Additions
* Adding rwrite preset to benchmark remote parallel writes
  * Usage: ./TransferBench rwrite <numBytes=64M> <#CUs=8> <srcGpu=0> <minGpus=1> <maxGpus=3>

## v1.43

### Changes
* Modifying a2a to show executor timing, as well as executor min/max bandwidth

## v1.42

### Fixes
* Fixing schmoo maxNumCus optional arg parsing
* Schmoo output modified to be easier to copy

## v1.41

### Additions
* Adding schmoo preset config, which benchmarks local/remote reads/writes/copies
  * Usage: ./TransferBench schmoo <numBytes=64M> <localIdx=0> <remoteIdx=1> <maxNumCUs=32>

### Fixes
* Fixing some misreported timings when running with non-fixed number of iterations

## v1.40

### Fixes
* Fixing XCC defaulting to 0 instead of random for preset configs, ignoring XCC_PREF_TABLE

## v1.39

### Additions
* (Experimental) Adding support for Executor sub-index
### Fixes
* Removed deprecated gcnArch code.  The ROCm version must include support for hipDeviceMallocUncached

## v1.38

### Fixes
* Added a missing threadfence, whose absence could cause non-fine-grained Transfers to report higher speeds

## v1.37

### Changes
* USE_SINGLE_STREAM is enabled by default now.  (Disable via USE_SINGLE_STREAM=0)

### Fixes
* Fix unrecognized token error when XCC_PREF_TABLE is unspecified

## v1.36

### Additions

* (Experimental) Adding XCC filtering - combined with XCC_PREF_TABLE, this tries to select
  specific XCCs to use for specific (SRC->DST) Transfers

## v1.35

### Additions

* USE_FINE_GRAIN also applies to a2a preset

## v1.34

### Additions

* Set `GPU_KERNEL=3` as default for gfx942

## v1.33

### Additions

* Added the `ALWAYS_VALIDATE` environment variable to allow for validation after every iteration, instead
  of only once at the end of all iterations

## v1.32

### Changes

* Increased the line limit from 2048 to 32768

## v1.31

### Changes

* `SHOW_ITERATIONS` now shows XCC:CU instead of just CU ID
* `SHOW_ITERATIONS` is printed when `USE_SINGLE_STREAM`=1

## v1.30

### Additions

* `BLOCK_SIZE` has been added to control the threadblock size (must be a multiple of 64, up to 512)
* `BLOCK_ORDER` has been added to control how work is ordered for GFX-executors running
  `USE_SINGLE_STREAM`=1
  * 0 - Threadblocks for transfers are ordered sequentially (default)
  * 1 - Threadblocks for transfers are interleaved
  * 2 - Threadblocks for transfers are ordered randomly

## v1.29

### Additions

* A2A preset config now responds to `USE_REMOTE_READ`

### Fixes

* Race-condition during wall-clock initialization caused "inf" during single-stream runs
* CU numbering output after CU masking

### Changes

* The default number of warmups has been reverted to 3
* The default unroll factor for gfx940/941 has been set to 6

## v1.28

### Additions

* Added `A2A_DIRECT`, which only runs all-to-all on directly connected GPUs (now on by default)
* Added average statistics for P2P and A2A benchmarks
* Added `USE_FINE_GRAIN` for P2P benchmark
  * With older devices, P2P timing with default coarse-grain device memory stops as soon
    as a request is sent to the data fabric, and not when the data actually arrives remotely. This can artificially
    inflate bandwidth numbers, especially when sending small amounts of data.

### Changes

* Modified P2P output to help distinguish between CPU and GPU devices

### Fixes

* Fixed Makefile target to prevent unnecessary re-compilation

## v1.27

### Additions

* Added cmdline preset to allow specification of simple tests on the command line (e.g.,
  `./TransferBench cmdline 64M "1 4 G0->G0->G1"`)
* Adding the `HIDE_ENV` environment variable, which stops environment variable values from printing
* Adding the `CU_MASK` environment variable, which allows you to select the CUs to run on
* `CU_MASK` is specified in CU indices (0-#CUs-1), where ' - ' can be used to denote ranges of values
  (e.g., `CU_MASK`=3-8,16 requests that transfer be run only on CUs 3,4,5,6,7,8,16)
  * Note that this is somewhat experimental and may not work on all hardware
* `SHOW_ITERATIONS` now shows CU usage for that iteration (experimental)

### Changes

* Added extra comments on commonly missing includes with details on how to install them

### Fixes

* CUDA compilation works again (the `wall_clock64` CUDA alias was not defined)

## v1.26

### Additions

* Setting SHOW_ITERATIONS=1 provides additional information about per-iteration timing for file and
  P2P configs
  * For file configs, iterations are sorted from min to max bandwidth and displayed with standard
    deviation
  * For P2P, min/max/standard deviation is shown for each direction

### Changes

* P2P benchmark formatting now reports bidirectional bandwidth in each direction (as well as sum) for
  clarity

## v1.25

### Fixes

* Fixed a bug in the P2P bidirectional benchmark that used the incorrect number of `subExecutors` for
  CPU<->GPU tests

## v1.24

### Additions

* New All-To-All GPU benchmark accessed by preset "A2A"
* Added gfx941 wall clock frequency

## v1.23

### Additions

* New GPU subexec scaling benchmark accessed by preset "scaling"
  * Tests GPU-GFX copy performance based on # of CUs used

## v1.22

### Changes

* Switched the kernel timing function to `wall_clock64`

## v1.21

### Fixes

* Fixed a bug with `SAMPLING_FACTOR`

## v1.20

### Fixes

* `VALIDATE_DIRECT` can now be used with `USE_PREP_KERNEL`
* Switched to local GPU for validating GPU memory

## v1.19

### Additions

* `VALIDATE_DIRECT` now also applies to source memory array checking
* Added null memory pointer check prior to deallocation

## v1.18

### Additions

* Adding the ability to validate GPU destination memory directly without going through the CPU
  staging buffer (`VALIDATE_DIRECT`)
  * Note that this only works on AMD devices with large-bar access enabled, and may slow things down
    considerably

### Changes

* Refactored how environment variables are displayed
* Mismatch reporting stops after the first detected error within an array instead of listing all mismatched
  elements

## v1.17

### Additions

* Allowed switch to GFX kernel for source array initialization (`USE_PREP_KERNEL`)
  * Note that `USE_PREP_KERNEL` can't be used with `FILL_PATTERN`
* Added the ability to compile with nvcc only (`TransferBenchCuda`)

### Changes

* The default pattern was set to [Element i = ((i * 517) modulo 383 + 31) * (srcBufferIdx + 1)]

### Fixes

* Added the `example.cfg` file

## v1.16

### Additions

* Additional src array validation during preparation
* Added a new environment variable (`CONTINUE_ON_ERROR`) to resume tests after a mis-match
  detection
* Initialized GPU memory to 0 during allocation

## v1.15

### Fixes

* Fixed a bug that prevented single transfers greater than 8 GB

### Changes

* Removed "check for latest ROCm" warning when allocating too much memory
* Off-source memory value is now printed when a mis-match is detected

## v1.14

### Additions

* Added documentation
* Added pthread linking in src/Makefile and CMakeLists.txt
* Added printing of the hex value of the floats for output and reference

## v1.13

### Additions

* Added support for CMake

### Changes

* Converted to the Pitchfork layout standard

## v1.12

### Additions

* Added support for TransferBench on NVIDIA platforms (via `HIP_PLATFORM`=nvidia)
  * Note that CPU executors on NVIDIA platform cannot access GPU memory (no large-bar access)

## v1.11

### Additions

* Added multi-input/multi-output (MIMO) support: transfers now can reduce (element-wise summation)
  multiple input memory arrays and write sums to multiple outputs
* Added GPU-DMA executor 'D', which uses `hipMemcpy` for SDMA copies
  * Previously, this was done using `USE_HIP_CALL`, but now GPU-GFX kernel can run in parallel with
    GPU-DMA, instead of applying to all GPU executors globally
  * GPU-DMA executor can only be used for single-input/single-output transfers
  * GPU-DMA executor can only be associated with one SubExecutor
* Added new "Null" memory type 'N', which represents empty memory. This allows for read-only or
  write-only transfers
* Added new `GPU_KERNEL` environment variable that allows switching between various GPU-GFX
  reduction kernels

### Optimizations

* Improved GPU-GFX kernel performance based on hardware architecture when running with
  fewer CUs

### Changes

* Updated the `example.cfg` file to cover new features
* Updated output to support MIMO
* Changed CU and CPU thread naming to SubExecutors for consistency
* Sweep Preset: default sweep preset executors now includes DMA
* P2P benchmarks:
  * Removed `p2p_rr`, `g2g` and `g2g_rr` (now only works via P2P)
    * Setting `NUM_CPU_DEVICES`=0 can only be used to benchmark GPU devices (like `g2g`)
    * The new `USE_REMOTE_READ` environment variable replaces `_rr` presets
  * New environment variable `USE_GPU_DMA`=1 replaces `USE_HIP_CALL`=1 for benchmarking with
    GPU-DMA Executor
  * Number of GPU SubExecutors for benchmark can be specified via `NUM_GPU_SE`
    * Defaults to all CUs for GPU-GFX, 1 for GPU-DMA
  * Number of CPU SubExecutors for benchmark can be specified via `NUM_CPU_SE`
* Pseudo-random input pattern has been slightly adjusted to have different patterns for each input
  array within the same transfer
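* As an illustrative sketch of the new P2P benchmark controls (the p2p preset name and the specific values are
  assumptions for this example):

      # GPU-only benchmark using remote reads, with 8 GFX SubExecutors per Transfer
      NUM_CPU_DEVICES=0 USE_REMOTE_READ=1 NUM_GPU_SE=8 ./TransferBench p2p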

### Removals

* `USE_HIP_CALL`: use `GPU-DMA` executor 'D' or set `USE_GPU_DMA`=1 for P2P
  benchmark presets
  * Currently, a warning will be issued if `USE_HIP_CALL` is set to 1 and the program will stop
* `NUM_CPU_PER_TRANSFER`: the number of CPU SubExecutors will be whatever is specified for the
  transfer
* `USE_MEMSET`: this function can now be done via a transfer using the null memory type

## v1.10

### Fixes

* Fixed incorrect bandwidth calculation when using single stream mode and per-transfer data sizes

## v1.09

### Additions

* Printing src/dst memory addresses during interactive mode

### Changes

* Switching to `numa_set_preferred` instead of `set_mempolicy`

## v1.08

### Changes

* Fixed handling of non-configured NUMA nodes
* Topology detection now shows actual NUMA node indices
* Fixed 'for' issue with `NUM_GPU_DEVICES`

## v1.07

### Fixes

* Fixed bug with allocations involving non-default CPU memory types

## v1.06

### Additions

* Unpinned CPU memory type ('U'), which may require `HSA_XNACK`=1 in order to access via
  GPU executors
* Added sweep configuration logging to `lastSweep.cfg`
* Ability to specify the number of CUs to use for sweep-based presets

### Changes

* Modified advanced configuration file format to accept bytes-per-transfer

### Fixes

* Fixed random sweep repeatability
* Fixed bug with CPU NUMA node memory allocation

## v1.05

### Additions

* Topology output now includes NUMA node information
* Support for NUMA nodes with no CPU cores (e.g., CXL memory)

### Removals

* The `SWEEP_SRC_IS_EXE` environment variable was removed

## v1.04

### Additions

* There are new environment variables for sweep based presets:
  * `SWEEP_XGMI_MIN`: The minimum number of XGMI hops for transfers
  * `SWEEP_XGMI_MAX`: The maximum number of XGMI hops for transfers
  * `SWEEP_SEED`: Uses a random seed
  * `SWEEP_RAND_BYTES`: Uses a random amount of bytes (up to pre-specified N) for each transfer

### Changes

* CSV output for sweep now includes an environment variables section followed by output
* CSV output no longer lists environment variable parameters in columns
* We changed the default number of warmup iterations from 3 to 1
* Split CSV output of link type to `ExeToSrcLinkType` and `ExeToDstLinkType`

## v1.03

### Additions

* There are new preset modes stress-test benchmarks: `sweep` and `randomsweep`
  * `sweep` iterates over all possible sets of transfers to test
  * `randomsweep` iterates over random sets of transfers
  * New sweep-only environment variables can modify `sweep`
    * `SWEEP_SRC`: String containing only "B","C","F", or "G" that defines possible source memory types
    * `SWEEP_EXE`: String containing only "C" or "G" that defines possible executors
    * `SWEEP_DST`: String containing only "B","C","F", or "G" that defines possible destination memory types
    * `SWEEP_SRC_IS_EXE`: Restrict the executor to be the same as the source, if non-zero
    * `SWEEP_MIN`: Minimum number of parallel transfers to test
    * `SWEEP_MAX`: Maximum number of parallel transfers to test
    * `SWEEP_COUNT`: Maximum number of tests to run
    * `SWEEP_TIME_LIMIT`: Maximum number of seconds to run tests
* New environment variables to restrict number of available devices to test on (primarily for sweep
  runs)
  * `NUM_CPU_DEVICES`: Number of CPU devices
  * `NUM_GPU_DEVICES`: Number of GPU devices
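* A constrained sweep might be launched as follows (a sketch; every value shown is an arbitrary illustration):

      SWEEP_SRC=G SWEEP_EXE=G SWEEP_DST=G SWEEP_MIN=2 SWEEP_MAX=4 SWEEP_COUNT=10 ./TransferBench sweep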

### Fixes

* Fixed timing display for CPU executors when using single-stream mode

## v1.02

### Additions

* Setting `NUM_ITERATIONS` to a negative number indicates a run of -`NUM_ITERATIONS` seconds per
  test

### Changes

* Copies are now referred to as 'transfers' instead of 'links'
* Reordered how environment variables are displayed (alphabetically now)

### Removals

* Combined timing is now always on for kernel-based GPU copies; the `COMBINED_TIMING`
  environment variable has been removed
* Single sync is no longer supported, to facilitate variable iterations; the `USE_SINGLE_SYNC`
  environment variable has been removed

## v1.01

### Additions

* Added the `USE_SINGLE_STREAM` feature
  * All Links that run on the same GPU device are run with a single kernel launch on a single stream
  * This doesn't work with `USE_HIP_CALL`, and it forces `USE_SINGLE_SYNC` to collect timings
  * Added the ability to request coherent or fine-grained host memory ('B')

### Changes

* Separated the TransferBench repository from the RCCL repository
* Peer-to-peer benchmark mode now works with `OUTPUT_TO_CSV`
* Topology display now works with `OUTPUT_TO_CSV`
* Moved the documentation about the config file into `example.cfg`

### Removals

* Removed config file generation
* Removed the 'show pointer address' (`SHOW_ADDR`) environment variable