# TransferBench

TransferBench is a simple utility capable of benchmarking simultaneous copies between user-specified devices (CPUs/GPUs).

## Requirements

1. ROCm stack installed on the system (HIP runtime)
2. libnuma installed on the system

## Documentation

Run the steps below to build the documentation locally.

```shell
cd docs

pip3 install -r .sphinx/requirements.txt

python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
```

## Building

To build TransferBench using the Makefile:

```shell
$ make
```

To build TransferBench using CMake:

```shell
$ mkdir build
$ cd build
$ CXX=/opt/rocm/bin/hipcc cmake ..
$ make
```

If ROCm is installed in a directory other than `/opt/rocm/`, set `ROCM_PATH` accordingly.
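For example, with ROCm in a non-default location, the CMake build above could be invoked as follows. The install path shown here is hypothetical; substitute your own:

```shell
# Hypothetical ROCm install location; replace with your actual path
ROCM_PATH=/opt/rocm-custom CXX=/opt/rocm-custom/bin/hipcc cmake ..
make
```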
## NVIDIA platform support

TransferBench can also be built to run on NVIDIA platforms, either via HIP or with native nvcc.

To build with HIP for NVIDIA (requires a HIP-compatible CUDA version installed, e.g. CUDA 11.5):

```shell
CUDA_PATH=<path_to_CUDA> HIP_PLATFORM=nvidia make
```

To build with native nvcc (this builds TransferBenchCuda):

```shell
make
```

## Hints and suggestions
- Running TransferBench with no arguments will display usage instructions and detected topology information
- Several preset configurations can be used in place of a configuration file, including:
  - `p2p`    - Peer-to-peer benchmark test
  - `sweep`  - Sweep across possible sets of Transfers
  - `rsweep` - Random sweep across possible sets of Transfers
- When the same GPU executor is used in multiple simultaneous Transfers, performance may be
  serialized due to the limited number of hardware queues available.
  - The maximum number of hardware queues can be adjusted via `GPU_MAX_HW_QUEUES`
  - Alternatively, running in single-stream mode (`USE_SINGLE_STREAM=1`) may avoid this issue
    by launching all Transfers on a single stream instead of individual streams
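
The hints above can be combined into a single run. The preset name and environment variables come from this README; the exact argument order is an assumption, so check the usage output (run with no arguments) first:

```shell
# Run the peer-to-peer preset with all Transfers launched on one stream,
# sidestepping serialization on hardware queues.
# (Invocation form is an assumption; see TransferBench's usage output.)
USE_SINGLE_STREAM=1 ./TransferBench p2p
```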