# Composable Kernel

> [!NOTE]
> The published documentation is available at [Composable Kernel](https://rocm.docs.amd.com/projects/composable_kernel/en/latest/) in an organized, easy-to-read format, with search and a table of contents. The documentation source files reside in the `docs` folder of this repository. As with all ROCm projects, the documentation is open source. For more information on contributing to the documentation, see [Contribute to ROCm documentation](https://rocm.docs.amd.com/en/latest/contribute/contributing.html).

The Composable Kernel (CK) library provides a programming model for writing performance-critical
kernels for machine learning workloads across multiple architectures (GPUs, CPUs, etc.). The CK library
uses general-purpose kernel languages, such as HIP C++.

CK uses two concepts to achieve performance portability and code maintainability:

* A tile-based programming model
* Algorithm complexity reduction for complex machine learning (ML) operators. This uses an innovative
   technique called *Tensor Coordinate Transformation*.

![ALT](/docs/data/ck_component.png "CK Components")

The current CK library is structured into four layers:

* Templated Tile Operators
* Templated Kernel and Invoker
* Instantiated Kernel and Invoker
* Client API

![ALT](/docs/data/ck_layer.png "CK Layers")

## General information

* [CK supported operations](include/ck/README.md)
* [CK Tile supported operations](include/ck_tile/README.md)
* [CK wrapper](client_example/25_wrapper/README.md)
* [CK codegen](codegen/README.md)
* [CK profiler](profiler/README.md)
* [Examples (Custom use of CK supported operations)](example/README.md)
* [Client examples (Use of CK supported operations with instance factory)](client_example/README.md)
* [Terminology](/TERMINOLOGY.md)
* [Contributors](/CONTRIBUTORS.md)

CK is released under the **[MIT license](/LICENSE)**.

## Building CK

We recommend building CK inside Docker containers, which include all necessary packages. Pre-built
Docker images are available on [DockerHub](https://hub.docker.com/r/rocm/composable_kernel/tags).

1. To build a new Docker image, use the Dockerfile provided with the source code:

    ```bash
    DOCKER_BUILDKIT=1 docker build -t ck:latest -f Dockerfile .
    ```

2. Launch the Docker container:

    ```bash
    docker run                                     \
    -it                                            \
    --privileged                                   \
    --group-add sudo                               \
    -w /root/workspace                             \
    -v ${PATH_TO_LOCAL_WORKSPACE}:/root/workspace  \
    ck:latest                                      \
    /bin/bash
    ```

3. Clone CK source code from the GitHub repository and start the build:

    ```bash
    git clone https://github.com/ROCm/composable_kernel.git && \
    cd composable_kernel && \
    mkdir build && \
    cd build
    ```

    You must set the `GPU_TARGETS` macro to specify the GPU target architecture(s) you want
    to run CK on. You can specify a single architecture or multiple architectures. If you specify
    multiple architectures, use a semicolon between each; for example, `gfx908;gfx90a;gfx940`.

    ```bash
    cmake                                                                                             \
    -D CMAKE_PREFIX_PATH=/opt/rocm                                                                    \
    -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc                                                         \
    -D CMAKE_BUILD_TYPE=Release                                                                       \
    -D GPU_TARGETS="gfx908;gfx90a"                                                                    \
    ..
    ```

    If you don't set `GPU_TARGETS` on the cmake command line, CK is built for all GPU targets
    supported by the current compiler (this may take a long time). Tests and examples are only
    built if you set `GPU_TARGETS` on the cmake command line.

    NOTE: If you set `GPU_TARGETS` to a list of architectures, the build only works if the
    architectures are similar, for example `gfx908;gfx90a` or `gfx1100;gfx1101;gfx1102`. If you
    want to build the library for a list of dissimilar architectures, use the `GPU_ARCHS` build
    argument instead, for example `GPU_ARCHS="gfx908;gfx1030;gfx1100;gfx942"`.
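
    A configuration targeting a mixed set of architectures via `GPU_ARCHS` might then look like the
    following sketch (based on the cmake invocation shown above; adjust the architecture list to the
    GPUs you actually target):

    ```bash
    cmake                                                                                             \
    -D CMAKE_PREFIX_PATH=/opt/rocm                                                                    \
    -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc                                                         \
    -D CMAKE_BUILD_TYPE=Release                                                                       \
    -D GPU_ARCHS="gfx908;gfx1030;gfx1100;gfx942"                                                      \
    ..
    ```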

4. Build the entire CK library:

    ```bash
    make -j
    ```

5. Install CK:

    ```bash
    make -j install
    ```

## Building grouped GEMM assets for MI300X

To target MI300X, add the following flag to the CMake configuration step:

```bash
-D GPU_TARGETS="gfx940;gfx941;gfx942"
```

The following commands assume that you are in the **build** directory. 

Build the relevant grouped GEMM instances (part of the whole GEMM operations library):

```bash
make -j device_grouped_gemm_instance
```

```bash
make -j device_gemm_splitk_instance
```

Static GEMM operations library:

```bash
make -j device_gemm_operations
```

Other libraries linked statically to the grouped_gemm CK implementation:
```bash
make -j utility
```

```bash
make -j device_contraction_operations
```

```bash
make -j device_reduction_operations
```

Build the tests:

```bash
make -j test_grouped_gemm
```

Run the tests:

```bash
bin/test_grouped_gemm_interface
```

```bash
bin/test_grouped_gemm_splitk
```

## Installing the grouped GEMM assets

If the CK repo is cloned into the target container, follow these steps to install the locally compiled CK library:

 * Modify the generated Makefile so that the absolute path to CMake is removed from the install tasks; for example,
 replace the path `/usr/local/lib/python3.10/dist-packages/cmake/data/bin/cmake` with `@$(CMAKE_COMMAND)`.

 * Ensure that `@$(CMAKE_COMMAND)` points to the CMake path on your system, for example `/opt/conda/envs/py_3.9/bin/cmake`.
 You can find the CMake install path with the command `which cmake`.

 * Add a new `install-no-build` target that does not invoke a full build and only installs what has been built so far:
 ```makefile
 # Special rule for the target install without building
install-no-build:
	@$(CMAKE_COMMAND) -E cmake_echo_color "--switch=$(COLOR)" --cyan "Install the project..."
	@$(CMAKE_COMMAND) -P cmake_install.cmake
.PHONY : install-no-build
 ```
    Note that you need to build the whole CK library once first; otherwise, the install script fails.

 * Run the `install-no-build` target:
 ```bash
 make -j install-no-build
 ```



### Building and running unit tests

From the root directory, run:

```bash
cmake -S test/utility/ -B build_test -DCMAKE_VERBOSE_MAKEFILE=ON
```

```bash
cmake --build build_test
```

## Optional post-install steps

* Build examples and tests:

    ```bash
    make -j examples tests
    ```

* Build and run all examples and tests:

    ```bash
    make -j check
    ```

    You can find instructions for running each individual example in [example](/example).

* Build ckProfiler:

    ```bash
    make -j ckProfiler
    ```

    You can find instructions for running ckProfiler in [profiler](/profiler).

* Build our documentation locally:

    ```bash
    cd docs
    pip3 install -r sphinx/requirements.txt
    python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
    ```

Note the `-j` option for building with multiple threads in parallel, which speeds up the build significantly.
However, `-j` launches an unlimited number of threads, which can cause the build to run out of memory and
crash. On average, you should expect each thread to use about 2 GB of RAM.
Depending on the number of CPU cores and the amount of RAM on your system, you may want to
limit the number of threads. For example, if you have a 128-core CPU and 128 GB of RAM, it's advisable to use `-j32`.
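
As a rough heuristic (a sketch only, not an official recommendation), you can derive a job count from the
total RAM, assuming roughly 2 GB per compile job:

```bash
# Use at most (total RAM in GB) / 2 parallel jobs, assuming ~2 GB per job
make -j"$(( $(free -g | awk '/^Mem:/{print $2}') / 2 ))"
```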

Additional cmake flags can be used to significantly speed up the build (see the example after this list):

* `DTYPES` (default is not set) can be set to any subset of "fp64;fp32;fp16;fp8;bf16;int8" to build
  instances of select data types only. The main default data types are fp32 and fp16; you can safely skip
  other data types.

* `DL_KERNELS` (default is OFF) must be set to ON in order to build instances, such as `gemm_dl` or
  `batched_gemm_multi_d_dl`. These instances are useful on architectures like NAVI2x, as most
  other platforms have faster instances, such as `xdl` or `wmma`, available.

* `CK_USE_FP8_ON_UNSUPPORTED_ARCH` (default is OFF) must be set to ON in order to build fp8 instances,
  such as `gemm_universal`, `gemm_universal_streamk`, and `gemm_multiply_multiply`, for GPU targets that do
  not have native fp8 support, such as gfx908 or gfx90a. These instances are useful on architectures like
  MI100/MI200 for functional support only.
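
For example, a configuration that builds only fp16/fp32 instances and enables the DL kernels for a NAVI2x
target could look like the following sketch (the flag combination is illustrative; adjust it to your needs):

```bash
cmake                                                                                             \
-D CMAKE_PREFIX_PATH=/opt/rocm                                                                    \
-D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc                                                         \
-D CMAKE_BUILD_TYPE=Release                                                                       \
-D GPU_TARGETS="gfx1030"                                                                          \
-D DTYPES="fp16;fp32"                                                                             \
-D DL_KERNELS=ON                                                                                  \
..
```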

## Using sccache for building

The default CK Docker images come with a pre-installed version of sccache that supports clang used as a
HIP compiler (`-x hip`). Using sccache can reduce rebuild times from hours to 1-2 minutes. To invoke
sccache, run:

```bash
sccache --start-server
```

Then add the following flags to the cmake command line:

```bash
-DCMAKE_CXX_COMPILER_LAUNCHER=sccache -DCMAKE_C_COMPILER_LAUNCHER=sccache
```

You may need to clean up the build folder and repeat the cmake and make steps in order to take
advantage of sccache during subsequent builds.
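
Putting it together, a configure command that uses sccache might look like the following sketch (based on
the configuration example shown earlier, with the launcher flags added):

```bash
cmake                                                                                             \
-D CMAKE_PREFIX_PATH=/opt/rocm                                                                    \
-D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc                                                         \
-D CMAKE_CXX_COMPILER_LAUNCHER=sccache                                                            \
-D CMAKE_C_COMPILER_LAUNCHER=sccache                                                              \
-D CMAKE_BUILD_TYPE=Release                                                                       \
-D GPU_TARGETS="gfx908;gfx90a"                                                                    \
..
```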

## Using CK as a pre-built kernel library

You can find instructions for using CK as a pre-built kernel library in [client_example](/client_example).

## Contributing to CK

When you contribute to CK, make sure you run `clang-format` on all changed files. We highly
recommend using git hooks that are managed by the `pre-commit` framework. To install hooks, run:

```bash
sudo script/install_precommit.sh
```

With this approach, `pre-commit` adds the appropriate hooks to your local repository and
automatically runs `clang-format` (and possibly additional checks) before any commit is created.
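
To run the same checks manually across the whole tree (for example, before opening a pull request), the
`pre-commit` tool supports this directly; a minimal sketch, assuming `pre-commit` is installed and the
hooks are configured as described above:

```bash
pre-commit run --all-files
```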

If you need to uninstall hooks from the repository, you can do so by running the following command:

```bash
script/uninstall_precommit.sh
```

If you need to temporarily disable pre-commit hooks, you can add the `--no-verify` option to the
`git commit` command.