## CK docker hub

[Docker hub](https://hub.docker.com/r/rocm/composable_kernel)

## Why do I need this?

To make your life easier and bring all the Composable Kernel dependencies together in one place, we recommend using our docker images.

## So what is Composable Kernel?

The Composable Kernel (CK) library aims to provide a programming model for writing performance-critical kernels for machine learning workloads across multiple architectures (GPUs, CPUs, etc.) through general-purpose kernel languages such as HIP C++.

To get the CK library, clone the repository:

```
git clone https://github.com/ROCmSoftwarePlatform/composable_kernel.git
```

Run a docker container:

```
docker run                                                            \
-it                                                                   \
--privileged                                                          \
--group-add sudo                                                      \
-w /root/workspace                                                    \
-v ${PATH_TO_LOCAL_WORKSPACE}:/root/workspace                         \
rocm/composable_kernel:ck_ub20.04_rocm5.3_release                     \
/bin/bash
```

Then configure the CK build:

```
mkdir build && cd build

# You need to specify the target GPU architecture; the example below is for gfx908 and gfx90a
cmake                                                                                             \
-D CMAKE_PREFIX_PATH=/opt/rocm                                                                    \
-D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc                                                         \
-D CMAKE_CXX_FLAGS="-O3"                                                                          \
-D CMAKE_BUILD_TYPE=Release                                                                       \
-D GPU_TARGETS="gfx908;gfx90a"                                                                    \
..
```

and compile the examples and tests:

```
make -j examples tests
```

To run all the test cases, including both tests and examples, run:

```
make test
```

You can also run specific examples or tests, for example:

```
./bin/example_gemm_xdl_fp16
./bin/test_gemm_fp16
```

For more details, visit the [CK github repo](https://github.com/ROCmSoftwarePlatform/composable_kernel), [CK examples](https://github.com/ROCmSoftwarePlatform/composable_kernel/tree/develop/example) and [even more CK examples](https://github.com/ROCmSoftwarePlatform/composable_kernel/tree/develop/client_example).

## And what is inside?

The docker images have everything you need for running CK, including:

* [ROCm](https://www.amd.com/en/graphics/servers-solutions-rocm)
* [CMake](https://cmake.org/)
* [Compiler](https://github.com/RadeonOpenCompute/llvm-project)

## Which image is right for me?

Let's take a look at the image naming scheme, using "ck_ub20.04_rocm5.4_release" as an example. The tag breaks down as:

* "ck" - made for running Composable Kernel
* "ub20.04" - based on Ubuntu 20.04
* "rocm5.4" - ROCm platform version 5.4
* "release" - compiler version is release

So just pick the right image for your project dependencies and you're all set.
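
For example, to pull the image used later in this tutorial (one of the tags listed on the Docker Hub page above):

```
docker pull rocm/composable_kernel:ck_ub20.04_rocm5.3_release
```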

## DIY starts here

If you need to customize a docker image or just can't stop tinkering, feel free to adjust the [Dockerfile](https://github.com/ROCmSoftwarePlatform/composable_kernel/blob/develop/Dockerfile) to your needs.
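
A minimal custom build might look like this, assuming you run it from the repository root where the Dockerfile lives ("my_ck_image" is just an illustrative tag):

```
docker build -t my_ck_image .
```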

## License

CK is released under the MIT [license](https://github.com/ROCmSoftwarePlatform/composable_kernel/blob/develop/LICENSE).

## Motivation

This tutorial is aimed at engineers working with artificial intelligence and machine learning who would like to optimize their pipelines and squeeze out every drop of performance by adding the Composable Kernel (CK) library to their projects. We would like to make the CK library approachable, so the tutorial is not based on the latest release and doesn't include all the bleeding-edge features, but it will be reproducible now and in the future.

In this tutorial we will get an introduction to the CK library: we will build it and run some examples and tests, that is, a "Hello World" example. In future tutorials we will go deeper and broader and get familiar with other tools and ways to integrate CK into your project.

## Description

Modern AI technology solves more and more problems in all imaginable fields, but crafting fast and efficient workflows is still challenging. CK is one of the tools that make the AI heavy lifting as fast and efficient as possible. CK is a collection of optimized AI operator kernels and tools for creating new ones. The library has the components required for the majority of modern neural network architectures, including matrix multiplication, convolution, contraction, reduction, attention modules, a variety of activation functions, fused operators and many more.

So how do we (almost) reach the speed of light? CK's acceleration abilities are based on:

* Layered structure.
* Tile-based computation model.
* Tensor coordinate transformation.
* Use of hardware acceleration features.
* Support of low precision data types including fp16, bf16, int8 and int4.

If you are excited and want more technical details and benchmarking results, read this awesome blog [post](https://community.amd.com/t5/instinct-accelerators/amd-composable-kernel-library-efficient-fused-kernels-for-ai/ba-p/553224).

For more details visit our [github repo](https://github.com/ROCmSoftwarePlatform/composable_kernel).

## Hardware targets

The CK library fully supports the "gfx908" and "gfx90a" GPU architectures, while only some operators are supported on "gfx1030". Check the hardware you have at hand and decide on the target GPU architecture:

| GPU Target | AMD GPU |
| --- | --- |
| gfx908 | Radeon Instinct MI100 |
| gfx90a | Radeon Instinct MI210, MI250, MI250X |
| gfx1030 | Radeon PRO V620, W6800, W6800X, W6800X Duo, W6900X, RX 6800, RX 6800 XT, RX 6900 XT, RX 6900 XTX, RX 6950 XT |
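
If you are not sure which architecture your GPU reports, you can query it on a machine with ROCm installed, for example with the `rocminfo` tool shipped with ROCm:

```
/opt/rocm/bin/rocminfo | grep gfx
```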

There are also [cloud options](https://aws.amazon.com/ec2/instance-types/g4/) if you don't have an AMD GPU at hand.

## Build the library

First, let's clone the library and check out the tested version:

```
git clone https://github.com/ROCmSoftwarePlatform/composable_kernel.git
cd composable_kernel/
git checkout tutorial_hello_world
```

To make our lives easier, we prepared [docker images](https://hub.docker.com/r/rocm/composable_kernel) with all the necessary dependencies. Pick the right image and create a container. In this tutorial we use the "rocm/composable_kernel:ck_ub20.04_rocm5.3_release" image, which is based on Ubuntu 20.04 with ROCm v5.3 and the release version of the compiler.

If your current folder is ${HOME}, start the docker container with:

```
docker run  \
-it  \
--privileged  \
--group-add sudo  \
-w /root/workspace  \
-v ${HOME}:/root/workspace  \
rocm/composable_kernel:ck_ub20.04_rocm5.3_release  \
/bin/bash
```

If your current folder is different from ${HOME}, adjust the line `-v ${HOME}:/root/workspace` to fit your folder structure.
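
For example, to mount whatever directory you are currently in, you could replace the mount option with:

```
-v $(pwd):/root/workspace
```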

Inside the docker container, the current folder is "~/workspace" and the library lives at "~/workspace/composable_kernel". Navigate to the library:

```
cd composable_kernel/
```

Create and enter the "build" directory:

```
mkdir build && cd build
```

In the previous section we talked about target GPU architectures. Once you decide which one is right for you, run cmake with the matching GPU_TARGETS value:

```
cmake  \
-D CMAKE_PREFIX_PATH=/opt/rocm  \
-D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc  \
-D CMAKE_CXX_FLAGS="-O3"  \
-D CMAKE_BUILD_TYPE=Release  \
-D BUILD_DEV=OFF  \
-D GPU_TARGETS="gfx908;gfx90a;gfx1030" ..
```

If everything went well, the cmake run will end with:

```
-- Configuring done
-- Generating done
-- Build files have been written to: "/root/workspace/composable_kernel/build"
```

Finally, we can build the examples and tests:

```
make -j examples tests
```
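
Note that `make -j` with no number spawns unlimited parallel jobs, which can exhaust memory on smaller machines; capping it at the number of available cores is a safer choice:

```
make -j$(nproc) examples tests
```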

If everything goes smoothly, you'll see:

```
Scanning dependencies of target tests
[100%] Built target tests
```

## Run examples and tests

Examples are listed as test cases as well, so we can run all examples and tests with:

```
ctest
```

You can check the list of all tests by running:

```
ctest -N
```
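
To narrow the list down to tests matching a pattern, you can pipe it through `grep`, for example:

```
ctest -N | grep gemm
```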

We can also run them individually. Here is an example execution:

```
./bin/example_gemm_xdl_fp16 1 1 1
```

The arguments "1 1 1" mean that we want to run this example in the mode: verify results with CPU, initialize matrices with integers and benchmark the kernel execution. You can play around with these parameters and see how output and execution results change.
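
For instance, to skip the relatively slow CPU verification and only benchmark, you could pass 0 as the first argument (assuming, per the description above, that the flags are ordered as verification, initialization, timing, and that 0 turns an option off):

```
./bin/example_gemm_xdl_fp16 0 1 1
```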

If everything goes well and you have a device based on the gfx908 or gfx90a architecture, you should see something like:

```
a_m_k: dim 2, lengths {3840, 4096}, strides {4096, 1}
b_k_n: dim 2, lengths {4096, 4096}, strides {1, 4096}
c_m_n: dim 2, lengths {3840, 4096}, strides {4096, 1}
launch_and_time_kernel: grid_dim {480, 1, 1}, block_dim {256, 1, 1}
Warm up 1 time
Start running 10 times...
Perf: 1.10017 ms, 117.117 TFlops, 87.6854 GB/s, DeviceGemmXdl<256, 256, 128, 4, 8, 32, 32, 4, 2> NumPrefetch: 1, LoopScheduler: Default, PipelineVersion: v1
```
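
As a sanity check on the reported numbers: this GEMM multiplies a 3840 x 4096 matrix by a 4096 x 4096 matrix, which takes 2 * 3840 * 4096 * 4096 ≈ 1.29 * 10^11 floating-point operations; dividing by the measured 1.10017 ms gives ≈ 117.1 TFlops, matching the "Perf" line above.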

Meanwhile, running it on a gfx1030 device should result in:

```
a_m_k: dim 2, lengths {3840, 4096}, strides {4096, 1}
b_k_n: dim 2, lengths {4096, 4096}, strides {1, 4096}
c_m_n: dim 2, lengths {3840, 4096}, strides {4096, 1}
DeviceGemmXdl<256, 256, 128, 4, 8, 32, 32, 4, 2> NumPrefetch: 1, LoopScheduler: Default, PipelineVersion: v1 does not support this problem
```

But don't panic: some operators are supported on the gfx1030 architecture, so you can run an example like

```
./bin/example_gemm_dl_fp16 1 1 1
```

and it should produce output similar to:

```
a_m_k: dim 2, lengths {3840, 4096}, strides {1, 4096}
b_k_n: dim 2, lengths {4096, 4096}, strides {4096, 1}
c_m_n: dim 2, lengths {3840, 4096}, strides {4096, 1}
arg.a_grid_desc_k0_m0_m1_k1_{2048, 3840, 2}
arg.b_grid_desc_k0_n0_n1_k1_{2048, 4096, 2}
arg.c_grid_desc_m_n_{ 3840, 4096}
launch_and_time_kernel: grid_dim {960, 1, 1}, block_dim {256, 1, 1}
Warm up 1 time
Start running 10 times...
Perf: 3.65695 ms, 35.234 TFlops, 26.3797 GB/s, DeviceGemmDl<256, 128, 128, 16, 2, 4, 4, 1>
```

Or we can run an individual test:

```
ctest -R test_gemm_fp16
```

If everything goes well, you should see something like:

```
Start 121: test_gemm_fp16
1/1 Test #121: test_gemm_fp16 ...................   Passed   51.81 sec

100% tests passed, 0 tests failed out of 1
```

## Summary

In this tutorial we took a first look at the Composable Kernel library, built it on your system and ran some examples and tests. Stay tuned: in the next tutorial we will run kernels with different configurations to find the best one for your hardware and task.

P.S.: Don't forget to shut down the cloud instance if you launched one; you can surely find better ways to spend your money!