# FlashAttention
This repository provides the official implementation of FlashAttention from the
following paper.

**FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness**  
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré  
Paper: https://arxiv.org/abs/2205.14135  
See also the IEEE Spectrum [article](https://spectrum.ieee.org/mlperf-rankings-2022) about our submission to the MLPerf 2.0 benchmark using FlashAttention.
![FlashAttention](assets/flashattn_banner.jpg)

#### Triton implementation of FlashAttention

Phil Tillet (OpenAI) has an experimental implementation of FlashAttention in Triton:
https://github.com/openai/triton/blob/master/python/tutorials/06-fused-attention.py  

As Triton is a higher-level language than CUDA, it might be easier to understand
and experiment with. The notation in the Triton implementation is also closer
to the one used in our paper.


## Alpha release (0.1)

To compile (requiring CUDA 11, NVCC, and a Turing or Ampere GPU):
```
python setup.py install
```

Interface: `src/flash_attention.py`
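
For a quick sense of the interface, here is a minimal usage sketch. The module name, argument names, and return signature below are assumptions based on `src/flash_attention.py`; check that file for the exact API.

```
# Hypothetical usage sketch -- see src/flash_attention.py for the actual API.
import torch
from src.flash_attention import FlashAttention  # assumes the repo root is on PYTHONPATH

attn = FlashAttention(attention_dropout=0.1)  # argument name is an assumption

# Packed qkv: (batch, seqlen, 3, nheads, headdim), fp16, on a CUDA device.
qkv = torch.randn(8, 512, 3, 12, 64, dtype=torch.float16, device='cuda')
out, _ = attn(qkv, causal=True)  # out: (batch, seqlen, nheads, headdim); second value assumed unused
```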

To run the benchmark against PyTorch standard attention: 
```
PYTHONPATH=$PWD python benchmarks/benchmark_flash_attention.py
```

FlashAttention currently supports (a runtime check mirroring this list is sketched below):
1. Turing or Ampere GPUs (e.g., A100, RTX 3090, T4, RTX 2080).
2. fp16 and bf16 (bf16 requires Ampere GPUs).
3. Head dimensions 16, 32, 64, 128 (head dim 128 backward requires A100).
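
If it is useful, the constraints above can be guarded at runtime before calling into FlashAttention. This helper is our own sketch, not part of the library:

```
import torch

def check_flash_attn_support(dtype: torch.dtype, head_dim: int) -> None:
    """Runtime guard mirroring the support list above (a sketch, not library code)."""
    major, minor = torch.cuda.get_device_capability()
    if (major, minor) < (7, 5):
        raise RuntimeError("FlashAttention requires a Turing (SM75) or newer GPU")
    if dtype == torch.bfloat16 and major < 8:
        raise RuntimeError("bf16 requires an Ampere (SM80+) GPU")
    if dtype not in (torch.float16, torch.bfloat16):
        raise RuntimeError("only fp16 and bf16 are supported")
    if head_dim not in (16, 32, 64, 128):
        raise RuntimeError("head dimension must be 16, 32, 64, or 128")
    # Note: head dim 128 backward additionally requires an A100.
```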

Our tentative roadmap:
1. [Jun 2022] Make package pip-installable.
2. ~~[Jun 2022] Support SM86 GPUs (e.g., RTX 3080, 3090)~~[Done].
3. [Jun 2022] Refactor to use Cutlass.
4. ~~[Jun 2022] Support SM75 GPUs (e.g., T4)~~[Done].
5. ~~[Jun 2022] Support bf16~~[Done].
6. ~~[Jul 2022] Implement cross-attention~~[Done].
7. ~~[Jul 2022] Support head dimension 128~~[Done].
8. [Jul 2022] Support SM70 GPUs (V100).
9. [Aug 2022] Fuse rotary embedding.
10. [Aug 2022] Support attention bias (e.g., ALiBi, relative positional encoding).

## Speedup and Memory Savings

We present expected speedup (combined forward + backward pass) and memory savings from using FlashAttention against PyTorch standard attention, depending on sequence length, on different GPUs (speedup depends on memory bandwidth; we see more speedup on GPUs with slower memory).

We currently have benchmarks for these GPUs:
* [A100](#a100)
* [RTX 3090](#rtx-3090)
* [T4](#t4)

### A100

We display FlashAttention speedup using these parameters (similar to BERT-base):
* Batch size 8
* Head dimension 64
* 12 attention heads

Our graphs show sequence lengths between 128 and 4096 (when standard attention runs out of memory on an A100), but FlashAttention can scale up to sequence length 64K.

#### Speedup

![FlashAttention speedup](assets/flashattn_speedup.jpg)

We generally see a 2-4X speedup at sequence lengths between 128 and 4K, with more speedup when dropout and masking are used, since we fuse those kernels.
At sequence lengths popular with language models, such as 512 and 1K, the speedup reaches up to 4X with dropout and masking.

#### Memory

![FlashAttention memory](assets/flashattn_memory.jpg)

We show memory savings in this graph (note that the memory footprint is the same whether or not you use dropout or masking).
Memory savings are proportional to sequence length, since standard attention uses memory quadratic in sequence length, whereas FlashAttention's memory is linear in sequence length.
We see 10X memory savings at sequence length 2K, and 20X at 4K.
As a result, FlashAttention can scale to much longer sequence lengths.
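
As a back-of-the-envelope check of the quadratic-vs-linear claim, consider the BERT-base-like setting above at sequence length 4K (fp16; the exact per-head state FlashAttention keeps is simplified here):

```
batch, heads, seqlen, headdim = 8, 12, 4096, 64
bytes_fp16 = 2

# Standard attention materializes a (seqlen x seqlen) attention matrix
# per batch element and head: quadratic in sequence length.
attn_matrix = batch * heads * seqlen * seqlen * bytes_fp16          # ~3.2 GB

# FlashAttention stores only O(seqlen) state per head (output plus
# softmax statistics): linear in sequence length.
linear_state = batch * heads * seqlen * (headdim + 2) * bytes_fp16  # ~52 MB
```

Doubling the sequence length quadruples the first quantity but only doubles the second, which is why the savings grow with sequence length.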

#### Head Dimension 128

![FlashAttention speedup, head dimension 128](assets/flashattn_speedup_a100_d128.jpg)

We show speedup with head dimension 128, here with batch size 16 and 12 heads.
Speedup is less than with the smaller head sizes, since we have to make the block size smaller in the tiling.
But speedup is still significant, especially with a causal mask.

### RTX 3090

For the RTX 3090, we use batch size 12 with 12 attention heads.
Memory savings are the same as on an A100, so we'll only show speedup here.

![FlashAttention speedup RTX 3090](assets/flashattn_speedup_3090.jpg)

We see slightly higher speedups (between 2.5-4.5x) on the RTX 3090, since the GDDR6X memory bandwidth is lower than that of A100 HBM (~900 GB/s vs. ~1.5 TB/s).

### T4

We again use batch size 12 with 12 attention heads.

![FlashAttention speedup T4](assets/flashattn_speedup_t4.jpg)

T4 SRAM is smaller than that of the newer GPUs (64 KB), so we see less speedup (we need to make the block sizes smaller, so we end up doing more reads/writes).
This matches the IO complexity analysis from section 3.2 of [our paper](https://arxiv.org/abs/2205.14135).
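
For reference, the bounds from that analysis, with sequence length N, head dimension d, and SRAM size M, are:

```
% HBM accesses (N: sequence length, d: head dimension, M: SRAM size)
\text{Standard attention:} \quad \Theta(Nd + N^2)
\text{FlashAttention:}     \quad O(N^2 d^2 M^{-1})
```

A smaller M, as on the T4, directly increases FlashAttention's HBM traffic, which is the effect visible in the graph above.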

T4 GPUs are commonly used for inference, so we also measure speedup on the forward pass only (note that these are not directly comparable to the graphs above):

![FlashAttention speedup T4 fwd](assets/flashattn_speedup_t4_fwd.jpg)

We see speedups between 2.5x and 4.5x on the forward pass.

## When you encounter issues

This alpha release of FlashAttention contains code written for a research
project to validate ideas on speeding up attention. 
We have tested it on several models (BERT, GPT2, ViT). 
However, there might still be bugs in the implementation that we hope to iron
out in the next few months.

If you encounter bugs, please open a GitHub issue!

## Acknowledgments
Our implementation uses Apex's
[FMHA](https://github.com/NVIDIA/apex/tree/master/apex/contrib/csrc/fmha) code
as a starting point.

We thank [Young-Jun Ko](https://yjk21.github.io/) for the in-depth explanation of his FMHA implementation
and for his thoughtful answers to our questions about CUDA.

## Citation
If you use this codebase, or otherwise find our work valuable, please cite:
```
@article{dao2022flashattention,
  title={FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness},
  author={Dao, Tri and Fu, Daniel Y. and Ermon, Stefano and Rudra, Atri and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2205.14135},
  year={2022}
}
```