# AutoAWQ

<p align="center">
| <a href="https://github.com/casper-hansen/AutoAWQ/issues/32"><b>Roadmap</b></a> | <a href="https://github.com/casper-hansen/AutoAWQ/tree/main/examples"><b>Examples</b></a> | <a href="https://github.com/casper-hansen/AutoAWQ/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22"><b>Issues: Help Wanted</b></a> |

</p>
<p align="center">
    <a href="https://huggingface.co/models?search=awq">
        <img alt="Huggingface - Models" src="https://img.shields.io/badge/🤗_600+_models_available-8A2BE2">
    </a>
    <a href="https://github.com/casper-hansen/AutoAWQ/releases">
        <img alt="GitHub - Releases" src="https://img.shields.io/github/release/casper-hansen/AutoAWQ.svg">
    </a>
    <a href="https://pypi.org/project/autoawq/">
        <img alt="PyPI - Downloads" src="https://static.pepy.tech/badge/autoawq/month">
    </a>
</p>

AutoAWQ is an easy-to-use package for 4-bit quantized models. Compared to FP16, AutoAWQ speeds up models by 2x while reducing memory requirements by 3x. AutoAWQ implements the Activation-aware Weight Quantization (AWQ) algorithm for quantizing LLMs, building on and extending the [original work](https://github.com/mit-han-lab/llm-awq) from MIT.

*Latest News* 🔥
- [2023/10] Mistral (Fused Modules), Bigcode, Turing support, Memory Bug Fix (Saves 2GB VRAM)
- [2023/09] 1.6x-2.5x speed boost on fused models (now including MPT and Falcon).
- [2023/09] Multi-GPU support, bug fixes, and better benchmark scripts available
- [2023/08] PyPI package released and AutoModel class available

## Install

Requirements (a quick way to check them is sketched below):
- Compute Capability 7.5 (sm75). Turing and later architectures are supported.
- CUDA Toolkit 11.8 and later.
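
If PyTorch is already installed, the following sketch (not part of the official install steps) prints the values to compare against the requirements above:

```python
import torch

# GPU compute capability, e.g. (7, 5) for Turing or (8, 6) for an RTX 3090;
# AutoAWQ needs (7, 5) or newer.
print(torch.cuda.get_device_capability())

# CUDA version PyTorch was built against, e.g. '11.8'
print(torch.version.cuda)
```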

---

Install:
- Use pip to install AutoAWQ:

```
pip install autoawq
```

### Using conda

CUDA dependencies can be hard to manage sometimes. It is recommended to use conda with AutoAWQ:

```
conda create --name autoawq python=3.10 -y
conda activate autoawq
conda install pytorch=2.0.1 torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install autoawq
```

### Build from source

<details>

<summary>Build AutoAWQ from scratch</summary>

Building can take around 10 minutes, so consider downloading your model while AutoAWQ compiles.

```
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip install -e .
```

</details>

## Supported models

The detailed support list:

| Models   | Sizes                       |
| ---------| ----------------------------|
| LLaMA-2  | 7B/13B/70B                  |
| LLaMA    | 7B/13B/30B/65B              |
| Mistral  | 7B                          |
| Vicuna   | 7B/13B                      |
| MPT      | 7B/30B                      |
| Falcon   | 7B/40B                      |
| OPT      | 125m/1.3B/2.7B/6.7B/13B/30B |
| Bloom    | 560m/3B/7B                  |
| GPTJ     | 6.7B                        |
| Aquila   | 7B                          |
| Aquila2  | 7B/34B                      |

## Usage

Under [examples](examples), you can find scripts that show how to quantize, run inference on, and benchmark AutoAWQ models.

### INT4 GEMM vs INT4 GEMV vs FP16

There are two versions of AWQ: GEMM and GEMV. Both names refer to how the matrix multiplication runs under the hood. We suggest the following (a configuration sketch follows this list):

- GEMV (quantized): Best for small context, batch size 1, highest number of tokens/s.
- GEMM (quantized): Best for larger context, up to batch size 8, faster than GEMV on batch size > 1, slower than GEMV on batch size = 1.
- FP16 (non-quantized): Best for large batch sizes of 8 or larger, highest throughput. We recommend [TGI](https://github.com/huggingface/text-generation-inference) or [vLLM](https://github.com/vllm-project/vllm).
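
The kernel version is chosen at quantization time through the `version` key of `quant_config` (the same dictionary used in the Quantization example below). A minimal sketch:

```python
# Minimal sketch: only the `version` key differs between the two variants.
# A checkpoint quantized with one version is served with that version at inference time.
gemm_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
gemv_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMV"}
```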

### Examples

More examples can be found in the [examples directory](examples).

<details>

<summary>Quantization</summary>

Expect this to take 10-15 minutes on smaller 7B models, and around 1 hour for 70B models.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = 'lmsys/vicuna-7b-v1.5'
quant_path = 'vicuna-7b-v1.5-awq'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

</details>

<details>

<summary>Inference</summary>

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

quant_path = "casperhansen/vicuna-7b-v1.5-awq"

# Load model
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: {prompt}
ASSISTANT:"""

tokens = tokenizer(
    prompt_template.format(prompt="How are you today?"), 
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens, 
    streamer=streamer,
    max_new_tokens=512
)
```

</details>

<details>

<summary>AutoAWQForCausalLM.from_quantized</summary>

- `quant_path`: Path to folder containing model files.
- `quant_filename`: The filename of the model weights or the `index.json` file.
- `max_new_tokens`: The max sequence length, used to allocate kv-cache for fused models.
- `fuse_layers`: Whether or not to use fused layers.
- `batch_size`: The batch size to initialize the AWQ model with.
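
For reference, a hypothetical call that sets these parameters explicitly (the path and values are placeholders, not defaults):

```python
from awq import AutoAWQForCausalLM

# Illustrative values only; adjust to your model and hardware.
model = AutoAWQForCausalLM.from_quantized(
    "casperhansen/vicuna-7b-v1.5-awq",  # quant_path
    max_new_tokens=512,                 # kv-cache allocation for fused modules
    fuse_layers=True,
    batch_size=1,
)
```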

</details>

## Benchmarks

### Vicuna 7B (LLaMA-2)

- Note: Blazing fast generation, slow context processing
- GPU: NVIDIA GeForce RTX 3090
- Version: GEMV
- Command: `python examples/benchmark.py --model_path casperhansen/vicuna-7b-v1.5-awq-gemv`

|   Batch Size |   Prefill Length |   Decode Length |   Prefill tokens/s |   Decode tokens/s | Memory (VRAM)    |
|-------------:|-----------------:|----------------:|-------------------:|------------------:|:-----------------|
|            1 |               32 |              32 |           231.393  |           153.632 | 4.66 GB (19.68%) |
|            1 |               64 |              64 |           233.909  |           154.475 | 4.66 GB (19.68%) |
|            1 |              128 |             128 |           233.145  |           152.133 | 4.66 GB (19.68%) |
|            1 |              256 |             256 |           228.562  |           147.692 | 4.67 GB (19.72%) |
|            1 |              512 |             512 |           228.914  |           139.179 | 4.80 GB (20.26%) |
|            1 |             1024 |            1024 |           227.393  |           125.058 | 5.56 GB (23.48%) |
|            1 |             2048 |            2048 |           225.736  |           123.228 | 8.08 GB (34.09%) |

- Note: Fast generation, fast context processing
- GPU: NVIDIA GeForce RTX 3090
- Version: GEMM
- Command: `python examples/benchmark.py --model_path casperhansen/vicuna-7b-v1.5-awq`

|   Batch Size |   Prefill Length |   Decode Length |   Prefill tokens/s |   Decode tokens/s | Memory (VRAM)    |
|-------------:|-----------------:|----------------:|-------------------:|------------------:|:-----------------|
|            1 |               32 |              32 |            521.444 |           126.51  | 4.55 GB (19.21%) |
|            1 |               64 |              64 |           2618.88  |           125.428 | 4.57 GB (19.31%) |
|            1 |              128 |             128 |           2808.09  |           123.865 | 4.61 GB (19.44%) |
|            1 |              256 |             256 |           2807.46  |           120.779 | 4.67 GB (19.72%) |
|            1 |              512 |             512 |           2769.9   |           115.08  | 4.80 GB (20.26%) |
|            1 |             1024 |            1024 |           2640.95  |           105.493 | 5.56 GB (23.48%) |
|            1 |             2048 |            2048 |           2341.36  |           104.188 | 8.08 GB (34.09%) |

### MPT 7B

- Note: Blazing fast generation, slow context processing
- GPU: NVIDIA GeForce RTX 3090
- Command: `python examples/benchmark.py --model_path casperhansen/mpt-7b-8k-chat-awq-gemv`
- Version: GEMV

|   Batch Size |   Prefill Length |   Decode Length |   Prefill tokens/s |   Decode tokens/s | Memory (VRAM)    |
|-------------:|-----------------:|----------------:|-------------------:|------------------:|:-----------------|
|            1 |               32 |              32 |            187.332 |           136.765 | 3.65 GB (15.42%) |
|            1 |               64 |              64 |            241.026 |           136.476 | 3.67 GB (15.48%) |
|            1 |              128 |             128 |            239.44  |           137.599 | 3.70 GB (15.61%) |
|            1 |              256 |             256 |            233.184 |           137.02  | 3.76 GB (15.88%) |
|            1 |              512 |             512 |            233.082 |           135.633 | 3.89 GB (16.41%) |
|            1 |             1024 |            1024 |            231.504 |           122.197 | 4.40 GB (18.57%) |
|            1 |             2048 |            2048 |            228.307 |           121.468 | 5.92 GB (24.98%) |

- Note: Fast generation, fast context processing
- GPU: NVIDIA GeForce RTX 3090
- Version: GEMM
- Command: `python examples/benchmark.py --model_path casperhansen/mpt-7b-8k-chat-awq`

|   Batch Size |   Prefill Length |   Decode Length |   Prefill tokens/s |   Decode tokens/s | Memory (VRAM)    |
|-------------:|-----------------:|----------------:|-------------------:|------------------:|:-----------------|
|            1 |               32 |              32 |            557.714 |           118.567 | 3.65 GB (15.42%) |
|            1 |               64 |              64 |           2752.9   |           120.772 | 3.67 GB (15.48%) |
|            1 |              128 |             128 |           2982.67  |           119.52  | 3.70 GB (15.61%) |
|            1 |              256 |             256 |           3009.16  |           116.911 | 3.76 GB (15.88%) |
|            1 |              512 |             512 |           2901.91  |           111.607 | 3.95 GB (16.68%) |
|            1 |             1024 |            1024 |           2718.68  |           102.623 | 4.40 GB (18.57%) |
|            1 |             2048 |            2048 |           2363.61  |           101.368 | 5.92 GB (24.98%) |

### Falcon 7B

- Note: Fast generation, fast context processing
- GPU: NVIDIA GeForce RTX 3090
- Command: `python examples/benchmark.py --model_path casperhansen/falcon-7b-awq --quant_file awq_model_w4_g64.pt`
- Version: GEMM

|   Batch Size |   Prefill Length |   Decode Length |   Prefill tokens/s |   Decode tokens/s | Memory (VRAM)    |
|-------------:|-----------------:|----------------:|-------------------:|------------------:|:-----------------|
|            1 |               32 |              32 |            466.826 |           95.1413 | 4.47 GB (18.88%) |
|            1 |               64 |              64 |           1920.61  |           94.5963 | 4.48 GB (18.92%) |
|            1 |              128 |             128 |           2406.1   |           94.793  | 4.48 GB (18.92%) |
|            1 |              256 |             256 |           2521.08  |           94.1144 | 4.48 GB (18.92%) |
|            1 |              512 |             512 |           2478.28  |           93.4123 | 4.48 GB (18.92%) |
|            1 |             1024 |            1024 |           2256.22  |           94.0237 | 4.69 GB (19.78%) |
|            1 |             2048 |            2048 |           1831.71  |           94.2032 | 6.83 GB (28.83%) |

### Aquila2 34B

- Note: Fast generation, fast context processing
- GPU: NVIDIA A100-SXM4-40GB
- Command: `python examples/benchmark.py --model_path casperhansen/aquilachat2-34b-awq --quant_file pytorch_model.bin.index.json`
- Version: GEMM

|   Batch Size |   Prefill Length |   Decode Length |   Prefill tokens/s |   Decode tokens/s | Memory (VRAM)     |
|-------------:|-----------------:|----------------:|-------------------:|------------------:|:------------------|
|            1 |               32 |              32 |            36.7505 |           23.423  | 18.26 GB (46.12%) |
|            1 |               64 |              64 |           516.544  |           23.3536 | 18.26 GB (46.12%) |
|            1 |              128 |             128 |           643.968  |           23.3803 | 18.26 GB (46.12%) |
|            1 |              256 |             256 |           736.236  |           23.389  | 18.34 GB (46.32%) |
|            1 |              512 |             512 |           829.405  |           23.3889 | 18.54 GB (46.84%) |
|            1 |             1024 |            1024 |           836.023  |           23.3757 | 18.95 GB (47.87%) |
|            1 |             2048 |            2048 |           802.632  |           23.3777 | 20.25 GB (51.15%) |
|            1 |             4096 |            4096 |           722.49   |           23.4252 | 25.38 GB (64.12%) |

## Reference

If you find AWQ useful or relevant to your research, please cite the [paper](https://arxiv.org/abs/2306.00978):

```
@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}
```