# Nunchaku

Nunchaku is an inference engine designed for 4-bit diffusion models, as demonstrated in our paper [SVDQuant](http://arxiv.org/abs/2411.05007). Please check [DeepCompressor](https://github.com/mit-han-lab/deepcompressor) for the quantization library.

### [Paper](http://arxiv.org/abs/2411.05007) | [Project](https://hanlab.mit.edu/projects/svdquant) | [Blog](https://hanlab.mit.edu/blog/svdquant) | [Demo](https://svdquant.mit.edu)

- **[2025-02-04]** **🚀 4-bit [FLUX.1-tools](https://blackforestlabs.ai/flux-1-tools/) is here!** Enjoy a **2-3× speedup** over the original models. Check out the [examples](./examples) for usage. **Gradio demo and ComfyUI integration are coming soon!**
- **[2025-01-23]** 🚀 **4-bit [SANA](https://nvlabs.github.io/Sana/) support is here!** Experience a 2-3× speedup compared to the 16-bit model. Check out the [usage example](./examples/sana_1600m_pag.py) and the [deployment guide](app/sana/t2i) for more details. Explore our live demo at [svdquant.mit.edu](https://svdquant.mit.edu)!
- **[2025-01-22]** 🎉 [**SVDQuant**](http://arxiv.org/abs/2411.05007) has been accepted to **ICLR 2025**!
- **[2024-12-08]** Added [ComfyUI](https://github.com/comfyanonymous/ComfyUI) support. Please check [comfyui/README.md](comfyui/README.md) for usage instructions.
- **[2024-11-07]** 🔥 Our latest **W4A4** diffusion model quantization work [**SVDQuant**](https://hanlab.mit.edu/projects/svdquant) is publicly released! Check [**DeepCompressor**](https://github.com/mit-han-lab/deepcompressor) for the quantization library.

![teaser](./assets/teaser.jpg)
SVDQuant is a post-training quantization technique for 4-bit weights and activations that maintains visual fidelity well. On the 12B FLUX.1-dev, it achieves a 3.6× memory reduction compared to the BF16 model. By eliminating CPU offloading, it delivers an 8.7× speedup over the 16-bit model on a 16GB laptop 4090 GPU, running 3× faster than the NF4 W4A16 baseline. On PixArt-Σ, it demonstrates significantly superior visual quality over other W4A4 and even W4A8 baselines. "E2E" denotes end-to-end latency, including the text encoder and VAE decoder.

**SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models**<br>
[Muyang Li](https://lmxyy.me)\*, [Yujun Lin](https://yujunlin.com)\*, [Zhekai Zhang](https://hanlab.mit.edu/team/zhekai-zhang)\*, [Tianle Cai](https://www.tianle.website/#/), [Xiuyu Li](https://xiuyuli.com), [Junxian Guo](https://github.com/JerryGJX), [Enze Xie](https://xieenze.github.io), [Chenlin Meng](https://cs.stanford.edu/~chenlin/), [Jun-Yan Zhu](https://www.cs.cmu.edu/~junyanz/), and [Song Han](https://hanlab.mit.edu/songhan) <br>
*MIT, NVIDIA, CMU, Princeton, UC Berkeley, SJTU, and Pika Labs* <br>

<p align="center">
  <img src="assets/demo.gif" width="100%"/>
</p>

## Method

#### Quantization Method -- SVDQuant

![intuition](./assets/intuition.gif)Overview of SVDQuant. Stage 1: Originally, both the activation $\boldsymbol{X}$ and weights $\boldsymbol{W}$ contain outliers, making 4-bit quantization challenging. Stage 2: We migrate the outliers from activations to weights, resulting in the updated activation $\hat{\boldsymbol{X}}$ and weights $\hat{\boldsymbol{W}}$. While $\hat{\boldsymbol{X}}$ becomes easier to quantize, $\hat{\boldsymbol{W}}$ now becomes more difficult. Stage 3: SVDQuant further decomposes $\hat{\boldsymbol{W}}$ into a low-rank component $\boldsymbol{L}_1\boldsymbol{L}_2$ and a residual $\hat{\boldsymbol{W}}-\boldsymbol{L}_1\boldsymbol{L}_2$ with SVD. Thus, the quantization difficulty is alleviated by the low-rank branch, which runs at 16-bit precision.
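
The decomposition in Stage 3 can be illustrated with a few lines of PyTorch. This is only a sketch of the math: the rank, tensor shapes, and the `fake_quantize` helper below are illustrative assumptions, not Nunchaku's actual kernels or API.

```python
import torch

def fake_quantize(t: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Illustrative symmetric per-tensor fake quantization (a stand-in for real 4-bit kernels)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = t.abs().max() / qmax
    return torch.round(t / scale).clamp(-qmax - 1, qmax) * scale

# Hypothetical smoothed weight (Stage 2 output) and activation, laid out as (in, out) / (batch, in).
W_hat = torch.randn(1024, 1024)
X_hat = torch.randn(16, 1024)

# Stage 3: split W_hat into a rank-r branch L1 @ L2 and a residual R = W_hat - L1 @ L2.
r = 32  # example rank
U, S, Vh = torch.linalg.svd(W_hat, full_matrices=False)
L1 = U[:, :r] * S[:r]   # (in, r): low-rank "down projection"
L2 = Vh[:r, :]          # (r, out): low-rank "up projection"
R = W_hat - L1 @ L2     # residual, now with much smaller outliers

# Two-branch forward: 16-bit low-rank path + (fake-)quantized residual path.
out = (X_hat @ L1) @ L2 + fake_quantize(X_hat) @ fake_quantize(R)
```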

#### Nunchaku Engine Design

![engine](./assets/engine.jpg) (a) Naïvely running the low-rank branch with rank 32 introduces a 57% latency overhead due to the extra reads of 16-bit inputs in *Down Projection* and extra writes of 16-bit outputs in *Up Projection*. Nunchaku eliminates this overhead with kernel fusion. (b) The *Down Projection* and *Quantize* kernels use the same input, while the *Up Projection* and *4-Bit Compute* kernels share the same output. To reduce data movement, we fuse the first pair and the latter pair of kernels together.
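
Continuing the sketch above (and reusing its hypothetical `fake_quantize`, `L1`, `L2`, and `R`), the grouping behind this fusion can be written conceptually as follows. Real fusion happens inside Nunchaku's CUDA kernels; this Python version only shows which operations share an input or an output.

```python
def quantized_linear(x_hat, L1, L2, R):
    # Fused kernel 1: both operations read the same 16-bit input x_hat once.
    hidden = x_hat @ L1           # low-rank down projection
    x_q = fake_quantize(x_hat)    # activation quantization

    # Fused kernel 2: both operations accumulate into the same 16-bit output.
    out = x_q @ fake_quantize(R)  # 4-bit compute on the residual branch
    out = out + hidden @ L2       # low-rank up projection added to the same output
    return out
```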


## Performance

![efficiency](./assets/efficiency.jpg)SVDQuant reduces the model size of the 12B FLUX.1 by 3.6×. Additionally, Nunchaku further cuts the memory usage of the 16-bit model by 3.5× and delivers a 3.0× speedup over the NF4 W4A16 baseline on both desktop and laptop NVIDIA RTX 4090 GPUs. Remarkably, on the laptop 4090, it achieves a total 10.1× speedup by eliminating CPU offloading.

## Installation

**Note**:

*  Ensure your CUDA version is **≥ 12.2 on Linux** and **≥ 12.6 on Windows**.

*  For Windows users, please refer to [this issue](https://github.com/mit-han-lab/nunchaku/issues/6) for instructions, and upgrade your MSVC compiler to the latest version.

*  We currently support only NVIDIA GPUs with architectures sm_86 (Ampere: RTX 3090, A6000), sm_89 (Ada: RTX 4090), and sm_80 (A100). See [this issue](https://github.com/mit-han-lab/nunchaku/issues/1) for more details; a quick sanity-check snippet is shown below.
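
To quickly verify your setup before building, you can run a small PyTorch check like the one below. This is just a convenience sketch, not part of Nunchaku; note that `torch.version.cuda` reports the CUDA version your PyTorch build targets, while the toolkit used to compile `nunchaku` from source is the one reported by `nvcc --version`.

```python
import torch

# CUDA version the installed PyTorch was built against (>= 12.2 on Linux, >= 12.6 on Windows).
print("Torch CUDA version:", torch.version.cuda)

# Compute capability of GPU 0; supported architectures are
# sm_80 (A100), sm_86 (Ampere: RTX 3090, A6000), and sm_89 (Ada: RTX 4090).
major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: sm_{major}{minor}")
```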


1. Install dependencies:
	```shell
	conda create -n nunchaku python=3.11
	conda activate nunchaku
	pip install torch torchvision torchaudio
	pip install diffusers ninja wheel transformers accelerate sentencepiece protobuf
	pip install huggingface_hub peft opencv-python einops gradio spaces GPUtil
	```
	
2. Install the `nunchaku` package:
    Make sure you have `gcc/g++>=11`. If you don't, you can install them via Conda:
  
	```shell
	conda install -c conda-forge gxx=11 gcc=11
	```
	
	Then build the package from source:
	```shell
	git clone https://github.com/mit-han-lab/nunchaku.git
	cd nunchaku
	git submodule init
	git submodule update
	pip install -e .
	```

## Usage Example

In [examples](examples), we provide minimal scripts for running INT4 [FLUX.1](https://github.com/black-forest-labs/flux) and [Sana](https://github.com/NVlabs/Sana) models with Nunchaku. For example, the [script](examples/flux.1-dev.py) for [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) is as follows:

```python
import torch
from diffusers import FluxPipeline

from nunchaku.models.transformer_flux import NunchakuFluxTransformer2dModel

transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-dev")
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
image = pipeline("A cat holding a sign that says hello world", num_inference_steps=50, guidance_scale=3.5).images[0]
image.save("flux.1-dev.png")
```

`nunchaku` shares the same API as [diffusers](https://github.com/huggingface/diffusers) and can be used in a similar way.
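
Since the API mirrors diffusers, the same pattern extends to the other supported checkpoints. For instance, a sketch for FLUX.1-schnell would look like the following; the `mit-han-lab/svdq-int4-flux.1-schnell` model id and the few-step, guidance-free schedule are assumptions based on the schnell defaults, so adjust them to the actual released checkpoint.

```python
import torch
from diffusers import FluxPipeline

from nunchaku.models.transformer_flux import NunchakuFluxTransformer2dModel

# Assumed repo id for the INT4 schnell transformer; check the released checkpoints for the exact name.
transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-schnell")
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
# FLUX.1-schnell is distilled for few-step sampling, so guidance is disabled and steps kept low.
image = pipeline("A cat holding a sign that says hello world", num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("flux.1-schnell.png")
```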

## ComfyUI

Please refer to [comfyui/README.md](comfyui/README.md) for usage in [ComfyUI](https://github.com/comfyanonymous/ComfyUI).

## Gradio Demos

### FLUX.1 Models

#### Text-to-Image

```shell
cd app/flux.1/t2i
python run_gradio.py
```

* By default, the demo uses the FLUX.1-schnell model. To switch to the FLUX.1-dev model, use `-m dev`.
* By default, the Gemma-2B model is loaded as a safety checker. To disable this feature and save GPU memory, use `--no-safety-checker`.
* To further reduce GPU memory usage, you can enable the W4A16 text encoder by specifying `--use-qencoder`.
* By default, only the INT4 DiT is loaded. Use `-p int4 bf16` to add a BF16 DiT for side-by-side comparison, or `-p bf16` to load only the BF16 model.

#### Sketch-to-Image

```shell
cd app/flux.1/i2i
python run_gradio.py
```

* Similarly, the demo loads the Gemma-2B model as a safety checker by default. To disable this feature, use `--no-safety-checker`.
* To further reduce GPU memory usage, you can enable the W4A16 text encoder by specifying `--use-qencoder`.
* By default, we use our INT4 model. Use `-p bf16` to switch to the BF16 model.

### Sana

#### Text-to-Image

```shell
cd app/sana/t2i
python run_gradio.py
```

## Benchmark

Please refer to [app/flux.1/t2i/README.md](app/flux.1/t2i/README.md) for instructions on reproducing our paper's quality results and benchmarking inference latency on FLUX.1 models.

## Roadmap

- [ ] Easy installation
- [x] ComfyUI node
- [ ] Customized LoRA conversion instructions
- [ ] Customized model quantization instructions
- [x] FLUX.1 tools support
- [ ] Modularization
- [ ] IP-Adapter integration
- [ ] Video model support
- [ ] Metal backend

## Citation

If you find `nunchaku` useful or relevant to your research, please cite our paper:

```bibtex
@inproceedings{
  li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```

## Related Projects

* [Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models](https://arxiv.org/abs/2211.02048), NeurIPS 2022 & T-PAMI 2023
* [SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models](https://arxiv.org/abs/2211.10438), ICML 2023
* [Q-Diffusion: Quantizing Diffusion Models](https://arxiv.org/abs/2302.04304), ICCV 2023
* [AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration](https://arxiv.org/abs/2306.00978), MLSys 2024
* [DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models](https://arxiv.org/abs/2402.19481), CVPR 2024
* [QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving](https://arxiv.org/abs/2405.04532), ArXiv 2024
* [SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers](https://arxiv.org/abs/2410.10629), ICLR 2025

## Acknowledgments

We thank MIT-IBM Watson AI Lab, MIT and Amazon Science Hub, MIT AI Hardware Program, National Science Foundation, Packard Foundation, Dell, LG, Hyundai, and Samsung for supporting this research. We thank NVIDIA for donating the DGX server.

We use [img2img-turbo](https://github.com/GaParmar/img2img-turbo) to train the sketch-to-image LoRA. Our text-to-image and sketch-to-image UIs are built upon [playground-v2.5](https://huggingface.co/spaces/playgroundai/playground-v2.5/blob/main/app.py) and [img2img-turbo](https://github.com/GaParmar/img2img-turbo/blob/main/gradio_sketch2image.py), respectively. Our safety checker is borrowed from [hart](https://github.com/mit-han-lab/hart).

Nunchaku is also inspired by many open-source libraries, including (but not limited to) [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM), [vLLM](https://github.com/vllm-project/vllm), [QServe](https://github.com/mit-han-lab/qserve), [AWQ](https://github.com/mit-han-lab/llm-awq), [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), and [Atom](https://github.com/efeslab/Atom).