# ColoDiffusion: Stable Diffusion with Colossal-AI

*[Colossal-AI](https://github.com/hpcaitech/ColossalAI) provides a faster and lower-cost solution for pretraining and fine-tuning AIGC (AI-Generated Content) applications such as the [stable-diffusion](https://github.com/CompVis/stable-diffusion) model from [Stability AI](https://stability.ai/).*

We take advantage of [Colossal-AI](https://github.com/hpcaitech/ColossalAI) to exploit multiple optimization strategies, e.g. data parallelism, tensor parallelism, mixed precision & ZeRO, to scale the training to multiple GPUs.

## Stable Diffusion

[Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) is a latent text-to-image diffusion model.
Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.

<p id="diffusion_train" align="center">
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/diffusion_train.png" width=800/>
</p>

[Stable Diffusion with Colossal-AI](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion) provides **6.5x faster training and pretraining cost savings, and the hardware cost of fine-tuning can be almost 7x lower** (from RTX 3090/4090 24 GB to RTX 3050/2070 8 GB).

<p id="diffusion_demo" align="center">
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/diffusion_demo.png" width=800/>
</p>

## Requirements

A suitable [conda](https://conda.io/) environment named `ldm` can be created
and activated with:

```
conda env create -f environment.yaml
conda activate ldm
```

You can also update an existing [latent diffusion](https://github.com/CompVis/latent-diffusion) environment by running

```
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
pip install transformers==4.19.2 diffusers invisible-watermark
pip install -e .
```

### Install Lightning

```
git clone https://github.com/1SAA/lightning.git
cd lightning
git checkout strategy/colossalai
export PACKAGE_NAME=pytorch
pip install .
```

### Install [Colossal-AI v0.1.12](https://colossalai.org/download/) From Our Official Website

```
pip install colossalai==0.1.12+torch1.12cu11.3 -f https://release.colossalai.org
```

> This version is pinned because the latest update of [Lightning](https://github.com/Lightning-AI/lightning) introduced an interface incompatibility, which will be fixed in the near future.

## Download Pretrained Model Checkpoints

### stable-diffusion-v1-4

Our default model config uses the weights from [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4).

```
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
```

### stable-diffusion-v1-5 from RunwayML

If you want to use the latest [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) weights from RunwayML, download them with:

```
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```
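
After cloning, make sure the model section of your training config points at the local weights directory. A minimal sketch of what that might look like, assuming a `from_pretrained`-style key (the key name and nesting here are assumptions made for illustration; check `configs/train_colossalai.yaml` for the actual fields):

```
model:
  params:
    unet_config:
      params:
        from_pretrained: /path/to/stable-diffusion-v1-5   # assumed key; set to the directory you cloned
```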

## Dataset

The dataset is a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) from [LAION](https://laion.ai/). You should change `data.file_path` in `configs/train_colossalai.yaml` to point at your local copy, for example as sketched below.
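
A hedged sketch of the relevant section (`main.DataModuleFromConfig` is the data module used by the training entry point; the surrounding keys and values are illustrative, and the exact placement of `file_path` may differ in your copy of the config):

```
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 4                        # illustrative value
    num_workers: 4                       # illustrative value
    train:
      params:
        file_path: /path/to/laion-subset   # <- set this to your local data path
```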

## Training

We provide the script `train.sh` to run the training task, and two strategies in `configs`: `train_colossalai.yaml` and `train_ddp.yaml`.

For example, you can run the training with the Colossal-AI strategy by
```
python main.py --logdir /tmp/ -t -b configs/train_colossalai.yaml
```

- You can change `--logdir` to set where the log information and the last checkpoint are saved.

### Training config

You can change the training config in the YAML file; a sketch of the relevant section follows the list below.

- `accelerator`: accelerator type, default `'gpu'`
- `devices`: number of devices used for training, default 4
- `max_epochs`: maximum number of training epochs
- `precision`: whether to use fp16 for training, default 16; you must use fp16 if you want to apply the Colossal-AI strategy
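
For reference, a hedged sketch of how these options typically appear in the config's Lightning trainer section (values are illustrative, and the exact nesting may differ in your copy of `train_colossalai.yaml`):

```
lightning:
  trainer:
    accelerator: gpu
    devices: 4
    max_epochs: 2     # illustrative value
    precision: 16     # fp16 is required for the Colossal-AI strategy
```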

## Finetune Example
### Training on Teyvat Datasets

We provide a fine-tuning example on the [Teyvat](https://huggingface.co/datasets/Fazzie/Teyvat) dataset, whose captions were generated by BLIP.

You can run it with the config `configs/Teyvat/train_colossalai_teyvat.yaml`:
```
python main.py --logdir /tmp/ -t -b configs/Teyvat/train_colossalai_teyvat.yaml
```

## Inference
You can find your trained `last.ckpt` and the training config `project.yaml` in your `--logdir`, and run inference by
```
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms \
    --outdir ./output \
    --config /path/to/logdir/configs/project.yaml \
    --ckpt /path/to/logdir/checkpoints/last.ckpt
```

```commandline
usage: txt2img.py [-h] [--prompt [PROMPT]] [--outdir [OUTDIR]] [--skip_grid] [--skip_save] [--ddim_steps DDIM_STEPS] [--plms] [--laion400m] [--fixed_code] [--ddim_eta DDIM_ETA]
                  [--n_iter N_ITER] [--H H] [--W W] [--C C] [--f F] [--n_samples N_SAMPLES] [--n_rows N_ROWS] [--scale SCALE] [--from-file FROM_FILE] [--config CONFIG] [--ckpt CKPT]
                  [--seed SEED] [--precision {full,autocast}]

optional arguments:
  -h, --help            show this help message and exit
  --prompt [PROMPT]     the prompt to render
  --outdir [OUTDIR]     dir to write results to
  --skip_grid           do not save a grid, only individual samples. Helpful when evaluating lots of samples
  --skip_save           do not save individual samples. For speed measurements.
  --ddim_steps DDIM_STEPS
                        number of ddim sampling steps
  --plms                use plms sampling
  --laion400m           uses the LAION400M model
  --fixed_code          if enabled, uses the same starting code across samples
  --ddim_eta DDIM_ETA   ddim eta (eta=0.0 corresponds to deterministic sampling)
  --n_iter N_ITER       sample this often
  --H H                 image height, in pixel space
  --W W                 image width, in pixel space
  --C C                 latent channels
  --f F                 downsampling factor
  --n_samples N_SAMPLES
                        how many samples to produce for each given prompt. A.k.a. batch size
  --n_rows N_ROWS       rows in the grid (default: n_samples)
  --scale SCALE         unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))
  --from-file FROM_FILE
                        if specified, load prompts from this file
  --config CONFIG       path to config which constructs model
  --ckpt CKPT           path to checkpoint of model
  --seed SEED           the seed (for reproducible sampling)
  --precision {full,autocast}
                        evaluate at this precision
```

## Comments

- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion), [lucidrains](https://github.com/lucidrains/denoising-diffusion-pytorch), [Stable Diffusion](https://github.com/CompVis/stable-diffusion), [Lightning](https://github.com/Lightning-AI/lightning) and [Hugging Face](https://huggingface.co/CompVis/stable-diffusion). Thanks for open-sourcing!

- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories).

- The implementation of [flash attention](https://github.com/HazyResearch/flash-attention) is from [HazyResearch](https://github.com/HazyResearch).

## BibTeX

```
@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}
@misc{rombach2021highresolution,
  title={High-Resolution Image Synthesis with Latent Diffusion Models},
  author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
  year={2021},
  eprint={2112.10752},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
@article{dao2022flashattention,
  title={FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness},
  author={Dao, Tri and Fu, Daniel Y. and Ermon, Stefano and Rudra, Atri and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2205.14135},
  year={2022}
}
```