# ColoDiffusion
*[ColoDiffusion](https://github.com/hpcaitech/ColoDiffusion) is a faster training implementation of [stable-diffusion](https://github.com/CompVis/stable-diffusion) from [Stability AI](https://stability.ai/).*

We take advantage of [Colossal-AI](https://github.com/hpcaitech/ColossalAI) to exploit multiple optimization strategies, e.g. data parallelism, tensor parallelism, mixed precision & ZeRO, to scale the training to multiple GPUs.


![](./Merged-0001.png)

[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion
model.
Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. 
Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487), 
this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.
With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.
See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).

## Requirements
A suitable [conda](https://conda.io/) environment named `ldm` can be created
and activated with:

```
conda env create -f environment.yaml
conda activate ldm
```

You can also update an existing [latent diffusion](https://github.com/CompVis/latent-diffusion) environment by running

```
conda install pytorch torchvision -c pytorch
pip install transformers==4.19.2 diffusers invisible-watermark
pip install -e .
```

### Install ColossalAI

```
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
git checkout v0.1.10
pip install .
```

### Install Colossal-AI Lightning
```
git clone -b colossalai https://github.com/Fazziekey/lightning.git
cd lightning
pip install .
```

## Dataset
The dataset comes from [LAION-5B](https://laion.ai/blog/laion-5b/), a subset of [LAION](https://laion.ai/).
You should change `data.file_path` in `config/train_colossalai.yaml` to point to your local copy of the data.
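The exact on-disk layout depends on how you downloaded the subset. As a rough sanity check before pointing `data.file_path` at a directory, a sketch like the following can verify that image–caption pairs are present (the `find_pairs` helper and the `.jpg`/`.txt` side-by-side layout are assumptions for illustration, not the repo's actual data loader):

```python
import tempfile
from pathlib import Path

def find_pairs(root):
    """Collect (image, caption) file pairs from a hypothetical local dump
    where each image sits next to a .txt caption with the same stem."""
    root = Path(root)
    return [(img, img.with_suffix(".txt"))
            for img in sorted(root.glob("*.jpg"))
            if img.with_suffix(".txt").exists()]

# Demo on a throwaway directory holding one image/caption pair.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "000000.jpg").touch()
    (Path(d) / "000000.txt").write_text("a photo of a cat")
    print(len(find_pairs(d)))  # → 1
```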

## Training

We provide the script `train.sh` to run the training task, along with three strategy configs in `config`: `train_colossalai.yaml`, `train_ddp.yaml`, and `train_deepspeed.yaml`.

For example, you can launch training with the Colossal-AI strategy by:
```
python main.py --logdir /tmp -t --postfix test -b config/train_colossalai.yaml
```

- You can change `--logdir` to set where the log information and the last checkpoint are saved.

### Training config
You can change the training config in the yaml file:

- accelerator: accelerator type, default `gpu`
- devices: number of devices used for training, default 4
- max_epochs: maximum number of training epochs
- precision: whether to use fp16 for training, default 16; you must use fp16 if you want to apply Colossal-AI
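
As a small illustration of how these options fit together, the sketch below mirrors the bullet list as a plain dict and overrides individual fields (the `base` values and the `override` helper are hypothetical, not part of the repo; in practice you would edit the yaml file directly):

```python
# Hypothetical dict form of the trainer options listed above.
base = {"accelerator": "gpu", "devices": 4, "max_epochs": 2, "precision": 16}

def override(cfg, **changes):
    """Return a copy of the config with the given known fields replaced.

    Rejects unknown keys so a typo (e.g. 'device' for 'devices')
    fails loudly instead of being silently ignored.
    """
    out = dict(cfg)
    for key, value in changes.items():
        if key not in out:
            raise KeyError(f"unknown trainer option: {key}")
        out[key] = value
    return out

print(override(base, devices=8)["devices"])  # → 8
```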


## Comments 

- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion) and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch). Thanks for open-sourcing!

- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories). 

- The implementation of [flash attention](https://github.com/HazyResearch/flash-attention) is from [HazyResearch](https://github.com/HazyResearch).

## BibTeX

```
@misc{rombach2021highresolution,
      title={High-Resolution Image Synthesis with Latent Diffusion Models},
      author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
      year={2021},
      eprint={2112.10752},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@article{dao2022flashattention,
  title={FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness},
  author={Dao, Tri and Fu, Daniel Y. and Ermon, Stefano and Rudra, Atri and R{\'e}, Christopher},
  journal={arXiv preprint arXiv:2205.14135},
  year={2022}
}
```