# Stable Diffusion

## Model Introduction

Stable Diffusion is a latent text-to-image diffusion model. It was trained as a latent diffusion model on 512x512 images from a subset of the LAION-5B database. Similar to Google's Imagen, it uses a frozen CLIP ViT-L/14 text encoder to condition generation on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on a GPU with as little as 10GB of VRAM. See the sections below and the model card for details.
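
For reference, the component sizes quoted above can be checked directly once a diffusers-format copy of the model is available locally (see the download section below). This is a minimal sketch, assuming the model was cloned into a local `stable-diffusion-v1-4` directory:

```py
# Minimal sketch: count UNet and text-encoder parameters of a locally
# downloaded diffusers-format model. "stable-diffusion-v1-4" is assumed to be
# one of the repositories cloned in the download section of this README.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-4")

def count_params(module):
    return sum(p.numel() for p in module.parameters())

print(f"UNet:         {count_params(pipe.unet) / 1e6:.0f}M parameters")
print(f"Text encoder: {count_params(pipe.text_encoder) / 1e6:.0f}M parameters")
```
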
## Environment Setup

1. Using the Docker image:
The inference Docker image can be pulled from SourceFind (光源) as follows:

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:stable-diffusion_pytorch1.10.0-centos7.6-dtk-22.10.1-py38
```
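
After pulling the image, a container can be started roughly as sketched below. The device and volume flags follow the usual DCU/DTK (ROCm-style) container setup; the exact flags and the host directory mounted at `/work` depend on your local driver installation and layout and are assumptions here:

```
# Sketch only: typical DCU/DTK-style container start; adjust device flags,
# shared-memory size and mounted paths to your local setup.
docker run -it --name stable-diffusion \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video --shm-size=16G \
  -v /path/to/your/workdir:/work \
  image.sourcefind.cn:5000/dcu/admin/base/custom:stable-diffusion_pytorch1.10.0-centos7.6-dtk-22.10.1-py38 /bin/bash
```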

2. Using a conda environment:
A suitable [conda](https://conda.io/) environment named `ldm` can be created
and activated with:

```
conda env create -f environment.yaml
conda activate ldm
```
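
Whichever option you use, a quick sanity check that PyTorch can see the accelerator is worthwhile before continuing. On DTK builds of PyTorch the DCU is exposed through the regular `torch.cuda` API, so a minimal check looks like this:

```py
# Quick sanity check: DTK builds of PyTorch expose DCU devices via torch.cuda.
import torch

print("torch version:", torch.__version__)
print("device available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device name:", torch.cuda.get_device_name(0))
```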

## Downloading the Stable Diffusion Models
```
cd stablediffussion
### checkpoint version
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v-1-1-original
git clone https://huggingface.co/CompVis/stable-diffusion-v-1-2-original
git clone https://huggingface.co/CompVis/stable-diffusion-v-1-3-original
git clone https://huggingface.co/CompVis/stable-diffusion-v-1-4-original

### diffusers version
git clone https://huggingface.co/CompVis/stable-diffusion-v1-1
git clone https://huggingface.co/CompVis/stable-diffusion-v1-2
git clone https://huggingface.co/CompVis/stable-diffusion-v1-3
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5

### safety-checker
git clone https://huggingface.co/CompVis/stable-diffusion-safety-checker
```
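
The original checkpoints are multi-gigabyte `git lfs` objects, so it is worth confirming that real weights (and not LFS pointer files) were downloaded before linking them in below. A minimal sketch, assuming the v1-4 original checkpoint, whose weight file is usually named `sd-v1-4.ckpt`:

```py
# Sketch: verify that the LFS-tracked checkpoint really downloaded by checking
# its size and loading its state dict. The filename sd-v1-4.ckpt is the usual
# name inside stable-diffusion-v-1-4-original; adjust if yours differs.
import os
import torch

ckpt_path = "stable-diffusion-v-1-4-original/sd-v1-4.ckpt"
print(f"size: {os.path.getsize(ckpt_path) / 1e9:.2f} GB")  # an LFS pointer file is only a few hundred bytes

ckpt = torch.load(ckpt_path, map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)
print("number of tensors:", len(state_dict))
```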

## Running the Examples

### Running the checkpoint version example

After downloading a `stable-diffusion-*-original` model, link its checkpoint into the expected directory:
```
mkdir -p models/ldm/stable-diffusion-v1/
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt 
```
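
For example, assuming the v1-4 original checkpoint from the download section (its weight file is usually named `sd-v1-4.ckpt`; adjust the path to whichever version you downloaded):
```
# Example only: adjust the source path to the checkpoint you actually downloaded.
ln -s "$(pwd)/stable-diffusion-v-1-4-original/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
```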

Run:
```
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
```


```commandline
usage: txt2img.py [-h] [--prompt [PROMPT]] [--outdir [OUTDIR]] [--skip_grid] [--skip_save] [--ddim_steps DDIM_STEPS] [--plms] [--laion400m] [--fixed_code] [--ddim_eta DDIM_ETA]
                  [--n_iter N_ITER] [--H H] [--W W] [--C C] [--f F] [--n_samples N_SAMPLES] [--n_rows N_ROWS] [--scale SCALE] [--from-file FROM_FILE] [--config CONFIG] [--ckpt CKPT]
                  [--seed SEED] [--precision {full,autocast}]

optional arguments:
  -h, --help            show this help message and exit
  --prompt [PROMPT]     the prompt to render
  --outdir [OUTDIR]     dir to write results to
  --skip_grid           do not save a grid, only individual samples. Helpful when evaluating lots of samples
  --skip_save           do not save individual samples. For speed measurements.
  --ddim_steps DDIM_STEPS
                        number of ddim sampling steps
  --plms                use plms sampling
  --laion400m           uses the LAION400M model
  --fixed_code          if enabled, uses the same starting code across samples
  --ddim_eta DDIM_ETA   ddim eta (eta=0.0 corresponds to deterministic sampling
  --n_iter N_ITER       sample this often
  --H H                 image height, in pixel space
  --W W                 image width, in pixel space
  --C C                 latent channels
  --f F                 downsampling factor
  --n_samples N_SAMPLES
                        how many samples to produce for each given prompt. A.k.a. batch size
  --n_rows N_ROWS       rows in the grid (default: n_samples)
  --scale SCALE         unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))
  --from-file FROM_FILE
                        if specified, load prompts from this file
  --config CONFIG       path to config which constructs model
  --ckpt CKPT           path to checkpoint of model
  --seed SEED           the seed (for reproducible sampling)
  --precision {full,autocast}
                        evaluate at this precision
```
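
Putting several of these options together, a more explicit invocation might look like the following. All flags come from the usage listing above; the config path is the repository default and the checkpoint path is the symlink created earlier:

```
# Explicit example invocation; paths assume the default repository layout.
python scripts/txt2img.py \
  --prompt "a photograph of an astronaut riding a horse" \
  --plms \
  --config configs/stable-diffusion/v1-inference.yaml \
  --ckpt models/ldm/stable-diffusion-v1/model.ckpt \
  --H 512 --W 512 \
  --n_samples 2 --n_iter 1 --ddim_steps 50 --scale 7.5 \
  --seed 42 \
  --outdir outputs/txt2img-samples
```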

### Running the Diffusers example

After downloading the diffusers version, run it with `Diffusers.py`:
```py
# make sure you're logged in with `huggingface-cli login`
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
	"stable-diffusion-v1-4", 
	use_auth_token=True
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]  
    
image.save("astronaut_rides_horse.png")
```
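
The pipeline call also accepts a list of prompts, a step count, a guidance scale and a seeded generator. The following sketch continues from the script above (reusing `pipe`, `prompt` and `autocast`); the output key `sample` matches the older diffusers releases used there, while newer releases return `.images` instead:

```py
# Sketch: batched, seeded generation, continuing from the script above.
generator = torch.Generator("cuda").manual_seed(42)
with autocast("cuda"):
    out = pipe(
        [prompt] * 2,              # two images for the same prompt
        num_inference_steps=50,    # corresponds to --ddim_steps in txt2img.py
        guidance_scale=7.5,        # corresponds to --scale in txt2img.py
        generator=generator,
    )["sample"]

for i, img in enumerate(out):
    img.save(f"astronaut_rides_horse_{i}.png")
```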

## Source Repository and Issue Feedback
http://developer.hpccube.com/codes/modelzoo/stablediffussion.git

## References
https://github.com/CompVis/stable-diffusion