# DynamiCrafter

## Paper

**DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors**

* https://arxiv.org/abs/2310.12190

## Model Architecture

The model extends Stable Diffusion so that it can generate videos. Training uses a dual-stream image injection (`Dual-stream image injection`) mechanism, which inherits visual details and extracts features from the input image in a context-aware way. The overall pipeline is as follows: the inputs are a video `x` and an image $`x^m`$ (a randomly selected frame of `x`). The video `x` is encoded frame by frame by the `VAE` encoder to obtain $`z_0`$. The image $`x^m`$ is passed through the same encoder, repeated along the temporal axis, and concatenated with $`z_t`$ (obtained by diffusing $`z_0`$) before entering the `Denoising U-Net`. In parallel, the condition obtained by passing $`x^m`$ through the `CLIP image encoder` and the `Query transformer` is fed into the `U-Net` together with the `FPS` and `Text` features for training.

![Alt text](readme_imgs/image-1.png)
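
To make the data flow above concrete, the following is a minimal illustrative sketch of one training step. All module and argument names (`vae_encoder`, `clip_img_encoder`, `query_former`, `unet`, `scheduler`) are hypothetical placeholders and do not correspond to this repository's actual API:

    import torch
    import torch.nn.functional as F

    def training_step(x, text_emb, fps, t, vae_encoder, clip_img_encoder,
                      query_former, unet, scheduler):
        """Hypothetical, simplified dual-stream image injection training step."""
        T = x.shape[1]  # x: (B, T, C, H, W)

        # Video stream: encode the video frame by frame with the VAE -> z_0, then diffuse to z_t
        z0 = torch.stack([vae_encoder(x[:, i]) for i in range(T)], dim=1)
        noise = torch.randn_like(z0)
        zt = scheduler.add_noise(z0, noise, t)

        # Pick a random frame x^m as the conditioning image
        m = torch.randint(0, T, (1,)).item()
        xm = x[:, m]

        # Image stream, detail path: encode x^m, repeat over time, concatenate with z_t on channels
        zm = vae_encoder(xm).unsqueeze(1).repeat(1, T, 1, 1, 1)
        unet_in = torch.cat([zt, zm], dim=2)

        # Image stream, context path: CLIP image tokens -> query transformer -> context tokens,
        # joined with the text embedding; FPS enters as an additional condition
        img_ctx = query_former(clip_img_encoder(xm))
        context = torch.cat([text_emb, img_ctx], dim=1)

        # Standard epsilon-prediction loss
        pred = unet(unet_in, t, context=context, fps=fps)
        return F.mse_loss(pred, noise)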


## Algorithm

Building on text-to-video generation, the algorithm injects visual information from the conditioning image, so that visual details are preserved throughout the generated video.

![Alt text](readme_imgs/image-2.png)
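
As a hedged summary of this conditioning (notation paraphrased from the paper, not taken from this repository's code), training optimizes the usual noise-prediction objective with the text condition, the image condition, and the FPS all fed to the denoiser:

$`\mathcal{L} = \mathbb{E}_{z_0,\,\epsilon\sim\mathcal{N}(0,I),\,t}\big[\,\lVert \epsilon - \epsilon_\theta(z_t,\ t,\ c_\text{text},\ c_\text{img},\ \text{fps}) \rVert_2^2\,\big]`$

where $`c_\text{img}`$ is produced by the `CLIP image encoder` and `Query transformer`, and $`z_t`$ additionally carries the repeated image latent concatenated along the channel dimension.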

## Environment Setup

### Docker (Method 1)

    docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-centos7.6-dtk23.10.1-py38

    docker run --shm-size 10g --network=host --name=dynamicrafter --privileged --device=/dev/kfd --device=/dev/dri --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v <absolute path to this project>:/home/ -v /opt/hyhal:/opt/hyhal:ro -it <your IMAGE ID> bash

    pip install -r requirements.txt

    pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl  # provided in whl.zip

    cd xformers && pip install xformers==0.0.23 --no-deps && bash patch_xformers.rocm.sh  # provided in whl.zip

### Docker (Method 2)

    # run this from the directory that contains the Dockerfile
    docker build -t <IMAGE_NAME>:<TAG> .

    docker run --shm-size 10g --network=host --name=dynamicrafter --privileged --device=/dev/kfd --device=/dev/dri --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v <absolute path to this project>:/home/ -v /opt/hyhal:/opt/hyhal:ro -it <your IMAGE ID> bash

    pip install -r requirements.txt

    pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl  # provided in whl.zip

    cd xformers && pip install xformers==0.0.23 --no-deps && bash patch_xformers.rocm.sh  # provided in whl.zip

### Anaconda (Method 3)
1. The DCU-specific deep learning libraries required by this project can be downloaded and installed from the Guanghe developer community (光合开发者社区):
https://developer.hpccube.com/tool/

    DTK driver: dtk23.10.1
    python: python3.8
    torch: 2.1.0
    torchvision: 0.16.0
    triton: 2.1.0


Tips: the versions of the DTK driver, python, torch and the other DCU-related tools listed above must match exactly, one to one.

2. Install the remaining, non-DCU-specific libraries according to requirements.txt.

    pip install -r requirements.txt

    pip install flash_attn-2.0.4_torch2.1_dtk2310-cp38-cp38-linux_x86_64.whl  # provided in whl.zip

    cd xformers && pip install xformers==0.0.23 --no-deps && bash patch_xformers.rocm.sh  # provided in whl.zip
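
After installation, the following quick check (a minimal sketch; it only assumes the packages installed above) verifies that the DCU build of PyTorch and the attention backends are importable:

    import torch
    # the ROCm/DTK build exposes DCU devices through the torch.cuda API
    print("torch:", torch.__version__, "| device available:", torch.cuda.is_available(),
          "| device count:", torch.cuda.device_count())

    import xformers, flash_attn  # installed from the wheels above
    print("xformers:", xformers.__version__, "| flash_attn:", flash_attn.__version__)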


## Dataset



## Inference

### Model Download

|Model|Resolution|GPU Mem. & Inference Time (A100, DDIM 50 steps)|Checkpoint|
|:---------|:---------|:--------|:--------|
|DynamiCrafter1024|576x1024|18.3GB & 75s (`perframe_ae=True`)|https://huggingface.co/Doubiiu/DynamiCrafter_1024/blob/main/model.ckpt|
|DynamiCrafter512|320x512|12.8GB & 20s (`perframe_ae=True`)|https://huggingface.co/Doubiiu/DynamiCrafter_512/blob/main/model.ckpt|
|DynamiCrafter256|256x256|11.9GB & 10s (`perframe_ae=False`)|https://huggingface.co/Doubiiu/DynamiCrafter/blob/main/model.ckpt|
|DynamiCrafter512_interp|320x512|12.8GB & 20s (`perframe_ae=True`)|https://huggingface.co/Doubiiu/DynamiCrafter_512_Interp/blob/main/model.ckpt|

Note: if `huggingface` is not reachable, you can use the `hf-mirror` mirror (replace `huggingface.co` with `hf-mirror.com`); in that case, also run `export HF_ENDPOINT=https://hf-mirror.com` to set the environment variable so that the other required models can be downloaded automatically.
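
Optionally, the checkpoints can be fetched programmatically. This is a minimal sketch only; using `huggingface_hub` here is an assumption rather than a requirement of this repository, and the repo IDs come from the table above:

    import os
    # Optional: route downloads through the mirror; must be set before importing huggingface_hub
    os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")

    from huggingface_hub import hf_hub_download

    # Local folder names follow the checkpoint layout shown below
    repos = {
        "dynamicrafter_1024_v1": "Doubiiu/DynamiCrafter_1024",
        "dynamicrafter_512_v1": "Doubiiu/DynamiCrafter_512",
        "dynamicrafter_256_v1": "Doubiiu/DynamiCrafter",
    }
    for folder, repo_id in repos.items():
        hf_hub_download(repo_id=repo_id, filename="model.ckpt",
                        local_dir=f"checkpoints/{folder}")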

The checkpoint files should be organized as follows:

    checkpoints/
    ├── dynamicrafter_512_v1
    │   └── model.ckpt
    ├── dynamicrafter_1024_v1
    │   └── model.ckpt
    ├── dynamicrafter_256_v1
    │   └── model.ckpt
    └── ...


### Command Line

    # Run on a single GPU:
    # select the model by the required resolution: 1024 | 512 | 256
    sh scripts/run.sh 512
    # Run on multiple GPUs for parallel inference:
    sh scripts/run_mp.sh 512

    sh scripts/run_application.sh interp # Generate frame interpolation
    sh scripts/run_application.sh loop   # Looping video generation

### Gradio Demo

    python gradio_app.py --res 512

    python gradio_app_interp_and_loop.py 

## Results

### normal
||Input|Output|
|:---|:---|:---|
|image|![alt text](readme_imgs/bloom01.png)|![Alt text](readme_imgs/image-3.gif)|
|prompt|time-lapse of a blooming flower with leaves and a stem||


### interp
||Input 1|Input 2|Result|
|:---|:---|:---|:---|
|image|![alt text](readme_imgs/smile_01.png)|![alt text](readme_imgs/smile_02.png)|![alt text](readme_imgs/r2.gif)|
|prompt|a smiling girl|||

### loop
||Input|Result|
|:---|:---|:---|
|image|![alt text](readme_imgs/24.png)|![alt text](readme_imgs/r3.gif)|
|prompt|a beach with waves and clouds at sunset||

### Accuracy



## Application Scenarios

### Algorithm Category

`AIGC`

### Key Application Industries

`Media, Research, Education`

## Source Repository & Issue Feedback

https://developer.hpccube.com/codes/modelzoo/dynamicrafter_pytorch

## References

* https://github.com/Doubiiu/DynamiCrafter