# FLAVR
# FLAVR

## Paper

`FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation`

- https://arxiv.org/pdf/2012.08512.pdf

## Model Architecture
FLAVR is a 3D U-Net obtained by extending a 2D U-Net: its 2D convolutions are replaced with 3D convolutions. This allows the model to capture the temporal dynamics between the input frames more accurately, yielding better interpolation quality.

![](https://developer.hpccube.com/codes/modelzoo/flavr_pytorch/-/raw/master/doc/arch_dia.png)
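The core idea — replacing 2D with 3D convolutions so the network sees all input frames jointly — can be sketched with a toy encoder-decoder. This is purely illustrative (`Tiny3DUNet` is a made-up name; it is far smaller than the actual FLAVR network):

```python
import torch
import torch.nn as nn

class Tiny3DUNet(nn.Module):
    """Toy 3D encoder-decoder (illustrative only, not the real FLAVR model)."""
    def __init__(self, in_ch=3, feat=16):
        super().__init__()
        # Encoder: 3D convs convolve over (time, height, width) jointly
        self.enc = nn.Sequential(
            nn.Conv3d(in_ch, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat * 2, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: 3D transposed conv restores the spatial resolution
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(feat * 2, feat, kernel_size=(1, 4, 4),
                               stride=(1, 2, 2), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, in_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):               # x: (N, C, T, H, W)
        return self.dec(self.enc(x))

x = torch.randn(1, 3, 4, 32, 32)        # a stack of 4 input frames
y = Tiny3DUNet()(x)
print(y.shape)                          # torch.Size([1, 3, 4, 32, 32])
```

Because time is a convolution axis rather than a channel-stacking trick, the model needs no explicit optical flow between frames.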

## Algorithm Principle

The network uses a 3D U-Net structure: the encoder is built on ResNet-3D, the decoder uses 3D transposed convolutions (TransConv), and features are modulated by spatio-temporal feature gating.

![](https://developer.hpccube.com/codes/modelzoo/flavr_pytorch/-/raw/master/doc/%E5%8E%9F%E7%90%86.png)
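The gating step can be sketched as sigmoid-weighted channel gating over globally pooled spatio-temporal features. This is one common formulation; FLAVR's exact gating module may differ in detail:

```python
import torch
import torch.nn as nn

class FeatureGating(nn.Module):
    """Channel gating from a global spatio-temporal context vector
    (a common formulation; not necessarily FLAVR's exact module)."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, x):                   # x: (N, C, T, H, W)
        ctx = x.mean(dim=(2, 3, 4))         # global pool over T, H, W -> (N, C)
        gate = torch.sigmoid(self.fc(ctx))  # per-channel gate in (0, 1)
        return x * gate.view(*gate.shape, 1, 1, 1)

x = torch.randn(2, 8, 4, 16, 16)
out = FeatureGating(8)(x)
print(out.shape)                            # torch.Size([2, 8, 4, 16, 16])
```

The gate lets the decoder suppress feature channels that are uninformative for the frame being synthesized.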

## Environment Setup

### Docker (Method 1)

A Docker image is available from [SourceFind](https://sourcefind.cn/#/main-page):

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-py3.10-dtk24.04.3-ubuntu20.04 
docker run -it --network=host --name=flavr --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=32G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 -v /opt/hyhal:/opt/hyhal:ro -v path-to-project:/mnt --ulimit memlock=-1:-1 image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-py3.10-dtk24.04.3-ubuntu20.04
# path-to-project: path to the project files on the host
cd /mnt
pip install -r requirements.txt
```

### Dockerfile (Method 2)

To build and run with the provided Dockerfile:

```
docker build --no-cache -t flavr:latest .
docker run -dit --network=host --name=flavr --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=16G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /opt/hyhal:/opt/hyhal:ro -v path-to-project:/mnt --ulimit stack=-1:-1 --ulimit memlock=-1:-1 flavr:latest
docker exec -it flavr /bin/bash
```

### Anaconda (Method 3)

The DCU-specific deep learning libraries required by this project can be downloaded from the [Guanghe developer community](https://developer.hpccube.com/tool/).

```
DTK driver: dtk24.04.3
python: python3.10
torch==2.1.0+das.opt2.dtk24043
torchvision==0.16.0+das.opt1.dtk24042
```

`Tips: the DTK driver, Python, and other DCU-related tool versions above must match each other exactly.`

Install the remaining (non deep learning) dependencies from requirements.txt:

```
pip install -r requirements.txt
pip install tensorboard setuptools==57.5.0 six
```
The following steps avoid errors caused by running without `HIP_VISIBLE_DEVICES=xx` set:
```
unzip /mnt/hipnn-lib-1128.zip
cp /mnt/hipnn-lib-1128/lib/release/* /opt/dtk/lib/
```
## Dataset
`The model uses the Vimeo-90K dataset.`

-  http://toflow.csail.mit.edu/

A mini dataset for trial training is included in the project. The training data directory structure is shown below; prepare the full dataset for real training in the same structure:

```
── datasets
   ├── readme.txt
   ├── sep_testlist.txt
   ├── sep_trainlist.txt
   └── sequences
       ├── xx/xx/xxx.png
       └── xx/xx/xxx.png
```
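Before launching training, the layout can be sanity-checked. Below is a small hypothetical helper (written for this README, not part of the project code) that reports sequences listed in `sep_trainlist.txt` but missing on disk, demonstrated on a throwaway mini layout:

```python
import os
import tempfile

def check_vimeo_layout(root):
    """Return sequence IDs from sep_trainlist.txt whose folders are missing
    under <root>/sequences (hypothetical helper, not project code)."""
    seq_dir = os.path.join(root, "sequences")
    with open(os.path.join(root, "sep_trainlist.txt")) as fh:
        return [s for s in (line.strip() for line in fh)
                if s and not os.path.isdir(os.path.join(seq_dir, s))]

# Demo: build a tiny layout with one present and one missing sequence
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sequences", "00001", "0001"))
with open(os.path.join(root, "sep_trainlist.txt"), "w") as fh:
    fh.write("00001/0001\n00001/0002\n")

print(check_vimeo_layout(root))  # ['00001/0002']
```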




## Training
### Single-node, single-GPU

```
# Note: on a K100_AI card, prefix the command with HIP_VISIBLE_DEVICES=xx to select the GPU
python main.py --batch_size 32 --test_batch_size 32 --dataset vimeo90K_septuplet --loss 1*L1 --max_epoch 200 --lr 0.0002 --data_root datasets --n_outputs 1 --num_gpu 1
```
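The `--loss 1*L1` flag encodes the training loss as `weight*name` terms joined by `+`. A hypothetical parser illustrating that format (not the repo's actual implementation):

```python
import torch.nn as nn

def parse_loss(spec):
    """Parse a spec like '1*L1' or '0.5*L1+0.5*MSE' into (weight, loss_fn)
    pairs. Illustrative of the flag's format only, not the repo's parser."""
    table = {"L1": nn.L1Loss(), "MSE": nn.MSELoss()}
    terms = []
    for part in spec.split("+"):
        weight, name = part.split("*")
        terms.append((float(weight), table[name]))
    return terms

terms = parse_loss("1*L1")
print(terms[0][0])  # 1.0
```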

### Single-node, multi-GPU
```
python main.py --batch_size 32 --test_batch_size 32 --dataset vimeo90K_septuplet --loss 1*L1 --max_epoch 200 --lr 0.0002 --data_root datasets --n_outputs 1 --num_gpu 4
```
## Inference
```
python test.py --dataset vimeo90K_septuplet --data_root <data_path> --load_from <saved_model> --n_outputs 1
```
## Results
Sample output:
![](https://developer.hpccube.com/codes/modelzoo/flavr_pytorch/-/raw/master/doc/sprite.gif)
### Accuracy
Test data: [vimeo-90](http://toflow.csail.mit.edu/); accelerator card: Z100L.

Test results:

|  FLAVR   |   PSNR    |   SSIM   |
| :------: | :-------: | :------: |
| vimeo-90 | 18.511020 | 0.702564 |
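As a reference for the metric reported above, PSNR can be computed from the mean squared error with the standard formula (this is the generic definition; the repo's evaluation code may average over frames differently):

```python
import torch

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

a = torch.zeros(1, 3, 8, 8)
b = torch.full_like(a, 0.1)          # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(b, a).item(), 2))   # 20.0  (10 * log10(1 / 0.01))
```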

## Application Scenarios

### Algorithm Category

`Video frame interpolation`

### Key Application Industries

`Design`, `Manufacturing`, `Scientific Research`
## Source Repository & Issue Feedback

*   https://developer.hpccube.com/codes/modelzoo/flavr_pytorch 
## References
*  https://github.com/tarun005/FLAVR