# FLAVR

## Paper

`FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation`

- https://arxiv.org/pdf/2012.08512.pdf

## Model Architecture
FLAVR is a 3D U-Net obtained by extending a 2D U-Net and replacing its 2D convolutions with 3D convolutions. This lets the model capture the temporal dynamics between input frames more accurately and therefore produce higher-quality interpolated frames.

![](https://developer.hpccube.com/codes/modelzoo/flavr_pytorch/-/raw/master/doc/arch_dia.png)
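
The key structural change relative to a 2D U-Net is that every layer operates on 5D tensors of shape (batch, channels, time, height, width). The following is a minimal, illustrative PyTorch sketch of such a 3D encoder block; the channel sizes and layer choices are assumptions for illustration, not the project's exact code.

```python
import torch
import torch.nn as nn

# Illustrative 3D convolution block: a 2D U-Net block becomes a 3D one by
# swapping Conv2d/BatchNorm2d for their 3D counterparts, so the kernel also
# slides along the time axis of the stacked input frames.
class Conv3DBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):                      # x: (N, C, T, H, W)
        return self.block(x)

# Four input frames stacked along the temporal dimension.
frames = torch.randn(2, 3, 4, 256, 256)        # (N, C, T, H, W)
features = Conv3DBlock(3, 64)(frames)          # -> (2, 64, 4, 256, 256)
```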

## Algorithm Principle

The network follows a 3D U-Net structure: the encoder is built from ResNet-3D blocks, the decoder uses 3D transposed convolutions (TransConv), and spatio-temporal feature gating is applied to the features.

![](https://developer.hpccube.com/codes/modelzoo/flavr_pytorch/-/raw/master/doc/%E5%8E%9F%E7%90%86.png)
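
The spatio-temporal feature gating can be thought of as squeeze-and-excite style attention over each 3D feature map. The sketch below is a simplified illustration of that idea under that assumption, not the repository's exact module.

```python
import torch
import torch.nn as nn

# Simplified spatio-temporal feature gating (squeeze-and-excite style):
# pool each channel over time and space, derive a per-channel gate with a
# 1x1x1 convolution followed by a sigmoid, and rescale the original features.
class FeatureGating(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # (N, C, 1, 1, 1)
        self.gate = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (N, C, T, H, W)
        return x * self.gate(self.pool(x))

x = torch.randn(2, 64, 4, 32, 32)
y = FeatureGating(64)(x)                               # same shape as x
```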

## Environment Setup

### Docker (Method 1)

Pull the Docker image from [光源](https://www.sourcefind.cn/#/service-details):

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.10.0-centos7.6-dtk-22.10-py37-latest
docker run -it --network=host --name=flavr --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=32G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.10.0-centos7.6-dtk-22.10-py37-latest
pip install -r requirements.txt
```

### Dockerfile (Method 2)

How to use the Dockerfile:

```
docker build --no-cache -t flavr:latest .
docker run -dit --network=host --name=flavr --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=16G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 flavr:latest
docker exec -it flavr /bin/bash
```

### Anaconda (Method 3)

The DCU-specific deep learning libraries required by this project can be downloaded and installed from the [光合](https://developer.hpccube.com/tool/) developer community.

```
DTK driver: dtk22.10
python: python3.7
torch==1.10.0a0+git2040069.dtk2210
torchvision==0.10.0a0+e04d001.dtk2210
```

`Tip: the versions of the DTK driver, Python, and the other DCU-related tools above must strictly match one another.`

Install the other (non-deep-learning) libraries according to requirements.txt:

```
pip install -r requirements.txt
pip install tensorboard setuptools==57.5.0 six
```

## Dataset

`The model uses the Vimeo-90K dataset.`

-  http://toflow.csail.mit.edu/

A mini dataset for trial training is already included in the project. The training data directory structure is shown below; prepare the full dataset for regular training following the same layout:

```
── datasets
   ├── readme.txt
   ├── sep_testlist.txt
   ├── sep_trainlist.txt
   └── sequences
        ├── xx/xx/xxx.png
        ├── xx/xx/xxx.png
```
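
The sep_trainlist.txt / sep_testlist.txt files list sequence sub-directories (one per line) relative to `sequences/`. The sketch below shows how such a split file can be resolved to frame paths; the per-sequence frame naming (e.g. im1.png–im7.png in the public Vimeo-90K septuplet release) is an assumption and may differ in the bundled mini dataset.

```python
from pathlib import Path

# Resolve the entries of a split file (e.g. "00001/0001" per line) to the
# PNG frames stored under datasets/sequences/<entry>/.
data_root = Path("datasets")

def load_split(list_file):
    sequences = []
    for line in (data_root / list_file).read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        seq_dir = data_root / "sequences" / line
        frames = sorted(seq_dir.glob("*.png"))
        sequences.append(frames)
    return sequences

train_sequences = load_split("sep_trainlist.txt")
print(len(train_sequences), "training sequences")
```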




## Training

### Single machine, single card

```
python main.py --batch_size 32 --test_batch_size 32 --dataset vimeo90K_septuplet --loss 1*L1 --max_epoch 200 --lr 0.0002 --data_root datasets --n_outputs 1 --num_gpu 1
```
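
The `--loss 1*L1` argument appears to encode a weighted sum of loss terms in `weight*name` form, with multiple terms joined by `+`. The tiny sketch below illustrates how such a specification could be parsed; it is an illustration of the format, not the project's actual parser.

```python
# Parse a loss specification such as "1*L1" or "1*L1+0.5*MSE" into
# (weight, name) pairs.
def parse_loss_spec(spec):
    terms = []
    for term in spec.split("+"):
        weight, name = term.split("*")
        terms.append((float(weight), name))
    return terms

print(parse_loss_spec("1*L1"))    # [(1.0, 'L1')]
```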

### Single machine, multiple cards

```
python main.py --batch_size 32 --test_batch_size 32 --dataset vimeo90K_septuplet --loss 1*L1 --max_epoch 200 --lr 0.0002 --data_root datasets --n_outputs 1 --num_gpu 4
```

## Inference

```
python test.py --dataset vimeo90K_septuplet --data_root <data_path> --load_from <saved_model> --n_outputs 1
```

## Results

Sample test output:

![](https://developer.hpccube.com/codes/modelzoo/flavr_pytorch/-/raw/master/doc/sprite.gif)

### Accuracy

Test data: http://toflow.csail.mit.edu/; accelerator card: Z100L.

Measured results:

|   FLAVR   |   PSNR    |   SSIM   |   Speed   |
| :-------: | :-------: | :------: | :-------: |
| Vimeo-90K | 18.511020 | 0.702564 | 3.46 it/s |
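
For reference, PSNR for a pair of 8-bit frames is 10·log10(255² / MSE); a minimal NumPy sketch is given below (SSIM is more involved and is typically taken from a library such as scikit-image). This is a generic metric definition, not the project's evaluation code.

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between two frames of the same shape."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```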

## Application Scenarios

### Algorithm Category

`Image super-resolution`

### Key Application Industries

`Design`

## Source Repository and Issue Feedback

*   https://developer.hpccube.com/codes/modelzoo/flavr_pytorch 
## References
*  https://github.com/tarun005/FLAVR