# LAPSRN

## Paper

`Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution`

- https://openaccess.thecvf.com/content_cvpr_2017/papers/Lai_Deep_Laplacian_Pyramid_CVPR_2017_paper.pdf

## Model Architecture
The LapSRN model consists of two main parts: a Laplacian pyramid prediction branch and a residual learning branch.

![](https://developer.hpccube.com/codes/modelzoo/lapsrn_tensorflow/-/raw/master/doc/模型结构.png)

## Algorithm Principle

This is an image super-resolution model with a progressive upsampling structure consisting of feature extraction and image reconstruction: at each stage, convolution and transposed-convolution layers enlarge the image step by step.

![](https://developer.hpccube.com/codes/modelzoo/lapsrn_tensorflow/-/raw/master/doc/%E6%A8%A1%E5%9E%8B%E5%8E%9F%E7%90%86.png)
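
To make the progressive structure concrete, the sketch below shows what a single 2x pyramid level can look like in TensorFlow 1.x (the framework used by this repo). Layer counts, filter sizes, and variable names are illustrative assumptions, not the exact configuration of `main.py`.

```
# A sketch of one 2x LapSRN pyramid level, assuming TensorFlow 1.x.
# Filter sizes and layer counts are illustrative, not the repo's exact settings.
import tensorflow as tf

def lapsrn_level(lr_image, features, num_filters=64, num_conv=5):
    """One stage: feature-embedding branch + image-reconstruction branch."""
    # Feature-embedding branch: stacked convolutions on the shared feature maps.
    x = features
    for i in range(num_conv):
        x = tf.layers.conv2d(x, num_filters, 3, padding='same', name='conv_%d' % i)
        x = tf.nn.leaky_relu(x, alpha=0.2)
    # Transposed convolution upsamples the feature maps by 2x.
    up_feat = tf.layers.conv2d_transpose(x, num_filters, 4, strides=2,
                                         padding='same', name='up_feat')
    # Predict the high-frequency residual image from the upsampled features.
    residual = tf.layers.conv2d(up_feat, 3, 3, padding='same', name='residual')

    # Image-reconstruction branch: upsample the input image by 2x, add the residual.
    up_img = tf.layers.conv2d_transpose(lr_image, 3, 4, strides=2,
                                        padding='same', name='up_img')
    return up_img + residual, up_feat

# Stacking such levels yields 2x, 4x, 8x outputs from coarse to fine.
lr = tf.placeholder(tf.float32, [None, None, None, 3])
feat = tf.layers.conv2d(lr, 64, 3, padding='same', name='feat_in')
sr_2x, feat = lapsrn_level(lr, feat)
```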

## Environment Setup

### Docker (Option 1)

A pre-built Docker image can be pulled from [SourceFind (光源)](https://sourcefind.cn/#/main-page):

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/tensorflow:1.15.1-centos7.6-dtk-22.10.1-py37-latest
docker run -it --network=host --name=lapsrn --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=32G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 image.sourcefind.cn:5000/dcu/admin/base/tensorflow:1.15.1-centos7.6-dtk-22.10.1-py37-latest
pip install -r requirements.txt
```

### Dockerfile (Option 2)

How to use the Dockerfile:

```
docker build --no-cache -t lapsrn:latest .
docker run -dit --network=host --name=lapsrn --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=16G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 lapsrn:latest
docker exec -it lapsrn /bin/bash
```

### Anaconda (Option 3)

The DCU-specific deep learning libraries required by this project can be downloaded and installed from the [光合](https://developer.hpccube.com/tool/) developer community.

```
DTK driver: dtk22.10
python: python3.7
tensorflow==1.15.1+gitf56f27ab.dtk2210
```

`Tip: the versions of the DTK driver, Python, and the other DCU-related tools listed above must correspond to each other exactly.`

Install the remaining non-deep-learning dependencies according to requirements.txt:

```
pip install -r requirements.txt
```
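
To confirm that the installed build matches the versions above and that the DCU is visible to TensorFlow, a quick Python check (a sketch, not part of this repo) can be run:

```
# A sketch: confirm the TensorFlow build and DCU visibility.
import tensorflow as tf

print(tf.__version__)              # expected: 1.15.1+gitf56f27ab.dtk2210
print(tf.test.is_gpu_available())  # True if the DCU is usable by TensorFlow
print(tf.test.gpu_device_name())   # e.g. /device:GPU:0
```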

## Dataset
`The dataset used by the model is DIV2K`
-  https://data.vision.ee.ethz.ch/cvl/DIV2K/ 

Dataset fast-download center: [SCNet AIDatasets](http://113.200.138.88:18080/aidatasets)

The DIV2K data used by this project can also be downloaded from the fast-download channel: [DIV2K](http://113.200.138.88:18080/aidatasets/project-dependency/div2k)

A mini dataset for trial training is already provided in the project. The training data directory structure is shown below; to run full training, prepare the complete dataset following the same structure:
```
── datasets
   ├── DIV2K_train_HR
   │       ├── xxx.png
   │       ├── xxx.png
   │       └── ...
   ├── DIV2K_train_LR_bicubic
   │       ├── xxx.png
   │       ├── xxx.png
   │       └── ...
   ├── DIV2K_valid_HR
   │       ├── xxx.png
   │       ├── xxx.png
   │       └── ...
   └── DIV2K_valid_LR_bicubic
           ├── xxx.png
           ├── xxx.png
           └── ...
```
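
If only the HR images are available, the `*_LR_bicubic` folders can be generated by bicubic downsampling. A minimal sketch, assuming Pillow is installed; the 4x scale factor is an assumption and should match the scale you train at:

```
# A sketch: generate bicubic LR images from an HR folder (assumes Pillow).
import os
from PIL import Image

def make_lr_bicubic(hr_dir, lr_dir, scale=4):
    os.makedirs(lr_dir, exist_ok=True)
    for name in sorted(os.listdir(hr_dir)):
        if not name.lower().endswith('.png'):
            continue
        hr = Image.open(os.path.join(hr_dir, name))
        w, h = hr.size
        # Bicubic downsampling by the chosen scale factor.
        lr = hr.resize((w // scale, h // scale), Image.BICUBIC)
        lr.save(os.path.join(lr_dir, name))

make_lr_bicubic('datasets/DIV2K_train_HR', 'datasets/DIV2K_train_LR_bicubic')
```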
## Training
### Single node, single card
```
python main.py
```
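
The LapSRN paper trains each pyramid level with a Charbonnier (robust L1) loss. The sketch below shows that loss in TensorFlow 1.x; `sr_outputs` and `hr_targets` are hypothetical names, and this is not a copy of the loss code in `main.py`.

```
# Charbonnier loss, the robust L1 penalty used per pyramid level in the paper.
# A TensorFlow 1.x sketch; the tensor lists below are hypothetical.
import tensorflow as tf

def charbonnier(pred, target, eps=1e-3):
    # rho(x) = sqrt(x^2 + eps^2), a differentiable approximation of |x|
    return tf.reduce_mean(tf.sqrt(tf.square(pred - target) + eps * eps))

def lapsrn_loss(sr_outputs, hr_targets):
    # Sum the loss over all pyramid scales (e.g. 2x, 4x, 8x).
    return tf.add_n([charbonnier(sr, hr) for sr, hr in zip(sr_outputs, hr_targets)])
```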
## Inference
```
python main.py -m test \
               -f TESTIMAGE
```
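
A common way to check the super-resolved output against its HR ground truth is PSNR. A minimal sketch (not part of `main.py`; the SR output path is hypothetical):

```
# PSNR between a super-resolved image and its HR ground truth (a sketch;
# the SR output path below is hypothetical). Assumes images of equal size.
import numpy as np
from PIL import Image

def psnr(sr_path, hr_path):
    sr = np.asarray(Image.open(sr_path), dtype=np.float64)
    hr = np.asarray(Image.open(hr_path), dtype=np.float64)
    mse = np.mean((sr - hr) ** 2)
    # 255 is the peak value for 8-bit images.
    return 10.0 * np.log10(255.0 ** 2 / mse)

print(psnr('sr_0801.png', 'datasets/DIV2K_valid_HR/0801.png'))
```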
## Results

Test image:

![](https://developer.hpccube.com/codes/modelzoo/lapsrn_tensorflow/-/raw/master/datasets/DIV2K_valid_HR/0801.png)

### Accuracy
Test data: [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/); accelerator card used: Z100L.

The table below is filled in based on the test results:

| Model  | Dataset | Loss  |
| :----: | :-----: | :---: |
| LAPSRN | DIV2K   | 0.461 |

## Application Scenarios

### Algorithm Category

`Image super-resolution`

### Key Application Industries

`Design`, `Manufacturing`, `Transportation`
## Pretrained Weights

Pretrained-weight fast-download center: [SCNet AIModels](http://113.200.138.88:18080/aimodels)

The pretrained weights for this project are stored in: [LapSRN](./checkpoint/)
## Source Code Repository and Issue Feedback
*  https://developer.hpccube.com/codes/modelzoo/lapsrn_tensorflow
## References
* https://github.com/zjuela/LapSRN-tensorflow