# LapSRN

## Paper

`Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution`

- https://openaccess.thecvf.com/content_cvpr_2017/papers/Lai_Deep_Laplacian_Pyramid_CVPR_2017_paper.pdf

## Model Architecture
The LapSRN model consists of two main parts: a Laplacian pyramid prediction module and a residual learning module.

![](https://developer.sourcefind.cn/codes/modelzoo/lapsrn_tensorflow/-/raw/master/doc/模型结构.png)
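
As a concrete illustration of the two branches, here is a minimal TensorFlow 1.x sketch of the per-level structure: a feature-extraction branch (a convolution stack plus a transposed convolution that predicts a residual) and an image-reconstruction branch (a transposed convolution that upsamples the current image and adds the predicted residual). Layer counts, filter sizes, and names are illustrative and may differ from this repository's implementation.

```python
import tensorflow as tf  # TensorFlow 1.15.x

def lapsrn_sketch(lr_image, levels=2, filters=64, depth=5):
    """Illustrative LapSRN skeleton: returns one SR prediction per pyramid level."""
    channels = lr_image.get_shape().as_list()[-1]
    features = tf.layers.conv2d(lr_image, filters, 3, padding='same',
                                activation=tf.nn.leaky_relu)
    image = lr_image
    outputs = []
    for _ in range(levels):
        # Feature-extraction branch: conv stack, then 2x upsampling of the features
        x = features
        for _ in range(depth):
            x = tf.layers.conv2d(x, filters, 3, padding='same',
                                 activation=tf.nn.leaky_relu)
        features = tf.layers.conv2d_transpose(x, filters, 4, strides=2, padding='same',
                                              activation=tf.nn.leaky_relu)
        residual = tf.layers.conv2d(features, channels, 3, padding='same')
        # Image-reconstruction branch: upsample the current image 2x, add the residual
        image = tf.layers.conv2d_transpose(image, channels, 4, strides=2, padding='same')
        image = image + residual
        outputs.append(image)
    return outputs  # [SR x2, SR x4, ...]
```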

## Algorithm

This is an image super-resolution model with a progressive upsampling structure consisting of feature extraction and image reconstruction: at each stage, convolution and transposed-convolution layers enlarge and refine the image step by step.

![](https://developer.sourcefind.cn/codes/modelzoo/lapsrn_tensorflow/-/raw/master/doc/%E6%A8%A1%E5%9E%8B%E5%8E%9F%E7%90%86.png)
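
A training detail from the referenced paper (not spelled out in this README): every pyramid level is supervised with a robust Charbonnier penalty. A minimal sketch of that loss, with illustrative tensor names and epsilon value:

```python
import tensorflow as tf

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier penalty: sqrt((pred - target)^2 + eps^2), averaged over all pixels."""
    diff = pred - target
    return tf.reduce_mean(tf.sqrt(tf.square(diff) + eps * eps))

# Total objective: sum the per-level losses against correspondingly downscaled HR targets,
# e.g. loss = charbonnier_loss(sr_x2, hr_x2) + charbonnier_loss(sr_x4, hr_x4)
```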

## Environment Setup

### Docker (Option 1)

The Docker image can be pulled from [光源](https://sourcefind.cn/#/main-page):

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/tensorflow:1.15.1-centos7.6-dtk-22.10.1-py37-latest
docker run -it --network=host --name=lapsrn --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=32G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 image.sourcefind.cn:5000/dcu/admin/base/tensorflow:1.15.1-centos7.6-dtk-22.10.1-py37-latest
pip install -r requirements.txt
```

### Dockerfile (Option 2)

How to use the Dockerfile:

```
docker build --no-cache -t lapsrn:latest .
docker run -dit --network=host --name=lapsrn --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=16G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 lapsrn:latest
docker exec -it lapsrn /bin/bash
```

### Anaconda (Option 3)

The DCU-specific deep learning libraries required by this project can be downloaded and installed from the [光合](https://developer.sourcefind.cn/tool/) developer community.

```
DTK driver: dtk22.10
python: python3.7
tensorflow==1.15.1+gitf56f27ab.dtk2210
```

`Tips: the DTK driver, Python, and other DCU-related tool versions listed above must strictly correspond to one another`

Install the other non-deep-learning dependencies from requirements.txt:

```
pip install -r requirements.txt
```
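
A quick sanity check that the environment is usable (generic TensorFlow 1.x calls; the expected version string is the one listed above):

```python
import tensorflow as tf

print(tf.__version__)              # expected: 1.15.1 (dtk build)
print(tf.test.is_gpu_available())  # True when the DCU device is visible to TensorFlow
```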

## Dataset

`The dataset used by the model is DIV2K`

- https://data.vision.ee.ethz.ch/cvl/DIV2K/

A mini dataset for trial training is already included in the project. The training data directory structure is shown below; please prepare the full dataset for regular training according to the same layout:

```
── datasets
   ├── DIV2K_train_HR
   │     ├── xxx.png
   │     ├── xxx.png
   │     └── ...
   ├── DIV2K_train_LR_bicubic
   │     ├── xxx.png
   │     ├── xxx.png
   │     └── ...
   ├── DIV2K_valid_HR
   │     ├── xxx.png
   │     ├── xxx.png
   │     └── ...
   └── DIV2K_valid_LR_bicubic
         ├── xxx.png
         ├── xxx.png
         └── ...
```
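
A minimal sketch of pairing HR and LR training files under this layout (directory names follow the tree above; matching LR to HR by identical filename is an assumption about how the data is prepared):

```python
import os
from glob import glob

def list_train_pairs(root='datasets'):
    """Return (lr_path, hr_path) tuples for files present in both directories."""
    hr_dir = os.path.join(root, 'DIV2K_train_HR')
    lr_dir = os.path.join(root, 'DIV2K_train_LR_bicubic')
    pairs = []
    for hr_path in sorted(glob(os.path.join(hr_dir, '*.png'))):
        lr_path = os.path.join(lr_dir, os.path.basename(hr_path))
        if os.path.exists(lr_path):
            pairs.append((lr_path, hr_path))
    return pairs

print(len(list_train_pairs()))  # number of usable LR/HR training pairs
```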




## Training
### Single node, single card

```
python main.py
```

## Inference

```
python main.py -m test \
               -f TESTIMAGE
```

## Results

Test image:

![](https://developer.sourcefind.cn/codes/modelzoo/lapsrn_tensorflow/-/raw/master/datasets/DIV2K_valid_HR/0801.png)

### Accuracy

Test data: [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/); accelerator card: Z100L.

Test results:

| Dataset | Loss  |
| :-----: | :---: |
| DIV2K   | 0.461 |

## Application Scenarios

### Algorithm Category

`Image super-resolution`

### Key Application Industries

`Design`, `Manufacturing`, `Transportation`

## Pretrained Weights
The pretrained weights in this project are stored under: [LapSRN](./checkpoint/)
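
A minimal TensorFlow 1.x sketch of restoring the newest checkpoint from that directory, assuming it contains the standard `.meta`/`.index`/`.data` files written by `tf.train.Saver`:

```python
import tensorflow as tf

ckpt = tf.train.latest_checkpoint('./checkpoint')    # newest checkpoint prefix, or None
saver = tf.train.import_meta_graph(ckpt + '.meta')   # rebuild the saved graph
with tf.Session() as sess:
    saver.restore(sess, ckpt)                        # load the trained weights
```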

## Source Code Repository and Issue Feedback

* https://developer.sourcefind.cn/codes/modelzoo/lapsrn_tensorflow
## References
* https://github.com/zjuela/LapSRN-tensorflow