# UNET
## Paper
`U-Net: Convolutional Networks for Biomedical Image Segmentation`

- https://arxiv.org/abs/1505.04597
## Model Structure
UNet (full name U-Net) is a convolutional neural network (CNN) architecture for image segmentation; the network has a U shape, which gives it its name.

![img](https://developer.hpccube.com/codes/modelzoo/unet-pytorch/-/raw/main/doc/unet.png)
## Algorithm Principle
The core principles of U-Net are as follows:

1. **Encoder (Contracting Path)**: The encoder consists of convolution and pooling layers that capture image features while progressively reducing resolution. Its task is to shrink the input image into a low-resolution feature map while preserving the key features of the image content.
2. **Bottleneck**: Between the encoder and decoder, U-Net includes a middle stage, usually composed of convolution layers, that extracts further feature information.
3. **Decoder (Expansive Path)**: The decoder consists of upsampling and convolution layers that restore the feature maps to the resolution of the original input. Its task is to combine high-level features with low-level features in order to produce the segmentation result. Its structure mirrors that of the encoder.

![img](https://developer.hpccube.com/codes/modelzoo/unet-pytorch/-/raw/main/doc/原理.png)
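The encoder/bottleneck/decoder structure described above can be sketched as a minimal PyTorch module. This is an illustrative reduction with only two resolution levels and assumed channel widths, not the model actually used by `train.py`:

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic building block of U-Net
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, n_channels=3, n_classes=1):
        super().__init__()
        # Contracting path: capture features, reduce resolution
        self.enc1 = double_conv(n_channels, 64)
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        # Bottleneck between encoder and decoder
        self.bottleneck = double_conv(128, 256)
        # Expansive path: upsample and fuse with skip connections
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)  # 256 = 128 (skip) + 128 (upsampled)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections concatenate encoder features into the decoder
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```

The output has the same spatial size as the input, one channel of logits per class, which is what allows the decoder to produce a dense segmentation map.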
## Environment Setup
### Docker (Method 1)
The address and usage steps for pulling the Docker image from [光源](https://www.sourcefind.cn/#/service-details) are provided here:
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.10.0-centos7.6-dtk-23.04-py37-latest

docker run -it --network=host --name=unet --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=32G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1  image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.10.0-centos7.6-dtk-23.04-py37-latest
```
### Dockerfile (Method 2)

Usage of the Dockerfile is provided here:

```
docker build --no-cache -t unet:latest .
docker run -dit --network=host --name=unet --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=16G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 unet:latest
docker exec -it unet /bin/bash
pip install -r requirements.txt
```

### Anaconda (Method 3)

Detailed steps for local configuration and compilation are provided here, for example:

The special deep learning libraries required for the DCU accelerator used in this project can be downloaded and installed from the [光合 developer community](https://developer.hpccube.com/tool/).
```
DTK driver: dtk23.04
python:python3.7
apex==0.1+f49ddd4.abi0.dtk2304.torch1.13
torch==1.13.1+git55d300e.abi0.dtk2304
torchvision==0.14.1+git9134838.abi0.dtk2304.torch1.13
```
`Tips: the versions of the dtk driver, python, and the other DCU-related tools above must correspond to each other exactly.`

Install the other, non-deep-learning libraries according to requirements.txt:
```
pip install -r requirements.txt
```
## Dataset
`Carvana`

- https://www.kaggle.com/c/carvana-image-masking-challenge/data

Usage of the data preprocessing script is provided here:
```
bash scripts/download_data.sh
```
A mini dataset for trial training is included in the project. The training data directory structure is shown below; prepare the full dataset used for regular training following the same structure:
```
── data
   ├── imgs
   │   ├── fff9b3a5373f_15.jpg
   │   └── fff9b3a5373f_16.jpg
   └── masks
       ├── fff9b3a5373f_15.gif
       └── fff9b3a5373f_16.gif
```
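With this layout, each image in `imgs/` is matched to its mask in `masks/` by filename stem. A minimal sketch of that pairing logic (illustrative only; `pair_images_and_masks` is a hypothetical helper, not the project's actual data-loading code):

```python
from pathlib import Path

def pair_images_and_masks(data_dir):
    """Pair each training image with its mask by filename stem.

    Expects data_dir/imgs/<id>.jpg and data_dir/masks/<id>.gif,
    as in the directory tree above.
    """
    data_dir = Path(data_dir)
    # Index masks by stem so lookup per image is O(1)
    masks = {p.stem: p for p in (data_dir / "masks").glob("*.gif")}
    pairs = []
    for img in sorted((data_dir / "imgs").glob("*.jpg")):
        if img.stem not in masks:
            raise FileNotFoundError(f"no mask found for {img.name}")
        pairs.append((img, masks[img.stem]))
    return pairs
```

Validating the pairing up front, rather than inside the training loop, surfaces a missing mask immediately instead of mid-epoch.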
## Training
### Single node, single GPU
```
python train.py
```
### Single node, multiple GPUs
```
python -m torch.distributed.launch --nproc_per_node 4 train_ddp.py
```

## Inference

[Model download link](https://github.com/milesial/Pytorch-UNet/releases/tag/v3.0)

```
python predict.py -m model_path -i image.jpg -o output.jpg
```
## Results
![result](https://developer.hpccube.com/codes/modelzoo/unet-pytorch/-/raw/main/doc/结果.png)

### Accuracy
Test data: [test data](https://www.kaggle.com/c/carvana-image-masking-challenge/data); accelerator card used: Z100L. (IoU is used as the accuracy metric.)
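For reference, the IoU (intersection-over-union) score between a predicted and a ground-truth binary mask can be computed as below. This is a minimal NumPy sketch of the metric itself, not the project's evaluation code:

```python
import numpy as np

def iou_score(pred, target, eps=1e-6):
    """IoU between two binary masks: |A ∩ B| / |A ∪ B|."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # eps keeps the ratio defined when both masks are empty
    return (inter + eps) / (union + eps)
```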

The table below is filled in according to the test results:
| Dataset | Accuracy (IoU) | Speed |
| :------: | :------: | :------: |
| Carvana | 0.976 | 25.96 |
## Application Scenarios
### Algorithm Category

`Image segmentation`

### Key Application Industries
`Healthcare`

## Source Repository and Issue Reporting
- https://developer.hpccube.com/codes/modelzoo/unet-pytorch
## References
- https://github.com/milesial/Pytorch-UNet