# Pix2pixHD
## Model Introduction
Pix2pixHD is an image-to-image translation model: it converts an input image into a particular kind of output image. Such models have a wide range of applications, for example turning sketches into realistic photos, converting low-resolution images into high-resolution ones, and colorizing black-and-white images.
## Model Architecture
The overall architecture is a conditional GAN. Compared with Pix2pix, the model swaps in a coarse-to-fine generator and multi-scale discriminators.
Pix2pixHD is an improved version of pix2pix, with gains in resolution, visual quality, visual consistency, and training efficiency. The multi-scale discriminators capture global and local information at different scales, which improves image quality. In addition, Pix2pixHD exploits semantic segmentation, decomposing the input image into separate regions and processing them, which improves the model's visual consistency.
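To make the multi-scale discriminator idea concrete, below is a minimal PyTorch sketch, not the repository's actual `models/networks.py` implementation; the layer sizes and `num_scales=3` are assumptions. The same patch-based discriminator is applied to progressively downsampled copies of the input, so coarse scales judge global structure while fine scales judge local detail.
```python
import torch
import torch.nn as nn

class MultiScaleDiscriminator(nn.Module):
    """Simplified illustration of a multi-scale PatchGAN-style discriminator."""

    def __init__(self, in_channels=3, num_scales=3):
        super().__init__()
        # One small patch discriminator per scale (layer sizes are assumptions).
        self.discriminators = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(128, 1, 4, stride=1, padding=1),  # per-patch real/fake score map
            )
            for _ in range(num_scales)
        ])
        self.downsample = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)

    def forward(self, x):
        # Score the input at full resolution, then at progressively coarser scales.
        outputs = []
        for disc in self.discriminators:
            outputs.append(disc(x))
            x = self.downsample(x)
        return outputs

if __name__ == "__main__":
    d = MultiScaleDiscriminator()
    maps = d(torch.randn(1, 3, 256, 256))
    print([m.shape for m in maps])  # three score maps, each scale half the previous
```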
# Test Procedure
## Install Packages
PyTorch 1.10 [1.10.0a0+git2040069-dtk2210]
## Load Environment Variables
```
export PATH={PYTHON3_install_dir}/bin:$PATH
export LD_LIBRARY_PATH={PYTHON3_install_dir}/lib:$LD_LIBRARY_PATH
```
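For example, if Python were installed under `/opt/python3` (a hypothetical path standing in for `{PYTHON3_install_dir}`):
```bash
export PATH=/opt/python3/bin:$PATH
export LD_LIBRARY_PATH=/opt/python3/lib:$LD_LIBRARY_PATH
which python3   # verify the intended interpreter is now first on PATH
```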
## Dataset
The model uses the Cityscapes dataset; download it from:
<https://www.cityscapes-dataset.com/>
## Modify the Configuration File
In `options/base_options.py`, point the checkpoint directory and dataset root at your environment (the `/../` defaults below are placeholders):
```python
self.parser.add_argument('--checkpoints_dir', type=str, default='/../', help='models are saved here')
self.parser.add_argument('--dataroot', type=str, default='/../pix2pixHD/datasets/cityscapes')
```
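Alternatively, since these are ordinary argparse options, they can be overridden on the command line instead of editing the defaults (the paths below are placeholders):
```bash
python train.py --name label2city_512p \
    --dataroot ./datasets/cityscapes \
    --checkpoints_dir ./checkpoints
```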
## Training and Inference
### Environment Setup
Python dependencies:
- `torch==1.10.0a0+git2040069.dtk2210`
- `torchvision==0.11.0+cu102`
# Run Commands
## Training
Train a model at 1024 x 512 resolution (`bash ./scripts/train_512p.sh`):
```bash
#!./scripts/train_512p.sh
python train.py --name label2city_512p
```
## Testing
- A few example Cityscapes test images are included in the `datasets` folder.
- Please download the pre-trained Cityscapes model from [here](https://drive.google.com/file/d/1h9SykUnuZul7J3Nbms2QGH1wa85nbN2-/view?usp=sharing) (Google Drive link) and put it under `./checkpoints/label2city_1024p/`.
- Test the model (`bash ./scripts/test_1024p.sh`):
```bash
#!./scripts/test_1024p.sh
python test.py --name label2city_1024p --netG local --ngf 32 --resize_or_crop none
```
The test results will be saved to an HTML file: `./results/label2city_1024p/test_latest/index.html`
# References
## Source Repository and Issue Feedback
* [https://github.com/NVIDIA/pix2pixHD](https://github.com/NVIDIA/pix2pixHD)
<img src='imgs/teaser_720.gif' align="right" width=360>
<br><br><br><br>
# pix2pixHD
### [Project](https://tcwang0509.github.io/pix2pixHD/) | [Youtube](https://youtu.be/3AIpPlzM_qs) | [Paper](https://arxiv.org/pdf/1711.11585.pdf) <br>
PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic image-to-image translation. It can be used for turning semantic label maps into photo-realistic images or synthesizing portraits from face label maps. <br><br>
[High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs](https://tcwang0509.github.io/pix2pixHD/)
[Ting-Chun Wang](https://tcwang0509.github.io/)<sup>1</sup>, [Ming-Yu Liu](http://mingyuliu.net/)<sup>1</sup>, [Jun-Yan Zhu](http://people.eecs.berkeley.edu/~junyanz/)<sup>2</sup>, Andrew Tao<sup>1</sup>, [Jan Kautz](http://jankautz.com/)<sup>1</sup>, [Bryan Catanzaro](http://catanzaro.name/)<sup>1</sup>
<sup>1</sup>NVIDIA Corporation, <sup>2</sup>UC Berkeley
In CVPR 2018.
## Image-to-image translation at 2k/1k resolution
- Our label-to-streetview results
<p align='center'>
<img src='imgs/teaser_label.png' width='400'/>
<img src='imgs/teaser_ours.jpg' width='400'/>
</p>
- Interactive editing results
<p align='center'>
<img src='imgs/teaser_style.gif' width='400'/>
<img src='imgs/teaser_label.gif' width='400'/>
</p>
- Additional streetview results
<p align='center'>
<img src='imgs/cityscapes_1.jpg' width='400'/>
<img src='imgs/cityscapes_2.jpg' width='400'/>
</p>
<p align='center'>
<img src='imgs/cityscapes_3.jpg' width='400'/>
<img src='imgs/cityscapes_4.jpg' width='400'/>
</p>
- Label-to-face and interactive editing results
<p align='center'>
<img src='imgs/face1_1.jpg' width='250'/>
<img src='imgs/face1_2.jpg' width='250'/>
<img src='imgs/face1_3.jpg' width='250'/>
</p>
<p align='center'>
<img src='imgs/face2_1.jpg' width='250'/>
<img src='imgs/face2_2.jpg' width='250'/>
<img src='imgs/face2_3.jpg' width='250'/>
</p>
- Our editing interface
<p align='center'>
<img src='imgs/city_short.gif' width='330'/>
<img src='imgs/face_short.gif' width='450'/>
</p>
## Prerequisites
- Linux or macOS
- Python 2 or 3
- NVIDIA GPU (11G memory or larger) + CUDA cuDNN
## Getting Started
### Installation
- Install PyTorch and dependencies from http://pytorch.org
- Install python libraries [dominate](https://github.com/Knio/dominate).
```bash
pip install dominate
```
- Clone this repo:
```bash
git clone https://github.com/NVIDIA/pix2pixHD
cd pix2pixHD
```
### Testing
- A few example Cityscapes test images are included in the `datasets` folder.
- Please download the pre-trained Cityscapes model from [here](https://drive.google.com/file/d/1h9SykUnuZul7J3Nbms2QGH1wa85nbN2-/view?usp=sharing) (google drive link), and put it under `./checkpoints/label2city_1024p/`
- Test the model (`bash ./scripts/test_1024p.sh`):
```bash
#!./scripts/test_1024p.sh
python test.py --name label2city_1024p --netG local --ngf 32 --resize_or_crop none
```
The test results will be saved to an HTML file here: `./results/label2city_1024p/test_latest/index.html`.
More example scripts can be found in the `scripts` directory.
### Dataset
- We use the Cityscapes dataset. To train a model on the full dataset, please download it from the [official website](https://www.cityscapes-dataset.com/) (registration required).
After downloading, please put it under the `datasets` folder in the same way the example images are provided.
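As a rough sketch of the expected layout (the folder names are inferred from the naming convention described under "Training with your own dataset" below; verify against the provided example images):
```
datasets/cityscapes/
├── train_label/   # semantic label maps
├── train_inst/    # instance maps
├── train_img/     # real photos
├── test_label/
└── test_inst/
```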
### Training
- Train a model at 1024 x 512 resolution (`bash ./scripts/train_512p.sh`):
```bash
#!./scripts/train_512p.sh
python train.py --name label2city_512p
```
- To view training results, please check out the intermediate results in `./checkpoints/label2city_512p/web/index.html`.
If you have TensorFlow installed, you can see TensorBoard logs in `./checkpoints/label2city_512p/logs` by adding `--tf_log` to the training scripts.
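For example, combining the training command above with the logging flag:
```bash
# assumes the --tf_log flag described above
python train.py --name label2city_512p --tf_log
```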
### Multi-GPU training
- Train a model using multiple GPUs (`bash ./scripts/train_512p_multigpu.sh`):
```bash
#!./scripts/train_512p_multigpu.sh
python train.py --name label2city_512p --batchSize 8 --gpu_ids 0,1,2,3,4,5,6,7
```
Note: this is not tested, and we trained our model using a single GPU only. Please use at your own discretion.
### Training with Automatic Mixed Precision (AMP) for faster speed
- To train with mixed precision support, please first install apex from: https://github.com/NVIDIA/apex
- You can then train the model by adding `--fp16`. For example,
```bash
#!./scripts/train_512p_fp16.sh
python -m torch.distributed.launch train.py --name label2city_512p --fp16
```
In our test case, it trains about 80% faster with AMP on a Volta machine.
### Training at full resolution
- Training images at full resolution (2048 x 1024) requires a GPU with 24G memory (`bash ./scripts/train_1024p_24G.sh`), or 16G memory if using mixed precision (AMP). A rough sketch of the 24G invocation follows this list.
- If only GPUs with 12G memory are available, please use the 12G script (`bash ./scripts/train_1024p_12G.sh`), which will crop the images during training. Performance is not guaranteed using this script.
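The flags below are a rough, unverified sketch of what the 24G script does (fine-tuning the local enhancer on top of the pretrained 512p global generator); the exact flag values are assumptions, so consult `./scripts/train_1024p_24G.sh` for the real invocation:
```bash
# Assumed flags -- check ./scripts/train_1024p_24G.sh for the authoritative version
python train.py --name label2city_1024p --netG local --ngf 32 \
    --load_pretrain checkpoints/label2city_512p/ --niter_fix_global 20 \
    --resize_or_crop none
```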
### Training with your own dataset
- If you want to train with your own dataset, please generate one-channel label maps whose pixel values correspond to the object labels (i.e. 0,1,...,N-1, where N is the number of labels), since we need to generate one-hot vectors from the label maps. Please also specify `--label_nc N` during both training and testing.
- If your input is not a label map, please just specify `--label_nc 0` which will directly use the RGB colors as input. The folders should then be named `train_A`, `train_B` instead of `train_label`, `train_img`, where the goal is to translate images from A to B.
- If you don't have instance maps or don't want to use them, please specify `--no_instance`.
- The default setting for preprocessing is `scale_width`, which will scale the width of all training images to `opt.loadSize` (1024) while keeping the aspect ratio. If you want a different setting, please change it by using the `--resize_or_crop` option. For example, `scale_width_and_crop` first resizes the image to have width `opt.loadSize` and then does random cropping of size `(opt.fineSize, opt.fineSize)`. `crop` skips the resizing step and only performs random cropping. If you don't want any preprocessing, please specify `none`, which will do nothing other than making sure the image dimensions are divisible by 32. A combined example follows this list.
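Putting these flags together, a hypothetical run on a custom RGB-to-RGB dataset (folders `train_A`/`train_B`, no instance maps, random 512 x 512 crops after width-scaling; the dataset name is made up):
```bash
python train.py --name my_translation \
    --dataroot ./datasets/my_dataset \
    --label_nc 0 --no_instance \
    --resize_or_crop scale_width_and_crop --loadSize 1024 --fineSize 512
```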
## More Training/Test Details
- Flags: see `options/train_options.py` and `options/base_options.py` for all the training flags; see `options/test_options.py` and `options/base_options.py` for all the test flags.
- Instance map: we take in both label maps and instance maps as input. If you don't want to use instance maps, please specify the flag `--no_instance`.
## Citation
If you find this useful for your research, please use the following.
```
@inproceedings{wang2018pix2pixHD,
title={High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs},
author={Ting-Chun Wang and Ming-Yu Liu and Jun-Yan Zhu and Andrew Tao and Jan Kautz and Bryan Catanzaro},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2018}
}
```
## Acknowledgments
This code borrows heavily from [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix).