# HOWTOs
[English](HOWTOs.md) **|** [简体中文](HOWTOs_CN.md)
## How to train StyleGAN2
1. Prepare the training dataset: [FFHQ](https://github.com/NVlabs/ffhq-dataset). More details are in [DatasetPreparation.md](DatasetPreparation.md#StyleGAN2)
    1. Download the FFHQ dataset. We recommend downloading the tfrecords files from [NVlabs/ffhq-dataset](https://github.com/NVlabs/ffhq-dataset).
    1. Extract the tfrecords to images or LMDBs (TensorFlow is required to read tfrecords):
    > python scripts/data_preparation/extract_images_from_tfrecords.py
1. Modify the config file in `options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ.yml`
1. Train with distributed training. More training commands are in [TrainTest.md](TrainTest.md).
> python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ_800k.yml --launcher pytorch
## How to inference StyleGAN2
1. Download pre-trained models from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models` folder.
1. Test.
> python inference/inference_stylegan2.py
1. The results are in the `samples` folder.
## How to inference DFDNet
1. Install [dlib](http://dlib.net/), because DFDNet uses dlib for face detection and landmark detection. [Installation reference](https://github.com/davisking/dlib).
    1. Clone dlib repo: `git clone git@github.com:davisking/dlib.git`
    1. `cd dlib`
    1. Install: `python setup.py install`
2. Download the pre-trained dlib models from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models/dlib` folder.<br>
    You can download them by running the following command OR manually download the pre-trained models.
    > python scripts/download_pretrained_models.py dlib
3. Download the pre-trained DFDNet models, dictionary and face template from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models/DFDNet` folder.<br>
    You can download them by running the following command OR manually download the pre-trained models.
    > python scripts/download_pretrained_models.py DFDNet
4. Prepare the testing dataset in the `datasets` folder. For example, we put images in the `datasets/TestWhole` folder.
5. Test.
> python inference/inference_dfdnet.py --upscale_factor=2 --test_path datasets/TestWhole
6. The results are in the `results/DFDNet` folder.
## How to train SwinIR (SR)
We take the classical SR X4 model with DIV2K as an example.
1. Prepare the training dataset: [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/). More details are in [DatasetPreparation.md](DatasetPreparation.md#image-super-resolution)
1. Prepare the validation dataset: Set5. You can download with [this guidance](DatasetPreparation.md#common-image-sr-datasets)
1. Modify the config file in [`options/train/SwinIR/train_SwinIR_SRx4_scratch.yml`](../options/train/SwinIR/train_SwinIR_SRx4_scratch.yml) accordingly.
1. Train with distributed training. More training commands are in [TrainTest.md](TrainTest.md).
> python -m torch.distributed.launch --nproc_per_node=8 --master_port=4331 basicsr/train.py -opt options/train/SwinIR/train_SwinIR_SRx4_scratch.yml --launcher pytorch --auto_resume
Note that:
1. Unlike the original setting in the paper, where the X4 model is fine-tuned from the X2 model, we directly train it from scratch.
1. We also use `EMA (Exponential Moving Average)`. Note that all model training in BasicSR supports EMA (a conceptual sketch of the EMA update is given after the table below).
1. After **250K iterations** of training, the X4 model achieves performance comparable to the official model.
| ClassicalSR DIV2KX4 | PSNR (RGB) | PSNR (Y) | SSIM (RGB) | SSIM (Y) |
| :--- | :---: | :---: | :---: | :---: |
| Official | 30.803 | 32.728 | 0.8738|0.9028 |
| Reproduce |30.832 | 32.756 | 0.8739| 0.9025 |
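For intuition, the following is a minimal conceptual sketch of an EMA update, not BasicSR's exact implementation; `net_g` and `net_g_ema` are stand-ins for the training generator and its EMA ("shadow") copy.

```python
import copy

import torch.nn as nn

# Stand-ins for the SR generator and its EMA ("shadow") copy.
net_g = nn.Linear(4, 4)
net_g_ema = copy.deepcopy(net_g)

decay = 0.999  # typical EMA decay
for p_ema, p in zip(net_g_ema.parameters(), net_g.parameters()):
    # The EMA copy drifts slowly towards the current training weights.
    p_ema.data.mul_(decay).add_(p.data, alpha=1 - decay)
```

The EMA copy is the one used for validation and for the released models, which usually gives slightly more stable results than the raw training weights.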
## How to inference SwinIR (SR)
1. Download pre-trained models from the [**official SwinIR repo**](https://github.com/JingyunLiang/SwinIR/releases/tag/v0.0) to the `experiments/pretrained_models/SwinIR` folder.
1. Inference.
> python inference/inference_swinir.py --input datasets/Set5/LRbicx4 --patch_size 48 --model_path experiments/pretrained_models/SwinIR/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth --output results/SwinIR_SRX4_DIV2K/Set5
1. The results are in the `results/SwinIR_SRX4_DIV2K/Set5` folder.
1. You may want to calculate the PSNR/SSIM values.
> python scripts/metrics/calculate_psnr_ssim.py --gt datasets/Set5/GTmod12/ --restored results/SwinIR_SRX4_DIV2K/Set5 --crop_border 4
or test on the Y channel by adding the `--test_y_channel` argument:
> python scripts/metrics/calculate_psnr_ssim.py --gt datasets/Set5/GTmod12/ --restored results/SwinIR_SRX4_DIV2K/Set5 --crop_border 4 --test_y_channel
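The same metrics can also be computed from Python with `basicsr.metrics`. The sketch below is only illustrative: the image file names are placeholders, and images are read with OpenCV as HWC arrays in the [0, 255] range.

```python
# Illustrative only: compute PSNR/SSIM for a single image pair in Python.
# The file names are placeholders; adjust them to your own results.
import cv2

from basicsr.metrics import calculate_psnr, calculate_ssim

gt = cv2.imread('datasets/Set5/GTmod12/baby.png')
sr = cv2.imread('results/SwinIR_SRX4_DIV2K/Set5/baby_SwinIR.png')

print('PSNR:', calculate_psnr(sr, gt, crop_border=4))
print('SSIM:', calculate_ssim(sr, gt, crop_border=4))
```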
# HOWTOs
[English](HOWTOs.md) **|** [简体中文](HOWTOs_CN.md)
## How to train StyleGAN2
1. Prepare the training dataset: [FFHQ](https://github.com/NVlabs/ffhq-dataset). More details are in [DatasetPreparation_CN.md](DatasetPreparation_CN.md#StyleGAN2)
    1. Download the FFHQ dataset. We recommend downloading the tfrecords files from [NVlabs/ffhq-dataset](https://github.com/NVlabs/ffhq-dataset).
    1. Extract the tfrecords to *images* or *LMDBs* (TensorFlow is required to read tfrecords):
    > python scripts/data_preparation/extract_images_from_tfrecords.py
1. Modify the config file in `options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ.yml`
1. Train with distributed training. More training commands are in [TrainTest_CN.md](TrainTest_CN.md)
> python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/StyleGAN/train_StyleGAN2_256_Cmul2_FFHQ.yml --launcher pytorch
## How to inference StyleGAN2
1. Download pre-trained models from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models` folder.
1. Test.
> python inference/inference_stylegan2.py
1. The results are in the `samples` folder.
## How to inference DFDNet
1. Install [dlib](http://dlib.net/), because DFDNet uses dlib for face detection and landmark detection. [Installation reference](https://github.com/davisking/dlib).
    1. Clone dlib repo: `git clone git@github.com:davisking/dlib.git`
    1. `cd dlib`
    1. Install: `python setup.py install`
2. Download the pre-trained dlib models from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models/dlib` folder.<br>
    You can download them by running the following command OR manually download the pre-trained models.
    > python scripts/download_pretrained_models.py dlib
3. Download the pre-trained DFDNet models, dictionary and face template from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models/DFDNet` folder.<br>
    You can download them by running the following command OR manually download the pre-trained models.
    > python scripts/download_pretrained_models.py DFDNet
4. Prepare the testing images in the `datasets` folder. For example, we put the testing images in the `datasets/TestWhole` folder.
5. Test.
> python inference/inference_dfdnet.py --upscale_factor=2 --test_path datasets/TestWhole
6. The results are in the `results/DFDNet` folder.
## How to train SwinIR (SR)
We take the classical SR X4 model with DIV2K as an example.
1. Prepare the training dataset: [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/). More details are in [DatasetPreparation.md](DatasetPreparation.md#image-super-resolution)
1. Prepare the validation dataset: Set5. You can download with [this guidance](DatasetPreparation.md#common-image-sr-datasets)
1. Modify the config file in [`options/train/SwinIR/train_SwinIR_SRx4_scratch.yml`](../options/train/SwinIR/train_SwinIR_SRx4_scratch.yml) accordingly.
1. Train with distributed training. More training commands are in [TrainTest.md](TrainTest.md).
> python -m torch.distributed.launch --nproc_per_node=8 --master_port=4331 basicsr/train.py -opt options/train/SwinIR/train_SwinIR_SRx4_scratch.yml --launcher pytorch --auto_resume
Note that:
1. Unlike the original setting in the paper, where the X4 model is fine-tuned from the X2 model, we directly train it from scratch.
1. We also use `EMA (Exponential Moving Average)`. Note that all model training in BasicSR supports EMA.
1. After **250K iterations** of training, the X4 model achieves performance comparable to the official model.
| ClassicalSR DIV2KX4 | PSNR (RGB) | PSNR (Y) | SSIM (RGB) | SSIM (Y) |
| :--- | :---: | :---: | :---: | :---: |
| Official | 30.803 | 32.728 | 0.8738|0.9028 |
| Reproduce |30.832 | 32.756 | 0.8739| 0.9025 |
## How to inference SwinIR (SR)
1. Download pre-trained models from the [**official SwinIR repo**](https://github.com/JingyunLiang/SwinIR/releases/tag/v0.0) to the `experiments/pretrained_models/SwinIR` folder.
1. Inference.
> python inference/inference_swinir.py --input datasets/Set5/LRbicx4 --patch_size 48 --model_path experiments/pretrained_models/SwinIR/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth --output results/SwinIR_SRX4_DIV2K/Set5
1. The results are in the `results/SwinIR_SRX4_DIV2K/Set5` folder.
1. You may want to calculate the PSNR/SSIM values.
> python scripts/metrics/calculate_psnr_ssim.py --gt datasets/Set5/GTmod12/ --restored results/SwinIR_SRX4_DIV2K/Set5 --crop_border 4
or test on the Y channel by adding the `--test_y_channel` argument:
> python scripts/metrics/calculate_psnr_ssim.py --gt datasets/Set5/GTmod12/ --restored results/SwinIR_SRX4_DIV2K/Set5 --crop_border 4 --test_y_channel
# Installation
## Contents
- [Requirements](#requirements)
- [BASICSR_EXT and BASICSR_JIT environment variables](#basicsr_ext-and-basicsr_jit-environment-variables)
- [Installation Options](#installation-options)
    - [Install from PyPI](#install-from-pypi)
    - [Install from a local clone](#Install-from-a-local-clone)
## Requirements
- Python >= 3.7 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.7](https://pytorch.org/)
- NVIDIA GPU + [CUDA](https://developer.nvidia.com/cuda-downloads)
- Linux (We have not tested on Windows)
## BASICSR_EXT and BASICSR_JIT Environment Variables
If you want to use PyTorch C++ extensions:<br>
&emsp;deformable convolution: [*dcn* for EDVR](basicsr/ops) (For torchvision>=0.9.0, we use the official `torchvision.ops.deform_conv2d` instead)<br>
&emsp;StyleGAN customized operators: [*upfirdn2d* and *fused_act* for StyleGAN2](basicsr/ops)<br>
you also need to:
1. **compile** the PyTorch C++ extensions during installation
2. OR **load** the PyTorch C++ extensions just-in-time (JIT)
You may choose one of the options according to your needs.
| Option | Pros| Cons | Cases | Env Variable|
| :--- | :--- | :--- | :--- |:--- |
| **Compile** PyTorch C++ extensions during installation | **Quickly load** the compiled extensions at runtime | May have more stringent requirements for the environment, and you may encounter annoying issues | If you need to train/inference those models many times, it will save you time | Set `BASICSR_EXT=True` during **installation**|
| **Load** PyTorch C++ extensions just-in-time (JIT) | Has fewer requirements and usually fewer issues | Each time you run the model, it takes several minutes to load the extensions again | If you just want to do simple inference, it is more convenient| Set `BASICSR_JIT=True` during **running** (not **installation**) |
For those who need to compile the PyTorch C++ extensions during installation, remember:
- Make sure that your gcc and g++ versions satisfy: gcc & g++ >= 5
Note that:
- `BASICSR_JIT` has higher priority, that is, even if you have successfully compiled the PyTorch C++ extensions during installation, the extensions will still be loaded just-in-time if you set `BASICSR_JIT=True` in your running commands.
- :x: Do not set `BASICSR_JIT` during installation. Installation commands are in [Installation Options](#installation-options).
- :heavy_check_mark: If you want to load PyTorch C++ extensions just-in-time (JIT), just set `BASICSR_JIT=True` before your **running** commands. For example, `BASICSR_JIT=True python inference/inference_stylegan2.py`.
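The same can also be done from Python by setting the variable before any of the basicsr ops are imported. This is only an illustrative sketch (the usual way is the shell prefix above); the architecture imported below is just an example of a module that needs the StyleGAN2 C++ ops.

```python
# Equivalent to prefixing the command with BASICSR_JIT=True: set the variable
# in Python *before* importing anything from basicsr that needs the C++ ops.
import os

os.environ['BASICSR_JIT'] = 'True'

# Importing a StyleGAN2 architecture now triggers JIT compilation of the
# upfirdn2d and fused_act extensions (this can take a few minutes).
from basicsr.archs.stylegan2_arch import StyleGAN2Generator

net = StyleGAN2Generator(out_size=256, num_style_feat=512, num_mlp=8)
```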
If you do not need those PyTorch C++ extensions, just skip it. There is no need to set `BASICSR_EXT` or `BASICSR_JIT` environment variables.
## Installation Options
There are two options to install BasicSR, according to your needs.
- If you just want to use BasicSR as a **package** (just like [GFPGAN](https://github.com/TencentARC/GFPGAN)), it is recommended to install from PyPI.
- If you want to **investigate** the details of BasicSR OR **develop** it OR **modify** it to fulfill your needs, it is better to install from a local clone.
### Install from PyPI
- If you do not need C++ extensions (more details are [here](#basicsr_ext-and-basicsr_jit-environment-variables)):
```bash
pip install basicsr
```
- If you want to use C++ extensions in **JIT mode** without compiling them during installation (more details are [here](#basicsr_ext-and-basicsr_jit-environment-variables)):
```bash
pip install basicsr
```
- If you want to **compile C++ extensions during installation**, please set the environment variable `BASICSR_EXT=True`:
```bash
BASICSR_EXT=True pip install basicsr
```
The compilation may fail without any error messages. If you encounter runtime errors, such as `ImportError: cannot import name 'deform_conv_ext' | 'fused_act_ext' | 'upfirdn2d_ext'`, you may check the compilation process by re-installing; the following command will print a detailed log (a quick import check is also sketched at the end of this subsection):
```bash
BASICSR_EXT=True pip install basicsr -vvv
```
You may also want to specify the CUDA paths:
```bash
CUDA_HOME=/usr/local/cuda \
CUDNN_INCLUDE_DIR=/usr/local/cuda \
CUDNN_LIB_DIR=/usr/local/cuda \
BASICSR_EXT=True pip install basicsr
```
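As a quick sanity check after a `BASICSR_EXT=True` install, the compiled extension modules should be importable. This is only an illustrative snippet, not an official verification script; the module paths are assumed from the error message above.

```python
# Illustrative sanity check: these imports should succeed only if the C++
# extensions were actually compiled during installation.
from basicsr.ops.dcn import deform_conv_ext        # noqa: F401
from basicsr.ops.fused_act import fused_act_ext    # noqa: F401
from basicsr.ops.upfirdn2d import upfirdn2d_ext    # noqa: F401

print('BasicSR C++ extensions are available.')
```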
### Install from a local clone
1. Clone the repo
```bash
git clone https://github.com/xinntao/BasicSR.git
```
1. Install dependent packages
```bash
cd BasicSR
pip install -r requirements.txt
```
1. Install BasicSR<br>
Please run the following commands in the **BasicSR root path** to install BasicSR:<br>
- If you do not need C++ extensions (more details are [here](#basicsr_ext-and-basicsr_jit-environment-variables)):
```bash
python setup.py develop
```
- If you want to use C++ extensions in **JIT mode** without compiling them during installation (more details are [here](#basicsr_ext-and-basicsr_jit-environment-variables)):
```bash
python setup.py develop
```
- If you want to **compile C++ extensions during installation**, please set the environment variable `BASICSR_EXT=True`:
```bash
BASICSR_EXT=True python setup.py develop
```
You may also want to specify the CUDA paths:
```bash
CUDA_HOME=/usr/local/cuda \
CUDNN_INCLUDE_DIR=/usr/local/cuda \
CUDNN_LIB_DIR=/usr/local/cuda \
BASICSR_EXT=True python setup.py develop
```
# Logging
[English](Logging.md) **|** [简体中文](Logging_CN.md)
#### Contents
1. [Text Logger](#Text-Logger)
1. [Tensorboard Logger](#Tensorboard-Logger)
1. [Wandb Logger](#Wandb-Logger)
## Text Logger
The log is printed to both a text file and the screen. The text file is usually located at `experiments/exp_name/train_exp_name_timestamp.txt`.
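For reference, inside BasicSR this text logger is obtained via `get_root_logger`. Below is a minimal illustrative sketch; the log file path is a placeholder (in training it follows the pattern above and the directory is created by the framework).

```python
# Minimal sketch: obtain the BasicSR root logger, which writes to both the
# screen and a text file. The log_file path here is only a placeholder.
import logging

from basicsr.utils import get_root_logger

logger = get_root_logger(
    logger_name='basicsr', log_level=logging.INFO, log_file='train_example_log.txt')
logger.info('This message goes to both the screen and the text file.')
```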
## Tensorboard Logger
- Use Tensorboard logger. Set `use_tb_logger: true` in the yml configuration file:
```yml
logger:
  use_tb_logger: true
```
- File location: `tb_logger/exp_name`
- View in the browser:
```bash
tensorboard --logdir tb_logger --port 5500 --bind_all
```
## Wandb Logger
[wandb](https://www.wandb.com/) can be viewed as a cloud version of tensorboard: you can easily view training processes and curves in wandb. Currently, we only sync the tensorboard log to wandb, so tensorboard must also be turned on when using wandb.
Configuration file:
```yml
logger:
  # Whether to use the tensorboard logger
  use_tb_logger: true
  # Whether to use the wandb logger. Currently, wandb only syncs the tensorboard log, so tensorboard must also be turned on when using wandb
  wandb:
    # wandb project name. Default is None, that is, wandb is not used.
    # Here, we use the basicsr wandb project: https://app.wandb.ai/xintao/basicsr
    project: basicsr
    # If resuming, the wandb id can automatically link previous logs
    resume_id: ~
```
**[Examples of training curves in wandb](https://app.wandb.ai/xintao/basicsr)**
<p align="center">
<a href="https://app.wandb.ai/xintao/basicsr" target="_blank">
<img src="../assets/wandb.jpg" height="280">
</a></p>
# Logging
[English](Logging.md) **|** [简体中文](Logging_CN.md)
#### Contents
1. [Text Logger](#Text-Logger)
1. [Tensorboard Logger](#Tensorboard-Logger)
1. [Wandb Logger](#Wandb-Logger)
## Text Logger
The log is printed to both a text file and the screen. The text file is usually located at `experiments/exp_name/train_exp_name_timestamp.txt`.
## Tensorboard Logger
- Enable it by setting `use_tb_logger: true` in the yml configuration file:
```yml
logger:
  use_tb_logger: true
```
- File location: `tb_logger/exp_name`
- View in the browser:
```bash
tensorboard --logdir tb_logger --port 5500 --bind_all
```
## Wandb Logger
[wandb](https://www.wandb.com/) can be viewed as a cloud version of tensorboard: you can conveniently view training processes and curves in the browser. Currently, we only sync the tensorboard log to wandb, so tensorboard must also be turned on when using wandb.
The configuration file is as follows:
```yml
logger:
  # Whether to use the tensorboard logger
  use_tb_logger: true
  # Whether to use the wandb logger. Currently, wandb only syncs the tensorboard log, so tensorboard must also be turned on when using wandb
  wandb:
    # wandb project name. Default is None, that is, wandb is not used.
    # Here, we use the basicsr wandb project: https://app.wandb.ai/xintao/basicsr
    project: basicsr
    # If resuming, the wandb id can automatically link previous logs
    resume_id: ~
```
**[Examples of training curves in wandb](https://app.wandb.ai/xintao/basicsr)**
<p align="center">
<a href="https://app.wandb.ai/xintao/basicsr" target="_blank">
<img src="../assets/wandb.jpg" height="280">
</a></p>
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
# Metrics
[English](Metrics.md) **|** [简体中文](Metrics_CN.md)
## PSNR and SSIM
## NIQE
## FID
> FID measures the similarity between two datasets of images. It was shown to correlate well with human judgement of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks.
> FID is calculated by computing the [Fréchet distance](https://en.wikipedia.org/wiki/Fr%C3%A9chet_distance) between two Gaussians fitted to feature representations of the Inception network.
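Concretely, for two Gaussians fitted to the Inception features, with means $\mu_1, \mu_2$ and covariances $\Sigma_1, \Sigma_2$, the Fréchet distance used by FID is:

```latex
\mathrm{FID} = \lVert \mu_1 - \mu_2 \rVert_2^2
             + \operatorname{Tr}\!\left( \Sigma_1 + \Sigma_2 - 2\left( \Sigma_1 \Sigma_2 \right)^{1/2} \right)
```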
References
- https://github.com/mseitzer/pytorch-fid
- [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](https://arxiv.org/abs/1706.08500)
- [Are GANs Created Equal? A Large-Scale Study](https://arxiv.org/abs/1711.10337)
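The snippet below is a generic sketch of this distance (not BasicSR's implementation); the helper name `frechet_distance` and the toy 16-dimensional features are only for illustration, whereas real FID uses 2048-dimensional Inception features.

```python
# Generic sketch of the Fréchet distance between two Gaussians fitted to
# feature statistics (means and covariances).
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID between N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; small imaginary parts
    # caused by numerical error are discarded.
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean)


# Toy example with random 16-dim features standing in for Inception features.
rng = np.random.default_rng(0)
feats1, feats2 = rng.normal(size=(500, 16)), rng.normal(size=(500, 16))
mu1, sigma1 = feats1.mean(axis=0), np.cov(feats1, rowvar=False)
mu2, sigma2 = feats2.mean(axis=0), np.cov(feats2, rowvar=False)
print(frechet_distance(mu1, sigma1, mu2, sigma2))
```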
### Pre-calculated FFHQ inception feature statistics
Usually, we put the downloaded inception feature statistics in `basicsr/metrics`.
:arrow_double_down: Google Drive: [metrics data](https://drive.google.com/drive/folders/13cWIQyHX3iNmZRJ5v8v3kdyrT9RBTAi9?usp=sharing)
:arrow_double_down: 百度网盘: [评价指标数据](https://pan.baidu.com/s/10mMKXSEgrC5y7m63W5vbMQ) <br>
| File Name | Dataset | Image Shape | Sample Numbers|
| :------------- | :----------:|:----------:|:----------:|
| inception_FFHQ_256-0948f50d.pth | FFHQ | 256 x 256 | 50,000 |
| inception_FFHQ_512-f7b384ab.pth | FFHQ | 512 x 512 | 50,000 |
| inception_FFHQ_1024-75f195dc.pth | FFHQ | 1024 x 1024 | 50,000 |
| inception_FFHQ_256_stylegan2_pytorch-abba9d31.pth | FFHQ | 256 x 256 | 50,000 |
- All the FFHQ inception feature statistics are calculated on images resized to 299 x 299.
- `inception_FFHQ_256_stylegan2_pytorch-abba9d31.pth` is converted from the statistics in [stylegan2-pytorch](https://github.com/rosinality/stylegan2-pytorch).
# Metrics
[English](Metrics.md) **|** [简体中文](Metrics_CN.md)
## PSNR and SSIM
## NIQE
## FID
> FID measures the similarity between two datasets of images. It was shown to correlate well with human judgement of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks.
> FID is calculated by computing the [Fréchet distance](https://en.wikipedia.org/wiki/Fr%C3%A9chet_distance) between two Gaussians fitted to feature representations of the Inception network.
References
- https://github.com/mseitzer/pytorch-fid
- [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](https://arxiv.org/abs/1706.08500)
- [Are GANs Created Equal? A Large-Scale Study](https://arxiv.org/abs/1711.10337)
### Pre-calculated FFHQ inception feature statistics
Usually, we put the downloaded inception feature statistics (used for computing FID) in `basicsr/metrics`.
:arrow_double_down: 百度网盘: [评价指标数据](https://pan.baidu.com/s/10mMKXSEgrC5y7m63W5vbMQ)
:arrow_double_down: Google Drive: [metrics data](https://drive.google.com/drive/folders/13cWIQyHX3iNmZRJ5v8v3kdyrT9RBTAi9?usp=sharing) <br>
| File Name | Dataset | Image Shape | Sample Numbers|
| :------------- | :----------:|:----------:|:----------:|
| inception_FFHQ_256-0948f50d.pth | FFHQ | 256 x 256 | 50,000 |
| inception_FFHQ_512-f7b384ab.pth | FFHQ | 512 x 512 | 50,000 |
| inception_FFHQ_1024-75f195dc.pth | FFHQ | 1024 x 1024 | 50,000 |
| inception_FFHQ_256_stylegan2_pytorch-abba9d31.pth | FFHQ | 256 x 256 | 50,000 |
- All the FFHQ inception feature statistics are calculated on images resized to 299 x 299.
- `inception_FFHQ_256_stylegan2_pytorch-abba9d31.pth` is converted from the statistics in [stylegan2-pytorch](https://github.com/rosinality/stylegan2-pytorch).
# Model Zoo and Baselines
[English](ModelZoo.md) **|** [简体中文](ModelZoo_CN.md)
Download: ⏬ Google Drive: [Pretrained Models](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) **|** [Reproduced Experiments](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing)
⏬ 百度网盘: [预训练模型](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g) **|** [复现实验](https://pan.baidu.com/s/1UElD6q8sVAgn_cxeBDOlvQ)
📈 [Training curves in wandb](https://app.wandb.ai/xintao/basicsr)
---
We provide:
1. Official models converted directly from the officially released models
1. Models reproduced with `BasicSR`; pre-trained models and example logs are provided
You can put the downloaded models in the `experiments/pretrained_models` folder.
**[Download official pre-trained models]** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g))
You can use the following script to download pre-trained models from Google Drive.
```bash
python scripts/download_pretrained_models.py ESRGAN
# method can be ESRGAN, EDVR, StyleGAN, EDSR, DUF, DFDNet, dlib
```
**[Download reproduced models and logs]** ([Google Drive](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing), [百度网盘](https://pan.baidu.com/s/1UElD6q8sVAgn_cxeBDOlvQ))
In addition, we upload the training process and curves in [wandb](https://www.wandb.com/).
**[Training curves in wandb](https://app.wandb.ai/xintao/basicsr)**
<p align="center">
<a href="https://app.wandb.ai/xintao/basicsr" target="_blank">
<img src="../assets/wandb.jpg" height="350">
</a></p>
#### Contents
1. [Image Super-Resolution](#Image-Super-Resolution)
    1. [Image SR Official Models](#Image-SR-Official-Models)
    1. [Image SR Reproduced Models](#Image-SR-Reproduced-Models)
1. [Video Super-Resolution](#Video-Super-Resolution)
## Image Super-Resolution
For evaluation:
- We crop `scale` pixels from each border
- Evaluated on RGB channels
### Image SR Official Models
|Exp Name | Set5 (PSNR/SSIM) | Set14 (PSNR/SSIM) |DIV2K100 (PSNR/SSIM) |
| :------------- | :----------: | :----------: |:----------: |
| EDSR_Mx2_f64b16_DIV2K_official-3ba7b086 | 35.7768 / 0.9442 | 31.4966 / 0.8939 | 34.6291 / 0.9373 |
| EDSR_Mx3_f64b16_DIV2K_official-6908f88a | 32.3597 / 0.903 | 28.3932 / 0.8096 | 30.9438 / 0.8737 |
| EDSR_Mx4_f64b16_DIV2K_official-0c287733 | 30.1821 / 0.8641 | 26.7528 / 0.7432 | 28.9679 / 0.8183 |
| EDSR_Lx2_f256b32_DIV2K_official-be38e77d | 35.9979 / 0.9454 | 31.8583 / 0.8971 | 35.0495 / 0.9407 |
| EDSR_Lx3_f256b32_DIV2K_official-3660f70d | 32.643 / 0.906 | 28.644 / 0.8152 | 31.28 / 0.8798 |
| EDSR_Lx4_f256b32_DIV2K_official-76ee1c8f | 30.5499 / 0.8701 | 27.0011 / 0.7509 | 29.277 / 0.8266 |
### Image SR Reproduced Models
Experiment name conventions are in [Config.md](Config.md).
|Exp Name | Set5 (PSNR/SSIM) | Set14 (PSNR/SSIM) |DIV2K100 (PSNR/SSIM) |
| :------------- | :----------: | :----------: |:----------: |
| 001_MSRResNet_x4_f64b16_DIV2K_1000k_B16G1_wandb | 30.2468 / 0.8651 | 26.7817 / 0.7451 | 28.9967 / 0.8195 |
| 002_MSRResNet_x2_f64b16_DIV2K_1000k_B16G1_001pretrain_wandb | 35.7483 / 0.9442 | 31.5403 / 0.8937 |34.6699 / 0.9377|
| 003_MSRResNet_x3_f64b16_DIV2K_1000k_B16G1_001pretrain_wandb | 32.4038 / 0.9032| 28.4418 / 0.8106|30.9726 / 0.8743 |
| 004_MSRGAN_x4_f64b16_DIV2K_400k_B16G1_wandb | 28.0158 / 0.8087|24.7474 / 0.6623 | 26.6504 / 0.7462|
| | | | |
| 201_EDSR_Mx2_f64b16_DIV2K_300k_B16G1_wandb | 35.7395 / 0.944|31.4348 / 0.8934 |34.5798 / 0.937 |
| 202_EDSR_Mx3_f64b16_DIV2K_300k_B16G1_201pretrain_wandb|32.315 / 0.9026 |28.3866 / 0.8088 |30.9095 / 0.8731|
| 203_EDSR_Mx4_f64b16_DIV2K_300k_B16G1_201pretrain_wandb|30.1726 / 0.8641 |26.721 / 0.743 |28.9506 / 0.818|
| 204_EDSR_Lx2_f256b32_DIV2K_300k_B16G1_wandb | 35.9792 / 0.9453 | 31.7284 / 0.8959 | 34.9544 / 0.9399 |
| 205_EDSR_Lx3_f256b32_DIV2K_300k_B16G1_204pretrain_wandb | 32.6467 / 0.9057 | 28.6859 / 0.8152 | 31.2664 / 0.8793 |
| 206_EDSR_Lx4_f256b32_DIV2K_300k_B16G1_204pretrain_wandb | 30.4718 / 0.8695 | 26.9616 / 0.7502 | 29.2621 / 0.8265 |
## Video Super-Resolution
#### Evaluation
In the evaluation, we include all the input frames and do not crop any border pixels unless otherwise stated.<br/>
We do not use the self-ensemble (flip testing) strategy and any other post-processing methods.
## EDVR
**Name convention**<br/>
EDVR\_(training dataset)\_(track name)\_(model complexity)
- track name. There are four tracks in the NTIRE 2019 Challenges on Video Restoration and Enhancement:
    - **SR**: super-resolution with a fixed downsampling kernel (MATLAB bicubic downsampling kernel is frequently used). Most of the previous video SR methods focus on this setting.
    - **SRblur**: the inputs are also degraded with motion blur.
    - **deblur**: standard deblurring (motion blur).
    - **deblurcomp**: motion blur + video compression artifacts.
- model complexity
    - **L** (Large): # of channels = 128, # of back residual blocks = 40. This setting is used in our competition submission.
    - **M** (Moderate): # of channels = 64, # of back residual blocks = 10.
| Model name |[Test Set] PSNR/SSIM |
|:----------:|:----------:|
| EDVR_Vimeo90K_SR_L | [Vid4] (Y<sup>1</sup>) 27.35/0.8264 [[↓Results]](https://drive.google.com/open?id=14nozpSfe9kC12dVuJ9mspQH5ZqE4mT9K)<br/> (RGB) 25.83/0.8077|
| EDVR_REDS_SR_M | [REDS] (RGB) 30.53/0.8699 [[↓Results]](https://drive.google.com/open?id=1Mek3JIxkjJWjhZhH4qVwTXnRZutKUtC-)|
| EDVR_REDS_SR_L | [REDS] (RGB) 31.09/0.8800 [[↓Results]](https://drive.google.com/open?id=1h6E0QVZyJ5SBkcnYaT1puxYYPVbPsTLt)|
| EDVR_REDS_SRblur_L | [REDS] (RGB) 28.88/0.8361 [[↓Results]](https://drive.google.com/open?id=1-8MNkQuMVMz30UilB9m_d0SXicwFEPZH)|
| EDVR_REDS_deblur_L | [REDS] (RGB) 34.80/0.9487 [[↓Results]](https://drive.google.com/open?id=133wCHTwiiRzenOEoStNbFuZlCX8Jn2at)|
| EDVR_REDS_deblurcomp_L | [REDS] (RGB) 30.24/0.8567 [[↓Results]](https://drive.google.com/open?id=1VjC4fXBXy0uxI8Kwxh-ijj4PZkfsLuTX) |
<sup>1</sup> Y or RGB denotes the evaluation on Y (luminance) or RGB channels.
#### Stage 2 models for the NTIRE19 Competition
| Model name |[Test Set] PSNR/SSIM |
|:----------:|:----------:|
| EDVR_REDS_SR_Stage2 | [REDS] (RGB) / [[↓Results]]()|
| EDVR_REDS_SRblur_Stage2 | [REDS] (RGB) / [[↓Results]]()|
| EDVR_REDS_deblur_Stage2 | [REDS] (RGB) / [[↓Results]]()|
| EDVR_REDS_deblurcomp_Stage2 | [REDS] (RGB) / [[↓Results]]() |
## DUF
The models are converted from the [officially released models](https://github.com/yhjo09/VSR-DUF). <br/>
| Model name | [Test Set] PSNR/SSIM<sup>1</sup> | Official Results<sup>2</sup> |
|:----------:|:----------:|:----------:|
| DUF_x4_52L_official<sup>3</sup> | [Vid4] (Y<sup>4</sup>) 27.33/0.8319 [[↓Results]](https://drive.google.com/open?id=1U9xGhlDSpPPQvKN0BAzXfjUCvaFxwsQf)<br/> (RGB) 25.80/0.8138 | (Y) 27.33/0.8318 [[↓Results]](https://drive.google.com/open?id=1HUmf__cSL7td7J4cXo2wvbVb14Y8YG2j)<br/> (RGB) 25.79/0.8136 |
| DUF_x4_28L_official | [Vid4] | |
| DUF_x4_16L_official | [Vid4] | |
| DUF_x3_16L_official | [Vid4] | |
| DUF_x2_16L_official | [Vid4] | |
<sup>1</sup> We crop eight pixels near image boundary for DUF due to its severe boundary effects. <br/>
<sup>2</sup> The official results are obtained by running the official codes and models. <br/>
<sup>3</sup> Different from the official codes, where `zero padding` is used for border frames, we use `new_info` strategy. <br/>
<sup>4</sup> Y or RGB denotes the evaluation on Y (luminance) or RGB channels.
## TOF
The models are converted from the [officially released models](https://github.com/anchen1011/toflow).<br/>
| Model name | [Test Set] PSNR/SSIM | Official Results<sup>1</sup> |
|:----------:|:----------:|:----------:|
| TOF_official<sup>2</sup> | [Vid4] (Y<sup>3</sup>) 25.86/0.7626 [[↓Results]](https://drive.google.com/open?id=1Xp5U6uZeM44ShzawfuW_E-NmQ30hk-Be)<br/> (RGB) 24.38/0.7403 | (Y) 25.89/0.7651 [[↓Results]](https://drive.google.com/open?id=1WY3CcdzbRhpvDi3aGc1jAhIbeC6GUrM8)<br/> (RGB) 24.41/0.7428 |
<sup>1</sup> The official results are obtained by running the official codes and models. Note that TOFlow does not provide a strategy for border frame recovery and we simply use a `replicate` strategy for border frames. <br/>
<sup>2</sup> The converted model has slightly different results, due to different implementation. And we use `new_info` strategy for border frames. <br/>
<sup>3</sup> Y or RGB denotes the evaluation on Y (luminance) or RGB channels.
# Model Zoo and Baselines
[English](ModelZoo.md) **|** [简体中文](ModelZoo_CN.md)
:arrow_double_down: 百度网盘: [Pretrained Models](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g) **|** [Reproduced Experiments](https://pan.baidu.com/s/1UElD6q8sVAgn_cxeBDOlvQ)
:arrow_double_down: Google Drive: [Pretrained Models](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) **|** [Reproduced Experiments](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing)
---
We provide:
1. Official models converted directly from the officially released models
1. Models reproduced with the `BasicSR` framework; pre-trained models and example logs are provided
The downloaded models can be put in the `experiments/pretrained_models` folder.
**[Download the official pre-trained models]** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g))
You can use the following script to download pre-trained models from Google Drive.
```bash
python scripts/download_pretrained_models.py ESRGAN
# method can be ESRGAN, EDVR, StyleGAN, EDSR, DUF, DFDNet, dlib
```
**[Download the reproduced models and logs]** ([Google Drive](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing), [百度网盘](https://pan.baidu.com/s/1UElD6q8sVAgn_cxeBDOlvQ))
In addition, we upload the training process and curves to [wandb](https://www.wandb.com/) for convenient comparison:
**[Training curves in wandb](https://app.wandb.ai/xintao/basicsr)**
<p align="center">
<a href="https://app.wandb.ai/xintao/basicsr" target="_blank">
<img src="../assets/wandb.jpg" height="350">
</a></p>
#### Contents
1. [Image Super-Resolution](#Image-Super-Resolution)
    1. [Image SR Official Models](#Image-SR-Official-Models)
    1. [Image SR Reproduced Models](#Image-SR-Reproduced-Models)
1. [Video Super-Resolution](#Video-Super-Resolution)
## Image Super-Resolution
For evaluation:
- We crop `scale` pixels from each border
- Evaluated on RGB channels
### Image SR Official Models
|Exp Name | Set5 (PSNR/SSIM) | Set14 (PSNR/SSIM) |DIV2K100 (PSNR/SSIM) |
| :------------- | :----------: | :----------: |:----------: |
| EDSR_Mx2_f64b16_DIV2K_official-3ba7b086 | 35.7768 / 0.9442 | 31.4966 / 0.8939 | 34.6291 / 0.9373 |
| EDSR_Mx3_f64b16_DIV2K_official-6908f88a | 32.3597 / 0.903 | 28.3932 / 0.8096 | 30.9438 / 0.8737 |
| EDSR_Mx4_f64b16_DIV2K_official-0c287733 | 30.1821 / 0.8641 | 26.7528 / 0.7432 | 28.9679 / 0.8183 |
| EDSR_Lx2_f256b32_DIV2K_official-be38e77d | 35.9979 / 0.9454 | 31.8583 / 0.8971 | 35.0495 / 0.9407 |
| EDSR_Lx3_f256b32_DIV2K_official-3660f70d | 32.643 / 0.906 | 28.644 / 0.8152 | 31.28 / 0.8798 |
| EDSR_Lx4_f256b32_DIV2K_official-76ee1c8f | 30.5499 / 0.8701 | 27.0011 / 0.7509 | 29.277 / 0.8266 |
### Image SR Reproduced Models
Experiment name conventions are in [Config_CN.md](Config_CN.md).
|Exp Name | Set5 (PSNR/SSIM) | Set14 (PSNR/SSIM) |DIV2K100 (PSNR/SSIM) |
| :------------- | :----------: | :----------: |:----------: |
| 001_MSRResNet_x4_f64b16_DIV2K_1000k_B16G1_wandb | 30.2468 / 0.8651 | 26.7817 / 0.7451 | 28.9967 / 0.8195 |
| 002_MSRResNet_x2_f64b16_DIV2K_1000k_B16G1_001pretrain_wandb | 35.7483 / 0.9442 | 31.5403 / 0.8937 |34.6699 / 0.9377|
| 003_MSRResNet_x3_f64b16_DIV2K_1000k_B16G1_001pretrain_wandb | 32.4038 / 0.9032| 28.4418 / 0.8106|30.9726 / 0.8743 |
| 004_MSRGAN_x4_f64b16_DIV2K_400k_B16G1_wandb | 28.0158 / 0.8087|24.7474 / 0.6623 | 26.6504 / 0.7462|
| | | | |
| 201_EDSR_Mx2_f64b16_DIV2K_300k_B16G1_wandb | 35.7395 / 0.944|31.4348 / 0.8934 |34.5798 / 0.937 |
| 202_EDSR_Mx3_f64b16_DIV2K_300k_B16G1_201pretrain_wandb|32.315 / 0.9026 |28.3866 / 0.8088 |30.9095 / 0.8731|
| 203_EDSR_Mx4_f64b16_DIV2K_300k_B16G1_201pretrain_wandb|30.1726 / 0.8641 |26.721 / 0.743 |28.9506 / 0.818|
| 204_EDSR_Lx2_f256b32_DIV2K_300k_B16G1_wandb | 35.9792 / 0.9453 | 31.7284 / 0.8959 | 34.9544 / 0.9399 |
| 205_EDSR_Lx3_f256b32_DIV2K_300k_B16G1_204pretrain_wandb | 32.6467 / 0.9057 | 28.6859 / 0.8152 | 31.2664 / 0.8793 |
| 206_EDSR_Lx4_f256b32_DIV2K_300k_B16G1_204pretrain_wandb | 30.4718 / 0.8695 | 26.9616 / 0.7502 | 29.2621 / 0.8265 |
## Video Super-Resolution
#### Evaluation
In the evaluation, we include all the input frames and do not crop any border pixels unless otherwise stated.<br/>
We do not use the self-ensemble (flip testing) strategy and any other post-processing methods.
## EDVR
**Name convention**<br/>
EDVR\_(training dataset)\_(track name)\_(model complexity)
- track name. There are four tracks in the NTIRE 2019 Challenges on Video Restoration and Enhancement:
    - **SR**: super-resolution with a fixed downsampling kernel (MATLAB bicubic downsampling kernel is frequently used). Most of the previous video SR methods focus on this setting.
    - **SRblur**: the inputs are also degraded with motion blur.
    - **deblur**: standard deblurring (motion blur).
    - **deblurcomp**: motion blur + video compression artifacts.
- model complexity
    - **L** (Large): # of channels = 128, # of back residual blocks = 40. This setting is used in our competition submission.
    - **M** (Moderate): # of channels = 64, # of back residual blocks = 10.
| Model name |[Test Set] PSNR/SSIM |
|:----------:|:----------:|
| EDVR_Vimeo90K_SR_L | [Vid4] (Y<sup>1</sup>) 27.35/0.8264 [[↓Results]](https://drive.google.com/open?id=14nozpSfe9kC12dVuJ9mspQH5ZqE4mT9K)<br/> (RGB) 25.83/0.8077|
| EDVR_REDS_SR_M | [REDS] (RGB) 30.53/0.8699 [[↓Results]](https://drive.google.com/open?id=1Mek3JIxkjJWjhZhH4qVwTXnRZutKUtC-)|
| EDVR_REDS_SR_L | [REDS] (RGB) 31.09/0.8800 [[↓Results]](https://drive.google.com/open?id=1h6E0QVZyJ5SBkcnYaT1puxYYPVbPsTLt)|
| EDVR_REDS_SRblur_L | [REDS] (RGB) 28.88/0.8361 [[↓Results]](https://drive.google.com/open?id=1-8MNkQuMVMz30UilB9m_d0SXicwFEPZH)|
| EDVR_REDS_deblur_L | [REDS] (RGB) 34.80/0.9487 [[↓Results]](https://drive.google.com/open?id=133wCHTwiiRzenOEoStNbFuZlCX8Jn2at)|
| EDVR_REDS_deblurcomp_L | [REDS] (RGB) 30.24/0.8567 [[↓Results]](https://drive.google.com/open?id=1VjC4fXBXy0uxI8Kwxh-ijj4PZkfsLuTX) |
<sup>1</sup> Y or RGB denotes the evaluation on Y (luminance) or RGB channels.
#### Stage 2 models for the NTIRE19 Competition
| Model name |[Test Set] PSNR/SSIM |
|:----------:|:----------:|
| EDVR_REDS_SR_Stage2 | [REDS] (RGB) / [[↓Results]]()|
| EDVR_REDS_SRblur_Stage2 | [REDS] (RGB) / [[↓Results]]()|
| EDVR_REDS_deblur_Stage2 | [REDS] (RGB) / [[↓Results]]()|
| EDVR_REDS_deblurcomp_Stage2 | [REDS] (RGB) / [[↓Results]]() |
## DUF
The models are converted from the [officially released models](https://github.com/yhjo09/VSR-DUF). <br/>
| Model name | [Test Set] PSNR/SSIM<sup>1</sup> | Official Results<sup>2</sup> |
|:----------:|:----------:|:----------:|
| DUF_x4_52L_official<sup>3</sup> | [Vid4] (Y<sup>4</sup>) 27.33/0.8319 [[↓Results]](https://drive.google.com/open?id=1U9xGhlDSpPPQvKN0BAzXfjUCvaFxwsQf)<br/> (RGB) 25.80/0.8138 | (Y) 27.33/0.8318 [[↓Results]](https://drive.google.com/open?id=1HUmf__cSL7td7J4cXo2wvbVb14Y8YG2j)<br/> (RGB) 25.79/0.8136 |
| DUF_x4_28L_official | [Vid4] | |
| DUF_x4_16L_official | [Vid4] | |
| DUF_x3_16L_official | [Vid4] | |
| DUF_x2_16L_official | [Vid4] | |
<sup>1</sup> We crop eight pixels near image boundary for DUF due to its severe boundary effects. <br/>
<sup>2</sup> The official results are obtained by running the official codes and models. <br/>
<sup>3</sup> Different from the official codes, where `zero padding` is used for border frames, we use `new_info` strategy. <br/>
<sup>4</sup> Y or RGB denotes the evaluation on Y (luminance) or RGB channels.
## TOF
The models are converted from the [officially released models](https://github.com/anchen1011/toflow).<br/>
| Model name | [Test Set] PSNR/SSIM | Official Results<sup>1</sup> |
|:----------:|:----------:|:----------:|
| TOF_official<sup>2</sup> | [Vid4] (Y<sup>3</sup>) 25.86/0.7626 [[↓Results]](https://drive.google.com/open?id=1Xp5U6uZeM44ShzawfuW_E-NmQ30hk-Be)<br/> (RGB) 24.38/0.7403 | (Y) 25.89/0.7651 [[↓Results]](https://drive.google.com/open?id=1WY3CcdzbRhpvDi3aGc1jAhIbeC6GUrM8)<br/> (RGB) 24.41/0.7428 |
<sup>1</sup> The official results are obtained by running the official codes and models. Note that TOFlow does not provide a strategy for border frame recovery and we simply use a `replicate` strategy for border frames. <br/>
<sup>2</sup> The converted model has slightly different results, due to different implementation. And we use `new_info` strategy for border frames. <br/>
<sup>3</sup> Y or RGB denotes the evaluation on Y (luminance) or RGB channels.
# Models
[English](Models.md) **|** [简体中文](BasicSR_docs_CN.md)
#### Contents
1. [Supported Models](#Supported-Models)
1. [Inheritance Relationship](#Inheritance-Relationship)
## Supported Models
| Class | Description |Supported Algorithms |
| :------------- | :----------:| :----------: |
| [BaseModel](../basicsr/models/base_model.py) | Abstract base class that defines common functions | |
| [SRModel](../basicsr/models/sr_model.py) | Base image SR class | SRCNN, EDSR, SRResNet, RCAN, RRDBNet, etc |
| [SRGANModel](../basicsr/models/srgan_model.py) | SRGAN image SR class | SRGAN |
| [ESRGANModel](../basicsr/models/esrgan_model.py) | ESRGAN image SR class|ESRGAN|
| [VideoBaseModel](../basicsr/models/video_base_model.py) | Base video SR class | |
| [EDVRModel](../basicsr/models/edvr_model.py) | EDVR video SR class |EDVR|
| [StyleGAN2Model](../basicsr/models/stylegan2_model.py) | StyleGAN2 generation class |StyleGAN2|
## Inheritance Relationship
In order to reuse components among models, we use a lot of inheritance. The following is the inheritance relationship:
```txt
BaseModel
├── SRModel
│ ├── SRGANModel
│ │ └── ESRGANModel
│ └── VideoBaseModel
│ ├── VideoGANModel
│ └── EDVRModel
└── StyleGAN2Model
```
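As a minimal sketch of how this hierarchy is typically extended (the class name and the `my_weight` option key below are hypothetical, not part of BasicSR):

```python
from basicsr.models.sr_model import SRModel
from basicsr.utils.registry import MODEL_REGISTRY


@MODEL_REGISTRY.register()
class MySRModel(SRModel):
    """Hypothetical image SR model that reuses SRModel's training/testing logic."""

    def __init__(self, opt):
        super().__init__(opt)
        # 'my_weight' is a made-up option key, read from the yml config.
        self.my_weight = opt.get('my_weight', 1.0)
```

With the registration above, the new class should then be selectable from a yml config via its class name (the `model_type` option).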
# BasicSR docs
This folder includes:
- The auto-generated API documentation at [*basicsr.readthedocs.io*](https://basicsr.readthedocs.io/en/latest/#)
- Other documents about BasicSR
## Chinese Documentation
We provide a more complete Chinese documentation PDF for BasicSR; you can find what you need in the corresponding chapters.
The latest version of the documentation can be downloaded from [BasicSR-docs/releases](https://github.com/XPixelGroup/BasicSR-docs/releases).
Everyone is welcome to help find errors in the documentation and improve it.
## How to generate the API docs locally
```bash
cd docs
make html
```
## Conventions
rst syntax reference: https://3vshej.cn/rstSyntax/
Key points:
```rst
- Blank lines
- :file:`file`, :func:`func`, :class:`class`, :math:`\gamma`
- **bold**, *italic*
- ``Paper: title``
- Reference: link
```
Examples:
```python
class SPyNetTOF(nn.Module):
    """SPyNet architecture for TOF.

    Note that this implementation is specifically for TOFlow. Please use :file:`spynet_arch.py` for general use.
    They differ in the following aspects:

    1. The basic modules here contain BatchNorm.
    2. Normalization and denormalization are not done here, as they are done in TOFlow.

    ``Paper: Optical Flow Estimation using a Spatial Pyramid Network``

    Reference: https://github.com/Coldog2333/pytoflow

    Args:
        load_path (str): Path for pretrained SPyNet. Default: None.
    """
```
# Training and Testing
[English](TrainTest.md) **|** [简体中文](TrainTest_CN.md)
Please run the commands in the root path of `BasicSR`. <br>
In general, both the training and testing include the following steps:
1. Prepare datasets. Please refer to [DatasetPreparation.md](DatasetPreparation.md)
1. Modify config files. The config files are under the `options` folder. For more specific configuration information, please refer to [Config](Config.md)
1. [Optional] You may need to download pre-trained models if you are testing or using pre-trained models. Please see [ModelZoo](ModelZoo.md)
1. Run commands. Use [Training Commands](#Training-Commands) or [Testing Commands](#Testing-Commands) accordingly.
#### Contents
1. [Training Commands](#Training-Commands)
    1. [Single GPU Training](#Single-GPU-Training)
    1. [Distributed (Multi-GPUs) Training](#Distributed-Training)
    1. [Slurm Training](#Slurm-Training)
1. [Testing Commands](#Testing-Commands)
    1. [Single GPU Testing](#Single-GPU-Testing)
    1. [Distributed (Multi-GPUs) Testing](#Distributed-Testing)
    1. [Slurm Testing](#Slurm-Testing)
## Training Commands
### Single GPU Training
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0 \\\
> python basicsr/train.py -opt options/train/SRResNet_SRGAN/train_MSRResNet_x4.yml
### Distributed Training
**8 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\\
> python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher pytorch
or
> CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\\
> ./scripts/dist_train.sh 8 options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml
**4 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0,1,2,3 \\\
> python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher pytorch
or
> CUDA_VISIBLE_DEVICES=0,1,2,3 \\\
> ./scripts/dist_train.sh 4 options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml
### Slurm Training
[Introduction to Slurm](https://slurm.schedmd.com/quickstart.html)
**1 GPU**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=MSRResNetx4 --gres=gpu:1 --ntasks=1 --ntasks-per-node=1 --cpus-per-task=6 --kill-on-bad-exit=1 \\\
> python -u basicsr/train.py -opt options/train/SRResNet_SRGAN/train_MSRResNet_x4.yml --launcher="slurm"
**4 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=EDVRMwoTSA --gres=gpu:4 --ntasks=4 --ntasks-per-node=4 --cpus-per-task=4 --kill-on-bad-exit=1 \\\
> python -u basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher="slurm"
**8 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=EDVRMwoTSA --gres=gpu:8 --ntasks=8 --ntasks-per-node=8 --cpus-per-task=6 --kill-on-bad-exit=1 \\\
> python -u basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher="slurm"
## Testing Commands
### Single GPU Testing
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0 \\\
> python basicsr/test.py -opt options/test/SRResNet_SRGAN/test_MSRResNet_x4.yml
### Distributed Testing
**8 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\\
> python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher pytorch
or
> CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\\
> ./scripts/dist_test.sh 8 options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml
**4 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0,1,2,3 \\\
> python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher pytorch
or
> CUDA_VISIBLE_DEVICES=0,1,2,3 \\\
> ./scripts/dist_test.sh 4 options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml
### Slurm Testing
[Introduction to Slurm](https://slurm.schedmd.com/quickstart.html)
**1 GPU**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:1 --ntasks=1 --ntasks-per-node=1 --cpus-per-task=6 --kill-on-bad-exit=1 \\\
> python -u basicsr/test.py -opt options/test/SRResNet_SRGAN/test_MSRResNet_x4.yml --launcher="slurm"
**4 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:4 --ntasks=4 --ntasks-per-node=4 --cpus-per-task=4 --kill-on-bad-exit=1 \\\
> python -u basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher="slurm"
**8 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:8 --ntasks=8 --ntasks-per-node=8 --cpus-per-task=6 --kill-on-bad-exit=1 \\\
> python -u basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher="slurm"
# Training and Testing
[English](TrainTest.md) **|** [简体中文](TrainTest_CN.md)
Please run all the commands in the root path of `BasicSR`. <br>
In general, both training and testing include the following steps:
1. Prepare datasets. Please refer to [DatasetPreparation_CN.md](DatasetPreparation_CN.md)
1. Modify the config files. The config files are under the `options` folder. For more specific configuration information, please refer to [Config_CN.md](Config_CN.md)
1. [Optional] Download pre-trained models if you are testing or need pre-trained models. Please see [ModelZoo_CN.md](ModelZoo_CN.md)
1. Run the commands. Use the [Training Commands](#Training-Commands) or [Testing Commands](#Testing-Commands) accordingly.
#### Contents
1. [Training Commands](#Training-Commands)
    1. [Single GPU Training](#Single-GPU-Training)
    1. [Distributed (Multi-GPUs) Training](#Distributed-Training)
    1. [Slurm Training](#Slurm-Training)
1. [Testing Commands](#Testing-Commands)
    1. [Single GPU Testing](#Single-GPU-Testing)
    1. [Distributed (Multi-GPUs) Testing](#Distributed-Testing)
    1. [Slurm Testing](#Slurm-Testing)
## Training Commands
### Single GPU Training
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0 \\\
> python basicsr/train.py -opt options/train/SRResNet_SRGAN/train_MSRResNet_x4.yml
### Distributed Training
**8 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\\
> python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher pytorch
or
> CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\\
> ./scripts/dist_train.sh 8 options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml
**4 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0,1,2,3 \\\
> python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher pytorch
or
> CUDA_VISIBLE_DEVICES=0,1,2,3 \\\
> ./scripts/dist_train.sh 4 options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml
### Slurm Training
[Introduction to Slurm](https://slurm.schedmd.com/quickstart.html)
**1 GPU**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=MSRResNetx4 --gres=gpu:1 --ntasks=1 --ntasks-per-node=1 --cpus-per-task=6 --kill-on-bad-exit=1 \\\
> python -u basicsr/train.py -opt options/train/SRResNet_SRGAN/train_MSRResNet_x4.yml --launcher="slurm"
**4 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=EDVRMwoTSA --gres=gpu:4 --ntasks=4 --ntasks-per-node=4 --cpus-per-task=4 --kill-on-bad-exit=1 \\\
> python -u basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher="slurm"
**8 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=EDVRMwoTSA --gres=gpu:8 --ntasks=8 --ntasks-per-node=8 --cpus-per-task=6 --kill-on-bad-exit=1 \\\
> python -u basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher="slurm"
## Testing Commands
### Single GPU Testing
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0 \\\
> python basicsr/test.py -opt options/test/SRResNet_SRGAN/test_MSRResNet_x4.yml
### Distributed Testing
**8 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\\
> python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher pytorch
or
> CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\\
> ./scripts/dist_test.sh 8 options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml
**4 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> CUDA_VISIBLE_DEVICES=0,1,2,3 \\\
> python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher pytorch
or
> CUDA_VISIBLE_DEVICES=0,1,2,3 \\\
> ./scripts/dist_test.sh 4 options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml
### Slurm Testing
[Introduction to Slurm](https://slurm.schedmd.com/quickstart.html)
**1 GPU**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:1 --ntasks=1 --ntasks-per-node=1 --cpus-per-task=6 --kill-on-bad-exit=1 \\\
> python -u basicsr/test.py -opt options/test/SRResNet_SRGAN/test_MSRResNet_x4.yml --launcher="slurm"
**4 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:4 --ntasks=4 --ntasks-per-node=4 --cpus-per-task=4 --kill-on-bad-exit=1 \\\
> python -u basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher="slurm"
**8 GPUs**
> PYTHONPATH="./:${PYTHONPATH}" \\\
> GLOG_vmodule=MemcachedClient=-1 \\\
> srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:8 --ntasks=8 --ntasks-per-node=8 --cpus-per-task=6 --kill-on-bad-exit=1 \\\
> python -u basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher="slurm"
import os
from os import path as osp


def scandir(dir_path, suffix=None, recursive=False, full_path=False):
    """Scan a directory to find the interested files.

    Args:
        dir_path (str): Path of the directory.
        suffix (str | tuple(str), optional): File suffix that we are
            interested in. Default: None.
        recursive (bool, optional): If set to True, recursively scan the
            directory. Default: False.
        full_path (bool, optional): If set to True, include the dir_path.
            Default: False.

    Returns:
        A generator for all the interested files with relative paths.
    """
    if (suffix is not None) and not isinstance(suffix, (str, tuple)):
        raise TypeError('"suffix" must be a string or tuple of strings')

    root = dir_path

    def _scandir(dir_path, suffix, recursive):
        for entry in os.scandir(dir_path):
            if not entry.name.startswith('.') and entry.is_file():
                if full_path:
                    return_path = entry.path
                else:
                    return_path = osp.relpath(entry.path, root)

                if suffix is None:
                    yield return_path
                elif return_path.endswith(suffix):
                    yield return_path
            else:
                if recursive:
                    yield from _scandir(entry.path, suffix=suffix, recursive=recursive)
                else:
                    continue

    return _scandir(dir_path, suffix=suffix, recursive=recursive)
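

# Example usage of the scandir helper above (illustrative only): list all .py
# files under the basicsr package folder, relative to that folder.
#   for file_path in scandir('../basicsr', suffix='.py', recursive=True):
#       print(file_path)  # e.g. 'models/sr_model.py'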
# specifically generate a fake __init__.py for scripts folder to generate docs
with open('../scripts/__init__.py', 'w') as f:
    pass

module_name_list = ['basicsr', 'scripts']
for module_name in module_name_list:
    cur_dir = osp.abspath(osp.dirname(__file__))
    output_dir = osp.join(cur_dir, 'api')
    module_dir = osp.join(osp.dirname(cur_dir), module_name)
    os.makedirs(output_dir, exist_ok=True)

    api_content = f'{module_name} API\n=========================\n'
    submodule_name_list = []
    for path in sorted(scandir(module_dir, suffix='.py', recursive=True)):
        if path in ['__init__.py', 'version.py']:
            continue
        path = f'{module_name}.' + path.replace('\\', '/').replace('/', '.').replace('.py', '.rst')
        # create .rst file
        output_rst = osp.join(output_dir, path)
        with open(output_rst, 'w') as f:
            # write contents
            title = path.replace('.rst', '').replace('_', '\\_')
            content = f'{title}\n===================================================================================\n'
            automodule = path.replace('.rst', '')
            content += f'\n.. automodule:: {automodule}'
            content += r'''
    :members:
    :undoc-members:
    :show-inheritance:
'''
            f.write(content)

        # add to api.rst
        submodule_name = path.split('.')[1]
        if submodule_name not in submodule_name_list:
            submodule_name_list.append(submodule_name)
            api_content += f'\n\n{module_name}.{submodule_name}\n-----------------------------------------------------'
            api_content += r'''
.. toctree::
    :maxdepth: 4

'''
        api_content += f' {automodule}\n'

    # write to api.rst
    with open(os.path.join(output_dir, f'api_{module_name}.rst'), 'w') as f:
        f.write(api_content)
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import subprocess
import sys
sys.path.insert(0, os.path.abspath('..'))
# -- Project information -----------------------------------------------------
project = 'BasicSR'
copyright = '2018-2022, BasicSR'
author = 'BasicSR'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode', 'recommonmark', 'sphinx_markdown_tables'
]

source_suffix = {
    '.rst': 'restructuredtext',
    '.md': 'markdown',
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# The master toctree document.
master_doc = 'index'
# def run_apidoc(_):
# # automatically generate api docs
# modules = ['basicsr']
# for module in modules:
# cur_dir = os.path.abspath(os.path.dirname(__file__))
# output_path = os.path.join(cur_dir, 'api')
# cmd_path = 'sphinx-apidoc'
# if hasattr(sys, 'real_prefix'): # Check to see if we are in a virtualenv
# # If we are, assemble the path manually
# cmd_path = os.path.abspath(os.path.join(sys.prefix, 'bin', 'sphinx-apidoc'))
# subprocess.check_call([cmd_path, '-e', '-o', output_path, '../' + module, '--force'])
# def setup(app):
# app.connect('builder-inited', run_apidoc)
def auto_generate_api(app):
    subprocess.run(['python', './auto_generate_api.py'])


def setup(app):
    app.connect('builder-inited', auto_generate_api)
# History of New Features/Updates
:triangular_flag_on_post: **New Features/Updates**
- :white_check_mark: Oct 5, 2021. Add **ECBSR training and testing** codes: [ECBSR](https://github.com/xindongzhang/ECBSR).
> ACMMM21: Edge-oriented Convolution Block for Real-time Super Resolution on Mobile Devices <br>
> Xindong Zhang, Hui Zeng, Lei Zhang
- :white_check_mark: Sep 2, 2021. Add **SwinIR training and testing** codes: [SwinIR](https://github.com/JingyunLiang/SwinIR) by [Jingyun Liang](https://github.com/JingyunLiang). More details are in [HOWTOs.md](docs/HOWTOs.md#how-to-train-swinir-sr)
> ICCVW21: SwinIR: Image Restoration Using Swin Transformer <br>
> Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool and Radu Timofte
- :white_check_mark: Aug 5, 2021. Add NIQE, which produces the same results as MATLAB (both are 5.7296 for tests/data/baboon.png).
- :white_check_mark: July 31, 2021. Add **bi-directional video super-resolution** codes: [**BasicVSR** and IconVSR](https://arxiv.org/abs/2012.02181).
> CVPR21: BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond <br>
> Kelvin C.K. Chan, Xintao Wang, Ke Yu, Chao Dong, Chen Change Loy
- :white_check_mark: July 20, 2021. Add **dual-blind face restoration** codes: [HiFaceGAN](https://github.com/Lotayou/Face-Renovation) by [Lotayou](https://lotayou.github.io/).
- :white_check_mark: Nov 29, 2020. Add **ESRGAN** and **DFDNet** [colab demos](../colab).
- :white_check_mark: Sep 8, 2020. Add **blind face restoration** inference codes: [DFDNet](https://github.com/csxmli2016/DFDNet).
> ECCV20: Blind Face Restoration via Deep Multi-scale Component Dictionaries <br>
> Xiaoming Li, Chaofeng Chen, Shangchen Zhou, Xianhui Lin, Wangmeng Zuo and Lei Zhang
- :white_check_mark: Aug 27, 2020. Add **StyleGAN2 training and testing** codes: [StyleGAN2](https://github.com/rosinality/stylegan2-pytorch).
> CVPR20: Analyzing and Improving the Image Quality of StyleGAN <br>
> Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen and Timo Aila
- :white_check_mark: Aug 19, 2020. A **brand-new** BasicSR v1.0.0 online.
Welcome to BasicSR's documentation!
===================================
.. toctree::
   :maxdepth: 4
   :caption: API

   api/api_basicsr.rst
   api/api_scripts.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
# Introduction
## Codebase Designs and Conventions
Please see [DesignConvention.md](DesignConvention.md) for the designs and conventions of the BasicSR codebase.<br>
The figure below shows the overall framework. More descriptions for each component: <br>
![overall_structure](../assets/overall_structure.png)
## Detailed Descriptions
- [**Models**](Models.md)
- [**Training and testing commands**](TrainTest.md)
- [**Options/Configs**](Config.md)
- [**Logging**](Logging.md)
- 📈 [Training curves in wandb](https://app.wandb.ai/xintao/basicsr)
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd