The training of our model uses the [COCO](https://cocodataset.org/), [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/), [NYUDepthV2](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html), [Synthetic Rain Datasets](https://paperswithcode.com/dataset/synthetic-rain-datasets), [SIDD](https://www.eecs.yorku.ca/~kamel/sidd/), and [LoL](https://daooshee.github.io/BMVC2018website/) datasets.
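All of them live under `$Painter_ROOT/datasets/`. The sketch below only summarizes the layout implied by the paths in this document; it is an orientation aid, not an exhaustive listing (the SIDD location is not shown, and each folder contains more entries than listed):

```bash
$Painter_ROOT/datasets/
    coco/             # COCO images and panoptic annotations
    coco_pose/        # person detection results and painted pose ground truth
    ade20k/           # ADE20K, including annotations_with_color/
    nyu_depth_v2/     # NYUDepthV2, including sync.zip
    derain/           # Synthetic Rain Datasets (MPRNet layout)
    light_enhance/    # LoL dataset
```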
First, download the dataset from [here](https://drive.google.com/file/d/1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP/view?usp=sharing). Please make sure to place the downloaded file at `$Painter_ROOT/datasets/nyu_depth_v2/sync.zip`.
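If you prefer the command line, one possible way to fetch and place the file is with the `gdown` utility (an assumption; any download method that ends with the zip at the path above works):

```bash
pip install gdown  # assumption: gdown is used to fetch the Google Drive file
mkdir -p $Painter_ROOT/datasets/nyu_depth_v2
gdown --fuzzy "https://drive.google.com/file/d/1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP/view?usp=sharing" \
    -O $Painter_ROOT/datasets/nyu_depth_v2/sync.zip
```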
First, download the dataset from the [official website](https://groups.csail.mit.edu/vision/datasets/ADE20K/) and put it in `$Painter_ROOT/datasets/`. Afterward, unzip the file and rename the extracted folder to `ade20k`.
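Assuming the standard ADEChallengeData2016 release (an assumption; the exact contents may differ), the renamed `ade20k` folder should look roughly like this:

```bash
ade20k/
    images/
        training/
        validation/
    annotations/
        training/
        validation/
    ...
```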
Second, prepare annotations for training using the following command. The generated annotations will be saved at `$Painter_ROOT/datasets/ade20k/annotations_with_color/`.
```bash
python data/mmdet_custom/gen_json_coco_panoptic_inst.py --split val
```
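The command above only covers the `val` split; presumably the training split is generated the same way (the `--split train` value below is an assumption, not confirmed here):

```bash
# assumption: the same script accepts --split train for the training annotations
python data/mmdet_custom/gen_json_coco_panoptic_inst.py --split train
```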
Lastly, to enable evaluation with detectron2, link `$Painter_ROOT/datasets/coco/annotations/panoptic_val2017` to `$Painter_ROOT/datasets/coco/panoptic_val2017`:
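One possible way to create this link, assuming a plain `ln -s` symlink is what is expected (adjust if the repository prescribes a different command):

```bash
# make the panoptic annotations visible where detectron2 expects them
ln -s $Painter_ROOT/datasets/coco/annotations/panoptic_val2017 \
      $Painter_ROOT/datasets/coco/panoptic_val2017
```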
First, download the person detection results of COCO val2017 from [google drive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk), and put them in `$Painter_ROOT/datasets/coco_pose/`.
Second, pre-process the dataset using the following command. By default, the painted ground truth will be saved to `$Painter_ROOT/datasets/coco_pose/`.
```bash
cd $Painter_ROOT/data/mmpose_custom
# generate training data with common data augmentation for pose estimation; note that we generate 20 copies for training
```
We follow [MPRNet](https://github.com/swz30/MPRNet) to prepare the data for deraining.
Download the dataset following the instructions in [MPRNet](https://github.com/swz30/MPRNet/blob/main/Deraining/Datasets/README.md), and put it in `$Painter_ROOT/datasets/derain/`. The folder should look like:
```bash
derain/
    train/
        input/
            ...
    ...
    Test2800/
```
Next, prepare json files for training and evaluation. The generated json files will be saved at `datasets/derain/`.
For SIDD training data, you can download the SIDD-Medium dataset from the [official website](https://www.eecs.yorku.ca/~kamel/sidd/dataset.php). For evaluation on SIDD, you can download the data from [here](https://mailustceducn-my.sharepoint.com/:f:/g/personal/zhendongwang_mail_ustc_edu_cn/Ev832uKaw2JJhwROKqiXGfMBttyFko_zrDVzfSbFFDoi4Q?e=S3p5hQ).
First, download the images of the LOL dataset from [google drive](https://drive.google.com/file/d/157bjO1_cFuSd0HWDUuAmcHRJDVyWpOxB/view) and put them in `$Painter_ROOT/datasets/light_enhance/`. The folder should look like:
```bash
light_enhance/
    our485/
        low/
            ...
        high/
            ...
```
Next, prepare json files for training and evaluation. The generated json files will be saved at `$Painter_ROOT/datasets/light_enhance/`.