## Training examples

Creating a training image set is [described in a different document](https://huggingface.co/docs/datasets/image_process#image-datasets).

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

```bash
pip install diffusers[training] accelerate datasets
```
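To quickly confirm the install worked, you can check that the three packages import and print their versions (a minimal sanity check, not required for training):

```python
# Sanity check: the training dependencies should import cleanly.
import accelerate
import datasets
import diffusers

print("diffusers :", diffusers.__version__)
print("accelerate:", accelerate.__version__)
print("datasets  :", datasets.__version__)
```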

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

### Unconditional Flowers  

The command to train a DDPM UNet model on the Oxford Flowers dataset:

```bash
accelerate launch train_unconditional.py \
  --dataset_name="huggan/flowers-102-categories" \
  --resolution=64 \
  --output_dir="ddpm-ema-flowers-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision=no \
  --push_to_hub
```
An example trained model: https://huggingface.co/anton-l/ddpm-ema-flowers-64

A full training run takes 2 hours on 4xV100 GPUs.

<img src="https://user-images.githubusercontent.com/26864830/180248660-a0b143d0-b89a-42c5-8656-2ebf6ece7e52.png" width="700" />
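Once training finishes, the pipeline saved to `--output_dir` (or the example checkpoint above) can be loaded back for sampling. A minimal sketch, assuming a recent `diffusers` release where calling the pipeline returns an output with an `.images` list of PIL images:

```python
from diffusers import DDPMPipeline

# Load the trained pipeline from the Hub (or pass the local --output_dir path instead).
pipeline = DDPMPipeline.from_pretrained("anton-l/ddpm-ema-flowers-64")

# Sample a small batch; DDPM runs 1000 denoising steps by default, so this is slow on CPU.
images = pipeline(batch_size=4).images

for i, image in enumerate(images):
    image.save(f"flowers_sample_{i}.png")
```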


### Unconditional Pokemon 

The command to train a DDPM UNet model on the Pokemon dataset:

```bash
accelerate launch train_unconditional.py \
  --dataset_name="huggan/pokemon" \
  --resolution=64 \
  --output_dir="ddpm-ema-pokemon-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision=no \
  --push_to_hub
```
An example trained model: https://huggingface.co/anton-l/ddpm-ema-pokemon-64

A full training run takes 2 hours on 4xV100 GPUs.

<img src="https://user-images.githubusercontent.com/26864830/180248200-928953b4-db38-48db-b0c6-8b740fe6786f.png" width="700" />


### Using your own data

To use your own dataset, there are two ways:
- you can either provide your own folder via `--train_data_dir`,
- or you can upload your dataset to the Hub (possibly as a private repo, if you prefer) and simply pass the `--dataset_name` argument.

Below, we explain both in more detail.

#### Provide the dataset as a folder

If you provide your own folders with images, the script expects the following directory structure:

```bash
data_dir/xxx.png
data_dir/xxy.png
data_dir/[...]/xxz.png
```

In other words, the script will take care of gathering all images inside the folder. You can then run the script like this:

```bash
accelerate launch train_unconditional.py \
    --train_data_dir <path-to-train-directory> \
    <other-arguments>
```

Internally, the script will use the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature which will automatically turn the folders into 🤗 Dataset objects.
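If your images currently live in a 🤗 Dataset rather than on disk, a small sketch to materialize them into the flat layout above — this assumes the dataset exposes a PIL `image` column, as the `huggan` datasets used earlier do:

```python
from pathlib import Path

from datasets import load_dataset

# Export a Hub dataset into a plain folder of PNGs that --train_data_dir can point at.
dataset = load_dataset("huggan/flowers-102-categories", split="train")

data_dir = Path("flowers_data_dir")
data_dir.mkdir(parents=True, exist_ok=True)

for i, example in enumerate(dataset):
    # "image" is assumed to be a PIL image column; adjust the name if your dataset differs.
    example["image"].save(data_dir / f"{i:05d}.png")
```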

#### Upload your data to the hub, as a (possibly private) repo

It's very easy (and convenient) to upload your image dataset to the hub using the [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) feature available in 🤗 Datasets. Simply do the following:

```python
from datasets import load_dataset

# example 1: local folder
dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")

# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
dataset = load_dataset("imagefolder", data_files="path_to_zip_file")

# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)
dataset = load_dataset("imagefolder", data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip")

# example 4: providing several splits
dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]})
```

`ImageFolder` will create an `image` column containing the PIL-encoded images.
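Before pushing, it can be worth peeking at what `ImageFolder` produced — a quick sketch, assuming the local-folder case from example 1, where the images end up in a `train` split:

```python
# Continuing from example 1: load_dataset("imagefolder", data_dir="path_to_your_folder")
print(dataset)                    # DatasetDict with a "train" split
print(dataset["train"].features)  # contains an Image feature called "image"

sample = dataset["train"][0]
print(type(sample["image"]))      # a decoded PIL image (subclass of PIL.Image.Image)
```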

Next, push it to the hub!

```python
# assuming you have run the huggingface-cli login command in a terminal
dataset.push_to_hub("name_of_your_dataset")

# if you want to push to a private repo, simply pass private=True:
dataset.push_to_hub("name_of_your_dataset", private=True)
```

and that's it! You can now train your model by simply setting the `--dataset_name` argument to the name of your dataset on the hub.

More on this can also be found in [this blog post](https://huggingface.co/blog/image-search-datasets).