# Overview

Here is an example of training ViT-B/16 on ImageNet-1K with a batch size of 32K.
We use 8x NVIDIA A100 GPUs in this example.

# How to run
Using [Slurm](https://slurm.schedmd.com/documentation.html):
```shell
srun python train_dali.py --local_rank=$SLURM_PROCID --world_size=$SLURM_NPROCS --host=$HOST --port=29500 --config=vit-b16.py
```

# Results

![Loss Curve](./loss.jpeg)
![Accuracy](./acc.jpeg)

# Details
`vit-b16.py`

It is a [config file](https://colossalai.org/config.html), which is used by ColossalAI to define all kinds of training arguments, such as the model, dataset, and training method (optimizer, lr_scheduler, epoch, etc.). You can access config content by `gpc.config`.
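
As a rough illustration, such a config file is plain Python whose top-level variables become training arguments. The sketch below only echoes the settings described later in this README (batch size, epochs, gradient accumulation); the names are illustrative, and `vit-b16.py` in this repo is the authoritative config:

```python
# Illustrative sketch of a config file; names and values mirror this
# README, but vit-b16.py in this repo is the authoritative source.
BATCH_SIZE = 256            # per-GPU batch size before gradient accumulation
NUM_EPOCHS = 300
GRADIENT_ACCUMULATION = 16  # 256 * 16 = 4K effective batch per GPU

# Effective global batch: 4K per GPU across 8 GPUs = 32K
GLOBAL_BATCH = BATCH_SIZE * GRADIENT_ACCUMULATION * 8
```

In training code, such values are then read through `gpc.config`, e.g. `gpc.config.BATCH_SIZE`.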

In this example, we train the ViT-Base patch 16 model for 300 epochs on ImageNet-1K. The global batch size is 32K via data parallelism (4K on each GPU, from 16x gradient accumulation with a per-GPU batch size of 256). Since this batch size is much larger than common practice and leads to convergence difficulties, we use the large-batch optimizer [LAMB](https://arxiv.org/abs/1904.00962), which lets us scale the batch size to 32K with only a small accuracy loss. The learning rate and weight decay of the optimizer are set to 1.8e-2 and 0.1, respectively. We use a linear warmup learning rate scheduler with 150 warmup epochs.
We enable FP16 mixed precision to accelerate training and apply gradient clipping to aid convergence.
For simplicity and speed, we didn't apply `RandAug` and just used [Mixup](https://arxiv.org/abs/1710.09412) for data augmentation.
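
The warmup schedule above can be sketched as a simple linear ramp (a minimal sketch: `warmup_lr` is a hypothetical helper, and the scheduler configured in `vit-b16.py` is what actually runs):

```python
def warmup_lr(epoch, base_lr=1.8e-2, warmup_epochs=150):
    """Linearly ramp the learning rate up to base_lr over warmup_epochs."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    return base_lr  # post-warmup decay omitted for brevity
```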

If you have enough computing resources, you can conveniently scale this example to data parallelism at a very large scale without gradient accumulation, and finish training within an hour.
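
For concreteness, the batch-size arithmetic behind that scaling (a hypothetical setup assuming the same 256 per-GPU batch and no accumulation):

```python
# How many GPUs are needed to reach the 32K global batch with pure
# data parallelism (no gradient accumulation)? Illustrative arithmetic.
GLOBAL_BATCH = 32768   # 32K, as in this example
PER_GPU_BATCH = 256    # same per-GPU batch size, accumulation disabled
gpus_needed = GLOBAL_BATCH // PER_GPU_BATCH  # 128 GPUs
```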


`imagenet_dali_dataloader.py`
To accelerate training, we use [DALI](https://github.com/NVIDIA/DALI) as the data loader. Note that it requires the dataset in TFRecord format; this avoids reading raw image files, which would reduce file-system efficiency.

`train_dali.py`
We build the DALI data loader and the training process with Colossal-AI here.

`mixup.py`
Since we use Mixup, we define the Mixup loss in this file.
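
The Mixup loss is the standard convex combination from the paper; a minimal sketch (the actual implementation lives in `mixup.py`, and `criterion` here stands in for any per-sample loss function):

```python
def mixup_loss(criterion, pred, y_a, y_b, lam):
    """Mixup loss: lam-weighted combination of the loss against both labels.

    pred: model output; y_a, y_b: the two mixed targets; lam: mixing weight.
    """
    return lam * criterion(pred, y_a) + (1.0 - lam) * criterion(pred, y_b)
```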

`hooks.py`
We also define useful hooks that log information to help with debugging.