- Try the Lightning Colossal-AI strategy to optimize memory usage and accelerate training
## Prepare Common Dataset
**This tutorial folder aims to let users quickly try out the training scripts**. A major task in deep learning is data preparation. To save time on data preparation, we use `CIFAR10` for most tutorials and synthetic datasets when the required dataset is too large. To make the `CIFAR10` dataset shared across the different examples, download it in the tutorial root directory with the following command.
```bash
python download_cifar10.py
```
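If you are curious what the script does, it conceptually amounts to something like the sketch below (an assumption on our part; the real `download_cifar10.py` may use a different data directory).

```python
# Rough sketch (assumption) of what download_cifar10.py does:
# fetch CIFAR10 once into the tutorial root so every example can reuse it.
from torchvision.datasets import CIFAR10

CIFAR10(root='./data', train=True, download=True)
CIFAR10(root='./data', train=False, download=True)
```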
## Discussion
Discussion about the [Colossal-AI](https://github.com/hpcaitech/ColossalAI) project is always welcome!
If you think there is a need to discuss anything, you may jump to our [Slack](https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-z7b26eeb-CBp7jouvu~r0~lcFzX832w).
If you encounter any problem while running these tutorials, you may want to raise an [issue](https://github.com/hpcaitech/ColossalAI/issues/new/choose) in this repository.
## 🛠️ Setup environment
We recommend using `conda` to create a virtual environment with **Python 3.8**, e.g. `conda create -n colossal python=3.8`. The installation commands below are for CUDA 11.3; if you have a different CUDA version, please download PyTorch and Colossal-AI accordingly.
```bash
# install torch for CUDA 11.3
# visit https://pytorch.org/get-started/locally/ to download other versions
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

# install Colossal-AI
pip install colossalai
```
## 🔥 Multi-dimensional Hybrid Parallel with Vision Transformer
1. Go to the **hybrid_parallel** folder in the **tutorial** directory.
2. Install our model zoo.
```bash
pip install titans
```
3. Run with synthetic data, which has a similar shape to CIFAR10, by using the `-s` flag.
```bash
colossalai run --nproc_per_node 4 train.py --config config.py -s
```
4. Modify the config file to try different types of tensor parallelism. For example, change the tensor parallel size to 4 and the mode to 2d, then run on 8 GPUs (see the sketch below).
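As a rough sketch of what that change might look like (this assumes the legacy Colossal-AI `parallel` config convention; the exact keys in the tutorial's `config.py` may differ):

```python
# Hypothetical excerpt of config.py: 2D tensor parallelism across 4 GPUs,
# combined with a pipeline of size 2 so that 4 x 2 = 8 GPUs are used in total.
# Check the tutorial's own config.py for the exact keys it defines.
TENSOR_PARALLEL_SIZE = 4
TENSOR_PARALLEL_MODE = '2d'

parallel = dict(
    pipeline=2,
    tensor=dict(size=TENSOR_PARALLEL_SIZE, mode=TENSOR_PARALLEL_MODE),
)
```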
## ☀️ Sequence Parallel with BERT
1. Go to the **sequence_parallel** folder in the **tutorial** directory.
2. Run with the following command:
```bash
export PYTHONPATH=$PWD
colossalai run --nproc_per_node 4 train.py -s
```
3. The default config uses sequence parallel size 2 and pipeline size 1. Change the pipeline size to 2 and try again (see the sketch below).
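A minimal sketch of that change, under the same assumption about the config convention (verify against the example's actual config file):

```python
# Hypothetical excerpt: keep sequence parallel size = 2 and raise the
# pipeline size from 1 to 2.
parallel = dict(
    pipeline=2,
    tensor=dict(size=2, mode='sequence'),
)
```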
## 📕 Large batch optimization with LARS and LAMB
1. Go to the **large_batch_optimizer** folder in the **tutorial** directory.
2. Run with synthetic data:
```bash
colossalai run --nproc_per_node 4 train.py --config config.py -s
```
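For context, LARS and LAMB are layer-wise adaptive optimizers designed for very large batch sizes. The sketch below only illustrates how they can be constructed from `colossalai.nn.optimizer`; the hyperparameter values are placeholders, and the tutorial's `config.py` may wire the optimizer up differently.

```python
# Illustrative only: constructing the LARS / LAMB optimizers shipped with Colossal-AI.
# The hyperparameter values are placeholders, not the tutorial's actual settings.
import torch
from colossalai.nn.optimizer import Lamb, Lars

model = torch.nn.Linear(512, 512)

# LAMB is typically used for large-batch transformer training (e.g. BERT)
optimizer = Lamb(model.parameters(), lr=1.8e-2, weight_decay=0.1)

# LARS is typically used for large-batch CNN training (e.g. ResNet)
# optimizer = Lars(model.parameters(), lr=1.8e-2, weight_decay=0.1)
```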
## 😀 Auto-Parallel Tutorial
1. Go to the **auto_parallel** folder in the **tutorial** directory.
2. Install `pulp` and `coin-or-cbc` for the solver.
```bash
pip install pulp
conda install -c conda-forge coin-or-cbc
```
3. Run the auto parallel ResNet example with 4 GPUs and a synthetic dataset.
```bash
colossalai run --nproc_per_node 4 auto_parallel_with_resnet.py -s
```
You should expect to see a log showing the edge cost on the computation graph as well as the sharding strategy for each operation. For example, `layer1_0_conv1 S01R = S01R X RR` means that the first dimension (batch) of the input and output is sharded while the weight is not sharded (S means sharded, R means replicated), which is simply equivalent to data parallel training.
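To make the notation concrete, the toy snippet below (not part of the tutorial code) mimics that layout: each device holds a slice of the batch while the weight stays fully replicated, which is exactly the data parallel arrangement.

```python
# Toy illustration of the "S01R = S01R x RR" layout described above:
# the activation's first (batch) dimension is sharded across the devices,
# while the weight is replicated ("RR") on every device.
import torch

num_devices = 4                      # e.g. a 2 x 2 device mesh (axes 0 and 1)
activation = torch.randn(32, 64)     # [batch, features]
weight = torch.randn(64, 64)         # replicated on every device

# Shard the batch dimension: each device holds an [8, 64] slice ("S01R")
local_batches = torch.chunk(activation, num_devices, dim=0)

# Each device computes its local output against the full weight -> data parallelism
local_out = local_batches[0] @ weight
print(local_out.shape)               # torch.Size([8, 64])
```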
Given different memory budgets, the auto checkpoint solver automatically injects activation checkpointing into the model and reports the time taken per iteration. You can run this benchmark for GPT as well, but it may take much longer since the model is larger:
```bash
python auto_ckpt_solver_test.py --model gpt2
```
4. Run a simple benchmark to find the optimal batch size for a checkpointed model.
## Prepare Dataset
We use the CIFAR10 dataset in this example. You should either invoke `download_cifar10.py` in the tutorial root directory or directly run `auto_parallel_with_resnet.py`.
## OPT
Meta recently released [Open Pretrained Transformer (OPT)](https://github.com/facebookresearch/metaseq), a 175-Billion parameter AI language model, which stimulates AI programmers to perform various downstream tasks and application deployments.
## Our Modifications
We adapt the OPT training code to ColossalAI by leveraging Gemini and ZeRO DDP.
## 🚀 Quick Start for Tutorial
1. Install the dependencies
```bash
pip install datasets accelerate
```
2. Run finetuning with a synthetic dataset on one GPU
```bash
bash ./run_clm_synthetic.sh
```
3. Run finetuning with 4 GPUs
```bash
bash ./run_clm_synthetic.sh 16 0 125m 4
```
## Quick Start for Practical Use
You can launch training by using the following bash script