# Sequence Parallelism

## Table of contents

- [Sequence Parallelism](#sequence-parallelism)
  - [Table of contents](#table-of-contents)
  - [📚 Overview](#-overview)
  - [🚀 Quick Start](#-quick-start)
  - [🏎 How to Train with Sequence Parallelism](#-how-to-train-with-sequence-parallelism)
    - [Step 1. Configure your parameters](#step-1-configure-your-parameters)
    - [Step 2. Invoke parallel training](#step-2-invoke-parallel-training)

## 📚 Overview

In this tutorial, we implement BERT with sequence parallelism. Sequence parallelism splits the input tensor and intermediate
activations along the sequence dimension. This method achieves better memory efficiency and allows us to train with a larger batch size and longer sequence length.

Paper: [Sequence Parallelism: Long Sequence Training from System Perspective](https://arxiv.org/abs/2105.13120)
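
To make the idea concrete, below is a toy sketch (not the library's implementation) of what splitting along the sequence dimension looks like: with a hypothetical sequence-parallel group of size 4, each rank keeps only a quarter of the sequence, so per-rank activation memory shrinks accordingly.

```python
import torch

# Toy activation of shape (batch, seq_len, hidden).
x = torch.randn(8, 512, 1024)

# Split along the sequence dimension (dim=1) across a hypothetical
# sequence-parallel group of 4 ranks; rank i would keep chunks[i].
world_size = 4
chunks = torch.chunk(x, world_size, dim=1)  # each chunk: (8, 128, 1024)
```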

## 🚀 Quick Start

1. Install PyTorch.

2. Install the dependencies.

```bash
pip install -r requirements.txt
```

3. Run with the following command:

```bash
export PYTHONPATH=$PWD

# run with synthetic dataset
colossalai run --nproc_per_node 4 train.py
```

> The default config uses sequence parallel size = 2 and pipeline size = 1. Try changing the pipeline size to 2 and running again (see the config sketch in Step 1 below).


## 🏎 How to Train with Sequence Parallelism

We provide `train.py` for you to execute training. Before invoking the script, there are a few
steps to complete.

### Step 1. Configure your parameters

The provided `config.py` defines a set of parameters, including the training scheme, the model, etc.
You can also modify the ColossalAI settings. For example, if you wish to parallelize over the
sequence dimension on 8 GPUs, change `size=4` to `size=8`. If you wish to use pipeline parallelism, set `pipeline=<num_of_pipeline_stages>`.
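
As a rough sketch, the parallel settings in `config.py` might look like the following. The exact keys depend on your ColossalAI version, so treat the `parallel` dict with `mode='sequence'` as an assumption based on the legacy config convention:

```python
# config.py (sketch, assuming the legacy ColossalAI config format)
parallel = dict(
    pipeline=1,                            # number of pipeline stages
    tensor=dict(size=8, mode='sequence'),  # sequence-parallel group size
)
```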

### Step 2. Invoke parallel training

Lastly, you can start training with sequence parallelism. How you invoke `train.py` depends on your
machine setup.

- If you are using a single machine with multiple GPUs, the `colossalai run` utility can easily
  start your script. A sample command is shown below:

  ```bash
  colossalai run --nproc_per_node <num_gpus_on_this_machine> --master_addr localhost --master_port 29500 train.py
  ```

- If you are using multiple machines with multiple GPUs, we suggest that you refer to
  `colossalai.launch_from_slurm` or `colossalai.launch_from_openmpi`, as it is easier to use SLURM
  and OpenMPI to start multiple processes over multiple nodes. If you have your own launcher, you
  can fall back to the default `colossalai.launch` function.
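
For reference, here is a minimal sketch of how a training script might initialize ColossalAI under SLURM. The `--host` and `--port` argument names are illustrative (not part of this example's `train.py`); `colossalai.launch_from_slurm` reads the rank and world size from SLURM environment variables:

```python
# Sketch: SLURM-based initialization. The --host/--port argument names
# are illustrative; rank and world size come from SLURM env variables.
import argparse

import colossalai

parser = argparse.ArgumentParser()
parser.add_argument('--host', type=str, help='master node address')
parser.add_argument('--port', type=int, default=29500, help='master node port')
args = parser.parse_args()

colossalai.launch_from_slurm(config='./config.py', host=args.host, port=args.port)
```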