Commit feea48cd authored by rprenger

Merging with main

parents 8694c7b0 0be40526
@@ -103,6 +103,11 @@ python tools/preprocess_data.py \
The output will be two files named, in this case, `my-bert_text_sentence.bin` and `my-bert_text_sentence.idx`. The `--data-path` specified in later BERT training is the full path and new filename, but without the file extension.
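For example (a sketch restating the naming rule above; the `/path/to/data` directory is a placeholder):
<pre>
# preprocessing produced:
#   /path/to/data/my-bert_text_sentence.bin
#   /path/to/data/my-bert_text_sentence.idx
# so later training points at the shared prefix, with no file extension:
--data-path /path/to/data/my-bert_text_sentence
</pre>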
For T5, use the same preprocessing as BERT, perhaps renaming the output prefix to:
<pre>
--output-prefix my-t5 \
</pre>
Some minor modifications are required for GPT data preprocessing, namely, the addition of a merge table, an end-of-document token, removal of sentence splitting, and a change to the tokenizer type:
<pre>
python tools/preprocess_data.py \
@@ -122,7 +127,7 @@ Further command line arguments are described in the source file [`preprocess_dat
## BERT Pretraining
The `examples/pretrain_bert.sh` script runs single GPU 345M parameter BERT pretraining. Debugging is the primary use for single GPU training, as the code base and command line arguments are optimized for highly distributed training. Most of the arguments are fairly self-explanatory. By default, the learning rate decays linearly over the training iterations starting at `--lr` to a minimum set by `--min-lr` over `--lr-decay-iters` iterations. The fraction of training iterations used for warmup is set by `--lr-warmup-fraction`. While this is single GPU training, the batch size specified by `--micro-batch-size` is the batch size of a single forward-backward pass, and the code will perform gradient accumulation steps until it reaches `--global-batch-size`, which is the batch size per iteration. The data is partitioned into a 949:50:1 ratio for training/validation/test sets (default is 969:30:1). This partitioning happens on the fly, but is consistent across runs with the same random seed (1234 by default, or specified manually with `--seed`). We use `--train-iters` as the number of training iterations requested. Alternatively, one can provide `--train-samples`, which is the total number of samples to train on. If this option is present, then instead of providing `--lr-decay-iters`, one will need to provide `--lr-decay-samples`.
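For example (hypothetical values, not the ones used in `examples/pretrain_bert.sh`), the two batch-size arguments relate as follows:
<pre>
--micro-batch-size 4 \
--global-batch-size 32 \
# on a single GPU: 32 / (1 x 4) = 8 gradient accumulation steps per iteration
</pre>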
The logging, checkpoint-saving, and evaluation intervals are specified. Checkpointing the activations facilitates the training of larger models and/or batches. Note that the `--data-path` now includes the additional `_text_sentence` suffix added in preprocessing, but does not include the file extensions.
@@ -151,7 +156,7 @@ OUTPUT_ARGS="--log-interval 10 \
--save-interval 500 \
--eval-interval 100 \
--eval-iters 10 \
--activations-checkpoint-method uniform"
python pretrain_bert.py \
$BERT_ARGS \
@@ -237,13 +242,14 @@ T5_ARGS="--num-layers 24 \
--micro-batch-size 16 \
--global-batch-size 2048 \
--vocab-file $VOCAB_FILE \
--vocab-extra-ids 100 \
--split 949,50,1 \
--fp16"
OUTPUT_ARGS=&#60;same as those in <a href="#bert-pretraining">BERT pretraining</a> above&#62;
python pretrain_t5.py \
$T5_ARGS \
$OUTPUT_ARGS \
--save $CHECKPOINT_PATH \
--load $CHECKPOINT_PATH \
@@ -294,6 +300,17 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS ./pretrain_<model>.py \
--DDP-impl torch
</pre>
The interleaved pipelining schedule (more details in Section 2.2.2 of [our paper](https://arxiv.org/pdf/2104.04473.pdf)) can be enabled using the `--num-layers-per-virtual-pipeline-stage` argument, which controls the number of transformer layers in a virtual stage (by default with the non-interleaved schedule, each GPU will execute a single virtual stage with `NUM_LAYERS / PIPELINE_MP_SIZE` transformer layers). The total number of layers in the transformer model should be divisible by this argument value. Additionally, the number of microbatches in the pipeline (computed as `GLOBAL_BATCH_SIZE / (DATA_PARALLEL_SIZE * MICRO_BATCH_SIZE)`) should be divisible by the `PIPELINE_MP_SIZE` when using this schedule (this condition is checked in an assertion in the code). The interleaved schedule is not supported for pipelines with 2 stages (`PIPELINE_MP_SIZE=2`).
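As a worked example of these constraints (hypothetical sizes, not taken from any script in this repository):
<pre>
--num-layers 24 \
--pipeline-model-parallel-size 4 \
--num-layers-per-virtual-pipeline-stage 2 \
--micro-batch-size 4 \
--global-batch-size 512 \
# 24 / 4 / 2 = 3 virtual stages of 2 layers on each GPU; assuming a data-parallel
# size of 4, 512 / (4 * 4) = 32 microbatches, which is divisible by PIPELINE_MP_SIZE = 4
</pre>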
## Activation Checkpointing and Recomputation
To reduce GPU memory usage when deploying a large model to a training system, we support activation checkpointing and recomputation. We use a Transformer layer as the unit of checkpointing because the activation size grows in the middle of a Transformer layer, so checkpointing only the input of each Transformer layer is storage-efficient. We support two activation checkpointing methods: `uniform` and `block`.
The `uniform` method uniformly divides the Transformer layers into groups of layers and stores the input activations of each group in memory. The baseline group size is 1; in this case, the input activation of each Transformer layer is checkpointed. When GPU memory is insufficient, increasing the number of layers per group reduces the memory usage and thus enables running a bigger model. For example, with 4 layers per group, the input activation of each group of 4 Transformer layers is checkpointed.
The `block` method checkpoints the input activations of a set number of individual Transformer layers per pipeline stage and leaves the remaining layers without any checkpointing. It can be used to skip checkpointing some Transformer layers when there is GPU memory to spare. Checkpointing fewer Transformer layers avoids unnecessary activation recomputation in the backward pass and thus improves training performance. For example, when 5 layers are specified for checkpointing out of 8 layers per pipeline stage, the input activations of only the first 5 Transformer layers are checkpointed, and no activation recomputation is needed for the remaining 3 layers in the backward pass.
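For illustration, the two methods would be selected as follows (a sketch; the layer-count flag shown with the `block` method is an assumption on our part and is not documented above):
<pre>
# uniform: checkpoint the input of every group of Transformer layers
--activations-checkpoint-method uniform \
# block: checkpoint only the first N layers of each pipeline stage
# (the flag below is assumed, not confirmed in this README)
--activations-checkpoint-method block \
--activations-checkpoint-num-layers 5 \
</pre>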
## GPT-3 Example
In `examples/pretrain_gpt3_175B.sh` we have provided an example of how to configure Megatron to run [GPT-3](https://arxiv.org/abs/2005.14165) with 175 billion parameters on 1024 GPUs. The script is designed for [slurm](https://slurm.schedmd.com/documentation.html) with the [pyxis](https://github.com/NVIDIA/pyxis) plugin but can be easily adapted to any other scheduler. It uses 8-way tensor parallelism and 16-way pipeline parallelism. With options `global-batch-size 1536` and `rampup-batch-size 16 16 5859375`, the training will start with global batch size 16 and linearly increase the global batch size to 1536 over 5,859,375 samples in increments of 16. The training dataset can be either a single dataset or multiple datasets combined with a set of weights.
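The ramp-up arithmetic implied by these two options is roughly the following (our own reading of the arguments, not a figure quoted from the script):
<pre>
--rampup-batch-size 16 16 5859375 \
--global-batch-size 1536 \
# (1536 - 16) / 16 = 95 increments spread over 5,859,375 samples,
# i.e. roughly 61,700 samples at each intermediate global batch size
</pre>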
@@ -337,7 +354,7 @@ python pretrain_ict.py \
--max-position-embeddings 256 \
--ict-head-size 128 \
--train-iters 100000 \
--activations-checkpoint-method uniform \
--bert-load /path/to/pretrained_bert \
--load checkpoints \
--save checkpoints \
@@ -367,7 +384,7 @@ python tools/create_doc_index.py \
--ict-head-size 128 \
--num-attention-heads 12 \
--batch-size 128 \
--activations-checkpoint-method uniform \
--seq-length 256 \
--max-position-embeddings 256 \
--ict-load /path/to/pretrained_ict \
@@ -474,7 +491,7 @@ python tasks/main.py \
--merge-file $MERGE_FILE \
--load $CHECKPOINT_PATH \
--micro-batch-size 8 \
--activations-checkpoint-method uniform \
--log-interval 10 \
--no-load-optim \
--no-load-rng
@@ -504,7 +521,7 @@ python tasks/main.py \
--merge-file $MERGE_FILE \
--load $CHECKPOINT_PATH \
--micro-batch-size 8 \
--activations-checkpoint-method uniform \
--log-interval 10 \
--no-load-optim \
--no-load-rng
@@ -534,7 +551,7 @@ COMMON_TASK_ARGS="--num-layers 24 \
COMMON_TASK_ARGS_EXT="--train-data $TRAIN_DATA \
--valid-data $VALID_DATA \
--pretrained-checkpoint $PRETRAINED_CHECKPOINT \
--activations-checkpoint-method uniform \
--save-interval 10000 \
--save $CHECKPOINT_PATH \
--log-interval 100 \
...
@@ -20,7 +20,7 @@ python tasks/main.py \
--num-attention-heads 12 \
--tensor-model-parallel-size 1 \
--micro-batch-size 128 \
--activations-checkpoint-method uniform \
--seq-length 512 \
--max-position-embeddings 512 \
--load ${CHECKPOINT_PATH} \
@@ -29,7 +29,6 @@ python tasks/main.py \
--retriever-seq-length 256 \
--vocab-file bert-vocab.txt \
--qa-data-test ${QA_FILE} \
--num-workers 2 \
--faiss-use-gpu \
--retriever-report-topk-accuracies 1 5 20 100 \
--fp16 \
...
@@ -29,7 +29,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
--hidden-size 1024 \
--num-attention-heads 16 \
--batch-size 8 \
--activations-checkpoint-method uniform \
--seq-length 1024 \
--max-position-embeddings 1024 \
--log-interval 10 \
...
@@ -29,7 +29,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
--hidden-size 1024 \
--num-attention-heads 16 \
--micro-batch-size 8 \
--activations-checkpoint-method uniform \
--lr 5.0e-5 \
--lr-decay-style linear \
--lr-warmup-fraction 0.065 \
...
@@ -29,7 +29,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
--hidden-size 1024 \
--num-attention-heads 16 \
--micro-batch-size 4 \
--activations-checkpoint-method uniform \
--lr 1.0e-5 \
--lr-decay-style linear \
--lr-warmup-fraction 0.06 \
...
@@ -36,7 +36,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
--bert-load ${BERT_LOAD_PATH} \
--save-interval 5000 \
--log-interval 10 \
--eval-interval 20000 \
--eval-iters 100 \
--indexer-log-interval 1000 \
--faiss-use-gpu \
...
@@ -23,6 +23,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS \
--num-attention-heads 16 \
--micro-batch-size 2 \
--global-batch-size 16 \
--seq-length 512 \
--max-position-embeddings 512 \
--train-iters 1000000 \
--save $CHECKPOINT_PATH \
...
@@ -33,7 +33,7 @@ python pretrain_gpt.py \
--weight-decay 1e-2 \
--clip-grad 1.0 \
--lr-warmup-fraction .01 \
--activations-checkpoint-method uniform \
--log-interval 100 \
--save-interval 10000 \
--eval-interval 1000 \
...
@@ -49,7 +49,7 @@ options=" \
--init-method-std 0.006 \
--tensorboard-dir <TENSORBOARD DIRECTORY> \
--fp16 \
--activations-checkpoint-method uniform "
run_cmd="python -u ${DIR}/pretrain_gpt.py $@ ${options}"
...
@@ -40,7 +40,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS \
--weight-decay 1e-2 \
--clip-grad 1.0 \
--lr-warmup-fraction .01 \
--activations-checkpoint-method uniform \
--log-interval 100 \
--save-interval 10000 \
--eval-interval 1000 \
...
@@ -42,7 +42,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS \
--weight-decay 1e-2 \
--clip-grad 1.0 \
--lr-warmup-fraction .01 \
--activations-checkpoint-method uniform \
--log-interval 100 \
--save-interval 10000 \
--eval-interval 1000 \
...
@@ -15,7 +15,7 @@ python pretrain_t5.py \
--encoder-seq-length 512 \
--decoder-seq-length 128 \
--micro-batch-size 16 \
--global-batch-size 16 \
--max-position-embeddings 512 \
--train-iters 1000000 \
--lr-decay-iters 1000000 \
@@ -35,4 +35,5 @@ python pretrain_t5.py \
--save-interval 10000 \
--eval-interval 1000 \
--eval-iters 10 \
--fp16 \
--vocab-extra-ids 100
@@ -24,7 +24,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS \
--encoder-seq-length 512 \
--decoder-seq-length 128 \
--micro-batch-size 16 \
--global-batch-size 128 \
--max-position-embeddings 512 \
--train-iters 1000000 \
--lr-decay-iters 1000000 \
@@ -44,4 +44,5 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS \
--save-interval 10000 \
--eval-interval 1000 \
--eval-iters 10 \
--fp16 \
--vocab-extra-ids 100
@@ -24,8 +24,7 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS \
--encoder-seq-length 512 \
--decoder-seq-length 128 \
--micro-batch-size 16 \
--global-batch-size 128 \
--seq-length 512 \
--max-position-embeddings 512 \
--train-iters 1000000 \
--lr-decay-iters 1000000 \
@@ -45,4 +44,5 @@ python -m torch.distributed.launch $DISTRIBUTED_ARGS \
--save-interval 10000 \
--eval-interval 1000 \
--eval-iters 10 \
--fp16 \
--vocab-extra-ids 100
#!/bin/bash
# SLURM options.
export SLURM_PARTITION=<slurm partition, used to feed -p option in slurm>
export SLURM_ACCOUNT=<slurm account, used to feed -A option in slurm>
# Source code.
export MEGATRON_CODE_DIR=<megatron source code directory>
# This variable is used to mount the relevant part of the filesystem
# inside the docker container. Note that the `MEGATRON_CODE_DIR` and the
# launch directory already get mounted; this variable should be used to
# mount the directories that contain the data and tokenizer files.
export DOCKER_MOUNT_DIR=<megatron dataset and bpe tokenizer vocab path>
# Data and tokenizer files.
MEGATRON_DATA=<path to megatron processed data>
BPE_VOCAB_FILE=<path to bpe vocab file>
BPE_MERGE_FILE=<path to bpe merges file>
# Megatron input parameters.
# `MEGATRON_EXTRA_PARAMS` can be used to provide any extra parameters
# that are not listed here.
export MEGATRON_PARAMS=" ${MEGATRON_EXTRA_PARAMS} \
--tensor-model-parallel-size ${TP} \
--pipeline-model-parallel-size ${PP} \
--micro-batch-size ${MBS} \
--global-batch-size ${GBS} \
--num-layers ${NLS} \
--hidden-size ${HS} \
--num-attention-heads ${NAH} \
--DDP-impl ${DDP} \
--data-path ${MEGATRON_DATA} \
--vocab-file ${BPE_VOCAB_FILE} \
--merge-file ${BPE_MERGE_FILE} \
--log-interval 5 \
--seq-length 2048 \
--max-position-embeddings 2048 \
--train-iters 500 \
--lr-decay-iters 320 \
--lr 0.0001 \
--min-lr 0.00001 \
--lr-decay-style cosine \
--lr-warmup-fraction 0.01 \
--split 969,30,1 \
--eval-iters 100 \
--eval-interval 1000 \
--clip-grad 1.0 \
--fp16 \
--loss-scale 8192 "
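# Example (a sketch, not part of the original file): the run_figure_*.sh
# scripts below set MEGATRON_EXTRA_PARAMS before sourcing this config, e.g.
#   MEGATRON_EXTRA_PARAMS="--activations-checkpoint-method uniform "
#   . `pwd`/CONFIG.sh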
# Reproducing Figures in SC21 Paper
This directory contains some of the scripts that were used to produce the
results in the [Megatron paper](https://arxiv.org/pdf/2104.04473.pdf) that is
to appear at [SuperComputing 2021](https://sc21.supercomputing.org/). These
scripts use [Slurm](https://slurm.schedmd.com/documentation.html) with the
[pyxis plugin](https://github.com/NVIDIA/pyxis), but can be modified for other
schedulers as well.
## Setup
All the cluster-dependent variables are in [`CONFIG.sh`](./CONFIG.sh). Please
update the unspecified values (in angle brackets `<...>`) before launching any
scripts.
## Scripts
Below is a list of scripts that can be used to reproduce various figures in our
[paper](https://arxiv.org/pdf/2104.04473.pdf); an example invocation is sketched after the list:
* [run_table_1.sh](./run_table_1.sh): Table 1 showing weak-scaling throughput
for GPT models ranging from 1 billion to 1 trillion parameters.
* [run_figure_11.sh](./run_figure_11.sh): Figure 11 showing the weak-scaling
performance of pipeline parallelism.
* [run_figure_12.sh](./run_figure_12.sh): Figure 12 showing the effect of
the interleaved schedule on a 175B GPT model.
* [run_figure_13.sh](./run_figure_13.sh): Figure 13 showing the effect of
different degrees of pipeline and tensor model parallelism on a model with
162.2 billion parameters.
* [run_figure_14.sh](./run_figure_14.sh): Figure 14 showing the effect of
different degrees of data and pipeline model parallelism on a model with
5.9 billion parameters.
* [run_figure_15.sh](./run_figure_15.sh): Figure 15 showing the effect of
different degrees of data and tensor model parallelism on a model with
5.9 billion parameters.
* [run_figure_16.sh](./run_figure_16.sh): Figure 16 showing the effect of
microbatch size.
* [run_figure_17.sh](./run_figure_17.sh): Figure 17 showing the effect of
activation recomputation.
* [run_figure_18.sh](./run_figure_18.sh): Figure 18 showing the effect of
the scatter-gather communication optimization.
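To run one of these experiments (a sketch: each script sources `CONFIG.sh` and `SBATCH.sh` from the current working directory, so launch it from the directory containing these files), edit the configuration variables at the top of the chosen script and then execute it, for example:
<pre>
# choose the case to run (e.g. PP and GBS) at the top of the script, then:
bash ./run_figure_11.sh
</pre>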
#!/bin/bash
sbatch -p ${SLURM_PARTITION} \
-A ${SLURM_ACCOUNT} \
--job-name=${JOB_NAME} \
--nodes=${NNODES} \
--export=MEGATRON_CODE_DIR,MEGATRON_PARAMS,DOCKER_MOUNT_DIR SRUN.sh
exit 0
#!/bin/bash
#SBATCH -t 0:30:00 --exclusive --mem=0 --overcommit --ntasks-per-node=8
THIS_DIR=`pwd`
DATETIME=`date +'date_%y-%m-%d_time_%H-%M-%S'`
mkdir -p ${THIS_DIR}/logs
CMD="python -u ${MEGATRON_CODE_DIR}/pretrain_gpt.py ${MEGATRON_PARAMS}"
srun -l \
--container-image "nvcr.io#nvidia/pytorch:20.12-py3" \
--container-mounts "${THIS_DIR}:${THIS_DIR},${MEGATRON_CODE_DIR}:${MEGATRON_CODE_DIR},${DOCKER_MOUNT_DIR}:${DOCKER_MOUNT_DIR}" \
--output=${THIS_DIR}/logs/%x_%j_$DATETIME.log sh -c "${CMD}"
#!/bin/bash
# ================================
# Choose the case to run.
# ================================
# Pipeline-parallel size options = [1, 2, 4, 8].
PP=1
# Batch size (global batch size) options = [8, 128].
GBS=8
# Set pipeline-parallel size options.
NLS=$((3*PP))
NNODES=${PP}
# Other params.
TP=8
MBS=1
HS=20480
NAH=128
DDP=local
MEGATRON_EXTRA_PARAMS="--activations-checkpoint-method uniform "
# Name of the job.
export JOB_NAME=results_figure_11_pipeline_parallel_size_${PP}_batch_size_${GBS}
# Import the configs.
. `pwd`/CONFIG.sh
# Submit the job.
. `pwd`/SBATCH.sh
exit 0
#!/bin/bash
# ================================
# Choose the case to run.
# ================================
# Interleaved schedule options = [YES, NO].
INTERLEAVED=YES
# Batch size (global batch size) options = [12, 24, 36, ..., 60].
GBS=12
# Set interleaved schedule options.
if [ ${INTERLEAVED} == "YES" ]; then
MEGATRON_EXTRA_PARAMS="--activations-checkpoint-method uniform --num-layers-per-virtual-pipeline-stage 2 "
elif [ ${INTERLEAVED} == "NO" ]; then
MEGATRON_EXTRA_PARAMS="--activations-checkpoint-method uniform "
else
echo "Invalid configuration"
exit 1
fi
# Other params.
TP=8
PP=12
MBS=1
NLS=96
HS=12288
NAH=96
DDP=local
NNODES=12
# Name of the job.
export JOB_NAME=results_figure_12_interleaved_${INTERLEAVED}_batch_size_${GBS}
# Import the configs.
. `pwd`/CONFIG.sh
# Submit the job.
. `pwd`/SBATCH.sh
exit 0