Commit d444a97a authored by yangzhong

First upload
# Changelog
## NVIDIA Megatron Core 0.9.0
- Uneven pipeline parallelism
- Enable pipeline parallelism where first and last ranks have fewer transformer layers than the intermediate ranks
- Per layer CUDAGraph support for GPT training with Transformer Engine modules
- Enable different TP sizes for the vision encoder
- Enable pipeline parallelism for T5 & Llava models
- Support multi-tile multi-image input in Llava models
- MoE
- FP8 support
- Runtime upcycling support
- Dispatcher implementation optimizations
- Shared expert support with overlapping optimizations
- Qwen Model support
- Known Issues
- When using sequence parallelism, dropout in the transformer block forward pass does not use the appropriate RNG context.
## NVIDIA Megatron Core 0.8.0
- Multimodal
- Added initial support for training vision language models using the LLaVA architecture
- Added initial support for inference with multimodal inputs
- End-to-end multimodal example from data collection to training to evaluation is provided in examples/multimodal
- MoE
- Context Parallel support
- Distributed checkpoint support for grouped GEMM
- Mamba
## NVIDIA Megatron Core 0.7.0
- MoE
- Token drop support
- Several efficiency optimizations
- Improved model parallelism
- Memory optimizations
- Distributed checkpointing
- Enabled for Retro
- Asynchronous checkpoint saving
- Several minor bug fixes, speed improvements, and memory optimizations
## NVIDIA Megatron Core 0.6.0
- MoE (Mixture of Experts)
- Performance optimization
- Communication optimization for multi-GPU and single-GPU
- 23% improvement (323 TFLOPS/GPU) over MCore 0.5.0 on Mixtral with Hopper BF16
- GroupedMLP enhancement for Hopper
- DP overlapping: supports overlapping computation with gradient reduction and parameter gathering.
- All-to-All based Token Dispatcher
- Layer-wise logging for the load-balancing loss.
- Improved expert parallelism support, including the distributed optimizer.
- Distributed optimizer
- RETRO
- Data processing
- BERT
- Distributed checkpointing
- Dist checkpointing
- PyTorch native distributed backend
- Improved saving/loading speed
- TensorRT-LLM Export
- Integration with TensorRT Model Optimizer Post-training quantization (PTQ)
- Text generation driver to perform PTQ in Megatron-LM
- Llama2 and Nemotron3-8b examples to use TensorRT-LLM unified build API to build engine after training.
- Several minor enhancements, bug fixes, and documentation updates
## NVIDIA Megatron Core 0.5.0
### Key Features and Enhancements
Megatron core documentation is now [live!](https://docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start)
### Model Features
- MoE (Mixture of Experts)
- Support for Z-loss, Load balancing and Sinkhorn
- Layer and communications refactor
- Richer parallelism mappings: EP can be combined with other model-parallel techniques for larger MoE variants, e.g., EP + TP + DP + SP + PP
- Token dropless architecture with Top-K routing
- Performance optimization with GroupedGEMM when the number of local experts is > 1
- Distributed checkpointing
- Interleaved rotary embedding
### Datasets
- Masked WordPiece datasets for BERT and T5
- Raw and mock datasets
### Parallelism
### Performance
- Activation offloading to CPU
- Rope and Swiglu fusion
- Sliding window attention (via Transformer Engine)
### General Improvements
- Timers
## NVIDIA Megatron Core 0.4.0
### Key Features and Enhancements
#### Models
- BERT
- RETRO
- T5
#### Parallelism
- Mixture of Experts support for GPT
- Model parallel efficient Distributed Data Parallel (DDP)
- Context Parallel (2D Tensor Parallel) support
#### Datasets
- GPT Dataset
- Blended Dataset
[Core-ADLR] @mcore-reviewers/core-adlr
megatron/core/
[Core-NeMo] @mcore-reviewers/core-nemo
megatron/core/
^[Core-MLPerf] @mcore-reviewers/mlperf
megatron/core/
[MoE-ADLR] @mcore-reviewers/moe-adlr
megatron/core/transformer/moe/
[MoE-Moe] @mcore-reviewers/moe-moe
megatron/core/transformer/moe/
[Datasets] @mcore-reviewers/datasets
megatron/core/datasets/
[BERT] @mcore-reviewers/bert
megatron/core/models/bert/
[GPT] @mcore-reviewers/gpt
megatron/core/models/gpt/
[Retro] @mcore-reviewers/retro
megatron/core/models/retro/
[Distributed Checkpointing] @mcore-reviewers/dist-checkpointing
megatron/core/dist_checkpointing/
[Distributed Optimizer] @mcore-reviewers/dist-optimizer
megatron/core/optimizer/distrib_optimizer/
[Inference] @mcore-reviewers/inference
megatron/core/inference/
^[Quantization and Inference (QAT)] @mcore-reviewers/quantization-and-inference
megatron/core/inference/
; [Context Parallelism] @mcore-reviewers/context-parallelism
;
[CI] @mcore-reviewers/ci
.gitlab/
.github/
.gitlab-ci.yml
Dockerfile.ci.lts
Dockerfile.ci.dev
tests/
# Contributing to Megatron-LM
This document outlines the processes and policies for issues and pull requests by non-NVIDIA contributors to the Megatron-LM GitHub repository.
Everyone is welcome to contribute to the project, but development of Megatron-LM continues internally at NVIDIA. When contributing, it is important to ensure that changes are in line with the project direction. Small changes to fix bugs are welcomed and appreciated. If you are proposing large architectural changes or changes for stylistic reasons, open an issue first so we can discuss them.
PRs will first be pulled into NVIDIA's internal Megatron-LM repo and then pushed back out to the open GitHub repo with proper credit given to the committers.
## Issue policy
Please do file any bugs you find, keeping the following in mind:
- If filing a bug, i.e. you have found something that doesn't work as expected, use the BUG template.
- If you've found a regression in speed or accuracy use the REGRESSION template.
- If you are requesting a new feature or modification of an existing feature use the ENHANCEMENT template.
- If opening an issue to ask a question no template is needed but please make your question as clear and concise as possible.
- One issue per bug. Putting multiple things in the same issue makes both discussion and completion unnecessarily complicated.
- Your bug is most likely to get attention from the development team quickly if we can easily reproduce it.
- Use proper spelling, grammar, and punctuation.
- Write in an authoritative and technical tone.
## Code submission policy
Here are some dos & don'ts to try and stick to:
### Do:
- Format new code in a style that is consistent with the file being changed. Megatron-LM doesn't (yet) have a style guide or enforced formatting.
- Split your changes into separate, atomic commits, i.e., a commit per feature or fix.
- Make sure your commits are rebased on the master branch.
- Write the commit message subject line in the imperative mood ("Change the default argument for X", not "Changed the default argument for X").
- Write your commit messages in proper English, with care and punctuation.
- Check the spelling of your code, comments and commit messages.
### Don't:
- Submit code that's incompatible with the project licence.
- Touch anything outside the stated scope of the PR. This includes formatting changes to code not relevant to the PR.
- Iterate excessively on your design across multiple commits.
- Include commented-out code.
- Attempt large architectural changes without first opening an issue to discuss.
## Issue and Pull Request Q&A (Updated Jul 2023)
### I've submitted an issue and PR. When can I expect to get some feedback?
Megatron-LM is developed and maintained by a small team of researchers. We will endeavour to read and acknowledge all new issues and PRs within a week. A few rules of thumb:
- Reproducible bugs/regressions and bug/regression fixes are likely to get the attention of maintainers the quickest.
- Issues requesting an enhancement may only receive acknowledgement that they've been read and may be closed with a "wontfix" label if they're not in line with the project direction. If they are acknowledged and remain open, you can assume the maintainers agree they're a desirable feature.
- Support requests, i.e. requests for help running the code, have the lowest priority and will be responded to as maintainer time permits.
### If my issue or PR isn't getting attention, how long should I wait before pinging one of the project maintainers?
One week if there is no acknowledgement of the initial request.
### Who are the project maintainers I should ping?
The corresponding maintainers at this time are @jaredcasper and @jon-barker.
### Is there a policy for issues and PRs that haven't been touched in X days? Should they be closed?
Yes, starting in July 2023 we have a bot that will mark untouched PRs as "stale" after 60 days.
We have a long backlog of issues and PRs dating back 3.5 years. We are trying to triage these now by working backwards. Older issues we believe may still be relevant may receive a request to re-test them with the latest code. If there's no response, they may be closed. Again, if you believe they should be re-opened, just respond with a comment to that effect.
Thank you!
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as build_causal_conv1d
WORKDIR /opt
RUN CAUSAL_CONV1D_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/Dao-AILab/causal-conv1d.git@v1.2.2.post1
FROM $FROM_IMAGE_NAME as build_grouped_gemm
WORKDIR /opt
RUN pip3 wheel -v git+https://github.com/fanshiqing/grouped_gemm@v1.1.2
FROM $FROM_IMAGE_NAME as build_mamba_ssm
WORKDIR /opt
RUN MAMBA_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/state-spaces/mamba.git@v2.2.0
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends gettext python3-venv && \
apt-get clean && \
python -m venv /opt/jet && \
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq && \
chmod a+x /usr/local/bin/yq
COPY --from=build_causal_conv1d /opt/causal_conv1d-*.whl ./
COPY --from=build_grouped_gemm /opt/grouped_gemm-*.whl ./
COPY --from=build_mamba_ssm /opt/mamba_ssm-*.whl ./
RUN \
--mount=type=bind,source=requirements,target=requirements \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=setup.py,target=setup.py \
--mount=type=bind,source=megatron/core/package_info.py,target=megatron/core/package_info.py \
--mount=type=bind,source=megatron/core/README.md,target=megatron/core/README.md \
--mount=type=bind,source=megatron/core/__init__.py,target=megatron/core/__init__.py <<"EOF" bash -ex
pip install causal_conv1d-*.whl mamba_ssm-*.whl grouped_gemm-*.whl
PY_ENV=pytorch:24.07 pip install .
EOF
# Since megatron does not have any dependencies (and isn't a dependency of any other package), we can install it separately to make everything a bit quicker
ARG MCORE_REPO
ARG MCORE_REF
ARG MCORE_BACKWARDS_REF
RUN <<"EOF" bash -exu
# Checkout latest
cd /opt
rm -rf /opt/megatron-lm; mkdir megatron-lm; cd megatron-lm
git init
git remote add origin ${MCORE_REPO}
git fetch origin '+refs/merge-requests/*:refs/remotes/merge-requests/*'
git fetch origin $MCORE_REF
git checkout $MCORE_REF
# Checkout backwards-ref
cd /opt
rm -rf /opt/megatron-lm-legacy; mkdir megatron-lm-legacy; cd megatron-lm-legacy
git init
git remote add origin ${MCORE_REPO}
git fetch origin $MCORE_BACKWARDS_REF
git checkout $MCORE_BACKWARDS_REF
rm -rf megatron; cp -a /opt/megatron-lm/megatron ./
EOF
RUN PY_ENV=pytorch:24.07 pip install -e /opt/megatron-lm
ENV PYTHONPATH="/opt/megatron-lm:$PYTHONPATH"
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install jet-client jet-api --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
###
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as build_causal_conv1d
WORKDIR /opt
RUN CAUSAL_CONV1D_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/Dao-AILab/causal-conv1d.git@v1.2.2.post1
FROM $FROM_IMAGE_NAME as build_grouped_gemm
WORKDIR /opt
RUN pip3 wheel -v git+https://github.com/fanshiqing/grouped_gemm@v1.1.2
FROM $FROM_IMAGE_NAME as build_mamba_ssm
WORKDIR /opt
RUN MAMBA_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/state-spaces/mamba.git@v2.0.3
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends gettext python3-venv && \
apt-get clean && \
python -m venv /opt/jet && \
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq && \
chmod a+x /usr/local/bin/yq
COPY --from=build_causal_conv1d /opt/causal_conv1d-*.whl ./
COPY --from=build_grouped_gemm /opt/grouped_gemm-*.whl ./
COPY --from=build_mamba_ssm /opt/mamba_ssm-*.whl ./
RUN \
--mount=type=bind,source=requirements,target=requirements \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=setup.py,target=setup.py \
--mount=type=bind,source=megatron/core/package_info.py,target=megatron/core/package_info.py \
--mount=type=bind,source=megatron/core/README.md,target=megatron/core/README.md \
--mount=type=bind,source=megatron/core/__init__.py,target=megatron/core/__init__.py <<"EOF" bash -ex
pip install causal_conv1d-*.whl mamba_ssm-*.whl grouped_gemm-*.whl
PY_ENV=pytorch:24.07 pip install .
EOF
# Since megatron does not have any dependencies (and isn't a dependency of any other package), we can install it separately to make everything a bit quicker
ARG MCORE_REPO
ARG MCORE_REF
ARG MCORE_BACKWARDS_REF
RUN <<"EOF" bash -exu
# Checkout latest
cd /opt
rm -rf /opt/megatron-lm; mkdir megatron-lm; cd megatron-lm
git init
git remote add origin ${MCORE_REPO}
git fetch origin '+refs/merge-requests/*:refs/remotes/merge-requests/*'
git fetch origin $MCORE_REF
git checkout $MCORE_REF
# Checkout backwards-ref
cd /opt
rm -rf /opt/megatron-lm-legacy; mkdir megatron-lm-legacy; cd megatron-lm-legacy
git init
git remote add origin ${MCORE_REPO}
git fetch origin $MCORE_BACKWARDS_REF
git checkout $MCORE_BACKWARDS_REF
rm -rf megatron; cp -a /opt/megatron-lm/megatron ./
EOF
RUN PY_ENV=pytorch:24.01 pip install -e /opt/megatron-lm
ENV PYTHONPATH="/opt/megatron-lm:$PYTHONPATH"
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install jet-api jet-client --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
###
# syntax=docker/dockerfile:experimental
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN sed -i -e 's/^APT/# APT/' -e 's/^DPkg/# DPkg/' \
/etc/apt/apt.conf.d/docker-clean
RUN apt-get update && \
apt-get install -y python3-venv && \
apt-get clean && \
python -m venv /opt/jet
RUN pip3 install --no-cache-dir \
black==24.4.2 \
isort==5.13.2 \
flake8==7.1.0 \
pylint==3.2.6 \
mypy
COPY . /opt/megatron-lm
WORKDIR /opt/megatron-lm
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install jet-client jet-api --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
###
#!/bin/bash
# Runs the "7B" parameter model
export HSA_FORCE_FINE_GRAIN_PCIE=1
export OMP_NUM_THREADS=1
export NCCL_P2P_LEVEL=SYS
export NCCL_ALGO=Ring
export NCCL_NCHANNELS_PER_PEER=16
export NCCL_MIN_NCHANNELS=20
export NCCL_IB_TIMEOUT=22
export CUDA_DEVICE_MAX_CONNECTIONS=1
export NCCL_NET_GDR_LEVEL=SYS
export NCCL_NET_GDR_READ=0
CHECKPOINT_PATH=./tmp #$1 #<Specify path>
TENSORBOARD_LOGS_PATH=./tmp #$2 #<Specify path>
DATA_PATH="/datasets/oscar-1GB-gpt_text_document" #<Specify path and file prefix>_text_document
VOCAB_PATH=./gpt2-vocab.json
MERGE_PATH=./gpt2-merges.txt
GPT_MODEL_ARGS=(
--num-layers 12
--hidden-size 768
--num-attention-heads 12
--ffn-hidden-size 3072
--seq-length 1024
--max-position-embeddings 1024
)
# export NVTE_FLASH_ATTN=1 # use the cutlass flash attention
# export NVTE_FLASH_ATTN_TRITON=1 # use the Triton flash attention
# --transformer-impl transformer_engine
# --use-mcore-models
TRAINING_ARGS=(
--transformer-impl local
--use-legacy-models
--micro-batch-size 1
--global-batch-size 60 #240 #512 #64
--train-iters 100
--weight-decay 0.1
--adam-beta1 0.9
--adam-beta2 0.95
--init-method-std 0.006
--clip-grad 1.0
--bf16
--use-distributed-optimizer
--ckpt-format torch
--disable-bias-linear
--overlap-grad-reduce
--attention-dropout 0
--hidden-dropout 0
--ddp-average-in-collective
--recompute-granularity full
--recompute-num-layers 5
--recompute-method block
--no-gradient-accumulation-fusion
--swiglu
--lr 3.0e-5
--lr-decay-style cosine
--min-lr 3.0e-6
--lr-warmup-iters 1
)
MODEL_PARALLEL_ARGS=(
--sequence-parallel
--tensor-model-parallel-size 2
--pipeline-model-parallel-size 2
)
DATA_ARGS=(
--data-path $DATA_PATH
--split 949,50,1
--untie-embeddings-and-output-weights
--use-rotary-position-embeddings
--normalization RMSNorm
--no-position-embedding
--vocab-file $VOCAB_PATH
--merge-file $MERGE_PATH
--tokenizer-type GPT2BPETokenizer
)
EVAL_AND_LOGGING_ARGS=(
--log-interval 1
--save-interval 10000
--eval-interval 1000
--save $CHECKPOINT_PATH
--load $CHECKPOINT_PATH
--eval-iters 10
--tensorboard-dir $TENSORBOARD_LOGS_PATH
)
RANK=$OMPI_COMM_WORLD_RANK
LOCAL_RANK=$OMPI_COMM_WORLD_LOCAL_RANK
WORLD_SIZE=$OMPI_COMM_WORLD_SIZE
DIST_URL=${1}
DIST_PORT=34566
DISTRIBUTED_ARGS=(
--rank ${RANK}
--world-size ${WORLD_SIZE}
--local-rank ${LOCAL_RANK}
--dist-url tcp://${DIST_URL}:${DIST_PORT}
)
APP="python -u pretrain_gpt.py \
${GPT_MODEL_ARGS[@]} \
${TRAINING_ARGS[@]} \
${MODEL_PARALLEL_ARGS[@]} \
${DATA_ARGS[@]} \
${EVAL_AND_LOGGING_ARGS[@]} \
${DISTRIBUTED_ARGS[@]} \
"
case ${LOCAL_RANK} in
[0-7])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
esac
The following applies to all files unless otherwise noted:
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--
This repository also contains code from Hugging Face Inc., Google Research,
Facebook (from their Fairseq, Dino, and ParlAI projects), Microsoft (from their
Swin-Transformer project), Philip Popien, the Mamba project (Tri Dao and
Albert Gu), and the Triton language and compiler project (Philippe Tillet and
OpenAI). Files from these organizations have notices at the top of each file.
Below are licenses used in those files, as indicated.
--------------------------------------------------------------------------------
-- LICENSE FOR Facebook, huggingface, Google Research, LLaVA, and Mamba code --
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--------------------------------------------------------------------------------
LICENSE FOR
Facebook, Inc. and its affiliates,
Meta Platforms, Inc. and its affiliates,
Microsoft Corporation,
OpenGVLab/InternVL, and
Triton language and compiler.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
#!/bin/bash
set -eux
#export FLASH_ATTENTION_PRINT_PARAM=1
# Runs the "7B" parameter model
export HSA_FORCE_FINE_GRAIN_PCIE=1
export OMP_NUM_THREADS=1
export NCCL_P2P_LEVEL=PXB # SYS
#export HIP_ALLOC_INITIALIZE=0
# export GPU_MAX_HW_QUEUES=10
export NCCL_ALGO=Ring
export NCCL_NCHANNELS_PER_PEER=16
export NCCL_MIN_NCHANNELS=32 # 20
export NCCL_MAX_NCHANNELS=32 # 20
export NCCL_IB_TIMEOUT=22
export CUDA_DEVICE_MAX_CONNECTIONS=1
export NCCL_IB_HCA=mlx5_2:1,mlx5_3:1,mlx5_4:1,mlx5_5:1,mlx5_6:1,mlx5_7:1,mlx5_8:1,mlx5_9:1
export NCCL_NET_GDR_LEVEL=7
export NCCL_NET_GDR_READ=1
export RCCL_SDMA_COPY_ENABLE=0
export GLOG_minloglevel=3 # print only error-level NCCL logs
export ALLREDUCE_STREAM_WITH_COMPUTE=1
export SENDRECV_STREAM_WITH_COMPUTE=1
export cache_size_limit=64
SAVE_PATH=./tmp_7b
TENSORBOARD_LOGS_PATH=./tmp_7b #$2 #<Specify path>
DATA_PATH="/models/datasets/openwebtext/openwebtext-llama-7b/openwebtext-llama-7b_text_document" #<Specify path and file prefix>_text_document
GPT_MODEL_ARGS=(
--num-layers 32
--hidden-size 4096
--ffn-hidden-size 11008
--num-attention-heads 32
--max-position-embeddings 4096
--normalization RMSNorm
--position-embedding-type rope # none #
--untie-embeddings-and-output-weights # handle the embedding and output weights separately for more flexibility
)
export NVTE_FLASH_ATTN=1 # use the cutlass flash attention
# export NVTE_FLASH_ATTN_TRITON=1 # use the Triton flash attention
# --transformer-impl transformer_engine # use these two options for the mcore path
# --use-mcore-models
# --transformer-impl local # use these two options for the legacy path
# --use-legacy-models
TRAINING_ARGS=(
--transformer-impl local # use these two options for the legacy path
--use-legacy-models
--micro-batch-size 1
--global-batch-size 64 #32 #240 #60 #512 #64
--train-iters 50
--weight-decay 0.1
--adam-beta1 0.9
--adam-beta2 0.95
--init-method-std 0.006
--clip-grad 1.0
--bf16
# --fp16 # enabling fp16 requires setting a loss scale
# --loss-scale 1024
--use-distributed-optimizer
--disable-bias-linear
--attention-dropout 0
--hidden-dropout 0
# --no-gradient-accumulation-fusion
--swiglu
--lr 3.0e-5
--lr-decay-style cosine
--min-lr 3.0e-6
--lr-warmup-iters 1
--ckpt-format torch
--ddp-average-in-collective # in DP communication, average gradients/parameters directly in the collective instead of summing (onto one device) first and then averaging
# --recompute-granularity full # enable recomputation to reduce memory at the cost of extra time
# --recompute-num-layers 5 #0 #
# --recompute-method block
--overlap-grad-reduce # overlap the DDP grad reduce
# --tp-comm-overlap # overlap tensor-parallel communication with GEMMs; this optimization is not yet adapted
# --tp-comm-overlap-rs-dgrad # overlap reduce-scatter with the dgrad GEMM
--use-flash-attn-cutlass
)
# environment variables for the torch flash attention
# export TORCHINDUCTOR_COORDINATE_DESCENT_TUNING=1
# export TORCHINDUCTOR_BENCHMARK_FUSION=1
# export TORCHINDUCTOR_BENCHMARK_MULTI_TEMPLATES=1
# export TORCHINDUCTOR_MAX_AUTOTUNE=1
# export TORCHINDUCTOR_CACHE_DIR=./cache
# --use-flash-attn-cutlass # cutlass fa
# --use-flash-attn-triton # triton fa
# --use-flash-attn-torch # torch fa
MODEL_PARALLEL_ARGS=(
--sequence-parallel
--tensor-model-parallel-size 1
--pipeline-model-parallel-size 2
# --context-parallel-size 2
# --num-layers-per-virtual-pipeline-stage 4
# --microbatch-group-size-per-virtual-pipeline-stage 1
# --no-overlap-p2p-communication # when enabled
)
DATA_ARGS=(
--data-path $DATA_PATH
--seq-length 4096 #4096
--split 949,50,1
--tokenizer-type Llama2Tokenizer
--tokenizer-model /models1/Llama-2-7b-chat-hf/tokenizer.model
# --tokenizer-model /data/model_weights/llama2_7b_hf/tokenizer.model
)
EVAL_AND_LOGGING_ARGS=(
--log-interval 1
--log-throughput
--save-interval 1000
--eval-interval 1000
#--save $SAVE_PATH
#--load $SAVE_PATH
--eval-iters 3
--tensorboard-dir $TENSORBOARD_LOGS_PATH
)
# FINETUNE_ARGS=(
# # --finetune
# # --pretrained-checkpoint $CHECKPOINT_PATH
# --load $CHECKPOINT_PATH
# --no-load-optim
# --no-load-rng
# )
PROFILE_ARGS=(
--profile
--profile-step-start 4
--profile-step-end 5
--use-pytorch-profiler
--profile-ranks 0 1 2 3 4 5 6 7
--profile-dir prof_data
)
RANK=$OMPI_COMM_WORLD_RANK
LOCAL_RANK=$OMPI_COMM_WORLD_LOCAL_RANK
WORLD_SIZE=$OMPI_COMM_WORLD_SIZE
DIST_URL=${1}
DIST_PORT=34577
DISTRIBUTED_ARGS=(
--rank ${RANK}
--world-size ${WORLD_SIZE}
--local-rank ${LOCAL_RANK}
--dist-url tcp://${DIST_URL}:${DIST_PORT}
)
APP="python -u pretrain_gpt.py \
${GPT_MODEL_ARGS[@]} \
${TRAINING_ARGS[@]} \
${MODEL_PARALLEL_ARGS[@]} \
${DATA_ARGS[@]} \
${EVAL_AND_LOGGING_ARGS[@]} \
${DISTRIBUTED_ARGS[@]} \
"
# enable profiling
# ${PROFILE_ARGS[@]} \
# export HIP_VISIBLE_DEVICES=0,7 # # 4,5,6,7 #,
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 # # 4,5,6,7 #,
# export CUDA_VISIBLE_DEVICES=4,5,6,7 # 0,1,2,3,
# ${APP}
case ${LOCAL_RANK} in
[0-7])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# hipprof --hip-trace --trace-off numactl --cpunodebind=${LOCAL_RANK} --membind=${LOCAL_RANK} ${APP}
numactl --cpunodebind=${LOCAL_RANK} --membind=${LOCAL_RANK} ${APP}
;;
esac
include megatron/core/requirements.txt
include megatron/core/README.md
recursive-include requirements *
### Environment setup
1. Pull the image: docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.4.1-ubuntu22.04-dtk25.04.1-py3.10
2. Install the base dependencies
<pre>
pip install -r requirements.txt
</pre>
If pip install is slow, add a mirror: -i https://pypi.tuna.tsinghua.edu.cn/simple/
### Download the openwebtext training data
https://huggingface.co/datasets/Skylion007/openwebtext
### Data preprocessing
#### 1. Convert the raw tar archives into openwebtext.jsonl
Configure the paths inside the script (adjust them to your actual layout):
SUBSETS_DIR = "/models/datasets/openwebtext/subsets" # directory containing the tar archives
OUTPUT_JSONL = "/models/datasets/openwebtext/openwebtext.jsonl" # output jsonl file
```
python convert_openwebtext_jsonl.py
```
Running the script produces openwebtext.jsonl, about 68 GB in size.
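The conversion script itself is not shown in this README; below is a minimal sketch of what `convert_openwebtext_jsonl.py` plausibly does, assuming each tar member is a plain-text document (the actual member layout may differ):
```python
# Hypothetical sketch of convert_openwebtext_jsonl.py: walk the tar archives
# and emit one {"text": ...} JSON object per line, the input format expected
# by tools/preprocess_data.py.
import json
import tarfile
from pathlib import Path

SUBSETS_DIR = Path("/models/datasets/openwebtext/subsets")
OUTPUT_JSONL = Path("/models/datasets/openwebtext/openwebtext.jsonl")

with OUTPUT_JSONL.open("w", encoding="utf-8") as out:
    for tar_path in sorted(SUBSETS_DIR.glob("*.tar")):
        with tarfile.open(tar_path) as tar:
            for member in tar.getmembers():
                if not member.isfile():
                    continue
                # Assumes members are plain text; real subsets may need decompression.
                text = tar.extractfile(member).read().decode("utf-8", errors="ignore")
                out.write(json.dumps({"text": text}, ensure_ascii=False) + "\n")
```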
#### 2. Preprocess the data into the Llama-2 format
```
python tools/preprocess_data.py \
--input openwebtext.jsonl \
--output-prefix /models/datasets/openwebtext/openwebtext-llama-7b \
--tokenizer-type Llama2Tokenizer \
--tokenizer-model /path/to/llama2_7b_hf/tokenizer.model \
--workers 16 \
--append-eod
```
### Download the tokenizer files
Link: https://www.modelscope.cn/models/shakechen/Llama-2-7b-hf/files
Download the tokenizer* files from that page.
### Llama pretraining
Script: `Llama_pretraining.sh`
Update the dataset and tokenizer paths:
```shell
DATA_PATH="/models/datasets/openwebtext/openwebtext-llama-7b/openwebtext-llama-7b_text_document"
--tokenizer-model /models1/Llama-2-7b-chat-hf/tokenizer.model
```
- Single-node training on 8 GPUs
```shell
mpirun --allow-run-as-root -np 8 Llama_pretraining.sh localhost >& Llama_pretraining.log
```
View the training log in `Llama_pretraining.log`.
context\_parallel package
=========================
Context parallelism overview
----------------------------
.. figure:: ../images/context_parallel/CP_overview.png
:alt: cp_overview
:align: center
Figure 1: A transformer layer running with TP2CP2. Communications next to Attention are for CP, others are for TP. (AG/RS: all-gather in forward and reduce-scatter in backward, RS/AG: reduce-scatter in forward and all-gather in backward, /AG: no-op in forward and all-gather in backward).
Context Parallelism ("CP") is a parallelization scheme over the sequence-length dimension. Unlike prior SP (sequence parallelism), which only splits the sequence for Dropout and LayerNorm activations, CP partitions the network inputs and all activations along the sequence dimension. With CP, all modules except attention (e.g., Linear, LayerNorm, etc.) can work as usual without any changes, because they have no inter-token operations. As for attention, the Q (query) of each token needs to be computed against the KV (key and value) of all tokens in the same sequence, so CP requires an additional all-gather across GPUs to collect the full sequence of KV. Correspondingly, a reduce-scatter is applied to the activation gradients of KV in backward propagation. To reduce the activation memory footprint, each GPU stores only the KV of its sequence chunk in the forward pass and gathers KV again in the backward pass. KV communication happens between a GPU and its counterparts in other TP groups. Under the hood, the all-gather and reduce-scatter are transformed into point-to-point communications in a ring topology. Exchanging KV can also leverage MQA/GQA to reduce communication volume, as they have only one or a few attention heads for KV.
For example, in Figure 1, assuming the sequence length is 8K, each GPU processes 4K tokens. GPU0 and GPU2 compose a CP group and exchange KV with each other; the same happens between GPU1 and GPU3. CP is similar to `Ring Attention <https://arxiv.org/abs/2310.01889>`_ but provides better performance by (1) leveraging the latest OSS and cuDNN flash attention kernels and (2) removing the unnecessary computation caused by the lower-triangular causal mask, achieving optimal load balance among GPUs.
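As an illustration of the ring-style point-to-point exchange, a simplified sketch with ``torch.distributed`` primitives follows (it assumes, for brevity, that the CP group spans all ranks; the actual implementation lives in fused attention kernels):

.. code-block:: python

    # Simplified ring exchange of KV chunks (illustration only).
    # After cp_size - 1 steps, every rank has seen every rank's KV chunk.
    import torch
    import torch.distributed as dist

    def ring_exchange_kv(kv_chunk: torch.Tensor) -> list:
        cp_size = dist.get_world_size()
        rank = dist.get_rank()
        send_to = (rank + 1) % cp_size
        recv_from = (rank - 1) % cp_size
        chunks = [kv_chunk]
        recv_buf = torch.empty_like(kv_chunk)
        for _ in range(cp_size - 1):
            # Pass the most recently received chunk around the ring.
            ops = [
                dist.P2POp(dist.isend, chunks[-1], send_to),
                dist.P2POp(dist.irecv, recv_buf, recv_from),
            ]
            for req in dist.batch_isend_irecv(ops):
                req.wait()
            chunks.append(recv_buf.clone())  # attention can consume this chunk here
        return chunks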
Context parallelism benefits
----------------------------
.. figure:: ../images/context_parallel/CP_results.png
:alt: cp_results
:align: center
Figure 2: Speedup of 175B GPT with various TP+CP combinations vs. full recompute (i.e., TP8CP1).
LLMs encounter OOM (out of memory) issues with long context (i.e., long sequence lengths) because the memory footprint of activations grows linearly with sequence length. Recomputing activations in the backward pass can avoid OOM but also introduces significant overheads (~30% with full recompute). Enlarging TP (tensor model parallelism) can fix the OOM issue as well, but it potentially makes compute (e.g., Linear) too short to overlap communication latencies. To be clear, scaling out to more GPUs with a bigger TP can hit the overlapping problem whether or not OOM happens.
CP addresses these issues better. With CP, each GPU computes on only a part of the sequence, which reduces both computation and communication by a factor of CP, so there are no concerns about overlapping them. The activation memory footprint per GPU is also CP times smaller, so the OOM issue disappears. As Figure 2 shows, combinations of TP and CP can achieve optimal performance by eliminating recompute overheads and making the best tradeoff between computation and communication.
Enabling context parallelism
----------------------------
CP support has been added to GPT. All models that share the GPT code path, such as Llama, should also benefit from CP. CP can work with TP (tensor model parallelism), PP (pipeline model parallelism), and DP (data parallelism), where the total number of GPUs equals TPxCPxPPxDP. CP also works with different attention variants, including MHA/MQA/GQA and uni-directional and bi-directional masking.
CP is enabled by simply setting context_parallel_size=<CP_SIZE> on the command line. The default context_parallel_size is 1, which means CP is disabled. Running with CP requires Megatron-Core (>=0.5.0) and Transformer Engine (>=1.1).
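When Megatron-Core is used as a library, the equivalent knob is the ``context_parallel_size`` argument of ``initialize_model_parallel``. A minimal sketch (a TP2 x CP2 layout on 4 GPUs is assumed):

.. code-block:: python

    # Minimal sketch: enable CP2 alongside TP2 on 4 GPUs.
    import torch
    import torch.distributed as dist
    from megatron.core import parallel_state

    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    parallel_state.initialize_model_parallel(
        tensor_model_parallel_size=2,
        context_parallel_size=2,  # same value as the --context-parallel-size flag
    )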
datasets package
================
.. mdinclude :: ../../../megatron/core/datasets/readme.md
Submodules
----------
datasets.blended\_megatron\_dataset\_config module
---------------------------------------------------
.. automodule:: core.datasets.blended_megatron_dataset_config
:members:
:undoc-members:
:show-inheritance:
datasets.blended\_megatron\_dataset\_builder module
---------------------------------------------------
.. automodule:: core.datasets.blended_megatron_dataset_builder
:members:
:undoc-members:
:show-inheritance:
datasets.megatron\_tokenizer module
-----------------------------------
.. automodule:: core.datasets.megatron_tokenizer
:members:
:undoc-members:
:show-inheritance:
datasets.indexed\_dataset module
--------------------------------
.. automodule:: core.datasets.indexed_dataset
:members:
:undoc-members:
:show-inheritance:
datasets.megatron\_dataset module
---------------------------------
.. automodule:: core.datasets.megatron_dataset
:members:
:undoc-members:
:show-inheritance:
datasets.gpt\_dataset module
----------------------------
.. automodule:: core.datasets.gpt_dataset
:members:
:undoc-members:
:show-inheritance:
datasets.masked\_dataset module
-------------------------------
.. automodule:: core.datasets.masked_dataset
:members:
:undoc-members:
:show-inheritance:
datasets.bert\_dataset module
-----------------------------
.. automodule:: core.datasets.bert_dataset
:members:
:undoc-members:
:show-inheritance:
datasets.t5\_dataset module
---------------------------
.. automodule:: core.datasets.t5_dataset
:members:
:undoc-members:
:show-inheritance:
datasets.blended\_dataset module
----------------------------------
.. automodule:: core.datasets.blended_dataset
:members:
:undoc-members:
:show-inheritance:
datasets.utils module
---------------------
.. automodule:: core.datasets.utils
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: core.datasets
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing package
===========================
A library for saving and loading distributed checkpoints.
A "distributed checkpoint" can have various underlying formats (the current default format is based on Zarr),
but it has a distinctive property: a checkpoint saved in one parallel configuration (tensor/pipeline/data parallelism)
can be loaded in a different parallel configuration.
Using the library requires defining sharded state_dict dictionaries with functions from the *mapping* and *optimizer* modules.
Those state dicts can be saved or loaded with the *serialization* module, using strategies from the *strategies* module.
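A minimal save/load round trip might look like this (a sketch only; the key name and sharding layout are illustrative, and ``torch.distributed`` is assumed to be initialized):

.. code-block:: python

    # Sketch: each data-parallel rank owns one dim-0 slice of a weight.
    import torch
    import torch.distributed as dist
    from megatron.core import dist_checkpointing

    rank, world = dist.get_rank(), dist.get_world_size()
    local_weight = torch.randn(16, 32, device="cuda")

    sharded_state_dict = {
        "model.weight": dist_checkpointing.ShardedTensor.from_rank_offsets(
            "model.weight", local_weight, (0, rank, world)  # (dim, shard index, #shards)
        ),
    }
    dist_checkpointing.save(sharded_state_dict, "/tmp/dist_ckpt")

    # Loading can happen under a different parallel configuration, given a
    # matching sharded state dict for that configuration.
    loaded = dist_checkpointing.load(sharded_state_dict, "/tmp/dist_ckpt")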
Subpackages
-----------
.. toctree::
:maxdepth: 4
dist_checkpointing.strategies
Submodules
----------
dist\_checkpointing.serialization module
----------------------------------------
.. automodule:: core.dist_checkpointing.serialization
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.mapping module
----------------------------------
.. automodule:: core.dist_checkpointing.mapping
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.optimizer module
------------------------------------
.. automodule:: core.dist_checkpointing.optimizer
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.core module
-------------------------------
.. automodule:: core.dist_checkpointing.core
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.dict\_utils module
--------------------------------------
.. automodule:: core.dist_checkpointing.dict_utils
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.utils module
--------------------------------
.. automodule:: core.dist_checkpointing.utils
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: core.dist_checkpointing
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.strategies package
======================================
Package defining different checkpoint formats (backends) and saving/loading algorithms (strategies).
Strategies can be used to implement new checkpoint formats, or to implement new (more optimal for a given use case) ways of saving/loading existing formats.
Strategies are passed to the `dist_checkpointing.load` and `dist_checkpointing.save` functions and control the actual saving/loading procedure.
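For example, a load could be pointed at the TensorStore-based strategy roughly as follows (a sketch; the class name is taken from the ``tensorstore`` module documented below, and the keyword argument name is an assumption):

.. code-block:: python

    # Sketch: override the default sharded-load strategy (names assumed).
    from megatron.core import dist_checkpointing
    from megatron.core.dist_checkpointing.strategies.tensorstore import (
        TensorStoreLoadShardedStrategy,
    )

    loaded = dist_checkpointing.load(
        sharded_state_dict,
        "/tmp/dist_ckpt",
        sharded_strategy=TensorStoreLoadShardedStrategy(),
    )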
Submodules
----------
dist\_checkpointing.strategies.base module
------------------------------------------
.. automodule:: core.dist_checkpointing.strategies.base
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.strategies.tensorstore module
-------------------------------------------------
.. automodule:: core.dist_checkpointing.strategies.tensorstore
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.strategies.two\_stage module
------------------------------------------------
.. automodule:: core.dist_checkpointing.strategies.two_stage
:members:
:undoc-members:
:show-inheritance:
dist\_checkpointing.strategies.zarr module
------------------------------------------
.. automodule:: core.dist_checkpointing.strategies.zarr
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: core.dist_checkpointing.strategies
:members:
:undoc-members:
:show-inheritance:
# Distributed Optimizer
The motivation for the distributed optimizer is to save memory by distributing the optimizer state evenly across data parallel ranks (https://arxiv.org/abs/1910.02054), versus the naive method of replicating the optimizer state across data parallel ranks.
Theoretical memory savings vary depending on the combination of the datatype of the model's parameters (`param_dtype`) and main gradients accumulated across data-parallel replicas (`grad_dtype`). We always use `fp32` main parameters for optimizer steps. In the current implementation, the theoretical number of bytes per parameter is (where d is the data parallel size):
| | Non-distributed optim | Distributed optim |
| ------ | ------ | ------ |
| `fp16` parameters, `fp16` gradients | 20 | 4 + 16/d |
| `bf16` parameters, `fp32` gradients | 18 | 6 + 12/d |
| `fp32` parameters, `fp32` gradients | 16 | 8 + 8/d |
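For example, with `bf16` parameters, `fp32` gradients, and `d = 4`, the distributed optimizer needs `6 + 12/4 = 9` bytes per parameter instead of 18. A small sketch reproducing the table's arithmetic (the replicated/sharded split follows the reasoning above):
```python
# Bytes per parameter: replicated model state + fp32 optimizer state sharded over d ranks.
CASES = {
    # (param_dtype, grad_dtype): (replicated_bytes, sharded_bytes)
    ("fp16", "fp16"): (2 + 2, 16),  # fp32 main params + 2 Adam moments + fp32 main grads
    ("bf16", "fp32"): (2 + 4, 12),  # fp32 main params + 2 Adam moments
    ("fp32", "fp32"): (4 + 4, 8),   # 2 Adam moments only; params are already fp32
}

def bytes_per_param(param_dtype, grad_dtype, d):
    replicated, sharded = CASES[(param_dtype, grad_dtype)]
    return replicated + sharded / d

assert bytes_per_param("bf16", "fp32", d=1) == 18  # matches the non-distributed column
print(bytes_per_param("bf16", "fp32", d=4))        # 9.0
```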
Our implementation of the distributed optimizer uses contiguous buffers for parameters and main gradients; model gradients are copied over to the main gradients as soon as they are fully computed.
The figures below illustrate the distributed optimizer's sharding scheme, and the key steps of the distributed optimizer's parameter update:
## Data flow
![Data flow](../images/distrib_optimizer/data_flow.png)
## Sharding scheme
![Sharding scheme](../images/distrib_optimizer/sharding_scheme.png)
## Key steps
_(note: using illustrations above, assuming `bf16` model weights, `bf16` model gradients that are computed by the backward pass and `fp32` main gradients that are also used for optimizer steps; we always use `fp32` main weights for optimizer steps)_
- Backward pass finishes (gradient buffer holds 16 `fp32` gradient elements).
- Call reduce-scatter on each DP rank.
- Each DP rank now has 4 elements within the gradient buffer that are fully reduced (remaining 12 elements are garbage).
- DP rank 0 has gradient values for elements [0:4].
- DP rank 1 has gradient values for elements [4:8].
- DP rank 2 has gradient values for elements [8:12].
- DP rank 3 has gradient values for elements [12:16].
- Optimizer.step().
- Each DP rank copies its 4 `fp32` main parameter elements into the corresponding `bf16` parameter buffer (each element is cast from fp32 to bf16).
- Call all-gather on each DP rank.
- The parameter buffer now contains all 16, fully updated, `bf16` model parameter elements. Parameters in PyTorch modules already point to the appropriate locations in this parameter buffer, and thus forward passes are ready to run after the all-gather completes.
- At this point, the gradient buffer is also ready to be zero'd for the next iteration.
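The same cycle can be sketched with PyTorch collectives (an illustration of the steps above using the 16-element toy buffer and the DP-4 layout from the figures; the real implementation works on contiguous buckets and overlaps communication):
```python
# Toy sketch of reduce-scatter -> optimizer step -> all-gather across d DP ranks.
import torch
import torch.distributed as dist

d = dist.get_world_size()
grad_buffer = torch.randn(16, device="cuda")                         # fp32 main grads
param_buffer = torch.empty(16, device="cuda", dtype=torch.bfloat16)  # bf16 params

# 1) Reduce-scatter: this rank ends up with its 16/d fully reduced elements.
local_grad = torch.empty(16 // d, device="cuda")
dist.reduce_scatter_tensor(local_grad, grad_buffer)

# 2) Optimizer step on the local fp32 main-parameter shard (plain SGD stands
#    in for Adam here).
main_param_shard = torch.zeros(16 // d, device="cuda")
main_param_shard -= 1e-3 * local_grad

# 3) Cast the updated shard to bf16 and all-gather so every rank holds the
#    full, updated parameter buffer.
dist.all_gather_into_tensor(param_buffer, main_param_shard.to(torch.bfloat16))
```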
distributed package
===================
This package contains various utilities to finalize model weight gradients
on each rank before the optimizer step. This includes a distributed data
parallelism wrapper to all-reduce or reduce-scatter the gradients across
data-parallel replicas, and a `finalize_model_grads` method to
synchronize gradients across different parallelism modes (e.g., 'tied'
layers on different pipeline stages, or gradients for experts in a MoE on
different ranks due to expert parallelism).
Submodules
----------
distributed.distributed\_data\_parallel
---------------------------------------
Model wrapper for distributed data parallelism. Stores gradients in a
contiguous buffer, and supports the option of overlapping communication
(all-reduce or reduce-scatter) with backprop computation by breaking up
the full model's gradients into smaller buckets and running all-reduce /
reduce-scatter on each bucket asynchronously.
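A stripped-down version of the bucketing idea (a sketch, not the wrapper in this package; in the real code each bucket's collective is launched as soon as its gradients are produced during backprop):

.. code-block:: python

    # Sketch: all-reduce a flat gradient buffer bucket by bucket, async,
    # so communication can overlap with the rest of the backward pass.
    import torch
    import torch.distributed as dist

    def async_grad_reduce(grad_buffer: torch.Tensor, bucket_numel: int):
        handles = []
        for start in range(0, grad_buffer.numel(), bucket_numel):
            bucket = grad_buffer[start:start + bucket_numel]
            handles.append(dist.all_reduce(bucket, async_op=True))
        return handles

    # Before the optimizer step, wait for all buckets:
    # for h in handles: h.wait()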
.. automodule:: core.distributed.distributed_data_parallel
:members:
:undoc-members:
:show-inheritance:
distributed.finalize\_model\_grads
----------------------------------
Finalize model gradients for optimizer step across all used parallelism modes.
Synchronizes the all-reduce / reduce-scatter of model gradients across DP replicas,
all-reduces the layernorm gradients for sequence parallelism, embedding gradients
across first and last pipeline stages (if not tied), and expert gradients for expert
parallelism.
.. automodule:: core.distributed.finalize_model_grads
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
Contains functionality to synchronize gradients across different ranks before
optimizer step.
.. automodule:: core.distributed
:members:
:undoc-members:
:show-inheritance: