Commit 688448db authored by silencealiang

Update code

parent a02a5490
Pipeline #2503 passed with stage
# Changelog
## NVIDIA Megatron Core 0.10.0
- Adding MLA to MCore
- Enable FP8 for GroupedMLP
- MoE Parallel Folding
- Enhance MoE Architecture: Support MoE Layer Frequency Patterns and Configurable MoE FFN Hidden Size
- Multimodal: NVLM training and evaluation support in MCore
- Mamba Hybrid
- Increase performance and reduce memory footprint of Triton language/compiler distributed caching
- Add more unit testing and fix bugs
## NVIDIA Megatron Core 0.9.0
- Uneven pipeline parallelism
- Enable pipeline parallelism where first and last ranks have fewer transformer layers than the intermediate ranks
- Per layer CUDAGraph support for GPT training with Transformer Engine modules
- Enable different TP sizes for the vision encoder
- Enable pipeline parallelism for T5 & Llava models
- Support multi-tile multi-image input in Llava models
- MoE
- FP8 support
- Runtime upcycling support
- Dispatcher implementation optimizations
- Shared expert support with overlapping optimizations
- Qwen Model support
- Known Issues
- When using sequence parallel, during the transformer block forward pass, dropout is not using the appropriate rng context.
## NVIDIA Megatron Core 0.8.0
- Multimodal
- Added initial support for training vision language models using the LLaVA architecture
- Added initial support for inference with multimodal inputs
- End-to-end multimodal example from data collection to training to evaluation is provided in examples/multimodal
- MoE
- Context Parallel support.
- Distributed checkpoint support for grouped GEMM.
- Mamba
## NVIDIA Megatron Core 0.7.0
- MoE
- Token drop support
- Several efficiency optimizations
- Improved model parallelism
- Memory optimizations
- Distributed checkpointing
- Enabled for Retro
- Asynchronous checkpoint saving
- Several minor bug fixes, speed improvements, and memory optimizations
## NVIDIA Megatron Core 0.6.0
- MoE (Mixture of Experts)
- Performance optimization
- Communication optimization for multi GPU and Single GPU
- 23% improvement (323 TFLOPS/GPU) over MCore 0.5.0 on Mixtral with Hopper BF16
- GroupedMLP enhancement for Hopper
- DP Overlapping. Support overlapping computation with gradient reduction and parameter gathering.
- All-to-All based Token Dispatcher
- Layer-wise logging for load balancing loss.
- Improved expert parallel support including distributed optimizer.
- Distributed optimizer
- RETRO
- Data processing
- BERT
- Distributed checkpointing
- Dist checkpointing
- PyTorch native distributed backend
- Improved saving/loading speed
- TensorRT-LLM Export
- Integration with TensorRT Model Optimizer Post-training quantization (PTQ)
- Text generation driver to perform PTQ in Megatron-LM
- Llama2 and Nemotron3-8b examples to use TensorRT-LLM unified build API to build engine after training.
- Several minor enhancements, bug fixes, and documentation updates
## NVIDIA Megatron Core 0.5.0
### Key Features and Enhancements
Megatron core documentation is now [live!](https://docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start)
### Model Features
- MoE (Mixture of Experts)
- Support for Z-loss, Load balancing and Sinkhorn
- Layer and communications refactor
- Richer parallelism mappings and EP can be combined with other model parallel techniques for larger MoE variants, e.g. EP + TP + DP + SP + PP
- Token dropless architecture with Top-K routing
- Performance optimization with GroupedGEMM when the number of local experts is > 1
- Distributed checkpointing
- Interleaved rotary embedding
### Datasets
- Masked WordPiece datasets for BERT and T5
- Raw and mock datasets
### Parallelism
### Performance
- Activation offloading to CPU
- Rope and Swiglu fusion
- Sliding window attention (via Transformer Engine)
### General Improvements
- Timers
## NVIDIA Megatron Core 0.4.0
### Key Features and Enhancements
#### Models
- BERT
- RETRO
- T5
#### Parallelism
- Mixture of Experts support for GPT
- Model parallel efficient Distributed Data Parallel (DDP)
- Context Parallel (2D Tensor Parallel) support
#### Datasets
- GPT Dataset
- Blended Dataset
[Core-ADLR] @mcore-reviewers/core-adlr
megatron/core/
[Core-NeMo] @mcore-reviewers/core-nemo
megatron/core/
^[Core-MLPerf] @mcore-reviewers/mlperf
megatron/core/
[MoE-ADLR] @mcore-reviewers/moe-adlr
megatron/core/transformer/moe/
[MoE-Moe] @mcore-reviewers/moe-moe
megatron/core/transformer/moe/
[Datasets] @mcore-reviewers/datasets
megatron/core/datasets/
[BERT] @mcore-reviewers/bert
megatron/core/models/bert/
[GPT] @mcore-reviewers/gpt
megatron/core/models/gpt/
[Retro] @mcore-reviewers/retro
megatron/core/models/retro/
[Distributed Checkpointing] @mcore-reviewers/dist-checkpointing
megatron/core/dist_checkpointing/
[Distributed Optimizer] @mcore-reviewers/dist-optimizer
megatron/core/optimizer/distrib_optimizer/
[Inference] @mcore-reviewers/inference
megatron/core/inference/
^[Quantization and Inference (QAT)] @mcore-reviewers/quantization-and-inference
megatron/core/inference/
; [Context Parallelism] @mcore-reviewers/context-parallelism
;
[CI] @mcore-reviewers/ci
.gitlab/
.github/
.gitlab-ci.yml
Dockerfile.ci.lts
Dockerfile.ci.dev
tests/
[Core-ADLR] @mcore-reviewers/core-adlr
megatron/core/
[Core-NeMo] @mcore-reviewers/core-nemo
megatron/core/
^[Core-MLPerf] @mcore-reviewers/mlperf
megatron/core/
[MoE-ADLR] @mcore-reviewers/moe-adlr
megatron/core/transformer/moe/
[MoE-Moe] @mcore-reviewers/moe-moe
megatron/core/transformer/moe/
[Datasets] @mcore-reviewers/datasets
megatron/core/datasets/
[BERT] @mcore-reviewers/bert
megatron/core/models/bert/
[GPT] @mcore-reviewers/gpt
megatron/core/models/gpt/
[Retro] @mcore-reviewers/retro
megatron/core/models/retro/
[Distributed Checkpointing] @mcore-reviewers/dist-checkpointing
megatron/core/dist_checkpointing/
[Distributed Optimizer] @mcore-reviewers/dist-optimizer
megatron/core/optimizer/distrib_optimizer/
[Inference] @mcore-reviewers/inference
megatron/core/inference/
^[Quantization and Inference (QAT)] @mcore-reviewers/quantization-and-inference
megatron/core/inference/
; [Context Parallelism] @mcore-reviewers/context-parallelism
;
[CI][2] @mcore-reviewers/ci
.gitlab/
.github/
.gitlab-ci.yml
Dockerfile.ci.lts
Dockerfile.ci.dev
tests/
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as build_causal_conv1d
WORKDIR /opt
RUN CAUSAL_CONV1D_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/Dao-AILab/causal-conv1d.git@v1.2.2.post1
FROM $FROM_IMAGE_NAME as build_grouped_gemm
WORKDIR /opt
RUN pip3 wheel -v git+https://github.com/fanshiqing/grouped_gemm@v1.1.2
FROM $FROM_IMAGE_NAME as build_mamba_ssm
WORKDIR /opt
RUN MAMBA_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/state-spaces/mamba.git@v2.2.0
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends gettext python3-venv && \
apt-get clean && \
python -m venv /opt/jet && \
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq && \
chmod a+x /usr/local/bin/yq
COPY --from=build_causal_conv1d /opt/causal_conv1d-*.whl ./
COPY --from=build_grouped_gemm /opt/grouped_gemm-*.whl ./
COPY --from=build_mamba_ssm /opt/mamba_ssm-*.whl ./
RUN \
--mount=type=bind,source=requirements,target=requirements \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=setup.py,target=setup.py \
--mount=type=bind,source=megatron/core/package_info.py,target=megatron/core/package_info.py \
--mount=type=bind,source=megatron/core/README.md,target=megatron/core/README.md \
--mount=type=bind,source=megatron/core/__init__.py,target=megatron/core/__init__.py <<"EOF" bash -ex
pip install causal_conv1d-*.whl mamba_ssm-*.whl grouped_gemm-*.whl
PY_ENV=pytorch:24.07 pip install .
EOF
# Since megatron does not have any dependencies (and isn't a dependency to any other package), we can install it separately to make everything a bit quicker
ARG MCORE_REPO
ARG MCORE_REF
ARG MCORE_BACKWARDS_REF
RUN <<"EOF" bash -exu
# Checkout latest
cd /opt
rm -rf /opt/megatron-lm; mkdir megatron-lm; cd megatron-lm
git init
git remote add origin ${MCORE_REPO}
git fetch origin '+refs/merge-requests/*:refs/remotes/merge-requests/*'
git fetch origin $MCORE_REF
git checkout $MCORE_REF
# Checkout backwards-ref
cd /opt
rm -rf /opt/megatron-lm-legacy; mkdir megatron-lm-legacy; cd megatron-lm-legacy
git init
git remote add origin ${MCORE_REPO}
git fetch origin $MCORE_BACKWARDS_REF
git checkout $MCORE_BACKWARDS_REF
rm -rf megatron; cp -a /opt/megatron-lm/megatron ./
EOF
RUN PY_ENV=pytorch:24.07 pip install -e /opt/megatron-lm
ENV PYTHONPATH="/opt/megatron-lm:$PYTHONPATH"
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install jet-client jet-api --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as build_causal_conv1d
WORKDIR /opt
RUN CAUSAL_CONV1D_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/Dao-AILab/causal-conv1d.git@v1.2.2.post1
FROM $FROM_IMAGE_NAME as build_grouped_gemm
WORKDIR /opt
RUN pip3 wheel -v git+https://github.com/fanshiqing/grouped_gemm@v1.1.2
FROM $FROM_IMAGE_NAME as build_mamba_ssm
WORKDIR /opt
RUN git clone https://github.com/state-spaces/mamba.git && \
cd mamba && \
git checkout v2.2.0 && \
sed -i "/triton/d" setup.py && \
MAMBA_FORCE_BUILD=TRUE pip3 wheel -v .
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends gettext python3-venv && \
apt-get clean && \
python -m venv /opt/jet && \
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq && \
chmod a+x /usr/local/bin/yq
COPY --from=build_causal_conv1d /opt/causal_conv1d-*.whl ./
COPY --from=build_grouped_gemm /opt/grouped_gemm-*.whl ./
COPY --from=build_mamba_ssm /opt/mamba/mamba_ssm-*.whl ./
RUN \
--mount=type=bind,source=requirements,target=requirements \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=setup.py,target=setup.py \
--mount=type=bind,source=megatron/core/package_info.py,target=megatron/core/package_info.py \
--mount=type=bind,source=megatron/core/README.md,target=megatron/core/README.md \
--mount=type=bind,source=megatron/core/requirements.txt,target=megatron/core/requirements.txt \
--mount=type=bind,source=megatron/core/__init__.py,target=megatron/core/__init__.py <<"EOF" bash -ex
pip install causal_conv1d-*.whl mamba_ssm-*.whl grouped_gemm-*.whl
PY_ENV=pytorch_24.10 pip install .
EOF
# Since megatron does not have any dependencies (and isn't a dependency to any other package), we can install it separately to make everything a bit quicker
ARG MCORE_REPO
ARG MCORE_REF
ARG MCORE_BACKWARDS_REF
RUN <<"EOF" bash -exu
# Checkout latest
cd /opt
rm -rf /opt/megatron-lm; mkdir megatron-lm; cd megatron-lm
git init
git remote add origin ${MCORE_REPO}
git fetch origin '+refs/merge-requests/*:refs/remotes/merge-requests/*'
git fetch origin $MCORE_REF
git checkout $MCORE_REF
# Checkout backwards-ref
cd /opt
rm -rf /opt/megatron-lm-legacy; mkdir megatron-lm-legacy; cd megatron-lm-legacy
git init
git remote add origin ${MCORE_REPO}
git fetch origin $MCORE_BACKWARDS_REF
git checkout $MCORE_BACKWARDS_REF
rm -rf megatron; cp -a /opt/megatron-lm/megatron ./
EOF
RUN PY_ENV=pytorch_24.10 pip install -e /opt/megatron-lm
ENV PYTHONPATH="/opt/megatron-lm:$PYTHONPATH"
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
--mount=type=secret,id=LOGGER_INDEX_URL \
LOGGER_INDEX_URL=$(cat /run/secrets/LOGGER_INDEX_URL) && \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install "jet-client~=2.0" jet-api --upgrade $JET_INDEX_URLS && \
pip install "one-logger" --upgrade $LOGGER_INDEX_URL
ENV PATH="$PATH:/opt/jet/bin"
###
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as build_causal_conv1d
WORKDIR /opt
RUN CAUSAL_CONV1D_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/Dao-AILab/causal-conv1d.git@v1.2.2.post1
FROM $FROM_IMAGE_NAME as build_grouped_gemm
WORKDIR /opt
RUN pip3 wheel -v git+https://github.com/fanshiqing/grouped_gemm@v1.1.2
FROM $FROM_IMAGE_NAME as build_mamba_ssm
WORKDIR /opt
RUN MAMBA_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/state-spaces/mamba.git@v2.0.3
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends gettext python3-venv && \
apt-get clean && \
python -m venv /opt/jet && \
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq && \
chmod a+x /usr/local/bin/yq
COPY --from=build_causal_conv1d /opt/causal_conv1d-*.whl ./
COPY --from=build_grouped_gemm /opt/grouped_gemm-*.whl ./
COPY --from=build_mamba_ssm /opt/mamba_ssm-*.whl ./
RUN \
--mount=type=bind,source=requirements,target=requirements \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=setup.py,target=setup.py \
--mount=type=bind,source=megatron/core/package_info.py,target=megatron/core/package_info.py \
--mount=type=bind,source=megatron/core/README.md,target=megatron/core/README.md \
--mount=type=bind,source=megatron/core/__init__.py,target=megatron/core/__init__.py <<"EOF" bash -ex
pip install causal_conv1d-*.whl mamba_ssm-*.whl grouped_gemm-*.whl
PY_ENV=pytorch:24.07 pip install .
EOF
# Since megatron does not have any dependencies (and isn't a dependency to any other package), we can install it separately to make everything a bit quicker
ARG MCORE_REPO
ARG MCORE_REF
ARG MCORE_BACKWARDS_REF
RUN <<"EOF" bash -exu
# Checkout latest
cd /opt
rm -rf /opt/megatron-lm; mkdir megatron-lm; cd megatron-lm
git init
git remote add origin ${MCORE_REPO}
git fetch origin '+refs/merge-requests/*:refs/remotes/merge-requests/*'
git fetch origin $MCORE_REF
git checkout $MCORE_REF
# Checkout backwards-ref
cd /opt
rm -rf /opt/megatron-lm-legacy; mkdir megatron-lm-legacy; cd megatron-lm-legacy
git init
git remote add origin ${MCORE_REPO}
git fetch origin $MCORE_BACKWARDS_REF
git checkout $MCORE_BACKWARDS_REF
rm -rf megatron; cp -a /opt/megatron-lm/megatron ./
EOF
RUN PY_ENV=pytorch:24.01 pip install -e /opt/megatron-lm
ENV PYTHONPATH="/opt/megatron-lm:$PYTHONPATH"
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install jet-api jet-client --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as build_causal_conv1d
WORKDIR /opt
RUN CAUSAL_CONV1D_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/Dao-AILab/causal-conv1d.git@v1.2.2.post1
FROM $FROM_IMAGE_NAME as build_grouped_gemm
WORKDIR /opt
RUN pip3 wheel -v git+https://github.com/fanshiqing/grouped_gemm@v1.1.2
FROM $FROM_IMAGE_NAME as build_mamba_ssm
WORKDIR /opt
RUN MAMBA_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/state-spaces/mamba.git@v2.0.3
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y --no-install-recommends gettext python3-venv && \
apt-get clean && \
python -m venv /opt/jet && \
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq && \
chmod a+x /usr/local/bin/yq
COPY --from=build_causal_conv1d /opt/causal_conv1d-*.whl ./
COPY --from=build_grouped_gemm /opt/grouped_gemm-*.whl ./
COPY --from=build_mamba_ssm /opt/mamba_ssm-*.whl ./
RUN \
--mount=type=bind,source=requirements,target=requirements \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=setup.py,target=setup.py \
--mount=type=bind,source=megatron/core/package_info.py,target=megatron/core/package_info.py \
--mount=type=bind,source=megatron/core/README.md,target=megatron/core/README.md \
--mount=type=bind,source=megatron/core/requirements.txt,target=megatron/core/requirements.txt \
--mount=type=bind,source=megatron/core/__init__.py,target=megatron/core/__init__.py <<"EOF" bash -ex
pip install causal_conv1d-*.whl mamba_ssm-*.whl grouped_gemm-*.whl
PY_ENV=pytorch_24.01 pip install .
EOF
# Since megatron does not have any dependencies (and isn't a dependency to any other package), we can install it separately to make everything a bit quicker
ARG MCORE_REPO
ARG MCORE_REF
ARG MCORE_BACKWARDS_REF
RUN <<"EOF" bash -exu
# Checkout latest
cd /opt
rm -rf /opt/megatron-lm; mkdir megatron-lm; cd megatron-lm
git init
git remote add origin ${MCORE_REPO}
git fetch origin '+refs/merge-requests/*:refs/remotes/merge-requests/*'
git fetch origin $MCORE_REF
git checkout $MCORE_REF
# Checkout backwards-ref
cd /opt
rm -rf /opt/megatron-lm-legacy; mkdir megatron-lm-legacy; cd megatron-lm-legacy
git init
git remote add origin ${MCORE_REPO}
git fetch origin $MCORE_BACKWARDS_REF
git checkout $MCORE_BACKWARDS_REF
rm -rf megatron; cp -a /opt/megatron-lm/megatron ./
EOF
RUN PY_ENV=pytorch_24.01 pip install -e /opt/megatron-lm
ENV PYTHONPATH="/opt/megatron-lm:$PYTHONPATH"
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
--mount=type=secret,id=LOGGER_INDEX_URL \
LOGGER_INDEX_URL=$(cat /run/secrets/LOGGER_INDEX_URL) && \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install "jet-client~=2.0" jet-api --upgrade $JET_INDEX_URLS && \
pip install "one-logger" --upgrade $LOGGER_INDEX_URL
ENV PATH="$PATH:/opt/jet/bin"
###
# syntax=docker/dockerfile:experimental
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN sed -i -e 's/^APT/# APT/' -e 's/^DPkg/# DPkg/' \
/etc/apt/apt.conf.d/docker-clean
RUN apt-get update && \
apt-get install -y python3-venv && \
apt-get clean && \
python -m venv /opt/jet
RUN pip3 install --no-cache-dir \
black==24.4.2 \
isort==5.13.2 \
flake8==7.1.0 \
pylint==3.2.6 \
mypy
COPY . /opt/megatron-lm
WORKDIR /opt/megatron-lm
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install jet-client jet-api --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
# syntax=docker/dockerfile:experimental
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN sed -i -e 's/^APT/# APT/' -e 's/^DPkg/# DPkg/' \
/etc/apt/apt.conf.d/docker-clean
RUN apt-get update && \
apt-get install -y python3-venv && \
apt-get clean && \
python -m venv /opt/jet
RUN pip3 install --no-cache-dir \
black==24.4.2 \
isort==5.13.2 \
flake8==7.1.0 \
pylint==3.2.6 \
coverage \
mypy
COPY . /opt/megatron-lm
WORKDIR /opt/megatron-lm
##### For NVIDIANS only #####
FROM main as jet
ARG CACHEBUST=0
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install "jet-client~=2.0" jet-api --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
###
#!/bin/bash
# Runs a small GPT model (12 layers, hidden size 768)
export HSA_FORCE_FINE_GRAIN_PCIE=1
export OMP_NUM_THREADS=1
export NCCL_P2P_LEVEL=SYS
export NCCL_ALGO=Ring
export NCCL_NCHANNELS_PER_PEER=16
export NCCL_MIN_NCHANNELS=20
export NCCL_IB_TIMEOUT=22
export CUDA_DEVICE_MAX_CONNECTIONS=1
export NCCL_NET_GDR_LEVEL=SYS
export NCCL_NET_GDR_READ=0
CHECKPOINT_PATH=./tmp #$1 #<Specify path>
TENSORBOARD_LOGS_PATH=./tmp #$2 #<Specify path>
DATA_PATH="/datasets/oscar-1GB-gpt_text_document" #<Specify path and file prefix>_text_document
VOCAB_PATH=./gpt2-vocab.json
MERGE_PATH=./gpt2-merges.txt
GPT_MODEL_ARGS=(
--num-layers 12
--hidden-size 768
--num-attention-heads 12
--ffn-hidden-size 3072
--seq-length 1024
--max-position-embeddings 1024
)
# export NVTE_FLASH_ATTN=1 # use the cutlass flash-attention path
# export NVTE_FLASH_ATTN_TRITON=1 # use the Triton flash-attention path
# --transformer-impl transformer_engine
# --use-mcore-models
TRAINING_ARGS=(
--transformer-impl local
--use-legacy-models
--micro-batch-size 1
--global-batch-size 60 #240 #512 #64
--train-iters 100
--weight-decay 0.1
--adam-beta1 0.9
--adam-beta2 0.95
--init-method-std 0.006
--clip-grad 1.0
--bf16
--use-distributed-optimizer
--ckpt-format torch
--disable-bias-linear
--overlap-grad-reduce
--attention-dropout 0
--hidden-dropout 0
--ddp-average-in-collective
--recompute-granularity full
--recompute-num-layers 5
--recompute-method block
--no-gradient-accumulation-fusion
--swiglu
--lr 3.0e-5
--lr-decay-style cosine
--min-lr 3.0e-6
--lr-warmup-iters 1
)
MODEL_PARALLEL_ARGS=(
--sequence-parallel
--tensor-model-parallel-size 2
--pipeline-model-parallel-size 2
)
DATA_ARGS=(
--data-path $DATA_PATH
--split 949,50,1
--untie-embeddings-and-output-weights
--use-rotary-position-embeddings
--normalization RMSNorm
--no-position-embedding
--vocab-file $VOCAB_PATH
--merge-file $MERGE_PATH
--tokenizer-type GPT2BPETokenizer
)
EVAL_AND_LOGGING_ARGS=(
--log-interval 1
--save-interval 10000
--eval-interval 1000
--save $CHECKPOINT_PATH
--load $CHECKPOINT_PATH
--eval-iters 10
--tensorboard-dir $TENSORBOARD_LOGS_PATH
)
RANK=$OMPI_COMM_WORLD_RANK
LOCAL_RANK=$OMPI_COMM_WORLD_LOCAL_RANK
WORLD_SIZE=$OMPI_COMM_WORLD_SIZE
DIST_URL=${1}
DIST_PORT=34566
DISTRIBUTED_ARGS=(
--rank ${RANK}
--world-size ${WORLD_SIZE}
--local-rank ${LOCAL_RANK}
--dist-url tcp://${DIST_URL}:${DIST_PORT}
)
APP="python -u pretrain_gpt.py \
${GPT_MODEL_ARGS[@]} \
${TRAINING_ARGS[@]} \
${MODEL_PARALLEL_ARGS[@]} \
${DATA_ARGS[@]} \
${EVAL_AND_LOGGING_ARGS[@]} \
${DISTRIBUTED_ARGS[@]} \
"
case ${LOCAL_RANK} in
[0])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
[1])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
[2])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
[3])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
[4])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
[5])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
[6])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
[7])
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
esac
#!/bin/bash
set -eux
#export FLASH_ATTENTION_PRINT_PARAM=1
# Runs the "7B" parameter model
export HSA_FORCE_FINE_GRAIN_PCIE=1
export OMP_NUM_THREADS=1
export NCCL_P2P_LEVEL=PXB # SYS
#export HIP_ALLOC_INITIALIZE=0
#export GPU_MAX_HW_QUEUES=20
export NCCL_ALGO=Ring
export NCCL_NCHANNELS_PER_PEER=16
export NCCL_MIN_NCHANNELS=20
export NCCL_IB_TIMEOUT=22
export CUDA_DEVICE_MAX_CONNECTIONS=1
export NCCL_IB_HCA=mlx5_1,mlx5_2
export NCCL_NET_GDR_LEVEL=7
export NCCL_NET_GDR_READ=1
export GLOG_minloglevel=3 # only print error-level NCCL logs
source /opt/dtk/env.sh
# Load the hipblaslt library
# export LD_LIBRARY_PATH=/data/hipblaslt-install-0904/lib:$LD_LIBRARY_PATH
# Use an updated rocBLAS build
# export LD_LIBRARY_PATH=/data/rocblas-install_qwen1211/lib:$LD_LIBRARY_PATH
# export LD_LIBRARY_PATH=/data/rocblas-install_qwen1228/lib:$LD_LIBRARY_PATH
# export LD_LIBRARY_PATH=/data/rocblas-install-0118-bf16/lib:$LD_LIBRARY_PATH
# Torch: collapse multi-stream execution into a single stream
export ALLREDUCE_STREAM_WITH_COMPUTE=1
export SENDRECV_STREAM_WITH_COMPUTE=1
# Add synchronization during profiler capture to avoid stalls
# export GPU_FLUSH_ON_EXECUTION=1
# export HIP_DIRECT_DISPATCH=0
# Capture rocBLAS GEMM sizes
# export ROCBLAS_LAYER=3
# Capture flash-attention sizes
# export FLASH_ATTENTION_PRINT_PARAM=1
# Increase the compile cache size
export cache_size_limit=64
CHECKPOINT_PATH=./tmp_7b #$1 #<Specify path>
TENSORBOARD_LOGS_PATH=./tmp_7b #$2 #<Specify path>
DATA_PATH="/data/datasets/nemo_pretrain/oscar-1GB/oscar-1GB-llama_text_document" #<Specify path and file prefix>_text_document
GPT_MODEL_ARGS=(
--num-layers 32
--hidden-size 4096
--ffn-hidden-size 11008
--num-attention-heads 32
--max-position-embeddings 4096
--normalization RMSNorm
--position-embedding-type rope
--untie-embeddings-and-output-weights # handle embedding and output weights separately for more flexibility
)
# export NVTE_FLASH_ATTN=1 # use the cutlass flash-attention path
export NVTE_FLASH_ATTN_TRITON=1 # use the Triton flash-attention path
# --transformer-impl transformer_engine # use these two flags for the MCore path
# --use-mcore-models
# --transformer-impl local # use these two flags for the legacy path
# --use-legacy-models
TRAINING_ARGS=(
--transformer-impl local # use these two flags for the legacy path
--use-legacy-models
--micro-batch-size 1
--global-batch-size 60 #240 #60 #512 #64
--train-iters 10
--weight-decay 0.1
--adam-beta1 0.9
--adam-beta2 0.95
--init-method-std 0.006
--clip-grad 1.0
--bf16
# --fp16 # enabling fp16 requires specifying loss-scale
# --loss-scale 1024
--use-distributed-optimizer
--disable-bias-linear
--attention-dropout 0
--hidden-dropout 0
--no-gradient-accumulation-fusion
--swiglu
--lr 3.0e-5
--lr-decay-style cosine
--min-lr 3.0e-6
--lr-warmup-iters 1
--ckpt-format torch
--ddp-average-in-collective # average gradients/params directly in the DP collective instead of summing to one device first and then averaging
# --recompute-granularity full # enable activation recompute: lower memory at the cost of extra time
# --recompute-num-layers 5 #0 #
# --recompute-method block
--overlap-grad-reduce # overlap the DDP grad reduce
# --tp-comm-overlap # overlap tensor-parallel comm with GEMM
# --tp-comm-overlap-rs-dgrad # overlap reduce-scatter with the dgrad GEMM
--use-flash-attn-triton
)
# --use-flash-attn-cutlass # cutlass fa
# --use-flash-attn-triton # triton fa
MODEL_PARALLEL_ARGS=(
--sequence-parallel
--tensor-model-parallel-size 1
--pipeline-model-parallel-size 2
)
DATA_ARGS=(
--data-path $DATA_PATH
--seq-length 4096 #4096
--split 949,50,1
--tokenizer-type Llama2Tokenizer
--tokenizer-model /data/model_weights/llama2_7b_hf/tokenizer.model
)
EVAL_AND_LOGGING_ARGS=(
--log-interval 1
--log-throughput
--save-interval 1000
--eval-interval 1000
--save $CHECKPOINT_PATH
--load $CHECKPOINT_PATH
--eval-iters 10
--tensorboard-dir $TENSORBOARD_LOGS_PATH
)
PROFILE_ARGS=(
--profile
--profile-step-start 4
--profile-step-end 5
--use-pytorch-profiler
--profile-ranks 0 1 2 3 4 5 6 7
--profile-dir prof_data
)
RANK=$OMPI_COMM_WORLD_RANK
LOCAL_RANK=$OMPI_COMM_WORLD_LOCAL_RANK
WORLD_SIZE=$OMPI_COMM_WORLD_SIZE
DIST_URL=${1}
DIST_PORT=34567
DISTRIBUTED_ARGS=(
--rank ${RANK}
--world-size ${WORLD_SIZE}
--local-rank ${LOCAL_RANK}
--dist-url tcp://${DIST_URL}:${DIST_PORT}
)
APP="python -u pretrain_gpt.py \
${GPT_MODEL_ARGS[@]} \
${TRAINING_ARGS[@]} \
${MODEL_PARALLEL_ARGS[@]} \
${DATA_ARGS[@]} \
${EVAL_AND_LOGGING_ARGS[@]} \
${DISTRIBUTED_ARGS[@]} \
"
# To enable profiling, add:
# ${PROFILE_ARGS[@]} \
# export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 # # 4,5,6,7 #,
# export CUDA_VISIBLE_DEVICES=4,5,6,7 # 0,1,2,3,
# ${APP}
# Bind each rank to a NUMA node with numactl
case ${LOCAL_RANK} in
[0])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# hipprof --hip-trace --trace-off numactl --cpunodebind=0 --membind=0 ${APP}
numactl --cpunodebind=0 --membind=0 ${APP}
;;
[1])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# hipprof --hip-trace --trace-off numactl --cpunodebind=0 --membind=0 ${APP}
numactl --cpunodebind=1 --membind=1 ${APP}
;;
[2])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
# hipprof --hip-trace --trace-off numactl --cpunodebind=0 --membind=0 ${APP}
numactl --cpunodebind=2 --membind=2 ${APP}
;;
[3])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
numactl --cpunodebind=3 --membind=3 ${APP}
# hipprof --hip-trace --trace-off numactl --cpunodebind=0 --membind=0 ${APP}
;;
[4])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
numactl --cpunodebind=4 --membind=4 ${APP}
# hipprof --hip-trace --trace-off numactl --cpunodebind=0 --membind=0 ${APP}
;;
[5])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
numactl --cpunodebind=5 --membind=5 ${APP}
# hipprof --hip-trace --trace-off numactl --cpunodebind=0 --membind=0 ${APP}
;;
[6])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
numactl --cpunodebind=6 --membind=6 ${APP}
# hipprof --hip-trace --trace-off numactl --cpunodebind=0 --membind=0 ${APP}
;;
[7])
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
numactl --cpunodebind=7 --membind=7 ${APP}
# hipprof --hip-trace --trace-off numactl --cpunodebind=0 --membind=0 ${APP}
;;
esac
@@ -16,6 +16,8 @@
# Change log
2025.3.14: adapted to the latest code; the shell launch scripts are under the corresponding model directories in examples
2024.12.16: added torch profiler support
Usage: add the following arguments to the launch script to capture the corresponding profiling data
@@ -23,9 +25,6 @@
```bash
# capture a torch profiler trace
mpirun -np 8 --allow-run-as-root train_mixtral_8x7B_1nodes.sh localhost --profiling=torch
# capture a hipprof trace
mpirun -np 8 --allow-run-as-root train_mixtral_8x7B_1nodes.sh localhost --profiling=hip
```
```bash
@@ -38,14 +37,6 @@ TORCH_PROFIE_ARGS=(
--profile-ranks 0 3 # profile global ranks 0 and 3
--profile-dir ./prof_data # directory where profiler output files are saved
)
HIP_PROFIE_ARGS=(
--profile
--profile-ranks 0 1 2 3 4 5 6 7
--profile-step-start 4
--profile-step-end 5
--use-hip-profiler
)
```
# MCore Custom Fully Sharded Data Parallel (FSDP)
## How to use?
Add these flags to enable MCore custom FSDP.
```bash
--use-custom-fsdp
--data-parallel-sharding-strategy optim_grads_params
--no-gradient-accumulation-fusion
--use-distributed-optimizer
```
## Key Features
- **Sharding Strategy**: Efficiently shards optimizer states, gradients, and parameters to reduce memory consumption.
- **Communication and Computation Overlap**: Optimized to enable concurrent execution of communication and computation, enhancing overall efficiency.
- **Supports automatic mixed precision training**: Compatible with BF16 O1/O2/O3 recipes, as well as FP8 compute with FP32 parameters and FP8 parameter training, allowing for flexible precision configurations.
- **Tensor Parallelism (TP), Expert Parallelism (EP) and Context Parallelism (CP)**: Compatible with TP, EP and CP configurations, enabling efficient scaling of large language models.
- **Distributed Model Initialization with Meta Device**: Allows model initialization using meta device, followed by layer-by-layer initialization of distributed model weight buffers via the `Module.reset_parameters` API, facilitating the initialization of extremely large models.
## Configuration Recommendations
### 1. Disable `CUDA_MAX_CONNECTIONS`
To ensure FSDP communication fully overlaps with computation, unset the `CUDA_MAX_CONNECTIONS` environment variable. This avoids potential bubbles in the CUDA streams, although it may slow down TP and CP to some extent.
```bash
unset CUDA_MAX_CONNECTIONS
```
### 2. Add `--calculate-per-token-loss`
When using the gradient-sharding modes, include the `--calculate-per-token-loss` flag in your training script. This improves performance by reducing the frequency of gradient scaling, which is otherwise a sizable drain on SM resources.
## Design of Custom FSDP
### 1. Overview
The custom Fully Sharded Data Parallelism (FSDP) implementation in Megatron-Core is specifically designed to optimize memory consumption and performance for large language models. The core design principles include:
- **Optimized for Large Language Models**: This custom FSDP implementation is tailored to efficiently scale with models containing billions of parameters, ensuring seamless execution and training of massive models.
- **Efficient Memory Consumption**: By strategically sharding optimizer states, gradients, and model parameters, the custom FSDP significantly reduces memory usage. This approach enables the training of models that would otherwise be too large to fit in memory.
- **Efficient Workflow & Overlapping Communication and Computation**: The implementation is engineered to minimize the number of communication steps required during training. It maximizes the overlap between communication and computation, thereby enhancing overall training efficiency and reducing latency.
- **Support for MCore's Efficient Training Methods**: The custom FSDP seamlessly integrates with Megatron-Core's advanced parallelism techniques, including tensor parallelism, expert parallelism and context parallelism. Additionally, it supports automatic mixed precision training, further optimizing training performance and efficiency.
The design of Custom FSDP draws inspiration from PyTorch FSDP [Zhao, Yanli, et al.](https://arxiv.org/pdf/2304.11277) and MCore's distributed optimizer. The introduction to PyTorch FSDP is referenced here to clarify the underlying concepts of the custom FSDP design.
> In DistributedDataParallel (DDP) training, each process/worker owns a replica of the model and processes a batch of data; finally, it uses all-reduce to sum up gradients over different workers. In DDP the model weights and optimizer states are replicated across all workers. FSDP is a type of data parallelism that shards model parameters, optimizer states and gradients across DDP ranks.
> When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes the training of some very large models feasible by allowing larger models or batch sizes to fit on device. This comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations like overlapping communication and computation.
![FSDP workflow](../images/custom_fsdp/FSDP_workflow.png)
*Notice that the unit processed in the workflow here is the “FSDP instance 1: N layers”, where an FSDP instance is the smallest FSDP processing unit (also a PyTorch module). This means that we can safely release this module's weights after using it (executing the forward or backward of this module), and there will be no other computations relying on these weights. This capability is the foundation of FSDP's layer-by-layer execution and memory-saving strategy. An FSDP instance is also referred to as an **FSDP Unit**.*
*It is worth noting that an FSDP instance can correspond to multiple FSDP parameter groups. These groups are separated by Data Parallel (DP) communication groups and the data type of the parameter or gradient. Consequently, an FSDP instance may require several parameter-gather tasks before execution (forward or backward). Each **FSDP parameter group** corresponds to one **Data Parallel Buffer** in custom FSDP.*
At a high level, FSDP works as follows:

In the constructor
- Shard the model parameters; each rank only keeps its own shard

In the forward path
- Run all_gather to collect all shards from all ranks to recover the full parameters for this FSDP unit
- Run the forward computation
- Discard the parameter shards it has just collected

In the backward path
- Run all_gather to collect all shards from all ranks to recover the full parameters for this FSDP unit
- Run the backward computation
- Run reduce_scatter to sync gradients
- Discard the parameters
One way to view FSDP’s sharding is to decompose the DDP gradient all-reduce into reduce-scatter and all-gather. Specifically, during the backward pass, FSDP reduces and scatters gradients, ensuring that each rank possesses a shard of the gradients. Then it updates the corresponding shard of the parameters in the optimizer step. Finally, in the subsequent forward pass, it performs an all-gather operation to collect and combine the updated parameter shards.
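To make the decomposition concrete, here is a minimal sketch using raw `torch.distributed` collectives. It is illustrative only and not MCore's code path; `unshard` and `reduce_and_shard_grad` are hypothetical helper names, and it assumes the process group is already initialized and the flattened parameter divides evenly across ranks.

```python
import torch
import torch.distributed as dist


def unshard(param_shard: torch.Tensor, world_size: int) -> torch.Tensor:
    """All-gather the local shard from every rank to recover the full flat parameter."""
    full_param = torch.empty(
        param_shard.numel() * world_size, dtype=param_shard.dtype, device=param_shard.device
    )
    dist.all_gather_into_tensor(full_param, param_shard)
    return full_param


def reduce_and_shard_grad(full_grad: torch.Tensor, world_size: int) -> torch.Tensor:
    """Reduce-scatter the full gradient so each rank keeps only its own shard."""
    grad_shard = torch.empty(
        full_grad.numel() // world_size, dtype=full_grad.dtype, device=full_grad.device
    )
    dist.reduce_scatter_tensor(grad_shard, full_grad, op=dist.ReduceOp.SUM)
    return grad_shard / world_size  # average, matching what DDP's all-reduce would produce
```

In this view, `unshard` runs before the forward and backward of each FSDP unit, `reduce_and_shard_grad` runs after its backward, and `full_param`/`full_grad` are the temporaries that get discarded in between.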
![FSDP Allreduce](../images/custom_fsdp/FSDP_Allreduce.png)
### 2. Custom FSDP underlying data structure
To implement the FSDP functionality described above, the custom FSDP is designed with the following Python classes and data structure:
![MCore Custom FSDP Class Diagram](../images/custom_fsdp/MCore_Custom_FSDP_Class_Diagram.png)
### 3. The custom FSDP interface: FullyShardedDataParallel
The custom FSDP exposes `FullyShardedDataParallel` (FSDP), which provides the same programming interface as PyTorch's DistributedDataParallel (DDP). For example, you can apply FSDP to models as follows:
```python
# Initialize model and optimizer
ddp_config.use_custom_fsdp = True
ddp_config.data_parallel_sharding_strategy = "optim_grads_params"
model = GPTModel(transformer_config)
model = FullyShardedDataParallel(
    transformer_config,
    model,
    ddp_config,
    fsdp_unit_modules=[TransformerLayer, LanguageModelEmbedding],
)
optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
optimizer = DistributedOptimizer(optimizer, [model], [model.param_and_grad_buffer])

# Training loop
def train_step(inputs, labels):
    optimizer.zero_grad()
    for mbs_input, mbs_label in zip(inputs, labels):
        outputs = model(mbs_input)
        loss = loss_fn(outputs, mbs_label)
        loss.backward()
    optimizer.step()

# Save and load model and optimizer state dict
def model_and_optimizer_state_dict():
    state_dict = {
        "model": model.sharded_state_dict(),
        "optimizer": optimizer.sharded_state_dict(),
    }
    return state_dict

def load_model_and_optimizer_state_dict(state_dict):
    model.load_state_dict(state_dict["model"])
    optimizer.load_state_dict(state_dict["optimizer"])
```
**Key Notes:**
- You can configure which modules should be treated as FSDP units via the `fsdp_unit_modules` argument. This configuration is mandatory.
- The custom FSDP must be used with the distributed optimizer, since the distributed optimizer provides distributed checkpointing.
- The data-parallel communication group for parameters is not explicitly shown. Custom FSDP configures these groups as either DP (data-parallel) or EDP (expert data-parallel) based on parameter markings.
#### 3.1 Initializing Models on the Meta Device
For training particularly large models with FSDP, you can initialize the model on the meta device. Using PyTorch's `reset_parameters` API, you can initialize model weights layer by layer during the construction of the `ParamAndGradBuffer`. Most PyTorch native modules and TransformerEngine modules support this API (e.g., [PyTorch Linear](https://github.com/pytorch/pytorch/blob/v2.6.0/torch/nn/modules/linear.py#L114), [TE LayerNormLinear](https://github.com/NVIDIA/TransformerEngine/blob/release_v2.0/transformer_engine/pytorch/module/layernorm_linear.py#L1107)).
```python
# Initialize model on meta device
with torch.device("meta"):
    model = GPTModel(config)

model = FullyShardedDataParallel(
    transformer_config,
    model,
    ddp_config,
    fsdp_unit_modules=[TransformerLayer, LanguageModelEmbedding],
)
```
**Important Considerations:**
1. *Custom Modules*: If your model contains custom modules, ensure they implement the `reset_parameters` API. Otherwise, you may need to force parameter initialization on a CUDA or CPU device.
2. *Tensor Initialization*: Be cautious of tensors created during model initialization without a specified device—they will default to the meta device. To avoid issues, explicitly specify the device for these tensors to ensure compatibility with this function.
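As an illustration of the first point, a custom module only needs a `reset_parameters` method that can re-initialize its parameters in place once they are materialized on a real device. The module below is a made-up example for illustration, not part of MCore:

```python
import torch
from torch import nn


class ScaledLinear(nn.Module):
    """Hypothetical custom module that supports meta-device construction."""

    def __init__(self, in_features: int, out_features: int, device=None):
        super().__init__()
        # With device="meta", no real memory is allocated here.
        self.weight = nn.Parameter(torch.empty(out_features, in_features, device=device))
        self.scale = nn.Parameter(torch.empty(1, device=device))

    def reset_parameters(self) -> None:
        # Called later, layer by layer, once the parameters live on a real device.
        nn.init.kaiming_uniform_(self.weight)
        nn.init.ones_(self.scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.weight) * self.scale
```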
### 4. Interaction between Custom FSDP and Model Forward/Backward Propagation
Custom FSDP implements Fully Sharded Data Parallelism (FSDP) through a series of module hooks, gradient hooks, or by adding functions between modules. This involves inserting communications and manipulating parameters and gradients during PyTorch's module forward or backward propagation.
Module hooks summary:
- Module pre-forward hook (`module.register_forward_pre_hook`): This hook unshards the model weights before the forward pass. In the case of an FSDP unit module, it also attaches a RegisterFSDPBackwardFunction function that releases the module's weights during backward propagation.
- Module post-forward hook (`module.register_forward_hook`): This hook reshards the model weights after the forward pass.
- Root module pre-backward hook (`root_module.register_full_backward_pre_hook`): This hook checks that all model parameters are resharded, in order to avoid unnecessary memory spikes. It also marks all modules as being in the `TrainingState.PRE_BACKWARD` state.
- Module pre-backward hook (`module.register_full_backward_pre_hook`): This hook unshards the model weights before the backward pass.
- Gradient accumulation hook (`grad_acc.register_hook`): This hook accumulates gradients and triggers the gradient reduction pipeline.
The gradient reduction pipeline maintains a map of gradients to FSDP parameter groups. If all gradients in an FSDP parameter group are ready, it launches a gradient reduction. Note that this assumes that the model's gradients are always generated in a certain order (reverse of `module.parameters()`), as otherwise, FSDP would maintain too many parameter group grad buffers, leading to excessive memory usage.
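The sketch below shows, in simplified form, how the forward hooks alone can drive the unshard/reshard cycle around a module; the real implementation adds the backward hooks, state tracking, and gradient pipeline described above, and the `unshard`/`reshard` callables here are placeholders supplied by the caller:

```python
import torch
from torch import nn


def attach_fsdp_forward_hooks(module: nn.Module, unshard, reshard):
    """Attach pre/post-forward hooks; `unshard` and `reshard` are caller-supplied callables."""

    def pre_forward(mod, args):
        unshard(mod)   # gather the full weights right before this module computes
        return None    # returning None keeps the inputs unchanged

    def post_forward(mod, args, output):
        reshard(mod)   # drop the full weights again, keeping only the local shard
        return output

    module.register_forward_pre_hook(pre_forward)
    module.register_forward_hook(post_forward)
```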
#### 4.1 Optimized for Activation Recompute
With activation recompute, the same module executes its forward function again and then its backward function during the backward pass, which would unshard and reshard the model weights twice. If we can tell the program that this is a combined forward + backward operation, we only need to unshard once and reshard once.
To make this determination, we track the module's `training_state`: `FORWARD`, `PRE_BACKWARD`, `POST_BACKWARD`, or `IDLE`. Note that the pre-backward hook acts before the pre-forward hook, so we let the pre-backward hook unshard the model weights and then mark the module as `PRE_BACKWARD`; when the pre-forward hook sees this marking, it skips the unshard. Similarly, to avoid the duplicate reshard, the post-forward hook acts before the post-backward function, and checking for the `PRE_BACKWARD` flag in the post-forward hook cancels the reshard.
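A minimal sketch of that bookkeeping, using the state names mentioned above (the actual tracking lives inside the custom FSDP classes and is more involved):

```python
from enum import Enum, auto


class TrainingState(Enum):
    IDLE = auto()
    FORWARD = auto()
    PRE_BACKWARD = auto()
    POST_BACKWARD = auto()


def should_unshard_in_pre_forward(state: TrainingState) -> bool:
    # During activation recompute, the pre-backward hook has already unsharded the
    # weights and marked the module PRE_BACKWARD, so the recomputed forward skips it.
    return state is not TrainingState.PRE_BACKWARD


def should_reshard_in_post_forward(state: TrainingState) -> bool:
    # Likewise, the recomputed forward must not reshard: the weights are still needed
    # by the backward pass that immediately follows.
    return state is not TrainingState.PRE_BACKWARD
```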
### 5. Memory Mechanisms and Features of Custom FSDP
FSDP can fully distribute the model parameters, gradients, and optimizer states, and for mixed-precision training it can also fully distribute the high-precision main weights. This distributes nearly all of the memory except for the activation memory, but FSDP still faces some memory issues of its own.
FSDP frequently unshards and reshards model weights, which can lead to busy memory allocation and deallocation. This results in untimely tensor releases, causing memory spikes (or even out-of-memory errors), crashes of the PyTorch memory allocator cache, and a large number of `cudaMalloc` and `cudaFree` calls. These issues can significantly slow down the system.
The problem of untimely tensor release can generally be addressed using the `tensor._typed_storage()._resize_(0)` API, which immediately deallocates the storage's memory. Custom FSDP provides interfaces in `AllGatherPipeline` and `GradReducePipeline` to replace the temporary buffer memory allocator used for parameter gathering and gradient reduction with `StorageResizeBasedBucketAllocator`. This replaces the tensor release operation with the `tensor._typed_storage()._resize_(0)` API.
The PyTorch memory allocator cache crash is a complex issue that occurs frequently when the actual memory usage approaches the GPU memory limit, leading to poor performance. This problem is challenging and can only be mitigated by avoiding frequent hits on the GPU memory limit. Using a self-managed memory allocator like `RotaryBucketAllocator` is another potential solution. However, note that `RotaryBucketAllocator` is not yet mature.
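For reference, the `_resize_(0)` trick mentioned above looks like this in isolation. It relies on a private PyTorch storage API, so treat it as an illustration of the mechanism rather than a supported interface; the buffer size here is arbitrary.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
buf = torch.empty(4 * 1024 * 1024, device=device)  # a reusable communication buffer
numel = buf.numel()

# Release the backing memory immediately while keeping the tensor object alive.
buf._typed_storage()._resize_(0)

# ... later, grow the storage back to its original size before reusing the buffer.
buf._typed_storage()._resize_(numel)
```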
## References
- [Getting Started with Fully Sharded Data Parallel (FSDP)](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html)
API Guide
=========
.. toctree::
:maxdepth: 4
models
tensor_parallel
context_parallel
pipeline_parallel
fusions
transformer
moe
dist_checkpointing
dist_optimizer
distributed
datasets
num_microbatches_calculator
optimizer_param_scheduler
API Guide
=========
.. toctree::
:maxdepth: 4
models
tensor_parallel
context_parallel
pipeline_parallel
custom_fsdp
fusions
transformer
moe
dist_checkpointing
dist_optimizer
distributed
datasets
multi_latent_attention
num_microbatches_calculator
optimizer_param_scheduler
optimizer_cpu_offload
encoder_decoder_parallelism
Multi-Latent Attention
======================
Multi-Latent Attention overview
-------------------------------
Multi-Latent Attention ("MLA") is an innovative attention mechanism introduced by the DeepSeek team that enhances the efficiency of attention computation by leveraging multiple latent spaces. This approach is particularly beneficial for large language models (LLMs), as it reduces the computational burden associated with traditional attention mechanisms. According to the DeepSeek-V2 technical report, MLA achieves better performance than Multi-Head Attention (MHA) while requiring a smaller KV cache.
Enabling Multi-Latent Attention
-------------------------------
To enable MLA in Megatron-LM, set the following flags on the command line:
- `--multi-latent-attention` to enable MLA in the attention layer.
- Set `MLATransformerConfig` to configure MLA.
Optimizer CPU offload package
==============================
.. mdinclude:: ../../../megatron/core/optimizer/cpu_offloading/README.md
#! /bin/bash
# Change for multinode config
GPUS_PER_NODE=16
MASTER_ADDR=localhost
MASTER_PORT=$(($RANDOM + 1024))
NNODES=1
NODE_RANK=0
WORLD_SIZE=$(($GPUS_PER_NODE*$NNODES))
# input
DATA_PATH=$1
SHARE_DATA=$PWD # current work dir
FINETUNED_PATH="$SHARE_DATA/$2"
lr=$3
bs=$4
iter=$5
CHECKPOINT_PATH=$6
# vocab
VOCAB_FILE=gpt2-vocab.json # Your gpt-2 vocab
MERGE_FILE=gpt2-merges.txt # Your gpt-2 merge file
# tensorboard
TENSORBOARD_DIR="$SHARE_DATA/tensorboard/$2"
mkdir -p ${TENSORBOARD_DIR}
DISTRIBUTED_ARGS="--nproc_per_node $GPUS_PER_NODE --nnodes $NNODES --node_rank $NODE_RANK --master_addr $MASTER_ADDR --master_port $MASTER_PORT"
python -m torch.distributed.run $DISTRIBUTED_ARGS \
examples/detxoify_lm/finetune_gpt.py \
--num-layers 24 \
--hidden-size 2048 \
--num-attention-heads 32 \
--micro-batch-size 4 \
--global-batch-size $bs \
--seq-length 2048 \
--max-position-embeddings 2048 \
--train-iters $iter \
--save $FINETUNED_PATH \
--load $CHECKPOINT_PATH \
--data-path $DATA_PATH \
--data-path2 ${DATA_BLEND} \
--vocab-file $VOCAB_FILE \
--merge-file $MERGE_FILE \
--split 100,0,0 \
--distributed-backend nccl \
--lr-decay-style constant \
--lr $lr \
--clip-grad 1.0 \
--weight-decay 0.1 \
--adam-beta1 0.9 \
--adam-beta2 0.95 \
--checkpoint-activations \
--log-interval 1 \
--save-interval 78 \
--eval-interval 78 \
--eval-iters 50 \
--fp16 \
--DDP-impl local \
--finetune --no-load-optim \
--log-validation-ppl-to-tensorboard \
--tensorboard-dir ${TENSORBOARD_DIR}
File mode changed from 100644 to 100755