repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black
        files: ^megatron/core/.*
        args: ["--skip-magic-trailing-comma"]
  - repo: https://github.com/pycqa/pylint
    rev: v3.2.6
    hooks:
      - id: pylint
        files: ^megatron/core/.*
  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort
        files: ^megatron/core/.*
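With this configuration at the repository root, the hooks can be exercised locally before committing. A minimal sketch, assuming the `pre-commit` package is installed from PyPI:

```bash
# Install the pre-commit framework and register it as a git hook
pip install pre-commit
pre-commit install

# Run all configured hooks (black, pylint, isort) against the whole tree
pre-commit run --all-files
```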
[MAIN]
ignore-paths=tests
max-line-length=100
[MESSAGES CONTROL]
disable=all
enable=C0115,C0116,W0611,C0301,E0606
# C0115: missing-class-docstring
# C0116: missing-function-docstring
# W0611: unused-import
# C0301: line-too-long
# E0606: possibly-used-before-assignment
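To check a change against exactly this configuration outside of CI, pylint can also be invoked directly. A sketch, assuming pylint v3.2.6 (the rev pinned in the pre-commit config above):

```bash
# Lint the core package with the repository's .pylintrc
pip install "pylint==3.2.6"
pylint --rcfile=.pylintrc megatron/core/
```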
# Changelog
## NVIDIA Megatron Core 0.13.0
* Features
  * Inference
    * Add async support for DynamicInferenceEngine ([MR !3187](https://github.com/NVIDIA/Megatron-LM/commit/05079d55a5bfcc7a43f4619e36a40a9e8db3f882))
    * Pad input tensors and enable FP8 weights for FP8 inference ([MR !3341](https://github.com/NVIDIA/Megatron-LM/commit/6a6cd478839d90cf09a837adf8c79cbc844bc920))
    * Force inference to always gather logits with tensor parallelism ([MR !3442](https://github.com/NVIDIA/Megatron-LM/commit/7c9cdcb794089968278c7272e0261a68edf5d369))
    * Multi batch size CUDA Graphs for Dynamic Inference ([MR !3402](https://github.com/NVIDIA/Megatron-LM/commit/30aabe5e3133c6d70aa55aaabad4ea8cb04ce63c))
  * Post-training
    * ModelOpt updates ([MR !3268](https://github.com/NVIDIA/Megatron-LM/commit/550ed5243c3a18e39430c15e8918ee63e41d7eaf))
      * Add speculative decoding AR validation feature
      * Add DeepSeek and Qwen model configs
  * Performance
    * ModelCommProcessGroup integration ([MR !3391](https://github.com/NVIDIA/Megatron-LM/commit/26adc2dfde53fbc2b063e2fdd1d9ed26578811a6))
    * Add HyperCommGrid: N-Dimensional Communication Grid for Model Parallelism ([MR !3398](https://github.com/NVIDIA/Megatron-LM/commit/45400df7da7fa23e3aff86804e5ac254d9a8d3c0))
      * Flexible creation and management of communication groups
    * Add support for Spike No More embedding initializations and weight decay skipping ([MR !3500](https://github.com/NVIDIA/Megatron-LM/commit/ee74aa66a06b24e511270f285db475941ef63bfd))
  * Model support
    * Add MiMo video VLM train example (MR !3543)
    * Add AVLM for MIMO (MR !3624)
  * Ease of use
    * Add uv support for source installs ([MR !3615](https://github.com/NVIDIA/Megatron-LM/commit/164204cd7216e642bdef7299c569d95f02f9a79e))
    * Automated weekly prereleases ([MR !3574](https://github.com/NVIDIA/Megatron-LM/commit/7e59266c70ef34a246438640af690b55c7ecac28))
* Bug fixes
  * Use mscale_all_dim for softmax_factor ([MR !2800](https://github.com/NVIDIA/Megatron-LM/commit/e96a358f60c82b8ac8d965d91c3cc4ad0230a4e0))
  * Fix FP8 param blockwise scaling unit test ([MR !3480](https://github.com/NVIDIA/Megatron-LM/commit/57082f946a04c3390fcfc43634dc546ec3ded033))
  * Fix unit test blockwise scaling ([MR !3491](https://github.com/NVIDIA/Megatron-LM/commit/6d95fe63658f967e56a3fda88a9c30a424fcb520))
  * Optimize prefill for token-less requests ([MR !3499](https://github.com/NVIDIA/Megatron-LM/commit/daaa650a9ac4291d4027ca2fdeb4298ce024efd2))
  * Add default values for Fp8Padding and Fp8Unpadding ([MR !3501](https://github.com/NVIDIA/Megatron-LM/commit/42b2b1d10a9cb699b7e5aa40f6bfba9c2a1348aa))
  * Fix CUDA graph logic for flexible pp layout ([MR !3505](https://github.com/NVIDIA/Megatron-LM/commit/020d85e50ddf0f0282802002acb3662129a519c5))
  * Load FP8 models with strict=False ([MR !3508](https://github.com/NVIDIA/Megatron-LM/commit/1ab876ddc4c1893c76f26d775226a8d1dcdfb3d2))
  * Skip rope check for torch < 1.4.0 ([MR !3528](https://github.com/NVIDIA/Megatron-LM/commit/d8180ef8ed0bb6f305dcdedf1b27d91304f361a3))
  * Disable Apex tests for stability ([MR !3539](https://github.com/NVIDIA/Megatron-LM/commit/d1256277fe378add0a2cfd7251f5a350b6d126ec))
  * Fix typo in parallel_state expert parallelism ([MR !3548](https://github.com/NVIDIA/Megatron-LM/commit/5783ff32af759b8102cf0cb0bb82b30c48b9da26))
  * Guard modelopt on macOS ([MR !3549](https://github.com/NVIDIA/Megatron-LM/commit/76144fe1106e4fb0e69aa75b7a6ab66e71e8f37f))
  * Retry on CUDA function failure ([MR !3554](https://github.com/NVIDIA/Megatron-LM/commit/809aab68307a64c1386d68cc78ef70f8f4e12a80))
  * Fix NCCL mem pool creation error ([MR !3557](https://github.com/NVIDIA/Megatron-LM/commit/b61e21153146a563309b5d44cb5d7f7425806072))
  * Fix get_rotary_seq_len return type ([MR !3559](https://github.com/NVIDIA/Megatron-LM/commit/1fa6bc83c7aeae95abc8e86ff0aac596985a01c3))
  * Retry on CUDA function failure ([MR !3560](https://github.com/NVIDIA/Megatron-LM/commit/7da88d74865c3f1a59894173246f26e7b3bf91b9))
  * Fix NCCL allocator attribute error ([MR !3565](https://github.com/NVIDIA/Megatron-LM/commit/6b656114795d74c3353cb007c59af49b1752f447))
  * Ensure multi-prompt inference works ([MR !3568](https://github.com/NVIDIA/Megatron-LM/commit/0fae48931000c9c7af06f7dcf037b5b7d96e0cd6))
  * Fix MD5 on FIPS systems ([MR !3577](https://github.com/NVIDIA/Megatron-LM/commit/83ee8c2848a3b1d42b40086a64da11e19f4b191f))
  * Fix dynamic context and inference bugs ([MR !3582](https://github.com/NVIDIA/Megatron-LM/commit/e9c1da60a1ccc85376666d58568ed1d3e5a4f9db))
  * Fix TE version for interleaved fused RoPE ([MR !3586](https://github.com/NVIDIA/Megatron-LM/commit/b72b6cc161f5273b545bca09677382917cf20492))
  * Fix MTP with MoE and TP logging ([MR !3594](https://github.com/NVIDIA/Megatron-LM/commit/9af96623b66693e058f6bfce8d0094dc976792d8))
  * Guard TE import fix ([MR !3596](https://github.com/NVIDIA/Megatron-LM/commit/1bf946b1ec3f11e71459c7c0d06a97edbed96a1a))
  * Add assertion for NCCL UB case ([MR !3599](https://github.com/NVIDIA/Megatron-LM/commit/e11d28592f19c122859be764b7afe7c208d9acc1))
  * Remove Encoder PP related functions ([MR !3604](https://github.com/NVIDIA/Megatron-LM/commit/9e49aa4446a58cc21c4dc0c5d0806551ad075ca7))
  * Fix segfaults in tests ([MR !3605](https://github.com/NVIDIA/Megatron-LM/commit/f6492fe8164fd5b9ad55007d435ccfc66cb98cc7))
  * Fix TE error in distributed optimizer ([MR !3625](https://github.com/NVIDIA/Megatron-LM/commit/e6c510ff3c1159f8955589b26f7c395bdf0607d9))
  * Remove redundant barrier in checkpoint flow ([MR !3626](https://github.com/NVIDIA/Megatron-LM/commit/26869feb6a3ac7f5616cb7253c37a4244d107d70))
  * Support VPP MTP, fix logging ([MR !3630](https://github.com/NVIDIA/Megatron-LM/commit/c351a473c7eedac2c43eab0815afb9759f4f8187))
  * Retry mechanism for free(): invalid pointer errors ([MR !3632](https://github.com/NVIDIA/Megatron-LM/commit/ec35b41b2df145a7ccb84afc48d94e0786e094da))
  * Fix test_replication.py issues ([MR !3633](https://github.com/NVIDIA/Megatron-LM/commit/f7b50b271b2e0e396069e02551b21aa6fb374b43))
  * Fix typo in parallel_state ([MR !3634](https://github.com/NVIDIA/Megatron-LM/commit/3c79a2c330290df58804c33e28e7c197fcc1f0b9))
  * Fix CUDA graph logic determination ([MR !3635](https://github.com/NVIDIA/Megatron-LM/commit/90efa3ef8a3c4f9e0f1db9f67ab9348bfa501387))
  * Fix TE installation error ([MR !3636](https://github.com/NVIDIA/Megatron-LM/commit/7e7322c01c9cb8ec254ecd9042700b22b70fe5c8))
  * Ensure correct sharding type in local tests ([MR !3643](https://github.com/NVIDIA/Megatron-LM/commit/946357f8dd7fdc12424b3a66bc999e6c0a02696c))
  * Fix cudagraphed backward buffer reuse for last layer ([MR !3645](https://github.com/NVIDIA/Megatron-LM/commit/ee61cf450d24760952e8995aab045ab6d55b986e))
  * Set default for packed_seq_params in get_rotary_seq_len ([MR !3651](https://github.com/NVIDIA/Megatron-LM/commit/510d58c46664f44c556005ac928c5c531e12f761))
  * Fix dynamic example script errors ([MR !3653](https://github.com/NVIDIA/Megatron-LM/commit/72e290bf1f4bbf0c8047bb10a51da6ea6372e163))
  * Guard TE import fix ([MR !3666](https://github.com/NVIDIA/Megatron-LM/commit/ac198fc0d60a8c748597e01ca4c6887d3a7bcf3d))
* Known issues
## NVIDIA Megatron Core 0.12.0
* Add FP8 recipe selection to arguments (`--fp8-recipe`, `--first-last-layers-bf16`, `--num-layers-at-start-in-bf16`, `--num-layers-at-end-in-bf16`)
* Context parallel: fix loss scaling when `calculate_per_token_loss=True`
* Make the number of data parallel communication buckets configurable (`--ddp-num-buckets`, `--ddp-pad-buckets-for-high-nccl-busbw`)
* Inference
  * Support in-flight batching and chunked KV cache
  * Reduce memory usage
    * by not materializing the full attention mask
    * by only materializing logits for the last token during decode
    * by removing an obsolete tensor reference
* Hybrid Model
  * Inference
    * Add CUDA graph support
    * Change tools/run_mamba_text_generation_server.py to use megatron.core.inference
    * Fix a shape issue when materializing logits for the Mamba model
  * Improve initialization of Mamba layers
  * Add configuration switches (`--mamba-state-dim`, `--mamba-head-dim`, `--mamba-num-groups`, `--is-hybrid-model`)
  * Make num_floating_point_operations work with hybrid models
  * Make hybrid_conversion.py work with a mixer that uses TE linear
  * Add FP8 support
  * Fix Mamba dt_bias tensor parallelism
* Support multimodal tokenizer
* Improve data parallelism scaling
* MoE
  * Features:
    * DeepEP support, compatible with all the parallelisms and token drop / dropless
    * Important precision improvement: enable FP32/FP64 routing and unpermutation using `--moe-router-dtype`. FP32 is recommended for all fine-grained MoE training
    * CUDA graph support for MoE
    * Multi-Token Prediction (MTP) support
    * Fused indices_to_multihot kernel for the DeepEP dispatcher
  * Bug fixes:
    * Fix hang issue with MoE + Dense hybrid models
    * Update theoretical memory and TFLOPS estimation for MoE and MLA
    * Fix MoE aux loss scaling for per-token loss
    * Fixes for group limited routing and expert bias; verified through DeepSeek-V3 end-to-end runs
  * Known issues:
    * A checkpoint trained with custom FSDP for MoE may not be compatible with 3D parallel training.
## NVIDIA Megatron Core 0.11.0
* Add multi-datacenter training support through N/S connection
* MoE
  * Features
    * Support DeepSeek-V3 fine-tuning
      * Aux-loss-free load balancing strategy
      * Node-limited routing and device-limited routing support
      * Tensor parallelism support for MLA and sequence auxiliary loss
      * MTP (with TP and PP support) is coming soon
    * Permutation / unpermutation fusion kernel from TransformerEngine
    * Uneven virtual pipeline parallel split support in the first and last PP stages
  * Bug fixes:
    * Fix the grad scale when TP != expert-TP and `average_in_collective` is enabled in DDP
    * Fix TEGroupedMLP distckpt compatibility issue with FP8 padding/unpadding
  * Known issues:
    * When training a Dense+MoE hybrid model, the process will hang if any PP rank does not have expert params
* Add MX-FP16 support for optimizer and master weights
* CUDA graph memory optimizations
* Enable UCC backend for PP communication
* Optimizer CPU offload support for memory savings
* Models
  * Initial RADIO/CRADIO implementation
  * Llama 3.2 support
* Hybrid Model
  * Support quantization via TensorRT Model Optimizer
## NVIDIA Megatron Core 0.10.0
* Adding MLA to MCore
* Enable FP8 for GroupedMLP
* MoE Parallel Folding
* Enhance MoE Architecture: Support MoE Layer Frequency Patterns and Configurable MoE FFN Hidden Size
* Multimodal: NVLM training and evaluation support in MCore
* Mamba Hybrid
* Increase performance and reduce memory footprint of Triton language/compiler distributed caching
* Add more unit testing and fix bugs
## NVIDIA Megatron Core 0.9.0
* Uneven pipeline parallelism
  * Enable pipeline parallelism where the first and last ranks have fewer transformer layers than the intermediate ranks
* Per-layer CUDA graph support for GPT training with Transformer Engine modules
* Enable different TP sizes for the vision encoder
* Enable pipeline parallelism for T5 and LLaVA models
* Support multi-tile multi-image input in LLaVA models
* MoE
  * FP8 support
  * Runtime upcycling support
  * Dispatcher implementation optimizations
  * Shared expert support with overlapping optimizations
  * Qwen model support
* Known issues
  * When using sequence parallelism, dropout does not use the appropriate RNG context during the transformer block forward pass.
* NVRx / fault tolerance
  * Fault and hang detection in addition to existing straggler detection
  * Graceful exit and auto restart
## NVIDIA Megatron Core 0.8.0
* Multimodal
  * Added initial support for training vision language models using the LLaVA architecture
  * Added initial support for inference with multimodal inputs
  * An end-to-end multimodal example from data collection to training to evaluation is provided in examples/multimodal
* MoE
  * Context parallel support
  * Distributed checkpoint support for grouped GEMM
* Mamba
## NVIDIA Megatron Core 0.7.0
* MoE
  * Token drop support
  * Several efficiency optimizations
  * Improved model parallelism
  * Memory optimizations
* Distributed checkpointing
  * Enabled for Retro
  * Asynchronous checkpoint saving
* Several minor bug fixes, speed improvements, and memory optimizations
## NVIDIA Megatron Core 0.6.0
* MoE (Mixture of Experts)
  * Performance optimization
    * Communication optimization for multi-GPU and single-GPU setups
    * 23% improvement (323 TFLOPS/GPU) over MCore 0.5.0 on Mixtral with Hopper BF16
    * GroupedMLP enhancement for Hopper
    * DP overlapping: support overlapping computation with gradient reduction and parameter gathering
  * All-to-all based token dispatcher
  * Layer-wise logging for load balancing loss
  * Improved expert parallel support, including distributed optimizer
* Distributed optimizer
* RETRO
  * Data processing
* BERT
  * Distributed checkpointing
* Dist checkpointing
  * PyTorch native distributed backend
  * Improved saving/loading speed
* TensorRT-LLM export
  * Integration with TensorRT Model Optimizer post-training quantization (PTQ)
  * Text generation driver to perform PTQ in Megatron-LM
  * Llama2 and Nemotron3-8B examples using the TensorRT-LLM unified build API to build engines after training
* Several minor enhancements, bug fixes, and documentation updates
## NVIDIA Megatron Core 0.5.0
### Key Features and Enhancements
Megatron core documentation is now [live!](https://docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start)
### Model Features
* MoE (Mixture of Experts)
  * Support for Z-loss, load balancing, and Sinkhorn
  * Layer and communications refactor
  * Richer parallelism mappings: EP can be combined with other model parallel techniques for larger MoE variants, e.g. EP + TP + DP + SP + PP
  * Token dropless architecture with Top-K routing
  * Performance optimization with GroupedGEMM when the number of local experts is > 1
  * Distributed checkpointing
* Interleaved rotary embedding
### Datasets
* Masked WordPiece datasets for BERT and T5
* Raw and mock datasets
### Parallelism
### Performance
* Activation offloading to CPU
* Rope and Swiglu fusion
* Sliding window attention (via Transformer Engine)
### General Improvements
* Timers
## NVIDIA Megatron Core 0.4.0
### Key Features and Enhancements
#### Models
* BERT
* RETRO
* T5
#### Parallelism
* Mixture of Experts support for GPT
* Model parallel efficient Distributed Data Parallel (DDP)
* Context Parallel (2D Tensor Parallel) support
#### Datasets
* GPT Dataset
* Blended Dataset
# Core
[Core-ADLR] @mcore-reviewers/core-adlr
megatron/core/
[Core-NeMo] @mcore-reviewers/core-nemo
megatron/core/
^[Core-MLPerf] @mcore-reviewers/mlperf
megatron/core/
# Models
[BERT] @mcore-reviewers/bert
megatron/core/models/bert/
[GPT] @mcore-reviewers/gpt
megatron/core/models/gpt/
[Retro] @mcore-reviewers/retro
megatron/core/models/retro/
[Multimodal] @mcore-reviewers/multi-modal
megatron/core/models/multimodal/
[T5] @mcore-reviewers/t5
megatron/core/models/t5/
[Hybrid-mamba] @mcore-reviewers/hybrid-mamba
megatron/core/models/mamba/
# Distributed Checkpointing
[Distributed Checkpointing] @mcore-reviewers/dist-checkpointing
megatron/core/dist_checkpointing/
# Distributed Optimizer
[Distributed Optimizer] @mcore-reviewers/dist-optimizer
megatron/core/optimizer/distrib_optimizer/
# Quantization and Inference (QAT)
[Quantization and Inference (QAT)] @mcore-reviewers/quantization-and-inference
megatron/core/inference/modelopt_support
# Datasets
[Datasets] @mcore-reviewers/datasets
megatron/core/datasets/
# Parallelism
[Pipeline Parallelism] @mcore-reviewers/pipeline-parallelism
megatron/core/pipeline_parallel/
# Transformer
[Transformer] @mcore-reviewers/transformer
megatron/core/transformer/
[MoE-ADLR] @mcore-reviewers/moe-adlr
megatron/core/transformer/moe/
[MoE-Moe] @mcore-reviewers/moe-moe
megatron/core/transformer/moe/
# Inference
[Inference] @mcore-reviewers/inference
megatron/core/inference/
# Parallel State
[ParallelState] @mcore-reviewers/parallelstate
megatron/core/parallel_state.py
[CI][1] @mcore-reviewers/ci
.gitlab/
.github/
.gitlab-ci.yml
Dockerfile.ci.lts
Dockerfile.ci.dev
tests/
megatron/core/transformer/transformer_block.py
megatron/core/transformer/transformer_layer.py
# Contributing to Megatron-LM
This document outlines the processes and policies for issues and pull requests by non-NVIDIA contributors to the Megatron-LM github repository.
Everyone is welcome to contribute to the project, but development of Megatron-LM continues internally at NVIDIA. When contributing, it is important to ensure that changes are in line with the project direction. Small changes to fix bugs are welcomed and appreciated. If you are proposing large architectural changes or changes for stylistic reasons, open an issue first so we can discuss them.
PRs will first be pulled into NVIDIA's internal Megatron-LM repo and then pushed back out to the open GitHub repo with proper credit given to the committers.
## Issue policy
Please do file any bugs you find, keeping the following in mind:
- If filing a bug, i.e. you have found something that doesn't work as expected, use the BUG template.
- If you've found a regression in speed or accuracy use the REGRESSION template.
- If you are requesting a new feature or modification of an existing feature use the ENHANCEMENT template.
- If opening an issue to ask a question, no template is needed, but please make your question as clear and concise as possible.
- One issue per bug. Putting multiple things in the same issue makes both discussion and completion unnecessarily complicated.
- Your bug is most likely to get quick attention from the development team if we can easily reproduce it.
- Use proper spelling, grammar, and punctuation.
- Write in an authoritative and technical tone.
## Code submission policy
Here are some dos & don'ts to try and stick to:
### Do:
- Format new code in a style that is consistent with the file being changed. Megatron-LM doesn't (yet) have a style guide or enforced formatting.
- Split your changes into separate, atomic commits, i.e., one commit per feature or fix.
- Make sure your commits are rebased on the master branch (see the sketch after this list).
- Write the commit message subject line in the imperative mood ("Change the default argument for X", not "Changed the default argument for X").
- Write your commit messages in proper English, with care and punctuation.
- Check the spelling of your code, comments and commit messages.
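A minimal rebase workflow consistent with the points above; the `upstream` remote name is an assumption, not a project convention:

```bash
# Point an "upstream" remote at the main repository (one-time setup)
git remote add upstream https://github.com/NVIDIA/Megatron-LM.git

# Replay your branch on top of the latest upstream master
git fetch upstream
git rebase upstream/master

# Optionally squash fixup commits into atomic ones before opening the PR
git rebase -i upstream/master
```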
### Don't:
- Submit code that's incompatible with the project licence.
- Touch anything outside the stated scope of the PR. This includes formatting changes to code not relevant to the PR.
- Iterate excessively on your design across multiple commits.
- Include commented-out code.
- Attempt large architectural changes without first opening an issue to discuss.
## Issue and Pull Request Q&A (Updated Jul 2023)
### I've submitted an issue and PR. When can I expect to get some feedback?
Megatron-LM is developed and maintained by a small team of researchers. We will endeavour to read and acknowledge all new issues and PRs within a week. A few rules of thumb:
- Reproducible bugs/regressions and bug/regression fixes are likely to get the attention of maintainers the quickest.
- Issues requesting an enhancement may only receive acknowledgement that they've been read, and may be closed with a "wontfix" label if they're not in line with the project direction. If they are acknowledged and remain open, you can assume the maintainers agree they're a desirable feature.
- Support requests, i.e. requests for help running the code, have the lowest priority and will be responded to as maintainer time permits.
### If my issue or PR isn't getting attention, how long should I wait before pinging one of the project maintainers?
One week if there is no acknowledgement of the initial request.
### Who are the project maintainers I should ping?
The corresponding maintainers at this time are @jaredcasper and @jon-barker.
### Is there a policy for issues and PRs that haven't been touched in X days? Should they be closed?
Yes, starting in July 2023 we have a bot that will mark untouched PRs as "stale" after 60 days.
We have a long backlog of issues and PRs dating back 3.5 years. We are trying to triage these now by working backwards. Older issues that we believe may still be relevant may receive a request to re-test them with the latest code. If there's no response, they may be closed. Again, if you believe they should be re-opened, just respond with a comment to that effect.
Thank you!
The following applies to all files unless otherwise noted:
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--
This repository also contains code from Hugging Face Inc., Google Research,
Facebook (from their Fairseq, Dino, and ParlAI projects), Microsoft (from their
Swin-Transformer project), Philip Popien, the Mamba project (Tri Dao and
Albert Gu), and the Triton language and compiler project (Philippe Tillet and
OpenAI). Files from these organizations have notices at the top of each file.
Below are licenses used in those files, as indicated.
--------------------------------------------------------------------------------------
-- LICENSE FOR Facebook, huggingface, Google Research, LLaVA, Mamba, and vLLM code --
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--------------------------------------------------------------------------------
LICENSE FOR
Facebook, Inc. and its affiliates,
Meta Platforms, Inc. and its affiliates,
Microsoft Corporation,
OpenGVLab/InternVL,
Triton language and compiler,
and DeepSeek.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
include megatron/core/requirements.txt
include megatron/core/README.md
include megatron/core/package_info.py
recursive-include requirements *
<div align="center">
Megatron-LM & Megatron-Core
===========================
<h4>GPU optimized techniques for training transformer models at-scale</h4>
[![Documentation](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](https://docs.nvidia.com/megatron-core/developer-guide/latest/index.html)
[![version](https://img.shields.io/badge/release-0.5.0-green)](./setup.py)
[![license](https://img.shields.io/badge/license-OpenBSD-blue)](./LICENSE)
<div align="left">
# Latest News
- **[2024/7]** Megatron-Core v0.7 improves scalability and training resiliency and adds support for multimodal training ([blog](https://developer.nvidia.com/blog/train-generative-ai-models-more-efficiently-with-new-nvidia-megatron-core-functionalities/)).
- **[2024/6]** Megatron-Core added support for Mamba-based models. Check out our paper [An Empirical Study of Mamba-based Language Models](https://arxiv.org/pdf/2406.07887) and [code example](https://github.com/NVIDIA/Megatron-LM/tree/ssm/examples/mamba).
- **[2024/1 Announcement]** NVIDIA has released the core capabilities in **Megatron-LM** into [**Megatron-Core**](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core) in this repository. Megatron-Core expands upon Megatron-LM's GPU-optimized techniques with more cutting-edge innovations on system-level optimizations, featuring composable and modular APIs. Explore the [Megatron-Core intro](#megatron-core) for more details.
# Table of Contents
- [Megatron-LM \& Megatron-Core](#megatron-lm--megatron-core)
- [Latest News](#latest-news)
- [Table of Contents](#table-of-contents)
- [Megatron Overview](#megatron-overview)
- [Megatron-LM](#megatron-lm)
- [Megatron-Core](#megatron-core)
- [Training Speed and Scalability](#training-speed-and-scalability)
- [Setup](#setup)
- [Docker (Recommended)](#docker-recommended)
- [Installation Options](#installation-options)
- [Install from PyPI](#install-from-pypi)
- [Install from Source](#install-from-source)
- [Prerequisites](#prerequisites)
- [Downloading Checkpoints](#downloading-checkpoints)
- [Usage](#usage)
- [Training](#training)
- [Data Preprocessing](#data-preprocessing)
- [BERT Pretraining](#bert-pretraining)
- [GPT Pretraining](#gpt-pretraining)
- [T5 Pretraining](#t5-pretraining)
- [Distributed Pretraining](#distributed-pretraining)
- [Activation Checkpointing and Recomputation](#activation-checkpointing-and-recomputation)
- [Distributed Optimizer](#distributed-optimizer)
- [FlashAttention](#flashattention)
- [GPT-3 Example](#gpt-3-example)
- [Retro and InstructRetro](#retro-and-instructretro)
- [Mamba-based Language Models](#mamba-based-language-models)
- [Mixture of Experts](#mixture-of-experts)
- [Evaluation and Tasks](#evaluation-and-tasks)
- [GPT Text Generation](#gpt-text-generation)
- [Detoxify GPT via Self-generation](#detoxify-gpt-via-self-generation)
- [GPT Evaluation](#gpt-evaluation)
- [WikiText Perplexity Evaluation](#wikitext-perplexity-evaluation)
- [LAMBADA Cloze Accuracy](#lambada-cloze-accuracy)
- [BERT Task Evaluation](#bert-task-evaluation)
- [RACE Evaluation](#race-evaluation)
- [MNLI Evaluation](#mnli-evaluation)
- [Llama-2 Inference and Finetuning](#llama-2-inference-and-finetuning)
- [Model Optimization and Deployment](#model-optimization-and-deployment)
- [Quantization and TensorRT-LLM Deployment](#quantization-and-tensorrt-llm-deployment)
- [Datasets](#datasets)
- [Collecting Wikipedia Training Data](#collecting-wikipedia-training-data)
- [Collecting GPT Webtext Data](#collecting-gpt-webtext-data)
- [Reproducibility](#reproducibility)
- [Checkpoint conversion](#checkpoint-conversion)
- [Model class conversion](#model-class-conversion)
- [Checkpoint format conversion](#checkpoint-format-conversion)
- [Projects Using Megatron](#projects-using-megatron)
# Megatron Overview
This repository comprises two essential components: **Megatron-LM** and **Megatron-Core**. Megatron-LM serves as a research-oriented framework leveraging Megatron-Core for large language model (LLM) training. Megatron-Core, on the other hand, is a library of GPU optimized training techniques that comes with formal product support including versioned APIs and regular releases. You can use Megatron-Core alongside Megatron-LM or [Nvidia NeMo Framework](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/mcore_customization.html) for an end-to-end and cloud-native solution. Alternatively, you can integrate Megatron-Core's building blocks into your preferred training framework.
## Megatron-LM
First introduced in 2019, Megatron ([1](https://arxiv.org/pdf/1909.08053.pdf), [2](https://arxiv.org/pdf/2104.04473.pdf), and [3](https://arxiv.org/pdf/2205.05198)) sparked a wave of innovation in the AI community, enabling researchers and developers to utilize the underpinnings of this library to further LLM advancements. Today, many of the most popular LLM developer frameworks have been inspired by and built directly leveraging the open-source Megatron-LM library, spurring a wave of foundation models and AI startups. Some of the most popular LLM frameworks built on top of Megatron-LM include [Colossal-AI](https://github.com/hpcaitech/ColossalAI), [HuggingFace Accelerate](https://github.com/huggingface/accelerate), and [NVIDIA NeMo Framework](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/). A list of projects that have directly used Megatron can be found [here](#projects-using-megatron).
## Megatron-Core
Megatron-Core is an open-source PyTorch-based library that contains GPU-optimized techniques and cutting-edge system-level optimizations. It abstracts them into composable and modular APIs, allowing full flexibility for developers and model researchers to train custom transformers at-scale on NVIDIA accelerated computing infrastructure. This library is compatible with all NVIDIA Tensor Core GPUs, including FP8 acceleration support for [NVIDIA Hopper architectures](https://www.nvidia.com/en-us/data-center/technologies/hopper-architecture/).
Megatron-Core offers core building blocks such as attention mechanisms, transformer blocks and layers, normalization layers, and embedding techniques. Additional functionality like activation recomputation, distributed checkpointing is also natively built-in to the library. The building blocks and functionality are all GPU optimized, and can be built with advanced parallelization strategies for optimal training speed and stability on NVIDIA Accelerated Computing Infrastructure. Another key component of the Megatron-Core library includes advanced model parallelism techniques (tensor, sequence, pipeline, context, and MoE expert parallelism).
Megatron-Core can be used with [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/products/nemo/), an enterprise-grade AI platform. Alternatively, you can explore Megatron-Core with the native PyTorch training loop [here](https://github.com/NVIDIA/Megatron-LM/tree/main/examples). Visit [Megatron-Core documentation](https://docs.nvidia.com/megatron-core/developer-guide/latest/index.html) to learn more.
# Training Speed and Scalability
Our codebase is capable of efficiently training large language models (i.e., models with hundreds of billions of parameters) with both model and data parallelism. To demonstrate how our software scales with multiple GPUs and model sizes, we consider GPT models ranging from 2 billion parameters to 462 billion parameters. All models use a vocabulary size of 131,072 and a sequence length of 4096. We vary hidden size, number of attention heads, and number of layers to arrive at a specific model size. As the model size increases, we also modestly increase batch size. Our experiments use up to 6144 [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs. We perform fine-grained overlapping of data-parallel (`--overlap-grad-reduce --overlap-param-gather`), tensor-parallel (`--tp-comm-overlap`) and pipeline-parallel communication (enabled by default) with computation to improve scalability. The reported throughputs are measured for end-to-end training and include all operations including data loading, optimizer steps, communication, and even logging. Note that we did not train these models to convergence.
![Model table](images/model_table.png)
Our weak scaled results show superlinear scaling (MFU increases from 41% for the smallest model considered to 47-48% for the largest models); this is because larger GEMMs have higher arithmetic intensity and are consequently more efficient to execute.
![Weak scaling](images/weak_scaling.png)
We also strong scaled the standard GPT-3 model (our version has slightly more than 175 billion parameters due to larger vocabulary size) from 96 H100 GPUs to 4608 GPUs, using the same batch size of 1152 sequences throughout. Communication becomes more exposed at larger scale, leading to a reduction in MFU from 47% to 42%.
![Strong scaling](images/strong_scaling.png)
# Setup
## Prerequisites
Megatron-LM and Megatron-Core require the following upstream dependencies for best performance:
- PyTorch (latest stable version)
- CUDA, cuDNN, NCCL (latest stable versions)
- Support for FP8 on NVIDIA Hopper, Ada, and Blackwell GPUs
- For best performance, use NVIDIA Turing GPU architecture generations and later
### Docker (Recommended)
We strongly recommend using the previous release of [PyTorch NGC Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) rather than the latest one. Our releases are always based on the previous month's NGC container, so this ensures compatibility and stability. This container comes with all dependencies pre-installed with compatible versions and optimized configurations for NVIDIA GPUs.
```bash
# Run container with mounted directories
docker run --runtime=nvidia --gpus all -it --rm \
-v /path/to/megatron:/workspace/megatron \
-v /path/to/dataset:/workspace/dataset \
-v /path/to/checkpoints:/workspace/checkpoints \
nvcr.io/nvidia/pytorch:25.04-py3
```
## Installation Options
Megatron-Core offers support for two NGC PyTorch environments: A moving head that supports the most recent
upstream dependencies is referred to as `dev` in the following, and a long-term support of NGC PyTorch 24.01
is referred to as `lts`.
Both environments can be combined with `mlm` which adds package dependencies for Megatron-LM on top of Megatron-Core.
### Install from PyPI
Megatron-Core is available on PyPI. It ships with most of its dependencies and can be installed on hosts with an active CUDA driver. Run the following command to fetch and install it with pip:
```bash
# Install the latest release
pip install megatron-core[dev]
```
```bash
# Install packages for LTS support NGC PyTorch 24.01
pip install megatron-core[lts]
```
For a version of Megatron-Core with minimal dependencies (only torch), run:
```bash
pip install megatron-core
```
For dependencies required by Megatron-LM, please run:
```bash
pip install megatron-core[mlm]
```
### Install from Source
For Hybrid models, Megatron-Core requires [mamba](https://github.com/state-spaces/mamba). If the pre-built wheel on PyPI does not fit your environment, you can fall back to the install script Megatron-Core uses in its CI system. For this, please install `uv` first:
```bash
export UV_VERSION=0.7.2
export PATH="$HOME/.local/bin:$PATH"
curl -LsSf https://astral.sh/uv/${UV_VERSION}/install.sh | sh
export UV_PROJECT_ENVIRONMENT=./venv
export PATH="$UV_PROJECT_ENVIRONMENT/bin:$PATH"
export UV_LINK_MODE=copy
```
Run the following command to build upstream dependencies from source:
```bash
# Clone the repository
git clone https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM
# Optionally checkout a specific release
git checkout core_r0.13.0rc0
bash docker/common/install.sh --environment {dev,lts}
```
## Downloading Checkpoints
We have provided pretrained [BERT-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m) and [GPT-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m) checkpoints to evaluate or for finetuning downstream tasks. To access these checkpoints, first [sign up](https://ngc.nvidia.com/signup) for and [setup](https://ngc.nvidia.com/setup/installers/cli) the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1).
Alternatively, you can directly download the checkpoints using:
<pre>
BERT-345M-uncased: wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O megatron_bert_345m_v0.1_uncased.zip
BERT-345M-cased: wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O megatron_bert_345m_v0.1_cased.zip
GPT-345M: wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_lm_345m_v0.0.zip
</pre>
The models require vocabulary files to run. The BERT WordPiece vocab file can be extracted from Google's pretrained BERT models: [uncased](https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt), [cased](https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt). The GPT [vocab file](https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json) and [merge table](https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt) can be downloaded directly.
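For convenience, the same files can be fetched from the command line; a sketch using the URLs above:
<pre>
# BERT WordPiece vocabularies (uncased and cased)
wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt
wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt

# GPT-2 BPE vocabulary and merge table
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
</pre>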
# Usage
After installation, there are several possible workflows. The most comprehensive is:
1. Data preprocessing
2. Pretraining
3. Finetuning (Optional for zero-shot tasks)
4. Downstream task evaluation or text generation
However, steps 1 and 2 can be replaced by using one of the pretrained models mentioned above.
We've provided several scripts for pretraining both BERT and GPT in the [`examples`](./examples) directory, as well as scripts for both zero-shot and fine-tuned downstream tasks including MNLI, RACE, WikiText103, and LAMBADA evaluation. There is also a script for GPT interactive text generation.
# Training
## Data Preprocessing
The training data requires preprocessing. First, place your training data in a loose json format, with one json containing a text sample per line. For example:
<pre>
{"src": "www.nvidia.com", "text": "The quick brown fox", "type": "Eng", "id": "0", "title": "First Part"}
{"src": "The Internet", "text": "jumps over the lazy dog", "type": "Eng", "id": "42", "title": "Second Part"}
</pre>
The name of the `text` field of the json can be changed by using the `--json-key` flag in [`preprocess_data.py`](./tools/preprocess_data.py). The other metadata are optional and are not used in training.
The loose json is then processed into a binary format for training. To convert the json into mmap format use `preprocess_data.py`. An example script to prepare data for BERT training is:
<pre>
python tools/preprocess_data.py \
--input my-corpus.json \
--output-prefix my-bert \
--vocab-file bert-vocab.txt \
--tokenizer-type BertWordPieceLowerCase \
--split-sentences
</pre>
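If your corpus stores the sample text under a different key, only the `--json-key` flag changes; `content` below is a hypothetical key name:
<pre>
# Same preprocessing, but reading the sample text from a "content" field
python tools/preprocess_data.py \
    --input my-corpus.json \
    --json-key content \
    --output-prefix my-bert \
    --vocab-file bert-vocab.txt \
    --tokenizer-type BertWordPieceLowerCase \
    --split-sentences
</pre>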
The output will be two files named, in this case, `my-bert_text_sentence.bin` and `my-bert_text_sentence.idx`. The `--data-path` specified in later BERT training is the full path and new filename, but without the file extension.
For T5, use the same preprocessing as for BERT, perhaps renaming the output prefix:
<pre>
--output-prefix my-t5 \
</pre>
Some minor modifications are required for GPT data preprocessing, namely, the addition of a merge table, an end-of-document token, removal of sentence splitting, and a change to the tokenizer type:
<pre>
python tools/preprocess_data.py \
--input my-corpus.json \
--output-prefix my-gpt2 \
--vocab-file gpt2-vocab.json \
--tokenizer-type GPT2BPETokenizer \
--merge-file gpt2-merges.txt \
--append-eod
</pre>
Here the output files are named `my-gpt2_text_document.bin` and `my-gpt2_text_document.idx`. As before, in GPT training, use the longer name without the extension as `--data-path`.
Further command line arguments are described in the source file [`preprocess_data.py`](./tools/preprocess_data.py).
## BERT Pretraining
The [`examples/bert/train_bert_340m_distributed.sh`](examples/bert/train_bert_340m_distributed.sh) script runs single GPU 345M parameter BERT pretraining. Debugging is the primary use for single GPU training, as the code base and command line arguments are optimized for highly distributed training. Most of the arguments are fairly self-explanatory. By default, the learning rate decays linearly over the training iterations from `--lr` to a minimum set by `--min-lr` over `--lr-decay-iters` iterations. The fraction of training iterations used for warmup is set by `--lr-warmup-fraction`. While this is single GPU training, the batch size specified by `--micro-batch-size` is a single forward-backward pass batch size, and the code will perform gradient accumulation steps until it reaches `global-batch-size`, which is the batch size per iteration. The data is partitioned into a 949:50:1 ratio for training/validation/test sets (the code default is 969:30:1). This partitioning happens on the fly, but is consistent across runs with the same random seed (1234 by default, or specified manually with `--seed`). We use `--train-iters` as the number of training iterations requested. Alternatively, one can provide `--train-samples`, which is the total number of samples to train on. If this option is present, then instead of providing `--lr-decay-iters`, one will need to provide `--lr-decay-samples`.
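The interaction of these schedule and batch arguments can be made concrete; the values below are illustrative, not the script's defaults:

```bash
# Learning rate decays linearly from 1.0e-4 to 1.0e-5 over 990,000 iterations,
# with the first 1% of iterations used for warmup
--lr 1.0e-4 --min-lr 1.0e-5 --lr-decay-iters 990000 --lr-warmup-fraction 0.01

# Gradient accumulation: 4-sample forward-backward passes are accumulated
# until the 32-sample global batch is reached (8 accumulation steps)
--micro-batch-size 4 --global-batch-size 32

# On-the-fly 949:50:1 train/validation/test split, reproducible via the seed
--split 949,50,1 --seed 1234
```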
The logging, checkpoint-saving, and evaluation interval options are specified. Note that the `--data-path` now includes the additional `_text_sentence` suffix added in preprocessing, but does not include the file extensions.
Further command line arguments are described in the source file [`arguments.py`](./megatron/training/arguments.py).
To run `train_bert_340m_distributed.sh`, make any desired modifications including setting the environment variables for `CHECKPOINT_PATH`, `VOCAB_FILE`, and `DATA_PATH`. Make sure to set these variables to their paths in the container. Then launch the container with Megatron and necessary paths mounted (as explained in [Setup](#setup)) and run the example script.
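For example, with the container mounts from the [Setup](#setup) section (the exact paths are placeholders):

```bash
# Paths inside the container, matching the mounted host directories
export CHECKPOINT_PATH=/workspace/checkpoints
export VOCAB_FILE=/workspace/dataset/bert-vocab.txt
export DATA_PATH=/workspace/dataset/my-bert_text_sentence

bash examples/bert/train_bert_340m_distributed.sh
```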
## GPT Pretraining
The `examples/gpt3/train_gpt3_175b_distributed.sh` script runs single GPU 345M parameter GPT pretraining. As mentioned above, single GPU training is primarily intended for debugging purposes, as the code is optimized for distributed training.
It follows largely the same format as the previous BERT script with a few notable differences: the tokenization scheme used is BPE (which requires a merge table and a `json` vocabulary file) instead of WordPiece, the model architecture allows for longer sequences (note that the max position embedding must be greater than or equal to the maximum sequence length), and the `--lr-decay-style` has been set to cosine decay. Note that the `--data-path` now includes the additional `_text_document` suffix added in preprocessing, but does not include the file extensions.
Further command line arguments are described in the source file [`arguments.py`](./megatron/training/arguments.py).
`train_gpt3_175b_distributed.sh` can be launched the same way as described for BERT. Set the env vars and make any other modifications, launch the container with appropriate mounts, and run the script.
More details in [`examples/gpt3/README.md`](./examples/gpt3/README.md)
## T5 Pretraining
Very similar to BERT and GPT, the `examples/t5/train_t5_220m_distributed.sh` script runs single GPU "base" (~220M parameter) T5 pretraining. The primary difference from BERT and GPT is the addition of the following arguments to accommodate the T5 architecture:
- `--kv-channels` sets the inner dimension of the "key" and "value" matrices of all attention mechanisms in the model. For BERT and GPT this defaults to the hidden size divided by the number of attention heads, but can be configured for T5.
- `--ffn-hidden-size` sets the hidden size in the feed-forward networks within a transformer layer. For BERT and GPT this defaults to 4 times the transformer hidden size, but can be configured for T5.
- `--encoder-seq-length` and `--decoder-seq-length` set the sequence length for the encoder and decoder separately.
All of the other arguments remain as they were for BERT and GPT pretraining. Run this example with the same steps described above for the other scripts.
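Taken together, the T5-specific additions look like the following command fragment; the values shown are illustrative rather than the script's exact settings:

```bash
# T5-specific arguments on top of the usual BERT/GPT flags
--kv-channels 64 \
--ffn-hidden-size 3072 \
--encoder-seq-length 512 \
--decoder-seq-length 128
```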
More details in [`examples/t5/README.md`](./examples/t5/README.md)
## Distributed Pretraining
The `pretrain_{bert,gpt,t5}_distributed.sh` scripts use the PyTorch distributed launcher for distributed training. As such, multi-node training can be achieved by properly setting environment variables. See the official PyTorch [documentation](https://pytorch.org/docs/stable/elastic/run.html#launcher-api) for further description of these [environment variables](https://pytorch.org/docs/stable/distributed.html#environment-variable-initialization). By default, multi-node training uses the [nccl](https://developer.nvidia.com/nccl) distributed backend. A simple set of additional arguments and the use of the PyTorch distributed module with the `torchrun` elastic launcher (equivalent to `python -m torch.distributed.run`) are the only additional requirements to adopt distributed training. See any of `pretrain_{bert,gpt,t5}_distributed.sh` for more details.
We use two types of parallelism: data and model parallelism. Our data parallelism implementation is in `megatron/core/distributed`, and supports overlapping of the gradient reduction with the backward pass when the `--overlap-grad-reduce` command-line option is used.
Second, we developed a simple and efficient two-dimensional model-parallel approach. To use the first dimension, tensor model parallelism (splitting execution of a single transformer module over multiple GPUs, see Section 3 of [our paper](https://arxiv.org/pdf/1909.08053.pdf)), add the `--tensor-model-parallel-size` flag to specify the number of GPUs among which to split the model, along with the arguments passed to the distributed launcher as mentioned above. To use the second dimension, sequence parallelism, specify `--sequence-parallel`, which also requires tensor model parallelism to be enabled because it splits across the same GPUs (more details in Section 4.2.2 of [our paper](https://arxiv.org/pdf/2205.05198.pdf)).
To use pipeline model parallelism (sharding the transformer modules into stages with an equal number of transformer modules on each stage, and then pipelining execution by breaking the batch into smaller microbatches, see Section 2.2 of [our paper](https://arxiv.org/pdf/2104.04473.pdf)), use the `--pipeline-model-parallel-size` flag to specify the number of stages to split the model into (e.g., splitting a model with 24 transformer layers across 4 stages would mean each stage gets 6 transformer layers each).
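Combining the two dimensions, a sketch for a 16-GPU layout (sizes are illustrative):

```sh
# 4-way tensor parallelism x 2-way pipeline parallelism on 16 GPUs
# leaves a data-parallel size of 2, since 16 / (4 * 2) = 2.
torchrun --nproc_per_node=8 --nnodes=2 --node_rank=$NODE_RANK \
    --master_addr=$MASTER_ADDR --master_port=6000 \
    pretrain_gpt.py \
    --tensor-model-parallel-size 4 \
    --pipeline-model-parallel-size 2 \
    --sequence-parallel \
    $OTHER_ARGS  # the remaining model, data, and optimizer arguments
```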
We have examples of how to use these two different forms of model parallelism in the example scripts ending in `distributed_with_mp.sh`.
Other than these minor changes, the distributed training is identical to the training on a single GPU.
The interleaved pipelining schedule (more details in Section 2.2.2 of [our paper](https://arxiv.org/pdf/2104.04473.pdf)) can be enabled using the `--num-layers-per-virtual-pipeline-stage` argument, which controls the number of transformer layers in a virtual stage (by default with the non-interleaved schedule, each GPU will execute a single virtual stage with `NUM_LAYERS / PIPELINE_MP_SIZE` transformer layers). The total number of layers in the transformer model should be divisible by this argument value. Additionally, the number of microbatches in the pipeline (computed as `GLOBAL_BATCH_SIZE / (DATA_PARALLEL_SIZE * MICRO_BATCH_SIZE)`) should be divisible by the `PIPELINE_MP_SIZE` when using this schedule (this condition is checked in an assertion in the code). The interleaved schedule is not supported for pipelines with 2 stages (`PIPELINE_MP_SIZE=2`).
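As a worked example (values are illustrative): with 24 layers and `PIPELINE_MP_SIZE=4`, each GPU holds 6 layers; setting 2 layers per virtual stage splits those 6 layers into 3 interleaved virtual stages:

```sh
# 24 layers / 4 pipeline stages = 6 layers per GPU; 2 layers per virtual
# stage means each GPU runs 3 virtual stages of the interleaved schedule.
torchrun --nproc_per_node=4 pretrain_gpt.py \
    --num-layers 24 \
    --pipeline-model-parallel-size 4 \
    --num-layers-per-virtual-pipeline-stage 2 \
    $OTHER_ARGS  # batch sizes must yield a microbatch count divisible by 4
```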
## Activation Checkpointing and Recomputation
To reduce GPU memory usage when training a large model, we support various forms of activation checkpointing and recomputation. Instead of all activations being stored in memory to be used during backprop, as was traditionally the case in deep learning models, only activations at certain "checkpoints" in the model are retained (or stored) in memory, and the other activations are recomputed on-the-fly when needed for backprop. Note that this kind of checkpointing, *activation* checkpointing, is very different from the checkpointing of model parameters and optimizer state, which is mentioned elsewhere.
We support two levels of recompute granularity: `selective` and `full`. Selective recomputation is the default and is recommended in almost all cases. This mode retains in memory the activations that take less memory storage space and are more expensive to recompute and recomputes the activations that take more memory storage space but are relatively inexpensive to recompute. See [our paper](https://arxiv.org/pdf/2205.05198) for details. You should find that this mode maximizes performance while minimizing the memory required to store activations. To enable selective activation recompute simply use `--recompute-activations`.
For cases where memory is very limited, `full` recompute saves just the inputs to a transformer layer, or a group, or block, of transformer layers, and recomputes everything else. To enable full activation recompute use `--recompute-granularity full`. When using `full` activation recompute, there are two methods: `uniform` and `block`, chosen using the `--recompute-method` argument; an example invocation follows the list below.
- The `uniform` method uniformly divides the transformer layers into groups of layers (each group of size `--recompute-num-layers`) and stores the input activations of each group in memory. The baseline group size is 1 and, in this case, the input activation of each transformer layer is stored. When the GPU memory is insufficient, increasing the number of layers per group reduces the memory usage, enabling a bigger model to be trained. For example, when `--recompute-num-layers` is set to 4, only the input activation of each group of 4 transformer layers is stored.
- The `block` method recomputes the input activations of a specific number (given by `--recompute-num-layers`) of individual transformer layers per pipeline stage and stores the input activations of the remaining layers in the pipeline stage. Reducing `--recompute-num-layers` results in storing the input activations to more transformer layers, which reduces the activation recomputation required in the backprop, thus improving training performance while increasing memory usage. For example, when we specify 5 layers to recompute of 8 layers per pipeline stage, the input activations of only the first 5 transformer layers are recomputed in the backprop step while the input activations for the final 3 layers are stored. `--recompute-num-layers` can be incrementally increased until the amount of memory storage space required is just small enough to fit in the available memory, thereby both maximally utilizing memory and maximizing performance.
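For example, to use the `block` method described above to recompute 5 of the 8 layers per pipeline stage, the flags combine as follows (a sketch, not a tuned configuration):

```sh
# Selective recompute (the default recommendation):
RECOMPUTE_ARGS="--recompute-activations"

# Or, when memory is very tight, full recompute with the block method,
# recomputing the first 5 of the 8 layers in each pipeline stage:
RECOMPUTE_ARGS="--recompute-granularity full --recompute-method block --recompute-num-layers 5"
```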
## Distributed Optimizer
Usage: `--use-distributed-optimizer`. Compatible with all model and data types.
The distributed optimizer is a memory savings technique, whereby the optimizer state is evenly distributed across data parallel ranks (versus the traditional method of replicating the optimizer state across data parallel ranks). As described in [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054), our implementation distributes all optimizer state that does not overlap with the model state. For example, when using fp16 model params, the distributed optimizer maintains its own separate copy of fp32 main params & grads, which are distributed across DP ranks. When using bf16 model params, however, the distributed optimizer's fp32 main grads are the same as the model's fp32 grads, and so the grads in this case are not distributed (although the fp32 main params are still distributed, as they are separate from the bf16 model params).
Theoretical memory savings vary depending on the combination of the model's param dtype and grad dtype. In our implementation, the theoretical number of bytes per parameter is (where 'd' is the data parallel size):
| | Non-distributed optim | Distributed optim |
|-|-|-|
| fp16 param, fp16 grads | 20 | 4 + 16/d |
| bf16 param, fp32 grads | 18 | 6 + 12/d |
| fp32 param, fp32 grads | 16 | 8 + 8/d |
As with regular data parallelism, overlapping of the gradient reduction (in this case, a reduce-scatter) with the backward pass can be facilitated using the `--overlap-grad-reduce` flag. Additionally, the parameter all-gather can be overlapped with the forward pass using `--overlap-param-gather`.
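As a worked example from the table above: with fp16 params and grads and a data-parallel size of 8, optimizer memory drops from 20 to 4 + 16/8 = 6 bytes per parameter. A sketch of the combined flags:

```sh
# Shard optimizer state across data-parallel ranks and overlap the
# reduce-scatter and all-gather with the backward and forward passes.
OPTIMIZER_ARGS="--use-distributed-optimizer --overlap-grad-reduce --overlap-param-gather"
```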
## FlashAttention
Usage: `--use-flash-attn`. Supports attention head dimensions of at most 128.
[FlashAttention](https://github.com/HazyResearch/flash-attention) is a fast and
memory-efficient algorithm to compute exact attention. It speeds up model
training and reduces memory requirement.
To install FlashAttention:
```sh
pip install flash-attn
```
## GPT-3 Example
In `examples/gpt3/train_gpt3_175b_distributed.sh` we have provided an example of how to configure Megatron to train [GPT-3](https://arxiv.org/abs/2005.14165) with 175 billion parameters on 1024 GPUs. The script is designed for [slurm](https://slurm.schedmd.com/documentation.html) with the [pyxis](https://github.com/NVIDIA/pyxis) plugin but can be easily adapted to any other scheduler. It uses 8-way tensor parallelism and 16-way pipeline parallelism. With the options `global-batch-size 1536` and `rampup-batch-size 16 16 5859375`, training starts with a global batch size of 16 and linearly increases it to 1536 over 5,859,375 samples, in increments of 16. The training dataset can be either a single dataset or multiple datasets combined with a set of weights.
With the full global batch size of 1536 on 1024 A100 GPUs, each iteration takes around 32 seconds, resulting in 138 teraFLOP/s per GPU, which is 44% of the theoretical peak FLOPs.
## Retro and InstructRetro
Retro [(Borgeaud et al., 2022)](https://arxiv.org/abs/2112.04426) is an autoregressive decoder-only language model (LM) pretrained with retrieval-augmentation.
Retro features practical scalability to support large-scale pretraining from scratch by retrieving from trillions of tokens.
Pretraining with retrieval provides a more efficient storage mechanism for factual knowledge compared to storing it implicitly within the network's parameters, largely reducing the model parameter count while achieving lower perplexity than standard GPT.
Retro also provides the flexibility to update the
knowledge stored in LMs [(Wang et al., 2023a)](https://arxiv.org/abs/2304.06762)
by updating the retrieval database without training LMs again.
InstructRetro [(Wang et al., 2023b)](https://arxiv.org/abs/2310.07713) further scales up the size of Retro to 48B, featuring the largest LLM pretrained with retrieval (as of December 2023).
The obtained foundation model, Retro 48B, largely outperforms the GPT counterpart in terms of perplexity.
With instruction tuning on Retro, InstructRetro demonstrates significant improvement over the instruction tuned GPT on downstream tasks in the zero-shot setting. Specifically, the average improvement of InstructRetro is 7% over its GPT counterpart across 8 short-form QA tasks, and 10% over GPT across 4 challenging long-form QA tasks. We also find that one can ablate the encoder from InstructRetro architecture and directly use the InstructRetro decoder backbone as GPT, while achieving comparable results.
In this repo, we provide an end-to-end reproduction guide to implement Retro and InstructRetro, covering
- **Retrieval database construction**, which supports billions or even trillions of tokens as a large-scale retrieval database.
- **Pretraining with retrieval**, which supports pretraining from scratch and pretraining from a pretrained GPT model (Retro-fitting).
- **Instruction tuning**, where we provide an open-source instruction tuning dataset and the training recipe for instruction tuning on Retro.
- **Downstream task evaluation**, where we provide the text generation and evaluation scripts for zero-shot question answering tasks.
See [tools/retro/README.md](tools/retro/README.md) for a detailed overview.
## Mamba-based Language Models
See [examples/mamba](./examples/mamba) for details.
<!--
## REALM Pipeline
We are working on implementing the [REALM](https://arxiv.org/pdf/2002.08909.pdf) system. The following sections (will) reflect the three stages of training it. For now it's just the ICT code.
Loosely, they are pretraining the retriever modules, then jointly training the language model and the retriever, and then finetuning a question answering head on the language model with fixed retriever.
### Inverse Cloze Task (ICT) Pretraining
1. Have a corpus in loose JSON format with the intention of creating a collection of fixed-size blocks of text as the fundamental units of data. For a corpus like Wikipedia, this will mean multiple sentences per block but also multiple blocks per document.
Run `tools/preprocess_data.py` to construct one or more indexed datasets with the `--split-sentences` argument to make sentences the basic unit. For the original REALM system, we construct two datasets, one with the title of every document, and another with the body.
Refer to the following script
<pre>
python preprocess_data.py \
--input /path/to/corpus.json \
--json-keys text title \
--split-sentences \
--tokenizer-type BertWordPieceLowerCase \
--vocab-file /path/to/vocab.txt \
--output-prefix corpus_indexed \
--workers 5 # works well for 10 CPU cores. Scale up accordingly.
</pre>
2. Use a custom samples mapping function in place of `megatron/legacy/data/realm_dataset_utils.get_block_samples_mapping` if required. To do this, you will need to implement a new function in C++ inside of `megatron/core/datasets/helpers.cpp`. The samples mapping data structure is used to select the data that will constitute every training sample in advance of the training loop.
The samples mapping is responsible for holding all of the required metadata needed to construct the sample from one or more indexed datasets. In REALM, the samples mapping contains the start and end sentence indices, as well as the document index (to find the correct title for a body) and a unique ID for every block.
3. Pretrain a BERT language model using `pretrain_bert.py`, with the sequence length equal to the block size in token ids. This model should be trained on the same indexed dataset that is used to supply the blocks for the information retrieval task.
In REALM, this is an uncased bert base model trained with the standard hyperparameters.
4. Use `pretrain_ict.py` to train an `ICTBertModel` which uses two BERT-based encoders to encode queries and blocks to perform retrieval with.
The script below trains the ICT model from REALM. It references a pretrained BERT model (step 3) in the `--bert-load` argument. The batch size used in the paper is 4096, so this would need to be run with data parallel world size 32.
<pre>
python pretrain_ict.py \
--num-layers 12 \
--num-attention-heads 12 \
--hidden-size 768 \
--batch-size 128 \
--seq-length 256 \
--max-position-embeddings 256 \
--ict-head-size 128 \
--train-iters 100000 \
--bert-load /path/to/pretrained_bert \
--load checkpoints \
--save checkpoints \
--data-path /path/to/indexed_dataset \
--titles-data-path /path/to/titles_indexed_dataset \
--vocab-file /path/to/vocab.txt \
--lr 0.0001 \
--num-workers 2 \
--lr-decay-style linear \
--weight-decay 1e-2 \
--clip-grad 1.0 \
--lr-warmup-fraction .01 \
--save-interval 3000 \
--query-in-block-prob 0.1 \
--fp16
</pre>
### Building an Index of Block Embeddings
After having trained an ICT model, you can now embed an entire dataset of blocks by creating a `BlockData` structure. After that has been saved, you can load it
and wrap it with a `FaissMIPSIndex` to do fast similarity search which is key in the learned information retrieval pipeline. The initial index can be built with the following script, meant to be run in an interactive session. It can leverage multiple GPUs on multiple nodes to index large datasets much more quickly.
<pre>
python tools/create_doc_index.py \
--num-layers 12 \
--hidden-size 768 \
--ict-head-size 128 \
--num-attention-heads 12 \
--batch-size 128 \
--seq-length 256 \
--max-position-embeddings 256 \
--ict-load /path/to/pretrained_ict \
--data-path /path/to/indexed_dataset \
--titles-data-path /path/to/titles_indexed_dataset \
--block-data-path embedded_blocks.pkl \
--indexer-log-interval 1000 \
--indexer-batch-size 128 \
--vocab-file /path/to/vocab.txt \
--num-workers 2 \
--fp16
</pre>
-->
## Mixture of Experts
MoE (Mixture of Experts) is a powerful LLM architecture implemented in the Megatron-Core framework, designed to enhance the efficiency and scalability of large language models. It leverages **Expert Parallelism**, allowing multiple experts to be distributed across different workers, where each worker processes distinct batches of training samples. This method significantly increases computational throughput, enabling models to achieve high performance metrics, such as 47% MFU during BF16 training for 8x7B on H100.
Key Features of MoE:
- **Parallelism Techniques**: MoE combines various parallelism strategies, including Expert Parallelism, Data Parallelism, Tensor Parallelism, Sequence Parallelism, Pipeline Parallelism, and Context Parallelism. This combination allows for handling larger model variants effectively.
- **Router and Load Balancing**: The system employs advanced routing mechanisms like the Top-K router and utilizes load balancing algorithms to optimize token distribution among experts.
- **Performance Optimizations**: Techniques such as GroupedGEMM and FP8 training enhance the efficiency of MoE models, particularly when multiple experts are involved.
- **Token Dispatch Mechanism**: MoE supports both dropless and token drop strategies to manage token distribution effectively across experts.
For a comprehensive overview of MoE training configurations and optimizations, please refer to the detailed README located at [megatron/core/transformer/moe/README.md](./megatron/core/transformer/moe/README.md).
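As a minimal sketch (flag names and values are illustrative; the MoE README above is the authoritative reference for the available options), an MoE run typically combines expert count, routing, and expert-parallelism flags:

```sh
# Illustrative MoE settings: 8 experts, top-2 routing, 8-way expert
# parallelism, GroupedGEMM, and an auxiliary load-balancing loss.
MOE_ARGS="--num-experts 8 \
          --moe-router-topk 2 \
          --expert-model-parallel-size 8 \
          --moe-grouped-gemm \
          --moe-aux-loss-coeff 1e-2"
```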
# Evaluation and Tasks
We provide several command line arguments, detailed in the scripts listed below, to handle various zero-shot and fine-tuned downstream tasks. However, you can also finetune your model from a pretrained checkpoint on other corpora as desired. To do so, simply add the `--finetune` flag and adjust the input files and training parameters within the original training script. The iteration count will be reset to zero, and the optimizer and internal state will be reinitialized. If the fine-tuning is interrupted for any reason, be sure to remove the `--finetune` flag before continuing, otherwise the training will start again from the beginning.
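A minimal sketch of such a finetuning launch, reusing the pretraining arguments (paths are placeholders):

```sh
# TRAINING_ARGS holds the same model/optimizer arguments used for pretraining.
python pretrain_gpt.py $TRAINING_ARGS \
    --finetune \
    --load checkpoints/gpt2_345m \
    --data-path my-finetune-corpus_text_document
```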
Because evaluation requires substantially less memory than training, it may be advantageous to merge a model trained in parallel for use on fewer GPUs in downstream tasks. The following script accomplishes this. This example reads in a GPT model with 4-way tensor and 4-way pipeline model parallelism and writes out a model with 2-way tensor and 2-way pipeline model parallelism.
<pre>
python tools/checkpoint/convert.py \
--model-type GPT \
--load-dir checkpoints/gpt3_tp4_pp4 \
--save-dir checkpoints/gpt3_tp2_pp2 \
--target-tensor-parallel-size 2 \
--target-pipeline-parallel-size 2
</pre>
Several downstream tasks are described for both GPT and BERT models below. They can be run in distributed and model parallel modes with the same changes used in the training scripts.
## GPT Text Generation
We have included a simple REST server to use for text generation in `tools/run_text_generation_server.py`. You run it much like you would start a pretraining job, specifying an appropriate pretrained checkpoint. There are also a few optional parameters: `temperature`, `top-k`, and `top-p`. See `--help` or the source file for more information. See [examples/inference/run_text_generation_server_345M.sh](examples/inference/run_text_generation_server_345M.sh) for an example of how to run the server.
Once the server is running, you can use `tools/text_generation_cli.py` to query it; it takes one argument, the host the server is running on.
<pre>
tools/text_generation_cli.py localhost:5000
</pre>
You can also use curl or any other tool to query the server directly:
<pre>
curl 'http://localhost:5000/api' -X 'PUT' -H 'Content-Type: application/json; charset=UTF-8' -d '{"prompts":["Hello world"], "tokens_to_generate":1}'
</pre>
See [megatron/inference/text_generation_server.py](megatron/inference/text_generation_server.py) for more API options.
### Detoxify GPT via Self-generation
We include an example in `examples/academic_paper_scripts/detxoify_lm/` to detoxify language models by leveraging the generative power of language models.
See [examples/academic_paper_scripts/detxoify_lm/README.md](examples/academic_paper_scripts/detxoify_lm/README.md) for step-by-step tutorials on how to perform domain-adaptive training and detoxify LM using self-generated corpus.
## GPT Evaluation
We include example scripts for GPT evaluation on WikiText perplexity evaluation and LAMBADA Cloze accuracy.
### WikiText Perplexity Evaluation
For a fair comparison with prior works, we evaluate perplexity on the word-level [WikiText-103 test dataset](https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip), and appropriately compute perplexity given the change in tokens when using our subword tokenizer.
We use the following command to run WikiText-103 evaluation on a 345M parameter model.
<pre>
TASK="WIKITEXT103"
VALID_DATA=&#60;wikitext path&#62;.txt
VOCAB_FILE=gpt2-vocab.json
MERGE_FILE=gpt2-merges.txt
CHECKPOINT_PATH=checkpoints/gpt2_345m
COMMON_TASK_ARGS="--num-layers 24 \
--hidden-size 1024 \
--num-attention-heads 16 \
--seq-length 1024 \
--max-position-embeddings 1024 \
--fp16 \
--vocab-file $VOCAB_FILE"
python tasks/main.py \
--task $TASK \
$COMMON_TASK_ARGS \
--valid-data $VALID_DATA \
--tokenizer-type GPT2BPETokenizer \
--merge-file $MERGE_FILE \
--load $CHECKPOINT_PATH \
--micro-batch-size 8 \
--log-interval 10 \
--no-load-optim \
--no-load-rng
</pre>
### LAMBADA Cloze Accuracy
To compute LAMBADA cloze accuracy (the accuracy of predicting the last token given the preceding tokens) we utilize a detokenized, processed version of the [LAMBADA dataset](https://github.com/cybertronai/bflm/blob/master/lambada_test.jsonl).
We use the following command to run LAMBADA evaluation on a 345M parameter model. Note that the `--strict-lambada` flag should be used to require whole word matching. Ensure that `lambada` is part of the file path.
<pre>
TASK="LAMBADA"
VALID_DATA=&#60;lambada path&#62;.json
VOCAB_FILE=gpt2-vocab.json
MERGE_FILE=gpt2-merges.txt
CHECKPOINT_PATH=checkpoints/gpt2_345m
COMMON_TASK_ARGS=&#60;same as those in <a href="#wikitext-perplexity-evaluation">WikiText Perplexity Evaluation</a> above&#62;
python tasks/main.py \
--task $TASK \
$COMMON_TASK_ARGS \
--valid-data $VALID_DATA \
--tokenizer-type GPT2BPETokenizer \
--strict-lambada \
--merge-file $MERGE_FILE \
--load $CHECKPOINT_PATH \
--micro-batch-size 8 \
--log-interval 10 \
--no-load-optim \
--no-load-rng
</pre>
Further command line arguments are described in the source file [`main.py`](./tasks/main.py).
## BERT Task Evaluation
### RACE Evaluation
The following script finetunes the BERT model for evaluation on the [RACE dataset](http://www.cs.cmu.edu/~glai1/data/race/). The `TRAIN_DATA` and `VALID_DATA` directories contain the RACE dataset as separate `.txt` files. Note that for RACE, the batch size is the number of RACE queries to evaluate. Since each RACE query has four samples, the effective batch size passed through the model will be four times the batch size specified on the command line.
<pre>
TRAIN_DATA="data/RACE/train/middle"
VALID_DATA="data/RACE/dev/middle \
data/RACE/dev/high"
VOCAB_FILE=bert-vocab.txt
PRETRAINED_CHECKPOINT=checkpoints/bert_345m
CHECKPOINT_PATH=checkpoints/bert_345m_race
COMMON_TASK_ARGS="--num-layers 24 \
--hidden-size 1024 \
--num-attention-heads 16 \
--seq-length 512 \
--max-position-embeddings 512 \
--fp16 \
--vocab-file $VOCAB_FILE"
COMMON_TASK_ARGS_EXT="--train-data $TRAIN_DATA \
--valid-data $VALID_DATA \
--pretrained-checkpoint $PRETRAINED_CHECKPOINT \
--save-interval 10000 \
--save $CHECKPOINT_PATH \
--log-interval 100 \
--eval-interval 1000 \
--eval-iters 10 \
--weight-decay 1.0e-1"
python tasks/main.py \
--task RACE \
$COMMON_TASK_ARGS \
$COMMON_TASK_ARGS_EXT \
--tokenizer-type BertWordPieceLowerCase \
--epochs 3 \
--micro-batch-size 4 \
--lr 1.0e-5 \
--lr-warmup-fraction 0.06
</pre>
### MNLI Evaluation
The following script finetunes the BERT model for evaluation with the [MultiNLI sentence pair corpus](https://www.nyu.edu/projects/bowman/multinli/). Because the matching tasks are quite similar, the script can be quickly tweaked to work with the [Quora Question Pairs](https://www.kaggle.com/quora/question-pairs-dataset) (QQP) dataset as well.
<pre>
TRAIN_DATA="data/glue_data/MNLI/train.tsv"
VALID_DATA="data/glue_data/MNLI/dev_matched.tsv \
data/glue_data/MNLI/dev_mismatched.tsv"
PRETRAINED_CHECKPOINT=checkpoints/bert_345m
VOCAB_FILE=bert-vocab.txt
CHECKPOINT_PATH=checkpoints/bert_345m_mnli
COMMON_TASK_ARGS=&#60;same as those in <a href="#race-evaluation">RACE Evaluation</a> above&#62;
COMMON_TASK_ARGS_EXT=&#60;same as those in <a href="#race-evaluation">RACE Evaluation</a> above&#62;
python tasks/main.py \
--task MNLI \
$COMMON_TASK_ARGS \
$COMMON_TASK_ARGS_EXT \
--tokenizer-type BertWordPieceLowerCase \
--epochs 5 \
--micro-batch-size 8 \
--lr 5.0e-5 \
--lr-warmup-fraction 0.065
</pre>
## Llama-2 Inference and Finetuning
The Llama-2 [family of models](https://ai.meta.com/llama/) are an open-source set of pretrained & finetuned (for chat) models that have achieved strong results across a wide set of benchmarks. At the time of release, Llama-2 models achieved among the best results for open-source models, and were competitive with the closed-source GPT-3.5 model (see <https://arxiv.org/pdf/2307.09288.pdf>).
The Llama-2 checkpoints can be loaded into Megatron for inference and finetuning. See documentation [here](docs/llama_mistral.md).
# Model Optimization and Deployment
Megatron-Core (MCore) `GPTModel` family supports advanced quantization algorithms and high-performance inference through TensorRT-LLM.
## Quantization and TensorRT-LLM Deployment
See [Megatron Model Optimization and Deployment](examples/inference/quantization/README.md) for `llama2` and `nemotron3` examples.
# Datasets
We do not host any datasets for GPT or BERT training; however, we detail their collection so that our results may be reproduced.
## Collecting Wikipedia Training Data
We recommend following the Wikipedia data extraction process specified by Google research: "the recommended pre-processing is to download [the latest dump](https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2), extract the text with [WikiExtractor.py](https://github.com/attardi/wikiextractor), and then apply any necessary cleanup to convert it into plain text."
We recommend using the `--json` argument when using WikiExtractor, which will dump the Wikipedia data into loose json format (one json object per line), making it more manageable on the file system and also readily consumable by our codebase. We recommend further preprocessing this json dataset with nltk punctuation standardization. For BERT training, use the `--split-sentences` flag to `preprocess_data.py` as described [above](#data-preprocessing) to include sentence breaks in the produced index. If you'd like to use Wikipedia data for GPT training you should still clean it with nltk/spacy/ftfy, but do not use the `--split-sentences` flag.
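For instance, a BERT-style preprocessing run over the extracted Wikipedia JSON might look as follows (paths are placeholders; drop `--split-sentences` for GPT):

```sh
python tools/preprocess_data.py \
    --input enwiki-extracted.json \
    --output-prefix wikipedia \
    --vocab-file bert-vocab.txt \
    --tokenizer-type BertWordPieceLowerCase \
    --split-sentences \
    --workers 8
```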
## Collecting GPT Webtext Data
We utilize the publicly available [OpenWebText](https://github.com/eukaryote31/openwebtext) library from [jcpeterson](https://github.com/jcpeterson/openwebtext) and [eukaryote31's](https://github.com/eukaryote31/openwebtext) work to download URLs. We then filter, clean, and deduplicate all downloaded content according to the procedure described in our [openwebtext](./tools/openwebtext) directory. For Reddit URLs corresponding to content up to October 2018, we arrived at approximately 37GB of content.
# Reproducibility
Megatron training can be bitwise reproducible; to enable this mode use `--deterministic-mode`. This means that the same training config run twice in the same HW and SW environment should produce identical model checkpoints, losses and accuracy metric values (iteration time metrics may vary).
There are currently three known Megatron optimizations that break reproducibility whilst still producing almost identical training runs:
1. The specific NCCL algorithm that is used during an all-reduce (as specified by the environment variable `NCCL_ALGO`) is important. We have tested the following: `^NVLS`, `Tree`, `Ring`, `CollnetDirect`, `CollnetChain`. The code admits the use of `^NVLS`, which allows NCCL the choice of non-NVLS algorithms; its choice seems to be stable.
2. Flash attention is non-deterministic; do not use `--use-flash-attn`.
3. If using Transformer Engine, you must also set the environment variable `NVTE_ALLOW_NONDETERMINISTIC_ALGO=0`.
In addition, determinism has only been verified in NGC PyTorch containers version 23.12 and newer. If you observe nondeterminism in Megatron training under other circumstances, please open an issue.
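A minimal sketch of a deterministic launch that combines these settings:

```sh
# Pin the NCCL algorithm, disable nondeterministic Transformer Engine
# kernels, and enable deterministic mode; do not pass --use-flash-attn.
export NCCL_ALGO=Ring
export NVTE_ALLOW_NONDETERMINISTIC_ALGO=0
torchrun --nproc_per_node=8 pretrain_gpt.py $TRAINING_ARGS --deterministic-mode
```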
# Checkpoint conversion
We support two forms of model conversion:
1. Model class conversion (i.e., the `GPTModel` in `model.legacy` vs. `model.core`)
2. Checkpoint format conversion (i.e., distributed vs. non-distributed checkpoint)
## Model class conversion
Megatron supports converting between different model classes, including internal model classes (we currently have the older `legacy` models, and the newer `core` models) and external model classes (such as Meta, Huggingface, Mistral, and Mixtral models). Additionally, during this conversion, one can update the parallel state of the model (i.e., changing tensor and pipeline model parallelism).
We provide the tool `tools/checkpoint/convert.py` to convert between model classes. Some important arguments include:
- `--model-type`: `GPT` or `BERT`
- `--loader`: format of the existing checkpoint. Supported formats include:
- `legacy`: our older model classes (under `megatron.legacy.model`)
- `core`: our newer model classes (under `megatron.core.models`)
- `llama_mistral`: for loading Llama and Mistral models (supports Meta and Huggingface formats)
- `mixtral_hf`: for loading Mixtral models (Huggingface only)
- `--load-dir`: directory for loading the existing checkpoint
- `--saver`: `legacy` or `core` (see descriptions under `--loader`)
- `--save-dir`: directory for saving the new checkpoint
- `--target-tensor-parallel-size`: new tensor model parallel size
- `--target-pipeline-parallel-size`: new pipeline model parallel size
For more argument details, please see the main script (`convert.py`), loader scripts (`loader_core.py`, `loader_legacy.py`, `loader_llama_mistral.py`, `loader_mixtral_hf.py`), or saver scripts (`saver_core.py`, `saver_legacy.py`).
An example command for converting a GPT model from the old format (`legacy`) to the new format (`core`) would look as follows:
```
python tools/checkpoint/convert.py \
> --model-type GPT \
> --loader legacy \
> --load-dir ${LEGACY_FORMAT_DIR} \
> --saver core \
> --save-dir ${CORE_FORMAT_DIR} \
> --target-tensor-parallel-size ${TP} \
> --target-pipeline-parallel-size ${PP}
```
For examples of converting Llama/Mistral models into Megatron, please see [here](docs/llama_mistral.md).
## Checkpoint format conversion
Megatron offers multiple checkpoint formats, including:
- `torch`: Basic checkpoint format, with sequential reads & writes, that is tied to a specific tensor/pipeline model parallel state (TP/PP state). (While a specific checkpoint is tied to a specific TP/PP state, it can still be manually converted via the model class converter described above.)
- `torch_dist`: Distributed checkpoint format, with fast parallel reads & writes, that is also parallel-state agnostic (i.e., one can load the same checkpoint into different TP/PP setups).
Generally speaking, `torch_dist` is the more modern and recommended checkpoint format due to its speed. However, depending on the use case, it may be desirable to convert between these two formats. To do so, launch your *training* script (e.g., via `pretrain_gpt.py`) as you normally would, but with two additional arguments:
- `--ckpt-convert-format ${FORMAT}`: `${FORMAT}` can be one of `torch` or `torch_dist`, as described above.
- `--ckpt-convert-save ${PATH_TO_SAVE_NEW_FORMAT}`: this path should be different than your existing `--load`/`--save` paths, to avoid overwriting the existing checkpoint. After converting, use this new path for your `--load`/`--save` paths.
The general idea of this checkpoint format converter is that it launches the model just as one normally would for training, but before running any training iterations, it saves to the new checkpoint format, and then exits. It is important to note that all other launch args should remain the same, in order for the system to understand the previous checkpoint format.
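For example, to convert an existing `torch` checkpoint to `torch_dist` (paths are placeholders):

```sh
# Loads from --load, saves in torch_dist format to --ckpt-convert-save, then exits.
torchrun --nproc_per_node=8 pretrain_gpt.py $TRAINING_ARGS \
    --load checkpoints/gpt3_torch \
    --ckpt-convert-format torch_dist \
    --ckpt-convert-save checkpoints/gpt3_torch_dist
```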
# Projects Using Megatron
Below are some of the projects where we have directly used Megatron:
- [BERT and GPT Studies Using Megatron](https://arxiv.org/pdf/1909.08053.pdf)
- [BioMegatron: Larger Biomedical Domain Language Model](https://www.aclweb.org/anthology/2020.emnlp-main.379.pdf)
- [End-to-End Training of Neural Retrievers for Open-Domain Question Answering](https://arxiv.org/abs/2101.00408)
- [Large Scale Multi-Actor Generative Dialog Modeling](https://www.aclweb.org/anthology/2020.acl-main.8.pdf)
- [Local Knowledge Powered Conversational Agents](https://arxiv.org/abs/2010.10150)
- [MEGATRON-CNTRL: Controllable Story Generation with External Knowledge Using Large-Scale Language Models](https://www.aclweb.org/anthology/2020.emnlp-main.226.pdf)
- [RACE Reading Comprehension Dataset Leaderboard](http://www.qizhexie.com/data/RACE_leaderboard.html)
- [Training Question Answering Models From Synthetic Data](https://www.aclweb.org/anthology/2020.emnlp-main.468.pdf)
- [Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases](https://arxiv.org/abs/2112.07868)
- [Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models](https://arxiv.org/abs/2202.04173)
- [Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model](https://arxiv.org/abs/2201.11990)
- [Multi-Stage Prompting for Knowledgeable Dialogue Generation](https://arxiv.org/abs/2203.08745)
- [Evaluating Parameter Efficient Learning for Generation](https://aclanthology.org/2022.emnlp-main.319.pdf)
- [Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models](https://arxiv.org/abs/2202.04173)
- [Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study](https://arxiv.org/abs/2304.06762)
- [InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining](https://arxiv.org/abs/2310.07713)
- [An Empirical Study of Mamba-based Language Models](https://arxiv.org/abs/2406.07887)
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
ARG WHEEL_DIR=/workspace/wheels
FROM ${FROM_IMAGE_NAME} as mcore_image
ENV PIP_CONSTRAINT=""
RUN pip install -U pip
FROM mcore_image as build_te
ARG TE_COMMIT=8382eed6cccb1eb0602c96afc1cfbc707468257f
ARG WHEEL_DIR
WORKDIR /workspace
COPY docker docker/
RUN bash docker/common/build_te.sh --repo-ref $TE_COMMIT --output-wheel-dir $WHEEL_DIR
FROM mcore_image as build_mamba
ARG WHEEL_DIR
WORKDIR /workspace
COPY docker docker/
RUN bash docker/common/build_mamba.sh --output-wheel-dir $WHEEL_DIR
FROM mcore_image as build_causalconv1d
ARG WHEEL_DIR
WORKDIR /workspace
COPY docker docker/
RUN bash docker/common/build_causalconv1d.sh --output-wheel-dir $WHEEL_DIR
FROM mcore_image as build_groupedgemm
ARG WHEEL_DIR
WORKDIR /workspace
COPY docker docker/
RUN bash docker/common/build_groupedgemm.sh --output-wheel-dir $WHEEL_DIR
FROM mcore_image as main
ENV DEBIAN_FRONTEND=noninteractive
ARG UV_VERSION=0.7.2
ARG YQ_VERSION=4.44.1
ENV PATH="/root/.local/bin:$PATH"
ENV UV_PROJECT_ENVIRONMENT=/opt/venv
ENV PATH="$UV_PROJECT_ENVIRONMENT/bin:$PATH"
ENV UV_LINK_MODE=copy
RUN bash -ex <<"EOF"
apt-get update
apt-get install -y --no-install-recommends gettext python3-venv
apt-get clean
python -m venv /opt/jet
wget https://github.com/mikefarah/yq/releases/download/v${YQ_VERSION}/yq_linux_amd64 -O /usr/local/bin/yq
chmod a+x /usr/local/bin/yq
curl -LsSf https://astral.sh/uv/${UV_VERSION}/install.sh | sh
EOF
ARG WHEEL_DIR
COPY README.md pyproject.toml uv.lock /workspace/
COPY megatron/core/__init__.py /workspace/megatron/core/
COPY megatron/core/package_info.py /workspace/megatron/core/
COPY docker/common/ /workspace/docker/common/
COPY --from=build_te $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_mamba $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_causalconv1d $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_groupedgemm $WHEEL_DIR/*.whl $WHEEL_DIR/
RUN bash -ex <<"EOF"
uv venv ${UV_PROJECT_ENVIRONMENT} --system-site-packages
uv sync --extra dev --extra mlm --link-mode copy --locked
bash docker/common/install_source_wheels.sh --input-wheel-dir $WHEEL_DIR/ --environment dev
EOF
##### For NVIDIANS only #####
FROM main as jet
ARG JET_API_VERSION
ENV PATH="$PATH:/opt/jet/bin"
RUN --mount=type=secret,id=JET_INDEX_URLS \
--mount=type=secret,id=LOGGER_INDEX_URL bash -ex <<"EOF"
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS)
LOGGER_INDEX_URL=$(cat /run/secrets/LOGGER_INDEX_URL)
uv pip install --no-cache-dir jet-api==$JET_API_VERSION "jet-client~=2.0" --upgrade $JET_INDEX_URLS "setuptools<80.0.0"
uv pip install --no-cache-dir "one-logger" --upgrade $LOGGER_INDEX_URL "setuptools<80.0.0"
EOF
###
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
ARG WHEEL_DIR=/workspace/wheels
FROM $FROM_IMAGE_NAME as build_mamba
WORKDIR /opt
ARG WHEEL_DIR
RUN MAMBA_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/state-spaces/mamba.git@v2.0.3 -w $WHEEL_DIR
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as build_causalconv1d
WORKDIR /opt
ARG WHEEL_DIR
RUN CAUSAL_CONV1D_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/Dao-AILab/causal-conv1d.git@v1.2.2.post1 -w $WHEEL_DIR
FROM $FROM_IMAGE_NAME as build_groupedgemm
WORKDIR /opt
ARG WHEEL_DIR
RUN pip3 wheel -v git+https://github.com/fanshiqing/grouped_gemm@v1.1.2 -w $WHEEL_DIR
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN bash -ex <<"EOF"
apt-get update
apt-get install -y --no-install-recommends gettext python3-venv
apt-get clean
python -m venv /opt/jet
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq
chmod a+x /usr/local/bin/yq
EOF
ARG UV_VERSION=0.7.2
ENV PATH="/root/.local/bin:$PATH"
RUN curl -LsSf https://astral.sh/uv/${UV_VERSION}/install.sh | sh
ENV UV_PROJECT_ENVIRONMENT=/opt/venv
ENV PATH="$UV_PROJECT_ENVIRONMENT/bin:$PATH"
ENV UV_LINK_MODE=copy
ARG WHEEL_DIR
COPY README.md pyproject.toml uv.lock /workspace/
COPY megatron/core/__init__.py /workspace/megatron/core/
COPY megatron/core/package_info.py /workspace/megatron/core/
COPY docker/common/ /workspace/docker/common/
COPY --from=build_mamba $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_causalconv1d $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_groupedgemm $WHEEL_DIR/*.whl $WHEEL_DIR/
RUN bash -ex <<"EOF"
uv venv ${UV_PROJECT_ENVIRONMENT} --system-site-packages
uv sync --extra lts --extra mlm --link-mode copy --locked
bash docker/common/install_source_wheels.sh --input-wheel-dir $WHEEL_DIR/ --environment lts
EOF
ENV PYTHONPATH="/opt/megatron-lm:$PYTHONPATH"
##### For NVIDIANS only #####
FROM main as jet
ARG JET_API_VERSION
ENV PATH="$PATH:/opt/jet/bin"
RUN --mount=type=secret,id=JET_INDEX_URLS \
--mount=type=secret,id=LOGGER_INDEX_URL bash -ex <<"EOF"
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS)
LOGGER_INDEX_URL=$(cat /run/secrets/LOGGER_INDEX_URL)
uv pip install --no-cache-dir jet-api==$JET_API_VERSION "jet-client~=2.0" --upgrade $JET_INDEX_URLS "setuptools<80.0.0"
uv pip install --no-cache-dir "one-logger" --upgrade $LOGGER_INDEX_URL "setuptools<80.0.0"
EOF
###
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
FROM ${FROM_IMAGE_NAME} as main
RUN apt-get update && \
apt-get install -y --no-install-recommends gettext && \
apt-get clean && \
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq && \
chmod a+x /usr/local/bin/yq
##### For NVIDIANS only #####
FROM main as jet
ARG JET_API_VERSION
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install --no-cache-dir jet-api==$JET_API_VERSION "jet-client~=2.0" --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
###
# syntax=docker/dockerfile:experimental
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN sed -i -e 's/^APT/# APT/' -e 's/^DPkg/# DPkg/' \
/etc/apt/apt.conf.d/docker-clean
RUN apt-get update && \
apt-get install -y python3-venv && \
apt-get clean && \
python -m venv /opt/jet
RUN pip3 install --no-cache-dir \
black==24.4.2 \
isort==5.13.2 \
flake8==7.1.0 \
pylint==3.2.6 \
coverage \
mypy \
python-gitlab \
pandas \
slack-sdk
WORKDIR /opt/megatron-lm
##### For NVIDIANS only #####
FROM main as jet
ARG JET_API_VERSION
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install --no-cache-dir "jet-client~=2.0" --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
###
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
# Initialize variables
REPO_URL="https://github.com/Dao-AILab/causal-conv1d.git"
REPO_REF="v1.2.2.post1"
OUTPUT_WHEEL_DIR="$(pwd)/wheels"
SCRIPT_DIR="$(dirname $(realpath $0))"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--repo-url)
REPO_URL="$2"
shift 2
;;
--repo-ref)
REPO_REF="$2"
shift 2
;;
--output-wheel-dir)
OUTPUT_WHEEL_DIR="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$REPO_URL" ] || [ -z "$REPO_REF" ] || [ -z "$OUTPUT_WHEEL_DIR" ]; then
echo "Error: --repo-url, --repo-ref, and --output-wheel-dir are required"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
fi
# Create a temporary directory
TEMP_DIR=$(mktemp -d)
echo "Working in temporary directory: ${TEMP_DIR}"
python3 -m venv "${TEMP_DIR}/venv" --system-site-packages
source "${TEMP_DIR}/venv/bin/activate"
# Ensure cleanup on script exit
trap 'rm -rf "${TEMP_DIR}"' EXIT
# Change to temporary directory
cd "${TEMP_DIR}"
# Initialize git repository
git init
# Perform git fetch with depth 1
git fetch "${REPO_URL}" "${REPO_REF}" --depth 1
git checkout FETCH_HEAD
# Fetch submodules
git submodule update --init --recursive
# Create output directory if it doesn't exist
mkdir -p "${OUTPUT_WHEEL_DIR}"
# Build the wheel with pip wheel
export CAUSAL_CONV1D_FORCE_BUILD=TRUE
pip3 wheel --no-cache-dir --no-deps -w "${OUTPUT_WHEEL_DIR}" .
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
# Initialize variables
REPO_URL="https://github.com/fanshiqing/grouped_gemm"
REPO_REF="v1.1.2"
OUTPUT_WHEEL_DIR="$(pwd)/wheels"
SCRIPT_DIR="$(dirname $(realpath $0))"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--repo-url)
REPO_URL="$2"
shift 2
;;
--repo-ref)
REPO_REF="$2"
shift 2
;;
--output-wheel-dir)
OUTPUT_WHEEL_DIR="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$REPO_URL" ] || [ -z "$REPO_REF" ] || [ -z "$OUTPUT_WHEEL_DIR" ]; then
echo "Error: --repo-url, --repo-ref, and --output-wheel-dir are required"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
fi
# Create a temporary directory
TEMP_DIR=$(mktemp -d)
echo "Working in temporary directory: ${TEMP_DIR}"
python3 -m venv "${TEMP_DIR}/venv" --system-site-packages
source "${TEMP_DIR}/venv/bin/activate"
# Ensure cleanup on script exit
trap 'rm -rf "${TEMP_DIR}"' EXIT
# Change to temporary directory
cd "${TEMP_DIR}"
# Initialize git repository
git init
# Perform git fetch with depth 1
git fetch "${REPO_URL}" "${REPO_REF}" --depth 1
git checkout FETCH_HEAD
# Fetch submodules
git submodule update --init --recursive
# Create output directory if it doesn't exist
mkdir -p "${OUTPUT_WHEEL_DIR}"
# Build the wheel with pip wheel
pip3 wheel --no-cache-dir --no-deps -w "${OUTPUT_WHEEL_DIR}" .
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
# Initialize variables
REPO_URL="https://github.com/state-spaces/mamba.git"
REPO_REF="2e16fc3062cdcd4ebef27a9aa4442676e1c7edf4"
OUTPUT_WHEEL_DIR="$(pwd)/wheels"
SCRIPT_DIR="$(dirname $(realpath $0))"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--repo-url)
REPO_URL="$2"
shift 2
;;
--repo-ref)
REPO_REF="$2"
shift 2
;;
--output-wheel-dir)
OUTPUT_WHEEL_DIR="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$REPO_URL" ] || [ -z "$REPO_REF" ] || [ -z "$OUTPUT_WHEEL_DIR" ]; then
echo "Error: --repo-url, --repo-ref, and --output-wheel-dir are required"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
fi
# Create a temporary directory
TEMP_DIR=$(mktemp -d)
echo "Working in temporary directory: ${TEMP_DIR}"
python3 -m venv "${TEMP_DIR}/venv" --system-site-packages
source "${TEMP_DIR}/venv/bin/activate"
# Ensure cleanup on script exit
trap 'rm -rf "${TEMP_DIR}"' EXIT
# Change to temporary directory
cd "${TEMP_DIR}"
# Initialize git repository
git init
# Perform git fetch with depth 1
git fetch "${REPO_URL}" "${REPO_REF}" --depth 1
git checkout FETCH_HEAD
# Fetch submodules
git submodule update --init --recursive
# Create output directory if it doesn't exist
mkdir -p "${OUTPUT_WHEEL_DIR}"
# Build the wheel with pip wheel; force-build the CUDA extension, matching the Dockerfile's mamba build
export MAMBA_FORCE_BUILD=TRUE
pip3 wheel --no-cache-dir --no-deps -w "${OUTPUT_WHEEL_DIR}" .
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
# Initialize variables
REPO_URL=$(jq -r '."vcs-dependencies"."transformer-engine".repo' docker/common/manifest.json)
REPO_REF=$(jq -r '."vcs-dependencies"."transformer-engine".ref' docker/common/manifest.json)
OUTPUT_WHEEL_DIR="$(pwd)/wheels"
SCRIPT_DIR="$(dirname $(realpath $0))"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--repo-url)
REPO_URL="$2"
shift 2
;;
--repo-ref)
REPO_REF="$2"
shift 2
;;
--output-wheel-dir)
OUTPUT_WHEEL_DIR="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$REPO_URL" ] || [ -z "$REPO_REF" ] || [ -z "$OUTPUT_WHEEL_DIR" ]; then
echo "Error: --repo-url, --repo-ref, and --output-wheel-dir are required"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
fi
# Create a temporary directory
TEMP_DIR=$(mktemp -d)
echo "Working in temporary directory: ${TEMP_DIR}"
python3 -m venv "${TEMP_DIR}/venv" --system-site-packages
source "${TEMP_DIR}/venv/bin/activate"
# Ensure cleanup on script exit
trap 'rm -rf "${TEMP_DIR}"' EXIT
# Change to temporary directory
cd "${TEMP_DIR}"
# Initialize git repository
git init
# Perform git fetch with depth 1
git fetch "${REPO_URL}" "${REPO_REF}" --depth 1
git checkout FETCH_HEAD
# Fetch submodules
git submodule update --init --recursive
# Create output directory if it doesn't exist
mkdir -p "${OUTPUT_WHEEL_DIR}"
# Build the wheel with pip wheel (no build isolation)
export NVTE_FRAMEWORK=pytorch # Optionally set framework
pip3 wheel --no-cache-dir --no-build-isolation -w "${OUTPUT_WHEEL_DIR}" .
ls -al "${OUTPUT_WHEEL_DIR}"
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
INPUT_WHEEL_DIR=$(pwd)/wheels
ENVIRONMENT=""  # must be provided via --environment; initialized empty so the check below works under 'set -u'
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--input-wheel-dir)
INPUT_WHEEL_DIR="$2"
shift 2
;;
--environment)
ENVIRONMENT="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --input-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$INPUT_WHEEL_DIR" ] || [ -z "$ENVIRONMENT" ]; then
echo "Error: --input-wheel-dir and --environment are required"
echo "Usage: $0 --input-wheel-dir DIR --environment ENV"
exit 1
fi
if [ "$ENVIRONMENT" = "dev" ]; then
TE_WHEEL=$(ls $INPUT_WHEEL_DIR/transformer_engine*.whl) || true
[ -z "$TE_WHEEL" ] && TE_WHEEL=$(bash docker/common/build_te.sh --output-wheel-dir $INPUT_WHEEL_DIR | tail -n 1)
fi
MAMBA_WHEEL=$(ls $INPUT_WHEEL_DIR/mamba*.whl) || true
[ -z "$MAMBA_WHEEL" ] && MAMBA_WHEEL=$(bash docker/common/build_mamba.sh --output-wheel-dir $INPUT_WHEEL_DIR | tail -n 1)
CAUSALCONV1D_WHEEL=$(ls $INPUT_WHEEL_DIR/causal_conv1d*.whl) || true
[ -z "$CAUSALCONV1D_WHEEL" ] && CAUSALCONV1D_WHEEL=$(bash docker/common/build_causalconv1d.sh --output-wheel-dir $INPUT_WHEEL_DIR | tail -n 1)
GROUPEDGEMM_WHEEL=$(ls $INPUT_WHEEL_DIR/grouped_gemm*.whl) || true
[ -z "$GROUPEDGEMM_WHEEL" ] && GROUPEDGEMM_WHEEL=$(bash docker/common/build_groupedgemm.sh --output-wheel-dir $INPUT_WHEEL_DIR | tail -n 1)
# Override deps that are already present in the base image
# only for dev
if [ "$ENVIRONMENT" = "dev" ]; then
uv pip install --no-cache-dir --no-deps $TE_WHEEL \
"nvidia-modelopt[torch]>=0.29.0" "setuptools<80.0.0"
fi
# Install heavy optional deps like mamba, causalconv1d, groupedgemm
uv pip install --no-cache-dir \
$MAMBA_WHEEL \
$CAUSALCONV1D_WHEEL \
$GROUPEDGEMM_WHEEL \
"setuptools<80.0.0"
{
"ngc-pytorch": "nvcr.io/nvidia/pytorch:25.03-py3",
"vcs-dependencies": {
"transformer-engine": {
"repo": "https://github.com/NVIDIA/TransformerEngine",
"ref": "bee4649c15a79ffcb9689ca7c0c963f5febaa28a"
}
},
"pypi-dependencies": {}
}
# Llama, Mistral and other Llama-like model support in Megatron-LM
NOTE: To simplify the code, we now only support converting Llama-3.x and Mistral checkpoints downloaded from Huggingface.
The [Llama-2](https://ai.meta.com/llama/) and [Llama-3.x](https://llama.meta.com/) family of models are an open-source set of pretrained & finetuned (for chat) models that have achieved strong results across a wide set of benchmarks. At their times of release, both Llama-2 and Llama-3 models achieved among the best results for open-source models, and were competitive with leading closed-source models (see https://arxiv.org/pdf/2307.09288.pdf and https://ai.meta.com/blog/meta-llama-3/).
Similarly, [Mistral-7b](https://mistral.ai/news/announcing-mistral-7b/) is an open-source model with pretrained and finetuned (for chat) variants that achieve strong benchmark results.
Architecturally, Llama-2, Llama-3, and Mistral-7b are very similar, so Megatron can load checkpoints from all three for inference and finetuning. Converting and loading the checkpoints differs slightly for each model and is detailed for each below.
# Contents
- [Llama, Mistral and other Llama-like model support in Megatron-LM](#llama-mistral-and-other-llama-like-model-support-in-megatron-lm)
- [Contents](#contents)
- [Llama-2](#llama-2)
- [Download Meta or Huggingface checkpoints](#download-meta-or-huggingface-checkpoints)
- [Convert checkpoint format](#convert-checkpoint-format)
- [Meta format](#meta-format)
- [Huggingface format](#huggingface-format)
- [Launch model](#launch-model)
- [Launch Megatron](#launch-megatron)
- [Launch Meta](#launch-meta)
- [Launch Huggingface](#launch-huggingface)
- [Benchmark results](#benchmark-results)
- [Big Bench](#big-bench)
- [Multilingual](#multilingual)
- [LM Evaluation Harness](#lm-evaluation-harness)
- [MMLU](#mmlu)
- [Llama-3.x](#llama-3x)
- [Download Huggingface checkpoints](#download-huggingface-checkpoints)
- [Convert checkpoint format](#convert-checkpoint-format-1)
- [Huggingface format](#huggingface-format-1)
- [(Optional) Validate checkpoints](#optional-validate-checkpoints)
- [Launch model](#launch-model-1)
- [Mistral-7b](#mistral-7b)
- [Download Huggingface checkpoints](#download-huggingface-checkpoints-2)
- [Convert checkpoint format](#convert-checkpoint-format-3)
- [(Optional) Validate checkpoints](#optional-validate-checkpoints-2)
- [Launch model](#launch-model-3)
- [Other Llama-like model support](#other-llama-like-model-support)
- [Known numerical differences](#known-numerical-differences)
- [Using legacy model format](#using-legacy-model-format)
# Llama-2
Llama-2 checkpoints can be loaded into Megatron for inference and for finetuning. Loading these checkpoints consists of three steps:
1. Get access to download the checkpoints.
2. Convert the checkpoints from Meta/Huggingface format to Megatron format.
3. Setup arguments for launching the model.
The following sections detail these steps. The final section lists benchmark result comparisons between: 1) Llama-2 inference code running the Meta-format checkpoints, and 2) Megatron inference code running the converted checkpoints.
## Download Meta or Huggingface checkpoints
Users must first apply for access to download the Llama-2 checkpoints either directly from [Meta](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) or through [Huggingface](https://huggingface.co/docs/transformers/main/model_doc/llama2) (HF). The checkpoints are available in two formats, Meta's native format (available from both the Meta and HF links), and HF's format (available only from HF). Either format can be converted to Megatron, as detailed next.
## Convert checkpoint format
We recommend passing `--dtype bf16` for training or finetuning. Inference can be done in bfloat16 or float16.
### Meta format
The Meta format checkpoints are converted to HF format as an intermediate step before converting to Megatron format. The `transformers` package is required, and must have version >=4.31.0 (e.g., `pip install "transformers>=4.31.0"`). (**Note**: we have specifically tested with versions `4.31.0` and `4.32.0`; your experience may vary with newer versions.) Assuming the downloaded checkpoints are in `$CHECKPOINT_DIR` (with separate sub-directories for 7B, 13B, 70B, etc.), the following example command can be used to convert from Meta format to Megatron format in bfloat16:
```
python tools/checkpoint/convert.py \
> --model-type GPT \
> --loader llama_mistral \
> --load-dir ${META_FORMAT_DIR} \
> --model-size ${MODEL_SIZE} \
> --checkpoint-type meta \
> --tokenizer-model ${TOKENIZER_MODEL} \
> --saver core \
> --save-dir ${MEGATRON_FORMAT_DIR} \
> --target-tensor-parallel-size ${TP} \
> --target-pipeline-parallel-size ${PP} \
> --bf16
```
Valid values for `--model-size` are `llama2-7B`, `llama2-13B`, and `llama2-70B` (for pretrained-only models), and `llama2-7Bf`, `llama2-13Bf`, and `llama2-70Bf` (for chat-finetuned models).
### Huggingface format
The HF checkpoints can be converted to Megatron format by using Megatron's own Llama-2 checkpoint converter for HF format (see script `tools/checkpoint/loader_llama_mistral.py`). One important argument that must be set correctly is the tensor parallel size (`TP`) for each model. The following table shows these values:
| Model size | Tensor parallel size (`TP`) |
| ---------- | --------------------------- |
| 7B | 1 |
| 13B | 2 |
| 70B | 8 |
Using these values for `TP`, along with the path to the Llama-2 tokenizer model (automatically downloaded with original checkpoint download; see `${TOKENIZER_MODEL}` below), run the following command from the root of your Megatron source code to convert from HF format to Megatron format:
```
python tools/checkpoint/convert.py \
> --model-type GPT \
> --loader llama_mistral \
> --load-dir ${HF_FORMAT_DIR} \
> --model-size ${MODEL_SIZE} \
> --checkpoint-type hf \
> --tokenizer-model ${TOKENIZER_MODEL} \
> --saver core \
> --save-dir ${MEGATRON_FORMAT_DIR} \
> --target-tensor-parallel-size ${TP} \
> --target-pipeline-parallel-size ${PP} \
> --bf16
```
After this conversion, we are ready to load the checkpoints into a Megatron GPT model.
## Launch model
### Launch Megatron
If loading for either inference or finetuning, use the following arguments:
```
--tensor-model-parallel-size ${TP} \
--pipeline-model-parallel-size 1 \
--seq-length 4096 \
--max-position-embeddings 4096 \
--tokenizer-type Llama2Tokenizer \
--tokenizer-model ${TOKENIZER_MODEL} \
--load ${CHECKPOINT_DIR} \
--exit-on-missing-checkpoint \
--use-checkpoint-args \
--no-load-optim \
--no-load-rng \
--untie-embeddings-and-output-weights \
--use-rotary-position-embeddings \
--normalization RMSNorm \
--no-position-embedding \
--no-masked-softmax-fusion \
--attention-softmax-in-fp32
```
**Note:** If you converted to the legacy model format (i.e., `--saver legacy`), please see [here](#using-legacy-model-format).
### Launch Meta
Meta checkpoints can be launched with: https://github.com/facebookresearch/llama
### Launch Huggingface
Huggingface checkpoints can be launched with: https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py
## Benchmark results
The tables below list the benchmark comparisons between native Llama-2 (using Meta's checkpoint and Meta's inference code) and Megatron (using a converted HF checkpoint and Megatron's inference code).
The values are the percent error between Megatron and Llama-2, calculated using the formula: `|<llama_score> - <megatron_score>| / <llama_score>`, where the type of score is detailed before each table. Across all tests (80 total per model size), the mean error is 0.15%. The small difference in benchmark scores between the two models is due to minor arithmetic differences in implementation that alter the numerics slightly. Some of the factors that influence this difference include:
- Megatron performs batch matrix multiplications in a couple places, such as within self attention and in SwiGLU, that Llama performs separately.
- Megatron uses `torch.baddbmm` within self attention, versus Llama using `torch.matmul`.
- Megatron uses a `sin`/`cos` implementation for rotary position embeddings, versus Llama using a `polar`/`complex` implementation.
- Llama calls `torch.set_default_dtype(torch.float16)` during initialization, which Megatron does not.
### Big Bench
Score type: multiple choice grade.
| bigbench / standard | 7b | 13b | 70b |
| -- | -- | -- | -- |
| date_understanding | 0.29% | 0.13% | 0.12% |
| general_knowledge | 0.00% | 0.00% | 0.00% |
| human_organs_senses | 0.00% | 0.00% | 0.00% |
| intent_recognition | 0.00% | 0.11% | 0.00% |
| riddle_sense | 0.00% | 0.00% | 0.00% |
| similarities_abstraction | 0.00% | 0.58% | 0.00% |
| simple_arithmetic_json_multiple_choice | 0.00% | 0.00% | 0.00% |
| undo_permutation | 0.19% | 0.19% | 0.18% |
### Multilingual
Score type: multiple choice grade.
| multilingual / xcopa | 7b | 13b | 70b |
| -- | -- | -- | -- |
| en-template-mGPT-remove-punctuation | 0.08% | 0.00% | 0.00% |
| et-template-mGPT-remove-punctuation | 0.00% | 0.13% | 0.25% |
| ht-template-mGPT-remove-punctuation | 0.26% | 0.13% | 0.26% |
| id-template-mGPT-remove-punctuation | 0.11% | 0.00% | 0.19% |
| it-template-mGPT-remove-punctuation | 0.00% | 0.10% | 0.09% |
| qu-template-mGPT-remove-punctuation | 0.00% | 0.00% | 0.27% |
| sw-template-mGPT-remove-punctuation | 0.14% | 0.13% | 0.13% |
| th-template-mGPT-remove-punctuation | 0.25% | 0.13% | 0.13% |
| tr-template-mGPT-remove-punctuation | 0.26% | 0.00% | 0.34% |
| vi-template-mGPT-remove-punctuation | 0.00% | 0.11% | 0.00% |
| zh-template-mGPT-remove-punctuation | 0.00% | 0.10% | 0.09% |
### LM Evaluation Harness
Score type: multiple choice grade.
| lm-eval | 7b | 13b | 70b |
| -- | -- | -- | -- |
| boolq | 0.04% | 0.04% | 0.07% |
| hellaswag | 0.02% | 0.03% | 0.03% |
| piqa | 0.00% | 0.00% | 0.07% |
| winogrande | 0.00% | 0.11% | 0.20% |
### MMLU
Score type: multiple choice grade.
Note: the number in brackets is the number of sub-tasks for each supercategory.
| mmlu | 7b | 13b | 70b |
| -- | -- | -- | -- |
| stem [18] | 0.79% | 0.05% | 0.01% |
| humanities [13] | 0.19% | 0.01% | 0.02% |
| other (business, health, misc.) [14] | 0.08% | 0.06% | 0.12% |
| social sciences [12] | 0.37% | 0.21% | 0.01% |
# Llama-3.x
Llama-3.x checkpoints can be loaded into Megatron for inference and for finetuning. Loading these checkpoints consists of several steps:
1. Get access to download the checkpoints (weights and tokenizer).
2. Convert the checkpoints from Huggingface format to Megatron format.
3. (Optional) Validate the converted checkpoints.
4. Set up arguments for launching the model.
The following sections detail these steps.
## Download Huggingface checkpoints
Users must first apply for access to download the Llama-3.x checkpoints from [Huggingface](https://huggingface.co/meta-llama).
## Convert checkpoint format
We recommend passing `--bf16` (as in the conversion command below) for training or finetuning. Inference can be done in bfloat16 or float16.
### Huggingface format
The HF checkpoints can be converted to Megatron format by using Megatron's own Llama-3.x checkpoint converter for HF format (see script `tools/checkpoint/loader_llama_mistral.py`). One important argument that must be set correctly is the tensor parallel size (`TP`) for each model. The following table shows these values:
| Model size | Tensor parallel size (`TP`) |
| ---------- | --------------------------- |
| 1B | 1 |
| 3B | 1 |
| 8B | 1 |
| 70B | 8 |
Using these values for `TP`, along with the path to the Llama-3.x tokenizer model (downloaded automatically alongside the original checkpoint; see `${TOKENIZER_MODEL}` below), run the following command from the root of your Megatron source code to convert from HF format to Megatron format:
```
$>: python tools/checkpoint/convert.py \
> --bf16 \
> --model-type GPT \
> --loader llama_mistral \
> --saver core \
> --target-tensor-parallel-size ${TP} \
> --checkpoint-type hf \
> --load-dir ${HF_FORMAT_DIR} \
> --save-dir ${MEGATRON_FORMAT_DIR} \
> --tokenizer-model ${TOKENIZER_MODEL} \
> --model-size llama3
```
After this conversion, we are ready to load the checkpoints into a Megatron GPT model.
## (Optional) Validate checkpoints
A Megatron-LM text generation server for Llama3 can be launched using the script `examples/inference/llama_mistral/run_text_generation_llama3.sh <PATH_TO_CONVERTED_CORE_CHECKPOINT> <PATH_TO_DOWNLOADED_HUGGINGFACE_CHECKPOINT>`. For Llama3.1, please use `examples/inference/llama_mistral/run_text_generation_llama3.1.sh`.
Once running, query the server with `curl 'http://<TEXT_GENERATION_SERVER_IP>:5000/api' -X 'PUT' -H 'Content-Type: application/json; charset=UTF-8' -d '{"prompts":["<SOME_PROMPT>"], "tokens_to_generate":100, "top_k":1}'`.
A reference generation for comparison can be obtained from the Huggingface transformers library by running `python examples/inference/llama_mistral/huggingface_reference.py --model_path <PATH_TO_DOWNLOADED_HUGGINGFACE_CHECKPOINT> --prompt <SOME_PROMPT>`.
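For example, a quick side-by-side check might look like this (a sketch only; the prompt, `SERVER_IP`, and `HF_CHECKPOINT_DIR` are placeholders):
```
PROMPT="The capital of France is"
# Query the Megatron text generation server.
curl "http://${SERVER_IP}:5000/api" -X 'PUT' \
     -H 'Content-Type: application/json; charset=UTF-8' \
     -d "{\"prompts\": [\"${PROMPT}\"], \"tokens_to_generate\": 100, \"top_k\": 1}"
# Generate the Huggingface reference for the same prompt.
python examples/inference/llama_mistral/huggingface_reference.py \
    --model_path ${HF_CHECKPOINT_DIR} \
    --prompt "${PROMPT}"
```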
## Launch model
If loading for either inference or finetuning, use the following arguments for Llama 3.0:
```
--tensor-model-parallel-size ${TP} \
--pipeline-model-parallel-size 1 \
--seq-length 8192 \
--max-position-embeddings 8192 \
--tokenizer-type HuggingFaceTokenizer \
--tokenizer-model ${TOKENIZER_MODEL} \
--load ${CHECKPOINT_DIR} \
--exit-on-missing-checkpoint \
--use-checkpoint-args \
--no-load-optim \
--no-load-rng \
--untie-embeddings-and-output-weights \
--normalization RMSNorm \
--position-embedding-type rope \
--no-masked-softmax-fusion \
--attention-softmax-in-fp32 \
--disable-bias-linear \
--transformer-impl transformer_engine \
--group-query-attention 8 \
--attention-dropout 0.0 \
--hidden-dropout 0.0 \
--rotary-base 500000 \
--rotary-percent 1.0 \
--ffn-hidden-size 14336 \
--num-attention-heads 32 \
--swiglu \
--bf16 \
```
For Llama3.1, please use the following arguments:
```
--tensor-model-parallel-size ${TP} \
--pipeline-model-parallel-size 1 \
--seq-length 8192 \
--max-position-embeddings 131072 \
--tokenizer-type HuggingFaceTokenizer \
--tokenizer-model ${TOKENIZER_MODEL} \
--load ${CHECKPOINT_DIR} \
--exit-on-missing-checkpoint \
--use-checkpoint-args \
--no-load-optim \
--no-load-rng \
--untie-embeddings-and-output-weights \
--normalization RMSNorm \
--position-embedding-type rope \
--no-masked-softmax-fusion \
--attention-softmax-in-fp32 \
--disable-bias-linear \
--transformer-impl transformer_engine \
--group-query-attention 8 \
--attention-dropout 0.0 \
--hidden-dropout 0.0 \
--rotary-base 500000 \
--rotary-percent 1.0 \
--use-rope-scaling \
--ffn-hidden-size 14336 \
--num-attention-heads 32 \
--swiglu \
--bf16 \
```
**Note:** If you converted to the legacy model format (i.e., `--saver legacy`), please see [here](#using-legacy-model-format).
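Since the Llama 3.0 and 3.1 argument sets differ only in the maximum context length and RoPE scaling, one convenient pattern (a sketch, not part of the official scripts) is to keep the shared flags in a bash array and append the version-specific ones at launch time:
```
# Flags shared by the Llama 3.0 and 3.1 launches above.
COMMON_ARGS=(
    --tensor-model-parallel-size ${TP}
    --pipeline-model-parallel-size 1
    --seq-length 8192
    --tokenizer-type HuggingFaceTokenizer
    --tokenizer-model ${TOKENIZER_MODEL}
    --load ${CHECKPOINT_DIR}
    --exit-on-missing-checkpoint
    --use-checkpoint-args
    --no-load-optim
    --no-load-rng
    --untie-embeddings-and-output-weights
    --normalization RMSNorm
    --position-embedding-type rope
    --no-masked-softmax-fusion
    --attention-softmax-in-fp32
    --disable-bias-linear
    --transformer-impl transformer_engine
    --group-query-attention 8
    --attention-dropout 0.0
    --hidden-dropout 0.0
    --rotary-base 500000
    --rotary-percent 1.0
    --ffn-hidden-size 14336
    --num-attention-heads 32
    --swiglu
    --bf16
)
# Version-specific flags.
LLAMA30_ARGS=(--max-position-embeddings 8192)
LLAMA31_ARGS=(--max-position-embeddings 131072 --use-rope-scaling)
# Usage sketch: <your launch command> "${COMMON_ARGS[@]}" "${LLAMA31_ARGS[@]}"
```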
# Mistral-7b
Megatron currently supports loading the v0.3 release of Mistral-7b (which does not use sliding window attention and has a larger 32768-token vocabulary) for inference and finetuning. Loading these checkpoints consists of several steps:
1. Get access to download the checkpoints (weights and tokenizer).
2. Convert the checkpoints from HuggingFace format to Megatron format.
3. (Optional) Validate the converted checkpoints.
4. Set up arguments for launching the model.
The following sections detail these steps.
## Download Huggingface checkpoints
Users must first apply for access to download the Mistral-7b checkpoints through [Huggingface](https://huggingface.co/mistralai/Mistral-7B-v0.3) (HF).
## Convert checkpoint format
The HF checkpoints can be converted to Megatron format by using Megatron's own Mistral checkpoint converter for HF format (see script `tools/checkpoint/loader_llama_mistral.py`).
Using the path to the Mistral tokenizer model (downloaded alongside the HF checkpoint), run the following command from the root of your Megatron source code to convert from HF format to the Megatron core format:
```
$>: python tools/checkpoint/convert.py \
> --bf16 \
> --model-type GPT \
> --loader llama_mistral \
> --saver core \
> --target-tensor-parallel-size ${TP} \
> --checkpoint-type hf \
> --load-dir ${HF_FORMAT_DIR} \
> --save-dir ${MEGATRON_FORMAT_DIR} \
> --tokenizer-model ${TOKENIZER_MODEL} \
> --model-size mistral
```
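For example, the placeholders above might be set as follows (a sketch only; the paths are hypothetical, and `TP=1` is an assumption on the grounds that a 7B model typically fits on a single tensor-parallel rank):
```
export HF_FORMAT_DIR=/checkpoints/mistral-7b-v0.3-hf
export MEGATRON_FORMAT_DIR=/checkpoints/mistral-7b-v0.3-mcore
export TOKENIZER_MODEL=${HF_FORMAT_DIR}/tokenizer.model
export TP=1   # assumption: Mistral-7B fits on one tensor-parallel rank
```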
After this conversion, we are ready to load the checkpoints into a Megatron core GPT model.
## (Optional) Validate checkpoints
A Megatron-LM text generation server for Mistral-7B can be launched using the script `examples/inference/llama_mistral/run_text_generation_mistral.sh <PATH_TO_CONVERTED_MCORE_CHECKPOINT> <PATH_TO_DOWNLOADED_HUGGINGFACE_CHECKPOINT>`.
Once running, query the server with `curl 'http://<TEXT_GENERATION_SERVER_IP>:5000/api' -X 'PUT' -H 'Content-Type: application/json; charset=UTF-8' -d '{"prompts":["<SOME_PROMPT>"], "tokens_to_generate":100, "top_k":1}'`.
A reference generation for comparison can be obtained from the Huggingface transformers library by running `python examples/inference/llama_mistral/huggingface_reference.py --model_path <PATH_TO_DOWNLOADED_HUGGINGFACE_CHECKPOINT> --prompt <SOME_PROMPT>`.
## Launch model
If loading for either inference or finetuning, use the following arguments:
```
--tensor-model-parallel-size ${TP} \
--pipeline-model-parallel-size 1 \
--seq-length 4096 \
--max-position-embeddings 4096 \
--tokenizer-type HuggingFaceTokenizer \
--tokenizer-model ${TOKENIZER_MODEL} \
--load ${CHECKPOINT_DIR} \
--exit-on-missing-checkpoint \
--use-checkpoint-args \
--no-load-optim \
--no-load-rng \
--untie-embeddings-and-output-weights \
--normalization RMSNorm \
--position-embedding-type rope \
--no-masked-softmax-fusion \
--attention-softmax-in-fp32 \
--apply-layernorm-1p \
--transformer-impl transformer_engine \
--group-query-attention 8 \
--disable-bias-linear \
--rotary-base 1000000 \
--rotary-percent 1.0 \
--swiglu \
--ffn-hidden-size 14336 \
--num-attention-heads 32
```
**Note:** If you converted to the legacy model format (i.e., `--saver legacy`), please see [here](#using-legacy-model-format).
# Other Llama-like model support
*Note: Experimental*
Many models such as Yi-34B and Qwen2.x use the Llama architecture and may be converted from HuggingFace to Megatron using the commands in [Llama-3.x](#llama-3x).
# Known numerical differences
It is not expected that the Megatron and Huggingface implementations of the Llama-3.x and Mistral models will produce numerically identical results. There are multiple points where small numerical differences are expected. This is a non-exhaustive list:
1. TransformerEngine (TE) uses the model's `params_dtype` inside RMSNorm, whereas the Huggingface implementation uses fp32. See https://github.com/NVIDIA/TransformerEngine/issues/1132 for details.
2. Huggingface `transformers` implements the q, k and v projections in self-attention as separate GEMMs whereas Megatron core combines them into a single GEMM for efficiency. This leads to small numerical differences.
# Using legacy model format
All checkpoint conversion examples in this document use the saver format `--saver core`, signifying that the newer (and recommended) Megatron GPT model class will be used, i.e.:
- old class: `megatron.legacy.model.gpt_model.GPTModel`
- new class: `megatron.core.models.gpt.gpt_model.GPTModel`
Using this new format is the recommended approach. However, if your use case requires using the older class (i.e., convert using `--saver legacy`), then when launching training or finetuning, the following args must be added:
- `--use-legacy-models`: use the older model class
- `--ckpt-format torch`: use the `torch` checkpoint format, which is the only checkpoint format that is compatible with the legacy model format
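In other words, a launch that loads a legacy-format checkpoint appends the following to the argument blocks shown earlier in this document:
```
--use-legacy-models \
--ckpt-format torch
```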
context_parallel package
=========================
Context parallelism overview
----------------------------
.. figure:: ../images/context_parallel/CP_overview.png
   :alt: cp_overview
   :align: center

   Figure 1: A transformer layer running with TP2CP2. Communications next to Attention are for CP; others are for TP. (AG/RS: all-gather in forward and reduce-scatter in backward; RS/AG: reduce-scatter in forward and all-gather in backward; /AG: no-op in forward and all-gather in backward.)
Context Parallelism ("CP") is a parallelization scheme on the dimension of sequence length. Unlike prior SP (sequence parallelism) which only splits the sequence of Dropout and LayerNorm activations, CP partitions the network inputs and all activations along sequence dimension. With CP, all modules except attention (e.g., Linear, LayerNorm, etc.) can work as usual without any changes, because they do not have inter-token operations. As for attention, the Q (query) of each token needs to compute with the KV (key and value) of all tokens in the same sequence. Hence, CP requires additional all-gather across GPUs to collect the full sequence of KV. Correspondingly, reduce-scatter should be applied to the activation gradients of KV in backward propagation. To reduce activation memory footprint, each GPU only stores the KV of a sequence chunk in forward and gathers KV again in backward. KV communication happens between a GPU and its counterparts in other TP groups. The all-gather and reduce-scatter are transformed to point-to-point communications in ring topology under the hood. Exchanging KV also can leverage MQA/GQA to reduce communication volumes, as they only have one or few attention heads for KV.
For example, in Figure 1, assuming the sequence length is 8K, each GPU processes 4K tokens. GPU0 and GPU2 compose a CP group and exchange KV with each other; the same happens between GPU1 and GPU3. CP is similar to `Ring Attention <https://arxiv.org/abs/2310.01889>`_ but provides better performance by (1) leveraging the latest OSS and cuDNN flash attention kernels, and (2) removing the unnecessary computation that results from lower-triangle causal masking, achieving optimal load balance among GPUs.
Context parallelism benefits
----------------------------
.. figure:: ../images/context_parallel/CP_results.png
   :alt: cp_results
   :align: center

   Figure 2: Speedup of 175B GPT with various TP+CP combinations vs. full recompute (i.e., TP8CP1).
LLMs encounter OOM (out-of-memory) issues with long context (i.e., long sequence length) because the memory footprint of activations grows linearly with sequence length. Recomputing activations in backward can avoid OOM, but it also introduces significant overheads (~30% with full recompute). Enlarging TP (tensor model parallelism) can fix the OOM issue as well, but it potentially makes compute chunks (e.g., Linear) too short to overlap communication latencies. To be clear, scaling out to more GPUs with bigger TP can hit this overlapping problem whether or not OOM happens.
CP can better address these issues. With CP, each GPU computes on only a part of the sequence, which reduces both computation and communication by a factor of CP. Therefore, there are no concerns about overlapping them. The activation memory footprint per GPU is also CP times smaller, so the OOM issue goes away. As Figure 2 shows, combinations of TP and CP achieve optimal performance by eliminating recompute overheads and making the best tradeoff between computation and communication.
Enabling context parallelism
----------------------------
CP support has been added to GPT. All models that share the GPT code path, such as Llama, should also be able to benefit from CP. CP can work with TP (tensor model parallelism), PP (pipeline model parallelism), and DP (data parallelism), where the total number of GPUs equals TP x CP x PP x DP. CP can also work with different attention variants, including MHA/MQA/GQA and uni-directional and bi-directional masking.
CP is enabled by simply setting ``context_parallel_size=<CP_SIZE>`` on the command line. The default ``context_parallel_size`` is 1, which means CP is disabled. Running with CP requires Megatron-Core (>=0.5.0) and Transformer Engine (>=1.1).
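For example, with Megatron-LM's launch arguments this corresponds to the ``--context-parallel-size`` flag. The surrounding launch layout below is illustrative only, and the remaining model and data arguments are omitted:

.. code-block:: bash

   # Illustrative: 16 GPUs split as TP2 x CP2 x PP2 x DP2.
   torchrun --nproc_per_node 8 --nnodes 2 pretrain_gpt.py \
       --tensor-model-parallel-size 2 \
       --pipeline-model-parallel-size 2 \
       --context-parallel-size 2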