"vscode:/vscode.git/clone" did not exist on "1de36ea2716b521ef03ed61ff4957cbc9b745e10"
Commit 1106877d authored by jerrrrry's avatar jerrrrry
Browse files

“13.0”

parents
Pipeline #2934 failed with stages
in 0 seconds
repos:
- repo: https://github.com/psf/black
rev: 'refs/tags/24.4.2:refs/tags/24.4.2'
hooks:
- id: black
files: ^megatron/core/.*
args: ["--skip-magic-trailing-comma"]
- repo: https://github.com/pycqa/pylint
rev: v3.2.6
hooks:
- id: pylint
files: ^megatron/core/.*
- repo: https://github.com/pycqa/isort
rev: 5.13.2
hooks:
- id: isort
files: ^megatron/core/.*
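These hooks can be exercised locally before committing; a minimal sketch, assuming `pre-commit` is installed in the active Python environment:

```bash
# Install the hook runner and execute every configured hook (black, pylint, isort)
pip install pre-commit
pre-commit run --all-files
```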
[MAIN]
ignore-paths=tests
max-line-length=100
[MESSAGES CONTROL]
disable=all
enable=C0115,C0116,W0611,C0301,E0606
# C0115: missing-class-docstring
# C0116: missing-function-docstring
# W0611: unused-import
# C0301: line-too-long
# E0606: possibly-used-before-assignment
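To reproduce the lint check locally with these settings, something along the following lines should work (a sketch; the pylint version matches the pin in the pre-commit config above, and the command is run from the repository root so the rcfile is picked up automatically):

```bash
pip install "pylint==3.2.6"
# The .pylintrc in the repository root is discovered automatically
pylint megatron/core
```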
# Changelog
## NVIDIA Megatron Core 0.13.0
* Features
* Inference
* Add async support for DynamicInferenceEngine ([MR \!3187](https://github.com/NVIDIA/Megatron-LM/commit/05079d55a5bfcc7a43f4619e36a40a9e8db3f882))
* Pad input tensors and enable FP8 weights for FP8 inference ([MR \!3341](https://github.com/NVIDIA/Megatron-LM/commit/6a6cd478839d90cf09a837adf8c79cbc844bc920))
* Force inference to always gather logits with tensor parallelism ([MR \!3442](https://github.com/NVIDIA/Megatron-LM/commit/7c9cdcb794089968278c7272e0261a68edf5d369))
* Multi batch size CUDA Graphs for Dynamic Inference ([MR \!3402](https://github.com/NVIDIA/Megatron-LM/commit/30aabe5e3133c6d70aa55aaabad4ea8cb04ce63c))
* Post-training
* ModelOpt updates ([MR \!3268](https://github.com/NVIDIA/Megatron-LM/commit/550ed5243c3a18e39430c15e8918ee63e41d7eaf))
* Add speculative decoding AR validation feature
* Add DeepSeek and Qwen model configs
* Performance
* ModelCommProcessGroup integration ([MR \!3391](https://github.com/NVIDIA/Megatron-LM/commit/26adc2dfde53fbc2b063e2fdd1d9ed26578811a6))
* Add HyperCommGrid: N-Dimensional Communication Grid for Model Parallelism ([MR \!3398](https://github.com/NVIDIA/Megatron-LM/commit/45400df7da7fa23e3aff86804e5ac254d9a8d3c0))
* Flexible creation and management of communication groups
* Add support for Spike No More embedding initializations and weight decay skipping ([MR \!3500](https://github.com/NVIDIA/Megatron-LM/commit/ee74aa66a06b24e511270f285db475941ef63bfd))
* Model support
* Add MiMo video VLM train example (MR !3543)
* Add AVLM for MIMO (MR !3624)
* Ease of use
* Add uv support for source installs ([MR \!3615](https://github.com/NVIDIA/Megatron-LM/commit/164204cd7216e642bdef7299c569d95f02f9a79e))
* Automated weekly prereleases ([MR \!3574](https://github.com/NVIDIA/Megatron-LM/commit/7e59266c70ef34a246438640af690b55c7ecac28))
* Bug fixes
* Use mscale\_all\_dim for softmax\_factor ([MR \!2800](https://github.com/NVIDIA/Megatron-LM/commit/e96a358f60c82b8ac8d965d91c3cc4ad0230a4e0))
* Fix FP8 param blockwise scaling unit test ([MR \!3480](https://github.com/NVIDIA/Megatron-LM/commit/57082f946a04c3390fcfc43634dc546ec3ded033))
* Fix unit test blockwise scaling ([MR \!3491](https://github.com/NVIDIA/Megatron-LM/commit/6d95fe63658f967e56a3fda88a9c30a424fcb520))
* Optimize prefill for token-less requests ([MR \!3499](https://github.com/NVIDIA/Megatron-LM/commit/daaa650a9ac4291d4027ca2fdeb4298ce024efd2))
* Add default values for Fp8Padding and Fp8Unpadding ([MR \!3501](https://github.com/NVIDIA/Megatron-LM/commit/42b2b1d10a9cb699b7e5aa40f6bfba9c2a1348aa))
* Fix CUDA graph logic for flexible pp layout ([MR \!3505](https://github.com/NVIDIA/Megatron-LM/commit/020d85e50ddf0f0282802002acb3662129a519c5))
* Load FP8 models with strict=False ([MR \!3508](https://github.com/NVIDIA/Megatron-LM/commit/1ab876ddc4c1893c76f26d775226a8d1dcdfb3d2))
* Skip rope check for torch \< 1.4.0 ([MR \!3528](https://github.com/NVIDIA/Megatron-LM/commit/d8180ef8ed0bb6f305dcdedf1b27d91304f361a3))
* Disable Apex tests for stability ([MR \!3539](https://github.com/NVIDIA/Megatron-LM/commit/d1256277fe378add0a2cfd7251f5a350b6d126ec))
* Fix typo in parallel\_state expert parallelism ([MR \!3548](https://github.com/NVIDIA/Megatron-LM/commit/5783ff32af759b8102cf0cb0bb82b30c48b9da26))
* Guard modelopt on macOS ([MR \!3549](https://github.com/NVIDIA/Megatron-LM/commit/76144fe1106e4fb0e69aa75b7a6ab66e71e8f37f))
* Retry on CUDA function failure ([MR \!3554](https://github.com/NVIDIA/Megatron-LM/commit/809aab68307a64c1386d68cc78ef70f8f4e12a80))
* Fix NCCL mem pool creation error ([MR \!3557](https://github.com/NVIDIA/Megatron-LM/commit/b61e21153146a563309b5d44cb5d7f7425806072))
* Fix get\_rotary\_seq\_len return type ([MR \!3559](https://github.com/NVIDIA/Megatron-LM/commit/1fa6bc83c7aeae95abc8e86ff0aac596985a01c3))
* Retry on CUDA function failure ([MR \!3560](https://github.com/NVIDIA/Megatron-LM/commit/7da88d74865c3f1a59894173246f26e7b3bf91b9))
* Fix NCCL allocator attribute error ([MR \!3565](https://github.com/NVIDIA/Megatron-LM/commit/6b656114795d74c3353cb007c59af49b1752f447))
* Ensure multi-prompt inference works ([MR \!3568](https://github.com/NVIDIA/Megatron-LM/commit/0fae48931000c9c7af06f7dcf037b5b7d96e0cd6))
* Fix MD5 on FIPS systems ([MR \!3577](https://github.com/NVIDIA/Megatron-LM/commit/83ee8c2848a3b1d42b40086a64da11e19f4b191f))
* Fixes dynamic context and inference bugs ([MR \!3582](https://github.com/NVIDIA/Megatron-LM/commit/e9c1da60a1ccc85376666d58568ed1d3e5a4f9db))
* Fix TE version for interleaved fused RoPE ([MR \!3586](https://github.com/NVIDIA/Megatron-LM/commit/b72b6cc161f5273b545bca09677382917cf20492))
* Fix MTP with MoE and TP logging ([MR \!3594](https://github.com/NVIDIA/Megatron-LM/commit/9af96623b66693e058f6bfce8d0094dc976792d8))
* Guard TE import fix ([MR \!3596](https://github.com/NVIDIA/Megatron-LM/commit/1bf946b1ec3f11e71459c7c0d06a97edbed96a1a))
* Add assertion for NCCL UB case ([MR \!3599](https://github.com/NVIDIA/Megatron-LM/commit/e11d28592f19c122859be764b7afe7c208d9acc1))
* Remove Encoder PP related Functions ([MR \!3604](https://github.com/NVIDIA/Megatron-LM/commit/9e49aa4446a58cc21c4dc0c5d0806551ad075ca7))
* Fix segfaults in tests ([MR \!3605](https://github.com/NVIDIA/Megatron-LM/commit/f6492fe8164fd5b9ad55007d435ccfc66cb98cc7))
* Fix TE error in distributed optimizer ([MR \!3625](https://github.com/NVIDIA/Megatron-LM/commit/e6c510ff3c1159f8955589b26f7c395bdf0607d9))
* Remove redundant barrier in checkpoint flow ([MR \!3626](https://github.com/NVIDIA/Megatron-LM/commit/26869feb6a3ac7f5616cb7253c37a4244d107d70))
* Support VPP MTP, fix logging ([MR \!3630](https://github.com/NVIDIA/Megatron-LM/commit/c351a473c7eedac2c43eab0815afb9759f4f8187))
* Retry mechanism for free(): invalid pointer errors ([MR \!3632](https://github.com/NVIDIA/Megatron-LM/commit/ec35b41b2df145a7ccb84afc48d94e0786e094da))
* Fix test\_replication.py issues ([MR \!3633](https://github.com/NVIDIA/Megatron-LM/commit/f7b50b271b2e0e396069e02551b21aa6fb374b43))
* Fix typo in parallel\_state ([MR \!3634](https://github.com/NVIDIA/Megatron-LM/commit/3c79a2c330290df58804c33e28e7c197fcc1f0b9))
* Fix CUDA graph logic determination ([MR \!3635](https://github.com/NVIDIA/Megatron-LM/commit/90efa3ef8a3c4f9e0f1db9f67ab9348bfa501387))
* Fix TE installation error ([MR \!3636](https://github.com/NVIDIA/Megatron-LM/commit/7e7322c01c9cb8ec254ecd9042700b22b70fe5c8))
* Ensure correct sharding type in local tests ([MR \!3643](https://github.com/NVIDIA/Megatron-LM/commit/946357f8dd7fdc12424b3a66bc999e6c0a02696c))
* Fix cudagraphed backward buffer reuse for last layer ([MR \!3645](https://github.com/NVIDIA/Megatron-LM/commit/ee61cf450d24760952e8995aab045ab6d55b986e))
* Set default for packed\_seq\_params in get\_rotary\_seq\_len ([MR \!3651](https://github.com/NVIDIA/Megatron-LM/commit/510d58c46664f44c556005ac928c5c531e12f761))
* Fix dynamic example script errors ([MR \!3653](https://github.com/NVIDIA/Megatron-LM/commit/72e290bf1f4bbf0c8047bb10a51da6ea6372e163))
* Guard TE import fix ([MR \!3666](https://github.com/NVIDIA/Megatron-LM/commit/ac198fc0d60a8c748597e01ca4c6887d3a7bcf3d))
* Known issues
## NVIDIA Megatron Core 0.12.0
* Add FP8 recipe selection to arguments (--fp8-recipe, --first-last-layers-bf16, --num-layers-at-start-in-bf16, --num-layers-at-end-in-bf16)
* Context parallel: fix loss scaling when calculate_per_token_loss=True
* Make the number of data parallel communication buckets configurable (--ddp-num-buckets, --ddp-pad-buckets-for-high-nccl-busbw)
* Inference
* Support in-flight batching and chunked KV cache
* Reduce memory usage:
* by not materializing full attention mask
* by only materializing logits for the last token during decode
* by removing an obsolete tensor reference
* Hybrid Model
* Inference
* Add CUDA graph support
* Change tools/run_mamba_text_generation_server.py to use megatron.core.inference
* Fix a shape issue when materializing logits for Mamba model
* Improve initialization of Mamba layers
* Add configuration switches (--mamba-state-dim, --mamba-head-dim, --mamba-num-groups, --is-hybrid-model)
* Make num_floating_point_operations work with hybrid model
* Make hybrid_conversion.py work with mixer that uses TE linear
* Add FP8 support
* Fix Mamba dt_bias tensor parallelism
* Support multimodal tokenizer
* Improve data parallelism scaling
* MoE
* Features:
* DeepEP support, compatible with all the parallelisms and token drop / dropless
* Important precision improvement: Enable FP32/FP64 routing and unpermutation using --moe-router-dtype. FP32 is recommended for all fine-grained MoE training
* CUDA Graph support for MoE
* Multi-Token Prediction (MTP) Support
* Fused indices_to_multihot kernel for DeepEP dispatcher
* Bug fixes:
* Fix Hang Issue with MoE+Dense Hybrid models
* Update theoretical memory and tflops estimation for MoE and MLA
* Fix MoE Aux loss scaling for per token loss
* Fixes for group limited routing and expert bias. We verified these fixes through dsv3 e2e verifications
* Known issues:
* Checkpoints trained with custom FSDP for MoE may not be compatible with 3D parallel training.
## NVIDIA Megatron Core 0.11.0
* Add multi-datacenter training support through N/S connection
* MoE
* Features
* Support DeepSeek-V3 fine-tuning
* Aux-loss-free load balancing strategy
* Node-limited routing and Device-limited routing support.
* Tensor Parallelism support for MLA and Sequence Auxiliary Loss
* MTP (with TP and PP support) is coming soon.
* Permutation / Unpermutation fusion kernel from TransformerEngine.
* Uneven virtual pipeline parallel split support in first and last PP stage.
* Bug fixes:
* Fix the grad scale when TP != expert-TP and average_in_collective is enabled in DDP.
* Fix TEGroupedMLP distckpt compatibility issue with FP8 padding/unpadding.
* Known Issues:
* When training the Dense+MoE hybrid model, the process will hang if any PP rank does not have expert params.
* Add MX-FP16 support for optimizer and master weights
* CUDA Graph memory optimizations
* Enable UCC backend for PP communication
* Optimizer CPU offload support for memory savings
* Models
* Initial RADIO/CRADIO implementation
* llama3.2 support
* Hybrid Model
* Support quantization via TensorRT Model Optimizer
## NVIDIA Megatron Core 0.10.0
* Adding MLA to MCore
* Enable FP8 for GroupedMLP
* MoE Parallel Folding
* Enhance MoE Architecture: Support MoE Layer Frequency Patterns and Configurable MoE FFN Hidden Size
* Multimodal: NVLM training and evaluation support in MCore
* Mamba Hybrid
* Increase performance and reduce memory footprint of Triton language/compiler distributed caching
* Add more unit testing and fix bugs
## NVIDIA Megatron Core 0.9.0
* Uneven pipeline parallelism
* Enable pipeline parallelism where first and last ranks have fewer transformer layers than the intermediate ranks
* Per layer CUDAGraph support for GPT training with Transformer Engine modules
* Enable different TP sizes for the vision encoder
* Enable pipeline parallelism for T5 & Llava models
* Support multi-tile multi-image input in Llava models
* MoE
* FP8 support
* Runtime upcycling support
* Dispatcher implementation optimizations
* Shared expert support with overlapping optimizations
* Qwen Model support
* Known Issues
* When using sequence parallel, during the transformer block forward pass, dropout is not using the appropriate rng context.
* NVRx / Fault tolerance
* fault and hang detection in addition to existing straggler detection
* graceful exit and auto restart
## NVIDIA Megatron Core 0.8.0
* Multimodal
* Added initial support for training vision language models using the LLaVA architecture
* Added initial support for inference with multimodal inputs
* End-to-end multimodal example from data collection to training to evaluation is provided in examples/multimodal
* MoE
* Context Parallel support.
* Distributed checkpoint support for grouped GEMM.
* Mamba
## NVIDIA Megatron Core 0.7.0
* MoE
* Token drop support
* Several efficiency optimizations
* Improved model parallelism
* Memory optimizations
* Distributed checkpointing
* Enabled for Retro
* Asynchronous checkpoint saving
* Several minor bug fixes, speed improvements, and memory optimizations
## NVIDIA Megatron Core 0.6.0
* MoE (Mixture of Experts)
* Performance optimization
* Communication optimization for multi GPU and Single GPU
* 23% improvement (323 TFLOPS/GPU) over MCore 0.5.0 on Mixtral with Hopper BF16
* GroupedMLP enhancement for Hopper
* DP Overlapping. Support overlapping computation with gradient reduction and parameter gathering.
* All-to-All based Token Dispatcher
* Layer-wise logging for load balancing loss.
* Improved expert parallel support including distributed optimizer.
* Distributed optimizer
* RETRO
* Data processing
* BERT
* Distributed checkpointing
* Dist checkpointing
* PyTorch native distributed backend
* Improved saving/loading speed
* TensorRT-LLM Export
* Integration with TensorRT Model Optimizer Post-training quantization (PTQ)
* Text generation driver to perform PTQ in Megatron-LM
* Llama2 and Nemotron3-8b examples to use TensorRT-LLM unified build API to build engine after training.
* Several minor enhancements, bug fixes, and documentation updates
## NVIDIA Megatron Core 0.5.0
### Key Features and Enhancements
Megatron core documentation is now [live!](https://docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start)
### Model Features
* MoE (Mixture of Experts)
* Support for Z-loss, Load balancing and Sinkhorn
* Layer and communications refactor
* Richer parallelism mappings and EP can be combined with other model parallel techniques for larger MoE variants, e.g. EP + TP + DP + SP + PP
* Token dropless architecture with Top-K routing
* Performance optimization with GroupedGEMM when number of local experts is > 1
* Distributed checkpointing
* Interleaved rotary embedding
### Datasets
* Masked WordPiece datasets for BERT and T5
* Raw and mock datasets
### Parallelism
### Performance
* Activation offloading to CPU
* Rope and Swiglu fusion
* Sliding window attention (via Transformer Engine)
### General Improvements
* Timers
## NVIDIA Megatron Core 0.4.0
### Key Features and Enhancements
#### Models
* BERT
* RETRO
* T5
#### Parallelism
* Mixture of Experts support for GPT
* Model parallel efficient Distributed Data Parallel (DDP)
* Context Parallel (2D Tensor Parallel) support
#### Datasets
* GPT Dataset
* Blended Dataset
# Core
[Core-ADLR] @mcore-reviewers/core-adlr
megatron/core/
[Core-NeMo] @mcore-reviewers/core-nemo
megatron/core/
^[Core-MLPerf] @mcore-reviewers/mlperf
megatron/core/
# Models
[BERT] @mcore-reviewers/bert
megatron/core/models/bert/
[GPT] @mcore-reviewers/gpt
megatron/core/models/gpt/
[Retro] @mcore-reviewers/retro
megatron/core/models/retro/
[Multimodal] @mcore-reviewers/multi-modal
megatron/core/models/multimodal/
[T5] @mcore-reviewers/t5
megatron/core/models/t5/
[Hybrid-mamba] @mcore-reviewers/hybrid-mamba
megatron/core/models/mamba/
# Distributed Checkpointing
[Distributed Checkpointing] @mcore-reviewers/dist-checkpointing
megatron/core/dist_checkpointing/
# Distributed Optimizer
[Distributed Optimizer] @mcore-reviewers/dist-optimizer
megatron/core/optimizer/distrib_optimizer/
# Quantization and Inference (QAT)
[Quantization and Inference (QAT)] @mcore-reviewers/quantization-and-inference
megatron/core/inference/modelopt_support
# Datasets
[Datasets] @mcore-reviewers/datasets
megatron/core/datasets/
# Parallelism
[Pipeline Parallelism] @mcore-reviewers/pipeline-parallelism
megatron/core/pipeline_parallel/
# Transformer
[Transformer] @mcore-reviewers/transformer
megatron/core/transformer/
[MoE-ADLR] @mcore-reviewers/moe-adlr
megatron/core/transformer/moe/
[MoE-Moe] @mcore-reviewers/moe-moe
megatron/core/transformer/moe/
# Inference
[Inference] @mcore-reviewers/inference
megatron/core/inference/
# Parallel State
[ParallelState] @mcore-reviewers/parallelstate
megatron/core/parallel_state.py
[CI][1] @mcore-reviewers/ci
.gitlab/
.github/
.gitlab-ci.yml
Dockerfile.ci.lts
Dockerfile.ci.dev
tests/
megatron/core/transformer/transformer_block.py
megatron/core/transformer/transformer_layer.py
# Contributing to Megatron-LM
This document outlines the processes and policies for issues and pull requests by non-NVIDIA contributors to the Megatron-LM GitHub repository.
Everyone is welcome to contribute to the project, but development of Megatron-LM continues internally at NVIDIA. When contributing, it is important to ensure that changes are in line with the project direction. Small changes to fix bugs are welcomed and appreciated. If you are proposing large architectural changes or changes for stylistic reasons, open an issue first so we can discuss it.
PRs will first be pulled into NVIDIA's internal Megatron-LM repo and then pushed back out to the open GitHub repo with proper credit given to the committers.
## Issue policy
Please do file any bugs you find, keeping the following in mind:
- If filing a bug, i.e. you have found something that doesn't work as expected, use the BUG template.
- If you've found a regression in speed or accuracy use the REGRESSION template.
- If you are requesting a new feature or modification of an existing feature use the ENHANCEMENT template.
- If opening an issue to ask a question no template is needed but please make your question as clear and concise as possible.
- One issue per bug. Putting multiple things in the same issue makes both discussion and completion unnecessarily complicated.
- Your bug is most likely to get quick attention from the development team if we can easily reproduce it.
- Use proper spelling, grammar, and punctuation.
- Write in an authoritative and technical tone.
## Code submission policy
Here are some dos & don'ts to try and stick to:
### Do:
- Format new code in a style that is consistent with the file being changed. Megatron-LM doesn't (yet) have a style guide or enforced formatting.
- Split your changes into separate, atomic commits, i.e., a commit per feature or fix.
- Make sure your commits are rebased on the master branch.
- Write the commit message subject line in the imperative mood ("Change the default argument for X", not "Changed the default argument for X").
- Write your commit messages in proper English, with care and punctuation.
- Check the spelling of your code, comments and commit messages.
### Don't:
- Submit code that's incompatible with the project license.
- Touch anything outside the stated scope of the PR. This includes formatting changes to code not relevant to the PR.
- Iterate excessively on your design across multiple commits.
- Include commented-out code.
- Attempt large architectural changes without first opening an issue to discuss.
## Issue and Pull Request Q&A (Updated Jul 2023)
### I've submitted an issue and PR. When can I expect to get some feedback?
Megatron-LM is developed and maintained by a small team of researchers. We will endeavour to read and acknowledge all new issues and PRs within a week. A few rules of thumb:
- Reproducible bugs/regressions and bug/regression fixes are likely to get the attention of maintainers the quickest.
- Issues requesting an enhancement may only receive acknowledgement that they've been read and may be closed with a "wontfix" label if they're not in line with the project direction. If they are acknowledged and remain open, you can assume the maintainers agree they're a desirable feature.
- Support requests, i.e. requests for help running the code, have the lowest priority and will be responded to as maintainer time permits.
### If my issue or PR isn't getting attention, how long should I wait before pinging one of the project maintainers?
One week, if there is no acknowledgement of the initial request.
### Who are the project maintainers I should ping?
The corresponding maintainers at this time are @jaredcasper and @jon-barker.
### Is there a policy for issues and PRs that haven't been touched in X days? Should they be closed?
Yes, starting in July 2023 we have a bot that will mark untouched PRs as "stale" after 60 days.
We have a long backlog of issues and PRs dating back 3.5 years. We are trying to triage these now by working backwards. Older issues we believe may still be relevant may receive a request to re-test them with the latest code. If there's no response, they may be closed. Again, if you believe they should be re-opened, just respond with a comment to that effect.
Thank-you!
The following applies to all files unless otherwise noted:
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--
This repository also contains code from Hugging Face Inc., Google Research,
Facebook (from their Fairseq, Dino, and ParlAI projects), Microsoft (from their
Swin-Transformer project), Philip Popien, the Mamba project (Tri Dao and
Albert Gu), and the Triton language and compiler project (Philippe Tillet and
OpenAI). Files from these organizations have notices at the top of each file.
Below are licenses used in those files, as indicated.
--------------------------------------------------------------------------------------
-- LICENSE FOR Facebook, huggingface, Google Research, LLaVA, Mamba, and vLLM code --
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--------------------------------------------------------------------------------
LICENSE FOR
Facebook, Inc. and its affiliates,
Meta Platforms, Inc. and its affiliates,
Microsoft Corporation,
OpenGVLab/InternVL,
Triton language and compiler,
and DeepSeek.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
include megatron/core/requirements.txt
include megatron/core/README.md
include megatron/core/package_info.py
recursive-include requirements *
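If needed, the include rules above can be sanity-checked by building a source distribution and listing its contents; a minimal sketch, assuming the PyPA `build` package is available:

```bash
pip install build
python -m build --sdist
# Inspect which files actually landed in the sdist
tar -tzf dist/*.tar.gz | head
```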
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
ARG WHEEL_DIR=/workspace/wheels
FROM ${FROM_IMAGE_NAME} as mcore_image
ENV PIP_CONSTRAINT=""
RUN pip install -U pip
FROM mcore_image as build_te
ARG TE_COMMIT=8382eed6cccb1eb0602c96afc1cfbc707468257f
ARG WHEEL_DIR
WORKDIR /workspace
COPY docker docker/
RUN bash docker/common/build_te.sh --repo-ref $TE_COMMIT --output-wheel-dir $WHEEL_DIR
FROM mcore_image as build_mamba
ARG WHEEL_DIR
WORKDIR /workspace
COPY docker docker/
RUN bash docker/common/build_mamba.sh --output-wheel-dir $WHEEL_DIR
FROM mcore_image as build_causalconv1d
ARG WHEEL_DIR
WORKDIR /workspace
COPY docker docker/
RUN bash docker/common/build_causalconv1d.sh --output-wheel-dir $WHEEL_DIR
FROM mcore_image as build_groupedgemm
ARG WHEEL_DIR
WORKDIR /workspace
COPY docker docker/
RUN bash docker/common/build_groupedgemm.sh --output-wheel-dir $WHEEL_DIR
FROM mcore_image as main
ENV DEBIAN_FRONTEND=noninteractive
ARG UV_VERSION=0.7.2
ARG YQ_VERSION=4.44.1
ENV PATH="/root/.local/bin:$PATH"
ENV UV_PROJECT_ENVIRONMENT=/opt/venv
ENV PATH="$UV_PROJECT_ENVIRONMENT/bin:$PATH"
ENV UV_LINK_MODE=copy
RUN bash -ex <<"EOF"
apt-get update
apt-get install -y --no-install-recommends gettext python3-venv
apt-get clean
python -m venv /opt/jet
wget https://github.com/mikefarah/yq/releases/download/v${YQ_VERSION}/yq_linux_amd64 -O /usr/local/bin/yq
chmod a+x /usr/local/bin/yq
curl -LsSf https://astral.sh/uv/${UV_VERSION}/install.sh | sh
EOF
ARG WHEEL_DIR
COPY README.md pyproject.toml uv.lock /workspace/
COPY megatron/core/__init__.py /workspace/megatron/core/
COPY megatron/core/package_info.py /workspace/megatron/core/
COPY docker/common/ /workspace/docker/common/
COPY --from=build_te $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_mamba $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_causalconv1d $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_groupedgemm $WHEEL_DIR/*.whl $WHEEL_DIR/
RUN bash -ex <<"EOF"
uv venv ${UV_PROJECT_ENVIRONMENT} --system-site-packages
uv sync --extra dev --extra mlm --link-mode copy --locked
bash docker/common/install_source_wheels.sh --input-wheel-dir $WHEEL_DIR/ --environment dev
EOF
##### For NVIDIANS only #####
FROM main as jet
ARG JET_API_VERSION
ENV PATH="$PATH:/opt/jet/bin"
RUN --mount=type=secret,id=JET_INDEX_URLS \
--mount=type=secret,id=LOGGER_INDEX_URL bash -ex <<"EOF"
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS)
LOGGER_INDEX_URL=$(cat /run/secrets/LOGGER_INDEX_URL)
uv pip install --no-cache-dir jet-api==$JET_API_VERSION "jet-client~=2.0" --upgrade $JET_INDEX_URLS "setuptools<80.0.0"
uv pip install --no-cache-dir "one-logger" --upgrade $LOGGER_INDEX_URL "setuptools<80.0.0"
EOF
###
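For local experimentation, the image above can be built roughly as follows. This is a sketch: the file name (Dockerfile.ci.dev, per the CODEOWNERS entries earlier) and the base image (the ngc-pytorch entry from docker/common/manifest.json) are assumptions.

```bash
DOCKER_BUILDKIT=1 docker build \
  -f Dockerfile.ci.dev \
  --build-arg FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:25.03-py3 \
  --target main \
  -t megatron-lm-ci:dev .
```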
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
ARG WHEEL_DIR=/workspace/wheels
FROM $FROM_IMAGE_NAME as build_mamba
WORKDIR /opt
ARG WHEEL_DIR
RUN MAMBA_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/state-spaces/mamba.git@v2.0.3 -w $WHEEL_DIR
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as build_causalconv1d
WORKDIR /opt
ARG WHEEL_DIR
RUN CAUSAL_CONV1D_FORCE_BUILD=TRUE pip3 wheel -v git+https://github.com/Dao-AILab/causal-conv1d.git@v1.2.2.post1 -w $WHEEL_DIR
FROM $FROM_IMAGE_NAME as build_groupedgemm
WORKDIR /opt
ARG WHEEL_DIR
RUN pip3 wheel -v git+https://github.com/fanshiqing/grouped_gemm@v1.1.2 -w $WHEEL_DIR
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN bash -ex <<"EOF"
apt-get update
apt-get install -y --no-install-recommends gettext python3-venv
apt-get clean
python -m venv /opt/jet
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq
chmod a+x /usr/local/bin/yq
EOF
ARG UV_VERSION=0.7.2
ENV PATH="/root/.local/bin:$PATH"
RUN curl -LsSf https://astral.sh/uv/${UV_VERSION}/install.sh | sh
ENV UV_PROJECT_ENVIRONMENT=/opt/venv
ENV PATH="$UV_PROJECT_ENVIRONMENT/bin:$PATH"
ENV UV_LINK_MODE=copy
ARG WHEEL_DIR
COPY README.md pyproject.toml uv.lock /workspace/
COPY megatron/core/__init__.py /workspace/megatron/core/
COPY megatron/core/package_info.py /workspace/megatron/core/
COPY docker/common/ /workspace/docker/common/
COPY --from=build_mamba $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_causalconv1d $WHEEL_DIR/*.whl $WHEEL_DIR/
COPY --from=build_groupedgemm $WHEEL_DIR/*.whl $WHEEL_DIR/
RUN bash -ex <<"EOF"
uv venv ${UV_PROJECT_ENVIRONMENT} --system-site-packages
uv sync --extra lts --extra mlm --link-mode copy --locked
bash docker/common/install_source_wheels.sh --input-wheel-dir $WHEEL_DIR/ --environment lts
EOF
ENV PYTHONPATH="/opt/megatron-lm:$PYTHONPATH"
##### For NVIDIANS only #####
FROM main as jet
ARG JET_API_VERSION
ENV PATH="$PATH:/opt/jet/bin"
RUN --mount=type=secret,id=JET_INDEX_URLS \
--mount=type=secret,id=LOGGER_INDEX_URL bash -ex <<"EOF"
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS)
LOGGER_INDEX_URL=$(cat /run/secrets/LOGGER_INDEX_URL)
uv pip install --no-cache-dir jet-api==$JET_API_VERSION "jet-client~=2.0" --upgrade $JET_INDEX_URLS "setuptools<80.0.0"
uv pip install --no-cache-dir "one-logger" --upgrade $LOGGER_INDEX_URL "setuptools<80.0.0"
EOF
###
# syntax=docker/dockerfile:1.3-labs
ARG FROM_IMAGE_NAME
FROM ${FROM_IMAGE_NAME} as main
RUN apt-get update && \
apt-get install -y --no-install-recommends gettext && \
apt-get clean && \
wget https://github.com/mikefarah/yq/releases/download/v4.44.1/yq_linux_amd64 -O /usr/local/bin/yq && \
chmod a+x /usr/local/bin/yq
##### For NVIDIANS only #####
FROM main as jet
ARG JET_API_VERSION
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install --no-cache-dir jet-api==$JET_API_VERSION "jet-client~=2.0" --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
###
# syntax=docker/dockerfile:experimental
ARG FROM_IMAGE_NAME
FROM $FROM_IMAGE_NAME as main
ENV DEBIAN_FRONTEND=noninteractive
RUN sed -i -e 's/^APT/# APT/' -e 's/^DPkg/# DPkg/' \
/etc/apt/apt.conf.d/docker-clean
RUN apt-get update && \
apt-get install -y python3-venv && \
apt-get clean && \
python -m venv /opt/jet
RUN pip3 install --no-cache-dir \
black==24.4.2 \
isort==5.13.2 \
flake8==7.1.0 \
pylint==3.2.6 \
coverage \
mypy \
python-gitlab \
pandas \
slack-sdk
WORKDIR /opt/megatron-lm
##### For NVIDIANS only #####
FROM main as jet
ARG JET_API_VERSION
RUN --mount=type=secret,id=JET_INDEX_URLS \
JET_INDEX_URLS=$(cat /run/secrets/JET_INDEX_URLS) && \
pip install --no-cache-dir "jet-client~=2.0" --upgrade $JET_INDEX_URLS
ENV PATH="$PATH:/opt/jet/bin"
###
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
# Initialize variables
REPO_URL="https://github.com/Dao-AILab/causal-conv1d.git"
REPO_REF="v1.2.2.post1"
OUTPUT_WHEEL_DIR="$(pwd)/wheels"
SCRIPT_DIR="$(dirname $(realpath $0))"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--repo-url)
REPO_URL="$2"
shift 2
;;
--repo-ref)
REPO_REF="$2"
shift 2
;;
--output-wheel-dir)
OUTPUT_WHEEL_DIR="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$REPO_URL" ] || [ -z "$REPO_REF" ] || [ -z "$OUTPUT_WHEEL_DIR" ]; then
echo "Error: --repo-url, --repo-ref, and --output-wheel-dir are required"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
fi
# Create a temporary directory
TEMP_DIR=$(mktemp -d)
echo "Working in temporary directory: ${TEMP_DIR}"
python3 -m venv "${TEMP_DIR}/venv" --system-site-packages
source "${TEMP_DIR}/venv/bin/activate"
# Ensure cleanup on script exit
trap 'rm -rf "${TEMP_DIR}"' EXIT
# Change to temporary directory
cd "${TEMP_DIR}"
# Initialize git repository
git init
# Perform git fetch with depth 1
git fetch "${REPO_URL}" "${REPO_REF}" --depth 1
git checkout FETCH_HEAD
# Fetch submodules
git submodule update --init --recursive
# Create output directory if it doesn't exist
mkdir -p "${OUTPUT_WHEEL_DIR}"
# Build the wheel using pip wheel
export CAUSAL_CONV1D_FORCE_BUILD=TRUE
pip3 wheel --no-cache-dir --no-deps -w "${OUTPUT_WHEEL_DIR}" .
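For reference, a typical invocation of this script (mirroring how the CI Dockerfile calls it) might look like the following; the output directory is illustrative:

```bash
bash docker/common/build_causalconv1d.sh \
  --repo-ref v1.2.2.post1 \
  --output-wheel-dir "$(pwd)/wheels"
```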
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
# Initialize variables
REPO_URL="https://github.com/fanshiqing/grouped_gemm"
REPO_REF="v1.1.2"
OUTPUT_WHEEL_DIR="$(pwd)/wheels"
SCRIPT_DIR="$(dirname $(realpath $0))"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--repo-url)
REPO_URL="$2"
shift 2
;;
--repo-ref)
REPO_REF="$2"
shift 2
;;
--output-wheel-dir)
OUTPUT_WHEEL_DIR="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$REPO_URL" ] || [ -z "$REPO_REF" ] || [ -z "$OUTPUT_WHEEL_DIR" ]; then
echo "Error: --repo-url, --repo-ref, and --output-wheel-dir are required"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
fi
# Create a temporary directory
TEMP_DIR=$(mktemp -d)
echo "Working in temporary directory: ${TEMP_DIR}"
python3 -m venv "${TEMP_DIR}/venv" --system-site-packages
source "${TEMP_DIR}/venv/bin/activate"
# Ensure cleanup on script exit
trap 'rm -rf "${TEMP_DIR}"' EXIT
# Change to temporary directory
cd "${TEMP_DIR}"
# Initialize git repository
git init
# Perform git fetch with depth 1
git fetch "${REPO_URL}" "${REPO_REF}" --depth 1
git checkout FETCH_HEAD
# Fetch submodules
git submodule update --init --recursive
# Create output directory if it doesn't exist
mkdir -p "${OUTPUT_WHEEL_DIR}"
# Build the wheel using pip wheel
pip3 wheel --no-cache-dir --no-deps -w "${OUTPUT_WHEEL_DIR}" .
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
# Initialize variables
REPO_URL="https://github.com/state-spaces/mamba.git"
REPO_REF="2e16fc3062cdcd4ebef27a9aa4442676e1c7edf4"
OUTPUT_WHEEL_DIR="$(pwd)/wheels"
SCRIPT_DIR="$(dirname $(realpath $0))"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--repo-url)
REPO_URL="$2"
shift 2
;;
--repo-ref)
REPO_REF="$2"
shift 2
;;
--output-wheel-dir)
OUTPUT_WHEEL_DIR="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$REPO_URL" ] || [ -z "$REPO_REF" ] || [ -z "$OUTPUT_WHEEL_DIR" ]; then
echo "Error: --repo-url, --repo-ref, and --output-wheel-dir are required"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
fi
# Create a temporary directory
TEMP_DIR=$(mktemp -d)
echo "Working in temporary directory: ${TEMP_DIR}"
python3 -m venv "${TEMP_DIR}/venv" --system-site-packages
source "${TEMP_DIR}/venv/bin/activate"
# Ensure cleanup on script exit
trap 'rm -rf "${TEMP_DIR}"' EXIT
# Change to temporary directory
cd "${TEMP_DIR}"
# Initialize git repository
git init
# Perform git fetch with depth 1
git fetch "${REPO_URL}" "${REPO_REF}" --depth 1
git checkout FETCH_HEAD
# Fetch submodules
git submodule update --init --recursive
# Create output directory if it doesn't exist
mkdir -p "${OUTPUT_WHEEL_DIR}"
# Build the wheel using pip wheel
export MAMBA_FORCE_BUILD=TRUE # Force building the CUDA extension from source rather than fetching a prebuilt wheel
pip3 wheel --no-cache-dir --no-deps -w "${OUTPUT_WHEEL_DIR}" .
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
# Initialize variables
REPO_URL=$(cat docker/common/manifest.json | jq -r '."vcs-dependencies"."transformer-engine".repo')
REPO_REF=$(cat docker/common/manifest.json | jq -r '."vcs-dependencies"."transformer-engine".ref')
OUTPUT_WHEEL_DIR="$(pwd)/wheels"
SCRIPT_DIR="$(dirname $(realpath $0))"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--repo-url)
REPO_URL="$2"
shift 2
;;
--repo-ref)
REPO_REF="$2"
shift 2
;;
--output-wheel-dir)
OUTPUT_WHEEL_DIR="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$REPO_URL" ] || [ -z "$REPO_REF" ] || [ -z "$OUTPUT_WHEEL_DIR" ]; then
echo "Error: --repo-url, --repo-ref, and --output-wheel-dir are required"
echo "Usage: $0 --repo-url URL --repo-ref REF --output-wheel-dir DIR"
exit 1
fi
# Create a temporary directory
TEMP_DIR=$(mktemp -d)
echo "Working in temporary directory: ${TEMP_DIR}"
python3 -m venv "${TEMP_DIR}/venv" --system-site-packages
source "${TEMP_DIR}/venv/bin/activate"
# Ensure cleanup on script exit
trap 'rm -rf "${TEMP_DIR}"' EXIT
# Change to temporary directory
cd "${TEMP_DIR}"
# Initialize git repository
git init
# Perform git fetch with depth 1
git fetch "${REPO_URL}" "${REPO_REF}" --depth 1
git checkout FETCH_HEAD
# Fetch submodules
git submodule update --init --recursive
# Create output directory if it doesn't exist
mkdir -p "${OUTPUT_WHEEL_DIR}"
# Build the wheel using pip wheel
export NVTE_FRAMEWORK=pytorch # Optionally set framework
pip3 wheel --no-cache-dir --no-build-isolation -w "${OUTPUT_WHEEL_DIR}" .
ls -al "${OUTPUT_WHEEL_DIR}"
#!/bin/bash
set -xeuo pipefail # Exit immediately if a command exits with a non-zero status
INPUT_WHEEL_DIR=$(pwd)/wheels
ENVIRONMENT="" # Must be set via --environment; left empty so the usage check below fires under set -u
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--input-wheel-dir)
INPUT_WHEEL_DIR="$2"
shift 2
;;
--environment)
ENVIRONMENT="$2"
shift 2
;;
*)
echo "Unknown option: $1"
echo "Usage: $0 --input-wheel-dir DIR"
exit 1
;;
esac
done
# Check if required arguments are provided
if [ -z "$INPUT_WHEEL_DIR" ] || [ -z "$ENVIRONMENT" ]; then
echo "Error: --input-wheel-dir and --environment are required"
echo "Usage: $0 --input-wheel-dir DIR --environment ENV"
exit 1
fi
if [ "$ENVIRONMENT" = "dev" ]; then
TE_WHEEL=$(ls $INPUT_WHEEL_DIR/transformer_engine*.whl) || true
[ -z "$TE_WHEEL" ] && TE_WHEEL=$(bash docker/common/build_te.sh --output-wheel-dir $INPUT_WHEEL_DIR | tail -n 1)
fi
MAMBA_WHEEL=$(ls $INPUT_WHEEL_DIR/mamba*.whl) || true
[ -z "$MAMBA_WHEEL" ] && MAMBA_WHEEL=$(bash docker/common/build_mamba.sh --output-wheel-dir $INPUT_WHEEL_DIR | tail -n 1)
CAUSALCONV1D_WHEEL=$(ls $INPUT_WHEEL_DIR/causal_conv1d*.whl) || true
[ -z "$CAUSALCONV1D_WHEEL" ] && CAUSALCONV1D_WHEEL=$(bash docker/common/build_causalconv1d.sh --output-wheel-dir $INPUT_WHEEL_DIR | tail -n 1)
GROUPEDGEMM_WHEEL=$(ls $INPUT_WHEEL_DIR/grouped_gemm*.whl) || true
[ -z "$GROUPEDGEMM_WHEEL" ] && GROUPEDGEMM_WHEEL=$(bash docker/common/build_groupedgemm.sh --output-wheel-dir $INPUT_WHEEL_DIR | tail -n 1)
# Override deps that are already present in the base image
# only for dev
if [ "$ENVIRONMENT" = "dev" ]; then
uv pip install --no-cache-dir --no-deps $TE_WHEEL \
"nvidia-modelopt[torch]>=0.29.0" "setuptools<80.0.0"
fi
# Install heavy optional deps like mamba, causalconv1d, groupedgemm
uv pip install --no-cache-dir \
$MAMBA_WHEEL \
$CAUSALCONV1D_WHEEL \
$GROUPEDGEMM_WHEEL \
"setuptools<80.0.0"
{
"ngc-pytorch": "nvcr.io/nvidia/pytorch:25.03-py3",
"vcs-dependencies": {
"transformer-engine": {
"repo": "https://github.com/NVIDIA/TransformerEngine",
"ref": "bee4649c15a79ffcb9689ca7c0c963f5febaa28a"
}
},
"pypi-dependencies": {}
}
context_parallel package
=========================
Context parallelism overview
----------------------------
.. figure:: ../images/context_parallel/CP_overview.png
:alt: cp_overview
:align: center
Figure 1: A transformer layer running with TP2CP2. Communications next to Attention are for CP, others are for TP. (AG/RS: all-gather in forward and reduce-scatter in backward, RS/AG: reduce-scatter in forward and all-gather in backward, /AG: no-op in forward and all-gather in backward).
Context Parallelism ("CP") is a parallelization scheme on the dimension of sequence length. Unlike prior SP (sequence parallelism) which only splits the sequence of Dropout and LayerNorm activations, CP partitions the network inputs and all activations along sequence dimension. With CP, all modules except attention (e.g., Linear, LayerNorm, etc.) can work as usual without any changes, because they do not have inter-token operations. As for attention, the Q (query) of each token needs to compute with the KV (key and value) of all tokens in the same sequence. Hence, CP requires additional all-gather across GPUs to collect the full sequence of KV. Correspondingly, reduce-scatter should be applied to the activation gradients of KV in backward propagation. To reduce activation memory footprint, each GPU only stores the KV of a sequence chunk in forward and gathers KV again in backward. KV communication happens between a GPU and its counterparts in other TP groups. The all-gather and reduce-scatter are transformed to point-to-point communications in ring topology under the hood. Exchanging KV also can leverage MQA/GQA to reduce communication volumes, as they only have one or few attention heads for KV.
For example, in Figure 1, assuming the sequence length is 8K, each GPU processes 4K tokens. GPU0 and GPU2 compose a CP group and exchange KV with each other; the same happens between GPU1 and GPU3. CP is similar to `Ring Attention <https://arxiv.org/abs/2310.01889>`_ but provides better performance by (1) leveraging the latest OSS and cuDNN flash attention kernels, and (2) removing unnecessary computation resulting from lower-triangle causal masking and achieving optimal load balance among GPUs.
Context parallelism benefits
----------------------------
.. figure:: ../images/context_parallel/CP_results.png
:alt: cp_results
:align: center
Figure 2: Speedup of 175B GPT with various TP+CP combinations vs. full recompute (i.e., TP8CP1).
LLMs encounter OOM (out of memory) issues with long context (i.e., long sequence length) because the activation memory footprint grows linearly with sequence length. Recomputing activations in the backward pass can avoid OOM but also introduces significant overhead (~30% with full recompute). Enlarging TP (tensor model parallelism) can fix the OOM issue as well, but it potentially makes compute (e.g., Linear) too short to overlap communication latencies. To be clear, scaling out to more GPUs with bigger TP can hit the overlapping problem regardless of whether OOM happens.
CP addresses these issues better. With CP, each GPU computes on only a part of the sequence, which reduces both computation and communication by a factor of CP, so overlapping them is no longer a concern. The activation memory footprint per GPU is also CP times smaller, so the OOM issue disappears. As Figure 2 shows, combinations of TP and CP can achieve optimal performance by eliminating recompute overhead and making the best tradeoff between computation and communication.
Enabling context parallelism
----------------------------
CP support has been added to GPT. All models that share the GPT code path, such as Llama, should also be able to benefit from CP. CP can work with TP (tensor model parallelism), PP (pipeline model parallelism), and DP (data parallelism), where the total number of GPUs equals TPxCPxPPxDP. CP also works with different attention variants, including MHA/MQA/GQA and uni-directional or bi-directional masking.
CP is enabled by setting context_parallel_size=<CP_SIZE> on the command line. The default context_parallel_size is 1, which means CP is disabled. Running with CP requires Megatron-Core (>=0.5.0) and Transformer Engine (>=1.1).
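A minimal launch sketch with CP enabled is shown below; the script name and the other flags are illustrative, and ``--context-parallel-size`` is the switch described above:

.. code-block:: bash

   torchrun --nproc-per-node 8 pretrain_gpt.py \
       --tensor-model-parallel-size 2 \
       --context-parallel-size 2 \
       <other GPT training arguments>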