1. 04 Mar, 2024 1 commit
    • [example] add gpt2 benchmark example script. (#5295) · 29695cf7
      flybird11111 authored
      
      
      * benchmark gpt2
      
      * fix
      
      fix
      
      fix
      
      fix
      
      * [doc] fix typo in Colossal-LLaMA-2/README.md (#5247)
      
      * [workflow] fixed build CI (#5240)
      
      * [workflow] fixed build CI
      
      * polish
      
      * polish
      
      * polish
      
      * polish
      
      * polish
      
      * [ci] fixed booster test (#5251)
      
      * [ci] fixed booster test
      
      * [ci] fixed booster test
      
      * [ci] fixed booster test
      
      * [ci] fixed ddp test (#5254)
      
      * [ci] fixed ddp test
      
      * polish
      
      * fix typo in applications/ColossalEval/README.md (#5250)
      
      * [ci] fix shardformer tests. (#5255)
      
      * fix ci
      
      fix
      
      * revert: revert p2p
      
      * feat: add enable_metadata_cache option
      
      * revert: enable t5 tests
      
      ---------
      Co-authored-by: Wenhao Chen <cwher@outlook.com>
      
      * [doc] fix doc typo (#5256)
      
      * [doc] fix annotation display
      
      * [doc] fix llama2 doc
      
      * [hotfix]: add pp sanity check and fix mbs arg (#5268)
      
      * fix: fix misleading mbs arg
      
      * feat: add pp sanity check
      
      * fix: fix 1f1b sanity check
      
      * [workflow] fixed incomplete bash command (#5272)
      
      * [workflow] fixed oom tests (#5275)
      
      * [workflow] fixed oom tests
      
      * polish
      
      * polish
      
      * polish
      
      * [ci] fix test_hybrid_parallel_plugin_checkpoint_io.py (#5276)
      
      * fix ci
      
      fix
      
      * fix test
      
      * revert: revert p2p
      
      * feat: add enable_metadata_cache option
      
      * revert: enable t5 tests
      
      * fix
      
      ---------
      Co-authored-by: Wenhao Chen <cwher@outlook.com>
      
      * [shardformer] HybridParallelPlugin supports gradient accumulation. (#5246)
      
      * support gradients acc
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      * fix
      
      fix
      
      * fix
      
      fix
      
      fix
      
      * [hotfix] Fix ShardFormer test execution path when using sequence parallelism (#5230)
      
      * fix auto loading gpt2 tokenizer (#5279)
      
      * [doc] add llama2-13B display (#5285)
      
      * Update README.md
      
      * fix 13b typo
      
      ---------
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      
      * fix llama pretrain (#5287)
      
      * fix
      
      * fix
      
      * fix
      
      fix
      
      * fix
      
      fix
      
      fix
      
      * fix
      
      fix
      
      * benchmark gpt2
      
      * fix
      
      fix
      
      fix
      
      fix
      
      * [workflow] fixed build CI (#5240)
      
      * [workflow] fixed build CI
      
      * polish
      
      * polish
      
      * polish
      
      * polish
      
      * polish
      
      * [ci] fixed booster test (#5251)
      
      * [ci] fixed booster test
      
      * [ci] fixed booster test
      
      * [ci] fixed booster test
      
      * fix
      
      fix
      
      * fix
      
      fix
      
      fix
      
      * fix
      
      * fix
      
      fix
      
      fix
      
      fix
      
      fix
      
      * fix
      
      * Update shardformer.py
      
      ---------
      Co-authored-by: digger yu <digger-yu@outlook.com>
      Co-authored-by: Frank Lee <somerlee.9@gmail.com>
      Co-authored-by: Wenhao Chen <cwher@outlook.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
      Co-authored-by: Michelle <97082656+MichelleMa8@users.noreply.github.com>
      Co-authored-by: Desperado-Jia <502205863@qq.com>
      29695cf7
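      The #5295 entry above adds a GPT-2 benchmark built on the Booster/HybridParallelPlugin API. A minimal sketch of that kind of setup follows; the plugin arguments, model size, and the toy measurement loop are illustrative assumptions, not the script's actual flags.

      # Hedged sketch of a GPT-2 throughput benchmark with HybridParallelPlugin.
      # Plugin arguments and batch shapes are illustrative, not the example's defaults.
      import time
      import torch
      import colossalai
      from colossalai.booster import Booster
      from colossalai.booster.plugin import HybridParallelPlugin
      from transformers import GPT2Config, GPT2LMHeadModel

      colossalai.launch_from_torch(config={})

      model = GPT2LMHeadModel(GPT2Config(n_layer=12))
      optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

      plugin = HybridParallelPlugin(tp_size=2, pp_size=1, zero_stage=1, precision="fp16")
      booster = Booster(plugin=plugin)
      model, optimizer, _, _, _ = booster.boost(model, optimizer)

      input_ids = torch.randint(0, 50257, (8, 1024), device="cuda")
      start = time.time()
      for _ in range(10):
          loss = model(input_ids=input_ids, labels=input_ids).loss
          booster.backward(loss, optimizer)
          optimizer.step()
          optimizer.zero_grad()
      torch.cuda.synchronize()
      print(f"tokens/s: {10 * input_ids.numel() / (time.time() - start):.0f}")
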
  2. 27 Feb, 2024 1 commit
  3. 30 Jan, 2024 1 commit
  4. 19 Jan, 2024 1 commit
  5. 15 Jan, 2024 1 commit
  6. 11 Jan, 2024 1 commit
  7. 09 Jan, 2024 1 commit
  8. 08 Jan, 2024 1 commit
    • [npu] use extension for op builder (#5172) · dd2c28a3
      Xuanlei Zhao authored
      * update extension
      
      * update cpu adam
      
      * update is
      
      * add doc for cpu adam
      
      * update kernel
      
      * update commit
      
      * update flash
      
      * update memory efficient
      
      * update flash attn
      
      * update flash attention loader
      
      * update api
      
      * fix
      
      * update doc
      
      * update example time limit
      
      * reverse change
      
      * fix doc
      
      * remove useless kernel
      
      * fix
      
      * not use warning
      
      * update
      
      * update
      dd2c28a3
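      The #5172 entry moves the custom kernels (CPU Adam, flash attention, and so on) onto the extension-based op builder, so a kernel is JIT-built or loaded from cache the first time it is needed rather than at install time. A small sketch of what that looks like from the user side, assuming the CPUAdam optimizer exposed at colossalai.nn.optimizer:

      # Sketch: CPUAdam pulls its fused C++ kernel in through the extension/op-builder
      # path on first construction; no manual kernel compilation step is required.
      import torch
      from colossalai.nn.optimizer import CPUAdam

      model = torch.nn.Linear(1024, 1024)
      # The fused CPU kernel is compiled (or loaded from the build cache) lazily here.
      optimizer = CPUAdam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

      loss = model(torch.randn(4, 1024)).sum()
      loss.backward()
      optimizer.step()
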
  9. 02 Jan, 2024 1 commit
  10. 22 Dec, 2023 1 commit
    • [pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134) · 4fa689fc
      Wenhao Chen authored
      * test: add more p2p tests
      
      * fix: remove send_forward_recv_forward as p2p op list need to use the same group
      
      * fix: make send and receive atomic
      
      * feat: update P2PComm fn
      
      * feat: add metadata cache in 1f1b
      
      * feat: add metadata cache in interleaved pp
      
      * feat: modify is_xx_stage fn
      
      * revert: add _broadcast_object_list
      
      * feat: add interleaved pp in llama policy
      
      * feat: set NCCL_BUFFSIZE in HybridParallelPlugin
      4fa689fc
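      The #5134 entry adds p2p metadata caching and interleaved pipeline support for llama. A hedged sketch of turning both on through HybridParallelPlugin; the argument names below (pp_style, num_model_chunks, enable_metadata_cache) follow the commit messages and may differ slightly between versions.

      # Sketch: pipeline configuration with metadata cache and interleaved schedule.
      # Treat the exact keyword names as assumptions taken from the commit messages.
      from colossalai.booster.plugin import HybridParallelPlugin

      plugin = HybridParallelPlugin(
          tp_size=1,
          pp_size=4,
          pp_style="interleaved",      # default is the 1f1b schedule
          num_model_chunks=2,          # >1 model chunk per stage enables interleaving
          num_microbatches=8,
          enable_metadata_cache=True,  # cache p2p tensor metadata after the first step
          precision="bf16",
      )
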
  11. 08 Dec, 2023 1 commit
  12. 28 Nov, 2023 2 commits
  13. 27 Nov, 2023 1 commit
  14. 22 Nov, 2023 2 commits
  15. 20 Nov, 2023 2 commits
  16. 18 Nov, 2023 1 commit
  17. 16 Nov, 2023 1 commit
    • [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017) · b2ad0d9e
      Elsa Granger authored
      
      * Use p2p
      
      * Cannot send p2p bidirectionally
      
      * Refactor tensor creation and serialization in P2P communication
      
      * Fix llama forward args in flash attention
      
      * Add flop estimate from megatron
      
      * Support loading weight not in weight_map when strict=False in hybrid_parallel
      
      * Use send_forward_recv_backward, etc in 1f1b
      
      * Use dataclass for metadata
      Remove torch.cuda.synchronize() as suggested
      
      * Add comment about the torch.cuda.synchronize for potential error
      
      * Typo
      
      * Update hybrid_parallel_checkpoint_io.py
      
      * Update p2p.py
      
      * Update one_f_one_b.py
      
      * Update p2p.py
      
      ---------
      Co-authored-by: flybird11111 <1829166702@qq.com>
      b2ad0d9e
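      The #5017 entry adds a "flop estimation by megatron" to the llama benchmark. The usual Megatron-LM estimate for one forward+backward iteration of a GPT-style decoder (no activation recomputation, standard 4x MLP) is sketched below; the helper name is an assumption, and llama's SwiGLU MLP and grouped-query attention make the real count deviate somewhat from this formula.

      # Sketch: Megatron-style training FLOP estimate (forward + backward, no
      # activation recomputation; recomputation changes 72 -> 96 and 12 -> 16).
      def megatron_train_flops(batch_size, seq_len, num_layers, hidden_size, vocab_size):
          B, s, l, h, V = batch_size, seq_len, num_layers, hidden_size, vocab_size
          return 72 * B * s * l * h**2 * (1 + s / (6 * h) + V / (12 * l * h))

      # Example: a LLaMA-2-7B-like shape, 8 sequences of 4096 tokens per step.
      print(megatron_train_flops(8, 4096, 32, 4096, 32000) / 1e12, "TFLOPs per step")
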
  18. 09 Nov, 2023 1 commit
    • [moe]: fix ep/tp tests, add hierarchical all2all (#4982) · 72444127
      Wenhao Chen authored
      * fix: add warning for EP different behavior
      
      * fix: use shard_data in ep & tp model
      
      * to: add used_capacity
      
      * fix: fix router test
      
      * feat: add create_ep_node_group
      
      * feat: add create_ep_hierarchical_group fn
      
      * feat: add HierarchicalAllToAll
      
      * test: add hierarchical all2all test
      
      * fix: fix test errors
      
      * fix: simplify create_ep_hierarchical_group
      
      * fix: add hierarchical_alltoall arg
      
      * fix: fix environ typo
      
      * revert: revert process mesh order
      
      * to: add todo mark
      
      * fix: skip hierarchical_comm if torch < 1.13.1
      72444127
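      The #4982 entry replaces the flat all-to-all used for MoE token dispatch with a hierarchical one: tokens are first exchanged inside each node over the fast intra-node links, then across nodes. The sketch below shows only that two-stage structure with torch.distributed; the real implementation also permutes tokens between the stages and falls back to a flat all-to-all on torch < 1.13.1, both of which are omitted here.

      # Conceptual sketch of hierarchical all-to-all: intra-node exchange, then
      # inter-node exchange. The permutation that routes each chunk to the right
      # expert between the two stages is deliberately left out.
      import torch
      import torch.distributed as dist

      def hierarchical_all_to_all(tokens, intra_node_group, inter_node_group):
          # Stage 1: exchange within the node (high-bandwidth NVLink/PCIe peers).
          staged = torch.empty_like(tokens)
          dist.all_to_all_single(staged, tokens, group=intra_node_group)
          # Stage 2: exchange across nodes (slower network, now with fewer peers).
          out = torch.empty_like(staged)
          dist.all_to_all_single(out, staged, group=inter_node_group)
          return out
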
  19. 08 Nov, 2023 1 commit
    • [moe] support optimizer checkpoint (#5015) · f71e63b0
      Xuanlei Zhao authored
      * Refactor MoE Manager setup method
      
      * unshard optim ckpt
      
      * optim io
      
      * update transformer version
      
      * update requirements
      
      * update ckpt
      
      * update ckpt
      
      * update ckpt
      
      * fix engine
      
      * fix engine
      f71e63b0
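      The #5015 entry extends checkpointing so that MoE optimizer state can be saved and restored. The MoE-specific gathering of expert state lives inside the plugin's CheckpointIO; the user-facing calls are the generic Booster checkpoint API sketched below (paths are placeholders, and a non-MoE plugin is used only to keep the sketch self-contained).

      # Sketch: saving and reloading optimizer state through the Booster API.
      import torch
      import colossalai
      from colossalai.booster import Booster
      from colossalai.booster.plugin import LowLevelZeroPlugin

      colossalai.launch_from_torch(config={})
      model = torch.nn.Linear(512, 512).cuda()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

      booster = Booster(plugin=LowLevelZeroPlugin(stage=1, precision="fp16"))
      model, optimizer, _, _, _ = booster.boost(model, optimizer)

      # Take one step so the optimizer has state worth checkpointing.
      loss = model(torch.randn(4, 512, device="cuda")).sum()
      booster.backward(loss, optimizer)
      optimizer.step()

      # shard=True writes a sharded checkpoint instead of a single file.
      booster.save_model(model, "ckpt/model", shard=True)
      booster.save_optimizer(optimizer, "ckpt/optim", shard=True)
      booster.load_model(model, "ckpt/model")
      booster.load_optimizer(optimizer, "ckpt/optim")
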
  20. 02 Nov, 2023 1 commit
  21. 07 Oct, 2023 1 commit
  22. 21 Sep, 2023 1 commit
  23. 20 Sep, 2023 1 commit
    • [chat]: update rm, add wandb and fix bugs (#4471) · 7b9b8644
      Wenhao Chen authored
      
      
      * feat: modify forward fn of critic and reward model
      
      * feat: modify calc_action_log_probs
      
      * to: add wandb in sft and rm trainer
      
      * feat: update train_sft
      
      * feat: update train_rm
      
      * style: modify type annotation and add warning
      
      * feat: pass tokenizer to ppo trainer
      
      * to: modify trainer base and maker base
      
      * feat: add wandb in ppo trainer
      
      * feat: pass tokenizer to generate
      
      * test: update generate fn tests
      
      * test: update train tests
      
      * fix: remove action_mask
      
      * feat: remove unused code
      
      * fix: fix wrong ignore_index
      
      * fix: fix mock tokenizer
      
      * chore: update requirements
      
      * revert: modify make_experience
      
      * fix: fix inference
      
      * fix: add padding side
      
      * style: modify _on_learn_batch_end
      
      * test: use mock tokenizer
      
      * fix: use bf16 to avoid overflow
      
      * fix: fix workflow
      
      * [chat] fix gemini strategy
      
      * [chat] fix
      
      * sync: update colossalai strategy
      
      * fix: fix args and model dtype
      
      * fix: fix checkpoint test
      
      * fix: fix requirements
      
      * fix: fix missing import and wrong arg
      
      * fix: temporarily skip gemini test in stage 3
      
      * style: apply pre-commit
      
      * fix: temporarily skip gemini test in stage 1&2
      
      ---------
      Co-authored-by: Mingyan Jiang <1829166702@qq.com>
      7b9b8644
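      The #4471 entry wires Weights & Biases logging into the SFT/RM/PPO trainers. The logging pattern itself is the standard wandb API; the metric names and the stand-in training loop below are placeholders, not the trainers' actual keys.

      # Sketch: minimal wandb logging of the kind the chat trainers adopt.
      import random
      import wandb

      wandb.init(project="coati-rm", config={"lr": 9e-6, "batch_size": 4})

      for step in range(100):
          # Placeholder metrics standing in for a real reward-model update.
          loss, acc = random.random(), random.random()
          wandb.log({"train/loss": loss, "train/acc": acc}, step=step)

      wandb.finish()
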
  24. 19 Sep, 2023 1 commit
  25. 18 Sep, 2023 2 commits
    • github-actions[bot]
    • [legacy] clean up legacy code (#4743) · b5f9e37c
      Hongxin Liu authored
      * [legacy] remove outdated codes of pipeline (#4692)
      
      * [legacy] remove cli of benchmark and update optim (#4690)
      
      * [legacy] remove cli of benchmark and update optim
      
      * [doc] fix cli doc test
      
      * [legacy] fix engine clip grad norm
      
      * [legacy] remove outdated colo tensor (#4694)
      
      * [legacy] remove outdated colo tensor
      
      * [test] fix test import
      
      * [legacy] move outdated zero to legacy (#4696)
      
      * [legacy] clean up utils (#4700)
      
      * [legacy] clean up utils
      
      * [example] update examples
      
      * [legacy] clean up amp
      
      * [legacy] fix amp module
      
      * [legacy] clean up gpc (#4742)
      
      * [legacy] clean up context
      
      * [legacy] clean core, constants and global vars
      
      * [legacy] refactor initialize
      
      * [example] fix examples ci
      
      * [example] fix examples ci
      
      * [legacy] fix tests
      
      * [example] fix gpt example
      
      * [example] fix examples ci
      
      * [devops] fix ci installation
      
      * [example] fix examples ci
      b5f9e37c
  26. 15 Sep, 2023 2 commits
    • [example] llama2 add fine-tune example (#4673) · 4c4482f3
      flybird11111 authored
      * [shardformer] update shardformer readme
      
      [shardformer] update shardformer readme
      
      [shardformer] update shardformer readme
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] change dataset
      
      * [shardformer] change dataset
      
      * [shardformer] fix CI
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      [example] update opt example
      
      [example] resolve comments
      
      fix
      
      fix
      
      * [example] llama2 add finetune example
      
      * [example] llama2 add finetune example
      
      * [example] llama2 add finetune example
      
      * [example] llama2 add finetune example
      
      * fix
      
      * update llama2 example
      
      * update llama2 example
      
      * fix
      
      * update llama2 example
      
      * update llama2 example
      
      * update llama2 example
      
      * update llama2 example
      
      * update llama2 example
      
      * update llama2 example
      
      * Update requirements.txt
      
      * update llama2 example
      
      * update llama2 example
      
      * update llama2 example
      4c4482f3
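      The #4673 entry adds a llama2 fine-tuning example driven by the Booster plugins. A condensed sketch of that flow, assuming the GeminiPlugin path and a deliberately scaled-down model so the sketch runs anywhere (swap in LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf") and a real tokenized dataloader for actual fine-tuning):

      # Sketch: fine-tuning a llama-style model with GeminiPlugin + HybridAdam.
      import torch
      import colossalai
      from colossalai.booster import Booster
      from colossalai.booster.plugin import GeminiPlugin
      from colossalai.nn.optimizer import HybridAdam
      from transformers import LlamaConfig, LlamaForCausalLM

      colossalai.launch_from_torch(config={})

      # Tiny config for illustration only; real runs load the pretrained 7B/13B weights.
      model = LlamaForCausalLM(LlamaConfig(num_hidden_layers=4, hidden_size=512,
                                           intermediate_size=1376, num_attention_heads=8))
      optimizer = HybridAdam(model.parameters(), lr=2e-5, weight_decay=0.1)

      booster = Booster(plugin=GeminiPlugin(precision="bf16"))
      model, optimizer, _, _, _ = booster.boost(model, optimizer)

      model.train()
      for _ in range(3):
          ids = torch.randint(0, 32000, (2, 256), device="cuda")
          loss = model(input_ids=ids, labels=ids).loss
          booster.backward(loss, optimizer)
          optimizer.step()
          optimizer.zero_grad()
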
    • [example] add gpt2 HybridParallelPlugin example (#4653) · 608cffae
      Bin Jia authored
      * add gpt2 HybridParallelPlugin example
      
      * update readme and testci
      
      * update test ci
      
      * fix test_ci bug
      
      * update requirements
      
      * add requirements
      
      * update requirements
      
      * add requirement
      
      * rename file
      608cffae
  27. 14 Sep, 2023 1 commit
  28. 13 Sep, 2023 1 commit
  29. 11 Sep, 2023 1 commit
    • [legacy] move communication and nn to legacy and refactor logger (#4671) · 554aa959
      Hongxin Liu authored
      * [legacy] move communication to legacy (#4640)
      
      * [legacy] refactor logger and clean up legacy codes (#4654)
      
      * [legacy] make logger independent to gpc
      
      * [legacy] make optim independent to registry
      
      * [legacy] move test engine to legacy
      
      * [legacy] move nn to legacy (#4656)
      
      * [legacy] move nn to legacy
      
      * [checkpointio] fix save hf config
      
      * [test] remove useless rpc pp test
      
      * [legacy] fix nn init
      
      * [example] skip tutorial hybrid parallel example
      
      * [devops] test doc check
      
      * [devops] test doc check
      554aa959
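      The #4671 entry makes the distributed logger independent of the legacy global context. Typical usage after the refactor, as a minimal sketch:

      # Sketch: the distributed logger no longer needs the legacy gpc machinery.
      import colossalai
      from colossalai.logging import get_dist_logger

      colossalai.launch_from_torch(config={})
      logger = get_dist_logger()
      logger.info("training started", ranks=[0])  # emit this message on rank 0 only
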
  30. 09 Sep, 2023 1 commit
    • [shardformer] update llama2/opt finetune example and fix llama2 policy (#4645) · 7486ed7d
      flybird11111 authored
      * [shardformer] update shardformer readme
      
      [shardformer] update shardformer readme
      
      [shardformer] update shardformer readme
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] change dataset
      
      * [shardformer] change dataset
      
      * [shardformer] fix CI
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      [example] update opt example
      
      [example] resolve comments
      
      fix
      
      fix
      7486ed7d
  31. 07 Sep, 2023 1 commit
  32. 05 Sep, 2023 3 commits
  33. 04 Sep, 2023 1 commit
    • [shardformer] update bert finetune example with HybridParallelPlugin (#4584) · 0a94fcd3
      flybird11111 authored
      
      
      * [shardformer] fix opt test hanging
      
      * fix
      
      * test
      
      * test
      
      * test
      
      * fix test
      
      * fix test
      
      * remove print
      
      * add fix
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] fix epoch change
      
      * [shardformer] broadcast add pp group
      
      * [shardformer] fix opt test hanging
      
      * fix
      
      * test
      
      * test
      
      * [shardformer] zero1+pp and the corresponding tests (#4517)
      
      * pause
      
      * finish pp+zero1
      
      * Update test_shard_vit.py
      
      * [shardformer/fix overlap bug] fix overlap bug, add overlap as an option in shardco… (#4516)
      
      * fix overlap bug and support bert, add overlap as an option in shardconfig
      
      * support overlap for chatglm and bloom
      
      * [shardformer] fix emerged bugs after updating transformers (#4526)
      
      * test
      
      * fix test
      
      * fix test
      
      * remove print
      
      * add fix
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] Add overlap support for gpt2 (#4535)
      
      * add overlap support for gpt2
      
      * remove unused code
      
      * remove unused code
      
      * [shardformer] support pp+tp+zero1 tests (#4531)
      
      * [shardformer] fix opt test hanging
      
      * fix
      
      * test
      
      * test
      
      * test
      
      * fix test
      
      * fix test
      
      * remove print
      
      * add fix
      
      * [shardformer] pp+tp+zero1
      
      [shardformer] pp+tp+zero1
      
      [shardformer] pp+tp+zero1
      
      [shardformer] pp+tp+zero1
      
      [shardformer] pp+tp+zero1
      
      [shardformer] pp+tp+zero1
      
      * [shardformer] pp+tp+zero1
      
      * [shardformer] pp+tp+zero1
      
      * [shardformer] pp+tp+zero1
      
      * [shardformer] pp+tp+zero1
      
      * [shardformer] fix submodule replacement bug when enabling pp (#4544)
      
      * [shardformer] support sharded optimizer checkpointIO of HybridParallelPlugin (#4540)
      
      * implement sharded optimizer saving
      
      * add more param info
      
      * finish implementation of sharded optimizer saving
      
      * fix bugs in optimizer sharded saving
      
      * add pp+zero test
      
      * param group loading
      
      * greedy loading of optimizer
      
      * fix bug when loading
      
      * implement optimizer sharded saving
      
      * add optimizer test & arrange checkpointIO utils
      
      * fix gemini sharding state_dict
      
      * add verbose option
      
      * add loading of master params
      
      * fix typehint
      
      * fix master/working mapping in fp16 amp
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] add bert finetune example
      
      * [shardformer] fix epoch change
      
      * [shardformer] broadcast add pp group
      
      * rebase feature/shardformer
      
      * update pipeline
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] bert finetune fix
      
      * [shardformer] add all_reduce operation to loss
      
      add all_reduce operation to loss
      
      * [shardformer] make compatible with pytree.
      
      make compatible with pytree.
      
      * [shardformer] disable tp
      
      disable tp
      
      * [shardformer] add 3d plugin to ci test
      
      * [shardformer] update num_microbatches to None
      
      * [shardformer] update microbatchsize
      
      * [shardformer] update assert
      
      * update scheduler
      
      * update scheduler
      
      ---------
      Co-authored-by: Jianghai <72591262+CjhHa1@users.noreply.github.com>
      Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
      Co-authored-by: Baizhou Zhang <eddiezhang@pku.edu.cn>
      0a94fcd3
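      The #4584 entry mentions adding an all_reduce over the loss so that every data-parallel rank in the bert fine-tune reports the same value. The usual pattern is sketched below; the function name is an assumption, and the data-parallel group handle is assumed to come from the plugin (e.g. plugin.dp_group).

      # Sketch: average a per-rank loss across the data-parallel group so logging
      # and metric computation see one consistent number on every rank.
      import torch
      import torch.distributed as dist

      def average_loss_across_dp(loss: torch.Tensor, dp_group) -> torch.Tensor:
          reduced = loss.detach().clone()
          dist.all_reduce(reduced, op=dist.ReduceOp.SUM, group=dp_group)
          reduced /= dist.get_world_size(group=dp_group)
          return reduced
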