1. 25 Mar, 2024 1 commit
  2. 18 Mar, 2024 1 commit
  3. 13 Mar, 2024 1 commit
    • [devops] fix compatibility (#5444) · f2e8b9ef
      Hongxin Liu authored
      * [devops] fix compatibility
      
      * [hotfix] update compatibility test on pr
      
      * [devops] fix compatibility
      
      * [devops] record duration during comp test
      
      * [test] decrease test duration
      
      * fix falcon
      f2e8b9ef
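The "record duration during comp test" step above amounts to timing each compatibility test run. A minimal pure-Python sketch of that idea; `record_duration` and the dictionary-based log are hypothetical illustrations, not code from the actual workflow files.

```python
import time
from contextlib import contextmanager

# Hypothetical timing helper: records how long a block of test code takes,
# illustrating the "record duration during comp test" idea.
@contextmanager
def record_duration(label, log):
    start = time.perf_counter()
    try:
        yield
    finally:
        log[label] = time.perf_counter() - start

durations = {}
with record_duration("compatibility_test", durations):
    sum(range(1000))  # stand-in for the real test body
```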
  4. 04 Mar, 2024 1 commit
    • [example] add gpt2 benchmark example script. (#5295) · 29695cf7
      flybird11111 authored
      
      
      * benchmark gpt2
      
      * fix
      
      * [doc] fix typo in Colossal-LLaMA-2/README.md (#5247)
      
      * [workflow] fixed build CI (#5240)
      
      * polish
      
      * [ci] fixed booster test (#5251)
      
      * [ci] fixed ddp test (#5254)
      
      * polish
      
      * fix typo in applications/ColossalEval/README.md (#5250)
      
      * [ci] fix shardformer tests. (#5255)
      
      * fix ci
      
      * revert: revert p2p
      
      * feat: add enable_metadata_cache option
      
      * revert: enable t5 tests
      
      ---------
      Co-authored-by: Wenhao Chen <cwher@outlook.com>
      
      * [doc] fix doc typo (#5256)
      
      * [doc] fix annotation display
      
      * [doc] fix llama2 doc
      
      * [hotfix]: add pp sanity check and fix mbs arg (#5268)
      
      * fix: fix misleading mbs arg
      
      * feat: add pp sanity check
      
      * fix: fix 1f1b sanity check
      
      * [workflow] fixed incomplete bash command (#5272)
      
      * [workflow] fixed oom tests (#5275)
      
      * polish
      
      * [ci] fix test_hybrid_parallel_plugin_checkpoint_io.py (#5276)
      
      * fix ci
      
      * fix test
      
      * revert: revert p2p
      
      * feat: add enable_metadata_cache option
      
      * revert: enable t5 tests
      
      * fix
      
      ---------
      Co-authored-by: Wenhao Chen <cwher@outlook.com>
      
      * [shardformer] hybridparallelplugin support gradients accumulation. (#5246)
      
      * support gradients acc
      
      * fix
      
      * [hotfix] Fix ShardFormer test execution path when using sequence parallelism (#5230)
      
      * fix auto loading gpt2 tokenizer (#5279)
      
      * [doc] add llama2-13B display (#5285)
      
      * Update README.md
      
      * fix 13b typo
      
      ---------
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      
      * fix llama pretrain (#5287)
      
      * fix
      * benchmark gpt2
      
      * fix
      
      * [workflow] fixed build CI (#5240)
      
      * polish
      
      * [ci] fixed booster test (#5251)
      
      * fix
      
      * Update shardformer.py
      
      ---------
      Co-authored-by: digger yu <digger-yu@outlook.com>
      Co-authored-by: Frank Lee <somerlee.9@gmail.com>
      Co-authored-by: Wenhao Chen <cwher@outlook.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
      Co-authored-by: Michelle <97082656+MichelleMa8@users.noreply.github.com>
      Co-authored-by: Desperado-Jia <502205863@qq.com>
      29695cf7
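Commit #5246 above adds gradient accumulation support to HybridParallelPlugin. The core idea can be sketched in plain Python with a toy scalar model; the real plugin integration is more involved, and everything here is an illustrative stand-in: gradients from several micro-batches are averaged while the weights stay fixed, then a single optimizer step is applied.

```python
# Toy sketch of gradient accumulation: gradient of mean((w * x)**2)
# averaged over micro-batches, then applied in one update.
def grad(w, batch):
    # d/dw of mean((w * x)**2) over the batch
    return sum(2 * w * x * x for x in batch) / len(batch)

w, lr = 1.0, 0.1
micro_batches = [[1.0, 2.0], [0.5, 1.5], [2.0, 0.5], [1.0, 1.0]]

accum = 0.0
for batch in micro_batches:
    accum += grad(w, batch) / len(micro_batches)  # scale so grads average
w -= lr * accum  # one optimizer step per accumulation window
```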
  5. 27 Feb, 2024 1 commit
  6. 08 Feb, 2024 1 commit
  7. 07 Feb, 2024 1 commit
  8. 06 Feb, 2024 1 commit
  9. 02 Feb, 2024 1 commit
  10. 01 Feb, 2024 1 commit
  11. 25 Jan, 2024 1 commit
  12. 22 Jan, 2024 1 commit
  13. 17 Jan, 2024 3 commits
  14. 16 Jan, 2024 1 commit
  15. 15 Jan, 2024 1 commit
  16. 11 Jan, 2024 3 commits
  17. 10 Jan, 2024 1 commit
  18. 09 Jan, 2024 1 commit
  19. 08 Jan, 2024 2 commits
  20. 03 Jan, 2024 1 commit
  21. 22 Dec, 2023 1 commit
    • [pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134) · 4fa689fc
      Wenhao Chen authored
      * test: add more p2p tests
      
      * fix: remove send_forward_recv_forward as p2p op list need to use the same group
      
      * fix: make send and receive atomic
      
      * feat: update P2PComm fn
      
      * feat: add metadata cache in 1f1b
      
      * feat: add metadata cache in interleaved pp
      
      * feat: modify is_xx_stage fn
      
      * revert: add _broadcast_object_list
      
      * feat: add interleaved pp in llama policy
      
      * feat: set NCCL_BUFFSIZE in HybridParallelPlugin
      4fa689fc
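The metadata cache added in #5134 avoids re-exchanging tensor metadata (shape, dtype) before every pipeline send when shapes are static across steps. A pure-Python sketch of the caching logic; `P2PChannel` and its fields are hypothetical names, not Colossal-AI's actual API.

```python
# Hypothetical sketch: cache (shape, dtype) metadata so it is only
# exchanged when it changes, instead of before every tensor send.
class P2PChannel:
    def __init__(self):
        self.cached_meta = None
        self.meta_sends = 0  # counts metadata round-trips

    def send(self, shape, dtype):
        meta = (shape, dtype)
        if meta != self.cached_meta:   # cache miss: exchange metadata first
            self.meta_sends += 1
            self.cached_meta = meta
        # ...then send the tensor payload itself...

chan = P2PChannel()
for _ in range(10):
    chan.send((4, 8), "fp16")  # static shapes: metadata exchanged once
```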
  22. 12 Dec, 2023 1 commit
  23. 08 Dec, 2023 1 commit
  24. 30 Nov, 2023 1 commit
  25. 29 Nov, 2023 1 commit
  26. 28 Nov, 2023 1 commit
  27. 22 Nov, 2023 1 commit
  28. 20 Nov, 2023 3 commits
  29. 19 Nov, 2023 1 commit
    • [inference] Refactor inference architecture (#5057) · fd6482ad
      Xu Kai authored
      
      
      * [inference] support only TP (#4998)
      
      * support only tp
      
      * enable tp
      
      * add support for bloom (#5008)
      
      * [refactor] refactor gptq and smoothquant llama (#5012)
      
      * refactor gptq and smoothquant llama
      
      * fix import error
      
      * fix linear import torch-int
      
      * fix smoothquant llama import error
      
      * fix import accelerate error
      
      * fix bug
      
      * fix import smooth cuda
      
      * fix smoothcuda
      
      * [Inference Refactor] Merge chatglm2 with pp and tp (#5023)
      
      merge chatglm with pp and tp
      
      * [Refactor] remove useless inference code (#5022)
      
      * remove useless code
      
      * fix quant model
      
      * fix test import bug
      
      * mv original inference legacy
      
      * fix chatglm2
      
      * [Refactor] refactor policy search and quant type controlling in inference (#5035)
      
      * [inference] update readme (#5051)
      
      * update readme
      
      * update readme
      
      * fix architecture
      
      * fix table
      
      * fix table
      
      * [inference] update example (#5053)
      
      * fix run.sh
      
      * fix rebase bug
      
      * fix some errors
      
      * update readme
      
      * add some features
      
      * update interface
      
      * update readme
      
      * update benchmark
      
      * add requirements-infer
      
      ---------
      Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
      Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
      fd6482ad
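The "policy search and quant type controlling" refactor (#5035) mentioned above selects a model policy based on the requested quantization scheme. A minimal sketch of that dispatch pattern; `select_policy` and the policy names are illustrative, not the repository's actual API.

```python
# Hypothetical quant-type -> policy table; keys mirror the schemes
# mentioned in the commit messages (fp16 default, gptq, smoothquant).
POLICIES = {
    None: "fp16 shard policy",
    "gptq": "gptq llama policy",
    "smoothquant": "smoothquant llama policy",
}

def select_policy(quant_type):
    try:
        return POLICIES[quant_type]
    except KeyError:
        raise ValueError(f"unsupported quant type: {quant_type!r}")
```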
  30. 17 Nov, 2023 1 commit
  31. 16 Nov, 2023 2 commits
  32. 10 Nov, 2023 1 commit