1. 02 Nov, 2023 2 commits
  2. 01 Nov, 2023 1 commit
    • [Pipeline Inference] Merge pp with tp (#4993) · b6696beb
      Bin Jia authored
      * refactor pipeline into new CaiInferEngine
      
      * update llama modeling forward
      
      * merge tp with pp
      
      * update docstring
      
      * optimize test workflow and example
      
      * fix typo
      
      * add assert and todo
      b6696beb
  3. 30 Oct, 2023 2 commits
    • [Kernels] Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965) · 459a88c8
      Cuiqing Li authored
      
      * adding flash-decoding
      
      * clean
      
      * adding kernel
      
      * adding flash-decoding
      
      * add integration
      
      * add
      
      * adding kernel
      
      * adding kernel
      
      * adding triton 2.1.0 features for inference
      
      * update bloom triton kernel
      
      * remove useless vllm kernels
      
      * clean codes
      
      * fix
      
      * adding files
      
      * fix readme
      
      * update llama flash-decoding
      
      ---------
      Co-authored-by: cuiqing.li <lixx336@gmail.com>
      459a88c8
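      Flash-decoding for token (single-query) attention splits the KV cache along the sequence dimension, computes a partial attention result per chunk, and merges the partials with a log-sum-exp rescale. A minimal NumPy sketch of the combine step (illustrative only; the kernels added in this PR are Triton):

      ```python
      import numpy as np

      def attention_ref(q, K, V):
          # standard single-query softmax attention over the whole KV cache
          s = K @ q
          w = np.exp(s - s.max())
          return (w / w.sum()) @ V

      def flash_decoding(q, K, V, num_splits=4):
          # split the KV sequence, attend per chunk, then merge partials
          # with a log-sum-exp rescale so the result matches full softmax
          outs, maxes, sums = [], [], []
          for Kc, Vc in zip(np.array_split(K, num_splits),
                            np.array_split(V, num_splits)):
              s = Kc @ q                      # chunk-local scores
              m = s.max()
              e = np.exp(s - m)
              outs.append(e @ Vc)             # unnormalized chunk output
              maxes.append(m)
              sums.append(e.sum())
          g = max(maxes)                      # global max for stable rescale
          num = sum(np.exp(m - g) * o for m, o in zip(maxes, outs))
          den = sum(np.exp(m - g) * z for m, z in zip(maxes, sums))
          return num / den
      ```

      Because each chunk's partials carry their own running max and sum, the chunks can be processed by independent thread blocks and reduced afterwards, which is what makes decoding with long KV caches parallel.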
    • [Inference] Dynamic Batching Inference, online and offline (#4953) · cf579ff4
      Jianghai authored
      
      
      * [inference] Dynamic Batching for Single and Multiple GPUs (#4831)
      
      * finish batch manager
      
      * 1
      
      * first
      
      * fix
      
      * fix dynamic batching
      
      * llama infer
      
      * finish test
      
      * support generating with different lengths
      
      * del prints
      
      * del prints
      
      * fix
      
      * fix bug
      
      ---------
      
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
      
      * [inference] Async dynamic batching  (#4894)
      
      * finish input and output logic
      
      * add generate
      
      * test forward
      
      * 1
      
      * [inference]Re push async dynamic batching (#4901)
      
      * adapt to ray server
      
      * finish async
      
      * finish test
      
      * del test
      
      ---------
      Co-authored-by: yuehuayingxueluo <867460659@qq.com>
      
      * Revert "[inference]Re push async dynamic batching (#4901)" (#4905)
      
      This reverts commit fbf3c09e673794ed18c91d4bab1a7dfea052e95a.
      
      * Revert "[inference] Async dynamic batching  (#4894)"
      
      This reverts commit fced14025043e29ce816b315f440601188f7f79f.
      
      * Revert "[inference] Async dynamic batching  (#4894)" (#4909)
      
      This reverts commit fced14025043e29ce816b315f440601188f7f79f.
      
      * Add Ray Distributed Environment Init Scripts
      
      * support DynamicBatchManager base function
      
      * revert _set_tokenizer version
      
      * add driver async generate
      
      * add async test
      
      * fix bugs in test_ray_dist.py
      
      * add get_tokenizer.py
      
      * fix code style
      
      * fix bugs about No module named 'pydantic' in ci test
      
      * fix bugs in ci test
      
      * fix bugs in ci test
      
      * fix bugs in ci test
      
      * [infer]Add Ray Distributed Environment Init Scripts (#4911)
      
      * Revert "[inference] Async dynamic batching  (#4894)"
      
      This reverts commit fced14025043e29ce816b315f440601188f7f79f.
      
      * Add Ray Distributed Environment Init Scripts
      
      * support DynamicBatchManager base function
      
      * revert _set_tokenizer version
      
      * add driver async generate
      
      * add async test
      
      * fix bugs in test_ray_dist.py
      
      * add get_tokenizer.py
      
      * fix code style
      
      * fix bugs about No module named 'pydantic' in ci test
      
      * fix bugs in ci test
      
      * fix bugs in ci test
      
      * fix bugs in ci test
      
      * support dynamic batch for bloom model and is_running function
      
      * [Inference]Test for new Async engine (#4935)
      
      * infer engine
      
      * infer engine
      
      * test engine
      
      * test engine
      
      * new manager
      
      * change step
      
      * add
      
      * test
      
      * fix
      
      * fix
      
      * finish test
      
      * finish test
      
      * finish test
      
      * finish test
      
      * add license
      
      ---------
      Co-authored-by: yuehuayingxueluo <867460659@qq.com>
      
      * add assertion for config (#4947)
      
      * [Inference] Finish dynamic batching offline test (#4948)
      
      * test
      
      * fix test
      
      * fix quant
      
      * add default
      
      * fix
      
      * fix some bugs
      
      * fix some bugs
      
      * fix
      
      * fix bug
      
      * fix bugs
      
      * reset param
      
      ---------
      Co-authored-by: yuehuayingxueluo <867460659@qq.com>
      Co-authored-by: Cuiqing Li <lixx3527@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
      cf579ff4
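      Dynamic (continuous) batching admits waiting requests into free batch slots as running sequences finish, instead of padding a fixed batch until its slowest member completes. A minimal scheduling sketch with hypothetical names (the DynamicBatchManager in this PR additionally handles tokenization, the KV cache, and Ray workers):

      ```python
      from collections import deque

      class SimpleDynamicBatcher:
          """Sketch of dynamic batching: finished sequences leave the batch
          each step and waiting requests immediately fill the free slots."""

          def __init__(self, max_batch_size):
              self.max_batch_size = max_batch_size
              self.waiting = deque()
              self.running = []

          def add_request(self, req_id, max_new_tokens):
              self.waiting.append({"id": req_id, "remaining": max_new_tokens})

          def step(self):
              # admit waiting requests into free batch slots
              while self.waiting and len(self.running) < self.max_batch_size:
                  self.running.append(self.waiting.popleft())
              # one decode step: every running request emits one token
              finished = [r["id"] for r in self.running if r["remaining"] == 1]
              for r in self.running:
                  r["remaining"] -= 1
              self.running = [r for r in self.running if r["remaining"] > 0]
              return finished
      ```

      Each call to `step` corresponds to one forward pass of the batched model; the async/online variant wraps the same loop behind a request queue.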
  4. 27 Oct, 2023 1 commit
    • [Pipeline inference] Combine kvcache with pipeline inference (#4938) · 1db67276
      Bin Jia authored
      * merge kvcache with pipeline inference and refactor the code structure
      
      * support ppsize > 2
      
      * refactor pipeline code
      
      * do pre-commit
      
      * modify benchmark
      
      * fix benchmark
      
      * polish code
      
      * add docstring and update readme
      
      * refactor the code
      
      * fix some logic bug of ppinfer
      
      * polish readme
      
      * fix typo
      
      * skip infer test
      1db67276
  5. 20 Oct, 2023 1 commit
  6. 19 Oct, 2023 1 commit
    • [Refactor] Integrated some lightllm kernels into token-attention (#4946) · 3a41e830
      Cuiqing Li authored
      
      
      * add some req for inference
      
      * clean codes
      
      * add codes
      
      * add some lightllm deps
      
      * clean codes
      
      * hello
      
      * delete rms files
      
      * add some comments
      
      * add comments
      
      * add doc
      
      * add lightllm deps
      
      * add lightllm chatglm2 kernels
      
      * add lightllm chatglm2 kernels
      
      * replace rotary embedding with lightllm kernel
      
      * add some comments
      
      * add some comments
      
      * add some comments
      
      * add
      
      * replace fwd kernel att1
      
      * fix a arg
      
      * add
      
      * add
      
      * fix token attention
      
      * add some comments
      
      * clean codes
      
      * modify comments
      
      * fix readme
      
      * fix bug
      
      * fix bug
      
      ---------
      Co-authored-by: cuiqing.li <lixx336@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
      3a41e830
  7. 18 Oct, 2023 3 commits
  8. 17 Oct, 2023 1 commit
    • [gemini] support gradient accumulation (#4869) · 21ba89ca
      Baizhou Zhang authored
      * add test
      
      * fix no_sync bug in low level zero plugin
      
      * fix test
      
      * add argument for grad accum
      
      * add grad accum in backward hook for gemini
      
      * finish implementation, rewrite tests
      
      * fix test
      
      * skip stuck model in low level zero test
      
      * update doc
      
      * optimize communication & fix gradient checkpoint
      
      * modify doc
      
      * cleaning codes
      
      * update cpu adam fp16 case
      21ba89ca
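      Gradient accumulation runs several micro-batch backward passes, skipping gradient synchronization (no_sync-style hooks) until the window boundary, then applies one optimizer step on the summed gradients. A framework-free sketch of the control flow, with hypothetical callback names standing in for backward and optimizer step:

      ```python
      def train_with_grad_accum(micro_batches, accum_steps, grad_fn, apply_update):
          """Accumulate micro-batch gradients; update once per window.

          grad_fn(mb) stands in for a backward pass returning a gradient;
          apply_update(g) stands in for optimizer.step() on the summed grad.
          """
          acc, updates = 0.0, []
          for i, mb in enumerate(micro_batches, 1):
              # scale by 1/accum_steps so the update matches one large batch
              acc += grad_fn(mb) / accum_steps
              if i % accum_steps == 0:      # sync + step only at the boundary
                  updates.append(apply_update(acc))
                  acc = 0.0
          return updates
      ```

      The Gemini-specific difficulty fixed here is that gradients live in chunked, sharded storage, so the accumulation has to happen inside backward hooks rather than on ordinary `.grad` tensors.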
  9. 16 Oct, 2023 2 commits
    • [kernel] support pure fp16 for cpu adam and update gemini optim tests (#4921) · 4f68b3f1
      Hongxin Liu authored
      * [kernel] support pure fp16 for cpu adam (#4896)
      
      * [kernel] fix cpu adam kernel for pure fp16 and update tests (#4919)
      
      * [kernel] fix cpu adam
      
      * [test] update gemini optim test
      4f68b3f1
    • [inference] Add smoothquant for llama (#4904) · 611a5a80
      Xu Kai authored
      * [inference] add int8 rotary embedding kernel for smoothquant (#4843)
      
      * [inference] add smoothquant llama attention (#4850)
      
      * add smoothquant llama attention
      
      * remove useless code
      
      * remove useless code
      
      * fix import error
      
      * rename file name
      
      * [inference] add silu linear fusion for smoothquant llama mlp  (#4853)
      
      * add silu linear
      
      * update skip condition
      
      * catch smoothquant cuda lib exception
      
      * process exceptions for tests
      
      * [inference] add llama mlp for smoothquant (#4854)
      
      * add llama mlp for smoothquant
      
      * fix down out scale
      
      * remove duplicate lines
      
      * add llama mlp check
      
      * delete useless code
      
      * [inference] add smoothquant llama (#4861)
      
      * add smoothquant llama
      
      * fix attention accuracy
      
      * fix accuracy
      
      * add kv cache and save pretrained
      
      * refactor example
      
      * delete smooth
      
      * refactor code
      
      * [inference] add smooth function and delete useless code for smoothquant (#4895)
      
      * add smooth function and delete useless code
      
      * update datasets
      
      * remove duplicate import
      
      * delete useless file
      
      * refactor codes (#4902)
      
      * refactor code
      
      * add license
      
      * add torch-int and smoothquant license
      611a5a80
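      SmoothQuant migrates quantization difficulty from activations (which have outlier channels) to weights: with a per-input-channel scale s_j = max|X_j|^α / max|W_j|^(1-α), dividing activations and multiplying weights by s leaves the matmul result unchanged while flattening activation outliers, making both sides int8-friendly. A NumPy sketch of the smoothing identity (the smooth function added in this PR folds these scales into the previous LayerNorm):

      ```python
      import numpy as np

      def smooth(X, W, alpha=0.5):
          """SmoothQuant-style smoothing: X @ W == (X / s) @ (s[:, None] * W).

          X: (tokens, in_features) activations, W: (in_features, out_features).
          """
          act_max = np.abs(X).max(axis=0)              # per input channel
          w_max = np.abs(W).max(axis=1)                # per input channel
          s = act_max**alpha / w_max**(1.0 - alpha)    # migration strength alpha
          return X / s, W * s[:, None]
      ```

      With alpha = 0.5 the per-channel activation and weight ranges are equalized, which is why the subsequent int8 quantization loses far less accuracy on outlier-heavy llama layers.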
  10. 13 Oct, 2023 1 commit
  11. 12 Oct, 2023 3 commits
  12. 11 Oct, 2023 3 commits
    • littsk authored · ffd9a3cb
    • fix test llama (#4884) · fdec650b
      Xu Kai authored
      fdec650b
    • [Pipeline Inference] Sync pipeline inference branch to main (#4820) · 08a9f76b
      Bin Jia authored
      * [pipeline inference] pipeline inference (#4492)
      
      * add pp stage manager as circle stage
      
      * fix a bug when create process group
      
      * add ppinfer basic framework
      
      * add micro batch manager and support kvcache-pp gpt2 fwd
      
      * add generate schedule
      
      * use mb size to control mb number
      
      * support generate with kv cache
      
      * add output, remove unused code
      
      * add test
      
      * reuse shardformer to build model
      
      * refactor some code and use the same attribute name of hf
      
      * fix review and add test for generation
      
      * remove unused file
      
      * fix CI
      
      * add cache clear
      
      * fix code error
      
      * fix typo
      
      * [Pipeline inference] Modify to tieweight (#4599)
      
      * add pp stage manager as circle stage
      
      * fix a bug when create process group
      
      * add ppinfer basic framework
      
      * add micro batch manager and support kvcache-pp gpt2 fwd
      
      * add generate schedule
      
      * use mb size to control mb number
      
      * support generate with kv cache
      
      * add output, remove unused code
      
      * add test
      
      * reuse shardformer to build model
      
      * refactor some code and use the same attribute name of hf
      
      * fix review and add test for generation
      
      * remove unused file
      
      * modify the way of saving newtokens
      
      * modify to tieweight
      
      * modify test
      
      * remove unused file
      
      * solve review
      
      * add docstring
      
      * [Pipeline inference] support llama pipeline inference (#4647)
      
      * support llama pipeline inference
      
      * remove tie weight operation
      
      * [pipeline inference] Fix the blocking of communication when ppsize is 2 (#4708)
      
      * add benchmark verbose
      
      * fix export tokens
      
      * fix benchmark verbose
      
      * add P2POp style to do p2p communication
      
      * modify schedule as p2p type when ppsize is 2
      
      * remove unused code and add docstring
      
      * [Pipeline inference] Refactor code, add docsting, fix bug (#4790)
      
      * add benchmark script
      
      * update argparse
      
      * fix fp16 load
      
      * refactor code style
      
      * add docstring
      
      * polish code
      
      * fix test bug
      
      * [Pipeline inference] Add pipeline inference docs (#4817)
      
      * add readme doc
      
      * add a ico
      
      * Add performance
      
      * update table of contents
      
      * refactor code (#4873)
      08a9f76b
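      Pipeline inference splits each batch into micro-batches so different pipeline stages can work on different micro-batches concurrently, while the micro batch manager tracks each micro-batch's KV cache between generation steps. A single-process sketch of the schedule with hypothetical names (real stages live on separate devices and exchange activations via p2p ops):

      ```python
      def run_pipeline(batch, num_microbatches, stages):
          # split the batch into micro-batches; with one worker per stage,
          # stage i can process micro-batch k+1 while stage i+1 handles k
          size = -(-len(batch) // num_microbatches)   # ceil division
          micro_batches = [batch[i:i + size] for i in range(0, len(batch), size)]
          outputs = []
          for mb in micro_batches:
              x = mb
              for stage in stages:        # sequential here; overlapped on GPUs
                  x = stage(x)
              outputs.extend(x)
          return outputs
      ```

      The micro-batch count ("mb size to control mb number" above) trades pipeline bubble time against per-micro-batch kernel efficiency.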
  13. 07 Oct, 2023 1 commit
  14. 05 Oct, 2023 1 commit
  15. 04 Oct, 2023 2 commits
  16. 26 Sep, 2023 2 commits
  17. 22 Sep, 2023 2 commits
    • [inference] chatglm2 infer demo (#4724) · ce7ade38
      Jianghai authored
      * add chatglm2
      
      * add
      
      * gather needed kernels
      
      * fix some bugs
      
      * finish context forward
      
      * finish context stage
      
      * fix
      
      * add
      
      * pause
      
      * add
      
      * fix bugs
      
      * finish chatglm
      
      * fix bug
      
      * change some logic
      
      * fix bugs
      
      * change some logics
      
      * add
      
      * add
      
      * add
      
      * fix
      
      * fix tests
      
      * fix
      ce7ade38
    • [feature] add gptq for inference (#4754) · 946ab56c
      Xu Kai authored
      * [gptq] add gptq kernel (#4416)
      
      * add gptq
      
      * refactor code
      
      * fix tests
      
      * replace auto-gptq
      
      * rename inference/quant
      
      * refactor test
      
      * add auto-gptq as an option
      
      * reset requirements
      
      * change assert and check auto-gptq
      
      * add import warnings
      
      * change test flash attn version
      
      * remove example
      
      * change requirements of flash_attn
      
      * modify tests
      
      * [skip ci] change requirements-test
      
      * [gptq] faster gptq cuda kernel (#4494)
      
      * [skip ci] add cuda kernels
      
      * add license
      
      * [skip ci] fix max_input_len
      
      * format files & change test size
      
      * [skip ci]
      
      * [gptq] add gptq tensor parallel (#4538)
      
      * add gptq tensor parallel
      
      * add gptq tp
      
      * delete print
      
      * add test gptq check
      
      * add test auto gptq check
      
      * [gptq] combine gptq and kv cache manager (#4706)
      
      * combine gptq and kv cache manager
      
      * add init bits
      
      * delete useless code
      
      * add model path
      
      * delete useless print and update test
      
      * delete useless import
      
      * move option gptq to shard config
      
      * change replace linear to shardformer
      
      * update bloom policy
      
      * delete useless code
      
      * fix import bug and delete uselss code
      
      * change colossalai/gptq to colossalai/quant/gptq
      
      * update import linear for tests
      
      * delete useless code and mv gptq_kernel to kernel directory
      
      * fix triton kernel
      
      * add triton import
      946ab56c
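      GPTQ stores weights as low-bit integer codes plus a per-group scale and zero-point, dequantizing on the fly inside the matmul kernel. Real GPTQ picks the rounding using second-order (Hessian) information; the following is only a round-to-nearest sketch of the same group-wise storage format:

      ```python
      import numpy as np

      def quantize_groupwise(w, group_size=4, bits=4):
          # group-wise asymmetric quantization: int codes + per-group scale/zero
          qmax = 2**bits - 1
          g = w.reshape(-1, group_size)
          lo = g.min(axis=1, keepdims=True)
          hi = g.max(axis=1, keepdims=True)
          scale = np.where(hi > lo, (hi - lo) / qmax, 1.0)
          codes = np.clip(np.round((g - lo) / scale), 0, qmax).astype(np.uint8)
          return codes, scale, lo

      def dequantize(codes, scale, lo):
          # kernel-side reconstruction: w ≈ codes * scale + lo
          return codes * scale + lo
      ```

      At 4 bits with small groups, the codes plus scales take roughly a quarter of fp16 storage, which is why combining gptq with the kv cache manager (as in #4706) frees most GPU memory for the cache.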
  18. 21 Sep, 2023 1 commit
    • [lazy] support torch 2.0 (#4763) · 3e05c07b
      Hongxin Liu authored
      * [lazy] support _like methods and clamp
      
      * [lazy] pass transformers models
      
      * [lazy] fix device move and requires grad
      
      * [lazy] fix requires grad and refactor api
      
      * [lazy] fix requires grad
      3e05c07b
  19. 20 Sep, 2023 1 commit
  20. 19 Sep, 2023 1 commit
  21. 18 Sep, 2023 1 commit
    • [legacy] clean up legacy code (#4743) · b5f9e37c
      Hongxin Liu authored
      * [legacy] remove outdated codes of pipeline (#4692)
      
      * [legacy] remove cli of benchmark and update optim (#4690)
      
      * [legacy] remove cli of benchmark and update optim
      
      * [doc] fix cli doc test
      
      * [legacy] fix engine clip grad norm
      
      * [legacy] remove outdated colo tensor (#4694)
      
      * [legacy] remove outdated colo tensor
      
      * [test] fix test import
      
      * [legacy] move outdated zero to legacy (#4696)
      
      * [legacy] clean up utils (#4700)
      
      * [legacy] clean up utils
      
      * [example] update examples
      
      * [legacy] clean up amp
      
      * [legacy] fix amp module
      
      * [legacy] clean up gpc (#4742)
      
      * [legacy] clean up context
      
      * [legacy] clean core, constants and global vars
      
      * [legacy] refactor initialize
      
      * [example] fix examples ci
      
      * [example] fix examples ci
      
      * [legacy] fix tests
      
      * [example] fix gpt example
      
      * [example] fix examples ci
      
      * [devops] fix ci installation
      
      * [example] fix examples ci
      b5f9e37c
  22. 15 Sep, 2023 1 commit
  23. 12 Sep, 2023 1 commit
  24. 11 Sep, 2023 3 commits
    • [Feature] The first PR to Add TP inference engine, kv-cache manager and related kernels for our inference system (#4577) · bce0f167
      Cuiqing Li authored
      
      * [infer] Infer/llama demo (#4503)
      
      * add
      
      * add infer example
      
      * finish
      
      * finish
      
      * stash
      
      * fix
      
      * [Kernels]  add inference token attention kernel (#4505)
      
      * add token forward
      
      * fix tests
      
      * fix comments
      
      * add try import triton
      
      * add adapted license
      
      * add tests check
      
      * [Kernels] add necessary kernels (llama & bloom) for attention forward and kv-cache manager  (#4485)
      
      * added _vllm_rms_norm
      
      * change place
      
      * added tests
      
      * added tests
      
      * modify
      
      * adding kernels
      
      * added tests:
      
      * adding kernels
      
      * modify
      
      * added
      
      * updating kernels
      
      * adding tests
      
      * added tests
      
      * kernel change
      
      * submit
      
      * modify
      
      * added
      
      * edit comments
      
      * change name
      
      * change comments and fix import
      
      * add
      
      * added
      
      * combine codes (#4509)
      
      * [feature] add KV cache manager for llama & bloom inference (#4495)
      
      * add kv cache memory manager
      
      * add stateinfo during inference
      
      * format
      
      * format
      
      * rename file
      
      * add kv cache test
      
      * revise on BatchInferState
      
      * file dir change
      
      * [Bug FIx] import llama context ops fix (#4524)
      
      * added _vllm_rms_norm
      
      * change place
      
      * added tests
      
      * added tests
      
      * modify
      
      * adding kernels
      
      * added tests:
      
      * adding kernels
      
      * modify
      
      * added
      
      * updating kernels
      
      * adding tests
      
      * added tests
      
      * kernel change
      
      * submit
      
      * modify
      
      * added
      
      * edit comments
      
      * change name
      
      * change comments and fix import
      
      * add
      
      * added
      
      * fix
      
      * add ops into init.py
      
      * add
      
      * [Infer] Add TPInferEngine and fix file path (#4532)
      
      * add engine for TP inference
      
      * move file path
      
      * update path
      
      * fix TPInferEngine
      
      * remove unused file
      
      * add engine test demo
      
      * revise TPInferEngine
      
      * fix TPInferEngine, add test
      
      * fix
      
      * Add Inference test for llama (#4508)
      
      * add kv cache memory manager
      
      * add stateinfo during inference
      
      * add
      
      * add infer example
      
      * finish
      
      * finish
      
      * format
      
      * format
      
      * rename file
      
      * add kv cache test
      
      * revise on BatchInferState
      
      * add inference test for llama
      
      * fix conflict
      
      * feature: add some new features for llama engine
      
      * adapt colossalai triton interface
      
      * Change the parent class of llama  policy
      
      * add nvtx
      
      * move llama inference code to tensor_parallel
      
      * fix __init__.py
      
      * rm tensor_parallel
      
      * fix: fix bugs in auto_policy.py
      
      * fix:rm some unused codes
      
      * mv colossalai/tpinference to colossalai/inference/tensor_parallel
      
      * change __init__.py
      
      * save change
      
      * fix engine
      
      * Bug fix: Fix hang
      
      * remove llama_infer_engine.py
      
      ---------
      Co-authored-by: yuanheng-zhao <jonathan.zhaoyh@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
      
      * [infer] Add Bloom inference policy and replaced methods (#4512)
      
      * add bloom inference methods and policy
      
      * enable pass BatchInferState from model forward
      
      * revise bloom infer layers/policies
      
      * add engine for inference (draft)
      
      * add test for bloom infer
      
      * fix bloom infer policy and flow
      
      * revise bloom test
      
      * fix bloom file path
      
      * remove unused codes
      
      * fix bloom modeling
      
      * fix dir typo
      
      * fix trivial
      
      * fix policy
      
      * clean pr
      
      * trivial fix
      
      * Revert "[infer] Add Bloom inference policy and replaced methods (#4512)" (#4552)
      
      This reverts commit 17cfa5714083a81a505c097f1c411cd28162d922.
      
      * [Doc] Add colossal inference doc (#4549)
      
      * create readme
      
      * add readme.md
      
      * fix typos
      
      * [infer] Add Bloom inference policy and replaced methods (#4553)
      
      * add bloom inference methods and policy
      
      * enable pass BatchInferState from model forward
      
      * revise bloom infer layers/policies
      
      * add engine for inference (draft)
      
      * add test for bloom infer
      
      * fix bloom infer policy and flow
      
      * revise bloom test
      
      * fix bloom file path
      
      * remove unused codes
      
      * fix bloom modeling
      
      * fix dir typo
      
      * fix trivial
      
      * fix policy
      
      * clean pr
      
      * trivial fix
      
      * trivial
      
      * Fix Bugs In Llama Model Forward (#4550)
      
      * add kv cache memory manager
      
      * add stateinfo during inference
      
      * add
      
      * add infer example
      
      * finish
      
      * finish
      
      * format
      
      * format
      
      * rename file
      
      * add kv cache test
      
      * revise on BatchInferState
      
      * add inference test for llama
      
      * fix conflict
      
      * feature: add some new features for llama engine
      
      * adapt colossalai triton interface
      
      * Change the parent class of llama  policy
      
      * add nvtx
      
      * move llama inference code to tensor_parallel
      
      * fix __init__.py
      
      * rm tensor_parallel
      
      * fix: fix bugs in auto_policy.py
      
      * fix:rm some unused codes
      
      * mv colossalai/tpinference to colossalai/inference/tensor_parallel
      
      * change __init__.py
      
      * save change
      
      * fix engine
      
      * Bug fix: Fix hang
      
      * remove llama_infer_engine.py
      
      * bug fix: fix bugs about infer_state.is_context_stage
      
      * remove policies
      
      * fix: delete unused code
      
      * fix: delete unused code
      
      * remove unused code
      
      * fix conflict
      
      ---------
      Co-authored-by: yuanheng-zhao <jonathan.zhaoyh@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
      
      * [doc] add colossal inference fig (#4554)
      
      * create readme
      
      * add readme.md
      
      * fix typos
      
      * upload fig
      
      * [NFC] fix docstring for colossal inference (#4555)
      
      Fix docstring and comments in kv cache manager and bloom modeling
      
      * fix docstring in llama modeling (#4557)
      
      * [Infer] check import vllm (#4559)
      
      * change import vllm
      
      * import apply_rotary_pos_emb
      
      * change import location
      
      * [DOC] add installation req (#4561)
      
      * add installation req
      
      * fix
      
      * slight change
      
      * remove empty
      
      * [Feature] rms-norm transfer into inference llama.py  (#4563)
      
      * add installation req
      
      * fix
      
      * slight change
      
      * remove empty
      
      * add rmsnorm polciy
      
      * add
      
      * clean codes
      
      * [infer] Fix tp inference engine (#4564)
      
      * fix engine prepare data
      
      * add engine test
      
      * use bloom for testing
      
      * revise on test
      
      * revise on test
      
      * reset shardformer llama (#4569)
      
      * [infer] Fix engine - tensors on different devices (#4570)
      
      
      * fix diff device in engine
      
      * [codefactor] Feature/colossal inference (#4579)
      
      * code factors
      
      * remove
      
      * change coding (#4581)
      
      * [doc] complete README of colossal inference (#4585)
      
      * complete fig
      
      * Update README.md
      
      * [doc]update readme (#4586)
      
      * update readme
      
      * Update README.md
      
      * bug fix: fix bugs in llama and bloom (#4588)
      
      * [BUG FIX]Fix test engine in CI and non-vllm kernels llama forward  (#4592)
      
      * fix tests
      
      * clean
      
      * clean
      
      * fix bugs
      
      * add
      
      * fix llama non-vllm kernels bug
      
      * modify
      
      * clean codes
      
      * [Kernel]Rmsnorm fix (#4598)
      
      * fix tests
      
      * clean
      
      * clean
      
      * fix bugs
      
      * add
      
      * fix llama non-vllm kernels bug
      
      * modify
      
      * clean codes
      
      * add triton rmsnorm
      
      * delete vllm kernel flag
      
      * [Bug Fix]Fix bugs in llama (#4601)
      
      * fix tests
      
      * clean
      
      * clean
      
      * fix bugs
      
      * add
      
      * fix llama non-vllm kernels bug
      
      * modify
      
      * clean codes
      
      * bug fix: remove rotary_positions_ids
      
      ---------
      Co-authored-by: cuiqing.li <lixx3527@gmail.com>
      
      * [kernel] Add triton layer norm & replace norm for bloom (#4609)
      
      * add layernorm for inference
      
      * add test for layernorm kernel
      
      * add bloom layernorm replacement policy
      
      * trivial: path
      
      * [Infer] Bug fix rotary embedding in llama (#4608)
      
      * fix rotary embedding
      
      * delete print
      
      * fix init seq len bug
      
      * rename pytest
      
      * add benchmark for llama
      
      * refactor codes
      
      * delete useless code
      
      * [bench] Add bloom inference benchmark (#4621)
      
      * add bloom benchmark
      
      * readme - update benchmark res
      
      * trivial - uncomment for testing (#4622)
      
      * [Infer] add check triton and cuda version for tests (#4627)
      
      * fix rotary embedding
      
      * delete print
      
      * fix init seq len bug
      
      * rename pytest
      
      * add benchmark for llama
      
      * refactor codes
      
      * delete useless code
      
      * add check triton and cuda
      
      * Update sharder.py (#4629)
      
      * [Inference] Hot fix some bugs and typos (#4632)
      
      * fix
      
      * fix test
      
      * fix conflicts
      
      * [typo]Comments fix (#4633)
      
      * fallback
      
      * fix comments
      
      * bug fix: fix some bugs in test_llama and test_bloom (#4635)
      
      * [Infer] delete benchmark in tests and fix bug for llama and bloom (#4636)
      
      * fix rotary embedding
      
      * delete print
      
      * fix init seq len bug
      
      * rename pytest
      
      * add benchmark for llama
      
      * refactor codes
      
      * delete useless code
      
      * add check triton and cuda
      
      * delete benchmark and fix infer bugs
      
      * delete benchmark for tests
      
      * delete useless code
      
      * delete benchmark function in utils
      
      * [Fix] Revise TPInferEngine, inference tests and benchmarks (#4642)
      
      * [Fix] revise TPInferEngine methods and inference tests
      
      * fix llama/bloom infer benchmarks
      
      * fix infer tests
      
      * trivial fix: benchmarks
      
      * trivial
      
      * trivial: rm print
      
      * modify utils filename for infer ops test (#4657)
      
      * [Infer] Fix TPInferEngine init & inference tests, benchmarks (#4670)
      
      * fix engine funcs
      
      * TPInferEngine: receive shard config in init
      
      * benchmarks: revise TPInferEngine init
      
      * benchmarks: remove pytest decorator
      
      * trivial fix
      
      * use small model for tests
      
      * [NFC] use args for infer benchmarks (#4674)
      
      * revise infer default (#4683)
      
      * [Fix] optimize/shard model in TPInferEngine init (#4684)
      
      * remove using orig model in engine
      
      * revise inference tests
      
      * trivial: rename
      
      ---------
      Co-authored-by: Jianghai <72591262+CjhHa1@users.noreply.github.com>
      Co-authored-by: Xu Kai <xukai16@foxmail.com>
      Co-authored-by: Yuanheng Zhao <54058983+yuanheng-zhao@users.noreply.github.com>
      Co-authored-by: yuehuayingxueluo <867460659@qq.com>
      Co-authored-by: yuanheng-zhao <jonathan.zhaoyh@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
      bce0f167
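      The kv-cache manager introduced here preallocates the cache for the maximum batch and sequence length and hands out token slots per sequence, reclaiming them when a sequence finishes, so decoding never reallocates GPU memory. A minimal allocation sketch with hypothetical names (the real manager tracks per-layer key/value tensors and per-batch state in BatchInferState):

      ```python
      class KVCacheManager:
          """Sketch of a slot-based KV cache manager: token slots in a
          preallocated cache are handed out and reclaimed per sequence."""

          def __init__(self, num_slots):
              self.free = list(range(num_slots))   # indices into the cache
              self.owned = {}                      # seq_id -> held slots

          def alloc(self, seq_id, n):
              if len(self.free) < n:
                  raise RuntimeError("KV cache exhausted")
              slots = [self.free.pop() for _ in range(n)]
              self.owned.setdefault(seq_id, []).extend(slots)
              return slots

          def free_seq(self, seq_id):
              # return a finished sequence's slots to the free pool
              self.free.extend(self.owned.pop(seq_id, []))
      ```

      Attention kernels then gather keys/values through each sequence's slot indices instead of assuming contiguous storage, which is what lets sequences of different lengths share one cache.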
    • [shardformer]fix gpt2 double head (#4663) · eedaa3e1
      flybird11111 authored
      * [shardformer]fix gpt2 test
      
      [shardformer]fix gpt2 test
      
      [shardformer]fix gpt2 test
      
      * fix
      
      * [shardformer] add todo
      
      * [shardformer] add todo
      eedaa3e1
    • [legacy] move communication and nn to legacy and refactor logger (#4671) · 554aa959
      Hongxin Liu authored
      * [legacy] move communication to legacy (#4640)
      
      * [legacy] refactor logger and clean up legacy codes (#4654)
      
      * [legacy] make logger independent to gpc
      
      * [legacy] make optim independent to registry
      
      * [legacy] move test engine to legacy
      
      * [legacy] move nn to legacy (#4656)
      
      * [legacy] move nn to legacy
      
      * [checkpointio] fix save hf config
      
      * [test] remove useless rpc pp test
      
      * [legacy] fix nn init
      
      * [example] skip tutorial hybriad parallel example
      
      * [devops] test doc check
      
      * [devops] test doc check
      554aa959
  25. 09 Sep, 2023 1 commit
    • [shardformer] update llama2/opt finetune example and fix llama2 policy (#4645) · 7486ed7d
      flybird11111 authored
      * [shardformer] update shardformer readme
      
      [shardformer] update shardformer readme
      
      [shardformer] update shardformer readme
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] update llama2/opt finetune example and shardformer update to llama2
      
      * [shardformer] change dataset
      
      * [shardformer] change dataset
      
      * [shardformer] fix CI
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      * [shardformer] fix
      
      [example] update opt example
      
      [example] resolve comments
      
      fix
      
      fix
      7486ed7d
  26. 07 Sep, 2023 1 commit