1. 20 Nov, 2023 1 commit
  2. 19 Nov, 2023 1 commit
    • [inference] Refactor inference architecture (#5057) · fd6482ad
      Xu Kai authored
      
      
      * [inference] support only TP (#4998)
      
      * support only tp
      
      * enable tp
      
      * add support for bloom (#5008)
      
      * [refactor] refactor gptq and smoothquant llama (#5012)
      
      * refactor gptq and smoothquant llama
      
      * fix import error
      
      * fix linear import torch-int
      
      * fix smoothquant llama import error
      
      * fix import accelerate error
      
      * fix bug
      
      * fix import smooth cuda
      
      * fix smoothcuda
      
      * [Inference Refactor] Merge chatglm2 with pp and tp (#5023)
      
      merge chatglm with pp and tp
      
      * [Refactor] remove useless inference code (#5022)
      
      * remove useless code
      
      * fix quant model
      
      * fix test import bug
      
      * mv original inference legacy
      
      * fix chatglm2
      
      * [Refactor] refactor policy search and quant type controlling in inference (#5035)
      
      * [Refactor] refactor policy search and quant type controlling in inference
      
      * [inference] update readme (#5051)
      
      * update readme
      
      * update readme
      
      * fix architecture
      
      * fix table
      
      * fix table
      
      * [inference] update example (#5053)
      
      * update example
      
      * fix run.sh
      
      * fix rebase bug
      
      * fix some errors
      
      * update readme
      
      * add some features
      
      * update interface
      
      * update readme
      
      * update benchmark
      
      * add requirements-infer
      
      ---------
      Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
      Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
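The refactor above is organized around tensor-parallel (TP) execution. As a point of reference only (not the ColossalAI implementation), here is a minimal sketch of the column/row split TP applies to a transformer MLP, simulated in-process so the per-rank partial sums that an all-reduce would combine are simply added; all names are illustrative:

```python
# Hedged sketch: column/row-parallel split of an MLP, simulated on one process.
# Names (shard_mlp, tp_mlp_forward) are illustrative, not ColossalAI APIs.
import torch

def shard_mlp(w1: torch.Tensor, w2: torch.Tensor, tp_size: int):
    """Split w1 (hidden -> 4*hidden) by output rows and
    w2 (4*hidden -> hidden) by input columns, one shard per rank."""
    w1_shards = w1.chunk(tp_size, dim=0)   # w1: (4h, h), row = output neuron
    w2_shards = w2.chunk(tp_size, dim=1)   # w2: (h, 4h), col = input neuron
    return list(zip(w1_shards, w2_shards))

def tp_mlp_forward(x, shards):
    # Each "rank" computes a partial output; summing stands in for the all-reduce.
    partials = [torch.relu(x @ w1.T) @ w2.T for w1, w2 in shards]
    return sum(partials)

h = 8
x = torch.randn(2, h)
w1, w2 = torch.randn(4 * h, h), torch.randn(h, 4 * h)
ref = torch.relu(x @ w1.T) @ w2.T
out = tp_mlp_forward(x, shard_mlp(w1, w2, tp_size=2))
assert torch.allclose(ref, out, atol=1e-4)
```

The element-wise activation between the two linears is what makes the column-then-row split work with a single combine per MLP block.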
  3. 16 Nov, 2023 1 commit
  4. 10 Nov, 2023 1 commit
  5. 30 Oct, 2023 1 commit
    • [Kernels] Update Triton kernels to 2.1.0 and add flash-decoding for llama token attention (#4965) · 459a88c8
      Cuiqing Li authored
      
      * adding flash-decoding
      
      * clean
      
      * adding kernel
      
      * adding flash-decoding
      
      * add integration
      
      * add
      
      * adding kernel
      
      * adding kernel
      
      * adding triton 2.1.0 features for inference
      
      * update bloom triton kernel
      
      * remove useless vllm kernels
      
      * clean codes
      
      * fix
      
      * adding files
      
      * fix readme
      
      * update llama flash-decoding
      
      ---------
      Co-authored-by: cuiqing.li <lixx336@gmail.com>
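Flash-decoding parallelizes the attention of a single decode token across chunks of the KV sequence and merges the partial softmax results with a numerically stable rescale. A hedged plain-PyTorch reference of that merge (the commit itself ships Triton 2.1.0 kernels; this shows only the algorithmic idea):

```python
# Hedged sketch of flash-decoding in plain PyTorch: split the KV sequence
# into chunks, attend per chunk, then merge partial results with a
# log-sum-exp style online-softmax rescale.
import torch

def flash_decode(q, k, v, chunk: int = 128):
    # q: (d,) query of the new token; k/v: (seq, d) -- single head
    scale = q.shape[-1] ** -0.5
    m = torch.tensor(float("-inf"))   # running max of logits
    l = torch.tensor(0.0)             # running softmax denominator
    acc = torch.zeros_like(q)         # running weighted sum of v
    for s in range(0, k.shape[0], chunk):
        logits = (k[s:s + chunk] @ q) * scale        # (chunk,)
        m_new = torch.maximum(m, logits.max())
        p = torch.exp(logits - m_new)
        alpha = torch.exp(m - m_new)                 # rescale old state
        l = l * alpha + p.sum()
        acc = acc * alpha + p @ v[s:s + chunk]
        m = m_new
    return acc / l

q, k, v = torch.randn(64), torch.randn(1000, 64), torch.randn(1000, 64)
ref = torch.softmax((k @ q) * 64 ** -0.5, dim=0) @ v
assert torch.allclose(flash_decode(q, k, v), ref, atol=1e-4)
```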
  6. 24 Oct, 2023 1 commit
  7. 20 Oct, 2023 1 commit
  8. 19 Oct, 2023 1 commit
    • [Refactor] Integrated some lightllm kernels into token-attention (#4946) · 3a41e830
      Cuiqing Li authored
      
      
      * add some req for inference
      
      * clean codes
      
      * add codes
      
      * add some lightllm deps
      
      * clean codes
      
      * hello
      
      * delete rms files
      
      * add some comments
      
      * add comments
      
      * add doc
      
      * add lightllm deps
      
      * add lightllm chatglm2 kernels
      
      * add lightllm chatglm2 kernels
      
      * replace rotary embedding with lightllm kernel
      
      * add some comments
      
      * add some comments
      
      * add some comments
      
      * add
      
      * replace fwd kernel att1
      
      * fix an arg
      
      * add
      
      * add
      
      * fix token attention
      
      * add some comments
      
      * clean codes
      
      * modify comments
      
      * fix readme
      
      * fix bug
      
      * fix bug
      
      ---------
      Co-authored-by: cuiqing.li <lixx336@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
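Token attention in the lightllm style keeps K/V for all sequences in one slab of cache slots and gathers a sequence's history through an index tensor, so a sequence's slots need not be contiguous. A hedged sketch with illustrative shapes (not lightllm's API):

```python
# Hedged sketch of "token attention": the KV cache is one big slab of
# slots, and each sequence owns an index tensor naming which slots hold
# its past tokens. Shapes and names are illustrative.
import torch

num_slots, d = 1024, 64
k_cache = torch.randn(num_slots, d)
v_cache = torch.randn(num_slots, d)

def token_attention(q, slot_ids):
    # q: (d,) query of the new token; slot_ids: (past_len,) cache slots
    k = k_cache[slot_ids]                      # gather (past_len, d)
    v = v_cache[slot_ids]
    w = torch.softmax((k @ q) * d ** -0.5, dim=0)
    return w @ v

slot_ids = torch.tensor([7, 300, 12, 511])    # scattered, non-contiguous
out = token_attention(torch.randn(d), slot_ids)
print(out.shape)  # torch.Size([64])
```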
  9. 16 Oct, 2023 1 commit
    • [inference] Add smoothquant for llama (#4904) · 611a5a80
      Xu Kai authored
      * [inference] add int8 rotary embedding kernel for smoothquant (#4843)
      
      * [inference] add smoothquant llama attention (#4850)
      
      * add smoothquant llama attention
      
      * remove useless code
      
      * remove useless code
      
      * fix import error
      
      * rename file name
      
      * [inference] add silu linear fusion for smoothquant llama mlp  (#4853)
      
      * add silu linear
      
      * update skip condition
      
      * catch smoothquant cuda lib exception
      
      * process exception for tests
      
      * [inference] add llama mlp for smoothquant (#4854)
      
      * add llama mlp for smoothquant
      
      * fix down out scale
      
      * remove duplicate lines
      
      * add llama mlp check
      
      * delete useless code
      
      * [inference] add smoothquant llama (#4861)
      
      * add smoothquant llama
      
      * fix attention accuracy
      
      * fix accuracy
      
      * add kv cache and save pretrained
      
      * refactor example
      
      * delete smooth
      
      * refactor code
      
      * [inference] add smooth function and delete useless code for smoothquant (#4895)
      
      * add smooth function and delete useless code
      
      * update datasets
      
      * remove duplicate import
      
      * delete useless file
      
      * refactor codes (#4902)
      
      * refactor code
      
      * add license
      
      * add torch-int and smoothquant license
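SmoothQuant migrates quantization difficulty from activations to weights using per-channel scales s_j = max|X_j|^alpha / max|W_j|^(1-alpha), rescaling X by 1/s and W by s so the product is unchanged. A hedged sketch of that smoothing step (the PR's fused int8 attention/MLP kernels are not reproduced here):

```python
# Hedged sketch of the SmoothQuant smoothing step (Xiao et al.), not the
# int8 kernels this PR adds. alpha balances activation vs. weight range.
import torch

def smooth(x: torch.Tensor, w: torch.Tensor, alpha: float = 0.5):
    # x: (tokens, in_features) calibration activations
    # w: (out_features, in_features) linear weight
    act_max = x.abs().amax(dim=0)            # per input channel
    w_max = w.abs().amax(dim=0)
    s = (act_max.pow(alpha) / w_max.pow(1 - alpha)).clamp(min=1e-5)
    return x / s, w * s                      # (x/s) @ (w*s).T == x @ w.T

x, w = torch.randn(32, 16), torch.randn(8, 16)
x_s, w_s = smooth(x, w)
assert torch.allclose(x @ w.T, x_s @ w_s.T, atol=1e-4)
```

After smoothing, both the flattened activations and the rescaled weights quantize to int8 with far less clipping error than the raw activations would.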
  10. 04 Oct, 2023 2 commits
  11. 02 Oct, 2023 2 commits
    • [Infer] Serving example w/ ray-serve (multiple GPU case) (#4841) · 573f2705
      Yuanheng Zhao authored
      * fix imports
      
      * add ray-serve with Colossal-Infer tp
      
      * trivial: send requests script
      
      * add README
      
      * fix worker port
      
      * fix readme
      
      * use app builder and autoscaling
      
      * trivial: input args
      
      * clean code; revise readme
      
      * testci (skip example test)
      
      * use auto model/tokenizer
      
      * revert imports fix (fixed in other PRs)
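For orientation, a minimal Ray Serve deployment of a generate endpoint might look like the sketch below. The real example wires in Colossal-Infer TP workers; here a plain Hugging Face pipeline stands in, and the model name and JSON schema are placeholders:

```python
# Hedged sketch of serving a text-generation endpoint with Ray Serve.
# Model name and request schema are placeholders, not the example's.
from ray import serve
from starlette.requests import Request
from transformers import pipeline

@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 1})
class Generator:
    def __init__(self):
        self.pipe = pipeline("text-generation", model="gpt2")

    async def __call__(self, request: Request) -> dict:
        prompt = (await request.json())["prompt"]
        out = self.pipe(prompt, max_new_tokens=32)[0]["generated_text"]
        return {"text": out}

app = Generator.bind()
# launch with: serve run this_module:app
```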
    • [Infer] Colossal-Inference serving example w/ TorchServe (single GPU case) (#4771) · 3a74eb4b
      Yuanheng Zhao authored
      * add Colossal-Inference serving example w/ TorchServe
      
      * add dockerfile
      
      * fix dockerfile
      
      * fix dockerfile: fix commit hash, install curl
      
      * refactor file structure
      
      * revise readme
      
      * trivial
      
      * trivial: dockerfile format
      
      * clean dir; revise readme
      
      * fix comments: fix imports and configs
      
      * fix formats
      
      * remove unused requirements
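TorchServe routes custom models through a handler class implementing initialize/preprocess/inference/postprocess (the BaseHandler contract). A hedged sketch of such a handler; the example's actual handler differs, and the model and request schema here are placeholders:

```python
# Hedged sketch of a TorchServe custom handler for text generation.
# The method names follow the BaseHandler contract; model choice and
# request body layout are illustrative assumptions.
from ts.torch_handler.base_handler import BaseHandler
from transformers import AutoModelForCausalLM, AutoTokenizer

class GenerationHandler(BaseHandler):
    def initialize(self, context):
        model_dir = context.system_properties.get("model_dir")
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
        self.model = AutoModelForCausalLM.from_pretrained(model_dir)
        self.initialized = True

    def preprocess(self, data):
        text = data[0].get("body").get("prompt")
        return self.tokenizer(text, return_tensors="pt")

    def inference(self, inputs):
        return self.model.generate(**inputs, max_new_tokens=32)

    def postprocess(self, outputs):
        return [self.tokenizer.decode(outputs[0], skip_special_tokens=True)]
```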
  12. 22 Sep, 2023 1 commit
    • [feature] add gptq for inference (#4754) · 946ab56c
      Xu Kai authored
      * [gptq] add gptq kernel (#4416)
      
      * add gptq
      
      * refactor code
      
      * fix tests
      
      * replace auto-gptq
      
      * rename inference/quant
      
      * refactor test
      
      * add auto-gptq as an option
      
      * reset requirements
      
      * change assert and check auto-gptq
      
      * add import warnings
      
      * change test flash attn version
      
      * remove example
      
      * change requirements of flash_attn
      
      * modify tests
      
      * [skip ci] change requirements-test
      
      * [gptq] faster gptq cuda kernel (#4494)
      
      * [skip ci] add cuda kernels
      
      * add license
      
      * [skip ci] fix max_input_len
      
      * format files & change test size
      
      * [skip ci]
      
      * [gptq] add gptq tensor parallel (#4538)
      
      * add gptq tensor parallel
      
      * add gptq tp
      
      * delete print
      
      * add test gptq check
      
      * add test auto gptq check
      
      * [gptq] combine gptq and kv cache manager (#4706)
      
      * combine gptq and kv cache manager
      
      * add init bits
      
      * delete useless code
      
      * add model path
      
      * delete useless print and update test
      
      * delete useless import
      
      * move option gptq to shard config
      
      * change replace linear to shardformer
      
      * update bloom policy
      
      * delete useless code
      
      * fix import bug and delete useless code
      
      * change colossalai/gptq to colossalai/quant/gptq
      
      * update import linear for tests
      
      * delete useless code and mv gptq_kernel to kernel directory
      
      * fix triton kernel
      
      * add triton import
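GPTQ-style kernels consume 4-bit weights stored with per-group scales. The sketch below shows only that storage format using plain round-to-nearest quantization; real GPTQ additionally applies Hessian-based error compensation when choosing the quantized values, which is omitted here:

```python
# Hedged sketch of 4-bit weight-only quantization with per-group scales,
# i.e. the storage format GPTQ-style kernels dequantize on the fly.
import torch

def quantize_4bit(w: torch.Tensor, group_size: int = 128):
    # w: (out_features, in_features); groups run along in_features
    wg = w.reshape(w.shape[0], -1, group_size)
    scale = wg.abs().amax(dim=-1, keepdim=True) / 7.0   # int4 range [-8, 7]
    q = torch.clamp(torch.round(wg / scale), -8, 7).to(torch.int8)
    return q, scale

def dequantize(q, scale, shape):
    return (q.float() * scale).reshape(shape)

w = torch.randn(16, 256)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s, w.shape)
print((w - w_hat).abs().max())   # small per-group quantization error
```

Packing two int4 values per byte and fusing the dequantize into the matmul (as the CUDA/Triton kernels in this PR do) is what turns the format into a speedup rather than just a memory saving.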
  13. 19 Sep, 2023 1 commit
  14. 11 Sep, 2023 1 commit
    • [Feature] The first PR to add TP inference engine, kv-cache manager and related kernels for our inference system (#4577) · bce0f167
      Cuiqing Li authored
      
      * [infer] Infer/llama demo (#4503)
      
      * add
      
      * add infer example
      
      * finish
      
      * finish
      
      * stash
      
      * fix
      
      * [Kernels]  add inference token attention kernel (#4505)
      
      * add token forward
      
      * fix tests
      
      * fix comments
      
      * add try import triton
      
      * add adapted license
      
      * add tests check
      
      * [Kernels] add necessary kernels (llama & bloom) for attention forward and kv-cache manager  (#4485)
      
      * added _vllm_rms_norm
      
      * change place
      
      * added tests
      
      * added tests
      
      * modify
      
      * adding kernels
      
      * added tests:
      
      * adding kernels
      
      * modify
      
      * added
      
      * updating kernels
      
      * adding tests
      
      * added tests
      
      * kernel change
      
      * submit
      
      * modify
      
      * added
      
      * edit comments
      
      * change name
      
      * change comments and fix import
      
      * add
      
      * added
      
      * combine codes (#4509)
      
      * [feature] add KV cache manager for llama & bloom inference (#4495)
      
      * add kv cache memory manager
      
      * add stateinfo during inference
      
      * format
      
      * format
      
      * rename file
      
      * add kv cache test
      
      * revise on BatchInferState
      
      * file dir change
      
      * [Bug FIx] import llama context ops fix (#4524)
      
      * added _vllm_rms_norm
      
      * change place
      
      * added tests
      
      * added tests
      
      * modify
      
      * adding kernels
      
      * added tests:
      
      * adding kernels
      
      * modify
      
      * added
      
      * updating kernels
      
      * adding tests
      
      * added tests
      
      * kernel change
      
      * submit
      
      * modify
      
      * added
      
      * edit comments
      
      * change name
      
      * change comments and fix import
      
      * add
      
      * added
      
      * fix
      
      * add ops into init.py
      
      * add
      
      * [Infer] Add TPInferEngine and fix file path (#4532)
      
      * add engine for TP inference
      
      * move file path
      
      * update path
      
      * fix TPInferEngine
      
      * remove unused file
      
      * add engine test demo
      
      * revise TPInferEngine
      
      * fix TPInferEngine, add test
      
      * fix
      
      * Add Inference test for llama (#4508)
      
      * add kv cache memory manager
      
      * add stateinfo during inference
      
      * add
      
      * add infer example
      
      * finish
      
      * finish
      
      * format
      
      * format
      
      * rename file
      
      * add kv cache test
      
      * revise on BatchInferState
      
      * add inference test for llama
      
      * fix conflict
      
      * feature: add some new features for llama engine
      
      * adapt colossalai triton interface
      
      * Change the parent class of llama  policy
      
      * add nvtx
      
      * move llama inference code to tensor_parallel
      
      * fix __init__.py
      
      * rm tensor_parallel
      
      * fix: fix bugs in auto_policy.py
      
      * fix:rm some unused codes
      
      * mv colossalai/tpinference to colossalai/inference/tensor_parallel
      
      * change __init__.py
      
      * save change
      
      * fix engine
      
      * Bug fix: Fix hang
      
      * remove llama_infer_engine.py
      
      ---------
      Co-authored-by: yuanheng-zhao <jonathan.zhaoyh@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
      
      * [infer] Add Bloom inference policy and replaced methods (#4512)
      
      * add bloom inference methods and policy
      
      * enable pass BatchInferState from model forward
      
      * revise bloom infer layers/policies
      
      * add engine for inference (draft)
      
      * add test for bloom infer
      
      * fix bloom infer policy and flow
      
      * revise bloom test
      
      * fix bloom file path
      
      * remove unused codes
      
      * fix bloom modeling
      
      * fix dir typo
      
      * fix trivial
      
      * fix policy
      
      * clean pr
      
      * trivial fix
      
      * Revert "[infer] Add Bloom inference policy and replaced methods (#4512)" (#4552)
      
      This reverts commit 17cfa5714083a81a505c097f1c411cd28162d922.
      
      * [Doc] Add colossal inference doc (#4549)
      
      * create readme
      
      * add readme.md
      
      * fix typos
      
      * [infer] Add Bloom inference policy and replaced methods (#4553)
      
      * add bloom inference methods and policy
      
      * enable pass BatchInferState from model forward
      
      * revise bloom infer layers/policies
      
      * add engine for inference (draft)
      
      * add test for bloom infer
      
      * fix bloom infer policy and flow
      
      * revise bloom test
      
      * fix bloom file path
      
      * remove unused codes
      
      * fix bloom modeling
      
      * fix dir typo
      
      * fix trivial
      
      * fix policy
      
      * clean pr
      
      * trivial fix
      
      * trivial
      
      * Fix Bugs In Llama Model Forward (#4550)
      
      * add kv cache memory manager
      
      * add stateinfo during inference
      
      * add
      
      * add infer example
      
      * finish
      
      * finish
      
      * format
      
      * format
      
      * rename file
      
      * add kv cache test
      
      * revise on BatchInferState
      
      * add inference test for llama
      
      * fix conflict
      
      * feature: add some new features for llama engine
      
      * adapt colossalai triton interface
      
      * Change the parent class of llama  policy
      
      * add nvtx
      
      * move llama inference code to tensor_parallel
      
      * fix __init__.py
      
      * rm tensor_parallel
      
      * fix: fix bugs in auto_policy.py
      
      * fix:rm some unused codes
      
      * mv colossalai/tpinference to colossalai/inference/tensor_parallel
      
      * change __init__.py
      
      * save change
      
      * fix engine
      
      * Bug fix: Fix hang
      
      * remove llama_infer_engine.py
      
      * bug fix: fix bugs about infer_state.is_context_stage
      
      * remove policies
      
      * fix: delete unused code
      
      * fix: delete unused code
      
      * remove unused code
      
      * fix conflict
      
      ---------
      Co-authored-by: yuanheng-zhao <jonathan.zhaoyh@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
      
      * [doc] add colossal inference fig (#4554)
      
      * create readme
      
      * add readme.md
      
      * fix typos
      
      * upload fig
      
      * [NFC] fix docstring for colossal inference (#4555)
      
      Fix docstring and comments in kv cache manager and bloom modeling
      
      * fix docstring in llama modeling (#4557)
      
      * [Infer] check import vllm (#4559)
      
      * change import vllm
      
      * import apply_rotary_pos_emb
      
      * change import location
      
      * [DOC] add installation req (#4561)
      
      * add installation req
      
      * fix
      
      * slight change
      
      * remove empty
      
      * [Feature] rms-norm transfer into inference llama.py  (#4563)
      
      * add installation req
      
      * fix
      
      * slight change
      
      * remove empty
      
      * add rmsnorm policy
      
      * add
      
      * clean codes
      
      * [infer] Fix tp inference engine (#4564)
      
      * fix engine prepare data
      
      * add engine test
      
      * use bloom for testing
      
      * revise on test
      
      * revise on test
      
      * reset shardformer llama (#4569)
      
      * [infer] Fix engine - tensors on different devices (#4570)
      
      
      * fix diff device in engine
      
      * [codefactor] Feature/colossal inference (#4579)
      
      * code factors
      
      * remove
      
      * change coding (#4581)
      
      * [doc] complete README of colossal inference (#4585)
      
      * complete fig
      
      * Update README.md
      
      * [doc]update readme (#4586)
      
      * update readme
      
      * Update README.md
      
      * bug fix: fix bugs in llama and bloom (#4588)
      
      * [BUG FIX] Fix test engine in CI and non-vllm kernels llama forward (#4592)
      
      * fix tests
      
      * clean
      
      * clean
      
      * fix bugs
      
      * add
      
      * fix llama non-vllm kernels bug
      
      * modify
      
      * clean codes
      
      * [Kernel] Rmsnorm fix (#4598)
      
      * fix tests
      
      * clean
      
      * clean
      
      * fix bugs
      
      * add
      
      * fix llama non-vllm kernels bug
      
      * modify
      
      * clean codes
      
      * add triton rmsnorm
      
      * delete vllm kernel flag
      
      * [Bug Fix]Fix bugs in llama (#4601)
      
      * fix tests
      
      * clean
      
      * clean
      
      * fix bugs
      
      * add
      
      * fix llama non-vllm kernels bug
      
      * modify
      
      * clean codes
      
      * bug fix: remove rotary_positions_ids
      
      ---------
      Co-authored-by: cuiqing.li <lixx3527@gmail.com>
      
      * [kernel] Add triton layer norm & replace norm for bloom (#4609)
      
      * add layernorm for inference
      
      * add test for layernorm kernel
      
      * add bloom layernorm replacement policy
      
      * trivial: path
      
      * [Infer] Bug fix rotary embedding in llama (#4608)
      
      * fix rotary embedding
      
      * delete print
      
      * fix init seq len bug
      
      * rename pytest
      
      * add benchmark for llama
      
      * refactor codes
      
      * delete useless code
      
      * [bench] Add bloom inference benchmark (#4621)
      
      * add bloom benchmark
      
      * readme - update benchmark res
      
      * trivial - uncomment for testing (#4622)
      
      * [Infer] add check triton and cuda version for tests (#4627)
      
      * fix rotary embedding
      
      * delete print
      
      * fix init seq len bug
      
      * rename pytest
      
      * add benchmark for llama
      
      * refactor codes
      
      * delete useless code
      
      * add check triton and cuda
      
      * Update sharder.py (#4629)
      
      * [Inference] Hot fix some bugs and typos (#4632)
      
      * fix
      
      * fix test
      
      * fix conflicts
      
      * [typo] Comments fix (#4633)
      
      * fallback
      
      * fix comments
      
      * bug fix: fix some bugs in test_llama and test_bloom (#4635)
      
      * [Infer] delete benchmark in tests and fix bug for llama and bloom (#4636)
      
      * fix rotary embedding
      
      * delete print
      
      * fix init seq len bug
      
      * rename pytest
      
      * add benchmark for llama
      
      * refactor codes
      
      * delete useless code
      
      * add check triton and cuda
      
      * delete benchmark and fix infer bugs
      
      * delete benchmark for tests
      
      * delete useless code
      
      * delete benchmark function in utils
      
      * [Fix] Revise TPInferEngine, inference tests and benchmarks (#4642)
      
      * [Fix] revise TPInferEngine methods and inference tests
      
      * fix llama/bloom infer benchmarks
      
      * fix infer tests
      
      * trivial fix: benchmarks
      
      * trivial
      
      * trivial: rm print
      
      * modify utils filename for infer ops test (#4657)
      
      * [Infer] Fix TPInferEngine init & inference tests, benchmarks (#4670)
      
      * fix engine funcs
      
      * TPInferEngine: receive shard config in init
      
      * benchmarks: revise TPInferEngine init
      
      * benchmarks: remove pytest decorator
      
      * trivial fix
      
      * use small model for tests
      
      * [NFC] use args for infer benchmarks (#4674)
      
      * revise infer default (#4683)
      
      * [Fix] optimize/shard model in TPInferEngine init (#4684)
      
      * remove using orig model in engine
      
      * revise inference tests
      
      * trivial: rename
      
      ---------
      Co-authored-by: Jianghai <72591262+CjhHa1@users.noreply.github.com>
      Co-authored-by: Xu Kai <xukai16@foxmail.com>
      Co-authored-by: Yuanheng Zhao <54058983+yuanheng-zhao@users.noreply.github.com>
      Co-authored-by: yuehuayingxueluo <867460659@qq.com>
      Co-authored-by: yuanheng-zhao <jonathan.zhaoyh@gmail.com>
      Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
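The kv-cache manager introduced in this PR hands out cache slots to sequences and reclaims them when generation finishes, with BatchInferState tracking per-sequence metadata during the forward pass. A hedged, slot-based sketch of that allocation pattern; names and shapes are illustrative, not the ColossalAI classes:

```python
# Hedged sketch of a slot-based KV cache manager, loosely inspired by the
# memory-manager/BatchInferState split described above.
import torch

class KVCacheManager:
    def __init__(self, num_slots: int, layers: int, heads: int, head_dim: int):
        self.k = torch.zeros(layers, num_slots, heads, head_dim)
        self.v = torch.zeros(layers, num_slots, heads, head_dim)
        self.free = torch.ones(num_slots, dtype=torch.bool)

    def alloc(self, n: int) -> torch.Tensor:
        """Reserve n slots (one per token); returns their indices."""
        idx = self.free.nonzero(as_tuple=False).flatten()[:n]
        if idx.numel() < n:
            raise RuntimeError("KV cache exhausted")
        self.free[idx] = False
        return idx

    def release(self, idx: torch.Tensor) -> None:
        self.free[idx] = True   # slots become reusable immediately

mgr = KVCacheManager(num_slots=1024, layers=2, heads=4, head_dim=64)
seq_slots = mgr.alloc(10)                          # prefill: 10 prompt tokens
seq_slots = torch.cat([seq_slots, mgr.alloc(1)])   # one decode step
mgr.release(seq_slots)                             # sequence finished
```

Because attention kernels index the cache through these slot ids (as in the token-attention sketch earlier), freed slots from finished sequences can be handed to new requests without moving any tensors.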