1. 27 May, 2024 1 commit
  2. 20 Dec, 2023 1 commit
  3. 15 Dec, 2023 1 commit
    • Support turbomind bf16 (#803) · 3295eac3
      q.yao authored
      * Add bf16 template sp
      
      * prepare merge
      
      * add enable bf
      
      * add bf16 decode attention support
      
      * fix python lint
      
      * fix yapf
      
      * fix c format
      
      * c format11
      
      * fix cast
      
      * fix on sm<80
      
      * fix linux bf162 cast
      
      * fix type cast
      
      * fix lint
      
      * support from hf pretrained
      
      * fix pybind
      
      * fix converter
      
      * add trust remote code
      
      * fix comment
      
      * fix convert qwen
      
      * fix lint
      
      * fix baichuan
      
      * update weight map
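The upshot of #803 is that TurboMind can run weights and attention in bf16, with a fallback path for pre-Ampere GPUs (the `fix on sm<80` commit). Below is a minimal sketch of that capability gate, assuming PyTorch is available; TurboMind's actual dispatch happens in C++/CUDA template specializations, so this is an illustration of the idea, not the shipped logic:

```python
import torch

def pick_kv_dtype() -> torch.dtype:
    """Prefer bf16 where the hardware supports it, else fall back to fp16.

    bf16 tensor-core support arrived with SM80 (Ampere), which is why the
    commit above special-cases sm < 80.
    """
    major, _minor = torch.cuda.get_device_capability()
    if major >= 8 and torch.cuda.is_bf16_supported():
        return torch.bfloat16
    return torch.float16
```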
  4. 22 Nov, 2023 1 commit
    • Support loading hf model directly (#685) · 6b00f623
      Chen Xin authored
      * turbomind support export model params
      
      * fix overflow
      
      * support turbomind.from_pretrained
      
      * fix tp
      
      * support AutoModel
      
      * support load kv qparams
      
      * update auto_awq
      
* update docstring
      
      * export lmdeploy version
      
      * update doc
      
      * remove download_hf_repo
      
      * LmdeployForCausalLM -> LmdeployForCausalLM
      
      * refactor turbomind.py
      
      * update comment
      
      * add bfloat16 convert back
      
* support gradio run_local load hf
      
* support restful api server load hf
      
      * add docs
      
      * support loading previous quantized model
      
      * adapt pr 690
      
* update docs
      
* do not export turbomind config when quantizing a model
      
* check model_name when it cannot be read from config.json
      
      * update readme
      
      * remove model_name in auto_awq
      
      * update
      
      * update
      
* update
      
      * fix build
      
      * absolute import
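#685 is what lets lmdeploy consume a HuggingFace-format model directly instead of requiring an offline convert step. A sketch of the entry point it introduces follows; the exact signature has shifted across lmdeploy releases, so treat the parameter names beyond the model path as assumptions drawn from the commit titles above:

```python
# Illustrative only: loading a HF repo (or local path) straight into
# TurboMind, per the `support turbomind.from_pretrained` commit above.
from lmdeploy.turbomind import TurboMind

tm_model = TurboMind.from_pretrained(
    'internlm/internlm-chat-7b',    # HF repo id or local model path
    model_name='internlm-chat-7b',  # chat template; verified when it
)                                   # cannot be read from config.json
generator = tm_model.create_instance()
```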
  5. 10 Nov, 2023 1 commit
    • TurboMind 2 (#590) · ab1767cf
      Li Zhang authored
      * refresh decoder attention kernel
      
      * block-level kv cache
      
      * `BlockManager` & `SequenceManager`
      
      * update
      
      * update
      
      * update
      
      * update
      
      * rename
      
      * GQA support
      
      * fix context length
      
      * GQA dispatch
      
      * kv8
      
      * tune
      
      * async stream cb
      
      * nvtx
      
      * config parsing
      
      * debug
      
      * optimize output cost
      
      * split-k decoding
      
      * minor
      
      * truncate `session_len` by available blocks
      
      * minor
      
      * license
      
      * fix
      
      * dispatch `cp.async`
      
      * fix linking
      
      * fix
      
      * fix deadlock
      
      * guard input length
      
      * correct start offset
      
      * fix prefill chunking
      
      * fix `cache_block_seq_len` param passing
      
      * fix `block_size` fmtstr
      
      * fix output tokens
      
      * fix batch resizing
      
      * fix masking of finished sequences
      
      * add debug util
      
      * free unused block early
      
      * add ntk scaling and logn scaling
      
      * cmake flags
      
      * fix typo
      
      * w4a16 for sm75
      
      * fix msvc build
      
      * fix msvc build
      
      * fix block verification
      
      * fix msvc build
      
      * use `std::shuffle`
      
      * fix lint
      
      * fix lint
      
      * fix lint
      
      * clear incoming buffer
      
      * clear finished requests
      
      * fix batch initialization
      
      * fix typo
      
      * fix typo
      
      * fix comparison
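The centerpiece of TurboMind 2 is a paged, block-level KV cache: fixed-size blocks are handed out from a shared pool by `BlockManager`, per-sequence bookkeeping lives in `SequenceManager`, and `session_len` is truncated to what the pool can actually back. The real implementation is C++; the toy Python sketch below shows only the allocation idea, with all names hypothetical:

```python
class ToyBlockManager:
    """Toy sketch of block-level KV-cache bookkeeping (illustrative only)."""

    def __init__(self, num_blocks: int, block_seq_len: int):
        self.block_seq_len = block_seq_len      # tokens stored per block
        self.free = list(range(num_blocks))     # pool of free block ids
        self.owned = {}                         # sequence id -> [block ids]

    def ensure_capacity(self, seq_id: int, seq_len: int) -> bool:
        """Grow a sequence's block list until it covers seq_len tokens."""
        blocks = self.owned.setdefault(seq_id, [])
        need = -(-seq_len // self.block_seq_len) - len(blocks)  # ceil division
        if need > len(self.free):
            return False                        # caller must evict or wait
        for _ in range(need):
            blocks.append(self.free.pop())
        return True

    def release(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the pool early
        (cf. the `free unused block early` commit)."""
        self.free.extend(self.owned.pop(seq_id, []))
```

In the same spirit, the largest usable `session_len` is bounded by `num_blocks * block_seq_len`, which is what `truncate session_len by available blocks` refers to.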
  6. 17 Aug, 2023 1 commit
    • Support windows platform (#209) · 4c9959f6
      Chen Xin authored
      * __PRETTY_FUNCTION__
      
      * CASE_K
      
      * uint
      
      * remove not
      
      * HALF_FLT_MAX
      
      * struct init
      
      * port utils
      
      * better build pthread-win32
      
      * port kernels
      
      * port utils/gemm_test
      
      * hide windows header
      
      * port models
      
      * port examples && triton_backend && unittests
      
      * update build readme
      
      * fix lint
      
      * fix lint
      
      * fix lint
      
      * fix lint
      
      * fix lint
      
      * fix build
      
      * fix build
      
      * cmake version
      
      * fix typos
      
      * update ci
      
      * port kernels/gemm_s_f16
      
      * update ci
      
      * fix ci
      
      * use cudaStreamSynchronize instead of volatile check
      
      * remove gettimeofday
      
      * remove pthread-win32
      
      * remove dirent.h
      
      * update pre-commit
      
      * update
      
      * remove todo
      
      * fix include
      
      * fix build
      
      * fix build
      
      * fix build ci
      
      * fix github action trigger
      
      * update README
      
      * fix linux-build ci
      
      * remove windows folder
      
      * fix lint
      
      * update readme
  7. 14 Aug, 2023 2 commits
  8. 31 Jul, 2023 1 commit
  9. 21 Jul, 2023 1 commit
  10. 01 Jul, 2023 3 commits
  11. 28 Jun, 2023 1 commit
    • feat(src): add kv cache int8 quantization (#22) · cc93136e
      tpoisonooo authored
      * feat(src): add int8 and compile passed
      
      * feat(kernels): fix
      
      * feat(llama): update kernel
      
      * feat(src): add debug
      
      * fix(kernel): k_cache use int8_t pointer
      
      * style(llama): clean code
      
      * feat(deploy.py): revert to enable fmha
      
      * style(LlamaV2): clean code
      
      * feat(deploy.py): add default quant policy
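What #22 adds, conceptually, is a symmetric int8 round-trip on the cached K/V tensors so they occupy half the memory of fp16. The sketch below shows the numeric transform only, assuming PyTorch; the real work happens inside the CUDA kernels (note the `k_cache use int8_t pointer` commit), and the scale would come from calibration rather than being passed by hand:

```python
import torch

def kv_quant_int8(kv: torch.Tensor, scale: float) -> torch.Tensor:
    # Symmetric quantization: values in [-127*scale, 127*scale] map onto int8.
    return torch.clamp(torch.round(kv / scale), -128, 127).to(torch.int8)

def kv_dequant_int8(q: torch.Tensor, scale: float) -> torch.Tensor:
    # Inverse transform applied when cached K/V is read back for attention.
    return q.to(torch.float16) * scale
```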
  12. 24 Jun, 2023 1 commit
  13. 20 Jun, 2023 1 commit