  1. 27 May, 2024 1 commit
  2. 15 Dec, 2023 1 commit
    • Support turbomind bf16 (#803) · 3295eac3
      q.yao authored
      * Add bf16 template sp
      
      * prepare merge
      
      * add enable bf
      
      * add bf16 decode attention support
      
      * fix python lint
      
      * fix yapf
      
      * fix c format
      
      * c format11
      
      * fix cast
      
      * fix on sm<80
      
      * fix linux bf162 cast
      
      * fix type cast
      
      * fix lint
      
      * support from hf pretrained
      
      * fix pybind
      
      * fix converter
      
      * add trust remote code
      
      * fix comment
      
      * fix convert qwen
      
      * fix lint
      
      * fix baichuan
      
      * update weight map
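Several of the fixes in this commit are cast-related (including the sm<80 fallback), and the bf16 path ultimately hinges on a correct fp32→bf16 conversion. The actual kernels are CUDA templates; as a rough illustration only, here is a pure-Python sketch of a round-to-nearest-even fp32→bf16 cast (function names are made up, and NaN/inf handling is omitted):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Round an fp32 value to bf16 bits (round-to-nearest-even on the
    16 mantissa bits being dropped). NaN/inf are not special-cased here."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Rounding bias: 0x7FFF plus the LSB of the surviving upper half,
    # which implements ties-to-even.
    bits += 0x7FFF + ((bits >> 16) & 1)
    return (bits >> 16) & 0xFFFF

def bf16_bits_to_fp32(b: int) -> float:
    """Widen bf16 bits back to fp32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x
```

For example, pi rounds to the nearest bf16 value 3.140625; whether TurboMind's kernels use exactly this rounding mode is an assumption on my part.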
  3. 14 Aug, 2023 1 commit
    • [Feature] Blazing fast W4A16 inference (#202) · c3290cad
      Li Zhang authored
      * add w4a16
      
      * fix `deploy.py`
      
      * add doc
      
      * add w4a16 kernels
      
      * fuse w1/w3 & bugfixes
      
      * fix typo
      
      * python
      
      * guard sm75/80 features
      
      * add missing header
      
      * refactor
      
      * qkvo bias
      
      * update cost model
      
      * fix lint
      
      * update `deploy.py`
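W4A16 means 4-bit weights with 16-bit activations: weights are stored quantized and dequantized on the fly inside the GEMM kernels. As a hedged sketch of the group-wise asymmetric scheme commonly used for such weights (the group size, the helper names, and the exact granularity are assumptions; the real CUDA implementation also packs two 4-bit values per byte):

```python
def quantize_w4(weights, group_size=128):
    """Group-wise asymmetric 4-bit quantization: each group gets a
    float scale and zero-point, and values map to integers in [0, 15]."""
    packed = []
    for g in range(0, len(weights), group_size):
        group = weights[g:g + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0  # avoid zero scale for constant groups
        q = [round((w - lo) / scale) for w in group]
        packed.append((q, scale, lo))
    return packed

def dequantize_w4(packed):
    """Reconstruct approximate weights: w ≈ q * scale + zero."""
    out = []
    for q, scale, zero in packed:
        out.extend(v * scale + zero for v in q)
    return out
```

The per-group reconstruction error is bounded by half a quantization step (scale / 2), which is why smaller groups trade memory for accuracy.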
  4. 04 Jul, 2023 1 commit
  5. 01 Jul, 2023 3 commits
  6. 28 Jun, 2023 1 commit
    • feat(src): add kv cache int8 quantization (#22) · cc93136e
      tpoisonooo authored
      * feat(src): add int8 and compile passed
      
      * feat(kernels): fix
      
      * feat(llama): update kernel
      
      * feat(src): add debug
      
      * fix(kernel): k_cache use int8_t pointer
      
      * style(llama): clean code
      
      * feat(deploy.py): revert to enable fmha
      
      * style(LlamaV2): clean code
      
      * feat(deploy.py): add default quant policy
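KV cache int8 quantization stores cached keys/values as int8 (the commit switches `k_cache` to an `int8_t` pointer) with a scale selected by the quant policy that `deploy.py` exposes. A minimal pure-Python sketch of the symmetric round-and-clamp scheme, assuming a single per-tensor scale (the names and granularity here are illustrative, not the repo's API):

```python
def quantize_kv_int8(values, scale):
    """Symmetric int8 quantization: q = clamp(round(v / scale), -127, 127)."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize_kv_int8(quantized, scale):
    """Map int8 codes back to floats; error is bounded by scale / 2,
    plus clipping error for values outside ±127 * scale."""
    return [q * scale for q in quantized]
```

Because the cache dominates memory at long sequence lengths, halving each cached element from fp16 to int8 roughly doubles the sequence length (or batch) that fits in the same memory budget.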
  7. 20 Jun, 2023 1 commit