  1. 22 Nov, 2023 1 commit
      Support loading hf model directly (#685) · 6b00f623
      Chen Xin authored
      * turbomind support export model params
      
      * fix overflow
      
      * support turbomind.from_pretrained
      
      * fix tp
      
      * support AutoModel
      
      * support load kv qparams
      
      * update auto_awq
      
      * update docstring
      
      * export lmdeploy version
      
      * update doc
      
      * remove download_hf_repo
      
      * LmdeployForCausalLM -> LmdeployForCausalLM
      
      * refactor turbomind.py
      
      * update comment
      
      * add bfloat16 convert back
      
      * support gradio run_local load hf
      
      * support restful api server load hf
      
      * add docs
      
      * support loading previous quantized model
      
      * adapt pr 690
      
      * update docs
      
      * do not export turbomind config when quantizing a model
      
      * check model_name when it cannot be read from config.json
      
      * update readme
      
      * remove model_name in auto_awq
      
      * update
      
      * update
      
      * update
      
      * fix build
      
      * absolute import
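The headline change here is `turbomind.from_pretrained`, which lets TurboMind consume a Hugging Face model id or a local (possibly pre-quantized) checkpoint directly, and the gradio `run_local` and RESTful API server entry points build on the same path. Below is a minimal sketch of that flow; the model id, the `tp` keyword, and the `create_instance()` call are illustrative assumptions about lmdeploy's TurboMind API rather than code taken from the commit.

```python
# Minimal sketch of the "load an HF model directly" flow from PR #685.
# The model id and keyword arguments are illustrative assumptions, not
# taken verbatim from the commit.
from lmdeploy.turbomind import TurboMind

# Accepts a Hugging Face hub id or a local directory; the engine derives
# its TurboMind parameters from the HF checkpoint instead of requiring an
# offline conversion step first.
tm_model = TurboMind.from_pretrained(
    'internlm/internlm-chat-7b',  # hypothetical model id for illustration
    tp=1,                         # tensor parallelism (see "fix tp" above)
)

# Generation is driven through an inference instance created on the engine.
generator = tm_model.create_instance()
```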
  2. 14 Aug, 2023 1 commit
      [Feature] Blazing fast W4A16 inference (#202) · c3290cad
      Li Zhang authored
      * add w4a16
      
      * fix `deploy.py`
      
      * add doc
      
      * add w4a16 kernels
      
      * fuse w1/w3 & bugfixes
      
      * fix typo
      
      * python
      
      * guard sm75/80 features
      
      * add missing header
      
      * refactor
      
      * qkvo bias
      
      * update cost model
      
      * fix lint
      
      * update `deploy.py`
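This commit's fused W4A16 kernels are guarded for sm75/sm80, i.e. they need an NVIDIA GPU with compute capability 7.5 or newer. The sketch below shows how a caller might check that before opting into the 4-bit path; the helper name is hypothetical and only standard PyTorch capability queries are used.

```python
# Hypothetical pre-flight check for the W4A16 path (PR #202): the kernels
# are guarded for sm75/sm80, so require compute capability >= 7.5 before
# selecting the 4-bit weights. Uses only standard PyTorch queries.
import torch

def supports_w4a16(device_index: int = 0) -> bool:
    """Return True if the GPU meets the sm75/sm80 guard for W4A16 kernels."""
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability(device_index)
    return (major, minor) >= (7, 5)

if __name__ == '__main__':
    print('W4A16 kernels usable:', supports_w4a16())
```

GPUs below that threshold would presumably need to stay on the regular FP16 path.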