1. 11 Dec, 2023 4 commits
  2. 07 Dec, 2023 1 commit
  3. 06 Dec, 2023 3 commits
  4. 05 Dec, 2023 2 commits
  5. 04 Dec, 2023 4 commits
  6. 02 Dec, 2023 1 commit
  7. 29 Nov, 2023 7 commits
  8. 28 Nov, 2023 1 commit
  9. 27 Nov, 2023 2 commits
  10. 24 Nov, 2023 1 commit
  11. 23 Nov, 2023 3 commits
  12. 22 Nov, 2023 1 commit
      Support loading hf model directly (#685) · 6b00f623
      Chen Xin authored
      * turbomind support export model params
      * fix overflow
      * support turbomind.from_pretrained
      * fix tp
      * support AutoModel
      * support load kv qparams
      * update auto_awq
      * update docstring
      * export lmdeploy version
      * update doc
      * remove download_hf_repo
      * LmdeployForCausalLM -> LmdeployForCausalLM
      * refactor turbomind.py
      * update comment
      * add bfloat16 convert back
      * support gradio run_local load hf
      * support restful api server load hf
      * add docs
      * support loading previously quantized model
      * adapt pr 690
      * update docs
      * do not export turbomind config when quantizing a model
      * check model_name when it cannot be read from config.json
      * update readme
      * remove model_name in auto_awq
      * update
      * update
      * update
      * fix build
      * absolute import
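      The bullets above reference the new turbomind.from_pretrained entry point for loading a Hugging Face model directly, without an offline conversion step. A minimal usage sketch, assuming the interface matches the lmdeploy version of this commit; the model id and variable names are illustrative placeholders, not taken from the commit:

          # Sketch: load an HF-format model directly via TurboMind.from_pretrained
          # (added by this commit); exact arguments may differ between versions.
          from lmdeploy import turbomind as tm

          # Placeholder model id; any HF repo id or local HF checkpoint path is meant to work.
          tm_model = tm.TurboMind.from_pretrained('internlm/internlm-chat-7b')

          # Create an inference instance for subsequent generation calls.
          generator = tm_model.create_instance()

      The commit's gradio and restful api bullets extend the same direct-HF loading path to the serving entry points, so the model argument there can likewise be an HF repo id or local checkpoint rather than a pre-converted turbomind workspace.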
  13. 21 Nov, 2023 1 commit
  14. 20 Nov, 2023 3 commits
  15. 19 Nov, 2023 2 commits
  16. 16 Nov, 2023 1 commit
  17. 15 Nov, 2023 1 commit
  18. 14 Nov, 2023 1 commit
  19. 13 Nov, 2023 1 commit