1. 13 Dec, 2023 1 commit
  2. 12 Dec, 2023 1 commit
  3. 06 Dec, 2023 1 commit
    • Report the inference benchmark of models with different sizes (#794) · ebe90bc9
      Lyu Han authored
      * update test scripts for models with different sizes
      
      * update
      
* only test after tuning gemm
      
      * chmod +x
      
      * fix typo
      
      * benchmark on a100
      
      * fix typo
      
      * fix typo
      
      * per-token latency percentile in profile_throughput
      
      * fix
      
      * fix
      
      * rename
      
      * make the script accept parameters
      
      * minor fix
      
      * indent
      
      * reformat table
      
      * change to 3000
      
      * minor fix
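The headline numbers a throughput benchmark like this reports can be summarized as in the following minimal Python sketch (function and field names are illustrative, not the profiler's actual code):

```python
def summarize_throughput(completion_tokens, elapsed_s):
    """Summarize a benchmark run from per-request generated-token counts
    and the total wall-clock time of the run."""
    total = sum(completion_tokens)
    return {
        "completion_tokens": total,
        "token_throughput": total / elapsed_s,          # tokens per second
        "request_throughput": len(completion_tokens) / elapsed_s,  # req/s
    }

# three requests generated 100, 120 and 80 tokens in 2 seconds
stats = summarize_throughput([100, 120, 80], elapsed_s=2.0)
```

The same shape extends naturally to comparing models of different sizes: run once per model and tabulate the resulting dicts.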
  4. 05 Dec, 2023 1 commit
  5. 04 Dec, 2023 1 commit
  6. 29 Nov, 2023 3 commits
    • Update benchmark user guide (#763) · d3e2cee4
      Lyu Han authored
      * user guide of benchmark generation
      
      * update benchmark generation guide
      
      * update profiling throughput guide
      
      * update profiling api_server guide
      
      * rename file names
      
      * update profile tis user guide
      
      * update
      
      * fix according to review comments
      
      * update
      
      * update according to review comments
      
* update
      
      * add an example
      
      * update
    • Report first-token-latency and token-latency percentiles (#736) · 5c9e1e28
      Lyu Han authored
      * update profile scripts
      
      * add top_p, top_k and temperature as input arguments
      
      * fix input_ids
      
      * update profile_throughput
      
      * update profile_restful_api
      
      * update profile_serving
      
      * update
      
      * update
      
      * add progress bar
      
      * remove TODO comments
      
      * update
      
      * remove useless profile_* argument
      
      * remove log level
      
      * change concurrency default value to 64
      
      * update restful_api.md
      
      * update according to review comments
      
      * fix docstring
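The token-latency percentiles this PR reports are straightforward to compute; here is a minimal nearest-rank sketch using only the standard library (the function name is an assumption, not the script's real API):

```python
import math

def latency_percentile(samples, p):
    """Nearest-rank percentile of a list of latencies (seconds)."""
    ordered = sorted(samples)
    idx = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[idx]

# e.g. per-token latencies collected while streaming one request;
# first-token latency would be timed separately from request start
per_token = [0.018, 0.020, 0.021, 0.019, 0.035, 0.022]
p95 = latency_percentile(per_token, 95)
```

Collecting first-token latency per request and per-token latency per generated token, then reporting p50/p75/p95/p99 of each list, matches the split the commit messages describe.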
    • improvement(build): enable ninja and gold linker (#767) · 8add942d
      tpoisonooo authored
      * feat(build): enable ninja and lld
      
      * fix(.github): add ninja installation
      
      * fix(CI): remove dimsize=256
      
      * fix(CI): add option for generate.sh
      
      * fix(docs): update
  7. 27 Nov, 2023 1 commit
  8. 22 Nov, 2023 1 commit
    • Support loading hf model directly (#685) · 6b00f623
      Chen Xin authored
      * turbomind support export model params
      
      * fix overflow
      
      * support turbomind.from_pretrained
      
      * fix tp
      
      * support AutoModel
      
      * support load kv qparams
      
      * update auto_awq
      
* update docstring
      
      * export lmdeploy version
      
      * update doc
      
      * remove download_hf_repo
      
      * LmdeployForCausalLM -> LmdeployForCausalLM
      
      * refactor turbomind.py
      
      * update comment
      
      * add bfloat16 convert back
      
* support gradio run_local load hf
      
* support restful api server load hf
      
      * add docs
      
      * support loading previous quantized model
      
      * adapt pr 690
      
* update docs
      
* do not export turbomind config when quantizing a model
      
* check model_name when it cannot be read from config.json
      
      * update readme
      
      * remove model_name in auto_awq
      
      * update
      
      * update
      
* update
      
      * fix build
      
      * absolute import
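The from_pretrained-style loading these commits describe typically dispatches on whether the argument is a local directory or a hub-style repo id. A toy sketch of that pattern (hypothetical class, not lmdeploy's actual API):

```python
import os

class AutoModelStub:
    """Illustration of a from_pretrained-style entry point: accept either
    a local model directory or a hub-style "org/name" repo id."""

    def __init__(self, source, location):
        self.source = source      # "local" or "hub"
        self.location = location  # path or repo id

    @classmethod
    def from_pretrained(cls, name_or_path):
        if os.path.isdir(name_or_path):
            return cls(source="local", location=name_or_path)
        return cls(source="hub", location=name_or_path)

model = AutoModelStub.from_pretrained("internlm/internlm-chat-7b")
```

The convenience is that callers no longer need a separate conversion step: the same entry point handles both a converted workspace on disk and a raw hf model id.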
  9. 20 Nov, 2023 1 commit
  10. 19 Nov, 2023 1 commit
  11. 13 Nov, 2023 1 commit
  12. 10 Nov, 2023 1 commit
  13. 01 Nov, 2023 1 commit
    • Improve api_server and webui usage (#544) · 373bd013
      AllentDan authored
* make IPv6 compatible, run safely when a coroutine is interrupted
      
      * instance_id -> session_id and fix api_client.py
      
      * update doc
      
      * remove useless faq
      
      * safe ip mapping
      
      * update app.py
      
      * WIP completion
      
      * completion
      
      * update doc
      
      * disable interactive mode for /v1/chat/completions
      
      * docstring
      
      * docstring
      
      * refactor gradio
      
      * update gradio
      
* update
      
      * update doc
      
      * rename
      
      * session_id default -1
      
      * missed two files
      
      * add a APIClient
      
      * add chat func for APIClient
      
      * refine
      
      * add concurrent function
      
      * sequence_start, sequence_end --> interactive_mode
      
      * update doc
      
      * comments
      
      * doc
      
      * better text completion
      
      * remove /v1/embeddings
      
      * comments
      
      * deprecate generate and use /v1/interactive/completions
      
      * /v1/interactive/completion -> /v1/chat/interactive
      
      * embeddings
      
      * rename
      
      * remove wrong arg description
      
      * docstring
      
      * fix
      
      * update cli
      
      * update doc
      
      * strict session_len limit condition
      
      * pass model args to api_server
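A client of the /v1/chat/interactive endpoint introduced above posts a JSON body carrying the prompt, the session_id (default -1, per the commits), and the interactive_mode flag. A minimal sketch of assembling such a payload (field names beyond those mentioned in the commits are assumptions, not the exact schema):

```python
import json

def build_interactive_request(prompt, session_id=-1, interactive_mode=False):
    """Assemble a JSON-ready body for a /v1/chat/interactive-style endpoint.

    interactive_mode replaces the older sequence_start/sequence_end pair:
    when True, the server keeps the session history keyed by session_id."""
    return {
        "prompt": prompt,
        "session_id": session_id,
        "interactive_mode": interactive_mode,
    }

body = json.dumps(build_interactive_request("hello", session_id=1,
                                            interactive_mode=True))
```

A concurrent client (like the APIClient added in this PR) would simply issue many such requests with distinct session ids.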
  14. 25 Oct, 2023 2 commits
  15. 23 Oct, 2023 1 commit
  16. 13 Oct, 2023 1 commit
  17. 12 Oct, 2023 1 commit
  18. 11 Oct, 2023 2 commits
  19. 14 Sep, 2023 1 commit
  20. 11 Sep, 2023 1 commit
    • Support codellama (#359) · 65c662f9
      Lyu Han authored
      * tmp
      
      * add demo for codellama inference
      
      * update
      
      * update
      
      * update
      
      * update codellama.md
      
      * export rope_theta
      
      * update
      
      * update doc
      
      * fix client.py
      
      * define SamplingParam
      
      * rollback 'end'
      
      * rotary_emb_base to rotary_embedding_base
      
      * change to baichuan2-7b
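The rope_theta value exported here is the base of the rotary-embedding frequencies, which CodeLlama raises well above LLaMA's 10000 in its released config to stretch the usable context. A minimal sketch of the standard inverse-frequency formula (stdlib only; the function name is illustrative):

```python
def rope_inv_freq(dim, rope_theta=10000.0):
    """Rotary-embedding inverse frequencies for a head dimension `dim`:
    inv_freq[i] = rope_theta ** (-2*i / dim), one per channel pair."""
    return [rope_theta ** (-(2 * i) / dim) for i in range(dim // 2)]

# larger rope_theta -> slower-rotating high channels -> longer context
freqs = rope_inv_freq(8, rope_theta=10000.0)
```

Exporting rope_theta into the converted model config, as this PR does, lets the engine reproduce these frequencies instead of hard-coding the base.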
  21. 05 Sep, 2023 1 commit
  22. 01 Sep, 2023 1 commit
    • Decode generated token_ids incrementally (#309) · 9bfe03c6
      AllentDan authored
      * add incremental decoding for turbomind
      
      * update TIS
      
      * fix triton post processing
      
      * update doc
      
      * fix typo
      
      * SentencePieceTokenizer incremental decode, add qwen message prompt
      
      * docstring
      
      * update bot
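Incremental decoding avoids re-emitting text that was already streamed: keep the previously decoded string, decode the full id sequence, and return only the new suffix. A self-contained toy sketch of that technique (the tokenizer here is a stand-in, not SentencePiece):

```python
class ToyTokenizer:
    """Stand-in tokenizer: each id maps to a fixed string piece."""
    vocab = {0: "Hello", 1: ",", 2: " world", 3: "!"}

    def decode(self, ids):
        return "".join(self.vocab[i] for i in ids)

class IncrementalDecoder:
    """Decode a growing id sequence, emitting only the newly added text.

    Re-decoding from a kept prefix matters for tokenizers where a token's
    surface form depends on its neighbors, so id-by-id decoding would be
    wrong; this is the general technique the commits describe."""

    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
        self.prev_text = ""

    def step(self, all_ids):
        text = self.tokenizer.decode(all_ids)
        delta, self.prev_text = text[len(self.prev_text):], text
        return delta

dec = IncrementalDecoder(ToyTokenizer())
```

Each `step` call receives the full generated sequence so far and yields only the delta to append to the display.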
  23. 30 Aug, 2023 1 commit
  24. 29 Aug, 2023 1 commit
  25. 24 Aug, 2023 2 commits
  26. 22 Aug, 2023 1 commit
    • Add Restful API (#223) · d5c10e7a
      AllentDan authored
      * add restful api
      
      * refine
      
      * add simple doc
      
      * lint
      
      * add uvicorn requirement
      
      * more args
      
      * add llama2
      
      * docstring
      
      * update doc
      
      * save
      
      * refine
      
      * lint
      
      * better decode
      
      * add v1/embedding
      
      * add GenerateRequest
      
      * add llama2 chat template
      
      * correct profiling
      
      * update documents
      
      * add length judge
      
      * add faq
      
      * update doc and rename req_que to req_queue
      
      * fix md link, use get_logger, fix sequence_end bug
      
      * use another doc link for go to avoid lint error
      
      * add api_client.py
      
      * update doc
      
      * update doc
      
      * update function interface
      
      * update FAQ
      
      * resolve comments
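At its core, a completion endpoint like the one added here validates the request and forwards sampling parameters to the generation backend. A framework-free sketch of that handler logic (field names are illustrative, not the server's exact schema):

```python
def handle_completion(request: dict, generate) -> dict:
    """Validate a completion request and call a generate(prompt, **params)
    backend; the real server wraps this kind of logic in FastAPI/uvicorn."""
    prompt = request.get("prompt")
    if not prompt:
        return {"error": "prompt is required", "status": 400}
    # pass through only the recognized sampling parameters
    params = {k: request[k]
              for k in ("top_p", "top_k", "temperature") if k in request}
    return {"text": generate(prompt, **params), "status": 200}

# usage with a stub backend that just upper-cases the prompt
reply = handle_completion({"prompt": "hi", "temperature": 0.7},
                          generate=lambda p, **kw: p.upper())
```

Separating validation from generation this way also makes the length-limit check the commits mention ("add length judge") a one-line addition before calling the backend.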
  27. 21 Aug, 2023 3 commits
  28. 17 Aug, 2023 1 commit
  29. 15 Aug, 2023 1 commit
  30. 14 Aug, 2023 3 commits
  31. 07 Aug, 2023 1 commit
    • [Refactor] Support multi-session chat (#178) · 4bd0b487
      WRH authored
      * add some dist utils
      
      * add model utils
      
      * add termio and basicstreamer
      
      * typo
      
      * fix world size
      
      * refactor chat and tested llama1
      
* add internlm adapter and support stopping criteria
      
      * concat with id for internlm
      
      * update docstring
      
      * update and support llama2
      
      * typo
      
      * move docs to docs
      
      * update docstring of session manager
      
      * update docstring
      
      * update docs
      
      * fix accel none in model
      
      * fix and add test for tensor broadcast
      
      * fix session using typing to check type
      
      * add docstrings and comprehensive condition test
      
      * unit test for dist
      
      * fix session
      
      * split unittests of utils
      
      * typo
      
      * update control flow of accel
      
      * move test model
      
      * remove main in unittest
      
      * remove some log
      
      * remove some comments
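Multi-session chat boils down to keeping one history per session id plus a stopping criterion on the generated text. A minimal sketch under those assumptions (class and function names are hypothetical, not the refactor's actual code):

```python
class SessionManager:
    """Minimal multi-session store: one running history per session id."""

    def __init__(self):
        self.sessions = {}

    def append(self, session_id, role, text):
        self.sessions.setdefault(session_id, []).append((role, text))

    def history(self, session_id):
        return list(self.sessions.get(session_id, []))

    def end(self, session_id):
        # dropping the history frees the session slot for reuse
        self.sessions.pop(session_id, None)

def hit_stop_word(generated: str, stop_words=("<eoa>",)) -> bool:
    """Stopping criterion of the kind the commits mention: halt generation
    once the decoded text ends with a model-specific stop word."""
    return any(generated.endswith(w) for w in stop_words)

sm = SessionManager()
sm.append(1, "user", "hi")
sm.append(2, "user", "yo")
```

Keyed histories keep concurrent sessions isolated, while the stop-word check is applied to each session's stream independently.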