1. 22 Mar, 2024 1 commit
  2. 12 Dec, 2023 1 commit
  3. 06 Dec, 2023 1 commit
    • Report the inference benchmark of models with different sizes (#794) · ebe90bc9
      Lyu Han authored
      * update test scripts for models with different sizes
      
      * update
      
      * only test after tuning gemm
      
      * chmod +x
      
      * fix typo
      
      * benchmark on a100
      
      * fix typo
      
      * fix typo
      
      * per-token latency percentile in profile_throughput
      
      * fix
      
      * fix
      
      * rename
      
      * make the script accept parameters
      
      * minor fix
      
      * indent
      
      * reformat table
      
      * change to 3000
      
      * minor fix
  4. 04 Dec, 2023 1 commit
  5. 29 Nov, 2023 1 commit
    • Report first-token-latency and token-latency percentiles (#736) · 5c9e1e28
      Lyu Han authored
      * update profile scripts
      
      * add top_p, top_k and temperature as input arguments
      
      * fix input_ids
      
      * update profile_throughput
      
      * update profile_restful_api
      
      * update profile_serving
      
      * update
      
      * update
      
      * add progress bar
      
      * remove TODO comments
      
      * update
      
      * remove useless profile_* argument
      
      * remove log level
      
      * change concurrency default value to 64
      
      * update restful_api.md
      
      * update according to review comments
      
      * fix docstring
  6. 08 Nov, 2023 1 commit
    • Fix benchmark serving computation mistake (#630) · 529e56bd
      AllentDan authored
      * fix benchmark serving computation mistake
      
      * fix timestamps computations
      
      * remove speed up
      
      * no mp
      
      * mp seems faster?
      
      * remove
      
      * update
      
      * remove
      
      * fix
      
      * update
      
      * update print log
      
      * typo
      
      * print first token latency only when stream==True
      
      * remove renew_session
      
      * update AsyncEngine
  7. 01 Nov, 2023 1 commit
    • Improve api_server and webui usage (#544) · 373bd013
      AllentDan authored
      * make IPv6 compatible, safe run for coroutine interrupting
      
      * instance_id -> session_id and fix api_client.py
      
      * update doc
      
      * remove useless faq
      
      * safe ip mapping
      
      * update app.py
      
      * WIP completion
      
      * completion
      
      * update doc
      
      * disable interactive mode for /v1/chat/completions
      
      * docstring
      
      * docstring
      
      * refactor gradio
      
      * update gradio
      
      * update
      
      * update doc
      
      * rename
      
      * session_id default -1
      
      * missed two files
      
      * add a APIClient
      
      * add chat func for APIClient
      
      * refine
      
      * add concurrent function
      
      * sequence_start, sequence_end --> interactive_mode
      
      * update doc
      
      * comments
      
      * doc
      
      * better text completion
      
      * remove /v1/embeddings
      
      * comments
      
      * deprecate generate and use /v1/interactive/completions
      
      * /v1/interactive/completion -> /v1/chat/interactive
      
      * embeddings
      
      * rename
      
      * remove wrong arg description
      
      * docstring
      
      * fix
      
      * update cli
      
      * update doc
      
      * strict session_len limit condition
      
      * pass model args to api_server
  8. 16 Oct, 2023 1 commit
  9. 31 Jul, 2023 1 commit
  10. 23 Jul, 2023 1 commit
    • Refactor the chat template of supported models using factory pattern (#144) · 7b470f07
      lvhan028 authored
      * refactor model.py and support baichuan-7b
      
      * remove model_name
      
      * remove hard session_len
      
      * export tokenizer.py to target dir
      
      * remove model_name from client
      
      * remove model_name
      
      * update
      
      * correct throughput equation
      
      * fix session.response
      
      * update serving.md
      
      * update readme
      
      * update according to review comments
      
      * update
      
      * update
      
      * update
      
      * update
  11. 22 Jul, 2023 1 commit