  1. 05 Sep, 2023 1 commit
  2. 01 Sep, 2023 1 commit
    • Decode generated token_ids incrementally (#309) · 9bfe03c6
      AllentDan authored
      * add incremental decoding for turbomind
      
      * update TIS
      
      * fix triton post processing
      
      * update doc
      
      * fix typo
      
      * SentencePieceTokenizer incremental decode, add qwen message prompt
      
      * docstring
      
      * update bot
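
      A minimal sketch of the incremental detokenization idea behind #309, assuming a HuggingFace-style tokenizer; the class and state handling are illustrative, not lmdeploy's actual interface. Decoding the running sequence and emitting only the new suffix avoids the broken multi-byte characters that can appear when each token id is decoded in isolation.

      ```python
      # Illustrative incremental detokenization (not lmdeploy's actual API).
      from transformers import AutoTokenizer


      class IncrementalDecoder:
          def __init__(self, model_path: str):
              self.tokenizer = AutoTokenizer.from_pretrained(model_path)
              self.token_ids = []        # all generated ids seen so far
              self.decoded_prefix = ''   # text already emitted to the caller

          def step(self, new_ids):
              """Append newly generated ids and return only the newly decoded text."""
              self.token_ids.extend(new_ids)
              text = self.tokenizer.decode(self.token_ids, skip_special_tokens=True)
              # A half-generated multi-byte character decodes to U+FFFD; wait for more ids.
              if text.endswith('\ufffd'):
                  return ''
              delta = text[len(self.decoded_prefix):]
              self.decoded_prefix = text
              return delta
      ```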
  3. 30 Aug, 2023 1 commit
  4. 29 Aug, 2023 1 commit
  5. 24 Aug, 2023 2 commits
  6. 22 Aug, 2023 1 commit
    • Add Restful API (#223) · d5c10e7a
      AllentDan authored
      * add restful api
      
      * refine
      
      * add simple doc
      
      * lint
      
      * add uvicorn requirement
      
      * more args
      
      * add llama2
      
      * docstring
      
      * update doc
      
      * save
      
      * refine
      
      * lint
      
      * better decode
      
      * add v1/embedding
      
      * add GenerateRequest
      
      * add llama2 chat template
      
      * correct profiling
      
      * update documents
      
      * add length judge
      
      * add faq
      
      * update doc and rename req_que to req_queue
      
      * fix md link, use get_logger, fix sequence_end bug
      
      * use another doc link for go to avoid lint error
      
      * add api_client.py
      
      * update doc
      
      * update doc
      
      * update function interface
      
      * update FAQ
      
      * resolve comments
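
      As a rough illustration of how a client might call such a RESTful server: the host, port, endpoint path, and JSON fields below are placeholders rather than the exact schema added in #223 (the PR's api_client.py is the authoritative client).

      ```python
      # Hypothetical client for a text-generation REST endpoint; the URL and payload
      # field names are assumptions, not the server's real schema.
      import requests

      BASE_URL = 'http://localhost:8000'  # assumed host/port


      def generate(prompt: str, session_id: int = 1, max_new_tokens: int = 512) -> str:
          payload = {
              'prompt': prompt,
              'session_id': session_id,
              'request_output_len': max_new_tokens,
          }
          resp = requests.post(f'{BASE_URL}/generate', json=payload, timeout=60)
          resp.raise_for_status()
          return resp.json().get('text', '')


      if __name__ == '__main__':
          print(generate('Hello, what can you do?'))
      ```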
  7. 21 Aug, 2023 3 commits
  8. 17 Aug, 2023 1 commit
  9. 15 Aug, 2023 1 commit
  10. 14 Aug, 2023 3 commits
  11. 07 Aug, 2023 1 commit
    • [Refactor] Support multi-session chat (#178) · 4bd0b487
      WRH authored
      * add some dist utils
      
      * add model utils
      
      * add termio and basicstreamer
      
      * typo
      
      * fix world size
      
      * refactor chat and tested llama1
      
      * add internlm adapter and support stopping criteria
      
      * concat with id for internlm
      
      * update docstring
      
      * update and support llama2
      
      * typo
      
      * move docs to docs
      
      * update docstring of session manager
      
      * update docstring
      
      * update docs
      
      * fix accel none in model
      
      * fix and add test for tensor broadcast
      
      * fix session using typing to check type
      
      * add docstrings and comprehensive condition test
      
      * unit test for dist
      
      * fix session
      
      * split unittests of utils
      
      * typo
      
      * update control flow of accel
      
      * move test model
      
      * remove main in unittest
      
      * remove some log
      
      * remove some comments
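
      A toy sketch of the multi-session idea in #178: keep an independent history per session id so several conversations can interleave on one model. The class and method names are made up for illustration and do not mirror the PR's session manager.

      ```python
      # Minimal per-session history bookkeeping (illustrative only).
      from dataclasses import dataclass, field


      @dataclass
      class Session:
          session_id: int
          history: list = field(default_factory=list)  # (role, text) turns


      class SessionManager:
          def __init__(self):
              self._sessions = {}

          def get(self, session_id: int) -> Session:
              """Return the session, creating it on first use."""
              return self._sessions.setdefault(session_id, Session(session_id))

          def append(self, session_id: int, role: str, text: str):
              self.get(session_id).history.append((role, text))

          def end(self, session_id: int):
              """Drop the session so its history (and any cache) can be released."""
              self._sessions.pop(session_id, None)
      ```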
  12. 04 Aug, 2023 1 commit
  13. 03 Aug, 2023 1 commit
  14. 26 Jul, 2023 1 commit
  15. 23 Jul, 2023 1 commit
    • Refactor the chat template of supported models using factory pattern (#144) · 7b470f07
      lvhan028 authored
      * refactor model.py and support baichuan-7b
      
      * remove model_name
      
      * remove hard session_len
      
      * export tokenizer.py to target dir
      
      * remove model_name from client
      
      * remove model_name
      
      * update
      
      * correct throughput equation
      
      * fix session.response
      
      * update serving.md
      
      * update readme
      
      * update according to review comments
      
      * update
      
      * update
      
      * update
      
      * update
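
      The factory-pattern refactor in #144 amounts to registering one chat-template class per model name and looking it up at runtime. A generic sketch of that pattern follows; the class names and prompt strings are illustrative, not the ones in model.py.

      ```python
      # Generic registry/factory for chat templates (names are illustrative).
      MODELS = {}


      def register_model(name):
          """Class decorator that records a template class under a model name."""
          def _wrap(cls):
              MODELS[name] = cls
              return cls
          return _wrap


      @register_model('llama2')
      class Llama2Template:
          def get_prompt(self, user_input: str) -> str:
              return f'[INST] {user_input} [/INST]'


      @register_model('baichuan-7b')
      class BaichuanTemplate:
          def get_prompt(self, user_input: str) -> str:
              return user_input  # plain completion-style prompt


      def build_template(model_name: str):
          """Factory entry point: choose the template class by model name."""
          return MODELS[model_name]()
      ```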
  16. 17 Jul, 2023 1 commit
  17. 14 Jul, 2023 2 commits
  18. 13 Jul, 2023 1 commit
  19. 11 Jul, 2023 2 commits
  20. 05 Jul, 2023 5 commits
    • improve readme (#52) · 3e7b6bfd
      lvhan028 authored
      * add performance
      
      * use png
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
    • fix(kv_qparams.py): zp use min (#59) · ec53d63f
      tpoisonooo authored
      * fix(kv_qparams.py): zp use min
      
      * revert(qparams.py): revert format
      
      * fix(kv_qparams.py): update formula
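
      For reference, a common asymmetric-quantization formulation in which the zero-point is derived from the observed minimum, which is presumably what "zp use min" refers to; whether kv_qparams.py uses exactly this formula is an assumption.

      ```python
      # Standard asymmetric quantization parameters from (min, max) statistics.
      def qparams(v_min: float, v_max: float, n_bits: int = 8):
          q_max = 2 ** n_bits - 1
          scale = max(v_max - v_min, 1e-8) / q_max
          zero_point = round(-v_min / scale)   # zero-point comes from the minimum
          return scale, zero_point


      scale, zp = qparams(-0.8, 1.2)
      # quantize:   q = clamp(round(x / scale) + zp, 0, q_max)
      # dequantize: x ≈ (q - zp) * scale
      ```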
    • docs(README): typo (#56) · 7396d8f6
      tpoisonooo authored
    • [Feature] Stats Quantization Parameters for KV Cache (#45) · 3fff964d
      pppppM authored
      * add cal qparams
      
      * support offload inference
      
      * add collect functions (mod, weight)
      
      * stats kv scales
      
      * update init
      
      * add user guide
      
      * fix hints
      
      * fix comments & support turbomind format
      
      * update user guide
      
      * fix slice kv cache error & support pileval dataset (used in llm-awq)
      
      * fix wrong num heads slice
      
      * update default dataset
      
      * fix conflict
      
      * fix hints
      
      * fix hints
      
      * add gitignore
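
      A rough sketch of the stats-collection idea in #45: run calibration prompts (e.g. a slice of pileval, as the commit mentions) through the model and record per-layer min/max of the key/value activations, from which scales and zero-points can be computed. Where the hook attaches and how k/v are laid out are model-specific; the observer below only shows the bookkeeping.

      ```python
      # Illustrative running min/max observer for KV cache tensors.
      import torch


      class KVObserver:
          """Track running min/max of key and value activations for one layer."""

          def __init__(self):
              self.k_min = self.v_min = float('inf')
              self.k_max = self.v_max = float('-inf')

          def update(self, k: torch.Tensor, v: torch.Tensor):
              self.k_min = min(self.k_min, k.min().item())
              self.k_max = max(self.k_max, k.max().item())
              self.v_min = min(self.v_min, v.min().item())
              self.v_max = max(self.v_max, v.max().item())


      # In practice update() would be called from a forward hook on every attention
      # layer during calibration; random tensors here just exercise the bookkeeping.
      obs = KVObserver()
      for _ in range(4):
          obs.update(torch.randn(1, 32, 128), torch.randn(1, 32, 128))
      print(obs.k_min, obs.k_max, obs.v_min, obs.v_max)
      ```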
    • docs(quantization): add more test (#53) · edb6eb86
      tpoisonooo authored
      * docs(quantization): add more test
      
      * revert(generate.sh): revert ninja
      
      * revert(llama_config.ini): revert empty line
      
      * fix(quantization.md): fix link error
  21. 04 Jul, 2023 2 commits