  1. 05 Sep, 2023 1 commit
  2. 01 Sep, 2023 1 commit
    • Decode generated token_ids incrementally (#309) · 9bfe03c6
      AllentDan authored
      * add incremental decoding for turbomind
      
      * update TIS
      
      * fix triton post processing
      
      * update doc
      
      * fix typo
      
      * SentencePieceTokenizer incremental decode, add qwen message prompt
      
      * docstring
      
      * update bot
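The incremental-decoding change above reflects a common technique: instead of detokenizing each new token_id in isolation (which mangles tokens whose surface text depends on context), decode the full sequence each step and emit only the newly produced suffix. A minimal sketch of the idea — the toy `decode` and `IncrementalDecoder` below are illustrative stand-ins, not lmdeploy's actual tokenizer API:

```python
# Illustrative sketch of incremental detokenization: decode the whole
# token_id sequence and emit only the text that appeared since the
# previous step. The toy vocabulary and decode() stand in for a real
# tokenizer; they are NOT lmdeploy's API.
from typing import List

VOCAB = {1: "Hel", 2: "lo", 3: ",", 4: " wor", 5: "ld"}

def decode(token_ids: List[int]) -> str:
    """Toy detokenizer: join per-token surface strings."""
    return "".join(VOCAB[t] for t in token_ids)

class IncrementalDecoder:
    def __init__(self):
        self.token_ids: List[int] = []
        self.prev_text = ""

    def step(self, new_id: int) -> str:
        """Append one generated id and return only the new text suffix."""
        self.token_ids.append(new_id)
        text = decode(self.token_ids)
        delta = text[len(self.prev_text):]
        self.prev_text = text
        return delta

dec = IncrementalDecoder()
pieces = [dec.step(t) for t in [1, 2, 3, 4, 5]]
print(pieces)           # each step yields only the newly decoded suffix
print("".join(pieces))  # the concatenation reproduces the full text
```

Tracking `prev_text` rather than re-emitting the whole decode each step is what makes streaming output possible without duplicated characters.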
  3. 30 Aug, 2023 1 commit
  4. 29 Aug, 2023 1 commit
  5. 24 Aug, 2023 2 commits
  6. 22 Aug, 2023 1 commit
    • Add Restful API (#223) · d5c10e7a
      AllentDan authored
      * add restful api
      
      * refine
      
      * add simple doc
      
      * lint
      
      * add uvicorn requirement
      
      * more args
      
      * add llama2
      
      * docstring
      
      * update doc
      
      * save
      
      * refine
      
      * lint
      
      * better decode
      
      * add v1/embedding
      
      * add GenerateRequest
      
      * add llama2 chat template
      
      * correct profiling
      
      * update documents
      
      * add length judge
      
      * add faq
      
      * update doc and rename req_que to req_queue
      
      * fix md link, use get_logger, fix sequence_end bug
      
      * use another doc link for go to avoid lint error
      
      * add api_client.py
      
      * update doc
      
      * update doc
      
      * update function interface
      
      * update FAQ
      
      * resolve comments
  7. 21 Aug, 2023 3 commits
  8. 17 Aug, 2023 1 commit
  9. 15 Aug, 2023 1 commit
  10. 14 Aug, 2023 2 commits
  11. 04 Aug, 2023 1 commit
  12. 03 Aug, 2023 1 commit
  13. 23 Jul, 2023 1 commit
    • Refactor the chat template of supported models using factory pattern (#144) · 7b470f07
      lvhan028 authored
      * refactor model.py and support baichuan-7b
      
      * remove model_name
      
      * remove hard session_len
      
      * export tokenizer.py to target dir
      
      * remove model_name from client
      
      * remove model_name
      
      * update
      
      * correct throughput equation
      
      * fix session.response
      
      * update serving.md
      
      * update readme
      
      * update according to review comments
      
      * update
      
      * update
      
      * update
      
      * update
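The factory-pattern refactor above follows the usual registry idiom: each model's chat template registers itself under a name, and callers construct by name instead of branching on `model_name`. A hedged sketch of that pattern — class and registry names here are illustrative assumptions, not lmdeploy's actual model.py:

```python
# Registry-based factory for chat templates: each template class
# registers under a model name; callers look the class up instead of
# walking an if/elif chain. Names are illustrative, not lmdeploy's code.
_REGISTRY = {}

def register(name: str):
    """Class decorator that records a template under a model name."""
    def wrap(cls):
        _REGISTRY[name] = cls
        return cls
    return wrap

class BaseChatTemplate:
    def get_prompt(self, user_msg: str) -> str:
        return user_msg  # default: pass the prompt through unchanged

@register("llama2")
class Llama2Template(BaseChatTemplate):
    def get_prompt(self, user_msg: str) -> str:
        return f"[INST] {user_msg} [/INST]"

@register("baichuan-7b")
class Baichuan7BTemplate(BaseChatTemplate):
    pass  # inherits the pass-through behaviour

def get_template(name: str) -> BaseChatTemplate:
    """Factory entry point: resolve a name to a template instance."""
    try:
        return _REGISTRY[name]()
    except KeyError:
        raise KeyError(f"unknown model name: {name}") from None

print(get_template("llama2").get_prompt("hi"))  # [INST] hi [/INST]
```

Adding a new model then means writing one decorated class, with no edits to the dispatch logic.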
  14. 17 Jul, 2023 1 commit
  15. 11 Jul, 2023 1 commit
  16. 05 Jul, 2023 5 commits
    • improve readme (#52) · 3e7b6bfd
      lvhan028 authored
      * add performance
      
      * use png
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
    • fix(kv_qparams.py): zp use min (#59) · ec53d63f
      tpoisonooo authored
      * fix(kv_qparams.py): zp use min
      
      * revert(qparams.py): revert format
      
      * fix(kv_qparams.py): update formula
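The "zp use min" fix concerns asymmetric quantization, where the zero-point is derived from the observed minimum of the value range so that `x_min` maps exactly to quantized 0. A sketch of the standard formula under assumed names — this is illustrative, not the actual kv_qparams.py implementation:

```python
# Asymmetric 8-bit quantization parameters from an observed value range.
# scale spreads [x_min, x_max] over [0, 255]; the zero-point comes from
# the *minimum* (the "zp use min" fix), so x_min quantizes to 0.
# Illustrative only -- not the actual kv_qparams.py code.
def kv_qparams(x_min: float, x_max: float, n_bits: int = 8):
    qmax = (1 << n_bits) - 1            # 255 for 8 bits
    scale = (x_max - x_min) / qmax      # float step per quantization level
    zero_point = round(-x_min / scale)  # derived from the minimum
    return scale, zero_point

def quantize(x: float, scale: float, zp: int) -> int:
    q = round(x / scale) + zp
    return max(0, min(255, q))          # clamp to the uint8 range

def dequantize(q: int, scale: float, zp: int) -> float:
    return (q - zp) * scale

scale, zp = kv_qparams(-2.0, 6.0)
print(scale, zp)  # the range endpoints land on 0 and 255 respectively
```

Using min (rather than assuming a symmetric range around zero) avoids wasting quantization levels when the KV-cache values are skewed to one side.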
    • docs(README): typo (#56) · 7396d8f6
      tpoisonooo authored
    • [Feature] Stats Quantization Parameters for KV Cache (#45) · 3fff964d
      pppppM authored
      * add cal qparams
      
      * support offload inference
      
* add collect functions (mod, weight)
      
      * stats kv scales
      
      * update init
      
      * add user guide
      
      * fix hints
      
      * fix comments & support turbomind format
      
      * update user guide
      
      * fix slice kv cache error & support pileval dataset (used in llm-awq)
      
      * fix wrong num heads slice
      
      * update default dataset
      
      * fix conflict
      
      * fix hints
      
      * fix hints
      
      * add gitignore
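The "stats kv scales" step in the commit above boils down to tracking running min/max statistics over a calibration set and then deriving scale/zero-point from those extrema. A minimal sketch under assumed names — an observer per layer/tensor, not the actual lmdeploy calibration code:

```python
# Running min/max statistics over calibration batches, the way one
# would collect per-layer KV-cache quantization parameters. Names and
# shapes are illustrative assumptions, not the actual lmdeploy code.
from typing import List

class MinMaxObserver:
    def __init__(self):
        self.x_min = float("inf")
        self.x_max = float("-inf")

    def update(self, batch: List[float]) -> None:
        """Fold one calibration batch into the running extrema."""
        self.x_min = min(self.x_min, min(batch))
        self.x_max = max(self.x_max, max(batch))

    def qparams(self, n_bits: int = 8):
        """Derive asymmetric scale/zero-point from the observed range."""
        qmax = (1 << n_bits) - 1
        scale = (self.x_max - self.x_min) / qmax
        zero_point = round(-self.x_min / scale)
        return scale, zero_point

obs = MinMaxObserver()
for batch in ([0.1, -0.5, 2.0], [1.5, -1.0, 0.0]):
    obs.update(batch)
print(obs.x_min, obs.x_max)  # extrema accumulated across both batches
```

Because only the extrema are kept, the calibration pass can stream over a dataset (e.g. the pileval set mentioned above) without holding activations in memory.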
    • docs(quantization): add more test (#53) · edb6eb86
      tpoisonooo authored
      * docs(quantization): add more test
      
      * revert(generate.sh): revert ninja
      
      * revert(llama_config.ini): revert empty line
      
      * fix(quantization.md): fix link error
  17. 04 Jul, 2023 2 commits