- 16 Oct, 2023 1 commit

q.yao authored
* move tokenizer
* remove Tokenizer in init
* update deploy.py

- 13 Oct, 2023 1 commit

Chen Xin authored
* add tp hint for deploy
* fix lint
* assert tp in turbomind
* fix lint

- 09 Oct, 2023 1 commit

Lyu Han authored

- 26 Sep, 2023 1 commit

AllentDan authored
* expose stop words
* support string
* fix
* remove eoa from chatbot
* remove eoa of turbomind
* fix ut
* suffix wheel and fix InternLM no system bug

- 25 Sep, 2023 1 commit

Lyu Han authored
Fix side effect brought by supporting codellama: `sequence_start` is always true when calling `model.get_prompt` (#466)

- 13 Sep, 2023 1 commit

WRH authored

- 11 Sep, 2023 1 commit

Lyu Han authored
* tmp
* add demo for codellama inference
* update
* update
* update
* update codellama.md
* export rope_theta
* update
* update doc
* fix client.py
* define SamplingParam
* rollback 'end'
* rotary_emb_base to rotary_embedding_base
* change to baichuan2-7b

- 08 Sep, 2023 1 commit

WRH authored
* support baichuan2-chat
* update args from generation config
* update deploy.py
* update readme
* tested with tp
* step-1 when last id is eos
* add news

Co-authored-by: chenxin <chenxin@pjlab.org.cn>

- 07 Sep, 2023 1 commit

AllentDan authored

- 01 Sep, 2023 2 commits

AllentDan authored
* add incremental decoding for turbomind
* update TIS
* fix triton post processing
* update doc
* fix typo
* SentencePieceTokenizer incremental decode, add qwen message prompt
* docstring
* update bot

Chen Xin authored
* pack llama_gemm
* update CMakeLists.txt
* remove candidate
* update MANIFEST.in

- 22 Aug, 2023 1 commit

AllentDan authored
* add restful api
* refine
* add simple doc
* lint
* add uvicorn requirement
* more args
* add llama2
* docstring
* update doc
* save
* refine
* lint
* better decode
* add v1/embedding
* add GenerateRequest
* add llama2 chat template
* correct profiling
* update documents
* add length judge
* add faq
* update doc and rename req_que to req_queue
* fix md link, use get_logger, fix sequence_end bug
* use another doc link for go to avoid lint error
* add api_client.py
* update doc
* update doc
* update function interface
* update FAQ
* resolve comments

- 18 Aug, 2023 1 commit

Li Zhang authored
* qwen support
* dynamic ntk & logn attn
* fix ntk & add chat template
* fix ntk scaling & stop words
* fix lint
* add tiktoken to requirements.txt
* fix tokenizer, set model format automatically
* update model.py
* update readme
* fix lint

- 07 Aug, 2023 1 commit

lvhan028 authored
* add non-stream inference api for chatbot
* update according to reviewer's comments

- 03 Aug, 2023 1 commit

lvhan028 authored

- 31 Jul, 2023 1 commit

q.yao authored
* works on internlm and vicuna
* support GQA
* remove comment
* update readme, add logger, default tp=1
* remove log

- 24 Jul, 2023 1 commit

Li Zhang authored
* decode only forward pass
* fix lint
* batch embedding

- 23 Jul, 2023 1 commit

lvhan028 authored
* refactor model.py and support baichuan-7b
* remove model_name
* remove hard session_len
* export tokenizer.py to target dir
* remove model_name from client
* remove model_name
* update
* correct throughput equation
* fix session.response
* update serving.md
* update readme
* update according to review comments
* update
* update
* update
* update

- 22 Jul, 2023 1 commit

q.yao authored
* add profile throughput benchmark
* add output only throughput
* update req/min
* update benchmark readme
* fix lint

Co-authored-by: grimoire <yaoqian@pjlab.org.cn>

- 19 Jul, 2023 1 commit

q.yao authored
* remove copy
* repetition_penalty=1
* add repetition_penalty to chat args
* update readme
* update readme

- 18 Jul, 2023 1 commit

q.yao authored
* wip
* profile disable tp
* fix profile
* lint
* fix dlpack
* remove comment
* add tp flag
* add session len check
* add eos
* remove tp and session len inputs
* wrap tokenizer
* multithread load weight
* update profile
* refactor tokenizer
* remove pre/post process
* remove mpi4py requirement
* remove
* remove bind
* remove mpi requirement
* check backend_tokenizer

- 12 Jul, 2023 1 commit

lvhan028 authored
* add docstring
* update
* update
* fix according to review results

- 06 Jul, 2023 2 commits

q.yao authored
* streaming-output
* fix end
* fix profile
* support chinese streaming
* lint
* update chat
* lint
* fix benchmark

Co-authored-by: grimoire <yaoqian@pjlab.org.cn>

tpoisonooo authored

- 05 Jul, 2023 4 commits

lvhan028 authored
* update internlm model
* update
* update
* update
* update
* update temperature, topk and top_p
* update
* update
* loosen log level

q.yao authored

pppppM authored
* add cal qparams
* support offload inference
* add collect functions (mod, weight)
* stats kv scales
* update init
* add user guide
* fix hints
* fix comments & support turbomind format
* update user guide
* fix slice kv cache error & support pileval dataset (used in llm-awq)
* fix wrong num heads slice
* update default dataset
* fix conflict
* fix hints
* fix hints
* add gitignore

q.yao authored
* wip
* wip
* example finish
* fix include and namespace
* wtf
* install lib
* batchize
* update cmake install
* multithread
* fix comment
* fix
* add mmengine
* bind llamamodel

Co-authored-by: grimoire <yaoqian@pjlab.org.cn>