- 24 Oct, 2023 1 commit

Chen Xin authored

* fix crash
* update profile_generation.py
* format
* use self.bos_id
* remove sys_instruct
- 18 Oct, 2023 1 commit

AllentDan authored
- 16 Oct, 2023 1 commit

q.yao authored

* move tokenizer
* remove Tokenizer in init
* update deploy.py
- 25 Sep, 2023 1 commit

Lyu Han authored

Fix side effect brought by supporting codellama: `sequence_start` is always true when calling `model.get_prompt` (#466)
- 11 Sep, 2023 1 commit

Lyu Han authored

* tmp
* add demo for codellama inference
* update
* update
* update
* update codellama.md
* export rope_theta
* update
* update doc
* fix client.py
* define SamplingParam
* rollback 'end'
* rotary_emb_base to rotary_embedding_base
* change to baichuan2-7b
- 07 Sep, 2023 1 commit

AllentDan authored
- 01 Sep, 2023 1 commit

AllentDan authored

* add incremental decoding for turbomind
* update TIS
* fix triton post processing
* update doc
* fix typo
* SentencePieceTokenizer incremental decode, add qwen message prompt
* docstring
* update bot
- 07 Aug, 2023 1 commit

lvhan028 authored

* add non-stream inference api for chatbot
* update according to reviewer's comments
- 31 Jul, 2023 1 commit

q.yao authored

* works on internlm and vicuna
* support GQA
* remove comment
* update readme, add logger, default tp=1
* remove log
- 23 Jul, 2023 1 commit

lvhan028 authored

* refactor model.py and support baichuan-7b
* remove model_name
* remove hard session_len
* export tokenizer.py to target dir
* remove model_name from client
* remove model_name
* update
* correct throughput equation
* fix session.response
* update serving.md
* update readme
* update according to review comments
* update
* update
* update
* update
- 19 Jul, 2023 1 commit

q.yao authored

* remove copy
* repetition_penalty=1
* add repetition_penalty to chat args
* update readme
* update readme
- 18 Jul, 2023 1 commit

q.yao authored

* wip
* profile disable tp
* fix profile
* lint
* fix dlpack
* remove comment
* add tp flag
* add session len check
* add eos
* remove tp and session len inputs
* warp tokenizer
* multithread load weight
* update profile
* refactor tokenizer
* remove pre/post process
* remove mpi4py requirement
* remove
* remove bind
* remove mpi requirement
* check backend_tokenizer
- 12 Jul, 2023 1 commit

lvhan028 authored

* add docstring
* update
* update
* fix according to review results
- 06 Jul, 2023 2 commits

q.yao authored

* streaming-output
* fix end
* fix profile
* support chinese streaming
* lint
* update chat
* lint
* fix benchmark

Co-authored-by: grimoire <yaoqian@pjlab.org.cn>

tpoisonooo authored
- 05 Jul, 2023 4 commits

lvhan028 authored

* update internlm model
* update
* update
* update
* update
* update temperature, topk and top_p
* update
* update
* loosen log level

q.yao authored

pppppM authored

* add cal qparams
* support offload inference
* add collect functions (mod, weight)
* stats kv scales
* update init
* add user guide
* fix hints
* fix comments & support turbomind format
* update user guide
* fix slice kv cache error & support pileval dataset (used in llm-awq)
* fix wrong num heads slice
* update default dataset
* fix conflict
* fix hints
* fix hints
* add gitignore

q.yao authored

* wip
* wip
* example finish
* fix include and namespace
* wtf
* install lib
* batchize
* update cmake install
* multithread
* fix comment
* fix
* add mmengine
* bind llamamodel

Co-authored-by: grimoire <yaoqian@pjlab.org.cn>