- 16 Oct, 2023 1 commit

q.yao authored
* move tokenizer
* remove Tokenizer in init
* update deploy.py

- 18 Sep, 2023 1 commit

AllentDan authored
* better profiler
* wait for releasing mem (see the sketch below)
* remove fire
* remove support for multiple model benchmark
* comments
* output more details
* correct tp

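The "better profiler" / "wait for releasing mem" items suggest the benchmark times a round of generations and then waits for GPU memory to be freed before the next round. Below is a minimal sketch of that pattern, assuming a PyTorch backend and a hypothetical `generate` callable that returns generated token ids; it is not the repository's actual profiler.

```python
# A minimal sketch, assuming a PyTorch backend and a hypothetical `generate`
# callable that returns generated token ids; NOT the repo's profiler.
import gc
import time

import torch


def bench_round(generate, prompts):
    """Time one batch of generations and report output tokens per second."""
    torch.cuda.synchronize()
    start = time.perf_counter()
    total_tokens = sum(len(generate(p)) for p in prompts)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    # "wait for releasing mem": drop references and empty the CUDA cache so
    # the next round starts from a clean allocator state.
    gc.collect()
    torch.cuda.empty_cache()
    return total_tokens / elapsed
```
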
- 14 Aug, 2023 1 commit

Lyu Han authored
* tmp * update * update * update * update * update * remove * update * update

- 31 Jul, 2023 1 commit

q.yao authored
* works on internlm and vicuna
* support GQA (see the sketch below)
* remove comment
* update readme, add logger, default tp=1
* remove log

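"support GQA" refers to grouped-query attention, where several query heads share one key/value head. The snippet below only illustrates the head mapping with made-up shapes; the real support sits inside the inference engine itself.

```python
# Illustrative grouped-query attention head mapping with made-up shapes;
# not taken from the repository's kernels.
import torch


def expand_kv(kv: torch.Tensor, num_q_heads: int) -> torch.Tensor:
    """[batch, num_kv_heads, seq, dim] -> [batch, num_q_heads, seq, dim]."""
    num_kv_heads = kv.shape[1]
    assert num_q_heads % num_kv_heads == 0
    group = num_q_heads // num_kv_heads
    return kv.repeat_interleave(group, dim=1)  # each KV head serves one group


q = torch.randn(1, 32, 16, 128)  # 32 query heads
k = torch.randn(1, 8, 16, 128)   # 8 KV heads, i.e. groups of 4
scores = (q @ expand_kv(k, 32).transpose(-1, -2)) / 128 ** 0.5
```
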
- 23 Jul, 2023 1 commit

lvhan028 authored
* refactor model.py and support baichuan-7b
* remove model_name
* remove hard session_len
* export tokenizer.py to target dir
* remove model_name from client
* remove model_name
* update
* correct throughput equation (see the snippet below)
* fix session.response
* update serving.md
* update readme
* update according to review comments
* update
* update
* update
* update

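On "correct throughput equation": one common form counts only generated tokens over end-to-end wall-clock time. The snippet below is a hedged reading of that item, not necessarily the exact equation the benchmark scripts ended up with.

```python
# One plausible reading of the corrected metric; the benchmark's actual
# equation may differ.
def output_throughput(completion_tokens: int, elapsed_s: float) -> float:
    """Generated tokens per second over the whole run (prompt tokens excluded)."""
    return completion_tokens / elapsed_s
```
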
- 22 Jul, 2023 1 commit

q.yao authored
* add profile throughput benchmark (see the sketch below)
* add output only throughput
* update req/min
* update benchmark readme
* fix lint
---------
Co-authored-by: grimoire <yaoqian@pjlab.org.cn>

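The benchmark added here reports an overall token rate, an output-only rate, and requests per minute. A self-contained sketch of that bookkeeping follows; the class and method names are hypothetical and the real benchmark script is more elaborate.

```python
# Hypothetical bookkeeping; names (`Stats`, `record`, `report`) are not from
# the repository.
import time
from dataclasses import dataclass, field


@dataclass
class Stats:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    requests: int = 0
    start: float = field(default_factory=time.perf_counter)

    def record(self, n_prompt: int, n_completion: int) -> None:
        self.prompt_tokens += n_prompt
        self.completion_tokens += n_completion
        self.requests += 1

    def report(self) -> dict:
        elapsed = time.perf_counter() - self.start
        return {
            'total tokens/s': (self.prompt_tokens + self.completion_tokens) / elapsed,
            'output tokens/s': self.completion_tokens / elapsed,  # "output only throughput"
            'req/min': self.requests / elapsed * 60,
        }
```
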
- 18 Jul, 2023 1 commit

q.yao authored
* wip
* profile disable tp
* fix profile
* lint
* fix dlpack
* remove comment
* add tp flag
* add session len check
* add eos
* remove tp and session len inputs
* wrap tokenizer
* multithread load weight
* update profile
* refactor tokenizer
* remove pre/post process
* remove mpi4py requirement
* remove
* remove bind
* remove mpi requirement
* check backend_tokenizer (see the sketch below)

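"wrap tokenizer" and "check backend_tokenizer" point at a thin wrapper that detects whether a fast (Rust-backed) Hugging Face tokenizer is available via its `backend_tokenizer` attribute. The sketch below is written under that assumption; the class and methods are illustrative, not the repository's actual Tokenizer.

```python
# A sketch assuming "check backend_tokenizer" means detecting a fast
# Hugging Face tokenizer; illustrative only.
from transformers import AutoTokenizer


class Tokenizer:

    def __init__(self, model_path: str):
        self.model = AutoTokenizer.from_pretrained(model_path,
                                                   trust_remote_code=True)
        # Fast tokenizers expose `backend_tokenizer`; slow ones do not.
        self.use_fast = getattr(self.model, 'backend_tokenizer', None) is not None

    def encode(self, text: str):
        return self.model.encode(text)

    def decode(self, token_ids):
        return self.model.decode(token_ids, skip_special_tokens=True)
```
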
- 06 Jul, 2023 1 commit

q.yao authored
* streaming-output
* fix end
* fix profile
* support chinese streaming (see the sketch below)
* lint
* update chat
* lint
* fix benchmark
---------
Co-authored-by: grimoire <yaoqian@pjlab.org.cn>

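"support chinese streaming" is the classic incremental-detokenization problem: a multi-byte UTF-8 character can be split across tokens, so decoding token by token can emit replacement characters. A minimal, hedged sketch of holding output back until the decode is complete; the repository's actual logic may differ.

```python
# Minimal sketch of incremental detokenization that avoids emitting broken
# UTF-8 when a Chinese character spans several tokens; re-decoding the full
# prefix each step keeps the example simple.
def stream_decode(tokenizer, token_ids):
    printed = 0
    for i in range(1, len(token_ids) + 1):
        text = tokenizer.decode(token_ids[:i], skip_special_tokens=True)
        # A trailing replacement character means the last token covers only
        # part of a multi-byte character; wait for more tokens.
        if text.endswith('\ufffd'):
            continue
        yield text[printed:]
        printed = len(text)
```
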
- 05 Jul, 2023 2 commits

pppppM authored
* add cal qparams (see the sketch below)
* support offload inference
* add collect functions (mod, weight)
* stats kv scales
* update init
* add user guide
* fix hints
* fix comments & support turbomind format
* update user guide
* fix slice kv cache error & support pileval dataset (used in llm-awq)
* fix wrong num heads slice
* update default dataset
* fix conflict
* fix hints
* fix hints
* add gitignore

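"add cal qparams" / "stats kv scales" describe collecting activation statistics for quantization, in the spirit of llm-awq. The hook-based sketch below is a generic illustration with assumed names; the repo's calibration pipeline is more involved.

```python
# Generic hook-based activation-statistics collection; function and variable
# names are assumptions, not the repo's calibration code.
import torch
from torch import nn


@torch.no_grad()
def collect_act_scales(model: nn.Module, dataloader, device='cuda'):
    scales = {}

    def make_hook(name):
        def hook(_module, inputs, _output):
            x = inputs[0].detach().abs().flatten(0, -2)  # [tokens, hidden]
            cur = x.max(dim=0).values
            scales[name] = torch.maximum(scales.get(name, cur), cur)
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.Linear)]
    for input_ids in dataloader:  # assumes the loader yields input-id tensors
        model(input_ids.to(device))
    for h in handles:
        h.remove()
    return scales
```
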
q.yao authored
* wip
* wip
* example finish
* fix include and namespace
* wtf
* install lib
* batchize
* update cmake install
* multithread
* fix comment
* fix
* add mmengine
* bind llamamodel
---------
Co-authored-by: grimoire <yaoqian@pjlab.org.cn>

- 03 Jul, 2023 1 commit

lvhan028 authored

- 30 Jun, 2023 2 commits
- 25 Jun, 2023 1 commit

lvhan028 authored
* remove constraints on model name
* remove duplicate model converter
* add profile
* get eos and bos from server
* update stop_words
* update sequence_length when the last generated token is eos_id (see the sketch below)
* fix
* fix
* check-in models
* validate model_name
* make stop_words as property
* debug profiling
* better stats
* fix assistant response
* update profile serving
* update
* update

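A minimal sketch of the EOS / stop-word handling described above ("update sequence_length when the last generated token is eos_id"); the function name and the token-id based stop words are assumptions, not the server's actual code.

```python
# Minimal sketch of the EOS / stop-word handling; names are assumptions.
def finalize_output(token_ids, sequence_length, eos_id, stop_words=()):
    # "update sequence_length when the last generated token is eos_id":
    # drop a trailing EOS so it is neither decoded nor counted.
    if sequence_length > 0 and token_ids[sequence_length - 1] == eos_id:
        sequence_length -= 1
    token_ids = list(token_ids[:sequence_length])
    # Truncate at the first generated stop word, if any.
    for stop in stop_words:
        if stop in token_ids:
            token_ids = token_ids[:token_ids.index(stop)]
            break
    return token_ids, len(token_ids)
```
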