- 24 Aug, 2023 1 commit
pppppM authored
* fix llama2 70b
* fix qwen quantization
* remove pdb
* add faq
- 22 Aug, 2023 1 commit
AllentDan authored
* add restful api
* refine
* add simple doc
* lint
* add uvicorn requirement
* more args
* add llama2
* docstring
* update doc
* save
* refine
* lint
* better decode
* add v1/embedding
* add GenerateRequest
* add llama2 chat template
* correct profiling
* update documents
* add length judge
* add faq
* update doc and rename req_que to req_queue
* fix md link, use get_logger, fix sequence_end bug
* use another doc link for go to avoid lint error
* add api_client.py
* update doc
* update doc
* update function interface
* update FAQ
* resolve comments
- 21 Aug, 2023 1 commit
AllentDan authored
* pass args like meta_prompt to model
* update chatbot
* update
* rollback
* update llama2 and qwen
* refine
- 18 Aug, 2023 2 commits
- 16 Aug, 2023 2 commits
- 15 Aug, 2023 1 commit
Chen Xin authored
- 14 Aug, 2023 4 commits
Lyu Han authored

Lyu Han authored
* rollback
* rollback chatbot.py
tpoisonooo authored
* feat(quantization): kv cache use asymmetric
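The asymmetric scheme referenced above quantizes against the observed `[min, max]` range with a zero point, rather than a symmetric range around zero, so a skewed KV-cache distribution uses every level. A minimal sketch of the general technique, not the repository's kernels; function names are illustrative:

```python
def asym_quantize(xs, n_bits=8):
    """Asymmetric quantization: map [min, max] onto [0, 2**n_bits - 1].

    The zero point lets a skewed activation range (typical for the KV
    cache) use all quantization levels, unlike a symmetric scheme
    centred on zero.  Assumes max(xs) > min(xs).
    """
    qmax = (1 << n_bits) - 1
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / qmax
    zero_point = round(-lo / scale)
    qs = [max(0, min(qmax, round(x / scale) + zero_point)) for x in xs]
    return qs, scale, zero_point


def asym_dequantize(qs, scale, zero_point):
    """Recover approximate floats from quantized values."""
    return [(q - zero_point) * scale for q in qs]
```

Round-tripping a tensor through the pair keeps every element within one `scale` of its original value, which is the error bound such per-tensor schemes aim for.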
Li Zhang authored
* add w4a16
* fix `deploy.py`
* add doc
* add w4a16 kernels
* fuse w1/w3 & bugfixes
* fix typo
* python
* guard sm75/80 features
* add missing header
* refactor
* qkvo bias
* update cost model
* fix lint
* update `deploy.py`
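W4A16 stores weights as 4-bit integers (activations stay 16-bit), so two weights share one byte. The commit's kernels use a specific pack order aligned with llm-awq; the sketch below only shows the generic nibble-packing idea, with low nibble first as an arbitrary choice:

```python
def pack_int4(values):
    """Pack pairs of 4-bit values (0..15) into bytes, low nibble first."""
    assert len(values) % 2 == 0 and all(0 <= v <= 15 for v in values)
    return bytes(values[i] | (values[i + 1] << 4)
                 for i in range(0, len(values), 2))


def unpack_int4(packed):
    """Invert pack_int4, restoring the original value list."""
    out = []
    for b in packed:
        out.append(b & 0x0F)  # low nibble = first value
        out.append(b >> 4)    # high nibble = second value
    return out
```

Halving weight storage this way is what makes 4-bit inference memory-bound kernels worthwhile; the dequantize-and-multiply happens on the fly inside the GEMM.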
- 11 Aug, 2023 1 commit
pppppM authored
* support kv cache offload
* add dataloader docstring
* complete gitignore
* refactor collect mod fn
* add calibration
* fix lint
* add observers and quantizers
* fix lints
* add global available mixin
* fix lints
* split batch inference
* support smoothquant and awq
* update export kv scales
* fix lints
* fix some bugs
* update weight only usage
* update usage
* auto mapping and support smooth internlm
* trust remote code
* fix num head key error
* fix bias error
* align shape and pack order with llm-awq
* modified according to LZHgrla's comments
* update gitignore
* fix kv qparams export error
* update usage
* decouple calibrate and awq
* update docstrings
* update api name
* update readme
* update readme
* update readme
* update readme
* update kv_qparams and readme
* fix typos
- 07 Aug, 2023 5 commits
WRH authored
* add some dist utils
* add model utils
* add termio and basicstreamer
* typo
* fix world size
* refactor chat and tested llama1
* add internlm adapter and support stopping criteria
* concat with id for internlm
* update docstring
* update and support llama2
* typo
* move docs to docs
* update docstring of session manager
* update docstring
* update docs
* fix accel none in model
* fix and add test for tensor broadcast
* fix session using typing to check type
* add docstrings and comprehensive condition test
* unit test for dist
* fix session
* split unittests of utils
* typo
* update control flow of accel
* move test model
* remove main in unittest
* remove some log
* remove some comments
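A stopping criterion of the kind added here typically checks, after each generated token, whether the sequence now ends with one of the configured stop sequences. A minimal sketch under that assumption (names are illustrative, not this repository's API):

```python
def hit_stop(generated_ids, stop_sequences):
    """Return True when generated_ids ends with any stop sequence.

    generated_ids: list of token ids produced so far.
    stop_sequences: list of token-id lists that terminate generation
    (e.g. the encoded form of an end-of-turn marker).
    """
    return any(
        len(generated_ids) >= len(stop) and generated_ids[-len(stop):] == stop
        for stop in stop_sequences
    )
```

Matching on token ids rather than decoded text avoids re-decoding on every step, at the cost of missing stop strings that tokenize differently in context.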
lvhan028 authored
lvhan028 authored
* add non-stream inference api for chatbot
* update according to reviewer's comments
LZHgrla authored
* add get_small_sharded_hf.py
* fix pre-commit
lvhan028 authored
* change to incremental decoding
* update
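Incremental decoding streams only the text each new token adds, instead of re-sending the whole decoded sequence. The toy below (an illustrative sketch with a made-up vocabulary, not the repository's tokenizer) diffs successive decodes to produce deltas; production tokenizers re-decode only a short tail of recent tokens, since merges can change earlier text:

```python
# Toy vocabulary standing in for a real tokenizer (illustrative only).
VOCAB = {0: "Hel", 1: "lo", 2: ",", 3: " wor", 4: "ld", 5: "!"}


def decode(ids):
    return "".join(VOCAB[i] for i in ids)


class IncrementalDecoder:
    """Emit only the text added by each newly generated token."""

    def __init__(self):
        self._ids = []
        self._text = ""

    def feed(self, token_id):
        self._ids.append(token_id)
        new_text = decode(self._ids)
        delta = new_text[len(self._text):]  # text the new token contributed
        self._text = new_text
        return delta
```

Concatenating the deltas reproduces the full decode, which is the invariant a streaming client relies on.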
- 04 Aug, 2023 1 commit
AllentDan authored
* use local model for webui
* local model for app.py
* lint
* remove print
* add seed
* comments
* fixed session_id
* support turbomind batch inference
* update app.py
* lint and docstring
* move webui to serve/gradio
* update doc
* update doc
* update docstring and remove print conversation
* log
* Update docs/zh_cn/build.md
* Update docs/en/build.md
* use latest gradio
* fix
* replace partial with InterFace
* use host ip instead of cookie

Co-authored-by: Chen Xin <xinchen.tju@gmail.com>
- 03 Aug, 2023 1 commit

lvhan028 authored
- 31 Jul, 2023 1 commit
q.yao authored
* works on internlm and vicuna
* support GQA
* remove comment
* update readme, add logger, default tp=1
* remove log
- 28 Jul, 2023 1 commit
lvhan028 authored
* bump version to v0.0.2
* fix command
* update installation and inference section
- 27 Jul, 2023 1 commit
MaxMatthew authored
- 26 Jul, 2023 1 commit
Chen Xin authored
* defer symlink
* fix lint
- 25 Jul, 2023 2 commits
q.yao authored
Co-authored-by: grimoire <yaoqian@pjlab.org.cn>
lvhan028 authored
- 24 Jul, 2023 1 commit
Li Zhang authored
* decode only forward pass
* fix lint
* batch embedding
- 23 Jul, 2023 1 commit
lvhan028 authored
* refactor model.py and support baichuan-7b
* remove model_name
* remove hard session_len
* export tokenizer.py to target dir
* remove model_name from client
* remove model_name
* update
* correct throughput equation
* fix session.response
* update serving.md
* update readme
* update according to review comments
* update
* update
* update
* update
- 22 Jul, 2023 1 commit
q.yao authored
* add profile throughput benchmark
* add output only throughput
* update req/min
* update benchmark readme
* fix lint

Co-authored-by: grimoire <yaoqian@pjlab.org.cn>
- 21 Jul, 2023 3 commits
MaxMatthew authored
* Fix lmdeploy.serve.turbomind bug
* add __init__.py for turbomind
* add resume function
* fix the assignment for session.response
* Fix code style
Li Zhang authored
* add GQA for llama2
* fix model conversion
* fix lint & remove dev log
* update news
* minor
* fix allocation size
* fix split_dim for w_qkv.bias
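Grouped-query attention (GQA), used by the larger llama2 models, stores fewer K/V heads than query heads; each K/V head is shared by a consecutive group of query heads. A minimal sketch of that sharing (illustrative names, heads represented as opaque objects rather than tensors):

```python
def repeat_kv(kv_heads, n_query_heads):
    """Duplicate each K/V head so a group of query heads can share it.

    With GQA, n_query_heads must be a multiple of the K/V head count;
    each K/V head then serves n_query_heads // len(kv_heads)
    consecutive query heads.
    """
    assert n_query_heads % len(kv_heads) == 0
    group = n_query_heads // len(kv_heads)
    # Repeat each head `group` times, keeping groups contiguous.
    return [head for head in kv_heads for _ in range(group)]
```

The memory win comes from caching only the original `kv_heads`; the expansion can be done lazily (or folded into the attention kernel) rather than materialized.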
Kevin Wang authored
* [Fix] fix issue 127
* optimize to avoid interface changes
* without deepspeed, launching with python requires manually loading the model onto the GPU
* rollback the changes about max_out_tokens and delete the torch > 2.0 if statement
* support kernel injection with customized deepspeed
* spelling error
* Update chat.py

Co-authored-by: wangruohui <12756472+wangruohui@users.noreply.github.com>
- 20 Jul, 2023 3 commits

q.yao authored
* add llama2 template
* update readme and fix lint
* update readme
* add bos
* add bos
* remove bos
* Update model.py

Co-authored-by: grimoire <yaoqian@pjlab.org.cn>
WRH authored
humu789 authored
* fix get_dataset error
* fix lint
* add datasets to requirements.txt
* update some msci
- 19 Jul, 2023 2 commits
- 18 Jul, 2023 3 commits
AllentDan authored
* update requirements
* update transformers version
* lint
* comments
* lint
* update requirements
* remove setup_requires

Co-authored-by: dongchunyu <dongchunyu@pjlab.org.cn>
Kevin Wang authored
q.yao authored
* wip
* profile disable tp
* fix profile
* lint
* fix dlpack
* remove comment
* add tp flag
* add session len check
* add eos
* remove tp and session len inputs
* wrap tokenizer
* multithread load weight
* update profile
* refactor tokenizer
* remove pre/post process
* remove mpi4py requirement
* remove
* remove bind
* remove mpi requirement
* check backend_tokenizer