- 23 Jul, 2023 1 commit
lvhan028 authored
* refactor model.py and support baichuan-7b
* remove model_name
* remove hard-coded session_len
* export tokenizer.py to the target dir
* remove model_name from the client
* remove model_name
* update
* correct the throughput equation
* fix session.response
* update serving.md
* update readme
* update according to review comments
* update
* update
* update
* update
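The "correct the throughput equation" item refers to the benchmarking code; the exact change is not shown in this log, but the quantity involved is the usual one: generated tokens divided by elapsed wall time. A minimal sketch under that assumption (the `generate` callable is a hypothetical stand-in for the real client and returns a token count):

```python
import time

def measure_throughput(generate, prompts):
    """Throughput (tokens/s) = total generated tokens / elapsed wall time."""
    start = time.perf_counter()
    total_tokens = sum(generate(p) for p in prompts)  # each call returns a token count
    elapsed = time.perf_counter() - start
    return total_tokens / elapsed
```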
- 21 Jul, 2023 1 commit
Li Zhang authored
* add GQA for llama2
* fix model conversion
* fix lint & remove dev log
* update news
* minor
* fix allocation size
* fix split_dim for w_qkv.bias
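"add GQA for llama2" refers to grouped-query attention, where several query heads share a single key/value head; the actual support presumably lives in the engine's kernels and weight-conversion code. The sketch below is not the project's implementation, only the common PyTorch trick of repeating KV heads to match the query head count:

```python
import torch

def repeat_kv(kv: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Expand K/V of shape [batch, num_kv_heads, seq_len, head_dim] so each
    KV head serves n_rep query heads (num_q_heads = num_kv_heads * n_rep)."""
    if n_rep == 1:
        return kv
    b, kv_heads, seq, dim = kv.shape
    kv = kv[:, :, None, :, :].expand(b, kv_heads, n_rep, seq, dim)
    return kv.reshape(b, kv_heads * n_rep, seq, dim)
```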
- 20 Jul, 2023 1 commit
q.yao authored
* add llama2 template
* update readme and fix lint
* update readme
* add bos
* add bos
* remove bos
* Update model.py

Co-authored-by: grimoire <yaoqian@pjlab.org.cn>
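"add llama2 template" and the add/remove BOS back-and-forth concern the Llama-2 chat prompt format. The project's actual template lives in model.py and may differ in detail; the sketch below only shows the commonly documented format, assuming the tokenizer prepends the BOS token `<s>` itself:

```python
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_llama2_prompt(system: str, user: str) -> str:
    # BOS (<s>) is left to the tokenizer rather than baked into the string.
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"

print(build_llama2_prompt("You are a helpful assistant.", "Hello!"))
```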
- 19 Jul, 2023 1 commit
q.yao authored
* remove copy
* repetition_penalty=1
* add repetition_penalty to chat args
* update readme
* update readme
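"add repetition_penalty to chat args" exposes the standard repetition-penalty sampling knob, with 1.0 meaning no penalty. The engine applies it during sampling; the sketch below is not its kernel, only the conventional CTRL-style logit adjustment for reference:

```python
import torch

def apply_repetition_penalty(logits: torch.Tensor, prev_ids: torch.Tensor,
                             penalty: float = 1.0) -> torch.Tensor:
    """Penalize tokens that already appeared; penalty == 1.0 is a no-op."""
    if penalty == 1.0:
        return logits
    scores = logits.gather(-1, prev_ids)
    # Divide positive scores and multiply negative ones so both become
    # less likely (the convention introduced by CTRL).
    scores = torch.where(scores > 0, scores / penalty, scores * penalty)
    return logits.scatter(-1, prev_ids, scores)
```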
- 18 Jul, 2023 1 commit
AllentDan authored
* update requirements
* update transformers version
* lint
* comments
* lint
* update requirements
* remove setup_requires

Co-authored-by: dongchunyu <dongchunyu@pjlab.org.cn>
- 17 Jul, 2023 3 commits
vansin authored
* docs: fix doc
* fix: fix lint
Jaylin Lee authored
* [bugfix] Fix some bugs in the 'serving' docs
* [bugfix] Fix some bugs in the 'serving' docs
del-zhenwu authored
* Update README.md: use internlm-chat-7b
* Update README_zh-CN.md: use internlm-chat-7b
- 14 Jul, 2023 1 commit
vansin authored
* Doc: update discord and wechat link
* Doc: update discord and wechat link
* [Doc] add discord and wechat link
* [Doc] add discord and wechat link
* [Doc] add discord and wechat link
* [Doc] add discord and wechat link
- 11 Jul, 2023 2 commits
tpoisonooo authored
* docs(serving.md): typo
* docs(README): quantization
WRH authored
* previous merged
* add Chinese
* support torch<2
* add a docstring
* fix typo
* rename torch submodule
* rename to pytorch
* rename in readme
- 06 Jul, 2023 6 commits
pppppM authored
* update benchmark image
* update image url
WRH authored
* draft torch client
* deal with tokenizer spaces
* support tensor parallel
* fix
* fix
* move folder
* move instruction to readme
* move to torch/
* rename client to chat
* very bad response
* stash
* rename streamer
* support internlm
* change default args
* remove test
* improve instructions
* remove module docstring
* decrease header level of torch model
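The "draft torch client ... rename client to chat ... rename streamer" items describe a plain-PyTorch chat path alongside the turbomind engine. The sketch below is not the project's code; it is a minimal stand-in using the Hugging Face transformers API (the model id is a placeholder) with the same ingredients: load a causal LM, stream tokens, and chat from the terminal.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "internlm/internlm-chat-7b"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto",
    trust_remote_code=True)

streamer = TextStreamer(tok, skip_prompt=True)  # print tokens as they arrive
prompt = input("prompt> ")
inputs = tok(prompt, return_tensors="pt").to(model.device)
model.generate(**inputs, max_new_tokens=128, streamer=streamer)
```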
tpoisonooo authored
* docs(README): fix script
pppppM authored
tpoisonooo authored
pppppM authored
- 05 Jul, 2023 5 commits
lvhan028 authored
* add performance
* use png
* update
* update
* update
* update
* update
AllentDan authored
* add demo gif
* add demo gif
tpoisonooo authored
tpoisonooo authored
pppppM authored
* add cal qparams
* support offload inference
* add collect functions (mod, weight)
* stats kv scales
* update init
* add user guide
* fix hints
* fix comments & support turbomind format
* update user guide
* fix slice kv cache error & support pileval dataset (used in llm-awq)
* fix wrong num heads slice
* update default dataset
* fix conflict
* fix hints
* fix hints
* add gitignore
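"add cal qparams" and "stats kv scales" describe calibrating quantization parameters from activation statistics gathered on a calibration set (pileval is mentioned above). The project's own collection hooks and export format are not shown in this log; the sketch below only illustrates the common recipe of tracking the running |max| per layer and deriving a symmetric int8 scale from it.

```python
import torch

class KVStatObserver:
    """Track the running |max| of K/V activations per layer and derive a
    symmetric int8 scale (|max| / 127) once calibration is done."""
    def __init__(self):
        self.absmax = {}

    def update(self, layer: int, k: torch.Tensor, v: torch.Tensor) -> None:
        cur = torch.maximum(k.abs().max(), v.abs().max()).item()
        self.absmax[layer] = max(self.absmax.get(layer, 0.0), cur)

    def scales(self) -> dict:
        return {layer: m / 127.0 for layer, m in self.absmax.items()}
```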
- 04 Jul, 2023 2 commits
tpoisonooo authored
tpoisonooo authored
* docs(README): update description
* docs(project): add quantization test results
* docs(README): reorder
* docs(quantization): add more description
* docs(README): remove openmmlab badge
* docs(README): scale up image
* docs(dir): add zh_cn subdir
- 03 Jul, 2023 1 commit
vansin authored
* [Doc] add persistent batch inference
* update
* update
* Update README.md
* Update README_zh-CN.md
- 01 Jul, 2023 2 commits
- 30 Jun, 2023 2 commits
- 29 Jun, 2023 1 commit
AllentDan authored
* add webui
* update readme
* resolve comments
* readme
- 28 Jun, 2023 1 commit
tpoisonooo authored
* feat(src): add int8 and compile passed
* feat(kernels): fix
* feat(llama): update kernel
* feat(src): add debug
* fix(kernel): k_cache use int8_t pointer
* style(llama): clean code
* feat(deploy.py): revert to enable fmha
* style(LlamaV2): clean code
* feat(deploy.py): add default quant policy
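"fix(kernel): k_cache use int8_t pointer" and "add default quant policy" concern storing the KV cache as int8 inside the C++/CUDA engine. The real work happens in CUDA kernels on int8_t buffers; the Python sketch below only illustrates the quantize/dequantize round trip such a cache implies, with the scale assumed to come from calibration as in the observer above.

```python
import torch

def quant_kv_int8(kv: torch.Tensor, scale: float) -> torch.Tensor:
    """Store K/V as int8: halves memory versus fp16 at some accuracy cost."""
    return torch.clamp(torch.round(kv / scale), -128, 127).to(torch.int8)

def dequant_kv_int8(kv_q: torch.Tensor, scale: float,
                    dtype: torch.dtype = torch.float16) -> torch.Tensor:
    """Recover an approximate floating-point tensor before attention reads it."""
    return kv_q.to(dtype) * scale
```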
- 25 Jun, 2023 1 commit
tpoisonooo authored
- 21 Jun, 2023 1 commit
lvhan028 authored
- 20 Jun, 2023 1 commit
lvhan028 authored
* add logo
* update readme
- 18 Jun, 2023 1 commit
lvhan028 authored