- 15 Jan, 2024 1 commit
  - Chenhui Zhang authored
- 12 Jan, 2024 1 commit
  - 陈序 authored
    * Align top_p and top_k with huggingface
    * remove _get_prompt_and_output_tokens
    * rename _apply_top_p_top_k
    * compare top_p top_k with hf
    * fix test errors
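The commit above aligns vLLM's top_p/top_k sampling with the HuggingFace convention, under which the token whose cumulative probability first crosses `top_p` is still kept. A minimal pure-Python sketch of that filtering convention (hypothetical helper name; not vLLM's actual implementation):

```python
import math

def apply_top_k_top_p(logits, top_k=0, top_p=1.0):
    """Mask logits with top-k, then top-p (nucleus) filtering,
    following the HF convention: the token that crosses the top_p
    cumulative-probability threshold is kept.

    Illustrative sketch only; not vLLM's code.
    """
    out = list(logits)
    if top_k > 0:
        # Keep the top_k largest logits; mask the rest to -inf.
        kth = sorted(out, reverse=True)[min(top_k, len(out)) - 1]
        out = [x if x >= kth else -math.inf for x in out]
    if top_p < 1.0:
        # Sort token indices by logit, descending.
        order = sorted(range(len(out)), key=lambda i: out[i], reverse=True)
        m = max(x for x in out if x != -math.inf)
        exps = [0.0 if out[i] == -math.inf else math.exp(out[i] - m)
                for i in order]
        total = sum(exps)
        cum, keep = 0.0, set()
        for rank, i in enumerate(order):
            keep.add(i)  # the token that crosses top_p is kept
            cum += exps[rank] / total
            if cum > top_p:
                break
        out = [x if i in keep else -math.inf for i, x in enumerate(out)]
    return out
```

For example, with probabilities (0.5, 0.3, 0.15, 0.05) and `top_p=0.9`, the first three tokens survive (cumulative 0.95 crosses the threshold at the third token) and the last is masked.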
- 09 Jan, 2024 1 commit
  - Cade Daniel authored
- 08 Jan, 2024 1 commit
  - Woosuk Kwon authored
- 03 Jan, 2024 2 commits
  - Zhuohan Li authored
  - Roy authored
- 20 Dec, 2023 1 commit
  - Antoni Baum authored
- 17 Dec, 2023 2 commits
  - Antoni Baum authored
  - Woosuk Kwon authored
    Co-authored-by: Chen Shen <scv119@gmail.com>
    Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
- 15 Dec, 2023 1 commit
  - CHU Tianxiang authored
- 12 Dec, 2023 1 commit
  - Megha Agarwal authored
    Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
- 10 Dec, 2023 1 commit
  - wbn authored
    Co-authored-by: wangguoya <wangguoya@baidu.com>
    Co-authored-by: Yang Zhao <zhaoyangstar@foxmail.com>
- 08 Dec, 2023 1 commit
  - TJian authored
    Co-authored-by: Philipp Moritz <pcmoritz@gmail.com>
    Co-authored-by: Amir Balwel <amoooori04@gmail.com>
    Co-authored-by: root <kuanfu.liu@akirakan.com>
    Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
    Co-authored-by: kuanfu <kuanfu.liu@embeddedllm.com>
    Co-authored-by: miloice <17350011+kliuae@users.noreply.github.com>
- 03 Dec, 2023 1 commit
  - Woosuk Kwon authored
- 30 Nov, 2023 3 commits
  - Roy authored
  - Jee Li authored
  - Woosuk Kwon authored
- 29 Nov, 2023 1 commit
  - Woosuk Kwon authored
- 28 Nov, 2023 1 commit
  - Zhuohan Li authored
- 24 Nov, 2023 1 commit
  - Yanming W authored
- 22 Nov, 2023 1 commit
  - ljss authored
- 21 Nov, 2023 3 commits
  - Zhuohan Li authored
  - Woosuk Kwon authored
  - ljss authored
- 20 Nov, 2023 1 commit
  - Simon Mo authored
- 19 Nov, 2023 2 commits
  - ljss authored
  - Woosuk Kwon authored
- 18 Nov, 2023 1 commit
  - Roy authored
- 16 Nov, 2023 1 commit
  - Zhuohan Li authored
    TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)
    Refactor the tensor parallelism, quantization, and weight-loading code. Summary of the new features enabled by this PR:
    - **All models** can be quantized with AWQ and SqueezeLLM, and [soon GPTQ](https://github.com/vllm-project/vllm/pull/1580).
    - Model loading code became much simpler.
    - Support model parallelism for all MQA/GQA models when the number of key/value heads is smaller than the tensor parallel size.
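The MQA/GQA point in the commit message above (serving models whose KV-head count is smaller than the tensor-parallel size) relies on replicating KV heads across ranks so every rank still holds at least one head. A hedged sketch of the head-count arithmetic (hypothetical helper name; not the actual vLLM code):

```python
def kv_heads_per_rank(total_kv_heads: int, tp_size: int) -> tuple:
    """Return (kv_heads_on_each_rank, replication_factor) under tensor
    parallelism. When tp_size exceeds the model's KV-head count, each
    KV head is replicated across tp_size // total_kv_heads ranks so
    every rank still holds exactly one KV head.

    Sketch of the idea only; not vLLM's implementation.
    """
    if total_kv_heads >= tp_size:
        # Normal case: shard KV heads evenly across ranks, no replication.
        assert total_kv_heads % tp_size == 0, "KV heads must divide evenly"
        return total_kv_heads // tp_size, 1
    # GQA/MQA case: fewer KV heads than ranks, so replicate each head.
    assert tp_size % total_kv_heads == 0, "tp_size must be a multiple of KV heads"
    return 1, tp_size // total_kv_heads
```

For instance, an MQA model with a single KV head on 4 GPUs would replicate that head on every rank, while an 8-KV-head model on 2 GPUs shards 4 heads per rank with no replication.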
- 13 Nov, 2023 1 commit
  - Woosuk Kwon authored
- 03 Nov, 2023 2 commits
  - Antoni Baum authored
    Signed-off-by: Antoni Baum <antoni.baum@protonmail.com>
    Co-authored-by: Viktor Ferenczi <viktor@ferenczi.eu>
    Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
  - Noam Gat authored
- 01 Nov, 2023 1 commit
  - Antoni Baum authored
- 30 Oct, 2023 1 commit
  - Antoni Baum authored
- 29 Oct, 2023 1 commit
  - ljss authored
- 22 Oct, 2023 1 commit
  - chooper1 authored
    Co-authored-by: squeeze-ai-lab <squeezeailab.bair@gmail.com>
    Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
- 17 Oct, 2023 1 commit
  - Woosuk Kwon authored
- 16 Oct, 2023 2 commits
  - Zhuohan Li authored
    Co-authored-by: Yunmo Chen <16273544+wanmok@users.noreply.github.com>
  - Woosuk Kwon authored
- 11 Oct, 2023 1 commit
  - yhlskt23 authored