- 12 Jan, 2024 (2 commits)
  - Gary Hui authored
  - Woosuk Kwon authored
- 03 Jan, 2024 (1 commit)
  - Zhuohan Li authored
- 30 Dec, 2023 (1 commit)
  - Jong-hun Shin authored
- 20 Dec, 2023 (1 commit)
  - Woosuk Kwon authored
- 19 Dec, 2023 (1 commit)
  - avideci authored
- 17 Dec, 2023 (1 commit)
  - Woosuk Kwon authored
    Co-authored-by: Chen Shen <scv119@gmail.com>
    Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
- 16 Dec, 2023 (1 commit)
  - Roy authored
- 15 Dec, 2023 (1 commit)
  - CHU Tianxiang authored
- 14 Dec, 2023 (1 commit)
  - Antoni Baum authored
- 13 Dec, 2023 (1 commit)
  - Woosuk Kwon authored
- 12 Dec, 2023 (2 commits)
  - Megha Agarwal authored
    Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
  - Woosuk Kwon authored
- 11 Dec, 2023 (5 commits)
  - Woosuk Kwon authored
  - Woosuk Kwon authored
  - Woosuk Kwon authored
  - Woosuk Kwon authored
  - Pierre Stock authored
    Co-authored-by: Pierre Stock <p@mistral.ai>
    Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
- 10 Dec, 2023 (2 commits)
  - Woosuk Kwon authored
  - Woosuk Kwon authored
- 09 Dec, 2023 (1 commit)
  - Jun Gao authored
- 08 Dec, 2023 (1 commit)
  - firebook authored
- 07 Dec, 2023 (1 commit)
  - Jie Li authored
    Co-authored-by: lijie8 <lijie8@sensetime.com>
- 01 Dec, 2023 (1 commit)
  - Woosuk Kwon authored
- 30 Nov, 2023 (1 commit)
  - Woosuk Kwon authored
- 29 Nov, 2023 (2 commits)
  - Woosuk Kwon authored
  - Woosuk Kwon authored
- 28 Nov, 2023 (1 commit)
  - Woosuk Kwon authored
- 24 Nov, 2023 (1 commit)
  - Woosuk Kwon authored
- 21 Nov, 2023 (1 commit)
  - Woosuk Kwon authored
- 20 Nov, 2023 (1 commit)
  - Simon Mo authored
- 19 Nov, 2023 (2 commits)
  - ljss authored
  - Woosuk Kwon authored
- 16 Nov, 2023 (3 commits)
  - maximzubkov authored
  - Megha Agarwal authored
  - Zhuohan Li authored
    TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)
    Refactors the tensor parallelism, quantization, and weight-loading code. Summary of the new features enabled by this PR:
    - **All models** can be quantized with AWQ and SqueezeLLM, and [soon GPTQ](https://github.com/vllm-project/vllm/pull/1580).
    - The model loading code became much simpler.
    - Model parallelism is supported for all MQA/GQA models, even when the number of key/value heads is smaller than the tensor parallel size.
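From the user's side, the refactor in #1622 means quantization and tensor parallelism compose uniformly across model types. The sketch below illustrates the resulting usage through vLLM's Python API; it assumes a vLLM build that includes this change, and the checkpoint name is a placeholder for any AWQ-quantized model, not a recommendation from the commit itself.

```python
# Minimal sketch: combining AWQ quantization with tensor parallelism in vLLM.
# Assumes a post-#1622 vLLM install and an AWQ checkpoint (name is illustrative).
from vllm import LLM, SamplingParams

# After the refactor, quantized linear layers and TP weight loading are shared
# across all models, including MQA/GQA models whose key/value head count is
# smaller than tensor_parallel_size.
llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",  # hypothetical AWQ-quantized checkpoint
    quantization="awq",               # or "squeezellm"
    tensor_parallel_size=2,
)

outputs = llm.generate(
    ["What is tensor parallelism?"],
    SamplingParams(temperature=0.8, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```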
- 12 Nov, 2023 (1 commit)
  - lirui authored
- 07 Nov, 2023 (1 commit)
  - GoHomeToMacDonal authored
- 06 Nov, 2023 (1 commit)
  - Roy authored
- 01 Nov, 2023 (1 commit)
  - Woosuk Kwon authored