- 01 Mar, 2024 1 commit
Robert Shaw authored
Co-authored-by: Robert Shaw <114415538+rib-2@users.noreply.github.com>
Co-authored-by: alexm <alexm@neuralmagic.com>
- 29 Feb, 2024 1 commit
CHU Tianxiang authored
- 12 Feb, 2024 1 commit
Rex authored
Co-authored-by: Chunan Zeng <chunanzeng@Chunans-Air.attlocal.net>
- 01 Feb, 2024 1 commit
Kunshang Ji authored
Co-authored-by: Jiang Li <jiang1.li@intel.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
- 27 Jan, 2024 1 commit
Casper authored
- 15 Dec, 2023 1 commit
CHU Tianxiang authored
- 08 Dec, 2023 1 commit
TJian authored
Co-authored-by: Philipp Moritz <pcmoritz@gmail.com>
Co-authored-by: Amir Balwel <amoooori04@gmail.com>
Co-authored-by: root <kuanfu.liu@akirakan.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: kuanfu <kuanfu.liu@embeddedllm.com>
Co-authored-by: miloice <17350011+kliuae@users.noreply.github.com>
- 24 Nov, 2023 1 commit
Yanming W authored
- 20 Nov, 2023 1 commit
Simon Mo authored
- 19 Nov, 2023 1 commit
Woosuk Kwon authored
- 16 Nov, 2023 1 commit
Zhuohan Li authored
TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)
Refactors the tensor parallelism, quantization, and weight-loading code. Summary of the new features enabled by this PR:
- **All models** can be quantized with AWQ and SqueezeLLM, and [soon GPTQ](https://github.com/vllm-project/vllm/pull/1580).
- The model loading code became much simpler.
- Model parallelism is supported for all MQA/GQA models, even when the number of key/value heads is smaller than the tensor parallel size.
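The last point of that commit message concerns sharding key/value heads across tensor-parallel ranks: when a GQA/MQA model has fewer KV heads than ranks, each head must be replicated so every rank still holds a full copy. The sketch below illustrates one way this assignment can work; the function names are hypothetical and this is not vLLM's actual implementation.

```python
def num_kv_heads_per_rank(total_kv_heads: int, tp_size: int) -> int:
    """How many KV heads a single tensor-parallel rank holds.

    If there are at least as many KV heads as ranks, the heads are
    partitioned evenly. Otherwise (MQA/GQA with few heads), each head
    is replicated so that every rank owns exactly one copy.
    """
    if total_kv_heads >= tp_size:
        assert total_kv_heads % tp_size == 0, "heads must divide evenly"
        return total_kv_heads // tp_size
    assert tp_size % total_kv_heads == 0, "ranks must divide evenly"
    return 1


def kv_head_index(total_kv_heads: int, tp_size: int, rank: int) -> int:
    """Which original KV head a rank serves in the replication case."""
    if total_kv_heads >= tp_size:
        raise ValueError("no replication needed: heads are partitioned")
    replicas_per_head = tp_size // total_kv_heads
    # Consecutive groups of ranks share the same replicated head.
    return rank // replicas_per_head
```

For example, a model with 2 KV heads on 8 ranks gives each rank one head, with ranks 0-3 serving head 0 and ranks 4-7 serving head 1.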