16 Nov, 2023 · 1 commit
TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622) · 7076fa1c
Zhuohan Li authored
      
Refactor the tensor-parallelism, quantization, and weight-loading code.
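
The shape of the refactor is easiest to see as a pluggable linear-method abstraction. The sketch below is illustrative only; the names (`LinearMethod`, `Int8LinearMethod`, `PluggableLinear`) and the toy int8 scheme are assumptions, not vLLM's actual classes or kernels. The idea it demonstrates is the one this PR describes: the linear layer delegates both weight creation and the forward pass to a quantization method, so AWQ, SqueezeLLM, and (soon) GPTQ can share one code path across all models.

```python
# Illustrative sketch only: names and the int8 scheme are hypothetical,
# not vLLM's actual implementation.
from abc import ABC, abstractmethod

import torch
import torch.nn.functional as F


class LinearMethod(ABC):
    """Pluggable strategy for how a linear layer stores and applies weights."""

    @abstractmethod
    def create_weights(self, in_features: int,
                       out_features: int) -> dict[str, torch.nn.Parameter]: ...

    @abstractmethod
    def apply(self, params: dict[str, torch.nn.Parameter],
              x: torch.Tensor) -> torch.Tensor: ...


class UnquantizedLinearMethod(LinearMethod):
    def create_weights(self, in_features, out_features):
        w = torch.randn(out_features, in_features) * 0.02
        return {"weight": torch.nn.Parameter(w, requires_grad=False)}

    def apply(self, params, x):
        return F.linear(x, params["weight"])


class Int8LinearMethod(LinearMethod):
    """Toy symmetric per-row int8 weight quantization (for illustration)."""

    def create_weights(self, in_features, out_features):
        w = torch.randn(out_features, in_features) * 0.02
        scale = w.abs().amax(dim=1, keepdim=True) / 127.0
        qweight = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
        return {
            "qweight": torch.nn.Parameter(qweight, requires_grad=False),
            "scale": torch.nn.Parameter(scale, requires_grad=False),
        }

    def apply(self, params, x):
        # Dequantize on the fly; a real kernel would fuse this.
        w = params["qweight"].float() * params["scale"]
        return F.linear(x, w)


class PluggableLinear(torch.nn.Module):
    """A linear layer that delegates storage and compute to a LinearMethod."""

    def __init__(self, in_features, out_features, method: LinearMethod):
        super().__init__()
        self.method = method
        for name, p in method.create_weights(in_features, out_features).items():
            self.register_parameter(name, p)

    def forward(self, x):
        return self.method.apply(dict(self.named_parameters()), x)


if __name__ == "__main__":
    x = torch.randn(2, 16)
    fp = PluggableLinear(16, 8, UnquantizedLinearMethod())
    q = PluggableLinear(16, 8, Int8LinearMethod())
    print(fp(x).shape, q(x).shape)  # torch.Size([2, 8]) torch.Size([2, 8])
```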
      
      Summary of the new features enabled by this PR:
- **All models** can now be quantized with AWQ and SqueezeLLM, and [soon GPTQ](https://github.com/vllm-project/vllm/pull/1580).
- The model-loading code becomes much simpler.
- Model parallelism is supported for all MQA/GQA models, even when the number of key/value heads is smaller than the tensor-parallel size (see the sketch after this list).
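
To make the last point concrete, here is a minimal, self-contained sketch of how key/value heads can be mapped onto tensor-parallel ranks. The helper name `kv_heads_for_rank` is hypothetical, not vLLM's API; it shows the standard technique: partition heads when there are enough of them, and replicate each head across a contiguous group of ranks when there are fewer KV heads than ranks.

```python
# Hypothetical helper for illustration; not vLLM's actual implementation.
def kv_heads_for_rank(total_kv_heads: int, tp_size: int, rank: int) -> list[int]:
    """Return the KV head indices served by a given tensor-parallel rank.

    total_kv_heads >= tp_size: heads are partitioned evenly across ranks.
    total_kv_heads <  tp_size: each head is replicated across
                               tp_size // total_kv_heads consecutive ranks.
    """
    if total_kv_heads >= tp_size:
        # Even partition: each rank owns a contiguous slice of heads.
        assert total_kv_heads % tp_size == 0
        per_rank = total_kv_heads // tp_size
        return list(range(rank * per_rank, (rank + 1) * per_rank))
    else:
        # Replication: groups of ranks share a copy of the same head.
        assert tp_size % total_kv_heads == 0
        ranks_per_head = tp_size // total_kv_heads
        return [rank // ranks_per_head]


if __name__ == "__main__":
    # 2 KV heads (GQA) on 8 GPUs: each head is replicated on 4 ranks.
    for r in range(8):
        print(f"rank {r}: kv heads {kv_heads_for_rank(2, 8, r)}")
```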