Zhuohan Li authored commit 7076fa1c

TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)
    
Refactor the tensor-parallelism, quantization, and weight-loading code.
    
    Summary of the new features enabled by this PR:
- **All models** can be quantized with AWQ and SqueezeLLM, and [soon GPTQ](https://github.com/vllm-project/vllm/pull/1580).
- The model-loading code is much simpler.
- Tensor parallelism now works for all MQA/GQA models, even when the number of key/value heads is smaller than the tensor-parallel size.
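The last bullet can be illustrated with a small sketch. This is a hypothetical helper, not vLLM's actual implementation: the idea is that when a GQA/MQA model has fewer key/value heads than tensor-parallel ranks, each KV head is replicated across several ranks so that every rank still holds a full copy of one head.

```python
def kv_head_mapping(num_kv_heads: int, tp_size: int) -> list[int]:
    """Map each tensor-parallel rank to the KV head it serves.

    Illustrative sketch only: when num_kv_heads < tp_size, each
    KV head is replicated tp_size // num_kv_heads times so every
    rank owns exactly one copy of some head.
    """
    assert tp_size % num_kv_heads == 0, "tp_size must be divisible by num_kv_heads"
    replicas_per_head = tp_size // num_kv_heads
    return [rank // replicas_per_head for rank in range(tp_size)]

# Example: a GQA model with 2 KV heads on 8-way tensor parallelism:
# ranks 0-3 replicate head 0, ranks 4-7 replicate head 1.
print(kv_head_mapping(2, 8))  # [0, 0, 0, 0, 1, 1, 1, 1]
```

With this mapping, query heads are still split evenly across all ranks; only the (fewer) KV heads need replication, which is why the restriction on tensor-parallel size could be lifted.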