"custom_nodes/comfyui_controlnet_aux/node_wrappers/dwpose.py" did not exist on "57b0ad8e820e370e608810364d80d8212d2407e9"
16 Nov, 2023 (1 commit)
Zhuohan Li authored
TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)

Refactor the tensor-parallelism, quantization, and weight-loading code. Summary of the new features enabled by this PR:

- **All models** can be quantized with AWQ and SqueezeLLM, and [soon GPTQ](https://github.com/vllm-project/vllm/pull/1580).
- The model-loading code became much simpler.
- Model parallelism is supported for all MQA/GQA models, even when the number of key/value heads is smaller than the tensor-parallel size.
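The last bullet describes the case where a GQA model has fewer key/value heads than tensor-parallel ranks, so each KV head must be replicated across several ranks instead of being split. A minimal sketch of that sizing logic (hypothetical helper names, not vLLM's actual code) might look like:

```python
def kv_heads_per_rank(total_kv_heads: int, tp_size: int) -> int:
    """KV heads held by each tensor-parallel rank (hypothetical sketch)."""
    if total_kv_heads >= tp_size:
        # Enough heads to partition: each rank gets a disjoint slice.
        assert total_kv_heads % tp_size == 0, "heads must divide evenly"
        return total_kv_heads // tp_size
    # Fewer heads than ranks: every rank holds one (replicated) head.
    assert tp_size % total_kv_heads == 0, "tp_size must divide evenly"
    return 1

def kv_head_replicas(total_kv_heads: int, tp_size: int) -> int:
    """How many ranks share a copy of each KV head."""
    return max(1, tp_size // total_kv_heads)

# Example: a GQA model with 8 KV heads.
print(kv_heads_per_rank(8, 4))   # tp_size=4: each rank owns 2 heads
print(kv_head_replicas(8, 16))   # tp_size=16: each head copied to 2 ranks
```

With replication, attention still computes correctly because the query heads are partitioned as usual and each rank pairs its query slice with a local copy of the matching KV head.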