1. 29 Feb, 2024 1 commit
  2. 31 Jan, 2024 1 commit
  3. 30 Jan, 2024 1 commit
  4. 28 Jan, 2024 1 commit
  5. 23 Jan, 2024 1 commit
  6. 18 Jan, 2024 1 commit
  7. 12 Jan, 2024 1 commit
  8. 05 Jan, 2024 1 commit
  9. 03 Jan, 2024 1 commit
  10. 26 Dec, 2023 1 commit
  11. 14 Dec, 2023 1 commit
  12. 03 Dec, 2023 1 commit
  13. 16 Nov, 2023 1 commit
    • TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622) · 7076fa1c
      Zhuohan Li authored
      
      Refactor the tensor-parallelism, quantization, and weight-loading code.
      
      Summary of the new features enabled by this PR:
      - **All models** can now be quantized with AWQ and SqueezeLLM, with [GPTQ coming soon](https://github.com/vllm-project/vllm/pull/1580) (see the usage sketch after this list).
      - The model-loading code is now much simpler.
      - Tensor parallelism is now supported for all MQA/GQA models, even when the number of key/value heads is smaller than the tensor-parallel size (see the partitioning sketch after this list).
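
      For context, a minimal usage sketch of loading a quantized checkpoint through vLLM's `LLM` entrypoint. The model name below is a placeholder, and the combination of arguments assumes the post-#1622 behavior where any supported model can be quantized; exact flags may differ across vLLM versions.

      ```python
      from vllm import LLM, SamplingParams

      # Load an AWQ-quantized checkpoint. "TheBloke/Llama-2-7B-AWQ" is a
      # placeholder model name; quantization="squeezellm" would work the same way.
      llm = LLM(
          model="TheBloke/Llama-2-7B-AWQ",
          quantization="awq",
          tensor_parallel_size=2,
      )

      outputs = llm.generate(
          ["What is tensor parallelism?"],
          SamplingParams(temperature=0.0, max_tokens=64),
      )
      print(outputs[0].outputs[0].text)
      ```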
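
      And a rough sketch of the key/value-head partitioning idea behind the last bullet: when there are fewer KV heads than tensor-parallel ranks, each head is replicated so that every rank still owns one. The helper below is illustrative only, not vLLM's actual implementation.

      ```python
      def kv_heads_for_rank(total_kv_heads: int, tp_size: int, rank: int) -> list[int]:
          """Return the KV-head indices held by one tensor-parallel rank.

          Sketch: with at least as many KV heads as ranks, heads are sharded
          evenly; with fewer KV heads than ranks, each head is replicated
          across tp_size // total_kv_heads consecutive ranks.
          """
          if total_kv_heads >= tp_size:
              assert total_kv_heads % tp_size == 0
              per_rank = total_kv_heads // tp_size
              return list(range(rank * per_rank, (rank + 1) * per_rank))
          # Fewer KV heads than ranks: replicate each head over a group of ranks.
          assert tp_size % total_kv_heads == 0
          replicas = tp_size // total_kv_heads
          return [rank // replicas]


      # Example: 8 ranks, 2 KV heads -> ranks 0-3 hold head 0, ranks 4-7 hold head 1.
      print([kv_heads_for_rank(2, 8, r) for r in range(8)])
      ```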
  14. 11 Nov, 2023 1 commit
  15. 01 Nov, 2023 1 commit
  16. 03 Oct, 2023 1 commit
  17. 18 Sep, 2023 1 commit
  18. 17 Sep, 2023 1 commit
  19. 15 Sep, 2023 1 commit
  20. 12 Sep, 2023 1 commit
  21. 08 Sep, 2023 1 commit
  22. 07 Sep, 2023 1 commit
  23. 06 Sep, 2023 1 commit
  24. 05 Sep, 2023 2 commits
  25. 04 Sep, 2023 1 commit
  26. 20 Jul, 2023 1 commit
  27. 03 Jul, 2023 3 commits
  28. 22 Jun, 2023 1 commit
  29. 17 Jun, 2023 2 commits
  30. 16 Jun, 2023 1 commit
  31. 15 Jun, 2023 1 commit
  32. 07 Jun, 2023 1 commit
  33. 05 Jun, 2023 1 commit
  34. 24 May, 2023 1 commit
  35. 22 May, 2023 1 commit
  36. 21 May, 2023 1 commit