16 Nov, 2023 (1 commit)
    • Zhuohan Li authored · 7076fa1c
      TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)
      
      Refactors the tensor parallelism, quantization, and weight loading code.
      
      Summary of the new features enabled by this PR:
      - **All models** can now be quantized with AWQ and SqueezeLLM, and [soon GPTQ](https://github.com/vllm-project/vllm/pull/1580).
      - The model loading code becomes much simpler.
      - Model parallelism is supported for all MQA/GQA models, even when the number of key/value heads is smaller than the tensor parallel size.
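
      The last point can be illustrated with a minimal sketch (a hypothetical helper, not vLLM's actual code): when `num_kv_heads < tp_size`, each KV head is replicated across a group of consecutive tensor-parallel ranks so every rank still holds a full key/value projection.

      ```python
      def kv_head_for_rank(num_kv_heads: int, tp_size: int, rank: int) -> int:
          """Map a tensor-parallel rank to the KV head it serves.

          When num_kv_heads < tp_size, each KV head is replicated across
          tp_size // num_kv_heads consecutive ranks. Hypothetical helper
          illustrating the idea; not vLLM's implementation.
          """
          assert tp_size % num_kv_heads == 0, "tp_size must be divisible by num_kv_heads"
          replication = tp_size // num_kv_heads
          return rank // replication

      # e.g. a GQA model with 2 KV heads on 8 TP ranks:
      assignment = [kv_head_for_rank(2, 8, r) for r in range(8)]
      # ranks 0-3 serve head 0, ranks 4-7 serve head 1
      ```

      With replication, attention can still be sharded by query heads across all ranks while each rank reads its local copy of the KV projection, avoiding cross-rank communication in the attention kernel.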