"vllm/model_executor/models/yi.py" did not exist on "c9d5b6d4a8b3f51ff6c9eee7eb52bb5149d89b6a"
  1. 03 Jan, 2024 1 commit
  2. 17 Dec, 2023 1 commit
  3. 15 Dec, 2023 1 commit
  4. 30 Nov, 2023 1 commit
  5. 28 Nov, 2023 1 commit
  6. 24 Nov, 2023 1 commit
  7. 21 Nov, 2023 1 commit
  8. 20 Nov, 2023 1 commit
  9. 19 Nov, 2023 1 commit
  10. 16 Nov, 2023 1 commit
    • Zhuohan Li authored · 7076fa1c
      TP/quantization/weight loading refactor part 2 - Refactor quantized linear logic and extend quantization support to all models (#1622)
      
      Refactor the tensor parallelism, quantization, and weight-loading codes.
      
      Summary of the new features enabled by this PR:
      - **All models** can now be quantized with AWQ and SqueezeLLM, and [soon GPTQ](https://github.com/vllm-project/vllm/pull/1580).
      - The model loading code became much simpler.
      - Model parallelism is supported for all MQA/GQA models, even when the number of key/value heads is smaller than the tensor parallel size.
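The last feature in the commit message can be sketched with a small calculation. This is a hypothetical illustration, not vLLM's actual code: when a GQA/MQA model has fewer key/value heads than tensor-parallel ranks, each KV head must be replicated so that every rank still owns at least one; otherwise the heads are sharded evenly.

```python
# Hypothetical sketch (not vLLM's implementation) of how key/value heads
# can be distributed across tensor-parallel ranks.
def kv_heads_per_rank(total_kv_heads: int, tp_size: int) -> int:
    """Return how many KV heads each tensor-parallel rank holds."""
    if total_kv_heads >= tp_size:
        # Normal case: shard the KV heads evenly across the ranks.
        assert total_kv_heads % tp_size == 0, "KV heads must divide evenly"
        return total_kv_heads // tp_size
    # Fewer KV heads than ranks (e.g. MQA with a single KV head):
    # replicate each head across tp_size // total_kv_heads ranks,
    # so every rank still holds exactly one copy.
    assert tp_size % total_kv_heads == 0, "tp_size must be a multiple of KV heads"
    return 1

# A GQA model with 8 KV heads on 4 GPUs shards 2 heads per rank;
# an MQA model with 1 KV head on 4 GPUs replicates that head to every rank.
print(kv_heads_per_rank(8, 4))  # → 2
print(kv_heads_per_rank(1, 4))  # → 1
```

Before this refactor, such models could not be run with a tensor parallel size larger than their KV head count; replication removes that restriction.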
  11. 02 Oct, 2023 1 commit
  12. 13 Sep, 2023 1 commit
  13. 07 Sep, 2023 1 commit
  14. 05 Sep, 2023 1 commit
  15. 03 Jul, 2023 1 commit
  16. 17 Jun, 2023 1 commit
  17. 25 May, 2023 1 commit
  18. 24 May, 2023 1 commit
  19. 19 May, 2023 1 commit
  20. 15 May, 2023 3 commits
  21. 09 May, 2023 1 commit
  22. 05 May, 2023 1 commit
  23. 04 May, 2023 1 commit
  24. 03 May, 2023 1 commit
  25. 28 Apr, 2023 1 commit
  26. 09 Apr, 2023 1 commit
  27. 02 Apr, 2023 2 commits
  28. 30 Mar, 2023 2 commits
  29. 21 Mar, 2023 1 commit
  30. 10 Mar, 2023 1 commit
  31. 25 Feb, 2023 1 commit
  32. 23 Feb, 2023 3 commits
  33. 22 Feb, 2023 1 commit
  34. 09 Feb, 2023 1 commit