"vscode:/vscode.git/clone" did not exist on "dda3ba756717c2513f4dbacb19ec0d118d667ed7"
  1. 20 Dec, 2023 1 commit
    • Revamp the dynamic library shim · 7555ea44
      Daniel Hiltgen authored
      This switches the default llama.cpp build to be CPU based, and builds the GPU variants
      as dynamically loaded libraries that we can select at runtime (a sketch of this
      selection logic appears at the end of the commit list).

      This also bumps the ROCm library to version 6, since the 5.7 builds don't work
      with the latest ROCm release that just shipped.
  2. 19 Dec, 2023 8 commits
  3. 18 Dec, 2023 3 commits
  4. 13 Dec, 2023 1 commit
  5. 04 Dec, 2023 1 commit
  6. 26 Nov, 2023 2 commits
  7. 24 Nov, 2023 2 commits
  8. 22 Nov, 2023 1 commit
  9. 21 Nov, 2023 1 commit
  10. 20 Nov, 2023 1 commit
  11. 17 Nov, 2023 1 commit
  12. 27 Oct, 2023 1 commit
  13. 24 Oct, 2023 3 commits
  14. 23 Oct, 2023 2 commits
  15. 17 Oct, 2023 1 commit
  16. 06 Oct, 2023 2 commits
  17. 21 Sep, 2023 2 commits
  18. 20 Sep, 2023 6 commits
  19. 18 Sep, 2023 1 commit
    • subprocess improvements (#524) · 66003e1d
      Bruce MacDonald authored
      * subprocess improvements
      
      - increase the start-up timeout
      - when a runner fails to start, fail immediately rather than timing out
      - try runners in order rather than choosing a single runner (sketched after the
        commit list)
      - embed the Metal runner in the metal dir rather than gpu
      - refactor logging and error messages
      
      * Update llama.go
      
      * Update llama.go
      
      * simplify by using glob
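The runner-ordering change described in the subprocess improvements commit can be pictured with a short Go sketch: discover candidate runner binaries with a glob, try them in order, and fail fast when one cannot start instead of waiting for a timeout. The helper names (startRunner, chooseRunner) and the "runners/*/server" layout are assumptions for illustration, not the actual code in llama.go.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"path/filepath"
	"sort"
)

// startRunner launches one runner binary and reports failure immediately
// instead of waiting for a timeout.
func startRunner(path string) (*exec.Cmd, error) {
	cmd := exec.Command(path)
	if err := cmd.Start(); err != nil {
		return nil, fmt.Errorf("runner %s failed to start: %w", path, err)
	}
	return cmd, nil
}

// chooseRunner globs for candidate runner binaries and tries each one in
// order, returning the first that starts successfully.
func chooseRunner(runnersDir string) (*exec.Cmd, error) {
	candidates, err := filepath.Glob(filepath.Join(runnersDir, "*", "server"))
	if err != nil {
		return nil, err
	}
	sort.Strings(candidates) // deterministic order; real code would rank variants
	for _, path := range candidates {
		cmd, err := startRunner(path)
		if err != nil {
			fmt.Println("skipping:", err) // simplified stand-in for the refactored logging
			continue
		}
		return cmd, nil
	}
	return nil, errors.New("no runner could be started")
}

func main() {
	cmd, err := chooseRunner("./runners")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("started runner:", cmd.Path)
}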
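The runtime selection described in the "Revamp the dynamic library shim" commit can be sketched in the same spirit: default to the CPU build of llama.cpp and only pick a GPU variant when its dynamically loaded library is actually usable. The variant names, the file layout, and the loadLibrary stand-in for the cgo/dlopen call are assumptions made for illustration, not the shim's real interface.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// loadLibrary stands in for the dlopen call the shim would make; here it
// only checks that the shared library file exists.
func loadLibrary(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("library %s not available: %w", path, err)
	}
	fmt.Println("would dlopen:", path)
	return nil
}

// selectVariant probes the GPU variants first and falls back to the CPU
// build, which is always shipped and therefore the default.
func selectVariant(libDir string) string {
	for _, variant := range []string{"rocm_v6", "cuda_v12"} { // hypothetical variant names
		lib := filepath.Join(libDir, variant, "libext_server.so") // hypothetical layout
		if err := loadLibrary(lib); err == nil {
			return variant
		}
	}
	return "cpu"
}

func main() {
	fmt.Println("selected llama.cpp variant:", selectVariant("./build/lib"))
}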