1. 17 Sep, 2024 1 commit
    • make patches git am-able · 7bd7b027
      Michael Yang authored
      Raw diffs can be applied with `git apply` but not with `git am`. Patches
      generated with `git format-patch` are both apply-able and am-able.
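      As a sketch of the distinction in a throwaway repository (repo layout, branch, and file names are invented for illustration): `git format-patch` output carries the mail headers that `git am` needs to recreate the commit, while a raw `git diff` carries content only.

      ```shell
      # Illustrative only: demonstrate am-able vs. apply-only patches in a temp repo.
      set -e
      repo=$(mktemp -d); cd "$repo"
      git init -q
      git -c user.name=A -c user.email=a@example.com commit -q --allow-empty -m base
      git switch -qc feature
      echo change > file.txt
      git add file.txt
      git -c user.name=A -c user.email=a@example.com commit -q -m "add file"
      # Mail-formatted patch: includes From/Date/Subject headers, so it works
      # with both `git apply` and `git am`.
      git format-patch -1 -o patches
      # Raw diff: no mail headers, so only `git apply` accepts it.
      git diff HEAD~1 HEAD > raw.diff
      git switch -q -
      git -c user.name=A -c user.email=a@example.com am -q patches/0001-add-file.patch
      git log -1 --pretty=%s   # the recreated commit keeps its original message
      ```

      Applying with `git am` preserves authorship and the commit message, which is why mail-formatted patches are preferred for carrying commits between trees.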
  2. 23 Aug, 2024 1 commit
  3. 19 Aug, 2024 4 commits
  4. 20 Jul, 2024 1 commit
    • Adjust windows ROCm discovery · 283948c8
      Daniel Hiltgen authored
      The v5 HIP library returns unsupported GPUs, which won't enumerate at
      inference time in the runner, so this change aligns discovery with runtime
      behavior. The gfx906 cards are no longer supported, so we shouldn't compile
      for that GPU type since it won't enumerate at runtime.
  5. 10 Jul, 2024 1 commit
    • Bump ROCm on windows to 6.1.2 · 1f50356e
      Daniel Hiltgen authored
      This also adjusts our algorithm to favor our bundled ROCm.
      I've confirmed VRAM reporting still doesn't work properly, so we
      can't yet enable concurrency by default.
  6. 08 Jul, 2024 1 commit
  7. 06 Jul, 2024 2 commits
  8. 05 Jul, 2024 1 commit
  9. 17 Jun, 2024 4 commits
  10. 15 Jun, 2024 1 commit
  11. 07 Jun, 2024 1 commit
  12. 24 May, 2024 1 commit
  13. 27 Apr, 2024 2 commits
  14. 26 Apr, 2024 5 commits
  15. 23 Apr, 2024 1 commit
    • Move nested payloads to installer and zip file on windows · 058f6cd2
      Daniel Hiltgen authored
      Now that the llm runner is an executable and not just a DLL, more users are
      facing problems with Windows security policies that prevent writing to a
      directory and then executing binaries from that same location.
      This change removes payloads from the main executable on Windows and instead
      packages them in the installer, discovering them relative to the executable's
      location. It also adds a new zip file for people who want to "roll their own"
      installation model.
  16. 21 Apr, 2024 1 commit
  17. 18 Apr, 2024 2 commits
  18. 09 Apr, 2024 2 commits
  19. 07 Apr, 2024 1 commit
  20. 04 Apr, 2024 2 commits
  21. 01 Apr, 2024 1 commit
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory leak and stability defects by allowing
      us to isolate llama.cpp in a separate process, shut it down when idle, and
      gracefully restart it if it has problems. This also serves as a first step
      toward running multiple copies to support multiple models concurrently.
  22. 26 Mar, 2024 1 commit
  23. 15 Mar, 2024 2 commits
  24. 12 Mar, 2024 1 commit