1. 29 Aug, 2024 1 commit
  2. 19 Aug, 2024 2 commits
    • Wire up ccache and pigz in the docker based build · c7bcb003
      Daniel Hiltgen authored
      This should help speed things up a little.
    • Refactor linux packaging · 74d45f01
      Daniel Hiltgen authored
      This adjusts Linux to follow a model similar to Windows, with a discrete archive
      (zip/tgz) to carry the primary executable and dependent libraries. Runners are
      still carried as payloads inside the main binary.

      Darwin retains the payload model, where the Go binary is fully self-contained.
  3. 06 Jul, 2024 1 commit
  4. 05 Jul, 2024 1 commit
  5. 25 Apr, 2024 1 commit
  6. 01 Apr, 2024 1 commit
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory-leak and stability defects by allowing
      us to isolate llama.cpp in a separate process, shut it down when idle, and
      gracefully restart it if it has problems. This also serves as a first step toward
      running multiple copies to support multiple models concurrently.
  7. 25 Mar, 2024 1 commit
  8. 12 Mar, 2024 1 commit
  9. 07 Mar, 2024 1 commit
  10. 29 Feb, 2024 1 commit
  11. 02 Feb, 2024 1 commit
  12. 25 Jan, 2024 1 commit
  13. 20 Jan, 2024 2 commits
  14. 19 Jan, 2024 1 commit
  15. 17 Jan, 2024 1 commit
  16. 13 Jan, 2024 2 commits
  17. 11 Jan, 2024 1 commit
    • Build multiple CPU variants and pick the best · d88c527b
      Daniel Hiltgen authored
      This reduces the built-in Linux version to use no vector extensions,
      which enables the resulting builds to run under Rosetta on macOS in
      Docker. Then, at runtime, it checks the actual CPU vector
      extensions and loads the best CPU library available.
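
The runtime selection this commit describes — probe the CPU's vector extensions and load the most capable prebuilt variant — might look like the sketch below. The function name `pickVariant` and the variant labels are illustrative assumptions, not Ollama's actual identifiers; the real code would feed in detected CPU flags and then dynamically load the matching library.

```go
package main

import "fmt"

// pickVariant chooses the most capable prebuilt CPU library given the
// detected vector-extension flags, falling back to the plain build with
// no vector extensions (the one that also runs under Rosetta).
func pickVariant(cpuFlags map[string]bool) string {
	switch {
	case cpuFlags["avx2"]: // most capable variant first
		return "cpu_avx2"
	case cpuFlags["avx"]:
		return "cpu_avx"
	default:
		return "cpu" // no vector extensions; safe everywhere
	}
}

func main() {
	// e.g. an older x86-64 machine with AVX but not AVX2
	flags := map[string]bool{"avx": true}
	fmt.Println(pickVariant(flags))
}
```

Ordering the checks from most to least capable means the fallback build only loads when nothing better is supported, which matches the commit's "loads the best CPU library available" behavior.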
  18. 05 Jan, 2024 1 commit
  19. 04 Jan, 2024 3 commits
  20. 02 Jan, 2024 3 commits
  21. 22 Dec, 2023 2 commits
  22. 20 Dec, 2023 1 commit
    • Revamp the dynamic library shim · 7555ea44
      Daniel Hiltgen authored
      This switches the default llama.cpp build to be CPU based, and builds the GPU
      variants as dynamically loaded libraries which we can select at runtime.

      This also bumps the ROCm library to version 6, given that 5.7 builds don't work
      on the latest ROCm library that just shipped.
  23. 19 Dec, 2023 3 commits