1. 01 Apr, 2024 1 commit
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory-leak and stability defects by isolating
      llama.cpp in a separate process that we can shut down when idle and gracefully
      restart if it has problems. This also serves as a first step toward running
      multiple copies to support multiple models concurrently.
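      A minimal Go sketch of the subprocess approach this commit describes (not the
      actual ollama implementation): launch llama.cpp as a child process so crashes
      stay isolated, restart it when it dies, and shut it down once an idle timeout
      cancels the context. The binary path and arguments are hypothetical placeholders.

      package main

      import (
          "context"
          "log"
          "os/exec"
          "time"
      )

      // superviseRunner keeps one llama.cpp child process alive until ctx is
      // cancelled (for example by an idle timeout in the caller).
      func superviseRunner(ctx context.Context, binary string, args ...string) {
          for ctx.Err() == nil {
              cmd := exec.CommandContext(ctx, binary, args...)
              if err := cmd.Start(); err != nil {
                  log.Printf("failed to start runner: %v", err)
                  return
              }
              // Wait blocks until the child exits; a crash here only takes down
              // the subprocess, not the server that launched it.
              if err := cmd.Wait(); err != nil && ctx.Err() == nil {
                  log.Printf("runner exited, restarting: %v", err)
                  time.Sleep(time.Second) // brief backoff before a graceful restart
              }
          }
      }

      func main() {
          // Cancelling the context after a fixed "idle" period shuts the runner down.
          ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
          defer cancel()
          superviseRunner(ctx, "./llama-server", "--port", "8081")
      }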
  2. 25 Mar, 2024 1 commit
  3. 12 Mar, 2024 1 commit
  4. 07 Mar, 2024 1 commit
  5. 29 Feb, 2024 1 commit
  6. 02 Feb, 2024 1 commit
  7. 25 Jan, 2024 1 commit
  8. 20 Jan, 2024 2 commits
  9. 19 Jan, 2024 1 commit
  10. 17 Jan, 2024 1 commit
  11. 13 Jan, 2024 2 commits
  12. 11 Jan, 2024 1 commit
    • Build multiple CPU variants and pick the best · d88c527b
      Daniel Hiltgen authored
      This restricts the built-in Linux version to use no vector extensions, which
      enables the resulting builds to run under Rosetta on macOS in Docker. At
      runtime it then checks the actual CPU vector extensions and loads the best
      CPU library available.
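      A hedged Go sketch of the runtime selection this commit describes (not the
      actual ollama code): probe the CPU's vector extensions and pick the most
      capable prebuilt variant, falling back to the plain build that also runs
      under Rosetta. The variant names are hypothetical placeholders.

      package main

      import (
          "fmt"

          "golang.org/x/sys/cpu"
      )

      // bestCPUVariant returns the most capable build this CPU can run, falling
      // back to a build that requires no vector extensions at all.
      func bestCPUVariant() string {
          switch {
          case cpu.X86.HasAVX2:
              return "cpu_avx2"
          case cpu.X86.HasAVX:
              return "cpu_avx"
          default:
              return "cpu" // plain build, no vector extensions required
          }
      }

      func main() {
          fmt.Println("selected llama.cpp variant:", bestCPUVariant())
      }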
  13. 05 Jan, 2024 1 commit
  14. 04 Jan, 2024 3 commits
  15. 02 Jan, 2024 3 commits
  16. 22 Dec, 2023 2 commits
  17. 20 Dec, 2023 1 commit
    • Revamp the dynamic library shim · 7555ea44
      Daniel Hiltgen authored
      This switches the default llama.cpp build to be CPU-based and builds the GPU
      variants as dynamically loaded libraries that we can select at runtime.

      This also bumps the ROCm library to version 6, since builds against 5.7 don't
      work with the latest ROCm release that just shipped.
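      A rough Go sketch of the runtime library selection this commit describes (not
      the actual shim code): prefer a GPU library if one is present on disk,
      otherwise fall back to the default CPU build. The file names are hypothetical
      placeholders, and the real loading would go through dlopen/cgo rather than
      a simple os.Stat check.

      package main

      import (
          "fmt"
          "os"
          "path/filepath"
      )

      // pickRunnerLibrary returns the first dynamic library that exists, in order
      // of preference: CUDA, then ROCm, then the CPU default.
      func pickRunnerLibrary(dir string) string {
          candidates := []string{
              "libext_server_cuda.so", // hypothetical CUDA variant
              "libext_server_rocm.so", // hypothetical ROCm v6 variant
              "libext_server_cpu.so",  // default CPU build
          }
          for _, name := range candidates {
              path := filepath.Join(dir, name)
              if _, err := os.Stat(path); err == nil {
                  return path
              }
          }
          return ""
      }

      func main() {
          fmt.Println("would load:", pickRunnerLibrary("./runners"))
      }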
  18. 19 Dec, 2023 3 commits