1. 01 Feb, 2024 1 commit
  2. 29 Jan, 2024 1 commit
  3. 25 Jan, 2024 1 commit
  4. 22 Jan, 2024 2 commits
  5. 21 Jan, 2024 1 commit
  6. 18 Jan, 2024 1 commit
  7. 17 Jan, 2024 1 commit
  8. 16 Jan, 2024 1 commit
  9. 13 Jan, 2024 1 commit
  10. 11 Jan, 2024 2 commits
  11. 08 Jan, 2024 1 commit
  12. 07 Jan, 2024 1 commit
  13. 04 Jan, 2024 1 commit
  14. 03 Jan, 2024 1 commit
  15. 02 Jan, 2024 2 commits
    • Switch windows build to fully dynamic · d966b730
      Daniel Hiltgen authored
      Refactor where we store build outputs, and support a fully dynamic
      loading model on Windows so the base executable has no special
      dependencies and thus doesn't require a special PATH.
    • Refactor how we augment llama.cpp · 9a70aecc
      Daniel Hiltgen authored
      This changes the model for llama.cpp inclusion so that we're no longer
      applying a patch, but instead keep the C++ code directly in the ollama
      tree, which should make it easier to refine and update over time.
  16. 27 Dec, 2023 1 commit
  17. 22 Dec, 2023 2 commits
  18. 21 Dec, 2023 1 commit
    • Revive windows build · d9cd3d96
      Daniel Hiltgen authored
      The Windows native setup still needs more work, but this gets it building
      again, and if you set the PATH properly you can run the resulting exe on
      a CUDA system.
  19. 20 Dec, 2023 1 commit
    • Revamp the dynamic library shim · 7555ea44
      Daniel Hiltgen authored
      This switches the default llama.cpp build to be CPU based, and builds the
      GPU variants as dynamically loaded libraries which we can select at
      runtime.

      This also bumps the ROCm library to version 6, given that 5.7 builds
      don't work with the latest ROCm release that just shipped.
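      The runtime-selection idea in this commit (probe the GPU variants first,
      fall back to the CPU build) can be sketched with Python's ctypes as a
      minimal illustration. The library names below are hypothetical stand-ins,
      not ollama's actual artifact names; libm plays the role of the
      always-available CPU fallback so the sketch is runnable.

      ```python
      import ctypes
      import ctypes.util


      def select_library(candidates):
          """Try each candidate library name in order; return the first that loads.

          Mirrors the approach described above: attempt the preferred
          (GPU) variants first, then fall back to a baseline library.
          """
          for name in candidates:
              path = ctypes.util.find_library(name)
              if path is None:
                  continue  # variant not present on this machine
              try:
                  return name, ctypes.CDLL(path)
              except OSError:
                  continue  # present but unloadable (e.g. missing driver deps)
          raise RuntimeError("no usable backend library found")


      # "nosuchgpu" stands in for an absent GPU variant; libm is the fallback.
      name, lib = select_library(["nosuchgpu", "m"])
      lib.cos.restype = ctypes.c_double
      lib.cos.argtypes = [ctypes.c_double]
      print(name, lib.cos(0.0))  # falls back to "m"; cos(0.0) == 1.0
      ```

      The same probe-and-fall-back pattern applies whether the loader is
      dlopen on Linux or LoadLibrary on Windows: a failed load is expected and
      simply means trying the next variant.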
  20. 19 Dec, 2023 5 commits