  1. 11 Mar, 2024 1 commit
  2. 07 Mar, 2024 1 commit
    • Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries to by adding a new
      OLLAMA_HOME env var that defaults to `~/.ollama`. The logic was already
      idempotent, so this should speed up startups after the first time a
      new release is deployed. It also cleans up after itself.
      
      We now build only a single ROCm version (latest major) on both Windows
      and Linux. Given the large size of ROCm's tensor files, we split the
      dependency out. It's bundled into the installer on Windows, and is a
      separate download on Linux. The Linux install script now detects the
      presence of AMD GPUs, checks whether ROCm v6 is already present, and if
      not, downloads our dependency tar file.
      
      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports, so we can degrade to CPU gracefully instead of having
      llama.cpp+ROCm assert/crash on us. For Windows, we now use Go's
      dynamic library loading to access the amdhip64.dll APIs and query
      GPU information.
  3. 20 Feb, 2024 1 commit
  4. 09 Feb, 2024 1 commit
    • Shutdown faster · 66807615
      Daniel Hiltgen authored
      Make sure that when a shutdown signal comes, we shut down quickly instead
      of waiting for a potentially long exchange to wrap up.
  5. 01 Feb, 2024 1 commit
  6. 29 Jan, 2024 1 commit
  7. 25 Jan, 2024 1 commit
  8. 22 Jan, 2024 2 commits
  9. 21 Jan, 2024 1 commit
  10. 18 Jan, 2024 1 commit
  11. 17 Jan, 2024 1 commit
  12. 16 Jan, 2024 1 commit
  13. 13 Jan, 2024 1 commit
  14. 11 Jan, 2024 2 commits
  15. 08 Jan, 2024 1 commit
  16. 07 Jan, 2024 1 commit
  17. 04 Jan, 2024 1 commit
  18. 03 Jan, 2024 1 commit
  19. 02 Jan, 2024 2 commits
    • Switch windows build to fully dynamic · d966b730
      Daniel Hiltgen authored
      Refactor where we store build outputs, and support a fully dynamic loading
      model on Windows so the base executable has no special dependencies and
      thus doesn't require a special PATH.
    • Refactor how we augment llama.cpp · 9a70aecc
      Daniel Hiltgen authored
      This changes how llama.cpp is included: instead of applying a patch, we
      keep the C++ code directly in the ollama tree, which should make it
      easier to refine and update over time.
  20. 27 Dec, 2023 1 commit
  21. 22 Dec, 2023 2 commits
  22. 21 Dec, 2023 1 commit
    • Revive windows build · d9cd3d96
      Daniel Hiltgen authored
      The Windows native setup still needs some more work, but this gets it
      building again, and if you set the PATH properly, you can run the
      resulting exe on a CUDA system.
  23. 20 Dec, 2023 1 commit
    • Revamp the dynamic library shim · 7555ea44
      Daniel Hiltgen authored
      This switches the default llama.cpp build to be CPU-based and builds the
      GPU variants as dynamically loaded libraries that we can select at
      runtime.
      
      This also bumps the ROCm library to version 6, since 5.7 builds don't
      work with the latest ROCm release that just shipped.
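The runtime selection between the CPU default and the GPU variants might look roughly like the following. A minimal sketch under stated assumptions: the `gpuInfo` struct, `pickVariant` function, and variant names are all illustrative, not ollama's actual discovery API or file names.

```go
package main

import "fmt"

// gpuInfo is a simplified stand-in for what GPU discovery might report.
type gpuInfo struct {
	vendor string // "nvidia", "amd", or "" when no GPU is present
	rocmOK bool   // true if an AMD GPU is on ROCm's supported list
}

// pickVariant chooses which dynamically loaded llama.cpp build to use,
// falling back to the CPU build whenever GPU support is unavailable.
func pickVariant(g gpuInfo) string {
	switch {
	case g.vendor == "nvidia":
		return "llama_cuda"
	case g.vendor == "amd" && g.rocmOK:
		return "llama_rocm"
	default:
		return "llama_cpu" // graceful degradation instead of a crash
	}
}

func main() {
	// An AMD GPU that ROCm doesn't support degrades to the CPU build.
	fmt.Println(pickVariant(gpuInfo{vendor: "amd", rocmOK: false}))
}
```

Centralizing the choice in one function keeps the fallback path explicit: any configuration that isn't positively known to work lands on the CPU library rather than loading a GPU build that might assert or crash.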
  24. 19 Dec, 2023 5 commits