1. 19 Aug, 2024 5 commits
  2. 02 Aug, 2024 1 commit
  3. 25 Jul, 2024 1 commit
  4. 09 Jul, 2024 1 commit
  5. 02 Jul, 2024 1 commit
  6. 19 Jun, 2024 1 commit
  7. 17 Jun, 2024 1 commit
    • Move libraries out of user's path · b2799f11
      Daniel Hiltgen authored
      We update the PATH on Windows so the CLI can be found, but this has an
      unintended side effect: other applications that load our bundled DLLs can
      get terminated when we upgrade. Keeping the libraries out of the user's
      PATH avoids this (see the sketch after this entry).
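      A minimal sketch of the direction this commit points toward: keep the bundled
      libraries next to the executable and expose them only to child processes via a
      process-local PATH, rather than editing the user's persistent PATH. The `lib`
      directory name and the `ollama_runner` binary name are assumptions for
      illustration, not the project's actual layout.

      ```go
      // Sketch: prepend a bundled library directory to a child process's PATH
      // instead of modifying the user's persistent PATH (assumed layout: the
      // DLLs sit in a "lib" directory next to the executable).
      package main

      import (
      	"fmt"
      	"os"
      	"os/exec"
      	"path/filepath"
      )

      func bundledLibDir() (string, error) {
      	exe, err := os.Executable()
      	if err != nil {
      		return "", err
      	}
      	// Hypothetical layout: <exe dir>/lib holds the bundled libraries.
      	return filepath.Join(filepath.Dir(exe), "lib"), nil
      }

      func runWithBundledLibs(name string, args ...string) error {
      	libDir, err := bundledLibDir()
      	if err != nil {
      		return err
      	}
      	cmd := exec.Command(name, args...)
      	// Only this child process sees the extra search path; the user's PATH
      	// (and any other application relying on it) is left untouched.
      	cmd.Env = append(os.Environ(),
      		fmt.Sprintf("PATH=%s%c%s", libDir, os.PathListSeparator, os.Getenv("PATH")))
      	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
      	return cmd.Run()
      }

      func main() {
      	if err := runWithBundledLibs("ollama_runner", "--help"); err != nil {
      		fmt.Fprintln(os.Stderr, "runner failed:", err)
      	}
      }
      ```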
  8. 11 Jun, 2024 1 commit
  9. 28 May, 2024 4 commits
  10. 26 May, 2024 2 commits
  11. 01 May, 2024 1 commit
  12. 27 Apr, 2024 2 commits
  13. 26 Apr, 2024 2 commits
  14. 23 Apr, 2024 1 commit
    • Move nested payloads to installer and zip file on Windows · 058f6cd2
      Daniel Hiltgen authored
      Now that the LLM runner is an executable and not just a DLL, more users are
      hitting Windows security policy configurations that prevent writing to a
      directory and then executing binaries from that same location.
      This change removes the payloads from the main executable on Windows; they are
      instead packaged in the installer and discovered relative to the executable's
      location (see the sketch after this entry).
      It also adds a new zip file for people who want to "roll their own" installation.
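      A minimal sketch of discovering an installed payload directory relative to the
      running executable, as the message describes; the `ollama_runners` directory
      name and the working-directory fallback are assumptions for illustration.

      ```go
      // Sketch: locate runner payloads relative to the executable instead of
      // extracting them from the binary (directory name is a placeholder).
      package main

      import (
      	"fmt"
      	"os"
      	"path/filepath"
      )

      // payloadDir returns the directory that the installer (or an unpacked zip)
      // placed next to the executable, falling back to the working directory so a
      // "roll your own" layout still works.
      func payloadDir() (string, error) {
      	exe, err := os.Executable()
      	if err != nil {
      		return "", err
      	}
      	candidate := filepath.Join(filepath.Dir(exe), "ollama_runners") // assumed name
      	if info, err := os.Stat(candidate); err == nil && info.IsDir() {
      		return candidate, nil
      	}
      	// Fallback: look beside the current working directory.
      	cwd, err := os.Getwd()
      	if err != nil {
      		return "", err
      	}
      	return filepath.Join(cwd, "ollama_runners"), nil
      }

      func main() {
      	dir, err := payloadDir()
      	if err != nil {
      		fmt.Fprintln(os.Stderr, err)
      		os.Exit(1)
      	}
      	fmt.Println("runner payloads expected in:", dir)
      }
      ```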
  15. 28 Mar, 2024 1 commit
  16. 26 Mar, 2024 2 commits
  17. 23 Mar, 2024 1 commit
  18. 15 Mar, 2024 2 commits
  19. 11 Mar, 2024 1 commit
  20. 10 Mar, 2024 1 commit
  21. 07 Mar, 2024 1 commit
    • Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries by adding a new OLLAMA_HOME
      env var that defaults to `~/.ollama` (a sketch of the default resolution
      follows this entry). The extraction logic was already idempotent, so this
      should speed up startups after the first time a new release is deployed.
      It also cleans up after itself.
      
      We now build only a single ROCm version (the latest major) on both Windows
      and Linux. Given the large size of ROCm's tensor files, we split the
      dependency out: it is bundled into the installer on Windows and is a
      separate download on Linux. The Linux install script is now smart enough to
      detect the presence of AMD GPUs, check whether ROCm v6 is already present,
      and download our dependency tar file if it is not.
      
      For Linux discovery, we now use sysfs and check each GPU against what ROCm
      supports, so we can degrade to CPU gracefully instead of having
      llama.cpp+rocm assert/crash on us (see the sysfs sketch after this entry).
      For Windows, we now use Go's Windows dynamic-library loading to access the
      amdhip64.dll APIs and query the GPU information.
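      A minimal sketch of resolving OLLAMA_HOME with the `~/.ollama` default
      described above; the helper name is hypothetical and error handling is
      simplified.

      ```go
      // Sketch: resolve the directory used for extracted LLM libraries from the
      // OLLAMA_HOME env var, defaulting to ~/.ollama as the commit describes.
      package main

      import (
      	"fmt"
      	"os"
      	"path/filepath"
      )

      func ollamaHome() (string, error) {
      	if dir := os.Getenv("OLLAMA_HOME"); dir != "" {
      		return dir, nil
      	}
      	home, err := os.UserHomeDir()
      	if err != nil {
      		return "", err
      	}
      	return filepath.Join(home, ".ollama"), nil
      }

      func main() {
      	dir, err := ollamaHome()
      	if err != nil {
      		fmt.Fprintln(os.Stderr, err)
      		os.Exit(1)
      	}
      	// Extraction into this directory can be kept idempotent by keying the
      	// payload subdirectory on the release version (assumption).
      	fmt.Println("LLM libraries extracted under:", dir)
      }
      ```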
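      And a minimal sketch of sysfs-based AMD GPU discovery on Linux in the spirit
      of the message above: enumerate DRM devices, keep those with AMD's PCI vendor
      ID, and let the caller fall back to CPU when nothing usable is found. The
      exact sysfs paths the project reads are not shown in the commit, so treat
      these as illustrative.

      ```go
      // Sketch: detect AMD GPUs via sysfs so unsupported systems can fall back to
      // CPU instead of crashing inside llama.cpp+rocm. Paths are illustrative.
      package main

      import (
      	"fmt"
      	"os"
      	"path/filepath"
      	"strings"
      )

      const amdVendorID = "0x1002" // AMD's PCI vendor ID

      // amdGPUDevices returns the PCI device IDs of AMD GPUs visible via DRM sysfs.
      func amdGPUDevices() ([]string, error) {
      	cards, err := filepath.Glob("/sys/class/drm/card[0-9]*/device")
      	if err != nil {
      		return nil, err
      	}
      	var devices []string
      	for _, dev := range cards {
      		vendor, err := os.ReadFile(filepath.Join(dev, "vendor"))
      		if err != nil || strings.TrimSpace(string(vendor)) != amdVendorID {
      			continue
      		}
      		id, err := os.ReadFile(filepath.Join(dev, "device"))
      		if err != nil {
      			continue
      		}
      		devices = append(devices, strings.TrimSpace(string(id)))
      	}
      	return devices, nil
      }

      func main() {
      	gpus, err := amdGPUDevices()
      	if err != nil || len(gpus) == 0 {
      		// Degrade gracefully: no usable AMD GPU, run on CPU.
      		fmt.Println("no AMD GPU detected, falling back to CPU")
      		return
      	}
      	// A real implementation would compare these device IDs against the set
      	// the bundled ROCm build supports before enabling GPU offload.
      	fmt.Println("AMD GPU device IDs:", gpus)
      }
      ```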
  22. 29 Feb, 2024 1 commit
  23. 27 Feb, 2024 1 commit
  24. 22 Feb, 2024 2 commits
  25. 21 Feb, 2024 3 commits