1. 07 Mar, 2025 1 commit
    • Better WantedBy declaration · 25248f4b
      Martin Häcker authored
      The problem with default.target is that it always points to whatever target the system is currently booting into. So if you boot into single-user mode or rescue mode, Ollama still tries to start.
      
      I noticed this because Ollama repeatedly tried (and failed) to start during a system update, where it is definitely not wanted.
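      
      A minimal sketch of the fix, assuming the unit is installed at /etc/systemd/system/ollama.service and that the replacement target is multi-user.target (the usual choice; the exact target is not shown in this log):
      
      ```sh
      # In the unit's [Install] section, replace the boot-following target:
      #   WantedBy=default.target   ->   WantedBy=multi-user.target
      # Then refresh units and move the enablement symlink to the new target:
      sudo systemctl daemon-reload
      sudo systemctl reenable ollama.service
      ```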
  2. 07 Feb, 2025 1 commit
  3. 06 Feb, 2025 1 commit
  4. 03 Feb, 2025 1 commit
  5. 10 Dec, 2024 1 commit
    • build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo-built runner.
      On Mac arm64 this will have GPU support, but on all other platforms it will
      be the lowest-common-denominator CPU build. After we fully transition to the
      new Go runners, more tech debt can be removed and we can stop building the
      "default" runner via make, relying on the built-in one instead.
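      
      Purely as an illustration (the subcommand name is an assumption; no flags are shown because none are given here), exercising the new entrypoint might look like:
      
      ```sh
      # Hypothetical smoke test of the builtin runner entrypoint; on Mac arm64
      # this build has GPU support, elsewhere it is the baseline CPU build.
      ollama runner 2>&1 | head -n 3
      ```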
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
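      
      A shell sketch of that precedence, with purely illustrative directory names and runner layout:
      
      ```sh
      # Check the local build tree first, then runners shipped beside the
      # ollama binary, and finally the extracted payload directory.
      locate_runner() {
          name=$1
          exe_dir=$(dirname "$(command -v ollama || echo .)")
          for dir in ./build/runners "$exe_dir/runners" "$HOME/.ollama/runners"; do
              if [ -x "$dir/$name/ollama_runner" ]; then
                  printf '%s\n' "$dir/$name/ollama_runner"
                  return 0
              fi
          done
          return 1
      }
      
      locate_runner cpu_avx || echo "no cpu_avx runner found"
      ```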
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported systems
      and crash.  If the user builds a customized set, we omit the naming
      scheme and don't check for compatibility.  This avoids checking
      requirements at runtime, so that logic has been removed as well.  This
      can be used to build GPU runners with no vector flags, or CPU/GPU
      runners with additional flags (e.g. AVX512) enabled.
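      
      The compatibility gate reduces to a host capability check along these lines:
      
      ```sh
      # Default runner names carry the vector flag (e.g. cpu_avx), so hosts
      # without AVX must use a custom build, which skips this check entirely.
      if grep -qw avx /proc/cpuinfo; then
          echo "AVX present: stock runners are safe to run"
      else
          echo "no AVX: use runners built with custom CPU flags"
      fi
      ```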
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make gets
      really confused, so use relative paths for everything in-repo to avoid breakage.
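      
      The failure is easy to reproduce, because make splits a spaced path into separate words:
      
      ```sh
      # A prerequisite of "$(CURDIR)/out.txt" expands to a path containing
      # spaces, which make reads as three unrelated targets.
      mkdir -p "/tmp/repo with spaces" && cd "/tmp/repo with spaces"
      printf 'abs: $(CURDIR)/out.txt\n\t@echo built abs\n\nrel: out.txt\n\t@echo built rel\n\nout.txt:\n\ttouch $@\n' > Makefile
      make rel   # succeeds: the relative path has no spaces
      make abs   # fails: No rule to make target '/tmp/repo'
      ```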
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar bundle)
      and ensures we clean up prior libraries before extracting the bundle(s).
      Without this change, runners and dependent libraries could leak when we
      update and lead to subtle runtime errors.
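      
      In spirit (the real install.sh layout may differ), the cleanup-before-extract step is:
      
      ```sh
      # Assumed library location; clear previously extracted runners so stale
      # files can't shadow or mix with the new bundle.
      OLLAMA_LIB_DIR=/usr/local/lib/ollama
      sudo rm -rf "$OLLAMA_LIB_DIR"
      sudo mkdir -p "$OLLAMA_LIB_DIR"
      sudo tar -C "$OLLAMA_LIB_DIR" -xzf ollama-linux.tgz   # bundle name illustrative
      ```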
  6. 17 Nov, 2024 1 commit
  7. 07 Sep, 2024 1 commit
  8. 04 Sep, 2024 1 commit
  9. 27 Aug, 2024 1 commit
  10. 19 Aug, 2024 2 commits
  11. 09 Jun, 2024 1 commit
  12. 06 May, 2024 1 commit
  13. 09 Mar, 2024 1 commit
    • Finish unwinding idempotent payload logic · 4a5c9b80
      Daniel Hiltgen authored
      The recent ROCm change partially removed idempotent
      payloads, but the ggml-metal.metal file for Mac was still
      handled idempotently. This finishes the switch to always
      extracting the payloads, and now that idempotency is gone,
      the version directory is no longer useful.
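      
      Conceptually, startup extraction now looks like this every time (archive name and locations illustrative):
      
      ```sh
      # No version directory, no reuse: unpack fresh and clean up on exit.
      payload_dir=$(mktemp -d)
      trap 'rm -rf "$payload_dir"' EXIT
      tar -C "$payload_dir" -xzf payloads.tgz   # archive name illustrative
      ```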
  14. 07 Mar, 2024 1 commit
    • Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries by adding a new
      OLLAMA_HOME env var, which defaults to `~/.ollama`. The logic was already
      idempotent, so this should speed up startups after the first time a
      new release is deployed. It also cleans up after itself.
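      
      The default follows the usual env-var fallback pattern, roughly:
      
      ```sh
      # Explicit OLLAMA_HOME wins; otherwise fall back to ~/.ollama.
      OLLAMA_HOME="${OLLAMA_HOME:-$HOME/.ollama}"
      echo "LLM libraries extract under $OLLAMA_HOME"
      ```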
      
      We now build only a single ROCm version (latest major) on both Windows
      and Linux. Given the large size of ROCm's tensor files, we split the
      dependency out: it's bundled into the installer on Windows and a
      separate download on Linux. The Linux install script is now smart enough
      to detect the presence of AMD GPUs, check whether ROCm v6 is already
      present, and download our dependency tar file if it is not.
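      
      A rough sketch of that detection flow (the PCI filter and ROCm path are assumptions, not lifted from the actual script):
      
      ```sh
      # AMD's PCI vendor ID is 0x1002; /opt/rocm is the conventional install root.
      if lspci -d 1002: | grep -qi 'vga\|display\|3d'; then
          echo "AMD GPU detected"
          if [ -d /opt/rocm ]; then
              echo "existing ROCm found; skipping dependency download"
          else
              echo "would download the ROCm dependency tar here"
          fi
      fi
      ```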
      
      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports, so we can degrade to CPU gracefully instead of having
      llama.cpp+rocm assert/crash on us. For Windows, we now use Go's Windows
      dynamic library loading logic to access the amdhip64.dll APIs and query
      the GPU information.
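      
      The Linux side can be approximated from a shell; the real discovery code goes further and matches each device against ROCm's supported-GPU list:
      
      ```sh
      # Walk PCI devices in sysfs and report anything with AMD's vendor ID.
      for dev in /sys/bus/pci/devices/*; do
          if [ "$(cat "$dev/vendor")" = "0x1002" ]; then
              echo "AMD device ${dev##*/}, class $(cat "$dev/class")"
          fi
      done
      ```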
  15. 09 Feb, 2024 1 commit
  16. 12 Jan, 2024 1 commit
    • Add group delete to uninstall instructions (#1924) · 40a0a90a
      Tristram Oaten authored
      After executing the `userdel ollama` command, I saw this message:
      
      ```sh
      $ sudo userdel ollama
      userdel: group ollama not removed because it has other members.
      ```
      
      This reminded me that I had to remove the dangling group as well. For completeness, the uninstall instructions should cover this step too.
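      
      With that, the removal steps become (user and group names as created by the install script):
      
      ```sh
      sudo userdel ollama
      sudo groupdel ollama   # removes the dangling group userdel leaves behind
      ```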
      
      Thanks!
  17. 25 Oct, 2023 1 commit
  18. 24 Oct, 2023 1 commit
  19. 15 Oct, 2023 1 commit
  20. 01 Oct, 2023 1 commit
  21. 25 Sep, 2023 5 commits