1. 10 Dec, 2024 1 commit
      build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo-built runner.
      On Mac arm64 this will have GPU support, but on all other platforms it will
      be the lowest common denominator CPU build.  After we fully transition to
      the new Go runners, more tech debt can be removed and we can stop building
      the "default" runner via make, relying on the builtin always.
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported systems
      and crash.  If the user builds a customized set, we omit the naming
      scheme and don't check for compatibility.  This avoids checking
      requirements at runtime, so that logic has been removed as well.  This
      can be used to build GPU runners with no vector flags, or CPU/GPU
      runners with additional flags (e.g. AVX512) enabled.
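      The naming-based compatibility check might look something like this sketch: a known vector-flag suffix in the runner name triggers a host check, while a name without a recognized suffix is treated as a custom build and skipped. The function names and the stubbed feature detection are assumptions for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// hostFlags would come from real CPU feature detection; stubbed here
// as a host that supports AVX but not AVX2 or AVX512.
func hostFlags() map[string]bool {
	return map[string]bool{"avx": true}
}

// compatible reports whether a runner may run on this host.
// If the runner name embeds a known vector flag (e.g. "cpu_avx"),
// host support is required; names without a recognized flag suffix
// are customized builds and are assumed compatible (no runtime check).
func compatible(runner string) bool {
	known := []string{"avx512", "avx2", "avx"}
	for _, f := range known {
		if strings.HasSuffix(runner, "_"+f) {
			return hostFlags()[f]
		}
	}
	return true // customized build: skip the check
}

func main() {
	fmt.Println(compatible("cpu_avx"))    // true: host has AVX
	fmt.Println(compatible("cpu_avx512")) // false: host lacks AVX512
	fmt.Println(compatible("cpu_custom")) // true: no flag suffix, assumed ok
}
```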
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make gets
      very confused, so we use relative paths for everything in-repo to avoid breakage.
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar bundle)
      and ensures we clean up prior libraries before extracting the bundle(s).
      Without this change, runners and dependent libraries could leak when we
      update and lead to subtle runtime errors.
  2. 19 Nov, 2024 1 commit
  3. 18 Nov, 2024 1 commit
  4. 16 Nov, 2024 1 commit
  5. 05 Sep, 2024 1 commit
  6. 04 Sep, 2024 1 commit
  7. 20 Aug, 2024 1 commit
      Split rocm back out of bundle (#6432) · a017cf2f
      Daniel Hiltgen authored
      We're over budget for GitHub's maximum release artifact size with ROCm plus two
      CUDA versions.  This splits ROCm back out as a discrete artifact, but keeps the
      layout so it can be extracted into the same location as the main bundle.
  8. 19 Aug, 2024 2 commits
      Adjust layout to bin+lib/ollama · 88bb9e33
      Daniel Hiltgen authored
      Refactor linux packaging · 74d45f01
      Daniel Hiltgen authored
      This adjusts Linux to follow a similar model to Windows, with a discrete archive
      (zip/tgz) to carry the primary executable and dependent libraries. Runners are
      still carried as payloads inside the main binary.
      
      Darwin retains the payload model, where the Go binary is fully self-contained.
  9. 02 Aug, 2024 1 commit
  10. 25 Jul, 2024 1 commit
  11. 19 Jun, 2024 1 commit
  12. 11 Jun, 2024 1 commit
  13. 28 May, 2024 4 commits
  14. 26 May, 2024 2 commits
  15. 01 May, 2024 1 commit
  16. 15 Mar, 2024 1 commit
  17. 29 Feb, 2024 1 commit
  18. 21 Feb, 2024 1 commit
  19. 09 Feb, 2024 1 commit
  20. 16 Jan, 2024 1 commit
  21. 04 Jan, 2024 1 commit
      Fail fast on WSL1 while allowing on WSL2 · 2fcd41ef
      Daniel Hiltgen authored
      This fails fast on WSL1 to prevent users from accidentally installing there, with
      instructions guiding them to upgrade their WSL instance to version 2.  Once
      running WSL2, users with an NVIDIA card can follow NVIDIA's instructions to set
      up GPU passthrough and run models on the GPU; this is not possible on WSL1.
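      One common heuristic for distinguishing WSL1 from WSL2 is the kernel release string (e.g. `/proc/sys/kernel/osrelease`): WSL2 kernels advertise "microsoft-standard" / "WSL2", while WSL1 reports a Microsoft-translated 4.4-series kernel. The sketch below is an assumption about how such a check could work, not the installer's actual logic:

```go
package main

import (
	"fmt"
	"strings"
)

// classifyWSL is a heuristic over the kernel release string:
// WSL2 kernels contain "microsoft-standard" or "WSL2", WSL1 kernels
// contain "Microsoft" without either marker, and anything else is
// treated as a native Linux kernel.
func classifyWSL(release string) string {
	r := strings.ToLower(release)
	switch {
	case strings.Contains(r, "microsoft-standard") || strings.Contains(r, "wsl2"):
		return "wsl2"
	case strings.Contains(r, "microsoft"):
		return "wsl1"
	default:
		return "native"
	}
}

func main() {
	fmt.Println(classifyWSL("4.4.0-19041-Microsoft"))             // wsl1
	fmt.Println(classifyWSL("5.15.90.1-microsoft-standard-WSL2")) // wsl2
	fmt.Println(classifyWSL("6.5.0-generic"))                     // native
}
```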
  22. 03 Jan, 2024 1 commit
  23. 04 Dec, 2023 1 commit
  24. 29 Nov, 2023 1 commit
  25. 17 Nov, 2023 2 commits
  26. 16 Nov, 2023 1 commit
  27. 07 Nov, 2023 1 commit
  28. 01 Nov, 2023 1 commit
  29. 25 Oct, 2023 1 commit
  30. 23 Oct, 2023 1 commit
  31. 16 Oct, 2023 1 commit
  32. 27 Sep, 2023 2 commits
  33. 26 Sep, 2023 1 commit