1. 17 Sep, 2025 1 commit
  2. 16 Sep, 2025 1 commit
  3. 10 Sep, 2025 1 commit
    • Add v12 + v13 cuda support (#12000) · 17a023f3
      Daniel Hiltgen authored
      * Add support for upcoming NVIDIA Jetsons
      
      The latest Jetsons with JetPack 7 are moving to an SBSA-compatible model and
      will not require building a JetPack-specific variant.
      
      * cuda: bring back dual versions
      
      This adds back dual CUDA versions for our releases,
      with v11 and v13 to cover a broad set of GPUs and
      driver versions.
      
      * win: break up native builds in build_windows.ps1
      
      * v11 build working on windows and linux
      
      * switch to cuda v12.8 not JIT
      
      * Set CUDA compression to size
      
      * enhance manual install linux docs
  4. 29 Aug, 2025 1 commit
  5. 14 Aug, 2025 1 commit
    • llm: New memory management · d5a0d8d9
      Jesse Gross authored
      This changes the memory allocation strategy from upfront estimation to
      tracking the actual allocations made by the engine and reacting to them.
      The goal is to avoid issues caused by both under-estimation (crashes) and
      over-estimation (poor performance due to under-utilized GPUs).
      
      It is currently opt-in and can be enabled for models running on the
      Ollama engine by setting OLLAMA_NEW_ESTIMATES=1. Behavior in other
      cases is unchanged and will continue to use the existing estimates.
  6. 13 Aug, 2025 1 commit
    • discovery: fix cudart driver version (#11614) · 837379a9
      Daniel Hiltgen authored
      We prefer the nvcuda library, which reports driver versions. When we
      dropped CUDA v11, we added a safety check for too-old drivers. What we
      missed was that the cudart fallback discovery logic didn't have the
      driver version wired up. This fixes cudart discovery to expose the driver
      version as well, so we no longer reject all GPUs when nvcuda doesn't work.
  7. 11 Aug, 2025 1 commit
  8. 30 Jul, 2025 1 commit
  9. 23 Jun, 2025 1 commit
    • Re-remove cuda v11 (#10694) · 1c6669e6
      Daniel Hiltgen authored
      * Re-remove cuda v11
      
      Revert the revert - drop v11 support, requiring drivers newer than Feb 2023
      
      This reverts commit c6bcdc42.
      
      * Simplify layout
      
      With only one version of the GPU libraries, we can simplify things somewhat. (Jetsons still require special handling.)
      
      * distinct sbsa variant for linux arm64
      
      This avoids accidentally trying to load the sbsa cuda libraries on
      a jetson system which results in crashes.
      
      * temporarily prevent rocm+cuda mixed loading
  10. 13 May, 2025 1 commit
  11. 12 May, 2025 1 commit
  12. 07 May, 2025 1 commit
    • remove cuda v11 (#10569) · fa393554
      Daniel Hiltgen authored
      This reduces the size of our Windows installer payloads by ~256M by dropping
      support for NVIDIA drivers older than Feb 2023.  Hardware support is unchanged.
      
      Linux default bundle sizes are reduced by ~600M to 1G.
  13. 06 May, 2025 1 commit
  14. 05 May, 2025 1 commit
  15. 02 Apr, 2025 1 commit
  16. 01 Apr, 2025 1 commit
  17. 25 Feb, 2025 1 commit
  18. 14 Feb, 2025 2 commits
    • llm: attempt to evaluate symlinks, but do not fail (#9089) · 5296f487
      Jeffrey Morgan authored
      provides a better approach to #9088 that attempts to evaluate
      symlinks (important on macOS, where 'ollama' is often a symlink),
      but uses the result of os.Executable() as a fallback in scenarios
      where filepath.EvalSymlinks fails due to permission errors or
      other issues
    • llm: do not evaluate symlink for exe path lookup (#9088) · f05774b0
      Jeffrey Morgan authored
      In some cases, the directories in the executable path read by
      filepath.EvalSymlinks are not accessible, resulting in permission
      errors and failures when running models. It also doesn't work well
      with long paths on Windows, likewise resulting in errors. This
      change removes filepath.EvalSymlinks from the os.Executable()
      lookup altogether.
  19. 31 Jan, 2025 1 commit
  20. 30 Jan, 2025 2 commits
  21. 29 Jan, 2025 1 commit
    • next build (#8539) · dcfb7a10
      Michael Yang authored
      
      
      * add build to .dockerignore
      
      * test: only build one arch
      
      * add build to .gitignore
      
      * fix ccache path
      
      * filter amdgpu targets
      
      * only filter if autodetecting
      
      * Don't clobber gpu list for default runner
      
      This ensures the GPU specific environment variables are set properly
      
      * explicitly set CXX compiler for HIP
      
      * Update build_windows.ps1
      
      This isn't complete, but is close.  Dependencies are missing, and it only builds the "default" preset.
      
      * build: add ollama subdir
      
      * add .git to .dockerignore
      
      * docs: update development.md
      
      * update build_darwin.sh
      
      * remove unused scripts
      
      * llm: add cwd and build/lib/ollama to library paths
      
      * default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
      
      * add additional cmake output vars for msvc
      
      * interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
      
      * remove unnecessary filepath.Dir, cleanup
      
      * add hardware-specific directory to path
      
      * use absolute server path
      
      * build: linux arm
      
      * cmake install targets
      
      * remove unused files
      
      * ml: visit each library path once
      
      * build: skip cpu variants on arm
      
      * build: install cpu targets
      
      * build: fix workflow
      
      * shorter names
      
      * fix rocblas install
      
      * docs: clean up development.md
      
      * consistent build dir removal in development.md
      
      * silence -Wimplicit-function-declaration build warnings in ggml-cpu
      
      * update readme
      
      * update development readme
      
      * llm: update library lookup logic now that there is one runner (#8587)
      
      * tweak development.md
      
      * update docs
      
      * add windows cuda/rocm tests
      
      ---------
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
  22. 03 Jan, 2025 1 commit
  23. 11 Dec, 2024 1 commit
  24. 10 Dec, 2024 2 commits
    • Stefan Weil
    • build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo-built runner.
      On Mac arm64 this will have GPU support, but on all other platforms it will
      be the lowest-common-denominator CPU build.  After we fully transition
      to the new Go runners, more tech debt can be removed and we can stop building
      the "default" runner via make and rely on the builtin always.
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported systems
      and crash.  If the user builds a customized set, we omit the naming
      scheme and don't check for compatibility.  This avoids checking
      requirements at runtime, so that logic has been removed as well.  This
      can be used to build GPU runners with no vector flags, or CPU/GPU
      runners with additional flags (e.g. AVX512) enabled.
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make gets
      really confused so use relative paths for everything in-repo to avoid breakage.
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar bundle)
      and ensures we clean up prior libraries before extracting the bundle(s).
      Without this change, runners and dependent libraries could leak across
      updates and lead to subtle runtime errors.
  25. 03 Dec, 2024 1 commit
  26. 12 Nov, 2024 1 commit
  27. 07 Nov, 2024 1 commit
  28. 02 Nov, 2024 1 commit
  29. 30 Oct, 2024 1 commit
  30. 26 Oct, 2024 1 commit
    • Better support for AMD multi-GPU on linux (#7212) · d7c94e0c
      Daniel Hiltgen authored
      * Better support for AMD multi-GPU
      
      This resolves a number of problems related to AMD multi-GPU setups on Linux.
      
      The numeric IDs used by ROCm are not the same as the numeric IDs exposed in
      sysfs, although the ordering is consistent.  We have to count up from the first
      valid gfx (major/minor/patch with non-zero values) we find, starting at zero.
      
      There are three different env vars for selecting GPUs, and only ROCR_VISIBLE_DEVICES
      supports UUID-based identification, so we should favor that one, and try
      to use UUIDs if detected to avoid potential ordering bugs with numeric IDs.
      
      * ROCR_VISIBLE_DEVICES only works on Linux
      
      Use the numeric-ID-only HIP_VISIBLE_DEVICES on Windows
  31. 17 Oct, 2024 1 commit