1. 29 Jan, 2025 1 commit
    • next build (#8539) · dcfb7a10
      Michael Yang authored
      
      
      * add build to .dockerignore
      
      * test: only build one arch
      
      * add build to .gitignore
      
      * fix ccache path
      
      * filter amdgpu targets
      
      * only filter if autodetecting
      
      * Don't clobber gpu list for default runner
      
      This ensures the GPU-specific environment variables are set properly
      
      * explicitly set CXX compiler for HIP
      
      * Update build_windows.ps1
      
      This isn't complete, but it is close. Dependencies are missing, and it only builds the "default" preset.
      
      * build: add ollama subdir
      
      * add .git to .dockerignore
      
      * docs: update development.md
      
      * update build_darwin.sh
      
      * remove unused scripts
      
      * llm: add cwd and build/lib/ollama to library paths
      
      * default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS (see the sketch after this commit message)
      
      * add additional cmake output vars for msvc
      
      * interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
      
      * remove unnecessary filepath.Dir, cleanup
      
      * add hardware-specific directory to path
      
      * use absolute server path
      
      * build: linux arm
      
      * cmake install targets
      
      * remove unused files
      
      * ml: visit each library path once
      
      * build: skip cpu variants on arm
      
      * build: install cpu targets
      
      * build: fix workflow
      
      * shorter names
      
      * fix rocblas install
      
      * docs: clean up development.md
      
      * consistent build dir removal in development.md
      
      * silence -Wimplicit-function-declaration build warnings in ggml-cpu
      
      * update readme
      
      * update development readme
      
      * llm: update library lookup logic now that there is one runner (#8587)
      
      * tweak development.md
      
      * update docs
      
      * add windows cuda/rocm tests
      
      ---------
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
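      A minimal Go sketch of the library lookup described in the path-related items above (the helper name and exact ordering are illustrative assumptions, not Ollama's actual code):
      
          package main
          
          import (
              "os"
              "path/filepath"
              "runtime"
          )
          
          // libraryPaths builds a search list that favors the current working
          // directory and build/lib/ollama; on macOS it defaults
          // DYLD_LIBRARY_PATH to LD_LIBRARY_PATH when the former is unset.
          func libraryPaths() []string {
              var paths []string
              if cwd, err := os.Getwd(); err == nil {
                  paths = append(paths, cwd, filepath.Join(cwd, "build", "lib", "ollama"))
              }
              if runtime.GOOS == "darwin" && os.Getenv("DYLD_LIBRARY_PATH") == "" {
                  if ld := os.Getenv("LD_LIBRARY_PATH"); ld != "" {
                      os.Setenv("DYLD_LIBRARY_PATH", ld)
                  }
              }
              return paths
          }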
  2. 10 Dec, 2024 1 commit
    • build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo-built runner.
      On Mac arm64 this will have GPU support, but on all other platforms it will
      be the lowest-common-denominator CPU build. After we fully transition
      to the new Go runners, more tech debt can be removed and we can stop building
      the "default" runner via make and rely on the builtin instead.
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported systems
      and crash. If the user builds a customized set, we omit the naming
      scheme and don't check for compatibility. This avoids checking
      requirements at runtime, so that logic has been removed as well. This
      can be used to build GPU runners with no vector flags, or CPU/GPU
      runners with additional flags (e.g. AVX512) enabled. A sketch of the
      name-based check follows this commit message.
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make gets
      confused, so use relative paths for everything in-repo to avoid breakage.
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar bundle)
      and ensures we clean up prior libraries before extracting the bundle(s).
      Without this change, runners and dependent libraries could leak across
      updates and lead to subtle runtime errors.
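      
      A hedged Go sketch of the vector-flag compatibility check described under "Support customized CPU flags for runners" above (the runner-name suffix and helper are assumptions for illustration):
      
          package main
          
          import (
              "strings"
          
              "golang.org/x/sys/cpu"
          )
          
          // compatibleRunner reports whether a runner whose name carries a
          // vector-flag suffix (e.g. "cpu_avx") can run on this machine.
          // User-customized builds omit the naming scheme and skip the check.
          func compatibleRunner(name string) bool {
              if strings.HasSuffix(name, "_avx") {
                  return cpu.X86.HasAVX
              }
              return true // no suffix: customized build, assume compatible
          }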
  3. 03 Dec, 2024 1 commit
  4. 12 Nov, 2024 1 commit
  5. 30 Oct, 2024 1 commit
  6. 17 Oct, 2024 1 commit
  7. 15 Oct, 2024 1 commit
  8. 14 Oct, 2024 1 commit
  9. 23 Aug, 2024 1 commit
  10. 19 Aug, 2024 3 commits
  11. 11 Jul, 2024 1 commit
  12. 09 Jul, 2024 1 commit
    • Detect CUDA OS Overhead · f6f759fc
      Daniel Hiltgen authored
      This adds logic to detect skew between the driver and the
      management library that can be attributed to OS overhead, and
      records it so we can adjust subsequent management-library free-VRAM
      updates and avoid OOM scenarios.
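      
      A minimal Go sketch of the idea, assuming the driver and the management library each report free VRAM and the startup skew is treated as OS overhead (type and method names are hypothetical):
      
          package main
          
          // vramTracker records the startup skew between the two readings
          // and applies it to later management-library updates.
          type vramTracker struct {
              osOverhead uint64
          }
          
          // recordOverhead captures the skew attributed to the OS at startup.
          func (t *vramTracker) recordOverhead(driverFree, mlFree uint64) {
              if mlFree > driverFree {
                  t.osOverhead = mlFree - driverFree
              }
          }
          
          // adjustedFree discounts a fresh management-library reading by the
          // recorded overhead to avoid over-committing VRAM.
          func (t *vramTracker) adjustedFree(mlFree uint64) uint64 {
              if mlFree > t.osOverhead {
                  return mlFree - t.osOverhead
              }
              return 0
          }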
  13. 21 Jun, 2024 1 commit
    • Disable concurrency for AMD + Windows · 9929751c
      Daniel Hiltgen authored
      Until ROCm v6.2 ships, we won't be able to get accurate free-memory
      reporting on Windows, which makes automatic concurrency too risky.
      Users can still opt in, but will need to pay attention to model sizes;
      otherwise they may thrash/page VRAM or cause OOM crashes.
      All other platforms and GPUs have accurate VRAM reporting wired
      up now, so we can turn on concurrency by default.
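      
      A sketch of the platform gate this implies, with hypothetical names throughout (the commit does not show the actual check):
      
          package main
          
          import "runtime"
          
          // defaultNumParallel returns a conservative default when free-VRAM
          // reporting is unreliable (ROCm on Windows before v6.2), while
          // still letting an explicit user setting win.
          func defaultNumParallel(gpuLibrary string, userSetting int) int {
              if userSetting > 0 {
                  return userSetting // explicit opt-in wins
              }
              if runtime.GOOS == "windows" && gpuLibrary == "rocm" {
                  return 1 // no automatic concurrency
              }
              return 4 // illustrative default where reporting is accurate
          }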
  14. 14 Jun, 2024 5 commits
  15. 09 May, 2024 1 commit
    • Record more GPU information · 8727a9c1
      Daniel Hiltgen authored
      This cleans up the logging for GPU discovery a bit, and can
      serve as a foundation to report GPU information in a future UX.
  16. 23 Apr, 2024 1 commit
    • Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The defaults are
      currently 1 concurrent request per model and only 1 loaded model at a
      time, but these can be adjusted by setting OLLAMA_NUM_PARALLEL and
      OLLAMA_MAX_LOADED_MODELS.
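      
      A small Go sketch of how such settings are typically read (the envInt helper is an assumption; only the variable names and defaults come from the commit):
      
          package main
          
          import (
              "os"
              "strconv"
          )
          
          // envInt reads a positive integer from the environment, falling
          // back to a default when unset or invalid.
          func envInt(key string, fallback int) int {
              if v := os.Getenv(key); v != "" {
                  if n, err := strconv.Atoi(v); err == nil && n > 0 {
                      return n
                  }
              }
              return fallback
          }
          
          func main() {
              numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)
              maxLoaded := envInt("OLLAMA_MAX_LOADED_MODELS", 1)
              _, _ = numParallel, maxLoaded
          }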
  17. 10 Apr, 2024 1 commit
  18. 01 Apr, 2024 1 commit
  19. 12 Feb, 2024 1 commit
  20. 11 Jan, 2024 1 commit
    • Support multiple variants for a given llm lib type · 8da7bef0
      Daniel Hiltgen authored
      In some cases we may want multiple variants for a given GPU type or CPU.
      This adds logic to have an optional Variant which we can use to select
      an optimal library, but also allows us to try multiple variants in case
      some fail to load.
      
      This can be useful for scenarios such as ROCm v5 vs v6 incompatibility
      or potentially CPU features.
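      
      A hedged Go sketch of the fallback described above (the load callback is a placeholder for the real dynamic-loading call):
      
          package main
          
          import "fmt"
          
          // tryVariants walks candidate library variants in preference order
          // (e.g. rocm_v6 before rocm_v5) and returns the first that loads.
          func tryVariants(variants []string, load func(string) error) (string, error) {
              for _, v := range variants {
                  if err := load(v); err == nil {
                      return v, nil
                  }
              }
              return "", fmt.Errorf("no usable variant among %v", variants)
          }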
  21. 09 Jan, 2024 1 commit
  22. 03 Jan, 2024 1 commit
  23. 02 Jan, 2024 1 commit
    • Switch windows build to fully dynamic · d966b730
      Daniel Hiltgen authored
      Refactor where we store build outputs, and support a fully dynamic
      loading model on Windows so the base executable has no special
      dependencies and thus doesn't require a special PATH.
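      
      A minimal Windows-only Go sketch of runtime loading in this style (the function and its use here are illustrative):
      
          //go:build windows
          
          package main
          
          import "syscall"
          
          // loadRunnerDLL resolves a library at runtime instead of link time,
          // so the base executable carries no special DLL dependencies.
          func loadRunnerDLL(path string) (syscall.Handle, error) {
              // Symbols would then be looked up via syscall.GetProcAddress.
              return syscall.LoadLibrary(path)
          }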
  24. 20 Dec, 2023 1 commit
    • Revamp the dynamic library shim · 7555ea44
      Daniel Hiltgen authored
      This switches the default llama.cpp to be CPU-based, and builds the GPU variants
      as dynamically loaded libraries which we can select at runtime.
      
      This also bumps the ROCm library to version 6, since 5.7 builds don't work
      with the latest ROCm release that just shipped.
  25. 19 Dec, 2023 1 commit