1. 29 Jan, 2025 1 commit
      next build (#8539) · dcfb7a10
      Michael Yang authored
      
      
      * add build to .dockerignore
      
      * test: only build one arch
      
      * add build to .gitignore
      
      * fix ccache path
      
      * filter amdgpu targets
      
      * only filter if autodetecting
      
      * Don't clobber gpu list for default runner
      
      This ensures the GPU specific environment variables are set properly
      
      * explicitly set CXX compiler for HIP
      
      * Update build_windows.ps1
      
      This isn't complete, but it is close. Dependencies are missing, and it only builds the "default" preset.
      
      * build: add ollama subdir
      
      * add .git to .dockerignore
      
      * docs: update development.md
      
      * update build_darwin.sh
      
      * remove unused scripts
      
      * llm: add cwd and build/lib/ollama to library paths
      
      * default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS (the library-path handling is sketched after this commit's notes)
      
      * add additional cmake output vars for msvc
      
      * interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
      
      * remove unnecessary filepath.Dir, cleanup
      
      * add hardware-specific directory to path
      
      * use absolute server path
      
      * build: linux arm
      
      * cmake install targets
      
      * remove unused files
      
      * ml: visit each library path once
      
      * build: skip cpu variants on arm
      
      * build: install cpu targets
      
      * build: fix workflow
      
      * shorter names
      
      * fix rocblas install
      
      * docs: clean up development.md
      
      * consistent build dir removal in development.md
      
      * silence -Wimplicit-function-declaration build warnings in ggml-cpu
      
      * update readme
      
      * update development readme
      
      * llm: update library lookup logic now that there is one runner (#8587)
      
      * tweak development.md
      
      * update docs
      
      * add windows cuda/rocm tests
      
      ---------
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
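
      The library-lookup bullets above (adding cwd and build/lib/ollama to the library paths, defaulting DYLD_LIBRARY_PATH to LD_LIBRARY_PATH on macOS, and adding the hardware-specific directory) roughly amount to something like the following Go sketch; the package name, function name, and search order are placeholders, not the actual implementation:

      ```go
      package llm

      import (
          "os"
          "path/filepath"
          "runtime"
      )

      // libraryPaths sketches the lookup described above: search the current
      // working directory and build/lib/ollama in addition to the lib/ollama
      // directory next to the executable. The exact order is an assumption.
      func libraryPaths() []string {
          var paths []string
          if cwd, err := os.Getwd(); err == nil {
              paths = append(paths, cwd, filepath.Join(cwd, "build", "lib", "ollama"))
          }
          if exe, err := os.Executable(); err == nil {
              paths = append(paths, filepath.Join(filepath.Dir(exe), "lib", "ollama"))
          }

          // On macOS, default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH when the
          // former is unset, so the runner inherits a usable search path.
          if runtime.GOOS == "darwin" && os.Getenv("DYLD_LIBRARY_PATH") == "" {
              os.Setenv("DYLD_LIBRARY_PATH", os.Getenv("LD_LIBRARY_PATH"))
          }
          return paths
      }
      ```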
  2. 26 Oct, 2024 1 commit
      Better support for AMD multi-GPU on linux (#7212) · d7c94e0c
      Daniel Hiltgen authored
      * Better support for AMD multi-GPU
      
      This resolves a number of problems related to AMD multi-GPU setups on linux.
      
      The numeric IDs used by rocm are not the same as the numeric IDs exposed in
      sysfs, although the ordering is consistent. We have to count up from the first
      valid gfx (major/minor/patch with non-zero values) we find, starting at zero.
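
      A minimal Go sketch of that renumbering follows; the kfd sysfs path and the gfx_target_version property name are assumptions for illustration, not taken from the commit:

      ```go
      package discover

      import (
          "os"
          "path/filepath"
          "strconv"
          "strings"
      )

      // rocmIDs visits the sysfs topology nodes in order and assigns
      // rocm-style IDs, counting up from zero, only to nodes reporting a
      // non-zero gfx target version (the CPU and other non-GPU nodes
      // report zero). Note: Glob sorts lexically; a real implementation
      // would sort node IDs numerically.
      func rocmIDs() map[string]int {
          ids := map[string]int{}
          next := 0
          nodes, _ := filepath.Glob("/sys/class/kfd/kfd/topology/nodes/*/properties")
          for _, props := range nodes {
              data, err := os.ReadFile(props)
              if err != nil {
                  continue
              }
              for _, line := range strings.Split(string(data), "\n") {
                  fields := strings.Fields(line)
                  if len(fields) == 2 && fields[0] == "gfx_target_version" {
                      if v, _ := strconv.Atoi(fields[1]); v != 0 {
                          ids[filepath.Dir(props)] = next
                          next++
                      }
                      break
                  }
              }
          }
          return ids
      }
      ```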
      
      There are 3 different env vars for selecting GPUs, and only ROCR_VISIBLE_DEVICES
      supports UUID-based identification, so we should favor that one and try to use
      UUIDs when detected, to avoid potential ordering bugs with numeric IDs.
      
      * ROCR_VISIBLE_DEVICES only works on linux
      
      Use HIP_VISIBLE_DEVICES, which only accepts numeric IDs, on windows
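
      Putting those two notes together, the selection logic might look like the sketch below; the gpuInfo type and its fields are invented for illustration:

      ```go
      package discover

      import (
          "runtime"
          "strings"
      )

      type gpuInfo struct {
          ID   string // numeric ID, as a string
          UUID string // empty if the driver did not report one
      }

      // visibleDevicesEnv favors ROCR_VISIBLE_DEVICES with UUIDs on linux to
      // avoid ordering bugs, and falls back to the numeric-only
      // HIP_VISIBLE_DEVICES on windows, where ROCR_VISIBLE_DEVICES has no effect.
      func visibleDevicesEnv(gpus []gpuInfo) (key, value string) {
          ids := make([]string, 0, len(gpus))
          if runtime.GOOS == "linux" {
              uuids := make([]string, 0, len(gpus))
              for _, g := range gpus {
                  if g.UUID == "" {
                      uuids = nil // fall back to numeric IDs if any UUID is missing
                      break
                  }
                  uuids = append(uuids, g.UUID)
              }
              if uuids != nil {
                  return "ROCR_VISIBLE_DEVICES", strings.Join(uuids, ",")
              }
              for _, g := range gpus {
                  ids = append(ids, g.ID)
              }
              return "ROCR_VISIBLE_DEVICES", strings.Join(ids, ",")
          }
          for _, g := range gpus {
              ids = append(ids, g.ID)
          }
          return "HIP_VISIBLE_DEVICES", strings.Join(ids, ",")
      }
      ```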
  3. 17 Oct, 2024 1 commit
  4. 27 Aug, 2024 1 commit
  5. 19 Aug, 2024 2 commits
      Adjust layout to bin+lib/ollama · 88bb9e33
      Daniel Hiltgen authored
      Refactor linux packaging · 74d45f01
      Daniel Hiltgen authored
      This adjusts linux to follow a model similar to windows, with a discrete archive
      (zip/tgz) carrying the primary executable and its dependent libraries. Runners are
      still carried as payloads inside the main binary.
      
      Darwin retains the payload model, where the go binary is fully self-contained.
  6. 02 Aug, 2024 1 commit
  7. 10 Jul, 2024 1 commit
      Bump ROCm on windows to 6.1.2 · 1f50356e
      Daniel Hiltgen authored
      This also adjusts our algorithm to favor our bundled ROCm.
      I've confirmed VRAM reporting still doesn't work properly, so we
      can't yet enable concurrency by default.
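
      "Favor our bundled ROCm" suggests an ordering of search paths; a hedged sketch, with all paths and names assumed for illustration:

      ```go
      package discover

      import (
          "os"
          "path/filepath"
      )

      // rocmSearchPaths looks for ROCm libraries in the copy shipped next to
      // the executable before consulting ROCM_PATH or the default windows
      // install location, so the bundled version wins.
      func rocmSearchPaths() []string {
          var paths []string
          if exe, err := os.Executable(); err == nil {
              paths = append(paths, filepath.Join(filepath.Dir(exe), "rocm"))
          }
          if p := os.Getenv("ROCM_PATH"); p != "" {
              paths = append(paths, p)
          }
          return append(paths, `C:\Program Files\AMD\ROCm`)
      }
      ```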
  8. 01 May, 2024 1 commit
  9. 23 Apr, 2024 1 commit
      Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The default
      settings are currently set at 1 concurrent request per model and only 1
      loaded model at a time, but these can be adjusted by setting
      OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
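
      A sketch of how those two knobs might be read, with the stated defaults of one concurrent request and one loaded model (the helper is invented for illustration):

      ```go
      package server

      import (
          "os"
          "strconv"
      )

      // envInt returns the value of an integer env var, falling back to the
      // given default when it is unset, malformed, or non-positive.
      func envInt(name string, def int) int {
          if v, err := strconv.Atoi(os.Getenv(name)); err == nil && v > 0 {
              return v
          }
          return def
      }

      var (
          numParallel     = envInt("OLLAMA_NUM_PARALLEL", 1)      // requests per model
          maxLoadedModels = envInt("OLLAMA_MAX_LOADED_MODELS", 1) // models in memory
      )
      ```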
  10. 12 Mar, 2024 1 commit
      Fix iGPU detection for linux · 82b0c7c2
      Daniel Hiltgen authored
      This fixes a few bugs in the new sysfs discovery logic. iGPUs are now
      correctly identified by their reported VRAM of less than 1G. The sysfs
      IDs are off by one compared to what HIP expects because the CPU is
      reported by amdgpu, but HIP only cares about GPUs.
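
      In sketch form, assuming hypothetical field names, the heuristic and the index correction look like this (the direction of the off-by-one is inferred from the CPU occupying an amdgpu slot that HIP skips):

      ```go
      package discover

      type amdDevice struct {
          sysfsIndex int
          totalVRAM  uint64 // bytes, as reported via sysfs
      }

      // isIntegrated treats a device reporting less than 1 GiB of VRAM as an iGPU.
      func isIntegrated(d amdDevice) bool {
          return d.totalVRAM < 1<<30
      }

      // hipIndex compensates for the CPU appearing in the amdgpu listing but
      // not in HIP's device enumeration.
      func hipIndex(d amdDevice) int {
          return d.sysfsIndex - 1
      }
      ```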
  11. 07 Mar, 2024 1 commit
      Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries by adding a new
      OLLAMA_HOME env var, which defaults to `~/.ollama`. The logic was already
      idempotent, so this should speed up startups after the first time a
      new release is deployed. It also cleans up after itself.
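
      A minimal sketch of that default, with an illustrative function name:

      ```go
      package envconfig

      import (
          "os"
          "path/filepath"
      )

      // ollamaHome returns OLLAMA_HOME when set, otherwise ~/.ollama.
      func ollamaHome() string {
          if home := os.Getenv("OLLAMA_HOME"); home != "" {
              return home
          }
          userHome, err := os.UserHomeDir()
          if err != nil {
              return ".ollama" // fall back to a relative dir if HOME is unknown
          }
          return filepath.Join(userHome, ".ollama")
      }
      ```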
      
      We now build only a single ROCm version (latest major) on both windows
      and linux. Given the large size of ROCm's tensor files, we split the
      dependency out. It's bundled into the installer on windows, and a
      separate download on linux. The linux install script now detects the
      presence of AMD GPUs, checks whether rocm v6 is already present, and
      if not, downloads our dependency tar file.
      
      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports so we can degrade to CPU gracefully instead of having
      llama.cpp+rocm assert/crash on us. For Windows, we now use Go's dynamic
      library loading to access the amdhip64.dll APIs and query the GPU
      information.
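
      On Windows, the standard library's lazy DLL loading is enough for that kind of query. A sketch using hipGetDeviceCount as an illustrative HIP entry point (the commit doesn't say which APIs are called):

      ```go
      //go:build windows

      package discover

      import (
          "fmt"
          "syscall"
          "unsafe"
      )

      var (
          amdhip64          = syscall.NewLazyDLL("amdhip64.dll")
          hipGetDeviceCount = amdhip64.NewProc("hipGetDeviceCount")
      )

      // deviceCount asks the HIP runtime how many AMD GPUs are present; the
      // DLL is resolved lazily, so machines without it fail here rather than
      // at startup.
      func deviceCount() (int, error) {
          var n int32
          status, _, _ := hipGetDeviceCount.Call(uintptr(unsafe.Pointer(&n)))
          if status != 0 {
              return 0, fmt.Errorf("hipGetDeviceCount returned %d", status)
          }
          return int(n), nil
      }
      ```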