1. 10 Dec, 2024 1 commit
      build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo-built runner.
      On Mac arm64 this will have GPU support, but on all other platforms it will
      be the lowest-common-denominator CPU build.  After we fully transition
      to the new Go runners, more tech debt can be removed and we can stop building
      the "default" runner via make and rely on the builtin always.
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported systems
      and crash.  If the user builds a customized set, we omit the naming
      scheme and don't check for compatibility.  This avoids checking
      requirements at runtime, so that logic has been removed as well.  This
      can be used to build GPU runners with no vector flags, or CPU/GPU
      runners with additional flags (e.g. AVX512) enabled.
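
      The naming check described above can be sketched as follows; the flag list
      and function names here are illustrative, not the actual implementation:

```go
package main

import "strings"

// knownVectorFlags are the vector levels a default build might encode in
// the runner name (a hypothetical list for illustration).
var knownVectorFlags = []string{"avx512", "avx2", "avx"}

// requiredFlag returns the vector flag encoded in a runner name such as
// "cpu_avx2", or "" when the name carries no known flag, meaning the
// build was customized and compatibility checking is skipped.
func requiredFlag(runner string) string {
	for _, f := range knownVectorFlags {
		if strings.HasSuffix(runner, "_"+f) {
			return f
		}
	}
	return ""
}
```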
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make gets
      confused, so we use relative paths for everything in-repo to avoid breakage.
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar bundle)
      and ensures we clean up prior libraries before extracting the bundle(s).
      Without this change, runners and dependent libraries could leak when we
      update and lead to subtle runtime errors.
  2. 12 Nov, 2024 1 commit
  3. 26 Oct, 2024 1 commit
      Better support for AMD multi-GPU on linux (#7212) · d7c94e0c
      Daniel Hiltgen authored
      * Better support for AMD multi-GPU
      
      This resolves a number of problems related to AMD multi-GPU setups on Linux.
      
      The numeric IDs used by ROCm are not the same as the numeric IDs exposed in
      sysfs, although the ordering is consistent.  We have to count up from zero,
      starting at the first valid gfx entry (major/minor/patch with non-zero values) we find.
      
      There are three different env vars for selecting GPUs, and only ROCR_VISIBLE_DEVICES
      supports UUID-based identification, so we favor that one and use UUIDs when
      detected to avoid potential ordering bugs with numeric IDs.
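
      A minimal sketch of the renumbering described above, assuming a simplified
      gfx-version representation (type and function names are illustrative):

```go
package main

// gfxVersion is the major/minor/patch reported for a sysfs GPU node.
type gfxVersion struct{ major, minor, patch int }

// rocmIDs assigns ROCm-style numeric IDs by counting only valid nodes
// (non-zero gfx version) in sysfs order; invalid nodes (e.g. the CPU
// entry) get -1. A simplified sketch of the renumbering, not the exact
// implementation.
func rocmIDs(nodes []gfxVersion) []int {
	ids := make([]int, len(nodes))
	next := 0
	for i, n := range nodes {
		if n.major == 0 && n.minor == 0 && n.patch == 0 {
			ids[i] = -1 // not a usable GPU
			continue
		}
		ids[i] = next
		next++
	}
	return ids
}
```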
      
      * ROCR_VISIBLE_DEVICES only works on Linux
      
      Use the numeric-ID-only HIP_VISIBLE_DEVICES on Windows.
  4. 17 Oct, 2024 1 commit
  5. 14 Oct, 2024 1 commit
  6. 11 Sep, 2024 1 commit
      Verify permissions for AMD GPU (#6736) · 9246e6dd
      Daniel Hiltgen authored
      This adds back a check, lost many releases back, that verifies /dev/kfd permissions;
      when those are lacking, it can lead to confusing failure modes such as:
        "rocBLAS error: Could not initialize Tensile host: No devices found"
      
      This implementation does not hard-fail the serve command but instead falls back to CPU
      with an error log.  In the future we can include this in the GPU discovery UX to show
      detected-but-unsupported devices.
  7. 02 Aug, 2024 1 commit
  8. 24 Jul, 2024 1 commit
  9. 22 Jul, 2024 1 commit
  10. 14 Jun, 2024 6 commits
  11. 09 May, 2024 1 commit
      Record more GPU information · 8727a9c1
      Daniel Hiltgen authored
      This cleans up the logging for GPU discovery a bit, and can
      serve as a foundation to report GPU information in a future UX.
  12. 01 May, 2024 1 commit
  13. 24 Apr, 2024 1 commit
  14. 23 Apr, 2024 1 commit
      Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The default
      settings are currently set at 1 concurrent request per model and only 1
      loaded model at a time, but these can be adjusted by setting
      OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
  15. 01 Apr, 2024 1 commit
      Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory-leak and stability defects by allowing
      us to isolate llama.cpp in a separate process, shut it down when idle, and
      gracefully restart it if it has problems.  This also serves as a first step
      toward running multiple copies to support multiple models concurrently.
  16. 28 Mar, 2024 1 commit
  17. 12 Mar, 2024 1 commit
      Fix iGPU detection for linux · 82b0c7c2
      Daniel Hiltgen authored
      This fixes a few bugs in the new sysfs discovery logic.  iGPUs are now
      correctly identified by the <1 GiB VRAM they report.  The sysfs IDs are off
      by one compared to what HIP wants because the CPU is reported
      in amdgpu, but HIP only cares about GPUs.
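
      The two rules above can be sketched as follows; the 1 GiB threshold and the
      helper names are assumptions for illustration:

```go
package main

// igpuThreshold: iGPUs report under 1 GiB of dedicated VRAM in sysfs,
// so treat anything below that as integrated (assumed constant).
const igpuThreshold = 1 << 30

// isIntegrated classifies a GPU by its reported VRAM in bytes.
func isIntegrated(vramBytes uint64) bool {
	return vramBytes < igpuThreshold
}

// hipID converts a sysfs index to the ID HIP expects: sysfs also lists
// the CPU, so GPU indices are off by one.
func hipID(sysfsIndex int) int {
	return sysfsIndex - 1
}
```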
  18. 11 Mar, 2024 1 commit
  19. 10 Mar, 2024 1 commit
      Add ollama executable peer dir for rocm · 00ec2693
      Daniel Hiltgen authored
      This allows people who package up ollama on their own to place
      the ROCm dependencies in a peer directory to the ollama executable,
      much like our Windows install flow.
  20. 09 Mar, 2024 1 commit
      Finish unwinding idempotent payload logic · 4a5c9b80
      Daniel Hiltgen authored
      The recent ROCm change partially removed idempotent
      payloads, but the ggml-metal.metal file for Mac was still
      idempotent.  This finishes switching to always extracting
      the payloads, and now that idempotency is gone, the
      version directory is no longer useful.
  21. 07 Mar, 2024 1 commit
      Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries to by adding a new
      OLLAMA_HOME env var that defaults to `~/.ollama`.  The logic was already
      idempotent, so this should speed up startups after the first time a
      new release is deployed.  It also cleans up after itself.
      
      We now build only a single ROCm version (latest major) on both Windows
      and Linux.  Given the large size of ROCm's tensor files, we split the
      dependency out.  It's bundled into the installer on Windows, and a
      separate download on Linux.  The Linux install script is now smart: it
      detects the presence of AMD GPUs, looks to see if ROCm v6 is already
      present, and if not, downloads our dependency tar file.
      
      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports so we can degrade to CPU gracefully instead of having
      llama.cpp+rocm assert/crash on us.  For Windows, we now use Go's Windows
      dynamic-library loading logic to access the amdhip64.dll APIs to query
      GPU information.