1. 22 Jul, 2024 1 commit
  2. 20 Jul, 2024 1 commit
    • Adjust windows ROCm discovery · 283948c8
      Daniel Hiltgen authored
      The v5 HIP library returns unsupported GPUs which won't enumerate at
      inference time in the runner, so this makes sure we align discovery.
      The gfx906 cards are no longer supported, so we shouldn't compile for
      that GPU type as it won't enumerate at runtime.
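      A minimal sketch of the discovery-side filtering this describes, assuming a hypothetical supportedGfx table; the real lookup would be derived from the GPU targets the bundled ROCm libraries are compiled for:

```go
package main

import "fmt"

// Example targets only, not the authoritative list; gfx906 is
// deliberately absent since it no longer enumerates at runtime.
var supportedGfx = map[string]bool{
	"gfx1030": true,
	"gfx1100": true,
}

// filterSupported drops GPUs the runner cannot use at inference time,
// so discovery and runtime enumeration stay aligned.
func filterSupported(gpus []string) []string {
	var usable []string
	for _, gfx := range gpus {
		if supportedGfx[gfx] {
			usable = append(usable, gfx)
		}
	}
	return usable
}

func main() {
	fmt.Println(filterSupported([]string{"gfx906", "gfx1100"})) // [gfx1100]
}
```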
  3. 10 Jul, 2024 1 commit
    • Bump ROCm on windows to 6.1.2 · 1f50356e
      Daniel Hiltgen authored
      This also adjusts our algorithm to favor our bundled ROCm.
      I've confirmed VRAM reporting still doesn't work properly, so we
      can't yet enable concurrency by default.
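      The "favor our bundled ROCm" ordering might look like the sketch below; the directory names and environment variable fallback are assumptions, not ollama's actual layout:

```go
package discover

import (
	"os"
	"path/filepath"
)

// rocmCandidates returns ROCm library locations in preference order: the
// copy shipped next to the executable (the known-good 6.1.2 bundle) is
// tried before any system-wide install. Paths are illustrative.
func rocmCandidates(exeDir string) []string {
	candidates := []string{
		filepath.Join(exeDir, "rocm"), // bundled ROCm, preferred
	}
	if hip := os.Getenv("HIP_PATH"); hip != "" {
		candidates = append(candidates, filepath.Join(hip, "bin"))
	}
	return append(candidates, `C:\Program Files\AMD\ROCm\6.1\bin`)
}
```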
  4. 21 Jun, 2024 1 commit
    • Disable concurrency for AMD + Windows · 9929751c
      Daniel Hiltgen authored
      Until ROCm v6.2 ships, we won't be able to get accurate free-memory
      reporting on Windows, which makes automatic concurrency too risky.
      Users can still opt in, but they will need to pay attention to model
      sizes; otherwise they may thrash/page VRAM or cause OOM crashes.
      All other platforms and GPUs have accurate VRAM reporting wired
      up now, so we can turn on concurrency by default.
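      A sketch of the default-selection policy described above: explicit opt-in always wins, AMD on Windows stays serialized until free-VRAM reporting is trustworthy. The fallback of 4 is an assumed value for illustration, not the shipped default:

```go
package sched

import (
	"os"
	"runtime"
	"strconv"
)

// defaultParallel picks the concurrency level per the policy above.
func defaultParallel(gpuLibrary string) int {
	// Explicit opt-in via OLLAMA_NUM_PARALLEL always wins.
	if v, err := strconv.Atoi(os.Getenv("OLLAMA_NUM_PARALLEL")); err == nil && v > 0 {
		return v
	}
	if runtime.GOOS == "windows" && gpuLibrary == "rocm" {
		return 1 // inaccurate free-memory data: avoid VRAM thrash and OOM
	}
	return 4 // assumed default elsewhere, for illustration
}
```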
  5. 18 Jun, 2024 1 commit
  6. 14 Jun, 2024 3 commits
  7. 04 Jun, 2024 1 commit
  8. 09 May, 2024 1 commit
    • Record more GPU information · 8727a9c1
      Daniel Hiltgen authored
      This cleans up the logging for GPU discovery a bit, and can
      serve as a foundation to report GPU information in a future UX.
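      The kind of per-GPU record such logging might capture, as a sketch; the field names are assumptions, not ollama's actual types:

```go
package discover

// GpuInfo sketches the per-device data that discovery could record and log.
type GpuInfo struct {
	Library       string // backing library, e.g. "cuda", "rocm", "metal"
	ID            string // device index or UUID
	Name          string // marketing name reported by the driver
	Compute       string // compute capability or gfx version
	TotalMemory   uint64 // bytes
	FreeMemory    uint64 // bytes
	DriverVersion string
}
```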
  9. 01 May, 2024 1 commit
  10. 24 Apr, 2024 1 commit
  11. 23 Apr, 2024 1 commit
    • Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The default
      settings are currently set at 1 concurrent request per model and only 1
      loaded model at a time, but these can be adjusted by setting
      OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
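      A sketch of how the scheduler might read these two knobs; the parsing is illustrative, but OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS are the variable names from the commit, and 1/1 are the defaults it describes:

```go
package sched

import (
	"os"
	"strconv"
)

// limits returns the scheduler's concurrency knobs, defaulting to one
// concurrent request per model and one loaded model at a time.
func limits() (numParallel, maxLoadedModels int) {
	numParallel, maxLoadedModels = 1, 1
	if v, err := strconv.Atoi(os.Getenv("OLLAMA_NUM_PARALLEL")); err == nil && v > 0 {
		numParallel = v
	}
	if v, err := strconv.Atoi(os.Getenv("OLLAMA_MAX_LOADED_MODELS")); err == nil && v > 0 {
		maxLoadedModels = v
	}
	return numParallel, maxLoadedModels
}
```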
  12. 09 Mar, 2024 1 commit
    • Finish unwinding idempotent payload logic · 4a5c9b80
      Daniel Hiltgen authored
      The recent ROCm change partially removed the idempotent payload
      logic, but the ggml-metal.metal file for mac was still handled
      idempotently. This finishes the switch to always extracting the
      payloads, and now that idempotency is gone, the version directory
      is no longer useful.
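      A sketch of the "always extract" behavior, assuming the payloads are embedded under a hypothetical libs/ directory; with no idempotency check there is nothing version-specific to cache, so the version directory disappears:

```go
package payloads

import (
	"embed"
	"io/fs"
	"os"
	"path/filepath"
)

//go:embed libs
var embedded embed.FS

// extractAll unpacks every embedded payload into a fresh temp dir on
// each startup instead of reusing a per-version cache directory.
func extractAll() (string, error) {
	dir, err := os.MkdirTemp("", "ollama-libs-")
	if err != nil {
		return "", err
	}
	err = fs.WalkDir(embedded, ".", func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		data, err := embedded.ReadFile(path)
		if err != nil {
			return err
		}
		return os.WriteFile(filepath.Join(dir, filepath.Base(path)), data, 0o755)
	})
	return dir, err
}
```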
  13. 07 Mar, 2024 1 commit
    • Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries to by adding a new
      OLLAMA_HOME env var that defaults to `~/.ollama`. The logic was
      already idempotent, so this should speed up startups after the first
      time a new release is deployed. It also cleans up after itself.
      
      We now build only a single ROCm version (latest major) on both windows
      and linux. Given the large size of ROCm's tensor files, we split the
      dependency out. It's bundled into the installer on windows, and a
      separate download on linux. The linux install script now detects the
      presence of AMD GPUs, checks whether rocm v6 is already present, and
      if not, downloads our dependency tar file.
      
      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports so we can degrade to CPU gracefully instead of having
      llama.cpp+rocm assert/crash on us.  For Windows, we now use go's windows
      dynamic library loading logic to access the amdhip64.dll APIs to query
      the GPU information.
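      A sketch of the Windows side: loading amdhip64.dll lazily so machines without AMD GPUs degrade gracefully. hipGetDeviceCount is a real HIP entry point; the surrounding error handling is illustrative:

```go
//go:build windows

package discover

import (
	"fmt"
	"unsafe"

	"golang.org/x/sys/windows"
)

// Lazy-load the HIP runtime; nothing is resolved until first use.
var (
	amdhip64          = windows.NewLazyDLL("amdhip64.dll")
	hipGetDeviceCount = amdhip64.NewProc("hipGetDeviceCount")
)

// hipDeviceCount queries the number of HIP-visible GPUs via the DLL.
func hipDeviceCount() (int, error) {
	if err := amdhip64.Load(); err != nil {
		return 0, fmt.Errorf("ROCm runtime not found: %w", err)
	}
	var count int32
	// hipGetDeviceCount(int *count) returns hipSuccess (0) on success.
	if ret, _, _ := hipGetDeviceCount.Call(uintptr(unsafe.Pointer(&count))); ret != 0 {
		return 0, fmt.Errorf("hipGetDeviceCount returned %d", ret)
	}
	return int(count), nil
}
```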