1. 23 Apr, 2024 1 commit
    • Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The defaults are
      currently 1 concurrent request per model and only 1 loaded model at a
      time, but these can be adjusted by setting OLLAMA_NUM_PARALLEL and
      OLLAMA_MAX_LOADED_MODELS.
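      A minimal sketch of how these knobs could be read, assuming simple
      positive-integer parsing (the helper below is illustrative, not the
      actual implementation):

      ```go
      package main

      import (
          "fmt"
          "os"
          "strconv"
      )

      // envInt reads a positive integer from the environment, falling
      // back to def when the variable is unset or unparsable.
      func envInt(name string, def int) int {
          if v := os.Getenv(name); v != "" {
              if n, err := strconv.Atoi(v); err == nil && n > 0 {
                  return n
              }
          }
          return def
      }

      func main() {
          // Defaults from the commit message: 1 concurrent request per
          // model, and only 1 loaded model at a time.
          numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)
          maxLoaded := envInt("OLLAMA_MAX_LOADED_MODELS", 1)
          fmt.Println("parallel:", numParallel, "max loaded:", maxLoaded)
      }
      ```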
  2. 16 Apr, 2024 2 commits
  3. 10 Apr, 2024 1 commit
  4. 01 Apr, 2024 6 commits
  5. 28 Mar, 2024 1 commit
  6. 25 Mar, 2024 1 commit
  7. 20 Mar, 2024 1 commit
    • Better tmpdir cleanup · 74788b48
      Daniel Hiltgen authored
      If expanding the runners fails, don't leave a corrupt/incomplete payloads dir behind.
      We now write a pid file out to the tmpdir, which allows us to scan for stale tmpdirs
      and remove them as long as the owning process is no longer running.
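      A Unix-flavored sketch of the pid-file scheme described above; the
      file name, dir prefix, and liveness probe are assumptions for
      illustration:

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
          "strconv"
          "strings"
          "syscall"
      )

      // writePidFile records our pid inside a payload dir so later runs
      // can tell whether the dir's owner is still alive.
      func writePidFile(dir string) error {
          return os.WriteFile(filepath.Join(dir, "ollama.pid"),
              []byte(strconv.Itoa(os.Getpid())), 0o644)
      }

      // cleanupStale scans the system tmp dir for old payload dirs and
      // removes any whose recorded process is gone. The signal-0 probe
      // is Unix-specific.
      func cleanupStale(prefix string) {
          entries, err := os.ReadDir(os.TempDir())
          if err != nil {
              return
          }
          for _, e := range entries {
              if !e.IsDir() || !strings.HasPrefix(e.Name(), prefix) {
                  continue
              }
              dir := filepath.Join(os.TempDir(), e.Name())
              raw, err := os.ReadFile(filepath.Join(dir, "ollama.pid"))
              if err != nil {
                  continue // no pid file: don't guess, leave it alone
              }
              pid, err := strconv.Atoi(strings.TrimSpace(string(raw)))
              if err != nil {
                  continue
              }
              // Signal 0 checks for existence without delivering anything.
              if err := syscall.Kill(pid, 0); err != nil {
                  fmt.Println("removing stale payload dir:", dir)
                  os.RemoveAll(dir)
              }
          }
      }

      func main() {
          cleanupStale("ollama")
          if dir, err := os.MkdirTemp("", "ollama"); err == nil {
              writePidFile(dir)
          }
      }
      ```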
  8. 12 Mar, 2024 2 commits
  9. 11 Mar, 2024 1 commit
  10. 10 Mar, 2024 1 commit
    • Add ollama executable peer dir for rocm · 00ec2693
      Daniel Hiltgen authored
      This allows people who package up ollama on their own to place
      the rocm dependencies in a peer directory to the ollama executable,
      much like our windows install flow.
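      A sketch of resolving such a peer directory relative to the running
      binary; the directory name "rocm" is an assumption here:

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
      )

      // rocmPeerDir looks for a "rocm" directory sitting next to the
      // executable, the layout a packager would create.
      func rocmPeerDir() (string, bool) {
          exe, err := os.Executable()
          if err != nil {
              return "", false
          }
          if resolved, err := filepath.EvalSymlinks(exe); err == nil {
              exe = resolved
          }
          dir := filepath.Join(filepath.Dir(exe), "rocm")
          if info, err := os.Stat(dir); err == nil && info.IsDir() {
              return dir, true
          }
          return "", false
      }

      func main() {
          if dir, ok := rocmPeerDir(); ok {
              fmt.Println("found rocm peer dir:", dir)
          }
      }
      ```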
  11. 09 Mar, 2024 2 commits
    • tidy cleanup logs · 0bd0f4a2
      Jeffrey Morgan authored
    • Finish unwinding idempotent payload logic · 4a5c9b80
      Daniel Hiltgen authored
      The recent ROCm change partially removed idempotent
      payloads, but the ggml-metal.metal file for mac was still
      handled idempotently.  This finishes switching to always extract
      the payloads, and now that idempotency is gone, the
      version directory is no longer useful.
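      A sketch of the always-extract approach, assuming payloads are
      embedded with go:embed (the directory name is hypothetical, and the
      directive needs a payloads/ directory present at build time):

      ```go
      package main

      import (
          "embed"
          "io/fs"
          "os"
          "path/filepath"
      )

      //go:embed payloads
      var payloads embed.FS

      // extractPayloads unconditionally writes every embedded payload
      // into dest on startup: no version directory, no reuse checks.
      func extractPayloads(dest string) error {
          return fs.WalkDir(payloads, "payloads", func(p string, d fs.DirEntry, err error) error {
              if err != nil {
                  return err
              }
              out := filepath.Join(dest, p)
              if d.IsDir() {
                  return os.MkdirAll(out, 0o755)
              }
              data, err := payloads.ReadFile(p)
              if err != nil {
                  return err
              }
              return os.WriteFile(out, data, 0o755)
          })
      }

      func main() {
          dest, err := os.MkdirTemp("", "ollama")
          if err != nil {
              panic(err)
          }
          if err := extractPayloads(dest); err != nil {
              panic(err)
          }
      }
      ```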
  12. 07 Mar, 2024 2 commits
    • Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries to by adding a new
      OLLAMA_HOME env var that defaults to `~/.ollama`.  The logic was already
      idempotent, so this should speed up startups after the first time a
      new release is deployed.  It also cleans up after itself.

      We now build only a single ROCm version (latest major) on both windows
      and linux.  Given the large size of ROCm's tensor files, we split the
      dependency out.  It's bundled into the installer on windows, and a
      separate download on linux.  The linux install script is now smart and
      detects the presence of AMD GPUs and looks to see if rocm v6 is already
      present, and if not, then downloads our dependency tar file.

      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports so we can degrade to CPU gracefully instead of having
      llama.cpp+rocm assert/crash on us.  For Windows, we now use Go's windows
      dynamic library loading logic to access the amdhip64.dll APIs to query
      the GPU information.
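      A sketch of the Linux discovery half, matching DRM devices against
      AMD's PCI vendor ID 0x1002 via sysfs; the supported-target check is
      left as a comment rather than implemented:

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
          "strings"
      )

      // amdGPUsViaSysfs lists DRM devices whose PCI vendor is AMD
      // (0x1002). A real implementation would then compare each device
      // against the gfx targets the bundled ROCm build supports and
      // degrade to CPU when nothing matches.
      func amdGPUsViaSysfs() []string {
          var gpus []string
          vendorFiles, _ := filepath.Glob("/sys/class/drm/card[0-9]*/device/vendor")
          for _, vf := range vendorFiles {
              raw, err := os.ReadFile(vf)
              if err != nil {
                  continue
              }
              if strings.TrimSpace(string(raw)) == "0x1002" {
                  gpus = append(gpus, filepath.Dir(vf))
              }
          }
          return gpus
      }

      func main() {
          fmt.Println("AMD GPUs:", amdGPUsViaSysfs())
      }
      ```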
    • Allow setting max vram for workarounds · be330174
      Daniel Hiltgen authored
      Until we get all the memory calculations correct, this can provide
      an escape valve for users to work around out-of-memory crashes.
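      The message doesn't spell out the knob here; assuming an env var
      named OLLAMA_MAX_VRAM holding a byte count, reading the override
      could look like this sketch:

      ```go
      package main

      import (
          "fmt"
          "os"
          "strconv"
      )

      // maxVRAM returns a user-supplied VRAM cap in bytes, or 0 when
      // unset. The variable name and unit are assumptions.
      func maxVRAM() uint64 {
          v := os.Getenv("OLLAMA_MAX_VRAM")
          if v == "" {
              return 0
          }
          n, err := strconv.ParseUint(v, 10, 64)
          if err != nil {
              return 0
          }
          return n
      }

      func main() {
          if limit := maxVRAM(); limit > 0 {
              fmt.Printf("limiting VRAM use to %d bytes\n", limit)
          }
      }
      ```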
  13. 29 Feb, 2024 1 commit
  14. 25 Feb, 2024 1 commit
  15. 17 Feb, 2024 1 commit
  16. 12 Feb, 2024 1 commit
  17. 28 Jan, 2024 2 commits
  18. 27 Jan, 2024 1 commit
  19. 26 Jan, 2024 3 commits
    • Detect lack of AVX and fallback to CPU mode · 667a2ba1
      Daniel Hiltgen authored
      We build the GPU libraries with AVX enabled to ensure that if not all
      layers fit on the GPU we get better performance in mixed mode.
      If the user is on a virtualization/emulation system that lacks AVX,
      this used to result in an illegal instruction error and crash before
      this fix.  Now we report a warning in the server log and just use
      CPU mode to ensure we don't crash.
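      A sketch of the fallback decision, using golang.org/x/sys/cpu for
      the AVX probe; the variant names are illustrative:

      ```go
      package main

      import (
          "fmt"

          "golang.org/x/sys/cpu"
      )

      // pickVariant chooses which runner library to load. The GPU
      // builds assume AVX, so a host without it must take the plain CPU
      // path instead of dying on an illegal instruction.
      func pickVariant() string {
          if cpu.X86.HasAVX {
              return "gpu"
          }
          fmt.Println("warning: CPU lacks AVX, falling back to CPU-only mode")
          return "cpu"
      }

      func main() {
          fmt.Println("selected runner variant:", pickVariant())
      }
      ```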
    • Ignore AMD integrated GPUs · 9d7b5d6c
      Daniel Hiltgen authored
      Detect and ignore integrated GPUs reported by rocm.
    • Fix crash on cuda ml init failure · 5d9c4a5f
      Daniel Hiltgen authored
      The new driver lookup code was still triggering after an init failure due to a missing return.
  20. 24 Jan, 2024 1 commit
    • More logging for gpu management · 013fd071
      Daniel Hiltgen authored
      Fix an ordering glitch of dlerror/dlclose and add more logging to help
      root cause some crashes users are hitting.  This also refines the
      function pointer names to use the underlying function names instead
      of simplified names for readability.
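      A cgo sketch of the corrected ordering: capture dlerror() before
      calling dlclose(), since a successful dlclose resets the error
      state and would hide the real cause (library and symbol names are
      just for demonstration):

      ```go
      package main

      /*
      #cgo LDFLAGS: -ldl
      #include <dlfcn.h>
      #include <stdlib.h>
      */
      import "C"

      import (
          "fmt"
          "unsafe"
      )

      func loadSymbol(lib, sym string) (unsafe.Pointer, error) {
          clib := C.CString(lib)
          defer C.free(unsafe.Pointer(clib))
          h := C.dlopen(clib, C.RTLD_LAZY)
          if h == nil {
              return nil, fmt.Errorf("dlopen %s: %s", lib, C.GoString(C.dlerror()))
          }
          csym := C.CString(sym)
          defer C.free(unsafe.Pointer(csym))
          p := C.dlsym(h, csym)
          if p == nil {
              // Read dlerror() BEFORE dlclose(): dlerror reports the most
              // recent dl* call, so a successful dlclose would clear it.
              msg := C.GoString(C.dlerror())
              C.dlclose(h)
              return nil, fmt.Errorf("dlsym %s: %s", sym, msg)
          }
          return p, nil
      }

      func main() {
          if _, err := loadSymbol("libm.so.6", "cos"); err != nil {
              fmt.Println(err)
          }
      }
      ```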
  21. 23 Jan, 2024 1 commit
    • Report more information about GPUs in verbose mode · 987c16b2
      Daniel Hiltgen authored
      This adds additional calls to both CUDA and ROCm management libraries to
      discover additional attributes about the GPU(s) detected in the system, and
      wires up runtime verbosity selection.  When users hit problems with GPUs we can
      ask them to run with `OLLAMA_DEBUG=1 ollama serve` and share the results.
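      A sketch of wiring that flag to verbosity, assuming the standard
      log/slog package (the handler setup is illustrative):

      ```go
      package main

      import (
          "log/slog"
          "os"
          "strconv"
      )

      // logLevel maps OLLAMA_DEBUG=1 to debug-level logging; anything
      // else keeps the default info level.
      func logLevel() slog.Level {
          if on, err := strconv.ParseBool(os.Getenv("OLLAMA_DEBUG")); err == nil && on {
              return slog.LevelDebug
          }
          return slog.LevelInfo
      }

      func main() {
          h := slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: logLevel()})
          slog.SetDefault(slog.New(h))
          slog.Debug("extra GPU attributes would be logged at this level")
      }
      ```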
  22. 20 Jan, 2024 3 commits
  23. 19 Jan, 2024 2 commits
  24. 18 Jan, 2024 1 commit
  25. 14 Jan, 2024 1 commit