  1. 22 Jul, 2024 3 commits
  2. 11 Jul, 2024 1 commit
  3. 09 Jul, 2024 1 commit
      Detect CUDA OS Overhead · f6f759fc
      Daniel Hiltgen authored
      This adds logic to detect skew between the driver and the
      management library that can be attributed to OS overhead, and
      records it so we can adjust subsequent management-library free
      VRAM updates and avoid OOM scenarios.
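      A minimal sketch of the idea, with hypothetical names and values (the real detection lives in the GPU discovery code): the driver's free-VRAM figure is compared against the management library's once, and the difference is kept as a per-device overhead that is subtracted from later management-library readings.

      ```go
      package main

      import "fmt"

      // gpuMemory is a hypothetical snapshot of one device's memory readings.
      type gpuMemory struct {
          driverFree uint64 // free VRAM reported by the CUDA driver
          mgmtFree   uint64 // free VRAM reported by the management library (NVML)
          osOverhead uint64 // skew attributed to OS overhead, measured once
      }

      // detectOverhead records how much lower the driver's free figure is than
      // the management library's, so later NVML-only refreshes can be corrected.
      func (g *gpuMemory) detectOverhead() {
          if g.mgmtFree > g.driverFree {
              g.osOverhead = g.mgmtFree - g.driverFree
          }
      }

      // adjustedFree applies the recorded overhead to a fresh management-library reading.
      func (g *gpuMemory) adjustedFree(mgmtFree uint64) uint64 {
          if mgmtFree < g.osOverhead {
              return 0
          }
          return mgmtFree - g.osOverhead
      }

      func main() {
          g := gpuMemory{driverFree: 7_500_000_000, mgmtFree: 8_000_000_000}
          g.detectOverhead()
          fmt.Println("overhead:", g.osOverhead, "adjusted free:", g.adjustedFree(7_900_000_000))
      }
      ```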
  4. 03 Jul, 2024 1 commit
      Better nvidia GPU discovery logging · ef757da2
      Daniel Hiltgen authored
      Refine the way we log GPU discovery to improve the non-debug
      output, and report more actionable log messages when possible
      to help users troubleshoot on their own.
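      Roughly the shape of the change, with illustrative messages and fields only: keep the verbose probing detail at debug level, and make the non-debug output one concise line per device plus warnings users can act on.

      ```go
      package main

      import "log/slog"

      func main() {
          // Illustrative only: detail useful while debugging stays at Debug,
          // while the default output gets one concise line per device and
          // actionable warnings when discovery cannot use a GPU.
          slog.Debug("searching for CUDA management library", "paths", []string{"/usr/lib/x86_64-linux-gnu", "/opt/cuda/lib64"})
          slog.Info("inference compute", "id", "GPU-0", "library", "cuda", "name", "NVIDIA GeForce RTX 4090", "total", "24.0 GiB", "available", "22.3 GiB")
          slog.Warn("CUDA driver is too old, falling back to CPU", "detected", "11.2", "required", ">=11.3")
      }
      ```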
  5. 19 Jun, 2024 2 commits
  6. 17 Jun, 2024 2 commits
  7. 14 Jun, 2024 7 commits
  8. 13 Jun, 2024 1 commit
  9. 04 Jun, 2024 1 commit
  10. 02 Jun, 2024 1 commit
  11. 24 May, 2024 2 commits
  12. 10 May, 2024 1 commit
      Bump VRAM buffer back up · 30a7d709
      Daniel Hiltgen authored
      Under stress scenarios we're seeing OOMs, so this should help stabilize
      allocations under heavy concurrency stress.
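      The mechanism behind the bump, sketched with made-up numbers: a fixed reserve is held back from each GPU's reported free VRAM before allocations are planned, and this change raises that reserve.

      ```go
      package main

      import "fmt"

      // vramReserve is a hypothetical headroom value held back from reported
      // free VRAM; the commit raises it to leave more slack under concurrent load.
      const vramReserve uint64 = 512 * 1024 * 1024 // bytes, illustrative only

      func usableVRAM(freeReported uint64) uint64 {
          if freeReported < vramReserve {
              return 0
          }
          return freeReported - vramReserve
      }

      func main() {
          fmt.Println(usableVRAM(8 << 30)) // plan against less than the raw reading
      }
      ```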
  13. 09 May, 2024 1 commit
      Record more GPU information · 8727a9c1
      Daniel Hiltgen authored
      This cleans up the logging for GPU discovery a bit, and can
      serve as a foundation to report GPU information in a future UX.
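      What "more GPU information" looks like as a structure; the fields below are representative rather than exhaustive and the names are assumed:

      ```go
      package main

      import "fmt"

      // gpuInfo is a representative (not exact) record of what discovery
      // collects per device, so it can be logged now and surfaced in a future UX.
      type gpuInfo struct {
          ID            string
          Library       string // e.g. "cuda", "rocm", "metal"
          Name          string
          Compute       string // e.g. CUDA compute capability "8.9"
          DriverVersion string
          TotalMemory   uint64
          FreeMemory    uint64
      }

      func main() {
          g := gpuInfo{ID: "GPU-0", Library: "cuda", Name: "NVIDIA GeForce RTX 4090",
              Compute: "8.9", DriverVersion: "550.54", TotalMemory: 24 << 30, FreeMemory: 22 << 30}
          fmt.Printf("%+v\n", g)
      }
      ```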
  14. 07 May, 2024 1 commit
  15. 06 May, 2024 1 commit
      Use our libraries first · 380378cc
      Daniel Hiltgen authored
      Trying to live off the land for CUDA libraries was not the right strategy. We need to use the version we compiled against to ensure things work properly.
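      One way to express "use our libraries first", with an illustrative directory layout: place the bundled CUDA runtime directory ahead of any system locations in the library search path handed to the runner.

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
      )

      // bundledLibraryPath builds an LD_LIBRARY_PATH that puts the CUDA
      // libraries we shipped (the version we compiled against) ahead of
      // whatever the system provides. The payload directory name is illustrative.
      func bundledLibraryPath(payloadDir string) string {
          bundled := filepath.Join(payloadDir, "cuda")
          if existing := os.Getenv("LD_LIBRARY_PATH"); existing != "" {
              return bundled + string(os.PathListSeparator) + existing
          }
          return bundled
      }

      func main() {
          fmt.Println("LD_LIBRARY_PATH=" + bundledLibraryPath("/tmp/ollama/runners"))
      }
      ```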
  16. 05 May, 2024 1 commit
      Centralize server config handling · f56aa200
      Daniel Hiltgen authored
      This moves all the env var reading into one central module
      and logs the loaded config once at startup, which should
      help in troubleshooting user server logs.
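      A minimal sketch of the pattern; the struct and defaults are hypothetical stand-ins for the real module, which covers far more variables: read every env var in one place, keep the parsed values together, and log them once at startup.

      ```go
      package main

      import (
          "log/slog"
          "os"
          "strconv"
      )

      // serverConfig is a hypothetical stand-in for the centralized settings.
      type serverConfig struct {
          Host        string
          NumParallel int
          Debug       bool
      }

      func loadConfig() serverConfig {
          cfg := serverConfig{Host: "127.0.0.1:11434", NumParallel: 1}
          if v := os.Getenv("OLLAMA_HOST"); v != "" {
              cfg.Host = v
          }
          if v := os.Getenv("OLLAMA_NUM_PARALLEL"); v != "" {
              if n, err := strconv.Atoi(v); err == nil {
                  cfg.NumParallel = n
              }
          }
          cfg.Debug = os.Getenv("OLLAMA_DEBUG") != ""
          return cfg
      }

      func main() {
          cfg := loadConfig()
          // Logged once at startup so user-supplied server logs show the effective config.
          slog.Info("server config", "host", cfg.Host, "num_parallel", cfg.NumParallel, "debug", cfg.Debug)
      }
      ```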
  17. 03 May, 2024 1 commit
  18. 01 May, 2024 1 commit
  19. 23 Apr, 2024 1 commit
      Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The default
      settings are currently set at 1 concurrent request per model and only 1
      loaded model at a time, but these can be adjusted by setting
      OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
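      In practice the limits are raised by exporting the variables before starting the server, e.g. `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=2 ollama serve`. How a per-model parallelism cap might be enforced is sketched below with a buffered channel used as a semaphore (hypothetical; the real scheduler is considerably more involved).

      ```go
      package main

      import (
          "fmt"
          "sync"
      )

      // modelRunner is a hypothetical per-model runner that admits at most
      // numParallel requests at a time, in the spirit of OLLAMA_NUM_PARALLEL.
      type modelRunner struct {
          slots chan struct{}
      }

      func newModelRunner(numParallel int) *modelRunner {
          return &modelRunner{slots: make(chan struct{}, numParallel)}
      }

      func (m *modelRunner) handle(id int) {
          m.slots <- struct{}{}        // block until a slot is free
          defer func() { <-m.slots }() // release the slot when done
          fmt.Println("serving request", id)
      }

      func main() {
          r := newModelRunner(2) // e.g. OLLAMA_NUM_PARALLEL=2
          var wg sync.WaitGroup
          for i := 0; i < 5; i++ {
              wg.Add(1)
              go func(i int) { defer wg.Done(); r.handle(i) }(i)
          }
          wg.Wait()
      }
      ```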
  20. 10 Apr, 2024 1 commit
  21. 01 Apr, 2024 3 commits
  22. 25 Mar, 2024 1 commit
  23. 07 Mar, 2024 2 commits
      Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries to by adding a new
      OLLAMA_HOME env var, which defaults to `~/.ollama`. The logic was already
      idempotent, so this should speed up startups after the first time a
      new release is deployed. It also cleans up after itself.
      
      We now build only a single ROCm version (latest major) on both Windows
      and Linux. Given the large size of ROCm's tensor files, we split the
      dependency out. It's bundled into the installer on Windows, and a
      separate download on Linux. The Linux install script is now smart: it
      detects the presence of AMD GPUs, looks to see if ROCm v6 is already
      present, and if not, downloads our dependency tar file.
      
      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports so we can degrade to CPU gracefully instead of having
      llama.cpp+rocm assert/crash on us. For Windows, we now use Go's Windows
      dynamic library loading logic to access the amdhip64.dll APIs to query
      the GPU information.
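      A rough sketch of the sysfs side of the Linux discovery; the check of each device against the gfx targets ROCm actually supports is omitted, and only the AMD PCI vendor ID (0x1002) filter is shown.

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
          "strings"
      )

      // amdGPUsFromSysfs lists DRM devices whose PCI vendor ID is AMD's (0x1002).
      // Verifying each one against the targets ROCm supports, so we can degrade
      // to CPU instead of crashing, is left out of this sketch.
      func amdGPUsFromSysfs() []string {
          var gpus []string
          vendorFiles, _ := filepath.Glob("/sys/class/drm/card[0-9]*/device/vendor")
          for _, vendorFile := range vendorFiles {
              data, err := os.ReadFile(vendorFile)
              if err != nil {
                  continue
              }
              if strings.TrimSpace(string(data)) == "0x1002" {
                  gpus = append(gpus, filepath.Dir(vendorFile))
              }
          }
          return gpus
      }

      func main() {
          fmt.Println(amdGPUsFromSysfs())
      }
      ```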
      Allow setting max vram for workarounds · be330174
      Daniel Hiltgen authored
      Until we get all the memory calculations correct, this can provide
      an escape valve for users to work around out-of-memory crashes.
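      The escape valve is just a user-supplied ceiling applied to whatever discovery reports; the variable name and byte-based parsing below are assumptions for illustration, not the exact implementation.

      ```go
      package main

      import (
          "fmt"
          "os"
          "strconv"
      )

      // capVRAM clamps the discovered free VRAM to a ceiling read from an
      // OLLAMA_MAX_VRAM-style variable (in bytes). Name and parsing are illustrative.
      func capVRAM(discovered uint64) uint64 {
          v := os.Getenv("OLLAMA_MAX_VRAM")
          if v == "" {
              return discovered
          }
          limit, err := strconv.ParseUint(v, 10, 64)
          if err != nil || limit == 0 || limit > discovered {
              return discovered
          }
          return limit
      }

      func main() {
          fmt.Println(capVRAM(24 << 30))
      }
      ```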
  24. 17 Feb, 2024 1 commit
  25. 12 Feb, 2024 1 commit
  26. 28 Jan, 2024 1 commit