1. 09 May, 2024 4 commits
  2. 08 May, 2024 3 commits
  3. 07 May, 2024 1 commit
  4. 06 May, 2024 2 commits
  5. 05 May, 2024 2 commits
  6. 01 May, 2024 3 commits
  7. 26 Apr, 2024 1 commit
  8. 24 Apr, 2024 1 commit
  9. 23 Apr, 2024 1 commit
    • Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The defaults are
      currently 1 concurrent request per model and only 1 loaded model at a
      time, but both can be adjusted by setting OLLAMA_NUM_PARALLEL and
      OLLAMA_MAX_LOADED_MODELS (see the sketch below).
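      A minimal sketch of how these settings could be read at startup,
      assuming plain env-var parsing. The variable names and defaults come
      from the commit message; the envInt helper is hypothetical, not
      ollama's actual code.

      ```go
      package main

      import (
          "fmt"
          "os"
          "strconv"
      )

      // envInt reads a positive integer from the environment, falling back
      // to def when the variable is unset or unparsable. (Hypothetical
      // helper, for illustration only.)
      func envInt(key string, def int) int {
          if v := os.Getenv(key); v != "" {
              if n, err := strconv.Atoi(v); err == nil && n > 0 {
                  return n
              }
          }
          return def
      }

      func main() {
          // Defaults per the commit: 1 concurrent request per model and
          // only 1 loaded model at a time.
          numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)
          maxLoaded := envInt("OLLAMA_MAX_LOADED_MODELS", 1)
          fmt.Printf("parallel=%d, max loaded models=%d\n", numParallel, maxLoaded)
      }
      ```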
  10. 15 Apr, 2024 1 commit
  11. 08 Apr, 2024 2 commits
  12. 02 Apr, 2024 1 commit
  13. 01 Apr, 2024 2 commits
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory-leak and stability defects by
      letting us isolate llama.cpp in a separate process, shut it down when
      idle, and restart it gracefully if it runs into problems. It also
      serves as a first step toward running multiple copies to support
      multiple models concurrently (see the sketch below).
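      A rough sketch of the isolate-and-restart idea described above. The
      runner binary name and flags are placeholders, not ollama's actual
      interface; the point is that a llama.cpp crash kills only the child
      process, which the parent can then respawn.

      ```go
      package main

      import (
          "log"
          "os/exec"
          "time"
      )

      // superviseRunner spawns a hypothetical llama.cpp runner binary as a
      // child process and respawns it whenever it exits with an error, so a
      // crash in llama.cpp cannot take the main server down with it.
      func superviseRunner(path string, args ...string) {
          for {
              cmd := exec.Command(path, args...)
              err := cmd.Run()
              if err == nil {
                  return // clean exit, e.g. shutdown after going idle
              }
              log.Printf("runner exited abnormally (%v); restarting", err)
              time.Sleep(time.Second) // crude backoff before respawning
          }
      }

      func main() {
          superviseRunner("./llama-runner", "--model", "model.gguf")
      }
      ```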
    • update memory calculations · 91b3e4d2
      Michael Yang authored
      Count each layer independently when deciding GPU offloading (a sketch
      of the accounting follows).
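      A sketch of the per-layer accounting this describes: each layer's size
      is checked independently against remaining VRAM, instead of assuming
      layers are uniformly sized. The sizes here are made up for
      illustration; this is not ollama's actual code.

      ```go
      package main

      import "fmt"

      // layersThatFit counts how many leading layers can be offloaded to
      // the GPU by charging each layer's own size against free VRAM.
      func layersThatFit(layerSizes []uint64, freeVRAM uint64) int {
          var used uint64
          for i, sz := range layerSizes {
              if used+sz > freeVRAM {
                  return i
              }
              used += sz
          }
          return len(layerSizes)
      }

      func main() {
          // Hypothetical per-layer sizes in bytes; real layers vary (e.g.
          // embedding and output layers are often larger than the rest).
          layers := []uint64{600 << 20, 400 << 20, 400 << 20, 450 << 20}
          fmt.Println(layersThatFit(layers, 1<<30)) // prints 2: two layers fit in 1 GiB
      }
      ```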
  14. 27 Mar, 2024 1 commit
  15. 26 Mar, 2024 1 commit
  16. 15 Mar, 2024 1 commit
  17. 13 Mar, 2024 1 commit
  18. 09 Mar, 2024 5 commits
  19. 07 Mar, 2024 1 commit
    • Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries by adding a new
      OLLAMA_HOME env var, which defaults to `~/.ollama`. The logic was
      already idempotent, so this should speed up startups after the first
      time a new release is deployed. It also cleans up after itself.

      We now build only a single ROCm version (latest major) on both Windows
      and Linux. Given the large size of ROCm's tensor files, we split the
      dependency out: it is bundled into the installer on Windows and a
      separate download on Linux. The Linux install script now detects the
      presence of AMD GPUs, checks whether ROCm v6 is already present, and
      if not, downloads our dependency tar file.

      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports, so we can degrade to CPU gracefully instead of having
      llama.cpp+ROCm assert/crash on us (see the sysfs sketch below). For
      Windows, we now use Go's Windows dynamic-library loading logic to
      access the amdhip64.dll APIs and query GPU information.
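      As a rough illustration of the Linux side, this sketch lists DRM
      devices whose PCI vendor ID is AMD's (0x1002) by reading sysfs. The
      sysfs paths and vendor ID are standard Linux/PCI facts, but how each
      GPU is then matched against ROCm's supported list is omitted, and
      none of this is ollama's actual code.

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
          "strings"
      )

      // amdGPUs returns the sysfs device directories of AMD GPUs by
      // checking each DRM card's PCI vendor ID against AMD's (0x1002).
      func amdGPUs() ([]string, error) {
          vendors, err := filepath.Glob("/sys/class/drm/card*/device/vendor")
          if err != nil {
              return nil, err
          }
          var found []string
          for _, path := range vendors {
              data, err := os.ReadFile(path)
              if err != nil {
                  continue // device disappeared or unreadable; skip it
              }
              if strings.TrimSpace(string(data)) == "0x1002" {
                  found = append(found, filepath.Dir(path))
              }
          }
          return found, nil
      }

      func main() {
          gpus, err := amdGPUs()
          if err != nil {
              fmt.Fprintln(os.Stderr, err)
              return
          }
          fmt.Println("AMD GPUs:", gpus)
      }
      ```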
  20. 01 Mar, 2024 1 commit
  21. 29 Feb, 2024 1 commit
  22. 21 Feb, 2024 2 commits
  23. 16 Feb, 2024 1 commit
  24. 15 Feb, 2024 1 commit