1. 19 Aug, 2024 1 commit
  2. 02 Aug, 2024 1 commit
  3. 11 Jul, 2024 1 commit
  4. 06 Jul, 2024 1 commit
  5. 14 Jun, 2024 2 commits
  6. 10 May, 2024 1 commit
    • Bump VRAM buffer back up · 30a7d709
      Daniel Hiltgen authored
      Under stress scenarios we're seeing OOMs, so this should help stabilize
      allocations under heavy concurrency stress.
  7. 07 May, 2024 1 commit
  8. 01 May, 2024 1 commit
  9. 23 Apr, 2024 1 commit
    • Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The defaults are
      currently 1 concurrent request per model and only 1 loaded model at a
      time, but these can be adjusted by setting OLLAMA_NUM_PARALLEL and
      OLLAMA_MAX_LOADED_MODELS.
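      The two environment variables named in the commit are read by the server
      at startup, so they can be exported before launching it. A minimal
      sketch, assuming example values (per the commit, the defaults are 1
      concurrent request per model and 1 loaded model):

      ```shell
      # Example values, assumed for illustration; the commit's defaults are 1 and 1.
      export OLLAMA_NUM_PARALLEL=4        # concurrent requests per loaded model
      export OLLAMA_MAX_LOADED_MODELS=2   # models kept loaded at the same time
      echo "parallel=$OLLAMA_NUM_PARALLEL max_loaded=$OLLAMA_MAX_LOADED_MODELS"
      ```

      The server process (`ollama serve`) would then be started in the same
      environment so it picks these values up.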
  10. 16 Apr, 2024 2 commits
  11. 10 Apr, 2024 1 commit
  12. 07 Mar, 2024 1 commit
  13. 25 Feb, 2024 1 commit
  14. 11 Jan, 2024 3 commits
    • Fix up the CPU fallback selection · 7427fa13
      Daniel Hiltgen authored
      The memory changes and the multi-variant change had some merge glitches
      I missed. This fixes them so we actually get the CPU LLM library and
      the best variant for the given system.
    • Always dynamically load the llm server library · 39928a42
      Daniel Hiltgen authored
      This switches darwin to dynamic loading and refactors the code, now that
      static linking of the library is no longer used on any platform.
    • Build multiple CPU variants and pick the best · d88c527b
      Daniel Hiltgen authored
      This reduces the built-in Linux version to use no vector extensions,
      which enables the resulting builds to run under Rosetta on macOS in
      Docker. At runtime it then checks for the actual CPU vector extensions
      and loads the best CPU library available.
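      The runtime selection this commit describes (check the host's vector
      extensions, then load the most capable prebuilt library) can be sketched
      in shell. This is a rough illustration only, not the actual code: the
      variant names are assumptions, and it reads the flags from Linux's
      /proc/cpuinfo, falling back to the plain CPU build elsewhere:

      ```shell
      # Pick the most capable CPU library variant the host supports,
      # checking the newest extension first (variant names are illustrative).
      flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || true)
      case " $flags " in
          *" avx2 "*) variant=cpu_avx2 ;;
          *" avx "*)  variant=cpu_avx  ;;
          *)          variant=cpu      ;;
      esac
      echo "selected CPU library variant: $variant"
      ```

      Checking the newest extension first matters: a host with AVX2 also
      reports AVX, so the order of the cases determines which build wins.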
  15. 09 Jan, 2024 1 commit
  16. 08 Jan, 2024 1 commit
  17. 03 Jan, 2024 2 commits
  18. 02 Jan, 2024 1 commit
    • Switch windows build to fully dynamic · d966b730
      Daniel Hiltgen authored
      Refactor where we store build outputs, and support a fully dynamic
      loading model on Windows so the base executable has no special
      dependencies and thus doesn't require a special PATH.
  19. 20 Dec, 2023 1 commit
    • Revamp the dynamic library shim · 7555ea44
      Daniel Hiltgen authored
      This switches the default llama.cpp build to be CPU-based, and builds
      the GPU variants as dynamically loaded libraries which we can select
      at runtime.

      This also bumps the ROCm library to version 6, since 5.7 builds don't
      work with the latest ROCm library that just shipped.
  20. 19 Dec, 2023 3 commits