1. 04 Jun, 2024 2 commits
  2. 24 May, 2024 1 commit
  3. 21 May, 2024 1 commit
  4. 14 May, 2024 2 commits
  5. 10 May, 2024 2 commits
  6. 09 May, 2024 1 commit
      Wait for GPU free memory reporting to converge · 354ad925
      Daniel Hiltgen authored
      The GPU drivers take a while to update their free memory reporting, so we
      wait until the reported values converge with what we expect before starting
      another runner, to get an accurate picture of available memory.
  7. 06 May, 2024 2 commits
  8. 05 May, 2024 3 commits
  9. 01 May, 2024 2 commits
  10. 28 Apr, 2024 1 commit
      Fix concurrency for CPU mode · d6e3b645
      Daniel Hiltgen authored
      Prior refactoring passes accidentally removed the logic that bypasses VRAM
      checks for CPU loads.  This adds it back, along with test coverage.
      
      This also moves the unit test's access to the loaded map behind the mutex;
      the unguarded access was likely the cause of various flakes in the tests.
  11. 25 Apr, 2024 2 commits
  12. 24 Apr, 2024 2 commits
  13. 23 Apr, 2024 1 commit
      Request and model concurrency · 34b9db5a
      Daniel Hiltgen authored
      This change adds support for multiple concurrent requests, as well as
      loading multiple models by spawning multiple runners. The defaults are
      currently 1 concurrent request per model and 1 loaded model at a time,
      but these can be adjusted by setting OLLAMA_NUM_PARALLEL and
      OLLAMA_MAX_LOADED_MODELS.