1. 17 Oct, 2024 1 commit
  2. 05 Aug, 2024 1 commit
    •
      Implement linux NUMA detection · f457d634
      Daniel Hiltgen authored
      If the system has multiple NUMA nodes, enable NUMA support in llama.cpp.
      If numactl is detected in the PATH, use it; otherwise fall back to the basic "distribute" mode.
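      A minimal sketch of this kind of detection logic, for illustration only (not the actual Ollama implementation; the sysfs path is standard on Linux, but the function names here are assumptions):

      ```go
      package main

      import (
      	"fmt"
      	"os/exec"
      	"path/filepath"
      )

      // numaNodeCount returns the number of NUMA nodes reported by sysfs
      // (/sys/devices/system/node/node0, node1, ... on Linux).
      func numaNodeCount() int {
      	nodes, err := filepath.Glob("/sys/devices/system/node/node[0-9]*")
      	if err != nil {
      		return 0
      	}
      	return len(nodes)
      }

      // numaStrategy picks a llama.cpp NUMA mode: "numactl" when that binary is
      // on the PATH, otherwise the basic "distribute" mode. An empty string
      // means the system has a single node and NUMA support stays disabled.
      func numaStrategy() string {
      	if numaNodeCount() < 2 {
      		return ""
      	}
      	if _, err := exec.LookPath("numactl"); err == nil {
      		return "numactl"
      	}
      	return "distribute"
      }

      func main() {
      	fmt.Println("NUMA strategy:", numaStrategy())
      }
      ```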
  3. 14 Jun, 2024 2 commits
  4. 09 May, 2024 1 commit
    •
      Wait for GPU free memory reporting to converge · 354ad925
      Daniel Hiltgen authored
      GPU drivers take a while to update their free memory reporting, so wait until the
      reported values converge with the expected values before starting another runner,
      in order to get an accurate picture of available memory.
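      A rough sketch of such a convergence wait, under assumed names (reportedFree, the tolerance, and the poll interval are illustrative, not Ollama's actual API):

      ```go
      package main

      import (
      	"fmt"
      	"time"
      )

      // waitForVRAMConvergence polls the driver-reported free VRAM until it comes
      // within tolerance bytes of the expected value, or gives up after timeout.
      // reportedFree stands in for whatever the GPU management library exposes.
      func waitForVRAMConvergence(reportedFree func() uint64, expected, tolerance uint64, timeout time.Duration) bool {
      	deadline := time.Now().Add(timeout)
      	for time.Now().Before(deadline) {
      		diff := int64(reportedFree()) - int64(expected)
      		if diff < 0 {
      			diff = -diff
      		}
      		if uint64(diff) <= tolerance {
      			return true // reporting has converged; safe to start the next runner
      		}
      		time.Sleep(250 * time.Millisecond)
      	}
      	return false // timed out; caller proceeds with the stale value
      }

      func main() {
      	// Fake reporter for demonstration: free memory "settles" after one second.
      	start := time.Now()
      	fake := func() uint64 {
      		if time.Since(start) > time.Second {
      			return 8 << 30 // 8 GiB once the driver catches up
      		}
      		return 10 << 30
      	}
      	fmt.Println("converged:", waitForVRAMConvergence(fake, 8<<30, 256<<20, 5*time.Second))
      }
      ```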
  5. 18 Jan, 2024 1 commit
  6. 11 Jan, 2024 1 commit