1. 05 Aug, 2024 1 commit
    • Disable paging for journalctl (#6154) · b73b0940
      frob authored
      Users who run `journalctl` to collect logs when filing an issue sometimes don't realize that paging causes information to be missed.
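      As a hedged illustration of the behavior this works around: when `journalctl` runs on a terminal it pipes output through a pager, so users often copy only the first screen. A minimal Go sketch of collecting the full log instead (the `ollama` unit name is an assumption here):

      ```go
      package main

      import (
      	"fmt"
      	"log"
      	"os/exec"
      )

      func main() {
      	// --no-pager makes the no-paging intent explicit; output is also
      	// unpaged here anyway because it goes to a pipe, not a terminal.
      	out, err := exec.Command("journalctl", "--no-pager", "-u", "ollama").CombinedOutput()
      	if err != nil {
      		log.Fatalf("journalctl failed: %v\n%s", err, out)
      	}
      	fmt.Print(string(out))
      }
      ```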
  2. 04 Jul, 2024 1 commit
  3. 03 Jul, 2024 1 commit
    • Better nvidia GPU discovery logging · ef757da2
      Daniel Hiltgen authored
      Refine the way we log GPU discovery to improve the non-debug
      output, and report more actionable log messages where possible
      so users can troubleshoot on their own.
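      A minimal sketch of the split this describes, using Go's log/slog: chatty discovery detail stays at debug level, while actionable failures surface in the default output. The helper and messages are illustrative, not the actual ollama code:

      ```go
      package main

      import "log/slog"

      // logNvidiaDiscovery is a hypothetical helper: warn with an actionable
      // hint when discovery fails, keep happy-path detail at debug level.
      func logNvidiaDiscovery(libPath string, err error) {
      	if err != nil {
      		slog.Warn("unable to load NVIDIA management library; falling back to CPU",
      			"library", libPath, "error", err)
      		return
      	}
      	slog.Debug("nvidia management library loaded", "library", libPath)
      }

      func main() {
      	logNvidiaDiscovery("/usr/lib/libnvidia-ml.so", nil)
      }
      ```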
  4. 19 Jun, 2024 1 commit
  5. 23 May, 2024 1 commit
  6. 21 May, 2024 1 commit
  7. 20 May, 2024 1 commit
  8. 09 May, 2024 1 commit
  9. 01 Apr, 2024 1 commit
    • Safeguard for noexec · 0a74cb31
      Daniel Hiltgen authored
      We may have users who run into problems with our current
      payload model, so this gives us an escape valve.
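      One way such a safeguard could look (a sketch under assumptions, not the actual implementation): detect a noexec mount covering the extraction directory by scanning /proc/self/mounts, so the code can fail with a clear error instead of a mysterious exec failure:

      ```go
      package main

      import (
      	"bufio"
      	"fmt"
      	"os"
      	"strings"
      )

      // mountHasNoexec reports whether a mount whose mount point prefixes dir
      // carries the noexec option. A full version would match the longest
      // mount point rather than any prefix.
      func mountHasNoexec(dir string) (bool, error) {
      	f, err := os.Open("/proc/self/mounts")
      	if err != nil {
      		return false, err
      	}
      	defer f.Close()

      	sc := bufio.NewScanner(f)
      	for sc.Scan() {
      		fields := strings.Fields(sc.Text())
      		if len(fields) < 4 || !strings.HasPrefix(dir, fields[1]) {
      			continue
      		}
      		for _, opt := range strings.Split(fields[3], ",") {
      			if opt == "noexec" {
      				return true, nil
      			}
      		}
      	}
      	return false, sc.Err()
      }

      func main() {
      	noexec, err := mountHasNoexec(os.TempDir())
      	fmt.Println("noexec:", noexec, "err:", err)
      }
      ```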
  10. 21 Mar, 2024 1 commit
  11. 15 Mar, 2024 1 commit
  12. 11 Mar, 2024 1 commit
  13. 07 Mar, 2024 2 commits
    • Refined ROCm troubleshooting docs · 69f02278
      Daniel Hiltgen authored
    • Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries by adding a new
      OLLAMA_HOME env var that defaults to `~/.ollama`.  The logic was already
      idempotent, so this should speed up startups after the first time a
      new release is deployed.  It also cleans up after itself.
      
      We now build only a single ROCm version (latest major) on both Windows
      and Linux.  Given the large size of ROCm's tensor files, we split the
      dependency out: it's bundled into the installer on Windows, and a
      separate download on Linux.  The Linux install script now detects the
      presence of AMD GPUs, checks whether ROCm v6 is already present, and
      if not, downloads our dependency tar file.
      
      For Linux discovery, we now use sysfs and check each GPU against what
      ROCm supports so we can degrade to CPU gracefully instead of having
      llama.cpp+ROCm assert/crash on us.  For Windows, we now use Go's Windows
      dynamic library loading to access the amdhip64.dll APIs and query
      GPU information.
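      A minimal, illustrative sketch (not the actual ollama code) of two pieces this message describes: resolving OLLAMA_HOME with a `~/.ollama` default, and Linux AMD GPU discovery via sysfs with graceful CPU fallback. The sysfs path and the vendor-ID filter are assumptions; a real check would also match device IDs against the ROCm support list:

      ```go
      package main

      import (
      	"fmt"
      	"os"
      	"path/filepath"
      	"strings"
      )

      // ollamaHome returns $OLLAMA_HOME if set, else ~/.ollama, per the
      // commit message; error handling is simplified for illustration.
      func ollamaHome() string {
      	if dir := os.Getenv("OLLAMA_HOME"); dir != "" {
      		return dir
      	}
      	home, _ := os.UserHomeDir()
      	return filepath.Join(home, ".ollama")
      }

      // amdGPUs enumerates PCI devices under sysfs and keeps those with
      // AMD's graphics vendor ID (0x1002).
      func amdGPUs() []string {
      	var gpus []string
      	vendors, _ := filepath.Glob("/sys/bus/pci/devices/*/vendor")
      	for _, v := range vendors {
      		b, err := os.ReadFile(v)
      		if err != nil {
      			continue
      		}
      		if strings.TrimSpace(string(b)) == "0x1002" {
      			gpus = append(gpus, filepath.Dir(v))
      		}
      	}
      	return gpus
      }

      func main() {
      	fmt.Println("home:", ollamaHome())
      	if gpus := amdGPUs(); len(gpus) == 0 {
      		fmt.Println("no supported AMD GPU found; falling back to CPU")
      	} else {
      		fmt.Println("AMD GPUs:", gpus)
      	}
      }
      ```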
  14. 15 Feb, 2024 1 commit
  15. 29 Jan, 2024 1 commit
  16. 11 Jan, 2024 1 commit
    • Build multiple CPU variants and pick the best · d88c527b
      Daniel Hiltgen authored
      This reduces the built-in Linux version to not use any vector extensions,
      which enables the resulting builds to run under Rosetta on macOS in
      Docker.  At runtime it then checks for the actual CPU vector
      extensions and loads the best CPU library available.
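      A hedged sketch of the runtime selection this describes: probe the CPU's vector extensions and pick the most capable prebuilt library. The variant names are illustrative; golang.org/x/sys/cpu provides the feature probes:

      ```go
      package main

      import (
      	"fmt"

      	"golang.org/x/sys/cpu"
      )

      // bestCPUVariant picks the most capable variant the host supports;
      // per the commit, the no-vector build is the one Rosetta can run.
      func bestCPUVariant() string {
      	switch {
      	case cpu.X86.HasAVX2:
      		return "cpu_avx2"
      	case cpu.X86.HasAVX:
      		return "cpu_avx"
      	default:
      		return "cpu"
      	}
      }

      func main() {
      	fmt.Println("loading LLM library variant:", bestCPUVariant())
      }
      ```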
  17. 22 Dec, 2023 1 commit