  1. 10 Jul, 2024 1 commit
  2. 08 Jul, 2024 1 commit
  3. 06 Jul, 2024 2 commits
  4. 05 Jul, 2024 1 commit
  5. 17 Jun, 2024 2 commits
  6. 07 Jun, 2024 1 commit
  7. 24 May, 2024 1 commit
  8. 15 May, 2024 1 commit
  9. 25 Apr, 2024 1 commit
  10. 18 Apr, 2024 1 commit
    • Update gen_linux.sh · 440b7190
      Jeremy authored
      Replaced OLLAMA_CUSTOM_GPU_DEFS with separate OLLAMA_CUSTOM_CUDA_DEFS and OLLAMA_CUSTOM_ROCM_DEFS variables.
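      A minimal Go sketch of how per-backend defines like these might be consumed, assuming the variables hold space-separated build defines (the helper name and the splitting behavior are illustrative, not the script's exact logic):

      ```go
      package main

      import (
          "fmt"
          "os"
          "strings"
      )

      // customDefs returns extra build defines for one backend, read from
      // OLLAMA_CUSTOM_CUDA_DEFS or OLLAMA_CUSTOM_ROCM_DEFS. Whitespace
      // splitting is an assumption for illustration.
      func customDefs(backend string) []string {
          env := "OLLAMA_CUSTOM_" + strings.ToUpper(backend) + "_DEFS"
          return strings.Fields(os.Getenv(env))
      }

      func main() {
          // Each backend now gets its own knobs instead of the old
          // catch-all OLLAMA_CUSTOM_GPU_DEFS.
          fmt.Println("cuda defines:", customDefs("cuda"))
          fmt.Println("rocm defines:", customDefs("rocm"))
      }
      ```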
  11. 17 Apr, 2024 4 commits
  12. 09 Apr, 2024 2 commits
  13. 07 Apr, 2024 1 commit
  14. 01 Apr, 2024 1 commit
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory-leak and stability defects by isolating llama.cpp in a separate process that shuts down when idle and restarts gracefully if it runs into problems. It also serves as a first step toward running multiple copies to support multiple models concurrently.
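      A minimal Go sketch of the supervision loop this implies, with a placeholder binary name and arguments; the real lifecycle handling (idle shutdown, health checks) is more involved:

      ```go
      package main

      import (
          "log"
          "os/exec"
          "time"
      )

      // superviseServer runs the llama.cpp server as a child process and
      // restarts it if it exits with an error. A fault in llama.cpp now
      // takes down only the child, not the main process.
      func superviseServer(bin string, args ...string) {
          for {
              cmd := exec.Command(bin, args...)
              if err := cmd.Start(); err != nil {
                  log.Fatalf("start %s: %v", bin, err)
              }
              err := cmd.Wait()
              if err == nil {
                  return // clean exit, e.g. shut down when idle
              }
              log.Printf("server exited: %v; restarting", err)
              time.Sleep(time.Second) // crude backoff before restarting
          }
      }

      func main() {
          superviseServer("./llama-server", "--port", "8080")
      }
      ```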
  15. 25 Mar, 2024 1 commit
  16. 15 Mar, 2024 1 commit
  17. 11 Mar, 2024 1 commit
  18. 10 Mar, 2024 1 commit
  19. 07 Mar, 2024 1 commit
    • Revamp ROCm support · 6c5ccb11
      Daniel Hiltgen authored
      This refines where we extract the LLM libraries to by adding a new OLLAMA_HOME env var that defaults to `~/.ollama`. The logic was already idempotent, so this should speed up startups after the first time a new release is deployed. It also cleans up after itself.

      We now build only a single ROCm version (latest major) on both Windows and Linux. Given the large size of ROCm's tensor files, we split the dependency out: it is bundled into the installer on Windows and offered as a separate download on Linux. The Linux install script is now smarter: it detects the presence of AMD GPUs, checks whether ROCm v6 is already present, and downloads our dependency tar file if it is not.

      For Linux discovery, we now use sysfs and check each GPU against what ROCm supports, so we can degrade to CPU gracefully instead of having llama.cpp+rocm assert/crash on us. For Windows, we now use Go's Windows dynamic-library loading logic to access the amdhip64.dll APIs and query GPU information.
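      A minimal Go sketch of the OLLAMA_HOME resolution described above, defaulting to `~/.ollama`; the helper name is illustrative:

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
      )

      // ollamaHome returns the directory to extract LLM libraries under:
      // $OLLAMA_HOME if set, otherwise ~/.ollama.
      func ollamaHome() (string, error) {
          if dir := os.Getenv("OLLAMA_HOME"); dir != "" {
              return dir, nil
          }
          home, err := os.UserHomeDir()
          if err != nil {
              return "", err
          }
          return filepath.Join(home, ".ollama"), nil
      }

      func main() {
          dir, err := ollamaHome()
          if err != nil {
              panic(err)
          }
          fmt.Println("extracting LLM libraries under:", dir)
      }
      ```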
  20. 12 Feb, 2024 1 commit
  21. 25 Jan, 2024 1 commit
  22. 21 Jan, 2024 1 commit
  23. 20 Jan, 2024 2 commits
  24. 17 Jan, 2024 1 commit
  25. 16 Jan, 2024 1 commit
    • Bump llama.cpp to b1842 and add new cuda lib dep · 795674dd
      Daniel Hiltgen authored
      Upstream llama.cpp has added a new dependency on the NVIDIA CUDA driver library (libcuda.so), which is part of the driver distribution rather than the general CUDA libraries. It is not available as an archive, so we cannot statically link it. This may introduce additional compatibility challenges, which we'll need to keep an eye on.
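      Because libcuda.so ships with the driver rather than with the application, it has to be located on the host at runtime. A hedged Go sketch of such a probe, with a purely hypothetical list of candidate paths:

      ```go
      package main

      import (
          "fmt"
          "os"
          "path/filepath"
      )

      // libcudaGlobs lists places the driver library is commonly installed;
      // this list is illustrative, not exhaustive.
      var libcudaGlobs = []string{
          "/usr/lib/x86_64-linux-gnu/libcuda.so*",
          "/usr/lib64/libcuda.so*",
          "/usr/lib/wsl/lib/libcuda.so*",
      }

      // findLibcuda returns the first match for the driver library, if any.
      func findLibcuda() (string, bool) {
          for _, pattern := range libcudaGlobs {
              if matches, _ := filepath.Glob(pattern); len(matches) > 0 {
                  return matches[0], true
              }
          }
          return "", false
      }

      func main() {
          if path, ok := findLibcuda(); ok {
              fmt.Println("found driver library:", path)
              return
          }
          fmt.Fprintln(os.Stderr, "libcuda.so not found; GPU offload unavailable")
      }
      ```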
  26. 14 Jan, 2024 1 commit
  27. 12 Jan, 2024 1 commit
  28. 11 Jan, 2024 3 commits
    • Always dynamically load the llm server library · 39928a42
      Daniel Hiltgen authored
      This switches Darwin to dynamic loading and refactors the code now that the library is no longer statically linked on any platform.
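      A minimal cgo sketch of dynamic loading via dlopen/dlsym, which works on both Darwin and Linux; the library path and symbol name are placeholders, not ollama's actual ones:

      ```go
      package main

      /*
      #cgo LDFLAGS: -ldl
      #include <dlfcn.h>
      #include <stdlib.h>
      */
      import "C"

      import (
          "fmt"
          "unsafe"
      )

      // loadServerLib opens a shared library and resolves one symbol,
      // returning an error instead of crashing if either step fails.
      func loadServerLib(path, symbol string) (unsafe.Pointer, error) {
          cpath := C.CString(path)
          defer C.free(unsafe.Pointer(cpath))
          handle := C.dlopen(cpath, C.RTLD_NOW)
          if handle == nil {
              return nil, fmt.Errorf("dlopen %s: %s", path, C.GoString(C.dlerror()))
          }
          csym := C.CString(symbol)
          defer C.free(unsafe.Pointer(csym))
          fn := C.dlsym(handle, csym)
          if fn == nil {
              return nil, fmt.Errorf("dlsym %s: %s", symbol, C.GoString(C.dlerror()))
          }
          return fn, nil
      }

      func main() {
          // Placeholder library and symbol names.
          if _, err := loadServerLib("./libext_server.so", "server_init"); err != nil {
              fmt.Println(err)
          }
      }
      ```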
    • Build multiple CPU variants and pick the best · d88c527b
      Daniel Hiltgen authored
      This changes the built-in Linux version to use no vector extensions, which enables the resulting builds to run under Rosetta on macOS in Docker. At runtime we then check for the CPU's actual vector extensions and load the best CPU library available.
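      A minimal Go sketch of the runtime selection, using golang.org/x/sys/cpu to detect vector extensions; the variant names are illustrative:

      ```go
      package main

      import (
          "fmt"

          "golang.org/x/sys/cpu"
      )

      // pickCPUVariant returns the most capable CPU library variant the host
      // supports, falling back to a baseline build with no vector extensions
      // (the one that also runs under Rosetta).
      func pickCPUVariant() string {
          switch {
          case cpu.X86.HasAVX2:
              return "cpu_avx2"
          case cpu.X86.HasAVX:
              return "cpu_avx"
          default:
              return "cpu"
          }
      }

      func main() {
          fmt.Println("selected variant:", pickCPUVariant())
      }
      ```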
    • Support multiple variants for a given llm lib type · 8da7bef0
      Daniel Hiltgen authored
      In some cases we may want multiple variants for a given GPU type or CPU. This adds logic for an optional variant that we can use to select an optimal library, while also allowing us to try multiple variants in case some fail to load. This can be useful for scenarios such as ROCm v5 vs. v6 incompatibility, or potentially differing CPU features.
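      A minimal Go sketch of the try-in-order fallback this describes; variant names and the loader function are hypothetical:

      ```go
      package main

      import (
          "errors"
          "fmt"
      )

      // tryVariants walks library variants in preference order and returns
      // the first one that loads successfully.
      func tryVariants(variants []string, load func(string) error) (string, error) {
          var errs []error
          for _, v := range variants {
              if err := load(v); err != nil {
                  errs = append(errs, fmt.Errorf("%s: %w", v, err))
                  continue
              }
              return v, nil
          }
          return "", fmt.Errorf("no usable variant: %w", errors.Join(errs...))
      }

      func main() {
          // Simulated loader: pretend the ROCm v6 build fails on a v5 system.
          loaded, err := tryVariants([]string{"rocm_v6", "rocm_v5", "cpu"},
              func(v string) error {
                  if v == "rocm_v6" {
                      return errors.New("incompatible runtime")
                  }
                  return nil
              })
          fmt.Println(loaded, err)
      }
      ```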
  29. 04 Jan, 2024 3 commits