"vscode:/vscode.git/clone" did not exist on "8b760f85e902f11a5dd0060a32ed054061a95e82"
  1. 05 Jul, 2024 2 commits
  2. 17 Jun, 2024 4 commits
  3. 15 Jun, 2024 1 commit
  4. 07 Jun, 2024 1 commit
  5. 31 May, 2024 1 commit
  6. 24 May, 2024 1 commit
  7. 15 May, 2024 1 commit
  8. 27 Apr, 2024 2 commits
  9. 26 Apr, 2024 5 commits
  10. 25 Apr, 2024 1 commit
  11. 23 Apr, 2024 1 commit
    • Move nested payloads to installer and zip file on Windows · 058f6cd2
      Daniel Hiltgen authored
      Now that the llm runner is an executable and not just a DLL, more users are hitting
      security policy configurations on Windows that prevent writing to a directory and then
      executing binaries from that same location. This change removes the payloads from the
      main executable on Windows and instead packages them in the installer, discovering them
      at runtime based on the executable's location. It also adds a new zip file for people who
      want to "roll their own" installation model. A sketch of this executable-relative
      discovery follows this entry.
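      A minimal Go sketch of the executable-relative discovery described above, assuming a
      "runners" directory placed next to the installed binary; the directory name and the
      function are illustrative, not Ollama's actual layout or code.

        package main

        import (
            "fmt"
            "os"
            "path/filepath"
        )

        // runnersDir locates runner payloads in a directory that sits next to the
        // main executable (e.g. laid down by the installer or unpacked from the
        // standalone zip), instead of extracting them from the binary itself.
        func runnersDir() (string, error) {
            exe, err := os.Executable()
            if err != nil {
                return "", err
            }
            dir := filepath.Join(filepath.Dir(exe), "runners") // assumed directory name
            info, err := os.Stat(dir)
            if err != nil {
                return "", fmt.Errorf("runner payloads not found at %s: %w", dir, err)
            }
            if !info.IsDir() {
                return "", fmt.Errorf("%s exists but is not a directory", dir)
            }
            return dir, nil
        }

        func main() {
            dir, err := runnersDir()
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            fmt.Println("using runner payloads from", dir)
        }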
  12. 21 Apr, 2024 1 commit
  13. 18 Apr, 2024 3 commits
    • Update gen_windows.ps1 · 6f18297b
      Jeremy authored
      Forgot a " on the Write-Host call
    • Update gen_windows.ps1 · 15016413
      Jeremy authored
      Added OLLAMA_CUSTOM_CUDA_DEFS and OLLAMA_CUSTOM_ROCM_DEFS to customize GPU builds on Windows
    • Update gen_linux.sh · 440b7190
      Jeremy authored
      Added OLLAMA_CUSTOM_CUDA_DEFS and OLLAMA_CUSTOM_ROCM_DEFS in place of OLLAMA_CUSTOM_GPU_DEFS (a usage sketch follows below)
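      A hedged Go sketch of how the two variables above might feed extra -D options into a
      cmake configure step; the baseline flags, build directory, and invocation are assumptions
      for illustration, not the contents of gen_windows.ps1 or gen_linux.sh.

        package main

        import (
            "fmt"
            "os"
            "os/exec"
            "strings"
        )

        func main() {
            // Baseline configure arguments for a CUDA runner build (assumed values).
            args := []string{"-B", "build/cuda", "-DLLAMA_CUBLAS=on"}

            // Append any user-supplied defines, e.g.
            //   OLLAMA_CUSTOM_CUDA_DEFS="-DCMAKE_CUDA_ARCHITECTURES=86"
            if extra := os.Getenv("OLLAMA_CUSTOM_CUDA_DEFS"); extra != "" {
                args = append(args, strings.Fields(extra)...)
            }

            cmd := exec.Command("cmake", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            fmt.Println("running:", cmd.String())
            if err := cmd.Run(); err != nil {
                os.Exit(1)
            }
        }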
  14. 17 Apr, 2024 4 commits
  15. 09 Apr, 2024 2 commits
  16. 07 Apr, 2024 1 commit
  17. 04 Apr, 2024 2 commits
  18. 03 Apr, 2024 2 commits
  19. 01 Apr, 2024 1 commit
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory leak and stability defects by letting us isolate
      llama.cpp in a separate process, shut it down when idle, and gracefully restart it if it
      runs into problems. It also serves as a first step toward running multiple copies to
      support multiple models concurrently. A sketch of this subprocess lifecycle follows this
      entry.
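      A simplified Go sketch of the subprocess lifecycle described above: start the runner as a
      child process, restart it if it exits unexpectedly, and stop it after an idle period. The
      runner binary name and the fixed idle timer are assumptions; a real server would reset
      the idle timer on each request rather than use a one-shot delay.

        package main

        import (
            "log"
            "os/exec"
            "time"
        )

        const idleTimeout = 5 * time.Minute // assumed idle window

        func main() {
            for {
                // Hypothetical runner binary built from llama.cpp.
                cmd := exec.Command("ollama_llama_server", "--port", "0")
                if err := cmd.Start(); err != nil {
                    log.Fatalf("failed to start runner: %v", err)
                }
                log.Printf("runner started, pid %d", cmd.Process.Pid)

                done := make(chan error, 1)
                go func() { done <- cmd.Wait() }()

                select {
                case err := <-done:
                    // The runner crashed or exited with an error: restart it.
                    log.Printf("runner exited: %v; restarting", err)
                case <-time.After(idleTimeout):
                    // No work for a while: shut the runner down to free memory.
                    log.Printf("idle for %s, stopping runner", idleTimeout)
                    _ = cmd.Process.Kill()
                    <-done
                    return
                }
            }
        }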
  20. 26 Mar, 2024 1 commit
  21. 25 Mar, 2024 1 commit
  22. 15 Mar, 2024 2 commits