1. 29 Apr, 2025 1 commit
  2. 05 Mar, 2025 1 commit
    • server/internal/registry: take over pulls from server package (#9485) · e2252d0f
      Blake Mizerany authored
      This commit replaces the old pull implementation in the server package
      with the new, faster, more robust pull implementation in the registry
      package.
      
      The new endpoint, and now the remove endpoint too, are behind the
      feature gate "client2", enabled only by setting the OLLAMA_EXPERIMENT
      environment variable to include "client2".
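
      A minimal Go sketch of such an environment-variable gate, for
      illustration only; the helper name useClient2 and the separator
      handling are assumptions, not necessarily what the server package
      actually does:

          package main

          import (
              "fmt"
              "os"
              "slices"
              "strings"
          )

          // useClient2 reports whether OLLAMA_EXPERIMENT includes the
          // "client2" experiment, treating the variable as a comma- or
          // space-separated list. (Hypothetical helper.)
          func useClient2() bool {
              fields := strings.FieldsFunc(os.Getenv("OLLAMA_EXPERIMENT"), func(r rune) bool {
                  return r == ',' || r == ' '
              })
              return slices.Contains(fields, "client2")
          }

          func main() {
              fmt.Println("client2 enabled:", useClient2())
          }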
      
      Currently, the progress indication is wired to behave the same as the
      previous implementation to avoid making changes to the CLI. Because
      the status reports happen at the start of the download and at the end
      of the write to disk, the progress indication is not as smooth as it
      could be. This is a known issue and will be addressed in a future
      change.
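
      A sketch of why the two-point reporting looks coarse: if progress is
      published only when a chunk's download starts and again once its
      bytes are on disk, the indication jumps in chunk-sized steps. The
      Progress type and writeChunk function below are illustrative, not
      the registry package's actual API:

          package main

          import "fmt"

          // Progress is a hypothetical status report, for illustration.
          type Progress struct {
              Completed, Total int64
          }

          // writeChunk simulates downloading one chunk and writing it to
          // disk, reporting only at the start of the download and at the
          // end of the write -- hence the coarse steps.
          func writeChunk(report func(Progress), done, size, total int64) {
              report(Progress{Completed: done, Total: total}) // download starts
              // ... fetch the bytes and write them to disk (elided) ...
              report(Progress{Completed: done + size, Total: total}) // write done
          }

          func main() {
              report := func(p Progress) { fmt.Printf("%d/%d bytes\n", p.Completed, p.Total) }
              var done, total int64 = 0, 128 << 20
              for _, size := range []int64{64 << 20, 64 << 20} { // two 64 MiB chunks
                  writeChunk(report, done, size, total)
                  done += size
              }
          }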
      
      This implementation may be ~0.5-1.0% slower in rare cases, depending
      on network and disk speed, but is generally MUCH faster and more
      robust than its predecessor in all other cases.
  3. 27 Feb, 2025 1 commit
    • server/internal: replace model delete API with new registry handler. (#9347) · 2412adf4
      Blake Mizerany authored
      This commit introduces a new API implementation for handling
      interactions with the registry and the local model cache. The new API
      is located in server/internal/registry. The package name is "registry"
      and should be considered temporary; it is hidden and does not bleed
      outside of the server package. As the commits roll in, we'll start
      consuming more of the API and then let reverse osmosis take effect, at
      which point it will surface closer to the root-level packages as much
      as needed.
  4. 29 Jan, 2025 1 commit
    • next build (#8539) · dcfb7a10
      Michael Yang authored
      * add build to .dockerignore
      
      * test: only build one arch
      
      * add build to .gitignore
      
      * fix ccache path
      
      * filter amdgpu targets
      
      * only filter if autodetecting
      
      * Don't clobber gpu list for default runner
      
      This ensures the GPU-specific environment variables are set properly
      
      * explicitly set CXX compiler for HIP
      
      * Update build_windows.ps1
      
      This isn't complete, but it is close. Dependencies are missing, and it only builds the "default" preset.
      
      * build: add ollama subdir
      
      * add .git to .dockerignore
      
      * docs: update development.md
      
      * update build_darwin.sh
      
      * remove unused scripts
      
      * llm: add cwd and build/lib/ollama to library paths
      
      * default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS (see the sketch after this commit)
      
      * add additional cmake output vars for msvc
      
      * interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
      
      * remove unnecessary filepath.Dir, cleanup
      
      * add hardware-specific directory to path
      
      * use absolute server path
      
      * build: linux arm
      
      * cmake install targets
      
      * remove unused files
      
      * ml: visit each library path once
      
      * build: skip cpu variants on arm
      
      * build: install cpu targets
      
      * build: fix workflow
      
      * shorter names
      
      * fix rocblas install
      
      * docs: clean up development.md
      
      * consistent build dir removal in development.md
      
      * silence -Wimplicit-function-declaration build warnings in ggml-cpu
      
      * update readme
      
      * update development readme
      
      * llm: update library lookup logic now that there is one runner (#8587)
      
      * tweak development.md
      
      * update docs
      
      * add windows cuda/rocm tests
      
      ---------
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
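
      A minimal Go sketch of the macOS library-path defaulting mentioned in
      the list above: if DYLD_LIBRARY_PATH is unset, mirror LD_LIBRARY_PATH
      so library lookup in the runner behaves the same either way. This is
      an illustration under those assumptions, not the runner's actual code:

          package main

          import (
              "fmt"
              "os"
          )

          func main() {
              // On macOS the dynamic linker reads DYLD_LIBRARY_PATH; if
              // only LD_LIBRARY_PATH was provided, copy it over. (Sketch.)
              if os.Getenv("DYLD_LIBRARY_PATH") == "" {
                  if p := os.Getenv("LD_LIBRARY_PATH"); p != "" {
                      os.Setenv("DYLD_LIBRARY_PATH", p)
                  }
              }
              fmt.Println("DYLD_LIBRARY_PATH =", os.Getenv("DYLD_LIBRARY_PATH"))
          }
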
  5. 21 Dec, 2024 1 commit
  6. 20 Dec, 2024 1 commit
  7. 23 Nov, 2024 1 commit
  8. 22 Nov, 2024 1 commit
  9. 14 Nov, 2024 1 commit
  10. 18 Oct, 2024 1 commit
  11. 06 Jun, 2024 1 commit
  12. 21 May, 2024 1 commit
  13. 11 May, 2024 1 commit
  14. 15 Apr, 2024 1 commit
  15. 07 Mar, 2024 1 commit
  16. 24 Feb, 2024 1 commit
  17. 15 Feb, 2024 2 commits
  18. 19 Dec, 2023 1 commit
  19. 05 Dec, 2023 1 commit
  20. 14 Nov, 2023 1 commit
  21. 01 Nov, 2023 1 commit
  22. 25 Oct, 2023 2 commits
  23. 16 Oct, 2023 1 commit
  24. 06 Oct, 2023 1 commit
  25. 22 Sep, 2023 2 commits
  26. 05 Sep, 2023 1 commit
  27. 30 Aug, 2023 1 commit
    • subprocess llama.cpp server (#401) · 42998d79
      Bruce MacDonald authored
      * remove c code
      * pack llama.cpp
      * use request context for llama_cpp (see the sketch after this list)
      * let llama_cpp decide the number of threads to use
      * stop llama runner when app stops
      * remove sample count and duration metrics
      * use go generate to get libraries
      * tmp dir for running llm
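
      A minimal Go sketch of the subprocess approach described above: the
      llama.cpp server binary is started with exec.CommandContext so that
      cancelling the context (for example, when the app shuts down) stops
      the runner. The binary path and flags are placeholders, not the
      actual command line:

          package main

          import (
              "context"
              "os"
              "os/exec"
              "os/signal"
          )

          func main() {
              // Tie the subprocess lifetime to a context cancelled on
              // interrupt, so the llama runner stops when the app stops.
              ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
              defer stop()

              // Placeholder binary path and flags, for illustration only.
              cmd := exec.CommandContext(ctx, "/tmp/ollama/llama-server", "--port", "8080")
              cmd.Stdout = os.Stdout
              cmd.Stderr = os.Stderr

              if err := cmd.Run(); err != nil {
                  // Run errors when the context is cancelled and the process
                  // is killed; treat that as a normal shutdown here.
                  os.Exit(0)
              }
          }
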
  28. 10 Aug, 2023 1 commit
  29. 08 Aug, 2023 1 commit
  30. 01 Aug, 2023 2 commits
  31. 22 Jul, 2023 1 commit
  32. 20 Jul, 2023 1 commit
  33. 19 Jul, 2023 1 commit
  34. 18 Jul, 2023 1 commit
  35. 17 Jul, 2023 1 commit
  36. 11 Jul, 2023 1 commit