  1. 03 Mar, 2025 2 commits
  2. 25 Feb, 2025 2 commits
  3. 29 Jan, 2025 1 commit
    • next build (#8539) · dcfb7a10
      Michael Yang authored

      * add build to .dockerignore
      
      * test: only build one arch
      
      * add build to .gitignore
      
      * fix ccache path
      
      * filter amdgpu targets
      
      * only filter if autodetecting
      
      * Don't clobber gpu list for default runner
      
      This ensures the GPU-specific environment variables are set properly.
      
      * explicitly set CXX compiler for HIP
      
      * Update build_windows.ps1
      
      This isn't complete, but it is close. Dependencies are missing, and it only builds the "default" preset.
      
      * build: add ollama subdir
      
      * add .git to .dockerignore
      
      * docs: update development.md
      
      * update build_darwin.sh
      
      * remove unused scripts
      
      * llm: add cwd and build/lib/ollama to library paths
      
      * default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in the runner on macOS (see the library-path sketch at the end of this list)
      
      * add additional cmake output vars for msvc
      
      * interim edits to make server detection logic work with DLL directories like lib/ollama/cuda_v12
      
      * remove unnecessary filepath.Dir, clean up
      
      * add hardware-specific directory to path
      
      * use absolute server path
      
      * build: linux arm
      
      * cmake install targets
      
      * remove unused files
      
      * ml: visit each library path once
      
      * build: skip cpu variants on arm
      
      * build: install cpu targets
      
      * build: fix workflow
      
      * shorter names
      
      * fix rocblas install
      
      * docs: clean up development.md
      
      * consistent build dir removal in development.md
      
      * silence -Wimplicit-function-declaration build warnings in ggml-cpu
      
      * update readme
      
      * update development readme
      
      * llm: update library lookup logic now that there is one runner (#8587)
      
      * tweak development.md
      
      * update docs
      
      * add windows cuda/rocm tests
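
      Taken together, the library-path bullets above (adding cwd and build/lib/ollama
      to the search path, hardware-specific subdirectories like lib/ollama/cuda_v12,
      and defaulting DYLD_LIBRARY_PATH to LD_LIBRARY_PATH on macOS) behave roughly
      like this sketch. It is an illustration under assumed names (libraryPaths,
      variant), not the actual implementation:

      ```go
      // Hypothetical sketch of the library-path assembly; not Ollama's real code.
      package main

      import (
          "os"
          "path/filepath"
          "runtime"
          "strings"
      )

      // libraryPaths builds the search path: cwd, build/lib/ollama under cwd,
      // lib/ollama next to the executable, and a hardware-specific subdirectory.
      func libraryPaths(variant string) []string {
          var paths []string
          if cwd, err := os.Getwd(); err == nil {
              paths = append(paths, cwd, filepath.Join(cwd, "build", "lib", "ollama"))
          }
          if exe, err := os.Executable(); err == nil {
              dir := filepath.Join(filepath.Dir(exe), "lib", "ollama")
              paths = append(paths, dir, filepath.Join(dir, variant)) // e.g. cuda_v12
          }
          return paths
      }

      func main() {
          paths := libraryPaths("cuda_v12")
          os.Setenv("LD_LIBRARY_PATH", strings.Join(paths, string(os.PathListSeparator)))
          if runtime.GOOS == "darwin" && os.Getenv("DYLD_LIBRARY_PATH") == "" {
              // macOS: fall back to LD_LIBRARY_PATH when DYLD_LIBRARY_PATH is unset
              os.Setenv("DYLD_LIBRARY_PATH", os.Getenv("LD_LIBRARY_PATH"))
          }
      }
      ```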
      
      ---------
      Co-authored-by: jmorganca <jmorganca@gmail.com>
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
  4. 06 Jan, 2025 1 commit
  5. 14 Dec, 2024 1 commit
  6. 11 Dec, 2024 1 commit
  7. 10 Dec, 2024 1 commit
    • build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo-built runner.
      On Mac arm64, this will have GPU support, but on all other platforms it will
      be the lowest-common-denominator CPU build. After we fully transition
      to the new Go runners, more tech debt can be removed and we can stop building
      the "default" runner via make, relying on the builtin runner instead.
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
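
      The lookup order reads naturally as a first-match search; the candidate
      directories below are assumptions for illustration, not the real layout:

      ```go
      // Sketch: prefer local builds, then runners relative to the executable,
      // and finally extracted payloads; return the first candidate that exists.
      package discover

      import (
          "os"
          "path/filepath"
      )

      func findRunner(name string) (string, bool) {
          var candidates []string
          if cwd, err := os.Getwd(); err == nil {
              candidates = append(candidates, filepath.Join(cwd, "build", name)) // local build
          }
          if exe, err := os.Executable(); err == nil {
              candidates = append(candidates, filepath.Join(filepath.Dir(exe), name)) // next to the binary
          }
          candidates = append(candidates, filepath.Join(os.TempDir(), "ollama", "runners", name)) // payloads
          for _, c := range candidates {
              if _, err := os.Stat(c); err == nil {
                  return c, true
              }
          }
          return "", false
      }
      ```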
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported systems
      and crash.  If the user builds a customized set, we omit the naming
      scheme and don't check for compatibility.  This avoids checking
      requirements at runtime, so that logic has been removed as well.  This
      can be used to build GPU runners with no vector flags, or CPU/GPU
      runners with additional flags (e.g. AVX512) enabled.
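
      The naming-scheme check might look like the following sketch; the suffixes
      and helper are illustrative, not the shipped logic:

      ```go
      // Sketch: a stock runner advertises its vector flag in its name
      // (e.g. "cpu_avx"), and is skipped if the host CPU lacks that flag.
      // Customized builds omit the suffix and are trusted as-is.
      package runners

      import (
          "strings"

          "golang.org/x/sys/cpu"
      )

      func runnerUsable(name string) bool {
          switch {
          case strings.HasSuffix(name, "_avx2"):
              return cpu.X86.HasAVX2
          case strings.HasSuffix(name, "_avx"):
              return cpu.X86.HasAVX
          default:
              return true // no vector suffix: customized build, no check
          }
      }
      ```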
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make gets
      really confused, so use relative paths for everything in-repo to avoid breakage.
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar bundle)
      and ensures we clean up prior libraries before extracting the bundle(s).
      Without this change, runners and dependent libraries could leak when we
      update and lead to subtle runtime errors.
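
      A rough sketch of that cleanup step, with an assumed directory layout:

      ```go
      // Sketch: remove previously extracted runner libraries before unpacking
      // the new bundle so stale files can't leak across updates.
      package install

      import (
          "log"
          "os"
          "path/filepath"
      )

      func cleanupPriorLibraries(libDir string) error {
          matches, err := filepath.Glob(filepath.Join(libDir, "runners", "*"))
          if err != nil {
              return err
          }
          for _, m := range matches {
              log.Printf("removing stale runner dir %s", m)
              if err := os.RemoveAll(m); err != nil {
                  return err
              }
          }
          return nil
      }
      ```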
  8. 15 Nov, 2024 1 commit
  9. 12 Nov, 2024 1 commit
  10. 30 Oct, 2024 1 commit
  11. 27 Oct, 2024 1 commit
  12. 08 Oct, 2024 1 commit
    • Re-introduce the `llama` package (#5034) · 96efd905
      Jeffrey Morgan authored
      * Re-introduce the llama package
      
      This PR brings back the llama package, making it possible to call llama.cpp and
      ggml APIs from Go directly via CGo. This has a few advantages:
      
      - C APIs can be called directly from Go without needing to use the previous
        "server" REST API
      - On macOS, and for CPU builds on Linux and Windows, Ollama can be built without
        a `go generate ./...` step, making it easy to get up and running to hack on
        parts of Ollama that don't require fast inference
      - Faster build times for AVX, AVX2, CUDA, and ROCm (a full build of all runners
        takes <5 min on a fast CPU)
      - No git submodule, making it easier to clone and build from source
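
      As a minimal illustration of the CGo pattern (not the actual llama.go
      bindings), calling into C from Go looks like this:

      ```go
      // Minimal CGo sketch: Go calls a C function directly, the same
      // mechanism the vendored llama.cpp/ggml bindings rely on.
      package main

      /*
      static int add(int a, int b) { return a + b; }
      */
      import "C"

      import "fmt"

      func main() {
          fmt.Println("2 + 3 =", int(C.add(C.int(2), C.int(3))))
      }
      ```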
      
      This is a big PR, but much of it is vendor code except for:
      
      - llama.go CGo bindings
      - example/: a simple example of running inference
      - runner/: a subprocess server designed to replace the llm/ext_server package
      - Makefile: a Makefile, kept as minimal as possible, to build the runner package for
        different...
  13. 12 Sep, 2024 1 commit
    • Optimize container images for startup (#6547) · cd5c8f64
      Daniel Hiltgen authored
      * Optimize container images for startup
      
      This change adjusts how runner payloads are handled to support
      container builds where we keep them extracted in the filesystem.
      This makes it easier to optimize the cpu/cuda vs. cpu/rocm images for
      size, and should result in faster startup times for container images.
      
      * Refactor payload logic and add buildx support for faster builds
      
      * Move payloads around
      
      * Review comments
      
      * Converge to buildx based helper scripts
      
      * Use docker buildx action for release
  14. 10 Sep, 2024 1 commit
  15. 03 Sep, 2024 1 commit
  16. 20 Aug, 2024 1 commit
    • Split rocm back out of bundle (#6432) · a017cf2f
      Daniel Hiltgen authored
      We're over budget for GitHub's maximum release artifact size with ROCm + 2 CUDA
      versions. This splits ROCm back out as a discrete artifact, but keeps the layout so it can
      be extracted into the same location as the main bundle.
  17. 19 Aug, 2024 7 commits
  18. 22 Jul, 2024 1 commit
  19. 17 Jul, 2024 1 commit
  20. 15 Jul, 2024 1 commit
  21. 02 Jul, 2024 1 commit
  22. 14 Jun, 2024 1 commit
  23. 17 Apr, 2024 2 commits
  24. 11 Apr, 2024 1 commit
  25. 01 Apr, 2024 1 commit
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory-leak and stability defects by allowing
      us to isolate llama.cpp in a separate process, shut it down when idle, and
      gracefully restart it if it has problems. This also serves as a first step toward
      running multiple copies to support multiple models concurrently.
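
      The subprocess pattern described here can be sketched as follows; the
      function and binary names are assumptions, and a real implementation also
      needs idle-timeout and health-check plumbing:

      ```go
      // Sketch: run llama.cpp in a child process so a fault can't take down
      // the server, and restart the child if it exits with an error.
      package main

      import (
          "log"
          "os/exec"
          "time"
      )

      func superviseRunner(bin string, args ...string) {
          for {
              cmd := exec.Command(bin, args...)
              if err := cmd.Start(); err != nil {
                  log.Printf("start failed: %v", err)
                  return
              }
              if err := cmd.Wait(); err != nil {
                  log.Printf("runner exited: %v; restarting", err)
                  time.Sleep(time.Second)
                  continue
              }
              return // clean exit, e.g. the runner shut down after going idle
          }
      }

      func main() {
          superviseRunner("./llama-runner", "--model", "model.gguf") // hypothetical binary
      }
      ```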
  26. 28 Mar, 2024 1 commit
  27. 26 Mar, 2024 2 commits
  28. 25 Mar, 2024 1 commit
    • Switch arm cuda base image to centos 7 · 5dacc1eb
      Daniel Hiltgen authored
      We had started using Rocky Linux 8, but they've updated to GCC 10.3,
      which breaks NVCC. 10.2 is compatible (or 10.4, but that's not yet
      available from Rocky Linux 8 repos).
  29. 21 Mar, 2024 1 commit
  30. 15 Mar, 2024 1 commit