"vscode:/vscode.git/clone" did not exist on "6b8102408453c4f7cec0a806b24930e60477c132"
- 07 May, 2025 2 commits
Daniel Hiltgen authored
Daniel Hiltgen authored
This reduces the size of our Windows installer payloads by ~256M by dropping support for nvidia drivers older than Feb 2023. Hardware support is unchanged. Linux default bundle sizes are reduced by ~600M to 1G.
- 16 Apr, 2025 1 commit
Jeffrey Morgan authored
- 27 Feb, 2025 2 commits
Blake Mizerany authored
The linter is secondary to the tests, so it should run after the tests, exposing test failures faster.
Jeffrey Morgan authored
Fixes sync filters and lowers CUDA version to 11.3 in test.yaml
- 25 Feb, 2025 3 commits
Blake Mizerany authored
During work on our new registry client, I ran into frustrations with CI where a misspelling in a comment caused the linter to fail, which caused the tests to not run, which caused the build to not be cached, which caused the next run to be slow, which caused me to be sad. This commit addresses these issues and pulls in some helpful changes we've had in CI on ollama.com for some time now. They are:

* Always run tests, even if the other checks fail. Tests are the most important part of CI and should always run. Failures in tests can be correlated with failures in other checks and can help surface the root cause of the failure sooner. This is especially important when the failure is platform specific and the tests are not platform independent.

* Check that `go generate` is clean. This prevents regressions of 'go generate' abuse. This codebase used to use it to generate platform-specific binary build artifacts. Let's make sure that does not happen again, that this powerful tool is used correctly, and that the generated code is checked in. While adding the `go generate` check, it was also revealed that the generated metal code was putting dates in the comments, resulting in non-deterministic builds. This is a bad practice, and this commit fixes that. Git tells us the most important date: the commit date, along with other associated changes.

* Check that `go mod tidy` is clean. A new job was added to check that `go mod tidy` is clean, to prevent easily preventable merge conflicts or go.mod changes being deferred to a future PR that is unrelated to the change that caused go.mod to change (a sketch of such a check follows below).

* More robust caching. We now cache the go build cache and the go mod download cache independently. This is because the download cache contains zips that can be unpacked in parallel faster than they can be fetched and extracted by tar. This speeds up the build significantly.

The linter is hostile enough. It does not need to also punish us with longer build times due to small failures like misspellings.
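A check like the `go mod tidy` job described above can be approximated with a small helper; this is a hypothetical standalone sketch (the real change is a CI job, and this flow is an assumption):

```go
// Run `go mod tidy`, then fail if go.mod or go.sum changed.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if out, err := exec.Command("go", "mod", "tidy").CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "go mod tidy failed: %v\n%s", err, out)
		os.Exit(1)
	}
	// --exit-code makes git diff return non-zero when the files differ.
	cmd := exec.Command("git", "diff", "--exit-code", "--", "go.mod", "go.sum")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "go mod tidy is not clean; commit the changes above")
		os.Exit(1)
	}
}
```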
Daniel Hiltgen authored
* Bump cuda and rocm versions: update ROCm to linux:6.3, win:6.2 and CUDA v12 to 12.8. Yum has some silent failure modes, so largely switch to dnf.
* Fix windows build script
Blake Mizerany authored
This commit copies (without history) the bmizerany/ollama-go repository with the intention of integrating it into ollama as a replacement for the pushing and pulling of models, and for management of the cache they are pushed to and pulled from. New homes for these packages will be determined as they are integrated and we have a better understanding of proper package boundaries.
- 20 Feb, 2025 1 commit
Michael Yang authored
clang-built outputs are faster. We were previously building with clang via the gcc wrapper in cgo, but this was missed during the build updates, so there was a drop in performance.
- 18 Feb, 2025 1 commit
Michael Yang authored
Set owner and group when building the linux tarball so extracted files are consistent. This matches the behaviour of release tarballs in version 0.5.7 and lower.
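For illustration, here is a minimal Go sketch of pinning tar ownership (file names are assumptions; the release script may simply pass owner/group flags to tar):

```go
// Write a tarball whose entries are owned by root:root regardless of the
// user running the build, so extraction is consistent across machines.
package main

import (
	"archive/tar"
	"compress/gzip"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("bin/ollama") // hypothetical payload
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("ollama-linux-amd64.tgz") // hypothetical name
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	gz := gzip.NewWriter(out)
	tw := tar.NewWriter(gz)
	hdr := &tar.Header{
		Name:  "bin/ollama",
		Mode:  0o755,
		Size:  int64(len(data)),
		Uid:   0, // pin owner/group instead of inheriting the build user
		Gid:   0,
		Uname: "root",
		Gname: "root",
	}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(data); err != nil {
		log.Fatal(err)
	}
	tw.Close()
	gz.Close()
}
```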
- 08 Feb, 2025 1 commit
Michael Yang authored
ollama requires vcruntime140_1.dll, which isn't found on 2019. Previously the job used the windows (2019) runner but explicitly installs 2022 to build the app. Since the sign job doesn't actually build anything, it can use the windows-2022 runner instead.
- 06 Feb, 2025 2 commits
Michael Yang authored
The find returns intermediate directories, which pulls in the parent directories; it also omits files under lib/ollama. Switch back to globbing.
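As a rough Go illustration of globbing instead of find (the layout and pattern are assumptions):

```go
// Glob a known directory level for files, rather than walking with find,
// which also emits the intermediate (parent) directories.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	matches, err := filepath.Glob("dist/lib/ollama/*") // assumed layout
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		if info, err := os.Stat(m); err == nil && !info.IsDir() {
			fmt.Println(m) // files only, no parent directories
		}
	}
}
```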
Michael Yang authored
* chore: update gitattributes
* chore: add build info source
- 05 Feb, 2025 2 commits
Michael Yang authored
Michael Yang authored
this improves build reliability and concurrency
- 04 Feb, 2025 2 commits
Michael Yang authored
Michael Yang authored
- 03 Feb, 2025 2 commits
Michael Yang authored
Michael Yang authored
- 31 Jan, 2025 2 commits
Michael Yang authored
The env context is not accessible from job.*.strategy. Since it's in the environment, just tell docker to use the environment variable[1].

[1]: https://docs.docker.com/reference/cli/docker/buildx/build/#build-arg
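For reference, the documented docker behavior can be exercised from Go like this (a sketch; the flag value is hypothetical):

```go
// Passing --build-arg with a bare name (no =value) tells buildx to read
// the value from the process environment, per the linked docs.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "buildx", "build", "--build-arg", "GOFLAGS", ".")
	cmd.Env = append(os.Environ(), "GOFLAGS=-trimpath") // hypothetical value
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}
```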
Michael Yang authored
- 30 Jan, 2025 1 commit
Michael Yang authored
- 29 Jan, 2025 1 commit
Michael Yang authored
* add build to .dockerignore
* test: only build one arch
* add build to .gitignore
* fix ccache path
* filter amdgpu targets
* only filter if autodetecting
* Don't clobber gpu list for default runner. This ensures the GPU specific environment variables are set properly
* explicitly set CXX compiler for HIP
* Update build_windows.ps1. This isn't complete, but is close. Dependencies are missing, and it only builds the "default" preset.
* build: add ollama subdir
* add .git to .dockerignore
* docs: update development.md
* update build_darwin.sh
* remove unused scripts
* llm: add cwd and build/lib/ollama to library paths
* default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS (see the sketch below)
* add additional cmake output vars for msvc
* interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
* remove unnecessary filepath.Dir, cleanup
* add hardware-specific directory to path
* use absolute server path
* build: linux arm
* cmake install targets
* remove unused files
* ml: visit each library path once
* build: skip cpu variants on arm
* build: install cpu targets
* build: fix workflow
* shorter names
* fix rocblas install
* docs: clean up development.md
* consistent build dir removal in development.md
* silence -Wimplicit-function-declaration build warnings in ggml-cpu
* update readme
* update development readme
* llm: update library lookup logic now that there is one runner (#8587)
* tweak development.md
* update docs
* add windows cuda/rocm tests

Co-authored-by: jmorganca <jmorganca@gmail.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
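The macOS library-path fallback mentioned in the list above can be illustrated with a minimal Go sketch (assumed logic and names, not the runner's actual code):

```go
// Fall back to LD_LIBRARY_PATH when DYLD_LIBRARY_PATH is unset on macOS,
// so a single variable can drive library lookup on both Linux and macOS.
package main

import (
	"os"
	"runtime"
)

func libraryPath() string {
	if runtime.GOOS == "darwin" {
		if p := os.Getenv("DYLD_LIBRARY_PATH"); p != "" {
			return p
		}
	}
	return os.Getenv("LD_LIBRARY_PATH")
}

func main() {
	println("library path:", libraryPath())
}
```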
- 11 Dec, 2024 3 commits
Daniel Hiltgen authored
upload-artifacts strips off leading common paths, so when the ./build/ artifacts were removed, the ./dist/windows-amd64 prefix became common and was stripped, causing the later download-artifacts step to place them in the wrong location.
Daniel Hiltgen authored
Remove no longer relevant build log dir
Jeffrey Morgan authored
- 10 Dec, 2024 1 commit
Daniel Hiltgen authored
* llama: wire up builtin runner

This adds a new entrypoint into the ollama CLI to run the cgo-built runner. On Mac arm64, this will have GPU support, but on all other platforms it will be the lowest common denominator CPU build. After we fully transition to the new Go runners, more tech debt can be removed and we can stop building the "default" runner via make and rely on the builtin always.

* build: Make target improvements

Add a few new targets and help for building locally. This also adjusts the runner lookup to favor local builds, then runners relative to the executable, and finally payloads (a sketch of this lookup order follows below).

* Support customized CPU flags for runners

This implements a simplified custom CPU flags pattern for the runners. When built without overrides, the runner name contains the vector flag we check for (AVX) to ensure we don't try to run on unsupported systems and crash. If the user builds a customized set, we omit the naming scheme and don't check for compatibility. This avoids checking requirements at runtime, so that logic has been removed as well. This can be used to build GPU runners with no vector flags, or CPU/GPU runners with additional flags (e.g. AVX512) enabled.

* Use relative paths

If the user checks out the repo in a path that contains spaces, make gets really confused, so use relative paths for everything in-repo to avoid breakage.

* Remove payloads from main binary

* install: clean up prior libraries

This removes support for v0.3.6 and older versions (before the tar bundle) and ensures we clean up prior libraries before extracting the bundle(s). Without this change, runners and dependent libraries could leak when we update and lead to subtle runtime errors.
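The lookup order described under "build: Make target improvements" can be sketched in Go roughly as follows (directory names are assumptions, not the actual paths):

```go
// Probe candidate runner directories in priority order: local build tree,
// then next to the executable, then an extracted payload location.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func findRunners() (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	candidates := []string{
		filepath.Join("build", "lib", "ollama"),           // local developer build (assumed)
		filepath.Join(filepath.Dir(exe), "lib", "ollama"), // relative to the executable (assumed)
		filepath.Join(os.TempDir(), "ollama", "runners"),  // extracted payloads (hypothetical)
	}
	for _, dir := range candidates {
		if info, err := os.Stat(dir); err == nil && info.IsDir() {
			return dir, nil
		}
	}
	return "", fmt.Errorf("no runner directory found")
}

func main() {
	dir, err := findRunners()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using runners from", dir)
}
```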
- 05 Dec, 2024 1 commit
Jeffrey Morgan authored
- 12 Nov, 2024 1 commit
Daniel Hiltgen authored
It looks like 8 minutes isn't quite enough and we're seeing sporadic timeouts
- 04 Nov, 2024 3 commits
Daniel Hiltgen authored
Daniel Hiltgen authored
GitHub Actions matrix strategy can't access env settings
Daniel Hiltgen authored
- 02 Nov, 2024 1 commit
Daniel Hiltgen authored
This leverages caching and some reduced installer scope to try to speed up builds. It also tidies up some windows build logic that was only relevant for the older generate/cmake builds.
- 30 Oct, 2024 2 commits
Daniel Hiltgen authored
This will no longer error if built with regular gcc on windows. To help triage issues that may come in related to different compilers, the runner now reports the compiler used by cgo.
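One way a cgo binary can report which compiler built it is via compiler-defined macros in the C preamble; this is a hedged sketch of the idea, not necessarily how the runner implements it:

```go
// The C preamble inspects __clang__/__GNUC__ at build time and exposes
// the result as a string that Go can print at startup.
package main

/*
#if defined(__clang__)
static const char *cc_name = "clang";
#elif defined(__GNUC__)
static const char *cc_name = "gcc";
#else
static const char *cc_name = "unknown";
#endif
*/
import "C"

import "fmt"

func main() {
	fmt.Println("cgo compiled with:", C.GoString(C.cc_name))
}
```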
Daniel Hiltgen authored
* Remove llama.cpp submodule and shift new build to top
* CI: install msys and clang gcc on win. Needed for deepseek to work properly on windows
- 17 Oct, 2024 1 commit
Daniel Hiltgen authored
* Refine llama.cpp vendoring workflow tools: switch from sync.sh over to make-based tooling
* Run new make sync and patch flow
- 08 Oct, 2024 1 commit
Jeffrey Morgan authored
* Re-introduce the llama package

This PR brings back the llama package, making it possible to call llama.cpp and ggml APIs from Go directly via CGo. This has a few advantages:

- C APIs can be called directly from Go without needing to use the previous "server" REST API
- On macOS and for CPU builds on Linux and Windows, Ollama can be built without a go generate ./... step, making it easy to get up and running to hack on parts of Ollama that don't require fast inference
- Faster build times for AVX, AVX2, CUDA and ROCm (a full build of all runners takes <5 min on a fast CPU)
- No git submodule, making it easier to clone and build from source

This is a big PR, but much of it is vendor code except for:

- llama.go CGo bindings
- example/: a simple example of running inference
- runner/: a subprocess server designed to replace the llm/ext_server package
- Makefile: an as-minimal-as-possible Makefile to build the runner package for different targets (cpu, avx, avx2, cuda, rocm)

Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>

* cache: Clear old KV cache entries when evicting a slot

When forking a cache entry, if no empty slots are available we evict the least recently used one and copy over the KV entries from the closest match. However, this copy does not overwrite existing values but only adds new ones. Therefore, we need to clear the old slot first. This change fixes two issues:

- The KV cache fills up and runs out of space even though we think we are managing it correctly
- Performance gets worse over time as we use new cache entries that are not hot in the processor caches

* doc: explain golang objc linker warning (#6830)

* llama: gather transitive dependencies for rocm for dist packaging (#6848)

* Refine go server makefiles to be more DRY (#6924)

This breaks up the monolithic Makefile for the Go based runners into a set of utility files as well as recursive Makefiles for the runners. Files starting with the name "Makefile" are buildable, while files that end with ".make" are utilities to include in other Makefiles. This reduces the amount of nearly identical targets and helps set a pattern for future community contributions for new GPU runner architectures. When we are ready to switch over to the Go runners, these files should move to the top of the repo, and we should add targets for the main CLI, as well as a helper "install" (put all the built binaries on the local system in a runnable state) and "dist" target (generate the various tar/zip files for distribution) for local developer use.

* llama: don't create extraneous directories (#6988)

* llama: Exercise the new build in CI (#6989)

Wire up some basic sanity testing in CI for the Go runner. GPU runners are not covered yet.

* llama: Refine developer docs for Go server (#6842)

This enhances the documentation for development, focusing on the new Go server. After we complete the transition, further doc refinements can remove the "transition" discussion.

* runner.go: Allocate batches for all sequences during init

We should tell the model that we could have full batches for all sequences. We already do this when we allocate the batches, but it was missed during initialization.

* llama.go: Don't return nil from Tokenize on zero length input

Potentially receiving nil in a non-error condition is surprising to most callers - it's better to return an empty slice (a small sketch of this convention follows after this entry).

* runner.go: Remove stop tokens from cache

If the last token is EOG then we don't return it and it isn't present in the cache (because it was never submitted to Decode). This works well for extending the cache entry with a new sequence. However, for multi-token stop sequences, we won't return any of the tokens, but all but the last one will be in the cache. This means when the conversation continues the cache will contain tokens that don't overlap with the new prompt. This works (we will pick up the portion where there is overlap) but it causes unnecessary cache thrashing because we will fork the original cache entry as it is not a perfect match. By trimming the cache to the tokens that we actually return, this issue can be avoided.

* runner.go: Simplify flushing of pending tokens

* runner.go: Update TODOs

* runner.go: Don't panic when processing sequences

If there is an error processing a sequence, we should return a clean HTTP error back to Ollama rather than panicking. This will make us more resilient to transient failures. Panics can still occur during startup as there is no way to serve requests if that fails.

Co-authored-by: jmorganca <jmorganca@gmail.com>

* runner.go: More accurately capture timings

Currently prompt processing time doesn't capture the time that it takes to tokenize the input, only decoding time. We should capture the full process to more accurately reflect reality. This is especially true once we start processing images where the initial processing can take significant time. This is also more consistent with the existing C++ runner.

* runner.go: Support for vision models

In addition to bringing feature parity with the C++ runner, this also incorporates several improvements:

- Cache prompting works with images, avoiding the need to re-decode embeddings for every message in a conversation
- Parallelism is supported, avoiding the need to restrict to one sequence at a time. (Though for now Ollama will not schedule them while we might need to fall back to the old runner.)

Co-authored-by: jmorganca <jmorganca@gmail.com>

* runner.go: Move Unicode checking code and add tests

* runner.go: Export external cache members

Runner and cache are in the same package so the change doesn't affect anything, but it is more internally consistent.

* runner.go: Image embedding cache

Generating embeddings from images can take significant time (on my machine between 100ms and 8s depending on the model). Although we already cache the result of decoding these images, the embeddings need to be regenerated every time. This is not necessary if we get the same image over and over again, for example, during a conversation. This currently uses a very small cache with a very simple algorithm, but it is easy to improve as is warranted.

* llama: catch up on patches

Carry forward solar-pro and cli-unicode patches

* runner.go: Don't re-allocate memory for every batch

We can reuse memory allocated from batch to batch since batch size is fixed. This both saves the cost of reallocation and keeps the cache lines hot. This results in a roughly 1% performance improvement for token generation with Nvidia GPUs on Linux.

* runner.go: Default to classic input cache policy

The input cache as part of the go runner implemented a cache policy that aims to maximize hit rate in both single- and multi-user scenarios. When there is a cache hit, the response is very fast. However, performance is actually slower when there is an input cache miss due to worse GPU VRAM locality. This means that performance is generally better overall for multi-user scenarios (better input cache hit rate; locality was relatively poor already) but worse for single users (input cache hit rate is about the same; locality is now worse). This defaults the policy back to the old one to avoid a regression but keeps the new one available through an environment variable, OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is to improve this in the future to get the best of both worlds without user configuration. For inputs that result in cache misses, on Nvidia/Linux this change improves performance by 31% for prompt processing and 13% for token generation.

* runner.go: Increase size of response channel

Generally the CPU can easily keep up with handling responses that are generated, but there's no reason not to let generation continue and handle things in larger batches if needed.

* llama: Add CI to verify all vendored changes have patches (#7066)

Make sure we don't accidentally merge changes in the vendored code that aren't also reflected in the patches.

* llama: adjust clip patch for mingw utf-16 (#7065)

* llama: ensure static linking of runtime libs

Avoid runtime dependencies on non-standard libraries.

* runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)

These are two features that are shown on llama.cpp's system info that are currently different between the two runners. On my test systems the performance difference is very small to negligible, but it is probably still good to equalize the features.

* llm: Don't add BOS/EOS for tokenize requests

This is consistent with what server.cpp currently does. It affects things like token processing counts for embedding requests.

* runner.go: Don't cache prompts for embeddings

Our integration with server.cpp implicitly disables prompt caching because it is not part of the JSON object being parsed; this makes the Go runner behave similarly. Prompt caching has been seen to affect the results of text completions on certain hardware. The results are not wrong either way, but they are non-deterministic. However, embeddings seem to be affected even on hardware that does not show this behavior for completions. For now, it is best to maintain consistency with the existing behavior.

* runner.go: Adjust debug log levels

Add system info printed at startup and quiet down noisier logging.

* llama: fix compiler flag differences (#7082)

Adjust the flags for the new Go server to more closely match the generate flow.

* llama: refine developer docs (#7121)

* llama: doc and example clean up (#7122)

* llama: Move new dockerfile into llama dir

Temporary home until we fully transition to the Go server.

* llama: runner doc cleanup

* llama.go: Add description for Tokenize error case

Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
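The "Don't return nil from Tokenize" convention above can be shown with a tiny self-contained sketch (illustrative names only; the real Tokenize wraps llama.cpp via CGo):

```go
package main

import "fmt"

// tokenize is a stand-in for the real CGo-backed Tokenize.
func tokenize(text string) ([]int, error) {
	if text == "" {
		// Return an empty slice rather than nil: callers can range,
		// len(), and append without a surprising nil in a non-error path.
		return []int{}, nil
	}
	return []int{1, 2, 3}, nil // placeholder tokens
}

func main() {
	toks, err := tokenize("")
	fmt.Println(toks, err, toks == nil) // prints: [] <nil> false
}
```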
- 24 Sep, 2024 1 commit
Daniel Hiltgen authored
write-host in powershell writes directly to the console and will not be picked up by a pipe; echo or write-output will.
- 21 Sep, 2024 1 commit
Daniel Hiltgen authored
The uploaded artifact is missing the dist prefix since all payloads are in the same directory, so restore the prefix on download.
- 20 Sep, 2024 1 commit
Daniel Hiltgen authored