"README_ORIGIN.md" did not exist on "051f58f1a5a8a7450ffea5c3aadaa2ea4b3a8630"
- 16 Apr, 2025 1 commit
Jeffrey Morgan authored
- 15 Apr, 2025 1 commit
Jesse Gross authored
When ggml_backend_buffer_free() is called, the device memory is released, but not all backends consistently release the actual ggml_backend_buffer_t in system RAM, causing a memory leak. Bug #10040
- 03 Apr, 2025 1 commit
Bruce MacDonald authored
Mistral is a popular research lab that makes open-source models. This updates the forward pass of llama-architecture models to support both llama and mistral models by accounting for additional metadata present in mistral models and finding the correct dimensions for the output projection.
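Purely as illustration, here is a minimal Go sketch of the dimension logic described above; the modelConfig type and its field names are assumptions for the sketch, not Ollama's actual config API. The point is that when a model's metadata carries an explicit head dimension, the attention output projection must be sized from headCount * headDim rather than assumed equal to the embedding length.

```go
package main

import "fmt"

// modelConfig is a hypothetical stand-in for model metadata read from a model file.
type modelConfig struct {
	embeddingLength int // hidden size of the model
	headCount       int // number of attention heads
	headDim         int // optional explicit head dimension; 0 if absent
}

// outputProjectionDims returns the (input, output) dimensions of the attention
// output projection. Llama-style models typically satisfy
// headDim*headCount == embeddingLength, but some mistral models carry an
// explicit head dimension that breaks that assumption.
func outputProjectionDims(c modelConfig) (in, out int) {
	headDim := c.headDim
	if headDim == 0 {
		headDim = c.embeddingLength / c.headCount
	}
	return c.headCount * headDim, c.embeddingLength
}

func main() {
	// Llama-style: head dimension implied by hidden size / heads.
	fmt.Println(outputProjectionDims(modelConfig{embeddingLength: 4096, headCount: 32}))
	// Mistral-style: explicit head dimension with a different hidden size.
	fmt.Println(outputProjectionDims(modelConfig{embeddingLength: 5120, headCount: 32, headDim: 128}))
}
```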
- 27 Mar, 2025 1 commit
saman-amd authored
- 13 Mar, 2025 1 commit
shane.xb.qian authored
* macOS has a different definition, per info from @mxyng
- 12 Mar, 2025 1 commit
shane.xb.qian authored
Signed-off-by: shane.xb.qian <shane.qian@foxmail.com>
- 11 Mar, 2025 1 commit
Michael Yang authored
- 07 Mar, 2025 3 commits
Michael Yang authored
Some tensors are expected to be used in repeating layers but are not themselves repeated. This change copies these tensors into the same backends as their repeating counterparts to minimize copying tensors between backends.
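A rough Go sketch of the placement idea, using a hypothetical Backend type rather than Ollama's real structures: a shared (non-repeating) tensor is duplicated onto every backend that hosts at least one of the repeating layers that read it, so those layers never need to pull it from another device.

```go
package main

import "fmt"

// Backend is a hypothetical stand-in for a compute device (GPU 0, GPU 1, CPU, ...).
type Backend string

// placeShared returns the set of backends onto which a shared (non-repeating)
// tensor should be copied: every backend that hosts at least one of the
// repeating layers that use it.
func placeShared(layerBackends []Backend) []Backend {
	seen := map[Backend]bool{}
	var placements []Backend
	for _, b := range layerBackends {
		if !seen[b] {
			seen[b] = true
			placements = append(placements, b)
		}
	}
	return placements
}

func main() {
	// Layers 0-1 on GPU0, layers 2-3 on GPU1: the shared tensor is copied to both GPUs once.
	layers := []Backend{"GPU0", "GPU0", "GPU1", "GPU1"}
	fmt.Println(placeShared(layers)) // [GPU0 GPU1]
}
```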
Michael Yang authored
Use a strategy similar to llama.cpp's for deciding where tensors should be allocated. This will be improved later to be aware of usable memory before assigning the tensor.
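For illustration, a simplified Go sketch of a llama.cpp-style layer split: repeating layers are assigned to devices roughly in proportion to a per-device capacity weight. The types and the proportional heuristic are assumptions for the sketch, not the exact Ollama implementation.

```go
package main

import "fmt"

// device is a hypothetical compute device with a relative capacity weight
// (for example, free VRAM in GiB).
type device struct {
	name   string
	weight float64
}

// assignLayers maps each of n repeating layers to a device, splitting them
// proportionally to the device weights (llama.cpp-style layer split).
func assignLayers(n int, devices []device) []string {
	total := 0.0
	for _, d := range devices {
		total += d.weight
	}
	out := make([]string, n)
	layer := 0
	for i, d := range devices {
		count := int(float64(n) * d.weight / total)
		if i == len(devices)-1 {
			count = n - layer // last device takes the remainder
		}
		for j := 0; j < count && layer < n; j++ {
			out[layer] = d.name
			layer++
		}
	}
	return out
}

func main() {
	devs := []device{{"GPU0", 24}, {"GPU1", 8}}
	fmt.Println(assignLayers(32, devs)) // roughly 24 layers on GPU0, 8 on GPU1
}
```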
Jeffrey Morgan authored
- 04 Mar, 2025 1 commit
Michael Yang authored
* Output backend system info when initializing the backend. This ensures this information is always present without needing to be called explicitly.
* Convert to structured logging.
* Enumerate devices rather than backends, since devices are ordered.
* Track device indices grouped by device name.
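As a rough illustration of the structured-logging and index-grouping idea, a small Go sketch using the standard library's log/slog; the device list and field names are invented for the example, not the fields Ollama actually emits.

```go
package main

import "log/slog"

func main() {
	// Hypothetical enumeration result; in practice this would come from the backend library.
	devices := []struct{ name, kind string }{
		{"NVIDIA GeForce RTX 4090", "gpu"},
		{"NVIDIA GeForce RTX 4090", "gpu"},
		{"CPU", "cpu"},
	}

	// Track indices grouped by device name, so identical devices get 0, 1, ...
	indexByName := map[string]int{}
	for _, d := range devices {
		i := indexByName[d.name]
		indexByName[d.name] = i + 1
		slog.Info("system", "device", d.name, "type", d.kind, "index", i)
	}
}
```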
- 03 Mar, 2025 1 commit
Michael Yang authored
Expand backend loading error handling to catch more problems and log them instead of panicking.
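A small Go sketch of the pattern, with a hypothetical loadBackend helper standing in for the real cgo/ggml loading path: a failure on one candidate library is logged and skipped rather than bringing the whole process down.

```go
package main

import (
	"errors"
	"log/slog"
)

// loadBackend is a hypothetical loader for a single backend library.
// It stands in for the real cgo/ggml loading path.
func loadBackend(path string) error {
	if path == "" {
		return errors.New("empty path")
	}
	return nil // pretend the load succeeded
}

func main() {
	candidates := []string{"/lib/ollama/cuda_v12", "", "/lib/ollama/cpu"}
	var loaded []string
	for _, p := range candidates {
		if err := loadBackend(p); err != nil {
			// Log and continue with the next candidate instead of panicking.
			slog.Warn("failed to load backend", "path", p, "error", err)
			continue
		}
		loaded = append(loaded, p)
	}
	slog.Info("backends loaded", "count", len(loaded))
}
```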
- 27 Feb, 2025 3 commits
Michael Yang authored
Jeffrey Morgan authored
Fixes sync filters and lowers CUDA version to 11.3 in test.yaml
Jeffrey Morgan authored
- 25 Feb, 2025 1 commit
Blake Mizerany authored
During work on our new registry client, I ran into frustrations with CI where a misspelling in a comment caused the linter to fail, which caused the tests to not run, which caused the build to not be cached, which caused the next run to be slow, which caused me to be sad. This commit addresses these issues and pulls in some helpful changes we've had in CI on ollama.com for some time now. They are:

* Always run tests, even if the other checks fail. Tests are the most important part of CI and should always run. Failures in tests can be correlated with failures in other checks and can help surface the root cause of the failure sooner. This is especially important when the failure is platform specific and the tests are not platform independent.
* Check that `go generate` is clean. This prevents regressions of 'go generate' abuse. This codebase used to use it to generate platform-specific binary build artifacts; let's make sure that does not happen again, that this powerful tool is used correctly, and that the generated code is checked in. While adding the `go generate` check, it was also revealed that the generated Metal code was putting dates in the comments, resulting in non-deterministic builds. This is a bad practice, and this commit fixes that. Git tells us the most important date: the commit date, along with other associated changes.
* Check that `go mod tidy` is clean. A new job checks that `go mod tidy` is clean, to prevent easily avoidable merge conflicts and go.mod changes being deferred to a future PR that is unrelated to the change that caused go.mod to change.
* More robust caching. We now cache the go build cache and the go mod download cache independently. This is because the download cache contains zips that can be unpacked in parallel faster than they can be fetched and extracted by tar. This speeds up the build significantly.

The linter is hostile enough. It does not need to also punish us with longer build times due to small failures like misspellings.
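The commit implements these checks as CI jobs; purely to illustrate the invariant being enforced, here is a Go test-style sketch (the commands and file names are assumptions) that fails when `go mod tidy` would change the committed module files.

```go
package ci_test

import (
	"os/exec"
	"testing"
)

// TestGoModTidyClean runs `go mod tidy` and then asks git whether go.mod or
// go.sum changed. Any diff means the committed files were not tidy.
func TestGoModTidyClean(t *testing.T) {
	if out, err := exec.Command("go", "mod", "tidy").CombinedOutput(); err != nil {
		t.Fatalf("go mod tidy failed: %v\n%s", err, out)
	}
	// --exit-code makes git diff return a non-zero status when files differ.
	if out, err := exec.Command("git", "diff", "--exit-code", "--", "go.mod", "go.sum").CombinedOutput(); err != nil {
		t.Fatalf("go.mod/go.sum are not tidy:\n%s", out)
	}
}
```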
- 24 Feb, 2025 1 commit
Jeffrey Morgan authored
- 19 Feb, 2025 1 commit
Jeffrey Morgan authored
- 18 Feb, 2025 1 commit
Michael Yang authored
Sapphire Rapids has AMX support, but it ends up having a negative performance impact. Emerald Rapids also has AMX support, with a positive performance impact; however, there's no reasonable way in ggml to differentiate between the two. The impact is small (~6%), so disable AMX entirely for simplicity.
- 14 Feb, 2025 1 commit
Jeffrey Morgan authored
- 11 Feb, 2025 1 commit
Michael Yang authored
* wrap ggml_backend_load_best in try/catch
* ignore non-ollama paths
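The try/catch wrapping happens on the C++ side of the loader; the second bullet can be illustrated with a small Go sketch (directory layout assumed for the example) that keeps only candidate library paths located under Ollama's own lib directory.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// filterOllamaPaths keeps only candidate backend library paths that live
// under the given Ollama library directory, so stray system libraries are
// never picked up. The directory layout here is an assumption for the sketch.
func filterOllamaPaths(libDir string, candidates []string) []string {
	var kept []string
	prefix := filepath.Clean(libDir) + string(filepath.Separator)
	for _, c := range candidates {
		if strings.HasPrefix(filepath.Clean(c), prefix) {
			kept = append(kept, c)
		}
	}
	return kept
}

func main() {
	candidates := []string{
		"/usr/lib/ollama/cuda_v12/libggml-cuda.so",
		"/usr/lib/x86_64-linux-gnu/libggml.so", // not an Ollama path; ignored
	}
	fmt.Println(filterOllamaPaths("/usr/lib/ollama", candidates))
}
```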
- 10 Feb, 2025 1 commit
Jeffrey Morgan authored
- 06 Feb, 2025 1 commit
Michael Yang authored
* chore: update gitattributes
* chore: add build info source
- 04 Feb, 2025 1 commit
Jeffrey Morgan authored
- 31 Jan, 2025 2 commits
Michael Yang authored
Michael Yang authored
This reverts commit bea1f1fa.
- 30 Jan, 2025 1 commit
Michael Yang authored
- 29 Jan, 2025 1 commit
Michael Yang authored
* add build to .dockerignore
* test: only build one arch
* add build to .gitignore
* fix ccache path
* filter amdgpu targets
* only filter if autodetecting
* Don't clobber gpu list for default runner. This ensures the GPU-specific environment variables are set properly.
* Update build_windows.ps1. This isn't complete, but is close. Dependencies are missing, and it only builds the "default" preset.
* build: add ollama subdir
* add .git to .dockerignore
* docs: update development.md
* update build_darwin.sh
* remove unused scripts
* llm: add cwd and build/lib/ollama to library paths
* default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
* add additional cmake output vars for msvc
* interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
* remove unnecessary filepath.Dir, cleanup
* add hardware-specific directory to path
* use absolute server path
* build: linux arm
* cmake install targets
* remove unused files
* ml: visit each library path once
* build: skip cpu variants on arm
* build: install cpu targets
* build: fix workflow
* shorter names
* fix rocblas install
* docs: clean up development.md
* consistent build dir removal in development.md
* silence -Wimplicit-function-declaration build warnings in ggml-cpu
* update readme
* update development readme
* llm: update library lookup logic now that there is one runner (#8587)
* tweak development.md
* update docs
* add windows cuda/rocm tests

Co-authored-by: jmorganca <jmorganca@gmail.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>