- 17 Dec, 2024 5 commits
-
-
Blake Mizerany authored
This fixes another regression introduced by the previous commit, which itself fixed other known bugs.
-
Jascha Beste authored
-
Blake Mizerany authored
Changes in #8002 introduced fixes for bugs with mangling JSON Schemas. It also fixed a bug where the server would silently fail when clients requested invalid formats. Unfortunately, it also introduced a bug where the server would reject requests with an empty format, which should be allowed. The change in #8127 updated the code to allow the empty format, but reintroduced the regression where the server would silently fail when the format was set but invalid.

This commit fixes both regressions: the server no longer rejects the empty format, but it does reject invalid formats. It also adds tests to help us catch regressions in the future, and the updated code provides a more detailed error message when a client sends a non-empty but invalid format, echoing the invalid format in the response. This commit also takes the opportunity to remove superfluous linter checks.
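The accept/reject behavior described above can be sketched as follows. This is a hypothetical illustration, not Ollama's actual handler: the function name `checkFormat` and the exact validation are assumptions; the point is that an empty format passes, while a non-empty invalid format is rejected with an error that echoes the offending value.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// checkFormat sketches the behavior described in the commit message:
// an empty format is accepted, any syntactically valid JSON format is
// accepted, and anything else is rejected with an error that echoes
// the invalid format back to the client.
func checkFormat(format json.RawMessage) error {
	if len(format) == 0 {
		return nil // empty format must be allowed
	}
	if !json.Valid(format) {
		return fmt.Errorf("invalid format: %q", format)
	}
	return nil
}

func main() {
	fmt.Println(checkFormat(nil))                               // empty: accepted
	fmt.Println(checkFormat(json.RawMessage(`{"type":"object"}`))) // valid schema: accepted
	fmt.Println(checkFormat(json.RawMessage(`{bad`)))           // invalid: rejected with echo
}
```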
-
Jeffrey Morgan authored
-
Daniel Hiltgen authored
In 0.5.2 we simplified packaging to build AVX-only for macOS x86. It looks like there may still be some non-AVX systems out there, so this puts back the prior logic of building the primary binary without AVX, plus two runners for AVX and AVX2. These will be packaged in the app bundle only, so the stand-alone binary will now be without AVX support on macOS. On ARM, these runners will also be reported as available in the log, but they are dormant and will never be used at runtime.
-
- 16 Dec, 2024 2 commits
-
-
Michael authored
readme: example/get started guide for pgai with Ollama
-
Jascha Beste authored
* docs: switch around database integrations order and link to quickstart
* docs: link to blog post in example readme
* chore: link to main readme
* readme: remove example and link externally so we don't have to keep this example up-to-date
-
- 15 Dec, 2024 1 commit
-
-
Patrick Devine authored
Refactor mllama image processing code, and add pixtral and qwen2vl
-
- 14 Dec, 2024 2 commits
-
-
Daniel Hiltgen authored
-
Jeffrey Morgan authored
-
- 13 Dec, 2024 2 commits
-
-
Daniel Hiltgen authored
This puts the low-level runner logging back on stderr for consistency with prior releases
-
Anuraag (Rag) Agrawal authored
openai: return usage as final chunk for streams

Co-authored-by: ParthSareen <parth.sareen@ollama.com>
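The shape of a usage-bearing final chunk can be sketched like this. The `Usage` field names follow the OpenAI wire format, but the surrounding types (`chunk`, `finalChunk`) are assumptions for illustration, not Ollama's actual code: the idea is that content chunks carry choices, while the terminating chunk carries an empty choices array plus the token counts.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Usage mirrors the OpenAI token-count object.
type Usage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}

// chunk is a stripped-down streaming response: the final chunk of a
// stream carries usage and an empty choices array.
type chunk struct {
	Choices []string `json:"choices"`
	Usage   *Usage   `json:"usage,omitempty"`
}

// finalChunk builds the terminating chunk for a stream.
func finalChunk(prompt, completion int) chunk {
	return chunk{
		Choices: []string{},
		Usage: &Usage{
			PromptTokens:     prompt,
			CompletionTokens: completion,
			TotalTokens:      prompt + completion,
		},
	}
}

func main() {
	b, _ := json.Marshal(finalChunk(12, 34))
	fmt.Println(string(b))
	// {"choices":[],"usage":{"prompt_tokens":12,"completion_tokens":34,"total_tokens":46}}
}
```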
-
- 12 Dec, 2024 2 commits
-
-
Pascal Patry authored
-
Parth Sareen authored
-
- 11 Dec, 2024 10 commits
-
-
Blake Mizerany authored
Fixes #7944
-
Daniel Hiltgen authored
Pass through the version override so the makefiles use it
-
Blake Mizerany authored
Previously we decoded and re-encoded JSON schemas during validation, which served no purpose since json.RawMessage already validates JSON syntax. Worse, the re-encoding lost field ordering from the original schema, which affects inference quality during step-by-step reasoning.

While fixing this ordering issue by using json.RawMessage directly, testing revealed that schema_to_grammar (from llama.cpp) also fails to preserve field order during grammar generation. This appears to be the root cause of inference degradation.

This change prevents us from mangling the user's original schema order, but we still need to address the ordering issue in schema_to_grammar. That will be a separate change.

Updates #7978
-
Daniel Hiltgen authored
upload-artifacts strips off leading common paths, so when the ./build/ artifacts were removed, the ./dist/windows-amd64 prefix became common and was stripped, causing the later download-artifacts step to place them in the wrong location.
-
Daniel Hiltgen authored
The new build embeds the arm runner in the main binary, so there is no longer a lib/ollama
-
Daniel Hiltgen authored
Remove no longer relevant build log dir
-
Jeffrey Morgan authored
-
Blake Mizerany authored
-
湛露先生 authored
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
-
Phil Wornath authored
-
- 10 Dec, 2024 8 commits
-
-
Tao Zuhong authored
-
frob authored
-
Dr. Daniel Bender authored
-
Daniel Hiltgen authored
The final implementation of #7499 removed dynamic vector requirements in favor of a simpler filename-based model; this was leftover logic that is no longer needed.
-
Stefan Weil authored
-
Daniel Hiltgen authored
The "F" was missing.
-
Daniel Hiltgen authored
* llama: wire up builtin runner

  This adds a new entrypoint into the ollama CLI to run the cgo-built runner. On Mac arm64, this will have GPU support, but on all other platforms it will be the lowest common denominator CPU build. After we fully transition to the new Go runners, more tech debt can be removed and we can stop building the "default" runner via make and rely on the builtin always.

* build: Make target improvements

  Add a few new targets and help for building locally. This also adjusts the runner lookup to favor local builds, then runners relative to the executable, and finally payloads.

* Support customized CPU flags for runners

  This implements a simplified custom CPU flags pattern for the runners. When built without overrides, the runner name contains the vector flag we check for (AVX) to ensure we don't try to run on unsupported systems and crash. If the user builds a customized set, we omit the naming scheme and don't check for compatibility. This avoids checking requirements at runtime, so that logic has been removed as well. This can be used to build GPU runners with no vector flags, or CPU/GPU runners with additional flags (e.g. AVX512) enabled.

* Use relative paths

  If the user checks out the repo in a path that contains spaces, make gets really confused, so use relative paths for everything in-repo to avoid breakage.

* Remove payloads from main binary

* install: clean up prior libraries

  This removes support for v0.3.6 and older versions (before the tar bundle) and ensures we clean up prior libraries before extracting the bundle(s). Without this change, runners and dependent libraries could leak when we update and lead to subtle runtime errors.
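The naming-based compatibility check can be sketched as follows. This is a hypothetical illustration (the function `runnerCompatible` and the flag list are assumptions, not Ollama's actual code): a default build embeds the required vector flag in the runner's name and is only offered on hosts that report the flag, while a customized build omits the flag from its name and skips the check entirely.

```go
package main

import (
	"fmt"
	"strings"
)

// runnerCompatible sketches the check described above: a runner whose
// name embeds a vector flag is only usable when the host CPU reports
// that flag; a name with no flag encoded skips the check.
func runnerCompatible(name string, hostFlags map[string]bool) bool {
	// Check the longer flag first so "avx2" isn't matched as "avx".
	for _, flag := range []string{"avx2", "avx"} {
		if strings.Contains(name, flag) {
			return hostFlags[flag]
		}
	}
	return true // no flag encoded: customized build, no runtime check
}

func main() {
	host := map[string]bool{"avx": true} // a host with AVX but not AVX2
	fmt.Println(runnerCompatible("cpu_avx", host))  // true
	fmt.Println(runnerCompatible("cpu_avx2", host)) // false
	fmt.Println(runnerCompatible("custom", host))   // true: no check
}
```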
-
frob authored
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
-
- 09 Dec, 2024 1 commit
-
-
Jesse Gross authored
New lines can be an important part of a user's prompt, and trimming them can alter the results. We previously only trimmed prompts with images, but refactoring brought this behavior to all prompts, where it became more noticeable. The /generate endpoint adds less whitespace and therefore doesn't need to trim it out; this brings the same behavior to /chat.

Thanks to @gabe-l-hart for spotting the issue!

Fixes #7795
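A toy sketch of why the trimming matters (the function `renderPrompt` is illustrative only, not Ollama's actual template code): whether trailing whitespace is stripped changes the exact prompt the model sees.

```go
package main

import (
	"fmt"
	"strings"
)

// renderPrompt contrasts the old and fixed behavior: trailing newlines
// can be meaningful, so trimming them produces a different prompt than
// passing the text through untouched.
func renderPrompt(prompt string, trim bool) string {
	if trim {
		return strings.TrimSpace(prompt) // old behavior: newlines stripped
	}
	return prompt // fixed behavior: prompt passed through untouched
}

func main() {
	p := "Continue this list:\n1. one\n2.\n"
	fmt.Printf("%q\n", renderPrompt(p, true))  // "Continue this list:\n1. one\n2."
	fmt.Printf("%q\n", renderPrompt(p, false)) // "Continue this list:\n1. one\n2.\n"
}
```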
-
- 08 Dec, 2024 2 commits
-
-
Yannick Gloster authored
-
湛露先生 authored
-
- 06 Dec, 2024 3 commits
-
-
Parth Sareen authored
-
Michael authored
readme: add llama3.3 to readme
-
Parth Sareen authored
-
- 05 Dec, 2024 2 commits
-
-
Jeffrey Morgan authored
-
Parth Sareen authored
-