- 25 Apr, 2025 9 commits
-
-
Michael Yang authored
-
Michael Yang authored
-
Michael Yang authored
Using a default of 1024 when asked for zero is confusing, since most callers seem to assume 0 means do not read any data.
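Below is a hypothetical sketch of the semantics this change implies: a request for zero bytes returns no data instead of silently substituting a 1024-byte default. The readN helper and its signature are illustrative, not the actual code.

```go
// Package data: an illustrative sketch, not ollama's actual reader code.
package data

import "io"

// readN reads exactly n bytes from r. A request for zero (or negative)
// bytes returns no data, matching what most callers assume; previously a
// zero request silently defaulted to 1024 bytes.
func readN(r io.Reader, n int) ([]byte, error) {
	if n <= 0 {
		return nil, nil
	}
	buf := make([]byte, n)
	m, err := io.ReadFull(r, buf)
	return buf[:m], err
}
```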
-
Michael Yang authored
-
Michael Yang authored
-
Michael Yang authored
The first call to http.ResponseWriter.Write implicitly calls WriteHeader with http.StatusOK if it hasn't already been called; once WriteHeader has been called, subsequent calls have no effect. Write is called when JSON-encoding progressUpdateJSON{}, so calls to http.ResponseWriter.WriteHeader after the first encode are useless and produce a warning: http: superfluous response.WriteHeader call from github.com/ollama/ollama/server/internal/registry.(*statusCodeRecorder).WriteHeader (server.go:77)
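A minimal, runnable sketch of the net/http behavior described above; the handler and payload are illustrative, not the actual registry code:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// The first Write implicitly calls WriteHeader(http.StatusOK).
	json.NewEncoder(w).Encode(map[string]string{"status": "pulling"})

	// This later call has no effect and logs:
	// "http: superfluous response.WriteHeader call ..."
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```
-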
Michael Yang authored
-
Michael Yang authored
-
Jeffrey Morgan authored
-
- 24 Apr, 2025 3 commits
-
-
Parth Sareen authored
-
Parth Sareen authored
-
Adrien Duermael authored
-
- 22 Apr, 2025 1 commit
-
-
Devon Rifkin authored
* increase default context length to 4096

  We lower the default numParallel from 4 to 2 and use these "savings" to double the default context length from 2048 to 4096. We're memory-neutral in cases where we previously would've used numParallel == 4, but we add the following mitigation to handle some cases where we would have previously fallen back to 1x2048 due to low VRAM: we decide between 2048 and 4096 using a runtime check, choosing 2048 if we're on a single-GPU system with total VRAM of <= 4 GB (see the sketch below). We purposefully don't check the available VRAM because we don't want the context window size to change unexpectedly based on what happens to be free. We plan on making the default even larger, but this is a relatively low-risk change we can make to quickly double it.

* fix tests

  Add an explicit context length so they don't get truncated. The code that converts -1 from being a signal for doing a runtime check isn't running as part of these tests.

* tweak small gpu message

* clarify context length default

  Also make it actually show up in `ollama serve --help`.
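A hypothetical sketch of the runtime check described above; gpuInfo and defaultContextLength are illustrative names, not the actual implementation:

```go
// Package llm: an illustrative sketch of the default-context-length check.
package llm

type gpuInfo struct {
	TotalVRAM uint64 // bytes
}

// defaultContextLength picks 2048 on a single-GPU system with at most
// 4 GiB of total VRAM, and 4096 otherwise. Total (not available) VRAM is
// used so the default doesn't fluctuate with whatever is free at load time.
func defaultContextLength(gpus []gpuInfo) int {
	if len(gpus) == 1 && gpus[0].TotalVRAM <= 4<<30 {
		return 2048
	}
	return 4096
}
```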
-
- 20 Apr, 2025 2 commits
-
-
Richard Shiue authored
-
greengrass821 authored
Co-authored-by: tooth paste <tooth_paste91@Poorneshwars-MacBook-Pro.local>
-
- 19 Apr, 2025 2 commits
-
-
Michael Yang authored
The models directory should have plenty of storage, and using it also ensures there's no cross-device copy.
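A minimal sketch of the idea, assuming a hypothetical modelsDir and writeBlob; staging the download inside the destination directory makes the final os.Rename a same-device move rather than a cross-device copy:

```go
// Package blob: an illustrative sketch, not the actual download code.
package blob

import (
	"io"
	"os"
	"path/filepath"
)

func writeBlob(modelsDir, name string, r io.Reader) error {
	// Stage the temp file in the models directory itself.
	tmp, err := os.CreateTemp(modelsDir, "blob-*.partial")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename succeeds

	if _, err := io.Copy(tmp, r); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}

	// Same filesystem, so this is an atomic rename, never a copy.
	return os.Rename(tmp.Name(), filepath.Join(modelsDir, name))
}
```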
-
Blake Mizerany authored
Previously, the pull handler would send an error message in the Status field; this prevented the client from using the message as a signal to stop. In the case of the "run" command, it would follow the pull with a "show", which would print a nearly identical "not found" message for unresolved models. Fixes #10307
-
- 18 Apr, 2025 1 commit
-
-
Michael Yang authored
-
- 17 Apr, 2025 3 commits
-
-
Blake Mizerany authored
-
Jeffrey Morgan authored
-
Jeffrey Morgan authored
-
- 16 Apr, 2025 8 commits
-
-
Devon Rifkin authored
docs: change more template blocks to have syntax highlighting
-
Jeffrey Morgan authored
-
Blake Mizerany authored
This removes the extra flushProgress() at the end of handlePull. It is unnecessary because final progress updates are flushed in all cases of the main select loop.
-
Blake Mizerany authored
The completed and received counters must work in tandem and the code should better reflect that. Previously, the act of updating them was 2-3 lines of code duplicated in multiple places. This consolidates them into a single update closure for easy reading and maintenance. This also simplifies error handling in places where we can use a return parameter and defer to handle the error case for updates. Also, remove the old Layer field from the trackingReader struct.
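A hedged sketch of the consolidation described: one closure is the only path that moves the counters, so they can't drift apart. Field and parameter names are illustrative:

```go
// Package registry: an illustrative sketch of a single update closure.
package registry

import "sync/atomic"

type progress struct {
	received  atomic.Int64
	completed atomic.Int64
}

// newUpdater returns the one closure through which both counters are
// updated, replacing the 2-3 duplicated lines at each call site.
func newUpdater(p *progress, flush func()) func(received, completed int64) {
	return func(received, completed int64) {
		p.received.Add(received)
		p.completed.Add(completed)
		flush()
	}
}
```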
-
Daniel Hiltgen authored
Add some new test coverage for various model architectures, and switch from orca-mini to the small llama model.
-
Daniel Hiltgen authored
Fix flaky test failures on Windows
-
Michael Yang authored
-
Blake Mizerany authored
This commit adds retry/backoff to the registry client for pull requests. Also, revert progress indication to match original client's until we can "get it right." Also, make WithTrace wrap existing traces instead of clobbering them. This allows clients to compose traces.
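A hypothetical sketch of the trace-wrapping behavior described above: WithTrace composes with any existing trace instead of clobbering it. The Trace shape and context plumbing are illustrative:

```go
// Package registry: an illustrative sketch of composable traces.
package registry

import "context"

type Trace struct {
	Update func(digest string, n int64, err error)
}

type traceKey struct{}

// WithTrace wraps any trace already in ctx so both callbacks fire,
// letting clients compose traces rather than replace them.
func WithTrace(ctx context.Context, t *Trace) context.Context {
	if prev, ok := ctx.Value(traceKey{}).(*Trace); ok && prev.Update != nil {
		next := t.Update
		t = &Trace{Update: func(digest string, n int64, err error) {
			prev.Update(digest, n, err) // preserve the existing trace
			if next != nil {
				next(digest, n, err)
			}
		}}
	}
	return context.WithValue(ctx, traceKey{}, t)
}
```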
-
- 15 Apr, 2025 5 commits
-
-
Jesse Gross authored
When ggml_backend_buffer_free() is called, the device memory is released but not all backends consistently release the actual ggml_backend_buffer_t in system RAM, causing a memory leak. Bug #10040
-
Devon Rifkin authored
In #8215 syntax highlighting was added to most of the blocks, but there were a couple that were still being rendered as plaintext
-
Devon Rifkin authored
server: add `OpenAI-Beta` header to CORS safelist
-
Devon Rifkin authored
docs: update some response code blocks to json5
-
Devon Rifkin authored
This is to prevent rendering bright red comments indicating invalid JSON when the comments are just supposed to be explanatory
-
- 14 Apr, 2025 2 commits
-
-
Devon Rifkin authored
Alphabetized the compat list and then added a single header. Fixes: #9801
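A small sketch, assuming a cors-middleware-style config, of what safelisting a header looks like; the struct and the surrounding header values are illustrative, not the exact ollama setup:

```go
// Package server: an illustrative sketch of a CORS header safelist.
package server

type corsConfig struct {
	AllowedHeaders []string // alphabetized for easy scanning
}

func newCORSConfig() corsConfig {
	return corsConfig{
		AllowedHeaders: []string{
			"Authorization",
			"Content-Type",
			"OpenAI-Beta", // added so OpenAI SDKs sending this header pass preflight
			"User-Agent",
			"X-Requested-With",
		},
	}
}
```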
-
CYJiang authored
-
- 11 Apr, 2025 4 commits
-
-
Jesse Gross authored
For every forward pass through the model, we need to allocate input tensors: tokens, images, positions, outputs and masks. These get allocated in system memory. However, when we close the context that the tensors were allocated through, the metadata gets freed but the actual backend memory does not. This results in a significant memory leak. This makes it so that all the memory allocated through a context gets freed when it is closed. Fixes #10040
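A hedged sketch of the fix's shape: the context tracks every buffer it allocates and releases them all on Close, so backend memory can't outlive its context. Context, buffer, and free are illustrative names, not the actual ml API:

```go
// Package ml: an illustrative sketch of context-scoped allocation.
package ml

type buffer struct{ ptr uintptr }

func (b *buffer) free() { /* release the backend memory via the C API */ }

type Context struct {
	buffers []*buffer
}

func (c *Context) alloc(size int) *buffer {
	b := &buffer{ /* ... allocate size bytes in the backend ... */ }
	c.buffers = append(c.buffers, b) // remember it for cleanup
	return b
}

// Close frees the backend memory behind every tensor allocated through
// this context, not just the Go-side metadata.
func (c *Context) Close() {
	for _, b := range c.buffers {
		b.free()
	}
	c.buffers = nil
}
```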
-
Jesse Gross authored
Allocating (and in particular, freeing) memory from CUDA host buffers is expensive and can cause a significant performance hit if we do it for every token. Using normal system memory avoids this issue and also gives the OS more flexibility to manage it. There is no performance impact from this patch directly (either positive or negative) but it makes a difference once we start freeing memory correctly.
-
Jesse Gross authored
Context is currently mixed between pointer and value receivers. Change this to be all pointer receivers so we don't have to reason about whether the things we are updating in the struct will be retained.
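A small illustration (not the actual Context code) of the pitfall: a method with a value receiver mutates a copy, so its update is silently lost:

```go
// Package ml: an illustrative example of value vs. pointer receivers.
package ml

type counter struct{ n int }

func (c counter) bumpValue() { c.n++ } // mutates a copy; change is lost

func (c *counter) bumpPointer() { c.n++ } // mutates the shared struct

func example() int {
	c := &counter{}
	c.bumpValue()   // c.n is still 0
	c.bumpPointer() // c.n is now 1
	return c.n
}
```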
-
Jesse Gross authored
Sometimes loading the GGUF file fails with: panic: context canceled. This is probably a filesystem error, but it doesn't provide any information about what happened.
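A minimal sketch of the kind of error context this implies: wrap the failure with the path and operation so the message says more than "context canceled". loadGGUF is an illustrative name:

```go
// Package llm: an illustrative sketch of wrapping load errors.
package llm

import (
	"fmt"
	"os"
)

func loadGGUF(path string) error {
	f, err := os.Open(path)
	if err != nil {
		// %w keeps the underlying error for errors.Is/As while the
		// message records which file and operation failed.
		return fmt.Errorf("opening GGUF %q: %w", path, err)
	}
	defer f.Close()
	// ... parse the GGUF header, wrapping errors the same way ...
	return nil
}
```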
-