  22 Apr, 2025 (1 commit)
      increase default context length to 4096 (#10364) · 424f6486
      Devon Rifkin authored
      * increase default context length to 4096
      
      We lower the default numParallel from 4 to 2 and use these "savings" to
      double the default context length from 2048 to 4096.
      
      We're memory-neutral in cases where we previously would have used
      numParallel == 4. To handle some cases where we would previously
      have fallen back to 1x2048 due to low VRAM, we add a mitigation:
      we decide between 2048 and 4096 with a runtime check, choosing
      2048 on a single-GPU system with total VRAM of 4 GB or less. We
      deliberately check total rather than available VRAM so that the
      context window size doesn't change unexpectedly based on how much
      VRAM happens to be free.
      
      We plan on making the default even larger, but this is a relatively
      low-risk change we can make to quickly double it.
      
      * fix tests
      
      Add an explicit context length so they don't get truncated. The
      code that converts -1 (the signal to perform the runtime check)
      into an actual length doesn't run as part of these tests.
      
      * tweak small gpu message
      
      * clarify context length default
      
      also make it actually show up in `ollama serve --help`
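      As a sketch of the runtime check described above (hedged: GpuInfo
      and defaultContextLength are illustrative names, not the actual
      Ollama symbols):

      ```go
      package main

      import "fmt"

      // GpuInfo is a hypothetical stand-in for the GPU discovery info.
      type GpuInfo struct {
          TotalMemory uint64 // total VRAM in bytes
      }

      // defaultContextLength chooses 2048 on a single-GPU system with
      // <= 4 GB of total VRAM, and 4096 otherwise. Total (not available)
      // VRAM is checked so the default doesn't shift between runs as
      // free memory fluctuates.
      func defaultContextLength(gpus []GpuInfo) int {
          if len(gpus) == 1 && gpus[0].TotalMemory <= 4*1024*1024*1024 {
              return 2048
          }
          return 4096
      }

      func main() {
          fmt.Println(defaultContextLength([]GpuInfo{{TotalMemory: 4 << 30}}))  // 2048
          fmt.Println(defaultContextLength([]GpuInfo{{TotalMemory: 24 << 30}})) // 4096
      }
      ```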
  11 Apr, 2025 (4 commits)
      ggml: Fix memory leak on input tensors · f50d6912
      Jesse Gross authored
      For every forward pass through the model, we need to allocate input
      tensors: tokens, images, positions, outputs and masks. These get
      allocated in system memory.
      
      However, when we close the context that the tensors were allocated
      through, the metadata gets freed but the actual backend memory does
      not. This results in a significant memory leak.
      
      This makes it so that all the memory allocated through a context
      gets freed when it is closed.
      
      Fixes #10040
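      The shape of the fix can be sketched like this (a minimal
      illustration of the pattern; buffer and its free method are
      hypothetical, not the actual ggml bindings):

      ```go
      package main

      import "fmt"

      // buffer stands in for a backend allocation whose memory must be
      // released explicitly (the real code frees C-side memory).
      type buffer struct{ data []byte }

      func (b *buffer) free() { b.data = nil }

      // Context tracks every buffer allocated through it.
      type Context struct {
          buffers []*buffer
      }

      // alloc records each buffer so Close can release it later.
      func (c *Context) alloc(n int) *buffer {
          b := &buffer{data: make([]byte, n)}
          c.buffers = append(c.buffers, b)
          return b
      }

      // Close frees the backend memory, not just the metadata, so the
      // per-pass input tensors no longer leak.
      func (c *Context) Close() {
          for _, b := range c.buffers {
              b.free()
          }
          c.buffers = nil
      }

      func main() {
          c := &Context{}
          c.alloc(1 << 20) // e.g. a tokens tensor for one forward pass
          c.Close()
          fmt.Println(len(c.buffers)) // 0: everything released on Close
      }
      ```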
      ggml: Don't allocate CPU buffers as CUDA Host buffers · 34c3b68f
      Jesse Gross authored
      Allocating (and in particular, freeing) memory from CUDA host buffers
      is expensive and can cause a significant performance hit if we do
      it for every token. Using normal system memory avoids this issue
      and also gives the OS more flexibility to manage it.
      
      There is no performance impact from this patch directly (either
      positive or negative) but it makes a difference once we start
      freeing memory correctly.
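      A hedged sketch of the distinction (illustrative names only; the
      real change selects among ggml buffer types):

      ```go
      package main

      import "fmt"

      type bufferKind int

      const (
          systemMemory   bufferKind = iota // pageable; cheap alloc/free, OS-managed
          cudaHostPinned                   // pinned via cudaMallocHost; fast DMA,
                                           // but costly to allocate and free
      )

      // kindForInputTensors picks where per-forward-pass inputs live.
      // Freeing pinned memory on every token is expensive, so inputs use
      // ordinary system memory; pinned buffers are reserved for
      // long-lived transfer staging.
      func kindForInputTensors() bufferKind { return systemMemory }

      func main() {
          fmt.Println(kindForInputTensors() == systemMemory) // true
      }
      ```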
      ggml: Use pointer receivers for Context · f33ccd5d
      Jesse Gross authored
      Context currently mixes pointer and value receivers. Change it to
      use pointer receivers throughout so we don't have to reason about
      whether the fields we update in the struct will be retained.
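      A minimal illustration of why this matters in Go (a generic
      example, not ggml's actual Context):

      ```go
      package main

      import "fmt"

      type counter struct{ n int }

      func (c counter) incValue()    { c.n++ } // mutates a copy; the update is lost
      func (c *counter) incPointer() { c.n++ } // mutates the receiver itself

      func main() {
          var c counter
          c.incValue()
          fmt.Println(c.n) // 0: the value receiver's change was discarded
          c.incPointer()
          fmt.Println(c.n) // 1: the pointer receiver's change is retained
      }
      ```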
      ggml: Log filesystem errors · bc108b9a
      Jesse Gross authored
      Sometimes loading the GGUF file fails with:
      panic: context canceled
      
      This is probably a filesystem error, but the panic doesn't provide
      any information about what actually happened.
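      The kind of logging this adds can be sketched as follows
      (loadGGUF and its messages are hypothetical; the wrap-and-log
      pattern is the point):

      ```go
      package main

      import (
          "fmt"
          "log/slog"
          "os"
      )

      func loadGGUF(path string) error {
          f, err := os.Open(path)
          if err != nil {
              // Record the underlying filesystem error (missing file,
              // permissions, I/O failure) instead of surfacing only an
              // opaque "context canceled" panic later.
              slog.Error("failed to open GGUF file", "path", path, "error", err)
              return fmt.Errorf("loading %s: %w", path, err)
          }
          defer f.Close()
          // ... parse the GGUF contents here ...
          return nil
      }

      func main() {
          if err := loadGGUF("/nonexistent/model.gguf"); err != nil {
              fmt.Println(err)
          }
      }
      ```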