1. 14 Mar, 2025 6 commits
    • gemma3: Allow multiple images in a single input · 7bf793a6
      Jesse Gross authored
      Previously, processing multiple images in a batch could trigger
      segfaults, so sending images together was disabled as a
      mitigation. The trigger was processing one image on the CPU and
      another on the GPU.
      
      This can no longer happen:
       - The vision encoder is now on the GPU so both images would be
         processed on the GPU.
        - We require images to be fully contained in a batch, and each
          image, including its special tokens, is over half the batch
          size. As a result, we will never get two images in the same
          batch.
      
      Fixes #9731
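      As a rough illustration of the second point, here is a minimal Go
      sketch (a hypothetical helper, not the actual runner code) of the
      containment check implied by the batch-size constraint:

      ```go
      // fitsInBatch is a hypothetical helper: an image must be fully
      // contained in the current batch to be scheduled into it.
      func fitsInBatch(batchUsed, imageTokens, batchSize int) bool {
          return batchUsed+imageTokens <= batchSize
      }
      ```

      Since every image (with its special tokens) takes more than half
      the batch, fitting a second image would require more than
      batchSize tokens in total, so the check always fails and the
      images land in separate batches.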
    • ollamarunner: Use a separate context per multimodal input · 282bfaaa
      Jesse Gross authored
      Currently there is a single context per sequence, shared by all
      multimodal inputs. Since we build a vision encoder graph per
      image, a large number of inputs can eventually hit the maximum
      number of graph nodes per context.
      
      This change uses a separate context for each image, so the
      resources available to each graph stay consistent regardless of
      the number of inputs.
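      A minimal sketch of the idea, with assumed names (backend.NewContext
      and encodeImage are illustrative, not the real API): each vision
      encoder graph gets its own context, so earlier images never count
      against the node limit of the context used for later ones.

      ```go
      for _, img := range images {
          // One context per image: graph nodes allocated for earlier
          // images no longer accumulate in a shared, per-sequence context.
          ctx := backend.NewContext() // assumed constructor
          encodeImage(ctx, img)       // assumed encoder call
          ctx.Close()
      }
      ```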
    • ml: Allow models to constrain inputs to a single batch · 9679f401
      Jesse Gross authored
      Models may require that a set of inputs all be processed as part
      of the same batch. For example, if an image has multiple patches
      with fully connected attention between them, we should not split
      the batch in the middle of an image.
      
      Fixes #9697
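      A sketch of what such a constraint could look like at the input
      level (the field and function names below are assumptions, not the
      actual ml package API): each input can declare how many of the
      following inputs must stay in the same batch, and the scheduler
      refuses to cut inside that span.

      ```go
      // Input is a hypothetical per-token input record. SameBatch says
      // how many of the following inputs (e.g. the remaining patches of
      // an image) must be scheduled in the same batch as this one.
      type Input struct {
          Token      int32
          Multimodal any
          SameBatch  int
      }

      // canSplitAfter reports whether a batch may end after index i
      // without cutting through a same-batch span.
      func canSplitAfter(inputs []Input, i int) bool {
          for j := 0; j <= i; j++ {
              if j+inputs[j].SameBatch > i {
                  return false // a same-batch span extends past index i
              }
          }
          return true
      }
      ```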
    • llm: remove internal subprocess req and resp types (#9324) · 3892c3a7
      Bruce MacDonald authored
      This commit refactors the LLM subsystem by removing internal subprocess
      request and response types. It consolidates duplicate type definitions
      across the codebase, moving them to centralized locations. The change also
      standardizes interfaces between components, simplifies the ServerStatusResp
      struct, and moves the ParseDurationMs function to a common package. This
      cleanup reduces code duplication between different runner implementations
      (llamarunner and ollamarunner).
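      The log does not show the function itself; as a hedged sketch only,
      a millisecond-duration parser of this kind often looks like the
      following (the real signature and behavior in the common package
      may differ):

      ```go
      // ParseDurationMs converts a millisecond count into a
      // time.Duration, treating negative values as "keep forever".
      // Sketch only; the shared package's actual behavior may differ.
      func ParseDurationMs(ms float64) time.Duration {
          if ms < 0 {
              return time.Duration(math.MaxInt64)
          }
          return time.Duration(ms * float64(time.Millisecond))
      }
      ```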
    • 4e320b8b
      Blake Mizerany authored
    • server/internal/client: use chunksums for concurrent blob verification (#9746) · eb2b22b0
      Blake Mizerany authored
      Replace large-chunk blob downloads with parallel small-chunk
      verification to solve timeout and performance issues. Registry users
      experienced progressively slowing download speeds as large-chunk
      transfers aged, often timing out completely.
      
      The previous approach downloaded blobs in a few large chunks but
      required a separate, single-threaded pass to read the entire blob back
      from disk for verification after download completion.
      
      This change uses the new chunksums API to fetch many smaller
      chunk+digest pairs, allowing concurrent downloads and immediate
      verification as each chunk arrives. Chunks are written directly to their
      final positions, eliminating the entire separate verification pass.
      
      The result is more reliable downloads that maintain speed throughout the
      transfer process and significantly faster overall completion, especially
      over unstable connections or with large blobs.
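      A compact Go sketch of the flow described above (the chunk type,
      fetch callback, and concurrency limit are assumptions, not the
      actual server/internal/client code): fetch each chunk concurrently,
      verify its digest as it arrives, and write it directly at its final
      offset, so no separate verification pass is needed.

      ```go
      package blobsketch

      import (
          "crypto/sha256"
          "encoding/hex"
          "fmt"
          "os"

          "golang.org/x/sync/errgroup"
      )

      // chunk is a hypothetical record from the chunksums API: a byte
      // range in the blob plus the expected digest of that range.
      type chunk struct {
          offset int64
          digest string // expected SHA-256 of the chunk, hex-encoded
      }

      // downloadChunks downloads and verifies chunks concurrently,
      // writing each one straight to its final position in the file.
      func downloadChunks(f *os.File, chunks []chunk, fetch func(chunk) ([]byte, error)) error {
          g := new(errgroup.Group)
          g.SetLimit(8) // bounded concurrency; the limit here is arbitrary
          for _, c := range chunks {
              c := c
              g.Go(func() error {
                  data, err := fetch(c) // assumed ranged GET for this chunk
                  if err != nil {
                      return err
                  }
                  sum := sha256.Sum256(data)
                  if hex.EncodeToString(sum[:]) != c.digest {
                      return fmt.Errorf("chunk at offset %d: digest mismatch", c.offset)
                  }
                  _, err = f.WriteAt(data, c.offset) // no separate verify pass
                  return err
              })
          }
          return g.Wait()
      }
      ```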
  2. 13 Mar, 2025 17 commits
  3. 12 Mar, 2025 8 commits
  4. 11 Mar, 2025 9 commits