1. 18 May, 2025 1 commit
    • readme: add TinyNotepad to community integrations (#10763) · 7edfdd2f
      Ronald Wilson authored
      This PR adds TinyNotepad, a lightweight, notepad-like interface for chatting with local LLMs via Ollama.

      - It's designed as a simple, distraction-free alternative.
      - The app supports basic note-taking, timestamped logs, and model parameter controls.
      - Built with Tkinter, it runs entirely offline and is available via PyPI.

      It aims to provide a lightweight Ollama interface that is easy to install and run.
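      Clients like this talk to a locally running Ollama server over its HTTP API. Below is a minimal, hypothetical sketch (in Go, for illustration; the model name and prompt are placeholders) of the kind of request such a notepad client sends to the /api/chat endpoint.

```go
// Minimal sketch of a chat request against a local Ollama server.
// The model name and prompt below are placeholders.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
	Stream   bool          `json:"stream"`
}

type chatResponse struct {
	Message chatMessage `json:"message"`
}

func main() {
	body, _ := json.Marshal(chatRequest{
		Model:    "llama3.2", // placeholder model name
		Messages: []chatMessage{{Role: "user", Content: "Summarize my last note."}},
		Stream:   false, // ask for a single JSON response instead of a stream
	})

	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Message.Content)
}
```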
  2. 16 May, 2025 1 commit
  3. 15 May, 2025 7 commits
    • Daniel Hiltgen · 27da2cdd
    • cmd: add ellipses to truncated show metadata (#10717) · feb8923a
      Bruce MacDonald authored
      When a piece of information has been truncated in the show output, add an ellipsis to indicate that more data has not been displayed.
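      As a rough illustration only (not the actual cmd package code), truncating a metadata value and marking the cut might look like this; the helper name and cutoff are hypothetical.

```go
// truncate shortens s to at most max runes and appends an ellipsis marker
// when anything was cut off. Illustrative sketch, not the cmd package code.
func truncate(s string, max int) string {
	r := []rune(s)
	if len(r) <= max {
		return s
	}
	return string(r[:max]) + "..."
}
```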
    • ollamarunner: Multi-modal worst case graph · fe623c2c
      Jesse Gross authored
      We currently preallocate compute graph memory for the worst case
      batch of text tokens. This adds support for doing the same for
      images.
      
      Note that image models are more complicated than text models in
      how they process their inputs so there may be cases where this
      approach isn't completely generic for all models. It covers all
      currently supported models though.
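      As a hedged sketch of the idea (the names below are illustrative, not the runner's actual API): reserving for the worst case means building the compute graph once for a synthetic batch that combines the maximum text tokens with a maximum-size image input, so no real batch can ever need more memory than was preallocated.

```go
// worstCaseBatch describes a synthetic batch sized to the largest inputs the
// runner will ever accept. Hypothetical types for illustration only.
type worstCaseBatch struct {
	TextTokens  int // maximum number of text tokens per batch
	Images      int // maximum number of image inputs per batch
	ImagePixels int // largest supported image input, flattened
}

// reserveWorstCase builds the graph once for the synthetic worst-case batch,
// allocating peak compute-graph memory up front.
func reserveWorstCase(maxTokens, maxImages, maxPixels int, build func(worstCaseBatch) error) error {
	return build(worstCaseBatch{
		TextTokens:  maxTokens,
		Images:      maxImages,
		ImagePixels: maxPixels,
	})
}
```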
    • ollamarunner: Separate text and multimodal graphs · 3c14461d
      Jesse Gross authored
      For some multimodal models (such as gemma3), we create a single
      graph that generates the image embedding and then use this in the
      text model. The embedding tensor is completely opaque to the runner.
      
      However, this doesn't work if we need to use the embedding in multiple
      batches. This can arise if the embedding is larger than the batch size.
      In these cases (as with llama4), we would like to create views that
      are more appropriately sized. However, if we do this then the original
      source tensor is used in multiple graphs, which isn't allowed. To
      avoid that problem, models with this pattern compute the embedding
      tensor on first use and recreate the individual views. There is no
      longer a single vision and text graph.
      
      This codifies the pattern of separating vision and text graphs. The
      logic of computing tensors on demand is moved to the runner, so models
      no longer have to worry about this. It also gives the runner visibility
      into the multimodal tensors, which is important for memory management.
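      A simplified sketch of the compute-on-first-use pattern described above, using plain slices in place of the runner's tensor types (all names are illustrative): the vision graph runs once, lazily, and each text batch only ever sees a view into the resulting embedding.

```go
// multimodalEmbedding stands in for an image embedding that is expensive to
// compute and may be consumed across several batches. Illustrative only.
type multimodalEmbedding struct {
	computed bool
	data     []float32        // stands in for the opaque embedding tensor
	compute  func() []float32 // runs the vision graph
}

// view returns the portion of the embedding covering [start, start+n),
// computing the full embedding the first time any batch asks for it.
func (m *multimodalEmbedding) view(start, n int) []float32 {
	if !m.computed {
		m.data = m.compute() // run the vision graph exactly once
		m.computed = true
	}
	return m.data[start : start+n]
}
```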
    • ollamarunner: Base cached tokens on current prompt · 499ae731
      Jesse Gross authored
      When we restore a sequence from the cache, we split the prompt into
      the already used tokens (stored in the cache) and new tokens that
      need to be processed. Currently, the references to the used tokens
      are coming from the stored previous sequence.
      
      However, even though we know that the used tokens are semantically
      equivalent to the prefix of the prompt, tokens can contain pointers
      which are no longer valid. As a result, it is better to get the
      used tokens from the prompt, which has currently valid pointers.
      
      This doesn't currently have any impact because it isn't possible
      to reuse the pointers (which are tensors) anyway. However, it
      becomes an issue once we can.
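      A simplified sketch of the split, with a stand-in token type (the real runner's input types differ): the cached prefix is taken from the current prompt rather than from the stored sequence, so any pointers it carries are valid now.

```go
// token is a stand-in for the runner's input type.
type token struct{ id int32 }

// splitPrompt divides the current prompt into the prefix already present in
// the cache and the new tokens that still need processing. The cached prefix
// is sliced from the prompt itself, not from the previously stored sequence.
func splitPrompt(prompt []token, cachedLen int) (cached, remaining []token) {
	if cachedLen > len(prompt) {
		cachedLen = len(prompt)
	}
	return prompt[:cachedLen], prompt[cachedLen:]
}
```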
    • fix pixel values padding (#10718) · ef202789
      Michael Yang authored
      * panic if trying to pad 4d
      
      * fix pixel values padding
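      As a loose illustration of what padding pixel values means here (zero-padding each image's values to the longest in the batch); the function and shapes below are hypothetical and do not mirror the actual fix.

```go
// padPixelValues zero-pads every image's flattened pixel values to the length
// of the longest entry in the batch so they can be stacked into one tensor.
// Hypothetical sketch, not the code changed by this commit.
func padPixelValues(batch [][]float32) [][]float32 {
	longest := 0
	for _, pv := range batch {
		if len(pv) > longest {
			longest = len(pv)
		}
	}
	padded := make([][]float32, len(batch))
	for i, pv := range batch {
		padded[i] = make([]float32, longest)
		copy(padded[i], pv) // zero padding fills the remainder
	}
	return padded
}
```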
    • fix mllama conversion (#10716) · 55760195
      Michael Yang authored
      The cross-attention Q and K projections need to have their heads swapped, similar to the non-cross-attention Q and K tensors.
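      For context, a hedged sketch of the kind of per-head permutation applied to llama-family Q/K projection weights during conversion (interleaving the two halves of each head so the rotary layout matches the runtime); the fix here applies the same treatment to the cross-attention projections. Names and shapes are illustrative, not the converter's actual code.

```go
// permuteQK reorders the rows of a Q or K projection weight, head by head,
// by interleaving the first and second half of each head. Illustrative
// sketch of the common llama-style conversion permutation.
func permuteQK(rows [][]float32, numHeads int) [][]float32 {
	headSize := len(rows) / numHeads
	half := headSize / 2
	out := make([][]float32, 0, len(rows))
	for h := 0; h < numHeads; h++ {
		head := rows[h*headSize : (h+1)*headSize]
		for i := 0; i < half; i++ {
			out = append(out, head[i], head[half+i]) // interleave the two halves
		}
	}
	return out
}
```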
  4. 14 May, 2025 4 commits
  5. 13 May, 2025 7 commits
  6. 12 May, 2025 5 commits
  7. 11 May, 2025 2 commits
  8. 10 May, 2025 5 commits
  9. 08 May, 2025 5 commits
  10. 07 May, 2025 3 commits