1. 26 Jun, 2025 1 commit
  2. 23 Jun, 2025 1 commit
    • Re-remove cuda v11 (#10694) · 1c6669e6
      Daniel Hiltgen authored
      * Re-remove cuda v11
      
      Revert the revert - drop v11 support, requiring drivers newer than Feb '23
      
      This reverts commit c6bcdc42.
      
      * Simplify layout
      
      With only one version of the GPU libraries, we can simplify things somewhat. (Jetsons still require special handling.)
      
      * distinct sbsa variant for linux arm64
      
      This avoids accidentally trying to load the sbsa cuda libraries on a
      Jetson system, which results in crashes (see the sketch at the end of
      this commit message).
      
      * temporary prevent rocm+cuda mixed loading
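      
      A minimal Go sketch of the kind of runtime check implied by the
      sbsa/Jetson split; the path and function names are assumptions for
      illustration, not Ollama's actual detection code:
      
          package main
          
          import (
              "fmt"
              "os"
          )
          
          // isJetson guesses whether this is an NVIDIA Jetson (Tegra) board by
          // looking for the L4T release file; the path is an assumption.
          func isJetson() bool {
              _, err := os.Stat("/etc/nv_tegra_release")
              return err == nil
          }
          
          func main() {
              if isJetson() {
                  fmt.Println("use the Jetson CUDA libraries, not the sbsa variant")
              } else {
                  fmt.Println("generic linux/arm64: the sbsa CUDA libraries are safe to load")
              }
          }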
  3. 20 Jun, 2025 1 commit
    • ggml: Check return status for computation. · 87b7af6c
      Jesse Gross authored
      We don't check the return status after computing the graph, which
      can silently lead to bad outputs if we try to keep going and future
      computation succeeds. This appears to happen in certain cases on
      Apple M2 devices.
      
      Fixes #11070
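      
      A minimal Go sketch of the pattern this fixes; the status type and
      function names are illustrative, not the actual ggml backend API:
      
          package main
          
          import (
              "errors"
              "fmt"
          )
          
          // computeStatus stands in for the status code a backend returns after
          // executing a graph; the type and values are assumptions.
          type computeStatus int
          
          const (
              statusSuccess computeStatus = iota
              statusFailed
          )
          
          // compute pretends to run the graph and report a status.
          func compute() computeStatus { return statusFailed }
          
          // forward checks the compute status instead of silently continuing, so a
          // failed graph execution cannot feed bad outputs into later computation.
          func forward() error {
              if st := compute(); st != statusSuccess {
                  return errors.New("graph computation failed")
              }
              return nil
          }
          
          func main() {
              if err := forward(); err != nil {
                  fmt.Println("aborting generation:", err)
              }
          }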
  4. 18 Jun, 2025 2 commits
  5. 29 May, 2025 1 commit
    • ggml: Export GPU UUIDs · aaa78180
      Jesse Gross authored
      This enables matching up devices and information reported by the backend
      with system management libraries such as nvml to get accurate free
      memory reporting.
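      
      A sketch of how exported UUIDs could be joined with data from a
      management library; the types and field names here are assumptions for
      illustration, not the real nvml bindings:
      
          package main
          
          import "fmt"
          
          // backendDevice and nvmlDevice are simplified stand-ins for what the
          // backend and a system management library report.
          type backendDevice struct {
              Name string
              UUID string
          }
          
          type nvmlDevice struct {
              UUID    string
              FreeMem uint64
          }
          
          // freeMemoryByDevice joins the two views on the exported UUID so the
          // scheduler can see accurate free memory per backend device.
          func freeMemoryByDevice(backend []backendDevice, managed []nvmlDevice) map[string]uint64 {
              byUUID := make(map[string]uint64, len(managed))
              for _, d := range managed {
                  byUUID[d.UUID] = d.FreeMem
              }
              free := make(map[string]uint64, len(backend))
              for _, d := range backend {
                  if mem, ok := byUUID[d.UUID]; ok {
                      free[d.Name] = mem
                  }
              }
              return free
          }
          
          func main() {
              backend := []backendDevice{{Name: "CUDA0", UUID: "GPU-1234"}}
              managed := []nvmlDevice{{UUID: "GPU-1234", FreeMem: 8 << 30}}
              fmt.Println(freeMemoryByDevice(backend, managed))
          }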
  6. 24 May, 2025 1 commit
  7. 22 May, 2025 3 commits
    • ml: Panic rather than return error on tensor allocation failure · 1f371ea9
      Jesse Gross authored
      FromFloatSlice and FromIntSlice return an error if the shape doesn't
      match the passed data or if memory can't be allocated. Since these
      are inputs, the memory being allocated is system memory rather than VRAM.
      
      In many cases, the caller can't really handle the error and panics.
      
      Empty and Zeros directly panic if they can't allocate memory.
      
      This makes things consistent by panicking in the first two cases,
      removing a fair amount of error handling code. This is also consistent
      with how Go typically handles these situations.
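      
      A hedged sketch of the before/after behavior; the signatures are
      illustrative, not the actual ml package API:
      
          package main
          
          import "fmt"
          
          // tensor is a placeholder for the real tensor type.
          type tensor struct{ data []float32 }
          
          // Before: callers had to handle an error they usually could not recover from.
          func fromFloatSliceErr(data []float32, shape ...int) (*tensor, error) {
              n := 1
              for _, d := range shape {
                  n *= d
              }
              if n != len(data) {
                  return nil, fmt.Errorf("shape %v does not match %d elements", shape, len(data))
              }
              return &tensor{data: data}, nil
          }
          
          // After: panic on failure, consistent with Empty and Zeros, so callers can
          // drop the error-handling boilerplate.
          func fromFloatSlice(data []float32, shape ...int) *tensor {
              t, err := fromFloatSliceErr(data, shape...)
              if err != nil {
                  panic(err)
              }
              return t
          }
          
          func main() {
              t := fromFloatSlice([]float32{1, 2, 3, 4}, 2, 2)
              fmt.Println(len(t.data))
          }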
    • ollamarunner: Memory usage reporting · 73d6a82c
      Jesse Gross authored
      This provides granular information about the backend memory allocations
      required by the runner:
       - Per backend
       - Per layer
       - Weights, cache and graph
       - Allocation status
      
      This can be used for debugging and validating memory estimates.
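      
      One possible shape for such a report, sketched in Go; the struct and
      field names are assumptions, not the runner's actual types:
      
          package main
          
          import "fmt"
          
          // allocation records an attempted size and whether it was satisfied.
          type allocation struct {
              Size      uint64
              Allocated bool
          }
          
          // layerMemory breaks a layer down into weights and cache.
          type layerMemory struct {
              Weights allocation
              Cache   allocation
          }
          
          // backendMemory is the per-backend view: per-layer allocations plus the
          // compute graph.
          type backendMemory struct {
              Backend string
              Layers  []layerMemory
              Graph   allocation
          }
          
          func main() {
              report := backendMemory{
                  Backend: "CUDA0",
                  Layers: []layerMemory{{
                      Weights: allocation{Size: 512 << 20, Allocated: true},
                      Cache:   allocation{Size: 64 << 20, Allocated: true},
                  }},
                  Graph: allocation{Size: 128 << 20, Allocated: false},
              }
              fmt.Printf("%+v\n", report)
          }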
    • ggml: Report graph memory for failed allocations · 6db8a377
      Jesse Gross authored
      GGML has a function to report the allocated size of a backend buffer.
      However, this returns 0 if we tried to allocate a buffer and it failed.
      For memory management purposes, it's important to know how much we were
      trying to allocate. This extends the API to report attempted sizes for
      all buffers and whether the allocation succeeded.
  8. 21 May, 2025 2 commits
  9. 20 May, 2025 1 commit
  10. 19 May, 2025 1 commit
    • ggml: Separate tensor load from backend creation · 94ab428e
      Jesse Gross authored
      Currently, when the backend is created, the tensors are loaded at the
      same time, which is a slow operation. This separates them into two
      steps:
       - Create backend, including enumerating tensors and memory allocation
       - Loading tensor data
      
      This allows more flexibility in managing model loading.
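      
      A sketch of the two-step flow in Go; the names New and LoadTensors are
      illustrative, not the exact signatures:
      
          package main
          
          import "fmt"
          
          // Backend is a stand-in for the ggml backend.
          type Backend struct {
              tensors map[string]int // tensor name -> size; metadata only
              loaded  bool
          }
          
          // New enumerates tensors and allocates memory, but does not read any
          // tensor data yet.
          func New() *Backend {
              return &Backend{tensors: map[string]int{"blk.0.attn_q.weight": 4096 * 4096}}
          }
          
          // LoadTensors reads the tensor data as a second, separate step, which can
          // now be deferred or managed independently of backend creation.
          func (b *Backend) LoadTensors() error {
              b.loaded = true
              return nil
          }
          
          func main() {
              b := New()
              fmt.Println("tensors enumerated:", len(b.tensors), "loaded:", b.loaded)
              if err := b.LoadTensors(); err != nil {
                  panic(err)
              }
              fmt.Println("loaded:", b.loaded)
          }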
  11. 15 May, 2025 1 commit
  12. 14 May, 2025 2 commits
  13. 12 May, 2025 2 commits
  14. 10 May, 2025 1 commit
  15. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
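      
      A rough Go sketch of the split described above: model-aware decisions in
      Go, with per-tensor quantization delegated to GGML. The rules and names
      here are assumptions for illustration:
      
          package main
          
          import "fmt"
          
          // targetType sketches the model-aware part done in Go: decide per tensor
          // whether to quantize and to what type.
          func targetType(name, requested string) string {
              switch name {
              case "output.weight", "token_embd.weight":
                  return "Q6_K" // keep sensitive tensors at higher precision
              default:
                  return requested
              }
          }
          
          // quantize stands in for the call into GGML's quantization code.
          func quantize(name, to string) {
              fmt.Printf("quantizing %s -> %s\n", name, to)
          }
          
          func main() {
              for _, name := range []string{"token_embd.weight", "blk.0.ffn_up.weight"} {
                  quantize(name, targetType(name, "Q4_K_M"))
              }
          }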
  16. 05 May, 2025 1 commit
  17. 02 May, 2025 3 commits
    • ggml: Fix race that resulted in "context canceled" when loading · a6ef73f4
      Jesse Gross authored
      Successfully completing processing with an errgroup cancels the
      associated context. However, we also have a goroutine that is checking
      for cancelation of the context. As a result, there is a race where
      the goroutine can pick up the cancelation and report an error,
      replacing the successful result.
      
      To avoid that, this replaces the goroutine with a cancelation check
      when we are reading files. This also has the advantage of stopping
      all reads relatively quickly on error and also ensuring that there are
      no outstanding I/O operations when we return in this case.
      
      The downside is that if a file read blocks forever (for example, over
      the network) then cancelation of the context effectively won't be
      honored. However, this is also true for other smaller files we read
      and the tensors are read in small chunks (128K), so it's consistent
      and better on balance overall.
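      
      A minimal Go sketch of the cancelation-check-while-reading pattern; the
      helper is illustrative, not the actual loading code:
      
          package main
          
          import (
              "context"
              "fmt"
              "io"
              "strings"
          )
          
          // readAllChunks checks for cancelation between chunk reads rather than in
          // a separate goroutine, so a successful load cannot be raced by a spurious
          // "context canceled" report. The chunk size mirrors the 128K reads above.
          func readAllChunks(ctx context.Context, r io.Reader) (int, error) {
              buf := make([]byte, 128*1024)
              total := 0
              for {
                  if err := ctx.Err(); err != nil {
                      return total, err
                  }
                  n, err := r.Read(buf)
                  total += n
                  if err == io.EOF {
                      return total, nil
                  }
                  if err != nil {
                      return total, err
                  }
              }
          }
          
          func main() {
              n, err := readAllChunks(context.Background(), strings.NewReader("tensor bytes"))
              fmt.Println(n, err)
          }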
    • ollamarunner: Re-enable worst case graph preallocation. · c2f5d666
      Jesse Gross authored
      Worst case graph preallocation was disabled by a27462b7
      "ollamarunner: Temporarily disable worst case graph preallocation"
      since it caused crashes with large batches when not using the GPU.
      
      This backports upstream llama.cpp commit f057808
      "ggml: Don't assert fail when tensor data changes (#13222)", which
      fixes the underlying bug and allows reverting the previous workaround.
    • llama: update to commit e1e8e099 (#10513) · 8dd12c87
      Jeffrey Morgan authored
  18. 30 Apr, 2025 1 commit
  19. 25 Apr, 2025 2 commits
  20. 18 Apr, 2025 1 commit
  21. 17 Apr, 2025 1 commit
  22. 16 Apr, 2025 1 commit
  23. 15 Apr, 2025 1 commit
  24. 11 Apr, 2025 4 commits
    • ggml: Fix memory leak on input tensors · f50d6912
      Jesse Gross authored
      For every forward pass through the model, we need to allocate input
      tensors: tokens, images, positions, outputs and masks. These get
      allocated in system memory.
      
      However, when we close the context that the tensors were allocated
      through, the metadata gets freed but the actual backend memory does
      not. This results in a significant memory leak.
      
      This makes it so that all the memory allocated through a context
      gets freed when it is closed.
      
      Fixes #10040
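      
      A small Go sketch of the idea: the context tracks every buffer it
      allocates and frees them all on Close. Types and names are illustrative,
      not the ml package API:
      
          package main
          
          import "fmt"
          
          // buffer stands in for a backend allocation.
          type buffer struct{ size int }
          
          // Context tracks everything allocated through it.
          type Context struct {
              buffers []*buffer
          }
          
          // alloc records the buffer against the context that created it.
          func (c *Context) alloc(size int) *buffer {
              b := &buffer{size: size}
              c.buffers = append(c.buffers, b)
              return b
          }
          
          // Close frees every buffer allocated through this context, not just the
          // metadata, which is what plugs the per-forward-pass leak described above.
          func (c *Context) Close() {
              for _, b := range c.buffers {
                  fmt.Println("freeing", b.size, "bytes")
              }
              c.buffers = nil
          }
          
          func main() {
              ctx := &Context{}
              ctx.alloc(4096)    // e.g. token and position inputs
              ctx.alloc(1 << 20) // e.g. output and mask tensors
              ctx.Close()
          }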
    • ggml: Don't allocate CPU buffers as CUDA Host buffers · 34c3b68f
      Jesse Gross authored
      Allocating (and in particular, freeing) memory from CUDA host buffers
      is expensive and can cause a significant performance hit if we do
      it for every token. Using normal system memory avoids this issue
      and also gives the OS more flexibility to manage it.
      
      There is no performance impact from this patch directly (either
      positive or negative) but it makes a difference once we start
      freeing memory correctly.
    • ggml: Use pointer receivers for Context · f33ccd5d
      Jesse Gross authored
      Context currently mixes pointer and value receivers. Change this to
      use pointer receivers everywhere so we don't have to reason about
      whether the fields we update in the struct will be retained.
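      
      A toy Go example of why this matters (not the actual Context type):
      
          package main
          
          import "fmt"
          
          // counter shows the difference: updates made through a value receiver are
          // lost, while a pointer receiver mutates the caller's struct.
          type counter struct{ n int }
          
          func (c counter) incValue()    { c.n++ } // modifies a copy; caller sees nothing
          func (c *counter) incPointer() { c.n++ } // modifies the original
          
          func main() {
              var c counter
              c.incValue()
              fmt.Println(c.n) // 0: the update was lost
              c.incPointer()
              fmt.Println(c.n) // 1: the update was retained
          }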
    • ggml: Log filesystem errors · bc108b9a
      Jesse Gross authored
      Sometimes loading the GGUF file fails with:
      panic: context canceled
      
      This is probably a filesystem error but it doesn't provide any
      information about what happened.
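      
      A hedged sketch of the kind of error wrapping this enables; the helper
      and file name are illustrative:
      
          package main
          
          import (
              "fmt"
              "os"
          )
          
          // readSection wraps the underlying error with the file name and offset so
          // a later log line or panic says more than "context canceled".
          func readSection(path string, off int64, n int) ([]byte, error) {
              f, err := os.Open(path)
              if err != nil {
                  return nil, fmt.Errorf("open %s: %w", path, err)
              }
              defer f.Close()
          
              buf := make([]byte, n)
              if _, err := f.ReadAt(buf, off); err != nil {
                  return nil, fmt.Errorf("read %s at offset %d: %w", path, off, err)
              }
              return buf, nil
          }
          
          func main() {
              if _, err := readSection("model.gguf", 0, 16); err != nil {
                  fmt.Println(err) // now logged with enough context to diagnose
              }
          }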
  25. 08 Apr, 2025 2 commits
    • ollamarunner: Preallocate worst case graph at startup · dbb149e6
      Jesse Gross authored
      Currently, the KV cache and graph are lazily allocated as needed.
      The cache is fully allocated on first use of the corresponding
      layer whereas the graph grows with the size of the context.
      
      This can be an issue if another application allocates more VRAM
      after we do our calculations - Ollama will crash in the middle of
      inference. If we instead allocate the maximum needed memory at
      startup of the runner, we will either succeed or fail at that point
      rather than at some surprising time in the future.
      
      Currently, this only generates a worst case batch for text, which
      means that vision models may get a partial allocation and continue
      to lazily allocate the rest.
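      
      A simplified Go sketch of the startup-time reservation idea; the sizes
      and names are made up for illustration:
      
          package main
          
          import (
              "errors"
              "fmt"
          )
          
          // reserve stands in for allocating the KV cache and compute graph for a
          // given batch, failing if the (pretend) VRAM budget is exceeded.
          func reserve(tokens int) error {
              const budget = 1 << 30     // pretend 1 GiB of free VRAM
              need := tokens * (2 << 20) // pretend 2 MiB per token of cache+graph
              if need > budget {
                  return errors.New("insufficient memory for worst case batch")
              }
              return nil
          }
          
          func main() {
              // At runner startup, reserve the largest text batch we could ever see,
              // so failures happen here instead of in the middle of inference.
              const maxBatch = 512
              if err := reserve(maxBatch); err != nil {
                  fmt.Println("startup failed:", err)
                  return
              }
              fmt.Println("worst case graph preallocated; safe to serve requests")
          }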
    • ggml: Check for OOM and return as Go errors · a807985e
      Jesse Gross authored
      If there is a CUDA OOM, we currently don't check the return value
      and will eventually segfault. This checks for the problem and generates
      a Go error. At the moment, this will still result in a panic but having
      the error is the first step to being able to handle it more gracefully.
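      
      A minimal Go sketch of the check; the allocation function and error text
      are illustrative, not the actual cgo bridge:
      
          package main
          
          import (
              "errors"
              "fmt"
          )
          
          var errNoMem = errors.New("unable to allocate backend buffer")
          
          // allocBuffer pretends to be the call that can fail under CUDA OOM and
          // return a nil handle.
          func allocBuffer(size uint64) uintptr {
              return 0 // simulate an out-of-memory failure
          }
          
          // newBuffer checks the result and surfaces OOM as a Go error rather than
          // letting a nil handle be dereferenced later.
          func newBuffer(size uint64) (uintptr, error) {
              buf := allocBuffer(size)
              if buf == 0 {
                  return 0, fmt.Errorf("%w (%d bytes)", errNoMem, size)
              }
              return buf, nil
          }
          
          func main() {
              if _, err := newBuffer(8 << 30); err != nil {
                  // For now this still ends in a panic further up the stack, but the
                  // error is what makes graceful handling possible later.
                  fmt.Println(err)
              }
          }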
  26. 05 Apr, 2025 1 commit
  27. 03 Apr, 2025 1 commit
    • model: support for mistral-small in the ollama runner · 6bd0a983
      Bruce MacDonald authored
      Mistral is a popular research lab that produces open-source models. This
      updates the forward pass of llama-architecture models to support both
      llama and mistral models by accounting for additional metadata present
      in mistral models and by finding the correct dimensions for the output
      projection.
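      
      A hedged Go sketch of the general mechanism involved, reading optional
      metadata with a fallback; the key names and defaults are assumptions,
      not the exact GGUF keys used:
      
          package main
          
          import "fmt"
          
          // kv stands in for model metadata; lookups fall back to a default so llama
          // models without the extra mistral keys keep working.
          type kv map[string]uint32
          
          func (m kv) uint(key string, def uint32) uint32 {
              if v, ok := m[key]; ok {
                  return v
              }
              return def
          }
          
          func main() {
              meta := kv{"attention.head_count": 32, "attention.key_length": 128}
          
              heads := meta.uint("attention.head_count", 32)
              headDim := meta.uint("attention.key_length", 4096/heads) // default: embedding/heads
              // The output projection maps heads*headDim back to the embedding size,
              // so its dimensions must follow the metadata when it overrides the
              // default, rather than assuming embedding/heads.
              fmt.Println("output projection input dim:", heads*headDim)
          }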