"git@developer.sourcefind.cn:Fzc7075/nunchaku.git" did not exist on "51732b7a02a7278ec82d18cc7b2dab989d06605c"
  1. 18 Apr, 2025 1 commit
  2. 17 Apr, 2025 1 commit
  3. 16 Apr, 2025 1 commit
  4. 15 Apr, 2025 1 commit
  5. 11 Apr, 2025 4 commits
    • ggml: Fix memory leak on input tensors · f50d6912
      Jesse Gross authored
      For every forward pass through the model, we need to allocate input
      tensors: tokens, images, positions, outputs and masks. These get
      allocated in system memory.
      
      However, when we close the context that the tensors were allocated
      through, the metadata gets freed but the actual backend memory does
      not. This results in a significant memory leak.
      
      This makes it so that all the memory allocated through a context
      gets freed when it is closed.
      
      Fixes #10040
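The fix described above follows a common ownership pattern, sketched here with hypothetical names (the real code releases ggml backend buffers via cgo): the context records every allocation made through it and frees the backing memory, not just the metadata, when it is closed.

```go
package main

import "fmt"

// buffer stands in for a backend allocation (hypothetical; the real
// code frees ggml/CUDA buffers through cgo).
type buffer struct{ freed bool }

// Context tracks every buffer allocated through it so Close can
// release the backing memory as well as the metadata.
type Context struct{ buffers []*buffer }

// NewInput allocates an input tensor buffer owned by this context.
func (c *Context) NewInput() *buffer {
	b := &buffer{}
	c.buffers = append(c.buffers, b)
	return b
}

// Close frees all memory allocated through the context.
func (c *Context) Close() {
	for _, b := range c.buffers {
		b.freed = true
	}
	c.buffers = nil
}

func main() {
	ctx := &Context{}
	tokens, positions := ctx.NewInput(), ctx.NewInput()
	ctx.Close()
	fmt.Println(tokens.freed, positions.freed) // both allocations released
}
```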
    • ggml: Don't allocate CPU buffers as CUDA Host buffers · 34c3b68f
      Jesse Gross authored
      Allocating (and in particular, freeing) memory from CUDA host buffers
      is expensive and can cause a significant performance hit if we do
      it for every token. Using normal system memory avoids this issue
      and also gives the OS more flexibility to manage it.
      
      There is no performance impact from this patch directly (either
      positive or negative) but it makes a difference once we start
      freeing memory correctly.
    • ggml: Use pointer receivers for Context · f33ccd5d
      Jesse Gross authored
      Context is currently mixed between pointer and value receivers. Change
      this to be all pointer receivers so we don't have to reason about whether
      the things we are updating in the struct will be retained.
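The pitfall this commit removes can be shown with a minimal sketch (hypothetical field; not the actual Context definition): a value receiver operates on a copy, so updates to the struct are silently lost, while a pointer receiver's updates are retained.

```go
package main

import "fmt"

// Context is a stand-in struct (hypothetical field).
type Context struct{ nodes int }

// addValue has a value receiver: it mutates a copy, so the caller's
// Context is unchanged afterwards.
func (c Context) addValue() { c.nodes++ }

// addPtr has a pointer receiver: it mutates the caller's Context.
func (c *Context) addPtr() { c.nodes++ }

func main() {
	c := Context{}
	c.addValue()
	fmt.Println(c.nodes) // 0: the update was lost
	c.addPtr()
	fmt.Println(c.nodes) // 1: the update was retained
}
```

Making every receiver a pointer means no one has to check, call site by call site, whether a mutation sticks.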
    • ggml: Log filesystem errors · bc108b9a
      Jesse Gross authored
      Sometimes loading the GGUF file fails with:
      panic: context canceled
      
      This is probably a filesystem error but it doesn't provide any
      information about what happened.
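Wrapping the underlying error preserves what actually happened, roughly along these lines (hypothetical function; not the actual loader code): instead of a bare "context canceled", the surfaced error names the file and the root cause.

```go
package main

import (
	"fmt"
	"os"
)

// loadGGUF opens a model file and wraps any failure with context, so
// a filesystem error is reported instead of an opaque panic message.
func loadGGUF(path string) (*os.File, error) {
	f, err := os.Open(path)
	if err != nil {
		// %w keeps the original error available to errors.Is/As.
		return nil, fmt.Errorf("loading GGUF %q: %w", path, err)
	}
	return f, nil
}

func main() {
	if _, err := loadGGUF("/nonexistent/model.gguf"); err != nil {
		fmt.Println(err) // the wrapped error names the file and the cause
	}
}
```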
  6. 08 Apr, 2025 2 commits
    • ollamarunner: Preallocate worst case graph at startup · dbb149e6
      Jesse Gross authored
      Currently, the KV cache and graph are lazily allocated as needed.
      The cache is fully allocated on first use of the corresponding
      layer whereas the graph grows with the size of the context.
      
      This can be an issue if another application allocates more VRAM
      after we do our calculations; Ollama will then crash in the middle
      of inference. If we instead allocate the maximum needed memory at
      startup of the runner, we will either succeed or fail at that point
      rather than at some surprising time in the future.
      
      Currently, this only generates a worst case batch for text, which
      means that vision models may get a partial allocation and continue
      to lazily allocate the rest.
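The idea can be sketched as follows (hypothetical sizes and API; the real reservation goes through the ggml backend): size the KV cache and graph for the largest possible batch and reserve that up front, so an allocation failure happens at startup rather than mid-inference.

```go
package main

import (
	"errors"
	"fmt"
)

// reserve models a VRAM allocation that can fail (hypothetical stand-in
// for the backend allocator).
func reserve(free, need int) error {
	if need > free {
		return errors.New("out of memory")
	}
	return nil
}

// startRunner allocates the worst-case KV cache and compute graph up
// front: with maxContext tokens we either succeed now or fail now,
// never at some surprising point during inference.
func startRunner(freeVRAM, perTokenCache, maxContext, graphWorstCase int) error {
	need := perTokenCache*maxContext + graphWorstCase
	if err := reserve(freeVRAM, need); err != nil {
		return fmt.Errorf("preallocating worst case (%d bytes): %w", need, err)
	}
	return nil
}

func main() {
	// 1 GiB free covers the worst case; 256 MiB does not.
	fmt.Println(startRunner(1<<30, 1<<16, 4096, 1<<28) == nil) // true
	fmt.Println(startRunner(1<<28, 1<<16, 4096, 1<<28) != nil) // true
}
```

As the message notes, the worst-case batch is currently text-only, so vision models may still fall back to partial, lazy allocation.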
    • ggml: Check for OOM and return as Go errors · a807985e
      Jesse Gross authored
      If there is a CUDA OOM, we currently don't check the return value
      and will eventually segfault. This checks for the problem and generates
      a Go error. At the moment, this will still result in a panic but having
      the error is the first step to being able to handle it more gracefully.
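The pattern is roughly this (hypothetical C-style status code; in the real code the value comes back from ggml/CUDA through cgo): convert the allocator's status into a Go error at the boundary instead of letting a nil buffer propagate until it segfaults.

```go
package main

import (
	"errors"
	"fmt"
)

// allocStatus stands in for the status code returned by the backend
// allocator (hypothetical; the real value crosses the cgo boundary).
type allocStatus int

const (
	allocOK allocStatus = iota
	allocOOM
)

var errOOM = errors.New("backend: out of memory")

// checkAlloc converts a C-style status into a Go error so the caller
// can detect OOM instead of crashing on a bad pointer later.
func checkAlloc(s allocStatus) error {
	if s == allocOOM {
		return errOOM
	}
	return nil
}

func main() {
	if err := checkAlloc(allocOOM); err != nil {
		fmt.Println(err) // the caller can now handle OOM gracefully
	}
}
```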
  7. 05 Apr, 2025 1 commit
  8. 03 Apr, 2025 2 commits
  9. 27 Mar, 2025 2 commits
    • ml: Remove Output from Context interface · 01aa7887
      Jesse Gross authored
      Model implementations should use Input for all of their tensors
      supplied to the model. This includes tensors that relate to the
      outputs, which is confusing since there is also an Output function.
      
      Since Output is only used internally in GGML and not used by any
      model implementations, we can remove it from the interface to
      reduce confusion.
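In Go, shrinking an interface like this is cheap: a method can stay on the concrete type for internal use while disappearing from the interface that model implementations program against. A minimal sketch with hypothetical names:

```go
package main

import "fmt"

// Tensor is a placeholder for a model tensor (hypothetical).
type Tensor struct{ name string }

// Context is what model implementations see: after the change, Input
// covers every tensor a model supplies, including output-related ones,
// and Output no longer appears here.
type Context interface {
	Input() Tensor
}

// ggmlContext keeps its output helper as an unexported method, so it
// remains available internally without widening the interface.
type ggmlContext struct{}

func (c *ggmlContext) Input() Tensor  { return Tensor{name: "input"} }
func (c *ggmlContext) output() Tensor { return Tensor{name: "output"} } // internal only

func main() {
	var ctx Context = &ggmlContext{}
	fmt.Println(ctx.Input().name) // models can only reach Input
}
```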
    • Add gfx1200 & gfx1201 support on linux (#9878) · ead27aa9
      saman-amd authored
  10. 21 Mar, 2025 2 commits
  11. 18 Mar, 2025 1 commit
  12. 17 Mar, 2025 2 commits
  13. 13 Mar, 2025 1 commit
  14. 12 Mar, 2025 1 commit
  15. 11 Mar, 2025 8 commits
  16. 10 Mar, 2025 1 commit
  17. 08 Mar, 2025 2 commits
  18. 07 Mar, 2025 7 commits