1. 07 Mar, 2025 5 commits
  2. 04 Mar, 2025 1 commit
    • ml/backend/ggml: consolidate system info logging · 05a01fde
      Michael Yang authored
      - output backend system info when initializing the backend. this ensures
        the information is always present without needing to be requested
        explicitly
      - convert to structured logging
      - enumerate devices rather than backends since devices are ordered
      - track device indices grouped by device name
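
      A minimal sketch of the structured, per-device logging described above,
      using Go's log/slog; the deviceInfo type and field names are
      illustrative, not the actual implementation:

        package main

        import (
            "log/slog"
            "os"
        )

        // deviceInfo is a hypothetical stand-in for what the backend reports
        // about each compute device.
        type deviceInfo struct {
            Name        string
            Description string
        }

        func main() {
            logger := slog.New(slog.NewTextHandler(os.Stderr, nil))

            devices := []deviceInfo{
                {Name: "CUDA", Description: "NVIDIA RTX 4090"},
                {Name: "CUDA", Description: "NVIDIA RTX 4090"},
                {Name: "CPU", Description: "AMD Ryzen 9"},
            }

            // enumerate devices rather than backends, tracking an index per
            // device name so identical devices become CUDA0, CUDA1, ...
            indexByName := make(map[string]int)
            for _, d := range devices {
                i := indexByName[d.Name]
                indexByName[d.Name]++

                logger.Info("system info",
                    slog.String("device", d.Name),
                    slog.Int("index", i),
                    slog.String("description", d.Description),
                )
            }
        }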
  3. 03 Mar, 2025 1 commit
  4. 02 Mar, 2025 4 commits
    • ml: Enable support for flash attention · 21aa666a
      Jesse Gross authored
      The GGML flash attention kernel has specific requirements for
      padding and permutation. This adds support to the KV cache
      for conforming to these requirements so that flash attention
      can be enabled.
      
      Flash attention can be used in the same situations as the llama
      engine and is enabled by the user in the same way.
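
      As a rough illustration of the padding side of this, the cache can round
      the number of cells it exposes up to a backend-defined alignment; the
      alignment of 32 below is only a placeholder, the real value comes from
      the GGML kernel:

        package main

        import "fmt"

        // pad rounds n up to the next multiple of align.
        func pad(n, align int) int {
            return (n + align - 1) / align * align
        }

        func main() {
            fmt.Println(pad(1000, 32)) // 1024
        }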
    • ml: Empty tensor constructor for tensors · ee141cc8
      Jesse Gross authored
      In cases where we allocate a tensor and then fully overwrite it with
      copied data, it is wasteful to first zero out the memory.
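
      A hypothetical slice of the tensor-creation API, shown only to
      illustrate the distinction (the real constructor names may differ):

        package ml

        // DType and Tensor are placeholders for the real types.
        type DType int
        type Tensor interface{}

        type Context interface {
            // Zeros allocates a tensor and zero-fills it.
            Zeros(dtype DType, shape ...int) Tensor

            // Empty allocates a tensor but leaves its contents undefined,
            // skipping the wasted zero-fill when the caller immediately
            // overwrites the data.
            Empty(dtype DType, shape ...int) Tensor
        }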
    • ggml-backend: Store parent backend as part of tensor · 55e5776c
      Jesse Gross authored
      It can be important for a tensor to know what backend it came from -
      for example, to know if flash attention is enabled.
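
      Schematically (the field and method names are illustrative, not the
      actual ggml-backend code):

        package backend

        // Backend is a stand-in for the real backend type.
        type Backend struct {
            flashAttention bool
        }

        // Tensor keeps a reference to the backend that created it, so code
        // holding only the tensor can still answer backend-level questions.
        type Tensor struct {
            b *Backend
        }

        func (t *Tensor) FlashAttentionEnabled() bool {
            return t.b.flashAttention
        }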
    • attention: Remove unnecessary contiguous operations · 854a9195
      Jesse Gross authored
      Prior to performing attention, we need to permute query, key
      and value. Currently we call Contiguous after each of these
      permutations, which is correct but expensive. Avoiding the
      3 calls to Contiguous increases performance by over 20%.
      
      The permutations of query and key do not violate the continuity
      rules for mulmat, so their Contiguous calls can simply be removed.
      
      Value requires a different permutation and does require Contiguous.
      However, we can use the copy into the cache as a way to perform this
      without further overhead.
      
      To support this and avoid unexpected tensor shapes that are seen by
      models, we need tighter integration between attention, cache
      and backend. Future optimization will also likely need this structure
       - for example, flash attention has special padding requirements in
      the cache and other backends may have their own needs.
      
      This further contains the operations that go into attention so that
      these and other optimizations can be handled transparently. Models
      that have special requirements for attention can still implement
      their own version of it.
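
      A compilable sketch of the resulting structure; the interfaces, method
      names and permutation orders are illustrative placeholders rather than
      the actual ollama API:

        package attention

        // Context, Tensor, and Cache are trimmed-down placeholders; only the
        // calls used below are declared.
        type Context interface{}

        type Tensor interface {
            Permute(ctx Context, order ...int) Tensor
            Mulmat(ctx Context, t Tensor) Tensor
        }

        type Cache interface {
            // Put copies key and value into the cache; value's permutation is
            // folded into this copy, so no separate Contiguous call is needed.
            Put(ctx Context, key, value Tensor)
            Get(ctx Context) (key, value, mask Tensor)
        }

        // attention permutes q and k but leaves them non-contiguous (which
        // the matrix multiply accepts), and lets the cache copy handle v.
        func attention(ctx Context, q, k, v Tensor, cache Cache) Tensor {
            q = q.Permute(ctx, 0, 2, 1, 3) // no Contiguous needed
            k = k.Permute(ctx, 0, 2, 1, 3) // no Contiguous needed

            cache.Put(ctx, k, v) // v is permuted by the copy into the cache
            key, value, mask := cache.Get(ctx)

            kq := key.Mulmat(ctx, q)
            _ = mask // masking, scaling and softmax omitted from this sketch
            return value.Mulmat(ctx, kq)
        }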
  5. 27 Feb, 2025 5 commits
  6. 25 Feb, 2025 1 commit
    • .github: always run tests, and other helpful fixes (#9348) · 0d694793
      Blake Mizerany authored
      During work on our new registry client, I ran into frustrations with CI
      where a misspelling in a comment caused the linter to fail, which caused
      the tests to not run, which caused the build to not be cached, which
      caused the next run to be slow, which caused me to be sad.
      
      This commit addresses these issues, and pulls in some helpful changes
      we've had in CI on ollama.com for some time now.
      
      They are:
      
      * Always run tests, even if the other checks fail.
      
      Tests are the most important part of CI, and should always run. Failures
      in tests can be correlated with failures in other checks, and can help
      surface the root cause of the failure sooner. This is especially
      important when the failure is platform specific, and the tests are not
      platform independent.
      
      * Check that `go generate` is clean.
      
      This prevents 'go generate' abuse regressions. This codebase used to use
      it to generate platform specific binary build artifacts. Let's make sure
      that does not happen again and this powerful tool is used correctly, and
      the generated code is checked in.
      
      Also, while adding the `go generate` check, it was revealed that the
      generated metal code was putting dates in the comments, resulting in
      non-deterministic builds. This is a bad practice, and this commit fixes
      that. Git already records the most important date, the commit date,
      along with the rest of the associated changes.
      
      * Check that `go mod tidy` is clean.
      
      A new job checks that `go mod tidy` is clean. This prevents easily
      avoidable merge conflicts, and stops go.mod changes from being deferred
      to a future PR that is unrelated to the change that actually caused
      go.mod to change.
      
      * More robust caching.
      
      We now cache the go build cache, and the go mod download cache
      independently. This is because the download cache contains zips that can
      be unpacked in parallel faster than they can be fetched and extracted by
      tar. This speeds up the build significantly.
      
      The linter is hostile enough. It does not need to also punish us with
      longer build times due to small failures like misspellings.
  7. 24 Feb, 2025 1 commit
  8. 21 Feb, 2025 2 commits
  9. 20 Feb, 2025 2 commits
    • ggml-backend: Don't recreate the scheduler for each context · e5bcc51a
      Jesse Gross authored
      We don't need to create and destroy the GGML scheduler for every
      context. This introduces extra CPU overhead for every forward
      pass and extra memory for contexts that don't actually get scheduled
      (for example, KV caches). We can instead just have one scheduler
      for the backend and reset it each time we call Compute.
      
      This improves token generation performance by 1-2% and removes
      scheduler create/destroy from profile traces.
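
      Schematically, the backend now owns one scheduler for its lifetime and
      resets it on each Compute call; the wrapper below is an illustrative
      stand-in for ggml's C-side scheduler handle:

        package backend

        type scheduler struct{}

        func newScheduler() *scheduler     { return &scheduler{} }
        func (s *scheduler) reset()        {}
        func (s *scheduler) graphCompute() {}

        // Backend creates the scheduler once and keeps it for its lifetime.
        type Backend struct {
            sched *scheduler
        }

        func New() *Backend {
            return &Backend{sched: newScheduler()}
        }

        // Compute reuses the shared scheduler, resetting it instead of
        // creating and destroying one per context or per forward pass.
        func (b *Backend) Compute() {
            b.sched.reset()
            b.sched.graphCompute()
        }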
    • ollamarunner: Pass runner performance parameters to backends · bd6a7d5e
      Jesse Gross authored
      Currently the following parameters are in the runner but not used:
       - numGPULayers
       - mainGPU
       - threads
       - tensorSplit
      
      This passes them through to the backend, which is where they would
      actually get used. However, the GGML backend does not yet do anything
      with them.
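
      One plausible shape for carrying these through; the field names mirror
      the runner flags but are illustrative:

        package ml

        // BackendParams carries the runner's performance settings down to
        // the backend that will eventually act on them.
        type BackendParams struct {
            NumThreads   int       // CPU threads to use for computation
            MainGPU      int       // GPU that hosts small tensors and scratch
            NumGPULayers int       // model layers to offload to GPUs
            TensorSplit  []float32 // fraction of the model on each GPU
        }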
  10. 19 Feb, 2025 1 commit
  11. 18 Feb, 2025 1 commit
    • build: remove backend build for sapphirerapids · 5f8c0318
      Michael Yang authored
      sapphire rapids has amx support but it ends up having a negative
      performance impact.
      
      emerald rapids also has amx support with a positive performance impact,
      however there's no reasonable way in ggml to differentiate between the
      two. the impact is small (~6%), so disable amx entirely for simplicity.
  12. 14 Feb, 2025 11 commits
    • Daniel Hiltgen · df2680b4
    • Jeffrey Morgan · 6600bd7d
    • Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it also builds out the KV cache infrastructure to
      support requirements of how Ollama runs models such as:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
      ./ollama run jessegross/llama3.1
    • ggml-backend: Close on nil should be a no-op · d223f3b6
      Jesse Gross authored
    • ggml-backend: Ensure data is available after async computation · 60830695
      Jesse Gross authored
      We need to sync before retrieving data after async computation.
      It is also important to ensure that the Go buffer is not moved by
      the GC across function calls so we do a synchronous copy.
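
      The ordering that matters, sketched with placeholder helpers standing in
      for the real cgo calls:

        package backend

        // Backend and Tensor are illustrative placeholders for the
        // cgo-backed types.
        type Backend struct{}

        func (b *Backend) sync() { /* wait for pending async computation */ }

        type Tensor struct {
            backend *Backend
            n       int
        }

        // copyInto performs a synchronous copy from backend memory into dst.
        func (t *Tensor) copyInto(dst []float32) {}

        // Floats syncs first so the data is actually ready, then does a
        // single synchronous copy into a fresh Go slice rather than letting
        // the backend hold Go memory across calls.
        func (t *Tensor) Floats() []float32 {
            t.backend.sync()

            out := make([]float32, t.n)
            t.copyInto(out)
            return out
        }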
    • ggml-backend: Let GGML allocate context memory · 01d9a468
      Jesse Gross authored
      Passing in a Go buffer is not safe because the garbage collector could
      free or move the memory while the context is still open. However, if
      we pass in the size and a nil pointer then GGML will allocate it from
      the C side.
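
      Roughly, assuming the ggml header is available to the package; the size
      calculation and flags here are illustrative:

        package ggml

        /*
        #include "ggml.h"
        */
        import "C"

        // newContext lets GGML allocate the context memory itself: a size
        // plus a nil mem_buffer means the allocation happens on the C side,
        // where the Go garbage collector can never move or free it while the
        // context is still open.
        func newContext(size int) *C.struct_ggml_context {
            return C.ggml_init(C.struct_ggml_init_params{
                mem_size:   C.size_t(size),
                mem_buffer: nil,
                no_alloc:   true, // tensor data is allocated by the backend
            })
        }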
    • backend: API to support full precision matmul · d773b7d6
      Jesse Gross authored
      Most tensor backends try to optimize performance by using a lower
      precision for matmuls. However, some operations (such as kq) on
      some models are sensitive to this and require full precision.
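
      For example, the tensor interface can expose a full-precision variant
      next to the default multiply; the names are modeled on the ml package
      but shown here only as a sketch:

        package ml

        // Context is a placeholder for the real context type.
        type Context interface{}

        type Tensor interface {
            // Mulmat may use reduced precision internally for speed.
            Mulmat(ctx Context, t Tensor) Tensor

            // MulmatFullPrec forces full precision, for operations such as
            // k*q in models that are sensitive to rounding error.
            MulmatFullPrec(ctx Context, t Tensor) Tensor
        }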
    • backend: Support graph computation that does not return an output · 4d4463b2
      Jesse Gross authored
      There are two cases where we may not have an output after computing:
       - Prompt processing where the length of the input exceeds the batch
         size
       - Internal memory management operations such as cache defrag and shift
    • backend: Consistently use int (vs. int64) for tensor shapes · 0e38297f
      Jesse Gross authored
      Currently there is a mixture of int and int64 used when dealing with
      tensor dimensions and shapes, which causes unnecessary conversions -
      they all should be the same type.
      
      In general, most interfaces (such as Pytorch) use int64 for
      generality but most implementations (such as CUDA) use int32 for
      performance. There isn't much benefit to us to being more flexible
      than the implementations we are likely to run on.
      
      In addition, as a practical matter, a model with a tensor with a single
      dimension larger than 32 bits is unlikely to run on a 32-bit machine.
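
      Illustratively, the tensor interface then uses plain int everywhere a
      dimension or shape appears (a trimmed-down sketch, not the full API):

        package ml

        // Tensor: dimensions, strides and shapes are all int, so callers and
        // implementations need no int/int64 conversions.
        type Tensor interface {
            Dim(n int) int
            Stride(n int) int
            Shape() []int
        }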
    • backend: Don't return an error on Close · 7e13f568
      Jesse Gross authored
      It is not common to return errors from close/free operations - most
      callers won't check them, and even if they did there's probably not
      much they could do. It's better not to give implementations false
      expectations.
    • next ollama runner (#7913) · 58245413
      Michael Yang authored
      
      
      feat: add new Ollama engine using ggml through cgo
      
      This change introduces a new way to run pretrained models. It introduces 3 high level interfaces and a bunch of smaller helper interfaces to facilitate this.
      
      - `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
      - `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
      - `ml.Tensor` defines the interface for a tensor and tensor operations
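
      Trimmed-down sketches of the three interfaces; the real definitions in
      model/model.go and ml/backend.go carry more methods than shown here:

        package sketch

        // Context, Batch, and the method sets below are simplified
        // placeholders.
        type Context interface{}
        type Batch struct{}

        // Model is implemented by each architecture (llama, mllama, ...);
        // Forward runs the model's forward propagation for a batch.
        type Model interface {
            Forward(ctx Context, batch Batch) (Tensor, error)
        }

        // Backend loads a pretrained model into hardware (GPU, CPU, etc.)
        // and gives models access to the loaded tensors.
        type Backend interface {
            Get(name string) Tensor
            NewContext() Context
        }

        // Tensor is a tensor plus the operations defined on it.
        type Tensor interface {
            Add(ctx Context, t Tensor) Tensor
            Mulmat(ctx Context, t Tensor) Tensor
        }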
      
      This is the first implementation of the new engine. Follow up PRs will implement more features:
      
      - non-greedy sampling (#8410)
      - integration with Ollama and KV caching (#8301)
      - more model support (#9080) with more coming soon
      Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
  13. 11 Feb, 2025 1 commit
  14. 10 Feb, 2025 1 commit
  15. 06 Feb, 2025 1 commit
  16. 04 Feb, 2025 1 commit
  17. 31 Jan, 2025 1 commit