  1. 21 Feb, 2025 1 commit
    • ml: Abstract attention out of model definitions · f53f4198
      Jesse Gross authored

      There are two benefits to doing this:
       - Provide a library function that models can use, reducing code for
         each model implementation
       - Enables a single place to drop in optimized implementations of
         attention based on the backend or other factors. One is provided for
         GGML.
      
      On CUDA this improves token generation rate by about 3%. It does not
      have a significant effect on Metal.
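
      A minimal sketch of what such a shared library function can look
      like; the Tensor methods below are illustrative stand-ins, not the
      real ml.Tensor API:

      ```go
      // Package nn: an illustrative home for the shared attention helper.
      package nn

      // Tensor stands in for ml.Tensor with just the ops attention needs.
      type Tensor interface {
      	Mulmat(other Tensor) Tensor // matrix multiplication
      	Scale(s float64) Tensor     // elementwise scaling
      	Softmax() Tensor            // softmax over the last dimension
      }

      // Attention computes scaled dot-product attention:
      // softmax(scale * K^T·Q) · V. Centralizing it means every model
      // shares one implementation, and a backend (such as GGML) can
      // substitute an optimized fused kernel behind the same call.
      func Attention(query, key, value Tensor, scale float64) Tensor {
      	scores := key.Mulmat(query).Scale(scale).Softmax()
      	return value.Mulmat(scores)
      }
      ```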
      Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
  2. 20 Feb, 2025 2 commits
    • models: Prune unused outputs earlier in the forward pass · 5c5535c0
      Jesse Gross authored
      Currently Rows is called as the last step in a model computation
      to get the values for the output tokens. However, if we move it
      earlier in the process then we can trim out computations that
      never get used. This is similar to how models are defined in
      llama.cpp.
      
      Changing the model definition in this way improves token generation
      performance by approximately 8%.
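
      A sketch of the reordering, with hypothetical tensor helpers
      standing in for the real API:

      ```go
      package model

      // Tensor stands in for the real tensor type; Rows gathers rows by index.
      type Tensor interface {
      	Rows(indices Tensor) Tensor
      	Mulmat(other Tensor) Tensor
      }

      // Before: project every position into vocab space, then keep a few rows.
      func forwardBefore(hidden, outputWeight, outputIndices Tensor) Tensor {
      	logits := outputWeight.Mulmat(hidden) // computed for all positions
      	return logits.Rows(outputIndices)     // most of it is thrown away
      }

      // After: select the output tokens first, so the final projection (and
      // any later-layer work feeding only discarded positions) is pruned
      // from the graph.
      func forwardAfter(hidden, outputWeight, outputIndices Tensor) Tensor {
      	hidden = hidden.Rows(outputIndices)
      	return outputWeight.Mulmat(hidden) // computed only for output tokens
      }
      ```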
    • ollamarunner: Pass runner performance parameters to backends · bd6a7d5e
      Jesse Gross authored
      Currently the following parameters are in the runner but not used:
       - numGPULayers
       - mainGPU
       - threads
       - tensorSplit
      
      This passes them through to the backend, which is where they would
      actually get used. However, the GGML backend does not yet do anything
      with them.
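
      A sketch of the plumbing; the struct and its fields are named after
      the parameters above but are assumptions, not the confirmed Ollama
      API:

      ```go
      package ml

      // Backend stands in for the real backend interface.
      type Backend interface{}

      // BackendParams carries the runner's performance settings to the backend.
      type BackendParams struct {
      	NumThreads   int       // CPU threads to use for computation
      	MainGPU      int       // GPU that hosts scratch buffers and small tensors
      	NumGPULayers int       // number of layers to offload to GPUs
      	TensorSplit  []float32 // proportion of the model on each GPU
      }

      // NewBackend receives the parameters at load time. Per the commit, the
      // GGML backend accepts them but does not act on them yet.
      func NewBackend(modelPath string, params BackendParams) (Backend, error) {
      	_ = modelPath // a real backend would open and load the model here
      	_ = params    // ...and configure itself from params
      	return nil, nil
      }
      ```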
  3. 14 Feb, 2025 8 commits
    • Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it builds out the KV cache infrastructure to support
      how Ollama runs models, including:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Run a model that is supported by the Ollama engine, for example Llama 3.1 8B Q4_K_M:
      ./ollama run jessegross/llama3.1
    • models: Move models into their own directory · 6945617a
      Jesse Gross authored
      This allows the list of models to live in its own file rather than
      being mixed into the runner code.
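
      A sketch of what such a standalone model list can look like: a
      single file of blank imports, so each model package registers itself
      via its init function and the runner never names models directly
      (paths are illustrative):

      ```go
      package models

      import (
      	// Each blank import runs the model package's init function, which
      	// registers the architecture with the engine.
      	_ "github.com/ollama/ollama/model/models/llama"
      	_ "github.com/ollama/ollama/model/models/mllama"
      )
      ```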
    • vocab: Use int32 for special tokens · 7916f550
      Jesse Gross authored
      Special tokens are currently read as uint32 from the model metadata.
      However, all other parts of the system (including the tokenizer) use
      int32 to represent tokens, so it is impossible to represent the high
      portion of the unsigned range. For consistency and to avoid casts,
      we should just use int32 everywhere.
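
      A small illustration of the mismatch: a special token in the high
      half of the uint32 range wraps negative when converted to the int32
      the tokenizer uses:

      ```go
      package main

      import (
      	"fmt"
      	"math"
      )

      func main() {
      	var fromMetadata uint32 = math.MaxInt32 + 1 // representable only as uint32
      	asToken := int32(fromMetadata)              // what the rest of the system sees
      	fmt.Println(asToken)                        // prints -2147483648
      }
      ```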
    • model: Load tensors behind an interface · d650ad39
      Jesse Gross authored
      Currently, if a model uses an interface for its data structures (as mllama
      does) then the tensor data in the structs implementing that interface will
      not get loaded.
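
      A sketch of the fix in terms of Go reflection (simplified; the real
      loader also assigns tensor data at the leaves):

      ```go
      package model

      import "reflect"

      // populate walks a model's fields looking for tensors to load. The key
      // change: an interface-valued field is unwrapped to its concrete value
      // before recursing, so structs behind an interface (as in mllama) are
      // visited rather than silently skipped.
      func populate(v reflect.Value) {
      	switch v.Kind() {
      	case reflect.Interface, reflect.Ptr:
      		if !v.IsNil() {
      			populate(v.Elem()) // descend into the concrete value
      		}
      	case reflect.Struct:
      		for i := 0; i < v.NumField(); i++ {
      			populate(v.Field(i)) // visit nested fields
      		}
      	}
      	// other kinds are leaves; tensor fields would be assigned here
      }
      ```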
    • backend: API to support full precision matmul · d773b7d6
      Jesse Gross authored
      Most tensor backends try to optimize performance by using a lower
      precision for matmuls. However, some operations (such as kq) on
      some models are sensitive to this and require full precision.
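
      A sketch of the API shape; the method name is an assumption for
      illustration, not the confirmed signature:

      ```go
      package ml

      // Tensor gains a full-precision variant alongside the default matmul.
      type Tensor interface {
      	// Mulmat may run at reduced precision (e.g. f16) where the backend
      	// considers it safe, trading accuracy for speed.
      	Mulmat(other Tensor) Tensor
      	// MulmatFullPrec forces f32 accumulation for precision-sensitive
      	// operations such as kq on affected models.
      	MulmatFullPrec(other Tensor) Tensor
      }
      ```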
    • backend: Support graph computation that does not return an output · 4d4463b2
      Jesse Gross authored
      There are two cases where we may not have an output after computing:
       - Prompt processing where the length of the input exceeds the batch
         size
       - Internal memory management operations such as cache defrag and shift
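
      A sketch of the calling pattern this permits; Context and Compute
      are stand-ins for the real API:

      ```go
      package ml

      type Tensor interface{}

      // Context builds and executes a compute graph.
      type Context interface {
      	// Compute runs the graph. Calling it with zero outputs is now valid:
      	// the graph still executes for its side effects (KV-cache writes,
      	// defragmentation, shifts), but nothing is copied back to the host.
      	Compute(outputs ...Tensor)
      }
      ```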
    • backend: Consistently use int (vs. int64) for tensor shapes · 0e38297f
      Jesse Gross authored
      Currently there is a mixture of int and int64 used when dealing with
      tensor dimensions and shapes, which causes unnecessary conversions;
      they should all be the same type.

      In general, most interfaces (such as PyTorch) use int64 for
      generality, but most implementations (such as CUDA) use int32 for
      performance. There isn't much benefit to being more flexible than
      the implementations we are likely to run on.
      
      In addition, as a practical matter, a model with a tensor with a single
      dimension larger than 32 bits is unlikely to run on a 32-bit machine.
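
      A sketch of the convention after the change, with plain int wherever
      a dimension appears (method names assumed):

      ```go
      package ml

      // Tensor dimensions are plain int throughout: no int/int64 conversions
      // at call sites, and no generality beyond what backends like CUDA offer.
      type Tensor interface {
      	Dim(n int) int    // size of dimension n
      	Stride(n int) int // stride of dimension n, in elements
      	Shape() []int     // all dimension sizes
      }
      ```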
    • next ollama runner (#7913) · 58245413
      Michael Yang authored

      feat: add new Ollama engine using ggml through cgo
      
      This change introduces a new way to run pretrained models. It adds three high-level interfaces, along with a number of smaller helper interfaces, to facilitate this.
      
      - `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
      - `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
      - `ml.Tensor` defines the interface for a tensor and tensor operations
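
      A condensed sketch of how the three interfaces relate; the method
      sets shown are abbreviated assumptions, not the full definitions in
      `model/model.go` and `ml/backend.go`:

      ```go
      package engine

      // Tensor is a value in the compute graph (see ml.Tensor).
      type Tensor interface {
      	Shape() []int
      	Mulmat(other Tensor) Tensor
      }

      // Context accumulates graph operations and executes them.
      type Context interface {
      	Compute(outputs ...Tensor)
      }

      // Backend loads a pretrained model onto hardware and exposes its
      // weights (see ml.Backend).
      type Backend interface {
      	Get(name string) Tensor // fetch a loaded weight by name
      	NewContext() Context
      }

      // Model is a model architecture (see model.Model); Forward builds one
      // step of the forward pass and is called to generate completions.
      type Model interface {
      	Forward(ctx Context, inputs []int32) (Tensor, error)
      }
      ```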
      
      This is the first implementation of the new engine. Follow up PRs will implement more features:
      
      - non-greedy sampling (#8410)
      - integration with Ollama and KV caching (#8301)
      - more model support (#9080) with more coming soon
      Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>