1. 20 Feb, 2025 2 commits
  2. 19 Feb, 2025 5 commits
  3. 18 Feb, 2025 9 commits
  4. 17 Feb, 2025 2 commits
    • Jeremy Schlatter's avatar
      cmd: fix cursor flickering in progress bar · 5930aaeb
      Jeremy Schlatter authored
      The previous commit fixed flickering in the progress bar itself. Cursor
      flickering is harder to address.
      
      Cursor flickering could be fixed by hiding the cursor altogether while
      the progress bar is displayed. The downside of this is that if the
      program is killed in such a way that it can't clean up its state, it
      would leave the cursor invisible.
      
      Instead, this commit introduces an output buffer. All of the escape
      codes and content for a single progress update are written to a buffer,
      which is then flushed to the terminal all at once. This significantly
      decreases the time during which the terminal has seen the cursor-hiding
      code but has not yet seen the cursor-showing code, thus minimizing (but
      not 100% eliminating) cursor flickering.
      
      For more context, see:
      https://gitlab.gnome.org/GNOME/vte/-/issues/2837#note_2269501
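
      A minimal sketch of the single-flush approach (illustrative code, not
      the actual cmd package):

      package main

      import (
          "bytes"
          "fmt"
          "os"
          "time"
      )

      // render builds one complete progress frame, escape codes included, in a
      // buffer and writes it to the terminal in a single call, keeping the gap
      // between the cursor-hiding and cursor-showing codes as small as possible.
      func render(percent int) {
          var buf bytes.Buffer
          buf.WriteString("\x1b[?25l") // hide cursor
          buf.WriteString("\r")        // return to the start of the line
          fmt.Fprintf(&buf, "downloading... %3d%%", percent)
          buf.WriteString("\x1b[K")    // erase the leftover tail of the old line
          buf.WriteString("\x1b[?25h") // show cursor again
          os.Stdout.Write(buf.Bytes()) // one write per frame
      }

      func main() {
          for p := 0; p <= 100; p += 10 {
              render(p)
              time.Sleep(50 * time.Millisecond)
          }
          fmt.Println()
      }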
      5930aaeb
    • Jeremy Schlatter's avatar
      cmd: fix progress bar flickering · faf67db0
      Jeremy Schlatter authored
      Previous code cleared the display before writing new content, creating a
      window where the terminal could (and in some cases did) render empty lines.
      
      Instead, we now write new content over the old content, only clearing
      the trailing end of lines for cases where the new line is shorter.
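
      Roughly, the redraw becomes (illustrative sketch, not the actual code):

      package main

      import (
          "fmt"
          "io"
          "os"
      )

      // redraw moves the cursor back up over the previously drawn lines and
      // writes the new lines directly over them, erasing only the trailing
      // remainder of each line rather than clearing the whole region first.
      func redraw(w io.Writer, prevLines int, lines []string) {
          if prevLines > 0 {
              fmt.Fprintf(w, "\x1b[%dA", prevLines) // cursor up over the old output
          }
          for _, line := range lines {
              fmt.Fprint(w, "\r", line) // overwrite from the start of the line
              fmt.Fprint(w, "\x1b[K\n") // erase only the leftover tail, then newline
          }
      }

      func main() {
          redraw(os.Stdout, 0, []string{"pulling manifest", "pulling layer 42%"})
          redraw(os.Stdout, 2, []string{"pulling manifest", "pulling layer 57%"})
      }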
      
      Fixes #1664
      faf67db0
  5. 15 Feb, 2025 2 commits
  6. 14 Feb, 2025 17 commits
    • Daniel Hiltgen's avatar
      df2680b4
    • Jesse Gross's avatar
      llamarunner: Init GGML before printing system info · 010313bb
      Jesse Gross authored
      We currently print system info before the GGML backends are loaded.
      This results in only getting information about the default lowest
      common denominator runner. If we move the GGML init earlier, we can
      see what we are actually running.
      
      Before:
      time=2025-02-14T11:15:07.606-08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | cgo(gcc)" threads=24
      
      After:
      time=2025-02-14T11:16:02.936-08:00 level=INFO source=runner.go:935 msg=system info="CPU : LLAMAFILE = 1 | CPU : LLAMAFILE = 1 | CUDA : ARCHS = 890 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | cgo(gcc)" threads=24
      010313bb
    • Jeffrey Morgan's avatar
      llm: attempt to evaluate symlinks, but do not fail (#9089) · 5296f487
      Jeffrey Morgan authored
      provides a better approach to #9088 that will attempt to
      evaluate symlinks (important for macOS where 'ollama' is
      often a symlink), but use the result of os.Executable()
      as a fallback in scenarios where filepath.EvalSymlinks
      fails due to permission errors or other issues
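
      A minimal sketch of that fallback (not the exact code in the llm package):

      package main

      import (
          "fmt"
          "os"
          "path/filepath"
      )

      // exePath returns the path of the running binary, resolving symlinks when
      // possible, but falling back to the raw os.Executable() result whenever
      // filepath.EvalSymlinks fails, e.g. because of permission errors along
      // the path.
      func exePath() (string, error) {
          exe, err := os.Executable()
          if err != nil {
              return "", err
          }
          if resolved, err := filepath.EvalSymlinks(exe); err == nil {
              return resolved, nil
          }
          return exe, nil
      }

      func main() {
          p, err := exePath()
          fmt.Println(p, err)
      }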
      5296f487
    • Jeffrey Morgan's avatar
      llm: do not evaluate symlink for exe path lookup (#9088) · f05774b0
      Jeffrey Morgan authored
      In some cases, the directories in the executable path read by
      filepath.EvalSymlinks are not accessible, resulting in permission
      errors that prevent models from running. It also does not handle
      long paths well on Windows, again resulting in errors. This change
      removes the filepath.EvalSymlinks call on the os.Executable() result
      altogether
      f05774b0
    • Jeffrey Morgan's avatar
      6600bd7d
    • Jesse Gross's avatar
      Runner for Ollama engine · ed443a03
      Jesse Gross authored
      This provides integration with the new Ollama engine
      (58245413 next ollama runner (#7913)) and the rest of the Ollama
      infrastructure such as the runner and Ollama server.
      
      In addition, it also builds out the KV cache infrastructure to
      support the requirements of how Ollama runs models, such as:
       - Parallel processing
       - Memory management for defragmentation and shifting
       - Multi-modal models
      
      Both old and new engines continue to be supported. By default, only
      the old engine is used. To enable the new engine:
      
      Start the server with the OLLAMA_NEW_ENGINE environment variable set:
      OLLAMA_NEW_ENGINE=1 ./ollama serve
      
      Start a model that is supported by the Ollama engine. This one is Llama 3.1 8b Q4_K_M:
      ./ollama run jessegross/llama3.1
      ed443a03
    • Jesse Gross's avatar
      models: Move models into their own directory · 6945617a
      Jesse Gross authored
      This allows the list of models to live in its own file instead of
      being mixed into the runner code.
      6945617a
    • Jesse Gross's avatar
      vocab: Use int32 for special tokens · 7916f550
      Jesse Gross authored
      Special tokens are currently read as uint32 from the model metadata.
      However, all other parts of the system (including the tokenizer) use
      int32 to represent tokens so it is impossible to represent the high
      portion of the unsigned range. For consistency and to avoid casts,
      we should just use int32 everywhere.
      7916f550
    • Jesse Gross's avatar
      model: Load tensors behind an interface · d650ad39
      Jesse Gross authored
      Currently, if a model uses an interface for its data structures (as mllama
      does) then the tensor data in the structs implementing that interface will
      not get loaded.
      d650ad39
    • Jesse Gross's avatar
      ggml-backend: Close on nil should be a no-op · d223f3b6
      Jesse Gross authored
      d223f3b6
    • Jesse Gross's avatar
      ggml-backend: Ensure data is available after async computation · 60830695
      Jesse Gross authored
      We need to sync before retrieving data after async computation.
      It is also important to ensure that the Go buffer is not moved by
      the GC across function calls so we do a synchronous copy.
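
      The read path looks roughly like this (hypothetical names, not the real
      ggml binding):

      package ml

      // backend is a stand-in for the real cgo-backed type.
      type backend interface {
          Sync()                         // block until queued graph work finishes
          Read(t *Tensor, dst []float32) // synchronous copy out of backend memory
      }

      type Tensor struct {
          b        backend
          elements int
      }

      // Floats returns a tensor's contents after an asynchronous computation.
      func (t *Tensor) Floats() []float32 {
          // Wait for the async computation to finish before touching the data.
          t.b.Sync()

          // Copy synchronously into a fresh Go slice. The copy completes before
          // this call returns, so the GC has no opportunity to move or collect
          // dst while the backend is still writing into it.
          dst := make([]float32, t.elements)
          t.b.Read(t, dst)
          return dst
      }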
      60830695
    • Jesse Gross's avatar
      ggml-backend: Let GGML allocate context memory · 01d9a468
      Jesse Gross authored
      Passing in a Go buffer is not safe because the garbage collector could
      free or move the memory while the context is still open. However, if
      we pass in the size and a nil pointer then GGML will allocate it from
      the C side.
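
      Sketched as a cgo fragment (assumes a file that includes ggml.h; the
      field names are those of ggml's ggml_init_params):

      package ggml

      /*
      #include "ggml.h"
      */
      import "C"

      // newContext asks GGML to allocate its own working memory: we pass only
      // the size and a nil buffer, so the allocation lives on the C heap where
      // the Go garbage collector can never move or free it while the context
      // is open.
      func newContext(size int) *C.struct_ggml_context {
          params := C.struct_ggml_init_params{
              mem_size:   C.size_t(size),
              mem_buffer: nil, // nil: GGML mallocs the buffer on the C side
              no_alloc:   false,
          }
          return C.ggml_init(params)
      }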
      01d9a468
    • Jesse Gross's avatar
      backend: API to support full precision matmul · d773b7d6
      Jesse Gross authored
      Most tensor backends try to optimize performance by using a lower
      precision for matmuls. However, some operations (such as kq) on
      some models are sensitive to this and require full precision.
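
      Hypothetical sketch of the API addition:

      package ml

      type Context interface{} // elided

      // Tensor gains a full-precision variant of matrix multiplication for
      // operations, such as kq, that are sensitive to reduced precision.
      type Tensor interface {
          Mulmat(ctx Context, t2 Tensor) Tensor         // backend may reduce precision
          MulmatFullPrec(ctx Context, t2 Tensor) Tensor // always full precision
      }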
      d773b7d6
    • Jesse Gross's avatar
      backend: Support graph computation that does not return an output · 4d4463b2
      Jesse Gross authored
      There are two cases where we may not have an output after computing:
       - Prompt processing where the length of the input exceeds the batch
         size
       - Internal memory management operations such as cache defrag and shift
      4d4463b2
    • Jesse Gross's avatar
      backend: Consistently use int (vs. int64) for tensor shapes · 0e38297f
      Jesse Gross authored
      Currently there is a mixture of int and int64 used when dealing with
      tensor dimensions and shapes, which causes unnecessary conversions -
      they all should be the same type.
      
      In general, most interfaces (such as Pytorch) use int64 for
      generality but most implementations (such as CUDA) use int32 for
      performance. There isn't much benefit to us in being more flexible
      than the implementations we are likely to run on.
      
      In addition, as a practical matter, a model with a tensor with a single
      dimension larger than 32 bits is unlikely to run on a 32-bit machine.
      0e38297f
    • Jesse Gross's avatar
      backend: Don't return an error on Close · 7e13f568
      Jesse Gross authored
      It is not common to return errors with close/free operations - most
      callers won't check them and, even if they did, there's probably not
      much they could do. It's better not to give implementations false
      expectations.
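
      The resulting shape, in sketch form (freeBackend stands in for the real
      cleanup call):

      package ml

      import "log/slog"

      type Backend struct{ handle uintptr }

      func freeBackend(h uintptr) error { return nil } // placeholder

      // Close releases the backend's resources. It deliberately returns
      // nothing: callers rarely check errors from close/free operations, and
      // there is little they could do about one anyway, so failures are only
      // logged. Close on a nil receiver is a no-op.
      func (b *Backend) Close() {
          if b == nil {
              return
          }
          if err := freeBackend(b.handle); err != nil {
              slog.Warn("failed to free backend resources", "error", err)
          }
      }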
      7e13f568
    • Michael Yang's avatar
      next ollama runner (#7913) · 58245413
      Michael Yang authored
      
      
      feat: add new Ollama engine using ggml through cgo
      
      This change introduces a new way to run pretrained models. It introduces three high-level interfaces (sketched after the list below) and a number of smaller helper interfaces to facilitate this.
      
      - `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
      - `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
      - `ml.Tensor` defines the interface for a tensor and tensor operations
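
      A condensed sketch of how the three fit together (signatures are
      illustrative, not the exact ones in the repository):

      package sketch

      // Model is implemented once per architecture (llama, mllama, ...).
      // Forward runs one step of the model and returns the output tensor that
      // completions are sampled from.
      type Model interface {
          Forward(ctx Context, batch Batch) (Tensor, error)
      }

      // Backend loads a pretrained model onto the available hardware and hands
      // out its weights as tensors by name.
      type Backend interface {
          Get(name string) Tensor
          NewContext() Context
      }

      // Tensor is a handle to backend-owned data plus the operations a Model
      // needs to express its forward pass.
      type Tensor interface {
          Shape() []int
          Add(ctx Context, t2 Tensor) Tensor
          Mulmat(ctx Context, t2 Tensor) Tensor
      }

      // Context records tensor operations into a graph and computes it; Batch
      // holds the tokens (and any images) for one forward pass. Both are
      // elided here.
      type Context interface{}
      type Batch struct{}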
      
      This is the first implementation of the new engine. Follow up PRs will implement more features:
      
      - non-greedy sampling (#8410)
      - integration with Ollama and KV caching (#8301)
      - more model support (#9080) with more coming soon
      Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
      58245413
  7. 13 Feb, 2025 3 commits