1. 08 May, 2025 1 commit
  2. 06 May, 2025 1 commit
    • Move quantization to new backend (#10363) · 42481045
      Daniel Hiltgen authored
      * Move quantization logic to GGML via new backend
      
      This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
      
      * Remove "add model quantizations"
      
      This is no longer needed now that quantization is implemented in Go+GGML code directly.
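      The note above describes keeping the actual quantization kernels in GGML (reached via cgo) while the model-aware decisions move to Go. A minimal sketch of that split, with a made-up tensor type and per-tensor rule, not the actual Ollama code:

      ```go
      package main

      import "fmt"

      // tensor is a minimal stand-in for a model tensor's metadata.
      type tensor struct {
          Name  string
          Shape []uint64
      }

      // targetType is a hypothetical per-tensor rule: the "model-aware" part that
      // now lives in Go. The actual byte crunching would be done by GGML's
      // quantization routines, reached through cgo.
      func targetType(t tensor, defaultType string) string {
          if len(t.Shape) < 2 {
              return "F32" // e.g. keep norms and biases in higher precision
          }
          return defaultType // e.g. "Q4_K_M"
      }

      func main() {
          tensors := []tensor{
              {Name: "blk.0.attn_norm.weight", Shape: []uint64{4096}},
              {Name: "blk.0.attn_q.weight", Shape: []uint64{4096, 4096}},
          }
          for _, t := range tensors {
              fmt.Printf("%s -> %s\n", t.Name, targetType(t, "Q4_K_M"))
          }
      }
      ```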
  3. 03 May, 2025 1 commit
    • sched: logging improvements (#10550) · 76ea735a
      Daniel Hiltgen authored
      This enhances our logging in the scheduler. The initial "waiting for server" log
      no longer claims an initial error state (it now reports "not responding", which better
      reflects the actual state). Runners now have slog wiring to report more details about
      the runner, including its PID.
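      For illustration, a per-runner `slog` logger carrying the PID might be wired roughly like this; the spawned command and attribute names are placeholders, not the actual scheduler code:

      ```go
      package main

      import (
          "log/slog"
          "os"
          "os/exec"
      )

      func main() {
          // Spawn a stand-in "runner" process (sleep is only a placeholder).
          cmd := exec.Command("sleep", "1")
          if err := cmd.Start(); err != nil {
              slog.Error("failed to start runner", "error", err)
              os.Exit(1)
          }

          // A per-runner logger carrying the PID, so every later line about this
          // runner is attributable.
          logger := slog.Default().With("runner.pid", cmd.Process.Pid)
          logger.Info("waiting for runner to respond", "status", "not responding")

          _ = cmd.Wait()
          logger.Info("runner exited")
      }
      ```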
  4. 28 Apr, 2025 1 commit
  5. 22 Apr, 2025 1 commit
    • increase default context length to 4096 (#10364) · 424f6486
      Devon Rifkin authored
      * increase default context length to 4096
      
      We lower the default numParallel from 4 to 2 and use these "savings" to
      double the default context length from 2048 to 4096.
      
      We're memory neutral in cases when we previously would've used
      numParallel == 4, but we add the following mitigation to handle some
      cases where we would have previously fallen back to 1x2048 due to low
      VRAM: we decide between 2048 and 4096 using a runtime check, choosing
      2048 if we're on a one GPU system with total VRAM of <= 4 GB. We
      purposefully don't check the available VRAM because we don't want the
      context window size to change unexpectedly based on the available VRAM.
      
      We plan on making the default even larger, but this is a relatively
      low-risk change we can make to quickly double it.
      
      * fix tests
      
      add an explicit context length so they don't get truncated. The code
      that treats -1 as a signal to perform the runtime check isn't running
      as part of these tests.
      
      * tweak small gpu message
      
      * clarify context length default
      
      also make it actually show up in `ollama serve --help`
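      A sketch of the decision described above, assuming a hypothetical `gpuInfo` type: -1 stands for "decide at runtime", and the check uses total rather than free VRAM so the default stays stable between runs:

      ```go
      package main

      import "fmt"

      type gpuInfo struct {
          TotalMemory uint64 // bytes of total (not free) VRAM
      }

      // defaultNumCtx mirrors the described behavior: fall back to 2048 only on a
      // single-GPU system with <= 4 GiB of total VRAM, otherwise use 4096.
      func defaultNumCtx(requested int, gpus []gpuInfo) int {
          if requested != -1 {
              return requested // an explicit user setting always wins
          }
          const fourGiB = 4 << 30
          if len(gpus) == 1 && gpus[0].TotalMemory <= fourGiB {
              return 2048
          }
          return 4096
      }

      func main() {
          small := []gpuInfo{{TotalMemory: 4 << 30}}
          large := []gpuInfo{{TotalMemory: 24 << 30}}
          fmt.Println(defaultNumCtx(-1, small))   // 2048
          fmt.Println(defaultNumCtx(-1, large))   // 4096
          fmt.Println(defaultNumCtx(8192, small)) // 8192
      }
      ```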
  6. 16 Apr, 2025 1 commit
  7. 14 Feb, 2025 1 commit
    • next ollama runner (#7913) · 58245413
      Michael Yang authored
      
      
      feat: add new Ollama engine using ggml through cgo
      
      This change introduces a new way to run pretrained models. It introduces three high-level interfaces and a number of smaller helper interfaces to facilitate this.
      
      - `model.Model` defines the interface for a model architecture. Models such as `llama` and `mllama`, which are provided as examples, can implement the model's forward propagation in the `Forward` method. This method will be called to generate completions. This interface can be found in `model/model.go`
      - `ml.Backend` defines the interface for a backend tensor library, in this case `ggml`. Among other things, a Backend is responsible for loading a pretrained model into hardware (GPU, CPU, etc) and providing an interface for Models to access loaded tensors. This interface can be found in `ml/backend.go`
      - `ml.Tensor` defines the interface for a tensor and tensor operations
      
      This is the first implementation of the new engine. Follow up PRs will implement more features:
      
      - non-greedy sampling (#8410)
      - integration with Ollama and KV caching (#8301)
      - more model support (#9080) with more coming soon
      Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
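      A trimmed, illustrative paraphrase of the three interfaces; the actual definitions in `model/model.go` and `ml/backend.go` carry more methods and different signatures:

      ```go
      package main

      // Tensor is the abstraction over a backend tensor and its operations.
      type Tensor interface {
          Shape() []int
          Mulmat(other Tensor) Tensor // one example op; the real interface has many more
      }

      // Backend loads a pretrained model onto hardware and exposes its tensors.
      type Backend interface {
          Get(name string) Tensor // look up a loaded weight by name
      }

      // Model is a model architecture; Forward runs forward propagation to
      // produce logits for the given inputs.
      type Model interface {
          Forward(b Backend, inputs []int32) (Tensor, error)
      }

      func main() {}
      ```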
  8. 10 Dec, 2024 1 commit
  9. 17 Oct, 2024 1 commit
  10. 20 Sep, 2024 1 commit
    • Add Windows arm64 support to official builds (#5712) · d632e23f
      Daniel Hiltgen authored
      * Unified arm/x86 windows installer
      
      This adjusts the installer payloads to be architecture-aware so we can carry
      both amd64 and arm64 binaries in the installer and install only the applicable
      architecture at install time.
      
      * Include arm64 in official windows build
      
      * Harden schedule test for slow Windows timers
      
      This test seems to be a bit flaky on Windows, so give it more time to converge.
  11. 11 Sep, 2024 1 commit
  12. 21 Aug, 2024 1 commit
  13. 11 Aug, 2024 1 commit
  14. 02 Aug, 2024 1 commit
  15. 31 Jul, 2024 2 commits
  16. 30 Jul, 2024 2 commits
    • Add Metrics to `api\embed` response (#5709) · 1b44d873
      royjhan authored
      * add prompt tokens to embed response
      
      * rm slog
      
      * metrics
      
      * types
      
      * prompt n
      
      * clean up
      
      * reset submodule
      
      * update tests
      
      * test name
      
      * list metrics
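      As a rough illustration of "metrics in the embed response", the response type might carry a prompt token count and durations alongside the embeddings. Field names below are modeled on other Ollama API responses and should be treated as assumptions:

      ```go
      package main

      import (
          "encoding/json"
          "fmt"
          "time"
      )

      // embedResponse sketches what metrics on an embed response could look like.
      type embedResponse struct {
          Model           string        `json:"model"`
          Embeddings      [][]float32   `json:"embeddings"`
          TotalDuration   time.Duration `json:"total_duration"`
          LoadDuration    time.Duration `json:"load_duration"`
          PromptEvalCount int           `json:"prompt_eval_count"`
      }

      func main() {
          resp := embedResponse{
              Model:           "all-minilm",
              Embeddings:      [][]float32{{0.1, 0.2}},
              TotalDuration:   120 * time.Millisecond,
              LoadDuration:    30 * time.Millisecond,
              PromptEvalCount: 5,
          }
          out, _ := json.MarshalIndent(resp, "", "  ")
          fmt.Println(string(out))
      }
      ```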
    • Prevent partial loading on mixed GPU brands · 34542099
      Daniel Hiltgen authored
      In multi-brand GPU setups, if we couldn't fully load the model we
      would fall through the scheduler and mistakenly try to load across
      a mix of brands. This makes sure we find the set of GPU(s) that
      best fits the partial load.
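      A sketch of the idea with hypothetical types (not the actual scheduler code): group GPUs by library/brand and pick the single group with the most free memory for the partial load, so the load never spans mixed brands:

      ```go
      package main

      import "fmt"

      type gpu struct {
          Library    string // e.g. "cuda", "rocm"
          FreeMemory uint64
      }

      // bestBrandSubset returns the single-brand group of GPUs with the most free
      // memory, illustrating the "best fit for the partial load" selection.
      func bestBrandSubset(gpus []gpu) []gpu {
          groups := map[string][]gpu{}
          for _, g := range gpus {
              groups[g.Library] = append(groups[g.Library], g)
          }
          var best []gpu
          var bestFree uint64
          for _, group := range groups {
              var free uint64
              for _, g := range group {
                  free += g.FreeMemory
              }
              if free > bestFree {
                  best, bestFree = group, free
              }
          }
          return best
      }

      func main() {
          gpus := []gpu{
              {Library: "cuda", FreeMemory: 8 << 30},
              {Library: "rocm", FreeMemory: 6 << 30},
              {Library: "cuda", FreeMemory: 4 << 30},
          }
          fmt.Println(bestBrandSubset(gpus)) // the two cuda GPUs: 12 GiB free in total
      }
      ```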
  17. 22 Jul, 2024 1 commit
  18. 21 Jul, 2024 1 commit
  19. 15 Jul, 2024 1 commit
    • Introduce `/api/embed` endpoint supporting batch embedding (#5127) · b9f5e16c
      royjhan authored
      * Initial Batch Embedding
      
      * Revert "Initial Batch Embedding"
      
      This reverts commit c22d54895a280b54c727279d85a5fc94defb5a29.
      
      * Initial Draft
      
      * mock up notes
      
      * api/embed draft
      
      * add server function
      
      * check normalization
      
      * clean up
      
      * normalization
      
      * playing around with truncate stuff
      
      * Truncation
      
      * Truncation
      
      * move normalization to go
      
      * Integration Test Template
      
      * Truncation Integration Tests
      
      * Clean up
      
      * use float32
      
      * move normalize
      
      * move normalize test
      
      * refactoring
      
      * integration float32
      
      * input handling and handler testing
      
      * Refactoring of legacy and new
      
      * clear comments
      
      * merge conflicts
      
      * touches
      
      * embedding type 64
      
      * merge conflicts
      
      * fix hanging on single string
      
      * refactoring
      
      * test values
      
      * set context length
      
      * clean up
      
      * testing clean up
      
      * testing clean up
      
      * remove function closure
      
      * Revert "remove function closure"
      
      This reverts commit 55d48c6ed17abe42e7a122e69d603ef0c1506787.
      
      * remove function closure
      
      * remove redundant error check
      
      * clean up
      
      * more clean up
      
      * clean up
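      For a sense of how a batch call to the endpoint looks from a client, here is a minimal Go example posting several inputs at once; the exact request and response fields should be checked against the current API docs:

      ```go
      package main

      import (
          "bytes"
          "encoding/json"
          "fmt"
          "net/http"
      )

      func main() {
          // Batch request: "input" carries several strings at once; the endpoint
          // also accepts a single string.
          body, _ := json.Marshal(map[string]any{
              "model": "all-minilm",
              "input": []string{"why is the sky blue?", "why is grass green?"},
          })

          resp, err := http.Post("http://localhost:11434/api/embed", "application/json", bytes.NewReader(body))
          if err != nil {
              fmt.Println("request failed:", err)
              return
          }
          defer resp.Body.Close()

          var result struct {
              Embeddings [][]float32 `json:"embeddings"`
          }
          if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
              fmt.Println("decode failed:", err)
              return
          }
          fmt.Printf("got %d embeddings\n", len(result.Embeddings))
      }
      ```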
  20. 09 Jul, 2024 1 commit
  21. 03 Jul, 2024 2 commits
    • Only set default keep_alive on initial model load · 955f2a4e
      Daniel Hiltgen authored
      This change fixes the handling of keep_alive so that if the client
      request omits the setting, we only apply the default on the initial load.
      Once the model is loaded, if new requests leave this unset, we keep
      whatever keep_alive was already in effect.
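      A compact sketch of that rule, with illustrative names: the default applies only when the model is first loaded, and later requests that omit keep_alive leave the current value alone:

      ```go
      package main

      import (
          "fmt"
          "time"
      )

      // effectiveKeepAlive illustrates the rule; it is not the server's code.
      func effectiveKeepAlive(requested *time.Duration, current time.Duration, alreadyLoaded bool, def time.Duration) time.Duration {
          if requested != nil {
              return *requested // an explicit request always wins
          }
          if alreadyLoaded {
              return current // omitted on a loaded model: keep what's there
          }
          return def // omitted on initial load: use the default
      }

      func main() {
          def := 5 * time.Minute
          forever := time.Duration(-1)

          // Initial load with keep_alive omitted -> default.
          fmt.Println(effectiveKeepAlive(nil, 0, false, def))
          // A request pinned the model (-1); later requests omit it -> stays pinned.
          fmt.Println(effectiveKeepAlive(nil, forever, true, def))
      }
      ```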
    • Prevent loading models larger than total memory · 3c75113e
      Daniel Hiltgen authored
      Users may not realize that the shiny new model they're trying to load
      fits on their disk but can't be loaded into system+GPU memory. Today
      we crash; with this fix, we give them a better error message before
      even trying to load it.
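      The guard described above might reduce to a check like the following, assuming a precomputed size estimate; the real estimate and error text differ:

      ```go
      package main

      import "fmt"

      // checkFits refuses a load whose estimated footprint exceeds system memory
      // plus total VRAM, returning a readable error instead of crashing mid-load.
      func checkFits(estimated, systemMemory, totalVRAM uint64) error {
          if estimated > systemMemory+totalVRAM {
              return fmt.Errorf("model requires ~%d GiB but only %d GiB of system+GPU memory is available",
                  estimated>>30, (systemMemory+totalVRAM)>>30)
          }
          return nil
      }

      func main() {
          if err := checkFits(96<<30, 32<<30, 24<<30); err != nil {
              fmt.Println(err)
              return
          }
          fmt.Println("fits")
      }
      ```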
  22. 25 Jun, 2024 1 commit
    • llm: speed up gguf decoding by a lot (#5246) · cb42e607
      Blake Mizerany authored
      Previously, some costly things were causing the loading of GGUF files
      and their metadata and tensor information to be VERY slow:
      
        * Too many allocations when decoding strings
        * Hitting disk for each read of each key and value, resulting in a
          not-okay amount of syscalls/disk I/O.
      
      The show API is now down to 33ms from 800ms+ for llama3 on a MacBook Pro
      M3.
      
      This commit also allows callers to skip collecting large arrays of values
      when decoding GGUFs. When such keys are encountered, their values are null
      and are encoded as such in JSON.
      
      Also, this fixes a broken test that was not encoding valid GGUF.
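      Both fixes boil down to buffering reads and reusing scratch space when decoding GGUF's length-prefixed strings. A self-contained sketch of that pattern (not the actual decoder):

      ```go
      package main

      import (
          "bufio"
          "encoding/binary"
          "fmt"
          "io"
          "strings"
      )

      // readString decodes a length-prefixed string the way GGUF stores them
      // (uint64 length followed by bytes). Reading through bufio.Reader means each
      // call is served from an in-memory buffer instead of a separate disk read,
      // and the reused scratch buffer cuts per-key allocations.
      func readString(r *bufio.Reader, scratch []byte) (string, []byte, error) {
          var n uint64
          if err := binary.Read(r, binary.LittleEndian, &n); err != nil {
              return "", scratch, err
          }
          if uint64(cap(scratch)) < n {
              scratch = make([]byte, n) // grow once; reused across calls
          }
          buf := scratch[:n]
          if _, err := io.ReadFull(r, buf); err != nil {
              return "", scratch, err
          }
          return string(buf), scratch, nil
      }

      func main() {
          // Fake "file": two length-prefixed strings back to back.
          var sb strings.Builder
          for _, s := range []string{"general.architecture", "llama"} {
              _ = binary.Write(&sb, binary.LittleEndian, uint64(len(s)))
              sb.WriteString(s)
          }

          r := bufio.NewReader(strings.NewReader(sb.String()))
          var scratch []byte
          for {
              s, next, err := readString(r, scratch)
              if err != nil {
                  break
              }
              scratch = next
              fmt.Println(s)
          }
      }
      ```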
  23. 21 Jun, 2024 1 commit
    • Enable concurrency by default · 17b7186c
      Daniel Hiltgen authored
      This adjusts our default settings to enable multiple models and parallel
      requests to a single model. Users can still override these with the same
      env var settings as before. Parallelism has a direct impact on num_ctx,
      which in turn can have a significant impact on small-VRAM GPUs, so this
      change also refines the algorithm: when parallel is not explicitly set by
      the user, we try to find a reasonable default that fits the model on their
      GPU(s). As before, multiple models will only load concurrently if they
      fully fit in VRAM.
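      A sketch of the refined default: parallel slots multiply the context (and thus KV-cache) allocation, so when the user hasn't set parallelism, step down from the preferred value until the estimate fits in free VRAM. The memory math below is made up for illustration:

      ```go
      package main

      import "fmt"

      // pickNumParallel steps down from the preferred parallelism until the
      // caller-supplied fit check passes, falling back to 1.
      func pickNumParallel(preferred int, fits func(numParallel int) bool) int {
          for n := preferred; n > 1; n-- {
              if fits(n) {
                  return n
              }
          }
          return 1
      }

      func main() {
          freeVRAM := uint64(6 << 30)
          weights := uint64(4 << 30)
          perSlot := uint64(768 << 20) // rough KV-cache cost per parallel slot

          n := pickNumParallel(4, func(numParallel int) bool {
              return weights+uint64(numParallel)*perSlot <= freeVRAM
          })
          fmt.Println("numParallel =", n) // 2 with these made-up numbers
      }
      ```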
  24. 14 Jun, 2024 4 commits
  25. 04 Jun, 2024 1 commit
  26. 24 May, 2024 1 commit
  27. 23 May, 2024 1 commit
  28. 14 May, 2024 1 commit
  29. 06 May, 2024 2 commits
  30. 05 May, 2024 1 commit
    • Centralize server config handling · f56aa200
      Daniel Hiltgen authored
      This moves all the env var reading into one central module
      and logs the loaded config once at startup, which should
      help when troubleshooting user server logs.
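      A minimal sketch of a centralized config module of this kind: read the env vars in one place, fall back to defaults, and log the effective config once at startup. Only a few variables are shown and the defaults here are illustrative:

      ```go
      package main

      import (
          "log/slog"
          "os"
          "strconv"
      )

      // config holds the handful of settings this sketch reads; the real module
      // covers many more variables.
      type config struct {
          Host      string
          MaxLoaded int
          FlashAttn bool
      }

      func loadConfig() config {
          c := config{Host: "127.0.0.1:11434", MaxLoaded: 3} // illustrative defaults
          if v := os.Getenv("OLLAMA_HOST"); v != "" {
              c.Host = v
          }
          if v, err := strconv.Atoi(os.Getenv("OLLAMA_MAX_LOADED_MODELS")); err == nil {
              c.MaxLoaded = v
          }
          c.FlashAttn = os.Getenv("OLLAMA_FLASH_ATTENTION") == "1"
          return c
      }

      func main() {
          cfg := loadConfig()
          // Logging the effective config once at startup is what makes user
          // server logs easier to troubleshoot.
          slog.Info("server config", "host", cfg.Host, "max_loaded", cfg.MaxLoaded, "flash_attention", cfg.FlashAttn)
      }
      ```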
  31. 03 May, 2024 1 commit
  32. 28 Apr, 2024 1 commit
    • Fix concurrency for CPU mode · d6e3b645
      Daniel Hiltgen authored
      Prior refactoring passes accidentally removed the logic to bypass VRAM
      checks for CPU loads.  This adds that back, along with test coverage.
      
      This also fixes the loaded map access in the unit test so it happens behind
      the mutex, which was likely the cause of various flakes in the tests.
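      The restored bypass can be pictured as a check like this, with illustrative names: CPU-only loads are measured against system memory instead of being rejected by a VRAM fit check that doesn't apply to them:

      ```go
      package main

      import "fmt"

      // canLoad illustrates the behavior: CPU-only loads check system memory,
      // GPU loads check VRAM. Not the actual scheduler code.
      func canLoad(cpuOnly bool, required, freeVRAM, freeSystem uint64) bool {
          if cpuOnly {
              return required <= freeSystem
          }
          return required <= freeVRAM
      }

      func main() {
          fmt.Println(canLoad(true, 8<<30, 0, 16<<30))      // true: fits in system RAM
          fmt.Println(canLoad(false, 8<<30, 6<<30, 16<<30)) // false: doesn't fit in VRAM
      }
      ```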
  33. 25 Apr, 2024 1 commit