1. 11 Aug, 2024 1 commit
  2. 07 Aug, 2024 1 commit
  3. 05 Aug, 2024 1 commit
    • Implement Linux NUMA detection · f457d634
      Daniel Hiltgen authored
      If the system has multiple NUMA nodes, enable NUMA support in llama.cpp.
      If numactl is detected on the PATH, use it; otherwise fall back to the
      basic "distribute" mode. A sketch of the detection follows.
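      A minimal sketch of this kind of detection, assuming Linux's /sys layout
      and standard-library calls only (the helper names are illustrative, not
      ollama's actual code):

      ```go
      package main

      import (
          "fmt"
          "os/exec"
          "path/filepath"
      )

      // numaNodeCount counts the NUMA nodes the kernel exposes; on Linux each
      // node appears as /sys/devices/system/node/node<N>.
      func numaNodeCount() int {
          nodes, _ := filepath.Glob("/sys/devices/system/node/node[0-9]*")
          return len(nodes)
      }

      // numaStrategy decides how to ask llama.cpp to handle NUMA: prefer
      // numactl when it is on the PATH, fall back to "distribute", and leave
      // NUMA support off entirely on single-node systems.
      func numaStrategy() string {
          if numaNodeCount() < 2 {
              return "" // single node: no NUMA handling needed
          }
          if _, err := exec.LookPath("numactl"); err == nil {
              return "numactl"
          }
          return "distribute"
      }

      func main() {
          fmt.Println("NUMA strategy:", numaStrategy())
      }
      ```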
  4. 02 Aug, 2024 1 commit
  5. 30 Jul, 2024 1 commit
  6. 27 Jul, 2024 1 commit
  7. 22 Jul, 2024 5 commits
    • Enable Windows error dialog for subprocess startup · e12fff88
      Daniel Hiltgen authored
      Make sure that if something goes wrong spawning the process, the user
      gets enough information to self-correct, or at least file a bug with
      details so we can fix it.  Once the process starts, we immediately
      switch back to the recommended setting to prevent the blocking dialog.
      This ensures that if the model fails to load (OOM, unsupported model
      type, etc.) the process exits quickly, and we can scan the subprocess's
      stdout/stderr for the reason to report via the API. See the sketch below.
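      A minimal sketch of the idea, assuming the Win32 SetErrorMode semantics
      (a child process inherits the parent's error mode at creation); this is
      illustrative, not the actual ollama code:

      ```go
      //go:build windows

      package llm

      import (
          "os/exec"
          "syscall"
      )

      var (
          kernel32     = syscall.NewLazyDLL("kernel32.dll")
          setErrorMode = kernel32.NewProc("SetErrorMode")
      )

      // spawnWithStartupDialogs temporarily re-enables Windows error dialogs
      // so a failure to even launch the subprocess is visible to the user,
      // then restores the previous (silent) mode so later failures such as
      // OOM or an unsupported model make the process exit quickly instead of
      // blocking on a dialog.
      func spawnWithStartupDialogs(cmd *exec.Cmd) error {
          // Mode 0 is the system default, with error dialogs enabled; the
          // call returns the previous mode so we can restore it afterward.
          prev, _, _ := setErrorMode.Call(0)
          err := cmd.Start()
          setErrorMode.Call(prev)
          return err
      }
      ```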
    • string · e2c3f6b3
      Michael Yang authored
    • bool · 55cd3ddc
      Michael Yang authored
    • rfc: dynamic environ lookup · 35b89b2e
      Michael Yang authored
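      The commit itself carries no description, but the titles ("string",
      "bool", "dynamic environ lookup") suggest settings becoming typed
      getters that re-read the environment on every call. A hedged sketch of
      what that shape might look like (the function names and the
      OLLAMA_FLASH_ATTENTION example are assumptions):

      ```go
      package envconfig

      import (
          "os"
          "strconv"
      )

      // Bool returns a getter that re-reads the environment on every call,
      // so a value changed while the server is running is picked up without
      // a restart (instead of caching os.Getenv once at startup).
      func Bool(key string) func() bool {
          return func() bool {
              v, ok := os.LookupEnv(key)
              if !ok {
                  return false
              }
              b, err := strconv.ParseBool(v)
              return err == nil && b
          }
      }

      // String does the same for plain string settings.
      func String(key string) func() string {
          return func() string { return os.Getenv(key) }
      }

      // Usage (hypothetical): var FlashAttention = Bool("OLLAMA_FLASH_ATTENTION")
      ```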
    • Refine error reporting for subprocess crash · a3c20e3f
      Daniel Hiltgen authored
      On Windows, the exit status tends to become the term users search for,
      leading them to pile onto unrelated issues.  This refines the reporting
      so that when we have a more detailed message, we suppress the exit
      status portion of the message (see the sketch below).
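      A minimal sketch of the reporting rule, assuming we have already scraped
      the runner's output for a message (illustrative, not the actual
      function):

      ```go
      package llm

      import (
          "fmt"
          "strings"
      )

      // runnerError folds a subprocess failure into a single message: when a
      // more detailed reason was captured from the runner's output, report
      // only that and suppress the generic exit status, so the status code
      // does not become the search term users rally around.
      func runnerError(exitErr error, lastOutput string) error {
          if msg := strings.TrimSpace(lastOutput); msg != "" {
              return fmt.Errorf("llama runner terminated: %s", msg)
          }
          return fmt.Errorf("llama runner terminated: %v", exitErr)
      }
      ```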
  8. 20 Jul, 2024 1 commit
    • Adjust Windows ROCm discovery · 283948c8
      Daniel Hiltgen authored
      The v5 HIP library reports unsupported GPUs that won't enumerate at
      inference time in the runner, so this makes sure discovery stays aligned
      with it.  The gfx906 cards are no longer supported, so we shouldn't
      compile with that GPU type since it won't enumerate at runtime.
  9. 15 Jul, 2024 1 commit
    • Introduce `/api/embed` endpoint supporting batch embedding (#5127) · b9f5e16c
      royjhan authored
      * Initial Batch Embedding
      * Revert "Initial Batch Embedding" (reverts commit c22d54895a280b54c727279d85a5fc94defb5a29)
      * Initial Draft
      * mock up notes
      * api/embed draft
      * add server function
      * check normalization
      * clean up
      * normalization
      * playing around with truncate stuff
      * Truncation
      * Truncation
      * move normalization to go
      * Integration Test Template
      * Truncation Integration Tests
      * Clean up
      * use float32
      * move normalize
      * move normalize test
      * refactoring
      * integration float32
      * input handling and handler testing
      * Refactoring of legacy and new
      * clear comments
      * merge conflicts
      * touches
      * embedding type 64
      * merge conflicts
      * fix hanging on single string
      * refactoring
      * test values
      * set context length
      * clean up
      * testing clean up
      * testing clean up
      * remove function closure
      * Revert "remove function closure" (reverts commit 55d48c6ed17abe42e7a122e69d603ef0c1506787)
      * remove function closure
      * remove redundant error check
      * clean up
      * more clean up
      * clean up
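      A small client-side example of the new endpoint. It assumes a local
      ollama server and a pulled embedding model (`all-minilm` here is just an
      example), and that the response carries an `embeddings` array with one
      float32 vector per input:

      ```go
      package main

      import (
          "bytes"
          "encoding/json"
          "fmt"
          "net/http"
      )

      func main() {
          // "input" accepts either a single string or an array of strings.
          body, _ := json.Marshal(map[string]any{
              "model": "all-minilm",
              "input": []string{"why is the sky blue?", "why is grass green?"},
          })
          resp, err := http.Post("http://localhost:11434/api/embed",
              "application/json", bytes.NewReader(body))
          if err != nil {
              panic(err)
          }
          defer resp.Body.Close()

          var out struct {
              Embeddings [][]float32 `json:"embeddings"`
          }
          if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
              panic(err)
          }
          if len(out.Embeddings) == 0 {
              fmt.Println("no embeddings returned")
              return
          }
          fmt.Printf("%d vectors, dim %d\n",
              len(out.Embeddings), len(out.Embeddings[0]))
      }
      ```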
  10. 13 Jul, 2024 1 commit
  11. 11 Jul, 2024 2 commits
  12. 10 Jul, 2024 1 commit
  13. 07 Jul, 2024 1 commit
  14. 05 Jul, 2024 1 commit
    • fix model reloading · ac7a842e
      Michael Yang authored
      ensure runtime model changes (template, system prompt, messages,
      options) are captured on model updates without needing to reload the
      server
  15. 03 Jul, 2024 1 commit
    • Fix corner cases on tmp cleaner on Mac · 0e982bc1
      Daniel Hiltgen authored
      When ollama is running for a long time, tmp cleaners can remove the
      runners.  This tightens up a few corner cases on ARM Macs where we
      failed with "server cpu not listed in available servers map[]".
  16. 01 Jul, 2024 2 commits
  17. 25 Jun, 2024 1 commit
    • llm: speed up gguf decoding by a lot (#5246) · cb42e607
      Blake Mizerany authored
      Previously, some costly operations made loading GGUF files and their
      metadata and tensor information VERY slow:
      
        * Too many allocations when decoding strings
        * Hitting disk for each read of each key and value, resulting in an
          excessive amount of syscalls/disk I/O
      
      The show API is now down to 33ms from 800ms+ for llama3 on a MacBook
      Pro M3.
      
      This commit also allows skipping the collection of large arrays of
      values when decoding GGUFs. When such keys are encountered, their
      values are null, and are encoded as such in JSON.
      
      Also, this fixes a broken test that was not encoding valid GGUF. A
      sketch of the buffered-read technique follows.
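      A sketch of the buffered-read side of the fix, assuming GGUF's
      little-endian, uint64-length-prefixed string encoding (this shows the
      general technique, not ollama's exact decoder):

      ```go
      package gguf

      import (
          "bufio"
          "encoding/binary"
          "io"
      )

      // readString decodes a GGUF string (uint64 length + bytes) from a
      // buffered reader. Wrapping the file in a bufio.Reader turns thousands
      // of tiny per-key reads into a handful of large ones, and reading into
      // a single sized buffer avoids the extra allocations of naive decoding.
      func readString(r *bufio.Reader) (string, error) {
          var n uint64
          if err := binary.Read(r, binary.LittleEndian, &n); err != nil {
              return "", err
          }
          buf := make([]byte, n)
          if _, err := io.ReadFull(r, buf); err != nil {
              return "", err
          }
          return string(buf), nil
      }
      ```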
  18. 21 Jun, 2024 1 commit
    • Enable concurrency by default · 17b7186c
      Daniel Hiltgen authored
      This adjusts our default settings to enable multiple loaded models and
      parallel requests to a single model.  Users can still override these
      via the same environment variables as before.  Parallelism has a direct
      impact on num_ctx, which in turn can matter a lot on small-VRAM GPUs,
      so this change also refines the algorithm: when parallel is not
      explicitly set by the user, we try to find a reasonable default that
      fits the model on their GPU(s) (see the sketch below).  As before,
      multiple models will only load concurrently if they fully fit in VRAM.
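      A hedged sketch of the default-finding loop; the starting value of 4
      and the fits callback are assumptions standing in for the scheduler's
      real memory estimate:

      ```go
      package sched

      // pickParallel chooses a default number of parallel requests when the
      // user has not set OLLAMA_NUM_PARALLEL. Each parallel slot multiplies
      // the context allocation, so on small-VRAM GPUs we step down until the
      // model's estimated footprint fits.
      func pickParallel(userSet bool, userValue int, fits func(parallel int) bool) int {
          if userSet {
              return userValue // an explicit setting always wins
          }
          for parallel := 4; parallel > 1; parallel-- {
              if fits(parallel) {
                  return parallel
              }
          }
          return 1
      }
      ```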
  19. 20 Jun, 2024 1 commit
  20. 18 Jun, 2024 1 commit
    • Tighten up memory prediction logging · 7784ca33
      Daniel Hiltgen authored
      Prior to this change, we logged the memory prediction multiple times as
      the scheduler iterated to find a suitable configuration, which could be
      confusing since only the last log before the server started was
      actually valid.  We now log once, just before starting the server, with
      the final configuration.  The log also reports which library is in use
      instead of always saying "offloading to gpu", even when running on the
      CPU.
  21. 17 Jun, 2024 2 commits
    • Adjust mmap logic for CUDA on Windows for faster model load · 17179679
      Daniel Hiltgen authored
      On Windows, recent llama.cpp changes make mmap slower in most cases, so
      default it to off.  This also implements a tri-state for use_mmap so we
      can distinguish a user-provided value of true/false from an unspecified
      one (see the sketch below).
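      A minimal sketch of the tri-state idea, assuming a platform default
      chosen elsewhere (illustrative; not necessarily ollama's exact type):

      ```go
      package api

      // TriState distinguishes "user said true", "user said false", and
      // "unspecified", so the runtime can apply a platform default (for
      // example, mmap off for CUDA on Windows) only when the user did not
      // choose.
      type TriState int

      const (
          TriStateUndefined TriState = iota // user did not set use_mmap
          TriStateFalse                     // user explicitly disabled
          TriStateTrue                      // user explicitly enabled
      )

      func useMmap(v TriState, platformDefault bool) bool {
          switch v {
          case TriStateTrue:
              return true
          case TriStateFalse:
              return false
          default:
              return platformDefault
          }
      }
      ```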
    • Move libraries out of the user's PATH · b2799f11
      Daniel Hiltgen authored
      We update the PATH on Windows so the CLI is reachable, but this has an
      unintended side effect: other apps that happen to load our bundled DLLs
      can be terminated when we upgrade.
  22. 14 Jun, 2024 4 commits
  23. 09 Jun, 2024 1 commit
  24. 04 Jun, 2024 2 commits
  25. 01 Jun, 2024 1 commit
  26. 30 May, 2024 1 commit
  27. 29 May, 2024 1 commit
  28. 28 May, 2024 2 commits