1. 07 Jul, 2024 3 commits
  2. 06 Jul, 2024 8 commits
  3. 05 Jul, 2024 6 commits
  4. 04 Jul, 2024 1 commit
  5. 03 Jul, 2024 3 commits
  6. 01 Jul, 2024 2 commits
  7. 29 Jun, 2024 1 commit
  8. 27 Jun, 2024 2 commits
  9. 25 Jun, 2024 1 commit
    • llm: speed up gguf decoding by a lot (#5246) · cb42e607
      Blake Mizerany authored
      Previously, two costly behaviors made loading GGUF files and
      decoding their metadata and tensor information very slow:

        * Too many allocations when decoding strings
        * Hitting disk for every read of each key and value, resulting
          in an excessive number of syscalls and a lot of disk I/O
      
      The show API now takes 33ms, down from more than 800ms, for llama3
      on an M3 MacBook Pro.
      
      This commit also makes it possible to skip collecting large arrays
      of values when decoding GGUFs. When such keys are encountered,
      their values are set to null and encoded as such in JSON.

      Also, this fixes a broken test that was not encoding valid GGUF.
      (A sketch of the decoding fixes follows below.)
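      A minimal sketch of the two decoding fixes, in Go (ollama's
      language): readString, the scratch buffer, the 1 MiB buffer size,
      and the model.gguf path are illustrative assumptions, not the
      commit's actual code. bufio batches the many tiny key/value reads
      into a few large syscalls, and reusing one scratch buffer avoids a
      fresh allocation for every decoded string.

      package main

      import (
          "bufio"
          "encoding/binary"
          "fmt"
          "io"
          "os"
      )

      // readString decodes one length-prefixed GGUF string. Reusing the
      // caller's scratch buffer avoids allocating a new one per string.
      func readString(r io.Reader, scratch []byte) (string, []byte, error) {
          var n uint64
          if err := binary.Read(r, binary.LittleEndian, &n); err != nil {
              return "", scratch, err
          }
          if uint64(cap(scratch)) < n {
              scratch = make([]byte, n)
          }
          buf := scratch[:n]
          if _, err := io.ReadFull(r, buf); err != nil {
              return "", scratch, err
          }
          return string(buf), scratch, nil
      }

      func main() {
          f, err := os.Open("model.gguf") // hypothetical path
          if err != nil {
              panic(err)
          }
          defer f.Close()

          // Buffering turns one syscall per key/value read into a
          // handful of large reads.
          br := bufio.NewReaderSize(f, 1<<20)

          // A real decoder reads the fixed GGUF header (magic, version,
          // counts) first; only the string path is shown here.
          var scratch []byte
          key, scratch, err := readString(br, scratch)
          if err != nil {
              panic(err)
          }
          fmt.Println("decoded key:", key, "scratch cap:", cap(scratch))
      }

      The remaining string(buf) copy is unavoidable for an immutable Go
      string; the win is cutting the per-read syscalls and the extra
      allocations the commit describes.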
  10. 21 Jun, 2024 1 commit
    • Enable concurrency by default · 17b7186c
      Daniel Hiltgen authored
      This adjusts our default settings to enable loading multiple
      models and serving parallel requests to a single model. Users can
      still override these defaults with the same environment variables
      as before. Parallelism has a direct impact on num_ctx, which in
      turn can have a significant impact on small-VRAM GPUs, so this
      change also refines the algorithm: when parallelism is not
      explicitly set by the user, we try to find a reasonable default
      that fits the model on their GPU(s). As before, multiple models
      will only load concurrently if they fully fit in VRAM. (A sketch
      of the fitting logic follows below.)
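      A minimal sketch of the fitting idea, with a hypothetical cost
      model: pickParallel, the candidate range of 4 down to 1, and every
      size below are assumptions for illustration, not ollama's actual
      scheduler. The point is that KV-cache memory grows with
      num_ctx * parallel, so an unset parallel should shrink until the
      model fits in free VRAM.

      package main

      import "fmt"

      // pickParallel lowers parallelism until the weights plus a KV
      // cache proportional to num_ctx*parallel fit in free VRAM.
      func pickParallel(freeVRAM, modelSize, kvBytesPerCtx uint64, numCtx int) int {
          for parallel := 4; parallel > 1; parallel-- {
              need := modelSize + kvBytesPerCtx*uint64(numCtx*parallel)
              if need <= freeVRAM {
                  return parallel
              }
          }
          return 1 // fall back to one request at a time
      }

      func main() {
          const GiB = 1 << 30
          // 8 GiB card, 5 GiB of weights, 128 KiB of KV cache per unit
          // of num_ctx (all made-up numbers).
          p := pickParallel(8*GiB, 5*GiB, 128*1024, 8192)
          fmt.Println("chosen parallel:", p) // prints 3 here
      }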
  11. 20 Jun, 2024 2 commits
  12. 19 Jun, 2024 1 commit
  13. 18 Jun, 2024 3 commits
    • deepseek v2 graph · e873841c
      Michael Yang authored
    • Handle models with divergent layer sizes · 359b15a5
      Daniel Hiltgen authored
      The recent refactoring of the memory prediction assumed all layers
      were the same size, but for some models (like deepseek-coder-v2)
      this is not the case, so our predictions were significantly off.
      (A sketch of the corrected estimate follows below.)
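      A minimal sketch of the corrected estimate, with made-up layer
      sizes: sum each layer's actual size instead of multiplying the
      layer count by one representative layer's size, which is what the
      old assumption amounted to.

      package main

      import "fmt"

      // estimateVRAM sums per-layer sizes, so models whose layers
      // differ in size (like deepseek-coder-v2) are estimated correctly.
      func estimateVRAM(layerSizes []uint64) uint64 {
          var total uint64
          for _, s := range layerSizes {
              total += s
          }
          return total
      }

      func main() {
          layers := []uint64{600 << 20, 600 << 20, 900 << 20} // bytes
          uniform := uint64(len(layers)) * layers[0]          // old assumption
          fmt.Println("assumed:", uniform, "actual:", estimateVRAM(layers))
      }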
    • Tighten up memory prediction logging · 7784ca33
      Daniel Hiltgen authored
      Prior to this change, we logged the memory prediction multiple
      times as the scheduler iterated to find a suitable configuration,
      which was confusing since only the last log before the server
      started was actually valid. We now log once, just before starting
      the server, with the final configuration. The log also reports
      which library is in use instead of always saying "offloading to
      gpu", even when running on the CPU. (A sketch follows below.)
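      A minimal sketch of the logging change, using Go's log/slog; the
      config type, the candidate loop, and the field names are
      assumptions, not ollama's scheduler code. The iteration stays
      silent and only the final configuration is logged, naming the
      library actually in use.

      package main

      import "log/slog"

      // config stands in for one candidate memory layout.
      type config struct {
          library   string // e.g. "cuda", "rocm", or "cpu"
          gpuLayers int
      }

      func main() {
          candidates := []config{{"cuda", 33}, {"cuda", 28}, {"cpu", 0}}

          // Iterate without logging; keep the configuration we start with.
          var final config
          for _, c := range candidates {
              final = c // the real scheduler would test whether c fits
          }

          // Log once, just before starting the server, and report the
          // library instead of unconditionally saying "offloading to gpu".
          slog.Info("memory prediction", "library", final.library,
              "gpu_layers", final.gpuLayers)
      }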
  14. 17 Jun, 2024 5 commits
  15. 15 Jun, 2024 1 commit