1. 25 Jun, 2024 1 commit
    • llm: speed up gguf decoding by a lot (#5246) · cb42e607
      Blake Mizerany authored
      Previously, some costly things were causing the loading of GGUF files
      and their metadata and tensor information to be VERY slow:
      
  * Too many allocations when decoding strings
  * Hitting disk for every read of each key and value, resulting in an
    excessive number of syscalls and disk I/O.
      
      The show API is now down to 33ms from 800ms+ for llama3 on a MacBook Pro
      M3.
      
      This commit also allows skipping the collection of large arrays of
      values when decoding GGUFs, if desired. When such keys are encountered,
      their values are set to null and encoded as such in JSON.
      
      Also, this fixes a broken test that did not encode valid GGUF.
  2. 20 Jun, 2024 1 commit
  3. 18 Jun, 2024 1 commit
    • Tighten up memory prediction logging · 7784ca33
      Daniel Hiltgen authored
      Prior to this change, we logged the memory prediction multiple times
      as the scheduler iterated to find a suitable configuration, which could be
      confusing since only the last log before the server starts is actually valid.
      We now log once, just before starting the server, with the final configuration.
      It also reports which library is used instead of always saying "offloading to gpu"
      when running on CPU.
  4. 17 Jun, 2024 2 commits
    • Adjust mmap logic for cuda windows for faster model load · 17179679
      Daniel Hiltgen authored
      On Windows, recent llama.cpp changes make mmap slower in most
      cases, so it now defaults to off. This also implements a tri-state for
      use_mmap so we can distinguish a user-provided value of true or false
      from an unspecified one.
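The tri-state idea above can be sketched in Go as a small struct that separates "unset" from an explicit true/false, since a plain bool cannot express "the user said nothing". The type and function names here are hypothetical, not the actual ollama API:

```go
package main

import "fmt"

// TriState models a setting that distinguishes "unset" from an explicit
// true or false. The zero value means the user did not specify anything.
type TriState struct {
	set bool // was the value explicitly provided?
	val bool // the provided value, meaningful only when set is true
}

func (t TriState) String() string {
	if !t.set {
		return "unset"
	}
	return fmt.Sprintf("%v", t.val)
}

// useMmap decides the effective mmap setting: an explicit user choice
// always wins; otherwise fall back to a platform default (off for
// Windows with CUDA in this sketch, on elsewhere).
func useMmap(user TriState, windowsCuda bool) bool {
	if user.set {
		return user.val
	}
	return !windowsCuda
}

func main() {
	fmt.Println(useMmap(TriState{}, true))                     // prints false: default off on Windows+CUDA
	fmt.Println(useMmap(TriState{set: true, val: true}, true)) // prints true: explicit choice wins
}
```

In Go codebases a `*bool` field is a common alternative encoding of the same tri-state, with `nil` standing for "unspecified".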
    • Move libraries out of users path · b2799f11
      Daniel Hiltgen authored
      We update the PATH on Windows to make the CLI available, but this has
      an unintended side effect: other apps that use our bundled
      DLLs can be terminated when we upgrade.
  5. 14 Jun, 2024 4 commits
  6. 09 Jun, 2024 1 commit
  7. 04 Jun, 2024 2 commits
  8. 01 Jun, 2024 1 commit
  9. 30 May, 2024 1 commit
  10. 29 May, 2024 1 commit
  11. 28 May, 2024 2 commits
  12. 25 May, 2024 1 commit
  13. 24 May, 2024 1 commit
  14. 23 May, 2024 2 commits
  15. 20 May, 2024 1 commit
    • feat: add support for flash_attn (#4120) · e15307fd
      Sam authored
      * feat: enable flash attention if supported
      
      * feat: add flash_attn support
  16. 15 May, 2024 2 commits
  17. 14 May, 2024 1 commit
  18. 11 May, 2024 1 commit
  19. 10 May, 2024 2 commits
  20. 09 May, 2024 5 commits
  21. 08 May, 2024 1 commit
  22. 07 May, 2024 1 commit
  23. 06 May, 2024 3 commits
  24. 05 May, 2024 1 commit
    • Centralize server config handling · f56aa200
      Daniel Hiltgen authored
      This moves all the env var reading into one central module
      and logs the loaded config once at startup, which should
      help when troubleshooting user server logs.
  25. 01 May, 2024 1 commit