1. 11 Jun, 2024 (1 commit)
  2. 09 Jun, 2024 (1 commit)
  3. 01 Jun, 2024 (1 commit)
  4. 29 May, 2024 (3 commits)
  5. 23 May, 2024 (2 commits)
  6. 20 May, 2024 (1 commit)
    • feat: add support for flash_attn (#4120) · e15307fd
      Sam authored
      * feat: enable flash attention if supported
      * feat: enable flash attention if supported
      * feat: enable flash attention if supported
      * feat: add flash_attn support
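A minimal sketch of what "enable flash attention if supported" could look like, not the actual change in #4120: the llama.cpp runner gets a flash-attention flag only when an opt-in/capability check passes. The OLLAMA_FLASH_ATTENTION environment variable, the flashAttnSupported helper, and the --flash-attn flag name are assumptions for illustration.

```go
package main

import (
	"fmt"
	"os"
)

// flashAttnSupported stands in for whatever capability check the server
// performs (backend, model architecture, build options); here it is just an
// assumed opt-in environment variable.
func flashAttnSupported() bool {
	return os.Getenv("OLLAMA_FLASH_ATTENTION") == "1"
}

// runnerArgs builds the argument list for the llama.cpp runner subprocess,
// appending the (assumed) flash-attention flag only when supported.
func runnerArgs(modelPath string) []string {
	args := []string{"--model", modelPath}
	if flashAttnSupported() {
		args = append(args, "--flash-attn")
	}
	return args
}

func main() {
	fmt.Println(runnerArgs("/path/to/model.gguf"))
}
```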
  7. 09 May, 2024 (1 commit)
  8. 04 May, 2024 (1 commit)
  9. 30 Apr, 2024 (3 commits)
  10. 17 Apr, 2024 (1 commit)
  11. 16 Apr, 2024 (1 commit)
  12. 01 Apr, 2024 (2 commits)
    • Apply 01-cache.diff · 0a0e9f3e
      Daniel Hiltgen authored
    • Switch back to subprocessing for llama.cpp · 58d95cc9
      Daniel Hiltgen authored
      This should resolve a number of memory leak and stability defects by
      allowing us to isolate llama.cpp in a separate process, shut it down
      when idle, and gracefully restart it if it has problems. This also
      serves as a first step toward running multiple copies to support
      multiple models concurrently.
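A minimal sketch of the subprocess approach this commit describes, assuming a hypothetical llama-server binary and a simple idle/restart policy (this is not the actual ollama implementation): the llama.cpp server runs as a child process that is restarted if it exits unexpectedly and killed once it has been idle for a while, so leaks and crashes stay isolated from the parent process.

```go
package main

import (
	"log"
	"os/exec"
	"sync"
	"time"
)

type runner struct {
	mu       sync.Mutex
	cmd      *exec.Cmd
	stopping bool
	lastUsed time.Time
}

// start launches the child process and watches for unexpected exits.
func (r *runner) start() error {
	cmd := exec.Command("llama-server", "--port", "8081") // assumed binary and flags
	if err := cmd.Start(); err != nil {
		return err
	}
	r.mu.Lock()
	r.cmd = cmd
	r.stopping = false
	r.lastUsed = time.Now()
	r.mu.Unlock()

	go func() {
		waitErr := cmd.Wait()
		r.mu.Lock()
		intentional := r.stopping
		r.mu.Unlock()
		if intentional {
			return // idle shutdown requested by reapIfIdle, do not restart
		}
		// Crash or unexpected exit: restart with a clean process; the parent
		// keeps running, which is what isolates leaks and instability.
		log.Printf("runner exited: %v; restarting", waitErr)
		if err := r.start(); err != nil {
			log.Printf("restart failed: %v", err)
		}
	}()
	return nil
}

// reapIfIdle kills the subprocess once it has been unused for idleTimeout,
// freeing its memory until the next request needs a runner again.
func (r *runner) reapIfIdle(idleTimeout time.Duration) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.cmd != nil && time.Since(r.lastUsed) > idleTimeout {
		r.stopping = true
		_ = r.cmd.Process.Kill()
		r.cmd = nil
	}
}

func main() {
	r := &runner{}
	if err := r.start(); err != nil {
		log.Fatal(err)
	}
	for range time.Tick(30 * time.Second) {
		r.reapIfIdle(5 * time.Minute)
	}
}
```

Under this model, supporting multiple models concurrently, as the commit message anticipates, amounts to keeping one such runner per loaded model.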
  13. 26 Mar, 2024 (1 commit)
  14. 23 Mar, 2024 (1 commit)
  15. 16 Mar, 2024 (1 commit)
  16. 12 Mar, 2024 (2 commits)