1. 15 Nov, 2023 2 commits
  2. 10 Nov, 2023 1 commit
  3. 08 Nov, 2023 1 commit
  4. 03 Nov, 2023 2 commits
  5. 30 Oct, 2023 1 commit
  6. 28 Oct, 2023 1 commit
  7. 21 Oct, 2023 1 commit
  8. 19 Oct, 2023 7 commits
  9. 18 Oct, 2023 1 commit
  10. 16 Oct, 2023 2 commits
  11. 12 Oct, 2023 1 commit
  12. 11 Oct, 2023 1 commit
  13. 06 Oct, 2023 1 commit
  14. 30 Sep, 2023 1 commit
  15. 29 Sep, 2023 1 commit
  16. 27 Sep, 2023 1 commit
  17. 23 Sep, 2023 1 commit
  18. 22 Sep, 2023 1 commit
  19. 21 Sep, 2023 4 commits
  20. 20 Sep, 2023 1 commit
  21. 18 Sep, 2023 1 commit
  22. 12 Sep, 2023 1 commit
  23. 11 Sep, 2023 1 commit
  24. 06 Sep, 2023 1 commit
  25. 03 Sep, 2023 1 commit
  26. 31 Aug, 2023 2 commits
  27. 30 Aug, 2023 1 commit
    • subprocess llama.cpp server (#401) · 42998d79
      Bruce MacDonald authored
      * remove c code
      * pack llama.cpp
      * use request context for llama_cpp
      * let llama_cpp decide the number of threads to use
      * stop llama runner when app stops
      * remove sample count and duration metrics
      * use go generate to get libraries
      * tmp dir for running llm
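
The bullet points above outline the shape of the change: the llama.cpp server runs as a child process, its lifetime is tied to a context so it stops when the app (or request) stops, it is left to pick its own thread count, and it works out of a temporary directory. Below is a minimal Go sketch of that pattern, not the actual code from #401; names such as `runLlamaServer`, `runnerPath`, and the flag spellings are assumptions.

```go
// Minimal sketch (not the actual Ollama implementation) of running the
// llama.cpp server as a subprocess whose lifetime follows a context.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"os/signal"
	"path/filepath"
)

func runLlamaServer(ctx context.Context, runnerPath, modelPath string, port int) error {
	// Temporary working directory for the runner, removed when it exits
	// ("tmp dir for running llm").
	workDir, err := os.MkdirTemp("", "llama-runner")
	if err != nil {
		return err
	}
	defer os.RemoveAll(workDir)

	// exec.CommandContext kills the child process when ctx is cancelled,
	// which is one way to express "stop llama runner when app stops".
	// No thread-count flag is passed: llama.cpp decides for itself
	// ("let llama_cpp decide the number of threads to use").
	cmd := exec.CommandContext(ctx, runnerPath,
		"--model", modelPath,
		"--port", fmt.Sprintf("%d", port),
	)
	cmd.Dir = workDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Start(); err != nil {
		return err
	}
	return cmd.Wait()
}

func main() {
	// Cancel the context on Ctrl-C so the subprocess stops with the app.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	runner := filepath.Join("bin", "llama-server") // assumed binary location
	if err := runLlamaServer(ctx, runner, "model.gguf", 8080); err != nil {
		fmt.Fprintln(os.Stderr, "runner exited:", err)
	}
}
```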
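The "use go generate to get libraries" item refers to producing the native llama.cpp libraries via `go generate` rather than keeping C code in the Go tree ("remove c code", "pack llama.cpp"). A hypothetical sketch of such a directive file follows; the package name, submodule layout, and cmake invocations are assumptions, not the repository's actual generate scripts.

```go
// Hypothetical generate file: running `go generate ./...` executes these
// commands to fetch and build the llama.cpp libraries before `go build`.
package llm

//go:generate git submodule update --init --recursive
//go:generate cmake -S llama.cpp -B llama.cpp/build
//go:generate cmake --build llama.cpp/build --config Release
```

With a setup along these lines, `go generate ./...` is run before `go build`, so the native libraries are produced on demand and the C sources stay out of the Go build itself.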