  1. 24 Oct, 2023 5 commits
  2. 19 Oct, 2023 1 commit
  3. 17 Oct, 2023 1 commit
  4. 16 Oct, 2023 1 commit
  5. 15 Oct, 2023 5 commits
  6. 14 Oct, 2023 1 commit
  7. 12 Oct, 2023 2 commits
  8. 11 Oct, 2023 1 commit
  9. 09 Oct, 2023 1 commit
  10. 02 Oct, 2023 2 commits
  11. 01 Oct, 2023 1 commit
  12. 30 Sep, 2023 1 commit
  13. 28 Sep, 2023 1 commit
  14. 27 Sep, 2023 3 commits
  15. 25 Sep, 2023 5 commits
  16. 20 Sep, 2023 4 commits
  17. 14 Sep, 2023 2 commits
  18. 12 Sep, 2023 1 commit
  19. 06 Sep, 2023 1 commit
  20. 30 Aug, 2023 1 commit
    • subprocess llama.cpp server (#401) · 42998d79
      Bruce MacDonald authored
      * remove c code
      * pack llama.cpp
      * use request context for llama_cpp
      * let llama_cpp decide the number of threads to use
      * stop llama runner when app stops
      * remove sample count and duration metrics
      * use go generate to get libraries
      * tmp dir for running llm
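      The bullet points above summarize the change: the in-process C bindings are removed and the llama.cpp server is instead launched as a child process, tied to a context so it is killed when the app (or a request) is cancelled, and run out of a temporary directory. Below is a minimal Go sketch of that pattern; the binary path, flags, and helper names are illustrative assumptions, not the actual code from this commit.

      ```go
      package main

      import (
      	"context"
      	"fmt"
      	"os"
      	"os/exec"
      	"path/filepath"
      )

      // startRunner launches a llama.cpp server binary as a subprocess from a
      // temporary directory. Cancelling ctx kills the subprocess, so tying ctx
      // to the application's lifetime stops the runner when the app stops.
      func startRunner(ctx context.Context, serverBin, modelPath string) (*exec.Cmd, string, error) {
      	// tmp dir for running the llm; the caller is responsible for cleanup
      	dir, err := os.MkdirTemp("", "llama-runner-")
      	if err != nil {
      		return nil, "", err
      	}

      	// Flags here are placeholders for whatever the server binary expects.
      	cmd := exec.CommandContext(ctx, serverBin, "--model", modelPath)
      	cmd.Dir = dir
      	cmd.Stdout = os.Stdout
      	cmd.Stderr = os.Stderr

      	if err := cmd.Start(); err != nil {
      		os.RemoveAll(dir)
      		return nil, "", err
      	}
      	return cmd, dir, nil
      }

      func main() {
      	ctx, cancel := context.WithCancel(context.Background())
      	defer cancel() // app shutdown cancels ctx, which kills the runner

      	cmd, dir, err := startRunner(ctx, filepath.Join("bin", "llama-server"), "model.gguf")
      	if err != nil {
      		fmt.Fprintln(os.Stderr, "start runner:", err)
      		os.Exit(1)
      	}
      	defer os.RemoveAll(dir)

      	// ... issue requests to the runner here ...

      	cancel()
      	cmd.Wait() // reap the subprocess; an error is expected after the kill
      }
      ```

      exec.CommandContext is the standard-library way to get the "stop llama runner when app stops" behavior: when the context is cancelled, the process is killed. The same pattern, with a per-request context instead of an app-lifetime one, covers the "use request context" item.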
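      The "use go generate to get libraries" item refers to Go's generate mechanism: a `//go:generate` comment names a command that `go generate ./...` runs on demand, so native libraries can be fetched and built at development time instead of vendoring C sources. A hypothetical directive (the script name is an assumption, not the commit's actual file):

      ```go
      package llm

      // Running `go generate ./...` executes the directive below. The idea is
      // that the script clones and builds llama.cpp, producing the libraries
      // this package links against.
      //go:generate sh ./generate.sh
      ```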