  1. 09 Oct, 2023 1 commit
  2. 02 Oct, 2023 2 commits
  3. 01 Oct, 2023 1 commit
  4. 30 Sep, 2023 1 commit
  5. 28 Sep, 2023 1 commit
  6. 27 Sep, 2023 3 commits
  7. 25 Sep, 2023 5 commits
  8. 20 Sep, 2023 4 commits
  9. 14 Sep, 2023 2 commits
  10. 12 Sep, 2023 1 commit
  11. 06 Sep, 2023 1 commit
  12. 30 Aug, 2023 2 commits
    • subprocess llama.cpp server (#401) · 42998d79
      Bruce MacDonald authored
      * remove c code
      * pack llama.cpp
      * use request context for llama_cpp
      * let llama_cpp decide the number of threads to use
      * stop llama runner when app stops
      * remove sample count and duration metrics
      * use go generate to get libraries
      * tmp dir for running llm
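      This commit replaces the embedded C bindings with a llama.cpp server run as a child process whose lifetime is tied to the application's. Below is a minimal Go sketch of that pattern, using a signal-aware context so the subprocess stops when the app stops; the binary name and flags are hypothetical stand-ins, not the actual runner invocation (which the project assembles via go generate):

      ```go
      package main

      import (
      	"context"
      	"log"
      	"os/exec"
      	"os/signal"
      	"syscall"
      )

      func main() {
      	// Derive a context that is cancelled on SIGINT/SIGTERM so the
      	// llama.cpp server subprocess is stopped when the app stops.
      	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
      	defer stop()

      	// Hypothetical binary name and flags for illustration only.
      	cmd := exec.CommandContext(ctx, "./llama-server", "--port", "8080")
      	if err := cmd.Start(); err != nil {
      		log.Fatal(err)
      	}
      	// Wait returns when the server exits or the context is cancelled;
      	// CommandContext kills the process on cancellation.
      	if err := cmd.Wait(); err != nil {
      		log.Println("runner exited:", err)
      	}
      }
      ```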
    • treat stop as stop sequences, not exact tokens (#442) · f4432e1d
      Quinn Slack authored
      The `stop` option to the generate API is a list of sequences that should cause generation to stop. Although these are commonly called "stop tokens", they do not necessarily correspond to LLM tokens (per the LLM's tokenizer). For example, if the caller sends a generate request with `"stop":["\n"]`, then generation should stop on any token containing `\n` (and trim `\n` from the output), not just if the token exactly matches `\n`. If `stop` were interpreted strictly as LLM tokens, then it would require callers of the generate API to know the LLM's tokenizer and enumerate many tokens in the `stop` list.
      
      Fixes https://github.com/jmorganca/ollama/issues/295.
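      A minimal Go sketch of the behavior described above: scanning accumulated output for any stop sequence as a substring, rather than comparing whole tokens, and trimming the match from the result. The function name and signature are illustrative, not the actual implementation:

      ```go
      package main

      import (
      	"fmt"
      	"strings"
      )

      // truncateAtStop checks each stop sequence in order; if one occurs
      // anywhere in text, it returns text cut off just before that match.
      func truncateAtStop(text string, stops []string) (string, bool) {
      	for _, s := range stops {
      		if i := strings.Index(text, s); i >= 0 {
      			return text[:i], true
      		}
      	}
      	return text, false
      }

      func main() {
      	// The emitted text contains "\n" mid-token rather than as an
      	// exact token, yet generation still stops and "\n" is trimmed.
      	out, stopped := truncateAtStop("first line\nsecond", []string{"\n"})
      	fmt.Printf("%q stopped=%v\n", out, stopped) // "first line" stopped=true
      }
      ```

      Matching on substrings means a caller can pass `"stop":["\n"]` without knowing how the model's tokenizer splits newlines across tokens.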
  13. 27 Aug, 2023 1 commit
  14. 25 Aug, 2023 1 commit
  15. 15 Aug, 2023 1 commit
  16. 14 Aug, 2023 5 commits
  17. 11 Aug, 2023 6 commits
  18. 10 Aug, 2023 2 commits