  1. 22 Jul, 2024 1 commit
  2. 03 May, 2024 1 commit
  3. 15 Apr, 2024 1 commit
  4. 26 Mar, 2024 1 commit
  5. 25 Mar, 2024 1 commit
  6. 12 Mar, 2024 1 commit
  7. 12 Feb, 2024 1 commit
  8. 09 Feb, 2024 1 commit
  9. 26 Jan, 2024 2 commits
  10. 08 Jan, 2024 1 commit
  11. 22 Dec, 2023 2 commits
  12. 19 Dec, 2023 1 commit
  13. 12 Dec, 2023 1 commit
  14. 11 Dec, 2023 1 commit
  15. 20 Nov, 2023 1 commit
  16. 09 Nov, 2023 1 commit
  17. 16 Oct, 2023 1 commit
  18. 14 Oct, 2023 1 commit
  19. 12 Oct, 2023 1 commit
  20. 02 Oct, 2023 2 commits
  21. 01 Oct, 2023 1 commit
  22. 28 Sep, 2023 1 commit
  23. 27 Sep, 2023 2 commits
  24. 30 Aug, 2023 1 commit
    • treat stop as stop sequences, not exact tokens (#442) · f4432e1d
      Quinn Slack authored
      The `stop` option to the generate API is a list of sequences that should cause generation to stop. Although these are commonly called "stop tokens", they do not necessarily correspond to LLM tokens (per the LLM's tokenizer). For example, if the caller sends a generate request with `"stop":["\n"]`, then generation should stop on any token containing `\n` (and trim `\n` from the output), not just if the token exactly matches `\n`. If `stop` were interpreted strictly as LLM tokens, then it would require callers of the generate API to know the LLM's tokenizer and enumerate many tokens in the `stop` list.
      
      Fixes https://github.com/jmorganca/ollama/issues/295.
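To illustrate the behaviour described in the commit above, here is a minimal Go sketch of sequence-based stop matching. It is not the code from the commit; the `findStop` helper and its signature are hypothetical. It only shows the idea of scanning the accumulated output for any stop sequence and trimming it, rather than comparing individual LLM tokens against the `stop` list.

```go
package main

import (
	"fmt"
	"strings"
)

// findStop reports whether any stop sequence appears in the generated
// text so far. If one does, it returns the text truncated just before
// the earliest match, so the stop sequence itself is trimmed from the
// output. This is an illustrative sketch, not Ollama's implementation.
func findStop(generated string, stop []string) (string, bool) {
	cut := -1
	for _, s := range stop {
		if s == "" {
			continue
		}
		if i := strings.Index(generated, s); i >= 0 && (cut == -1 || i < cut) {
			cut = i
		}
	}
	if cut == -1 {
		return generated, false
	}
	return generated[:cut], true
}

func main() {
	// With stop: ["\n"], a token such as "world\nmore" still triggers the
	// stop even though the token is not exactly "\n".
	out, stopped := findStop("hello world\nmore text", []string{"\n"})
	fmt.Printf("%q stopped=%v\n", out, stopped) // "hello world" stopped=true
}
```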
  25. 15 Aug, 2023 1 commit
  26. 14 Aug, 2023 1 commit
  27. 11 Aug, 2023 1 commit
  28. 10 Aug, 2023 2 commits
  29. 09 Aug, 2023 2 commits
  30. 08 Aug, 2023 2 commits
  31. 03 Aug, 2023 1 commit
  32. 28 Jul, 2023 1 commit
  33. 27 Jul, 2023 1 commit