1. 11 Nov, 2024 1 commit
  2. 25 Sep, 2024 1 commit
  3. 10 Sep, 2024 1 commit
  4. 01 Sep, 2024 1 commit
  5. 27 Aug, 2024 1 commit
  6. 27 Jul, 2024 1 commit
  7. 22 Jul, 2024 1 commit
  8. 03 May, 2024 1 commit
  9. 15 Apr, 2024 1 commit
  10. 26 Mar, 2024 1 commit
  11. 25 Mar, 2024 1 commit
  12. 12 Mar, 2024 1 commit
  13. 12 Feb, 2024 1 commit
  14. 09 Feb, 2024 1 commit
  15. 26 Jan, 2024 2 commits
  16. 08 Jan, 2024 1 commit
  17. 22 Dec, 2023 2 commits
  18. 19 Dec, 2023 1 commit
  19. 12 Dec, 2023 1 commit
  20. 11 Dec, 2023 1 commit
  21. 20 Nov, 2023 1 commit
  22. 09 Nov, 2023 1 commit
  23. 16 Oct, 2023 1 commit
  24. 14 Oct, 2023 1 commit
  25. 12 Oct, 2023 1 commit
  26. 02 Oct, 2023 2 commits
  27. 01 Oct, 2023 1 commit
  28. 28 Sep, 2023 1 commit
  29. 27 Sep, 2023 2 commits
  30. 30 Aug, 2023 1 commit
      treat stop as stop sequences, not exact tokens (#442) · f4432e1d
      Quinn Slack authored
      The `stop` option to the generate API is a list of sequences that should cause generation to stop. Although these are commonly called "stop tokens", they do not necessarily correspond to LLM tokens (per the LLM's tokenizer). For example, if the caller sends a generate request with `"stop":["\n"]`, then generation should stop on any token containing `\n` (and trim `\n` from the output), not just if the token exactly matches `\n`. If `stop` were interpreted strictly as LLM tokens, then it would require callers of the generate API to know the LLM's tokenizer and enumerate many tokens in the `stop` list.
      
      Fixes https://github.com/jmorganca/ollama/issues/295.
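The substring-matching behavior the commit describes can be sketched as follows. This is a minimal illustration of the idea, not Ollama's actual implementation; the function name and streaming interface are hypothetical. The key point is that each stop entry is matched as a sequence anywhere in the accumulated output, and the output is trimmed at the match, rather than comparing individual tokens for exact equality.

```python
def generate_with_stop(token_stream, stop):
    """Accumulate streamed tokens and halt generation as soon as any
    stop sequence appears anywhere in the output, trimming the stop
    sequence (and everything after it) from the result."""
    out = ""
    for tok in token_stream:
        out += tok
        for seq in stop:
            idx = out.find(seq)  # substring match, not token equality
            if idx != -1:
                return out[:idx]
    return out
```

With `stop=["\n"]`, a token like `"ld\nnext"` still triggers the stop even though it is not exactly `"\n"`, which is why callers do not need to know the model's tokenizer or enumerate every token containing a newline.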
  31. 15 Aug, 2023 1 commit
  32. 14 Aug, 2023 1 commit
  33. 11 Aug, 2023 1 commit
  34. 10 Aug, 2023 2 commits
  35. 09 Aug, 2023 1 commit