- 22 Jul, 2024 1 commit
Michael Yang authored

- 03 May, 2024 1 commit
Dr Nic Williams authored
Update 'llama2' -> 'llama3' in most places
Co-authored-by: Patrick Devine <patrick@infrahq.com>

- 15 Apr, 2024 1 commit
Jeffrey Morgan authored
Remove Modelfile parameters that are decided at runtime

- 26 Mar, 2024 1 commit
Patrick Devine authored

- 25 Mar, 2024 1 commit
Blake Mizerany authored

- 12 Mar, 2024 1 commit
Patrick Devine authored

- 12 Feb, 2024 1 commit
Jeffrey Morgan authored

- 09 Feb, 2024 1 commit
Jeffrey Morgan authored

- 26 Jan, 2024 2 commits
Jeffrey Morgan authored

Jeffrey Morgan authored

- 08 Jan, 2024 1 commit
Bruce MacDonald authored

- 22 Dec, 2023 2 commits
Matt Williams authored
* Clean up documentation. Will probably need to update with PRs for new release.
* Correcting to fit in 0.1.15 changes
* addressing comments
* more api cleanup
* it's llava not llama
* Updated hosting to server and documented all env vars
* remove last of the cli descriptions
* update further per conversation with jeff earlier today
* cleanup the doc readme
* move upgrade to faq
* first change
* updated
* examples in parent
* add example for create model
* update faq
* update create model api
* update the readme in docs
* update a few more things
* numerous review updates to README.md, docs/README.md, docs/api.md, docs/faq.md, docs/modelfile.md, and docs/troubleshooting.md
Signed-off-by: Matt Williams <m@technovangelist.com>
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>

Daniel Hiltgen authored

- 19 Dec, 2023 1 commit
Bruce MacDonald authored
- remove ggml runner
- automatically pull gguf models when ggml detected
- tell users to update to gguf in case the automatic pull fails
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>

- 12 Dec, 2023 1 commit
Jeffrey Morgan authored

- 11 Dec, 2023 1 commit
Patrick Devine authored
Co-authored-by: Matt Apperson <mattapperson@Matts-MacBook-Pro.local>

- 20 Nov, 2023 1 commit
James Braza authored
* Documented viewing Modelfiles in ollama.ai/library
* Moved Modelfile in ollama.ai down per request

- 09 Nov, 2023 1 commit
Bruce MacDonald authored

- 16 Oct, 2023 1 commit
Bruce MacDonald authored

- 14 Oct, 2023 1 commit
Matt Williams authored
Signed-off-by: Matt Williams <m@technovangelist.com>

- 12 Oct, 2023 1 commit
Matt Williams authored
Signed-off-by: Matt Williams <m@technovangelist.com>

- 02 Oct, 2023 2 commits
James Braza authored

James Braza authored

- 01 Oct, 2023 1 commit
Jiayu Liu authored

- 28 Sep, 2023 1 commit
Aaron Coffey authored

- 27 Sep, 2023 2 commits
Bruce MacDonald authored

James Braza authored

- 30 Aug, 2023 1 commit
Quinn Slack authored
The `stop` option to the generate API is a list of sequences that should cause generation to stop. Although these are commonly called "stop tokens", they do not necessarily correspond to LLM tokens (per the LLM's tokenizer). For example, if the caller sends a generate request with `"stop":["\n"]`, then generation should stop on any token containing `\n` (and trim `\n` from the output), not just if the token exactly matches `\n`. If `stop` were interpreted strictly as LLM tokens, then it would require callers of the generate API to know the LLM's tokenizer and enumerate many tokens in the `stop` list. Fixes https://github.com/jmorganca/ollama/issues/295.
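
As an illustration of the behavior described above, here is a minimal Go sketch of a generate request that uses a stop sequence against a local Ollama server. The model name, prompt, default port, and passing `stop` under `options` are assumptions for the example, not details from the commit itself.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Build a non-streaming generate request that stops on the first newline.
	// "llama3" and the prompt are placeholder values for this sketch.
	payload, err := json.Marshal(map[string]any{
		"model":  "llama3",
		"prompt": "Name one planet:",
		"stream": false,
		"options": map[string]any{
			// Stop sequences, not tokenizer tokens: generation halts as soon as
			// the output contains "\n", and the "\n" is trimmed from the result.
			"stop": []string{"\n"},
		},
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", out.Response) // e.g. "Mercury", with no trailing newline
}
```

Because `stop` is matched against generated text rather than exact tokenizer tokens, the caller does not need to know the model's tokenizer or enumerate every token that happens to contain a newline.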

- 15 Aug, 2023 1 commit
Bruce MacDonald authored

- 14 Aug, 2023 1 commit
Bruce MacDonald authored

- 11 Aug, 2023 1 commit
Arturas Smorgun authored
Co-authored-by: Michael Yang <mxyng@pm.me>

- 10 Aug, 2023 2 commits
Arturas Smorgun authored
It is required to be adjusted for some models; see https://github.com/jmorganca/ollama/issues/320 for more context.

Michael Yang authored

- 09 Aug, 2023 2 commits
Bruce MacDonald authored

Bruce MacDonald authored

- 08 Aug, 2023 2 commits
Bruce MacDonald authored
- defer closing llm on embedding
- do not override licenses
- remove debugging print line
- reformat model file docs

Bruce MacDonald authored

- 03 Aug, 2023 1 commit
Michael Yang authored

- 28 Jul, 2023 1 commit
Bruce MacDonald authored

- 27 Jul, 2023 1 commit
Bruce MacDonald authored