- 17 Nov, 2023 (9 commits)

Jeffrey Morgan authored

Jeffrey Morgan authored

Jeffrey Morgan authored

Michael Yang authored
update faq

Michael Yang authored

Michael Yang authored
update faq

Michael Yang authored
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>

Michael Yang authored

Matt Williams authored
Log Analysis Example

- 16 Nov, 2023 (10 commits)

Jeffrey Morgan authored

Bruce MacDonald authored
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>

Michael Yang authored

Michael Yang authored

yanndegat authored
On Debian 12, package source definitions have moved from /etc/apt/sources.list to /etc/apt/sources.list.d/debian.sources.
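
For reference, a sketch of the deb822 stanza format that /etc/apt/sources.list.d/debian.sources uses on Debian 12 (the stock file varies by install; this excerpt is illustrative, not taken from the ollama repository):

```
Types: deb deb-src
URIs: https://deb.debian.org/debian
Suites: bookworm bookworm-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```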

Matt Williams authored
Add example using JSON format output
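
JSON mode is requested through the `format` field of the REST API. A minimal sketch (not the example this commit adds; it assumes a local server on the default port 11434 and a pulled model named `llama2`):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Build a /api/generate request that constrains output to valid JSON.
	body, err := json.Marshal(map[string]any{
		"model":  "llama2", // assumed model name
		"prompt": `List three colors as a JSON array under the key "colors".`,
		"format": "json", // ask the server to force valid JSON output
		"stream": false,  // return one complete response object
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

With `"stream": false` the server returns a single JSON object whose `response` field contains the model's JSON-formatted text.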

Michael Yang authored

Piero Savastano authored

Dane Madsen authored

Michael Yang authored
create remote models

- 15 Nov, 2023 (18 commits)

Michael Yang authored

Michael authored

Michael authored
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>

Jeffrey Morgan authored

Jeffrey Morgan authored

Michael Yang authored
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>

Michael Yang authored

Michael Yang authored

Michael Yang authored

Michael Yang authored

Michael Yang authored

Michael Yang authored

Michael Yang authored

Michael Yang authored

Michael Yang authored

Matt Williams authored
* faq: does ollama share my prompts
* faq: ollama and openai
* faq: vscode plugins
* faq: send a doc to Ollama
* extra spacing
* Update faq.md
* Update faq.md
Signed-off-by: Matt Williams <m@technovangelist.com>
Co-authored-by: Michael <mchiang0610@users.noreply.github.com>

Michael Yang authored
replace go-humanize with format.HumanBytes
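
For illustration, a decimal (SI) byte formatter of the shape the name `format.HumanBytes` suggests; this is a sketch, not the repository's implementation, which may differ in rounding, units, and signature:

```go
package main

import "fmt"

// HumanBytes renders a byte count with a decimal (SI) unit suffix,
// e.g. 1500000000 -> "1.5 GB".
func HumanBytes(b int64) string {
	const unit = 1000
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(HumanBytes(1_500_000_000)) // prints "1.5 GB"
}
```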

bnodnarb authored

- 14 Nov, 2023 (3 commits)

Michael Yang authored

Jeffrey Morgan authored
Previously, `ollama run` treated a non-terminal stdin (such as `ollama run model < file`) as containing one prompt per line. To run inference on a multi-line prompt, the only non-API workaround was to run `ollama run` interactively and wrap the prompt in `"""..."""`. Now, `ollama run` treats a non-terminal stdin as containing a single prompt. For example, if `myprompt.txt` is a multi-line file, `ollama run model < myprompt.txt` treats the entire contents of `myprompt.txt` as the prompt.
Co-authored-by: Quinn Slack <quinn@slack.org>
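
A minimal sketch of the technique this commit describes (assumed for illustration; not the actual ollama source): detect whether stdin is a terminal, and if it is not, read all of it as one prompt.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	fi, err := os.Stdin.Stat()
	if err != nil {
		panic(err)
	}
	// A character device means an interactive terminal; anything else
	// (a pipe or a redirected file) is treated as one multi-line prompt.
	if fi.Mode()&os.ModeCharDevice == 0 {
		prompt, err := io.ReadAll(os.Stdin)
		if err != nil {
			panic(err)
		}
		fmt.Printf("single prompt (%d bytes):\n%s", len(prompt), prompt)
		return
	}
	fmt.Println("interactive terminal: start the REPL instead")
}
```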

Bruce MacDonald authored
This field is optional and should be under the `Advanced parameters` header.