- 19 Dec, 2023 1 commit
Bruce MacDonald authored
  - remove ggml runner
  - automatically pull gguf models when ggml is detected
  - tell users to update to gguf in case the automatic pull fails
  Co-Authored-By: Jeffrey Morgan <jmorganca@gmail.com>
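The commit above changes how model files are handled: legacy GGML-format models are detected and re-pulled as GGUF. Below is a minimal Go sketch of that kind of file-format check, not the project's actual code; the legacy magic constants are assumptions taken from the llama.cpp file formats, and the automatic re-pull is only indicated by a comment.

    package main

    import (
        "encoding/binary"
        "errors"
        "fmt"
        "os"
    )

    // Magic values read as little-endian uint32s from the first four bytes of a
    // model file. The GGUF value matches the ASCII bytes "GGUF"; the legacy
    // GGML-family values are assumptions based on the llama.cpp file formats.
    const (
        magicGGUF = 0x46554747 // "GGUF"
        magicGGML = 0x67676d6c // legacy unversioned ggml (assumed)
        magicGGMF = 0x67676d66 // legacy ggmf (assumed)
        magicGGJT = 0x67676a74 // legacy ggjt (assumed)
    )

    // detectFormat reports whether the file at path looks like GGUF or a legacy
    // GGML-family file.
    func detectFormat(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var magic uint32
        if err := binary.Read(f, binary.LittleEndian, &magic); err != nil {
            return "", err
        }

        switch magic {
        case magicGGUF:
            return "gguf", nil
        case magicGGML, magicGGMF, magicGGJT:
            return "ggml", nil
        default:
            return "", errors.New("unrecognized model format")
        }
    }

    func main() {
        format, err := detectFormat(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, "error:", err)
            os.Exit(1)
        }
        if format == "ggml" {
            // Per the commit above: try pulling the GGUF version automatically,
            // and if that fails, tell the user to update the model to GGUF.
            fmt.Println("legacy ggml model detected; a gguf re-pull would be attempted here")
            return
        }
        fmt.Println("model is already gguf")
    }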
- 26 Nov, 2023 1 commit
Jeffrey Morgan authored
- 22 Nov, 2023 1 commit
Jeffrey Morgan authored
- 21 Nov, 2023 1 commit
Michael Yang authored
- 20 Nov, 2023 1 commit
Jeffrey Morgan authored
- 17 Nov, 2023 1 commit
Jeffrey Morgan authored
- 24 Oct, 2023 1 commit
Jeffrey Morgan authored
- 23 Oct, 2023 1 commit
Michael Yang authored
- 06 Oct, 2023 2 commits
Jeffrey Morgan authored
Bruce MacDonald authored
- this makes it easier to see that the subprocess is associated with ollama
- 21 Sep, 2023 1 commit
Michael Yang authored
- 20 Sep, 2023 1 commit
Michael Yang authored
- 12 Sep, 2023 1 commit
Bruce MacDonald authored
  * linux gpu support
  * handle multiple gpus
  * add cuda docker image (#488)
  Co-authored-by: Michael Yang <mxyng@pm.me>
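This commit adds Linux GPU support and multi-GPU handling. As a rough sketch only (not the project's actual detection code), one simple way to enumerate NVIDIA GPUs on Linux is to shell out to nvidia-smi:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gpu describes one detected NVIDIA device.
    type gpu struct {
        Index  string
        Name   string
        MemMiB string
    }

    // detectNvidiaGPUs shells out to nvidia-smi and parses one CSV line per GPU.
    // If nvidia-smi is missing or fails, we assume there is no usable NVIDIA GPU.
    func detectNvidiaGPUs() ([]gpu, error) {
        out, err := exec.Command(
            "nvidia-smi",
            "--query-gpu=index,name,memory.total",
            "--format=csv,noheader,nounits",
        ).Output()
        if err != nil {
            return nil, err
        }

        var gpus []gpu
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            fields := strings.Split(line, ",")
            if len(fields) != 3 {
                continue
            }
            gpus = append(gpus, gpu{
                Index:  strings.TrimSpace(fields[0]),
                Name:   strings.TrimSpace(fields[1]),
                MemMiB: strings.TrimSpace(fields[2]),
            })
        }
        return gpus, nil
    }

    func main() {
        gpus, err := detectNvidiaGPUs()
        if err != nil {
            fmt.Println("no NVIDIA GPUs detected, falling back to CPU:", err)
            return
        }
        for _, g := range gpus {
            fmt.Printf("GPU %s: %s (%s MiB)\n", g.Index, g.Name, g.MemMiB)
        }
    }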
- 07 Sep, 2023 1 commit
Bruce MacDonald authored
- 06 Sep, 2023 2 commits
Jeffrey Morgan authored
Jeffrey Morgan authored
- 05 Sep, 2023 2 commits
Bruce MacDonald authored
Jeffrey Morgan authored
- 30 Aug, 2023 1 commit
Bruce MacDonald authored
  * remove c code
  * pack llama.cpp
  * use request context for llama_cpp
  * let llama_cpp decide the number of threads to use
  * stop llama runner when app stops
  * remove sample count and duration metrics
  * use go generate to get libraries
  * tmp dir for running llm
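This commit moves from in-process C bindings to a packed llama.cpp runner that is unpacked into a temp dir, tied to a context, and stopped when the app stops. Below is a minimal Go sketch of that subprocess lifecycle; the runner name ("llama-server"), its flags, and how the binary is obtained are placeholders rather than the project's actual code.

    package main

    import (
        "context"
        "fmt"
        "os"
        "os/exec"
        "os/signal"
        "path/filepath"
        "syscall"
    )

    // runLlamaRunner unpacks a packed runner binary into a temp dir and runs it
    // until it exits or ctx is cancelled. exec.CommandContext kills the
    // subprocess on cancellation, so the runner stops when the app stops or the
    // request is abandoned.
    func runLlamaRunner(ctx context.Context, packedBinary []byte, modelPath string) error {
        // "tmp dir for running llm": unpack into a temporary directory that is
        // removed when the runner exits.
        dir, err := os.MkdirTemp("", "llm")
        if err != nil {
            return err
        }
        defer os.RemoveAll(dir)

        bin := filepath.Join(dir, "llama-server") // placeholder name
        if err := os.WriteFile(bin, packedBinary, 0o755); err != nil {
            return err
        }

        cmd := exec.CommandContext(ctx, bin, "--model", modelPath) // placeholder flags
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        // Cancel the context on SIGINT/SIGTERM so the runner is cleaned up on shutdown.
        ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
        defer stop()

        // In the real project the runner would come from a go generate / embed
        // step; here it is simply read from a path given on the command line.
        packed, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := runLlamaRunner(ctx, packed, "model.gguf"); err != nil {
            fmt.Fprintln(os.Stderr, "runner exited:", err)
        }
    }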