- 19 Dec, 2023 1 commit

Daniel Hiltgen authored
If someone checks out the ollama repo without installing the CUDA library, this ensures they can still build a CPU-only version.
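To illustrate the fallback this commit describes, here is a minimal Go sketch of build-time CUDA detection. The `cudaAvailable` helper and the PATH-based check are assumptions for illustration only, not ollama's actual build logic (which lives in its generate scripts).

```go
package main

import (
	"fmt"
	"os/exec"
)

// cudaAvailable reports whether a CUDA toolkit appears to be installed
// by looking for nvcc on PATH. A missing toolkit is not treated as an
// error; it simply selects the CPU-only build path.
// (Hypothetical helper, for illustration only.)
func cudaAvailable() bool {
	_, err := exec.LookPath("nvcc")
	return err == nil
}

func main() {
	if cudaAvailable() {
		fmt.Println("CUDA toolkit found: building GPU variant")
	} else {
		fmt.Println("CUDA toolkit not found: building CPU-only variant")
	}
}
```

The key design point is that the absence of CUDA downgrades the build rather than failing it.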
- 01 Oct, 2023 1 commit

Jiayu Liu authored

- 20 Sep, 2023 4 commits

Michael Yang authored
Bruce MacDonald authored
Bruce MacDonald authored
Bruce MacDonald authored

- 14 Sep, 2023 1 commit

Bruce MacDonald authored
* enable packaging multiple cuda versions
* use nvcc cuda version if available
---------
Co-authored-by: Michael Yang <mxyng@pm.me>
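As a sketch of the "use nvcc cuda version if available" idea: shell out to `nvcc --version` and parse the release number from output such as "Cuda compilation tools, release 11.8, V11.8.89", falling back to a default when nvcc is missing. The `nvccCudaVersion` helper is hypothetical; the real packaging logic is in the build scripts.

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// nvccCudaVersion runs `nvcc --version` and extracts the toolkit
// release (e.g. "11.8"). It returns an empty string when nvcc is
// absent or the output is unexpected, so callers can fall back to
// a default packaged CUDA version.
func nvccCudaVersion() string {
	out, err := exec.Command("nvcc", "--version").Output()
	if err != nil {
		return ""
	}
	m := regexp.MustCompile(`release (\d+\.\d+)`).FindSubmatch(out)
	if m == nil {
		return ""
	}
	return string(m[1])
}

func main() {
	if v := nvccCudaVersion(); v != "" {
		fmt.Println("using CUDA", v, "from local nvcc")
	} else {
		fmt.Println("nvcc not found; using default packaged CUDA version")
	}
}
```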
- 12 Sep, 2023 1 commit

Bruce MacDonald authored
* linux gpu support
* handle multiple gpus
* add cuda docker image (#488)
---------
Co-authored-by: Michael Yang <mxyng@pm.me>
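One way "handle multiple gpus" detection can work, sketched in Go under the assumption that `nvidia-smi` is on PATH: query the free memory of every visible GPU, one line per device. This is illustrative, not necessarily how the commit implemented it.

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// gpuFreeMemoryMiB queries every visible NVIDIA GPU via nvidia-smi
// and returns the free memory of each in MiB. nvidia-smi prints one
// line per GPU for the requested field, so splitting on newlines
// yields one entry per device.
func gpuFreeMemoryMiB() ([]int, error) {
	out, err := exec.Command("nvidia-smi",
		"--query-gpu=memory.free",
		"--format=csv,noheader,nounits").Output()
	if err != nil {
		return nil, err
	}
	var free []int
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		mib, err := strconv.Atoi(strings.TrimSpace(line))
		if err != nil {
			return nil, err
		}
		free = append(free, mib)
	}
	return free, nil
}

func main() {
	free, err := gpuFreeMemoryMiB()
	if err != nil {
		fmt.Println("no NVIDIA GPUs detected:", err)
		return
	}
	for i, mib := range free {
		fmt.Printf("GPU %d: %d MiB free\n", i, mib)
	}
}
```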
- 30 Aug, 2023 1 commit

Bruce MacDonald authored
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
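A minimal sketch of how "pack llama.cpp" and "tmp dir for running llm" can combine: embed a runner binary into the Go executable at build time and extract it to a temp dir before launching it. The embed path `llama.cpp/server` and the `extractRunner` helper are hypothetical; `go:embed` requires that file to exist at build time, so this compiles only under that assumption.

```go
package main

import (
	"embed"
	"fmt"
	"os"
	"path/filepath"
)

// A go:generate step is assumed to place the built runner at
// llama.cpp/server; embed.FS then packs it into this binary.
// (Hypothetical path, for illustration only.)
//
//go:embed llama.cpp/server
var runner embed.FS

// extractRunner writes the embedded llama.cpp runner into a fresh
// temp dir and returns its path, ready to be started as a subprocess.
func extractRunner() (string, error) {
	data, err := runner.ReadFile("llama.cpp/server")
	if err != nil {
		return "", err
	}
	dir, err := os.MkdirTemp("", "llm")
	if err != nil {
		return "", err
	}
	bin := filepath.Join(dir, "server")
	if err := os.WriteFile(bin, data, 0o755); err != nil {
		return "", err
	}
	return bin, nil
}

func main() {
	bin, err := extractRunner()
	if err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	fmt.Println("runner extracted to", bin)
}
```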
- 25 Aug, 2023 1 commit

Michael Yang authored

- 08 Aug, 2023 1 commit

Jeffrey Morgan authored

- 24 Jul, 2023 1 commit

Michael Yang authored

- 21 Jul, 2023 1 commit

Bruce MacDonald authored

- 18 Jul, 2023 1 commit

Matt Williams authored
Signed-off-by: Matt Williams <m@technovangelist.com>
- 07 Jul, 2023 1 commit

Jeffrey Morgan authored

- 28 Jun, 2023 4 commits

Michael Yang authored
Jeffrey Morgan authored
Jeffrey Morgan authored
Bruce MacDonald authored

- 27 Jun, 2023 6 commits

Bruce MacDonald authored
Michael Chiang authored
Michael Chiang authored
Michael Chiang authored
Jeffrey Morgan authored
Bruce MacDonald authored