- 07 May, 2024 (2 commits)
  - Michael Yang authored: This reverts commit 04f971c8.
  - alwqx authored
- 26 Apr, 2024 (4 commits)
  - Blake Mizerany authored: Also, remove a superfluous `go get`.
  - Michael Yang authored
  - Daniel Hiltgen authored: This will make it simpler for CI to accumulate artifacts from prior steps.
  - Daniel Hiltgen authored: The download-artifact `path` was being used incorrectly: it specifies where to extract the zip, not which files to extract from it. The default is the workspace directory, which is what we want, so omit it.
- 23 Apr, 2024 (2 commits)
  - Daniel Hiltgen authored: Now that the llm runner is an executable rather than just a DLL, more users are hitting Windows security policies that prevent writing to a directory and then executing binaries from that same location. This change removes the payloads from the main executable on Windows, packages them in the installer instead, and discovers them relative to the executable's location (see the sketch below). It also adds a new zip file for people who want to "roll their own" installation model.
  - Daniel Hiltgen authored
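
A minimal sketch of the discovery pattern described above: locate runner payloads next to the installed executable instead of extracting them to a writable directory at startup. The `ollama_runners` directory name and layout are assumptions for illustration, not the installer's actual layout.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// runnersDir finds runner payloads next to the installed executable rather
// than extracting them to a writable directory at startup. The directory
// name is a placeholder; the installer defines the real layout.
func runnersDir() (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	// Resolve symlinks so payloads are found next to the real binary.
	if exe, err = filepath.EvalSymlinks(exe); err != nil {
		return "", err
	}
	dir := filepath.Join(filepath.Dir(exe), "ollama_runners") // assumed name
	if _, err := os.Stat(dir); err != nil {
		return "", fmt.Errorf("runner payloads not found at %s: %w", dir, err)
	}
	return dir, nil
}

func main() {
	dir, err := runnersDir()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("using runners from", dir)
}
```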
- 17 Apr, 2024 (5 commits)
- 10 Apr, 2024 (1 commit)
  - Michael Yang authored
- 09 Apr, 2024 (3 commits)
  - Blake Mizerany authored
  - Blake Mizerany authored: This commit introduces a friendlier way to build Ollama's dependencies and binary without abusing `go generate`, removing the unnecessary extra steps it brings with it. The script also gives nicer feedback about what is happening during the build, and at the end prints a helpful message about what to do next (e.g. run the newly built local Ollama). See the sketch below.
  - Michael Yang authored
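
The change above was a build script, not Go code; purely as an illustration of the idea it describes (explicit steps, progress feedback, a next-steps hint), here is a minimal Go sketch. The step commands and hint are placeholders, not the repository's real build commands.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// step is one named build stage to run and report on.
type step struct {
	name string
	cmd  []string
}

func main() {
	// Placeholder steps: the real script builds the native dependencies
	// before compiling the Go binary.
	steps := []step{
		{name: "build dependencies", cmd: []string{"cmake", "--build", "build"}},
		{name: "build ollama", cmd: []string{"go", "build", "-o", "ollama", "."}},
	}
	for i, s := range steps {
		fmt.Printf("[%d/%d] %s...\n", i+1, len(steps), s.name)
		c := exec.Command(s.cmd[0], s.cmd[1:]...)
		c.Stdout, c.Stderr = os.Stdout, os.Stderr
		if err := c.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "step %q failed: %v\n", s.name, err)
			os.Exit(1)
		}
	}
	// Friendly next-steps hint, as the commit message describes.
	fmt.Println("Build complete. Try it out: ./ollama serve")
}
```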
- 04 Apr, 2024 (3 commits)
  - Jeffrey Morgan authored
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored
- 03 Apr, 2024 (2 commits)
  - Daniel Hiltgen authored: The subprocess change moved the build directory; arm64 builds weren't setting cross-compilation flags when building on x86.
  - Jeffrey Morgan authored
- 02 Apr, 2024 (1 commit)
  - Daniel Hiltgen authored
- 01 Apr, 2024 (2 commits)
  - Daniel Hiltgen authored: This should resolve a number of memory-leak and stability defects by isolating llama.cpp in a separate process that shuts down when idle and restarts gracefully if it runs into problems. It is also a first step toward running multiple copies to support multiple models concurrently. See the sketch below.
  - Michael Yang authored
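
A minimal Go sketch of the subprocess lifecycle the commit describes: supervise a runner process, restart it if it exits unexpectedly, and shut it down after an idle timeout. The runner path, timeout, and activity signaling are assumptions for illustration; the real scheduler manages considerably more state.

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// superviseRunner keeps a single runner subprocess alive: it restarts the
// process if it exits unexpectedly and kills it after idleTimeout passes
// with no activity. Callers signal activity by sending on the channel.
func superviseRunner(path string, idleTimeout time.Duration, activity <-chan struct{}) {
	for {
		cmd := exec.Command(path) // hypothetical runner binary path
		if err := cmd.Start(); err != nil {
			log.Fatalf("start runner: %v", err)
		}
		exited := make(chan error, 1)
		go func() { exited <- cmd.Wait() }()

		idle := time.NewTimer(idleTimeout)
	restart:
		for {
			select {
			case <-activity:
				// Each request resets the idle clock.
				if !idle.Stop() {
					<-idle.C
				}
				idle.Reset(idleTimeout)
			case <-idle.C:
				// Idle too long: shut down and stop supervising.
				log.Println("idle timeout: stopping runner")
				cmd.Process.Kill()
				<-exited
				return
			case err := <-exited:
				// Crash or unexpected exit: restart after a short pause.
				log.Printf("runner exited (%v); restarting", err)
				idle.Stop()
				time.Sleep(time.Second)
				break restart
			}
		}
	}
}

func main() {
	activity := make(chan struct{})
	go superviseRunner("./llama-runner", 5*time.Minute, activity) // path is illustrative
	activity <- struct{}{} // e.g. signal one request
	select {}              // block forever in this sketch
}
```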
- 29 Mar, 2024 (1 commit)
  - Jeffrey Morgan authored
- 28 Mar, 2024 (3 commits)
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored: If we're doing generate, test Windows CUDA and ROCm as well.
- 27 Mar, 2024 (5 commits)
  - Michael Yang authored
  - Michael Yang authored
  - Michael Yang authored
  - Michael Yang authored
  - Michael Yang authored
- 26 Mar, 2024 (1 commit)
  - Daniel Hiltgen authored: The manifest and tagging steps use a lot of disk space.
- 15 Mar, 2024 (1 commit)
  - Daniel Hiltgen authored: Flesh out our GitHub Actions CI so we can build official releases.
- 14 Mar, 2024 (2 commits)
  - Blake Mizerany authored
  - Blake Mizerany authored
- 07 Mar, 2024 (2 commits)
  - Michael Yang authored
  - Michael Yang authored