- 24 May, 2024 1 commit
  - Wang,Zhe authored
- 27 Apr, 2024 2 commits
  - Hernan Martinez authored
  - Hernan Martinez authored
- 26 Apr, 2024 5 commits
  - Daniel Hiltgen authored
    This will speed up CI, which already tries to build only the static configuration for unit tests.
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored
    This will make it simpler for CI to accumulate artifacts from prior steps.
- 23 Apr, 2024 1 commit
  - Daniel Hiltgen authored
    Now that the llm runner is an executable and not just a dll, more users are facing problems with security policy configurations on Windows that prevent users from writing to a directory and then executing binaries from that same location. This change removes payloads from the main executable on Windows and shifts them into the installer package, to be discovered at runtime based on the executable's location. This also adds a new zip file for people who want to "roll their own" installation model.
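
To illustrate the discovery model described above, here is a minimal Go sketch; `runnersDir` is a hypothetical helper name for illustration, not the project's actual code:

```go
package main

import (
	"os"
	"path/filepath"
)

// runnersDir locates runner binaries laid down next to the main
// executable by the installer, rather than extracting payloads from
// the executable at runtime. (hypothetical helper, for illustration)
func runnersDir() (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	// resolve symlinks so a linked binary still finds its sibling files
	if exe, err = filepath.EvalSymlinks(exe); err != nil {
		return "", err
	}
	return filepath.Join(filepath.Dir(exe), "runners"), nil
}
```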
- 21 Apr, 2024 1 commit
  - Jeremy authored
    Fixed improper env references.
- 18 Apr, 2024 2 commits
- 09 Apr, 2024 2 commits
  - Blake Mizerany authored
  - Blake Mizerany authored
    This commit introduces a friendlier way to build the Ollama dependencies and binary without abusing `go generate`, removing the unnecessary extra steps it brings with it. The script also provides nicer feedback to the user about what is happening during the build process, and at the end it prints a helpful message about what to do next (e.g. run the new local Ollama).
- 07 Apr, 2024 1 commit
  - Jeffrey Morgan authored
    update generate scripts with new `LLAMA_CUDA` variable, set `HIP_PLATFORM` to avoid compiler errors (#3528)
- 04 Apr, 2024 2 commits
  - Daniel Hiltgen authored
  - mofanke authored
- 01 Apr, 2024 1 commit
  - Daniel Hiltgen authored
    This should resolve a number of memory leak and stability defects by isolating llama.cpp in a separate process that shuts down when idle and gracefully restarts if it runs into problems. This also serves as a first step toward running multiple copies, to support multiple models concurrently.
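
A minimal sketch of that process model in Go, with all names and the timeout value assumed for illustration: start the runner as a child process, kill it after an idle timeout, and let the caller respawn it on the next request.

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

const idleTimeout = 5 * time.Minute // assumed value, for illustration

type runner struct {
	cmd  *exec.Cmd
	idle *time.Timer
}

// startRunner launches the llama.cpp runner in its own process so a
// crash or leak is isolated from the main server.
func startRunner(path string, args ...string) (*runner, error) {
	cmd := exec.Command(path, args...)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	r := &runner{cmd: cmd}
	// shut the runner down once it has been idle for a while
	r.idle = time.AfterFunc(idleTimeout, func() { r.cmd.Process.Kill() })
	go func() {
		// reap the child; the caller starts a fresh runner on demand,
		// which doubles as the graceful-restart path after a crash
		if err := cmd.Wait(); err != nil {
			log.Printf("runner exited: %v", err)
		}
	}()
	return r, nil
}

// touch marks the runner busy, pushing the idle shutdown out again.
func (r *runner) touch() { r.idle.Reset(idleTimeout) }
```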
- 26 Mar, 2024 1 commit
  - Jeffrey Morgan authored
- 15 Mar, 2024 2 commits
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored
    Flesh out our GitHub Actions CI so we can build official releases.
- 12 Mar, 2024 1 commit
  - Daniel Hiltgen authored
- 07 Mar, 2024 1 commit
  - Daniel Hiltgen authored
    This refines where we extract the LLM libraries to by adding a new OLLAMA_HOME env var that defaults to `~/.ollama`. The logic was already idempotent, so this should speed up startups after the first time a new release is deployed, and it now cleans up after itself. We now build only a single ROCm version (latest major) on both Windows and Linux. Given the large size of ROCm's tensor files, we split the dependency out: it's bundled into the installer on Windows and a separate download on Linux. The Linux install script is now smart and detects the presence of AMD GPUs, looks to see if ROCm v6 is already present, and if not, downloads our dependency tar file. For Linux discovery, we now use sysfs and check each GPU against what ROCm supports so we can degrade to CPU gracefully instead of having llama.cpp+rocm assert/crash on us. For Windows, we now use Go's Windows dynamic library loading logic to access the amdhip64.dll APIs to query the GPU information (see the sketch below).
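
The Windows side of that discovery can be sketched with the lazy DLL loading in `golang.org/x/sys/windows`. `hipDeviceCount` is an assumed wrapper name for illustration; `hipGetDeviceCount` is a real export of the HIP runtime.

```go
//go:build windows

package main

import (
	"fmt"
	"unsafe"

	"golang.org/x/sys/windows"
)

// hipDeviceCount loads amdhip64.dll lazily and asks the HIP runtime how
// many devices it sees; on a machine without ROCm the load simply fails,
// letting us fall back to CPU instead of crashing.
func hipDeviceCount() (int, error) {
	hip := windows.NewLazySystemDLL("amdhip64.dll")
	proc := hip.NewProc("hipGetDeviceCount")
	if err := proc.Find(); err != nil {
		return 0, err // DLL or symbol missing: no AMD GPU support
	}
	var count int32
	// hipGetDeviceCount returns hipSuccess (0) on success
	if status, _, _ := proc.Call(uintptr(unsafe.Pointer(&count))); status != 0 {
		return 0, fmt.Errorf("hipGetDeviceCount returned %d", status)
	}
	return int(count), nil
}
```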
- 21 Feb, 2024 1 commit
  - Jeffrey Morgan authored
- 16 Feb, 2024 1 commit
  - Daniel Hiltgen authored
    Also fixes a few fit-and-finish items for a better developer experience.
- 15 Feb, 2024 2 commits
  - Daniel Hiltgen authored
    Even though we weren't setting it to on, somewhere in the cmake config it was getting toggled on. By explicitly setting it to off, we get `/arch:AVX` as intended.
  - Daniel Hiltgen authored
    This focuses on Windows first, but could be used for Mac and possibly Linux in the future.
- 25 Jan, 2024 1 commit
  - Jeffrey Morgan authored
    * Fix clearing kv cache between requests with the same prompt
    * fix powershell script
- 20 Jan, 2024 2 commits
  - Daniel Hiltgen authored
  - Jeffrey Morgan authored
- 19 Jan, 2024 1 commit
  - Jeffrey Morgan authored
- 17 Jan, 2024 1 commit
  - Daniel Hiltgen authored
    This also refines the build process for the ext_server build.
- 11 Jan, 2024 1 commit
  - Daniel Hiltgen authored
    This switches darwin to dynamic loading and refactors the code now that no static linking of the library is used on any platform.
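
On darwin, dynamic loading ultimately comes down to `dlopen`; a minimal cgo sketch follows, with the function name and error handling as illustrative assumptions rather than the project's actual code.

```go
//go:build darwin

package main

/*
#include <dlfcn.h>
#include <stdlib.h>
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// dlopenLibrary opens a shared library at an explicit path and returns
// its handle for later dlsym lookups. (illustrative sketch)
func dlopenLibrary(path string) (unsafe.Pointer, error) {
	cPath := C.CString(path)
	defer C.free(unsafe.Pointer(cPath))
	handle := C.dlopen(cPath, C.RTLD_NOW)
	if handle == nil {
		return nil, fmt.Errorf("dlopen %s: %s", path, C.GoString(C.dlerror()))
	}
	return handle, nil
}
```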
- 05 Jan, 2024 1 commit
  - Bruce MacDonald authored
- 04 Jan, 2024 3 commits
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored
    On Linux, we link the CPU library into the Go app and fall back to it when no GPU match is found. On Windows we do not link in the CPU library, so that we can better control our dependencies for the CLI. This fixes the logic so we correctly fall back to the dynamic CPU library on Windows (see the sketch after this list).
  - Jeffrey Morgan authored
    * update cmake flags for intel macOS
    * remove `LLAMA_K_QUANTS`
    * put back `CMAKE_OSX_DEPLOYMENT_TARGET` and disable `LLAMA_F16C`
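
The fallback rule from the second commit above can be sketched as a small selection function; `pickRunner` and the variant strings are assumptions for illustration, not the project's actual names.

```go
package main

import "runtime"

// pickRunner chooses which LLM library variant to load: a GPU-specific
// build when one matches the hardware, otherwise the CPU build, which is
// statically linked on Linux but a dynamic library on Windows.
func pickRunner(gpuVariant string) string {
	if gpuVariant != "" {
		return gpuVariant // e.g. "cuda_v11" or "rocm_v6" (illustrative names)
	}
	if runtime.GOOS == "windows" {
		return "cpu" // load the dynamic CPU library like any other variant
	}
	return "" // empty: use the CPU code linked into the Go binary
}
```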
- 02 Jan, 2024 3 commits
  - Daniel Hiltgen authored
  - Daniel Hiltgen authored
    Refactor where we store build outputs, and support a fully dynamic loading model on Windows so the base executable has no special dependencies and thus doesn't require a special PATH.
  - Daniel Hiltgen authored
    This changes the model for llama.cpp inclusion so we're not applying a patch but instead have the C++ code directly in the ollama tree, which should make it easier to refine and update over time.