"vscode:/vscode.git/clone" did not exist on "9965cb50eac12e397473f01535aab43aae76b4ab"
- 24 May, 2024 (1 commit)

Patrick Devine authored

- 10 May, 2024 (1 commit)

Daniel Hiltgen authored: Under stress scenarios we're seeing OOMs, so this should help stabilize the allocations under heavy concurrency stress.

- 09 May, 2024 (2 commits)

Daniel Hiltgen authored: The GPU drivers take a while to update their free memory reporting, so we need to wait until the values converge with what we're expecting before starting another runner, in order to get an accurate picture.
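
As a rough illustration of that convergence wait, here is a minimal sketch; `gpuFreeMemory`, the polling interval, and the timeout handling are hypothetical stand-ins, not ollama's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// gpuFreeMemory stands in for a driver query (e.g. via NVML or HIP); hypothetical.
func gpuFreeMemory() uint64 {
	return 0 // a real implementation would ask the driver
}

// waitForVRAMConvergence polls the driver until the reported free memory
// reaches what we expect to be available, or the timeout elapses.
func waitForVRAMConvergence(expectedFree uint64, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if gpuFreeMemory() >= expectedFree {
			return nil // report has caught up; safe to start the next runner
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("free VRAM never converged to %d bytes", expectedFree)
}

func main() {
	if err := waitForVRAMConvergence(8<<30, 5*time.Second); err != nil {
		fmt.Println("proceeding anyway:", err)
	}
}
```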

Daniel Hiltgen authored: This cleans up the logging for GPU discovery a bit, and can serve as a foundation to report GPU information in a future UX.

- 07 May, 2024 (1 commit)

Michael Yang authored

- 06 May, 2024 (1 commit)

Daniel Hiltgen authored: Trying to live off the land for CUDA libraries was not the right strategy. We need to use the version we compiled against to ensure things work properly.

- 05 May, 2024 (1 commit)

Daniel Hiltgen authored: This moves all the env var reading into one central module and logs the loaded config once at startup, which should help in troubleshooting user server logs.
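
The shape of such a module might look like the following sketch; the exact variable set, defaults, and logging calls are assumptions here, not ollama's actual envconfig code:

```go
// Package envconfig sketch: read all OLLAMA_* variables once and log the
// effective configuration at startup so user-provided server logs show it.
package envconfig

import (
	"log/slog"
	"os"
	"strconv"
)

type Config struct {
	NumParallel int  // OLLAMA_NUM_PARALLEL
	MaxLoaded   int  // OLLAMA_MAX_LOADED_MODELS
	Debug       bool // OLLAMA_DEBUG
}

// getInt reads an integer env var, falling back to a default when unset or invalid.
func getInt(key string, def int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

// Load gathers the config in one place and logs it exactly once.
func Load() Config {
	c := Config{
		NumParallel: getInt("OLLAMA_NUM_PARALLEL", 1),
		MaxLoaded:   getInt("OLLAMA_MAX_LOADED_MODELS", 1),
		Debug:       os.Getenv("OLLAMA_DEBUG") != "",
	}
	slog.Info("server config",
		"OLLAMA_NUM_PARALLEL", c.NumParallel,
		"OLLAMA_MAX_LOADED_MODELS", c.MaxLoaded,
		"OLLAMA_DEBUG", c.Debug)
	return c
}
```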

- 03 May, 2024 (1 commit)

Daniel Hiltgen authored: For some reason this library gives incorrect GPU information, so skip it.

- 01 May, 2024 (3 commits)

Daniel Hiltgen authored

Jeffrey Morgan authored

Daniel Hiltgen authored: We're seeing some corner cases with cudart which might be resolved by switching to the driver API, which comes bundled with the driver package.

- 29 Apr, 2024 (1 commit)

Daniel Hiltgen authored

- 26 Apr, 2024 (1 commit)

Jeffrey Morgan authored

- 24 Apr, 2024 (1 commit)

Daniel Hiltgen authored: Correctly handle gfx90a discovery.

- 23 Apr, 2024 (2 commits)

Daniel Hiltgen authored: Now that the llm runner is an executable and not just a DLL, more users are facing problems with security policy configurations on Windows that prevent users from writing to directories and then executing binaries from the same location. This change removes payloads from the main executable on Windows and shifts them over to be packaged in the installer and discovered based on the executable's location. It also adds a new zip file for people who want to "roll their own" installation model.

Daniel Hiltgen authored: This change adds support for multiple concurrent requests, as well as loading multiple models, by spawning multiple runners. The default settings are currently 1 concurrent request per model and only 1 loaded model at a time, but these can be adjusted by setting OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS.
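
A counting semaphore is one common way to enforce a per-model request limit like this; the sketch below (names and structure hypothetical, not ollama's scheduler) uses a buffered channel:

```go
package main

import "fmt"

// runnerSlots models per-runner request slots: a buffered channel of size
// numParallel acts as a counting semaphore.
type runnerSlots chan struct{}

func newRunnerSlots(numParallel int) runnerSlots {
	return make(runnerSlots, numParallel)
}

// acquire blocks until a request slot is free on this runner.
func (s runnerSlots) acquire() { s <- struct{}{} }

// release returns the slot once the request completes.
func (s runnerSlots) release() { <-s }

func main() {
	slots := newRunnerSlots(1) // default: 1 concurrent request per model
	slots.acquire()
	fmt.Println("handling request")
	slots.release()
}
```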

- 16 Apr, 2024 (2 commits)

Michael Yang authored

Michael Yang authored

- 10 Apr, 2024 (1 commit)

Michael Yang authored

- 01 Apr, 2024 (6 commits)

Daniel Hiltgen authored

Daniel Hiltgen authored: Leaving the cudart library loaded kept ~30 MB of memory pinned in the GPU in the main process. This change ensures we don't hold GPU resources when idle.

Daniel Hiltgen authored: We may have users that run into problems with our current payload model, so this gives us an escape valve.

Daniel Hiltgen authored: "cudart init failure: 35" isn't particularly helpful in the logs.

Daniel Hiltgen authored: This should resolve a number of memory leak and stability defects by allowing us to isolate llama.cpp in a separate process, shut it down when idle, and gracefully restart it if it has problems. This also serves as a first step toward running multiple copies to support multiple models concurrently.
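
The supervision loop for such an isolated runner could look roughly like this; the binary path, arguments, and restart policy below are illustrative assumptions, not the real code:

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// superviseRunner launches the llama.cpp runner as a separate process and
// restarts it if it exits abnormally. A crash or OOM in the subprocess no
// longer takes down the main server.
func superviseRunner(bin string, args ...string) {
	for {
		cmd := exec.Command(bin, args...)
		if err := cmd.Start(); err != nil {
			log.Printf("failed to start runner: %v", err)
			return
		}
		if err := cmd.Wait(); err != nil {
			log.Printf("runner exited: %v; restarting", err)
			time.Sleep(time.Second)
			continue
		}
		return // clean exit (e.g. idle shutdown)
	}
}

func main() {
	// Path and args are placeholders for wherever the runner payload lives.
	superviseRunner("/tmp/ollama/runners/cpu/runner", "--port", "0")
}
```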

Michael Yang authored: Count each layer independently when deciding GPU offloading.
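
The per-layer accounting reads naturally as a simple loop; the layer sizes here are made-up values for illustration:

```go
package main

import "fmt"

// countOffloadLayers decides how many layers fit on the GPU by charging
// each layer's own size against free VRAM, rather than assuming layers
// are uniformly sized. layerSizes would come from the model metadata.
func countOffloadLayers(freeVRAM uint64, layerSizes []uint64) int {
	n := 0
	for _, size := range layerSizes {
		if size > freeVRAM {
			break
		}
		freeVRAM -= size
		n++
	}
	return n
}

func main() {
	layers := []uint64{600 << 20, 512 << 20, 512 << 20, 900 << 20}
	fmt.Println("offloading", countOffloadLayers(2<<30, layers), "layers")
}
```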

- 28 Mar, 2024 (1 commit)

Michael Yang authored

- 25 Mar, 2024 (1 commit)

Jeremy authored

- 20 Mar, 2024 (1 commit)

Daniel Hiltgen authored: If expanding the runners fails, don't leave a corrupt/incomplete payloads dir. We now write a pid file out to the tmpdir, which allows us to scan for stale tmpdirs and remove them as long as there isn't still a process running.
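
A sketch of that pid-file approach follows; the file name, directory pattern, and the Unix-only liveness probe are all assumptions for illustration:

```go
//go:build !windows

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"syscall"
)

// writePidFile records our pid inside the payload tmpdir so later runs can
// tell whether a leftover dir belongs to a live process.
func writePidFile(dir string) error {
	return os.WriteFile(filepath.Join(dir, "ollama.pid"),
		[]byte(strconv.Itoa(os.Getpid())), 0o644)
}

// cleanupStaleTmpDirs removes payload tmpdirs whose recorded process is gone.
func cleanupStaleTmpDirs(pattern string) {
	dirs, _ := filepath.Glob(pattern)
	for _, d := range dirs {
		data, err := os.ReadFile(filepath.Join(d, "ollama.pid"))
		if err != nil {
			continue
		}
		pid, err := strconv.Atoi(string(data))
		if err != nil {
			continue
		}
		proc, err := os.FindProcess(pid)
		// Signal 0 probes for process existence without actually signalling.
		if err == nil && proc.Signal(syscall.Signal(0)) == nil {
			continue // still running; leave it alone
		}
		fmt.Println("removing stale payload dir", d)
		os.RemoveAll(d)
	}
}

func main() {
	dir, _ := os.MkdirTemp("", "ollama-payload-")
	_ = writePidFile(dir)
	cleanupStaleTmpDirs(filepath.Join(os.TempDir(), "ollama-payload-*"))
}
```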

- 12 Mar, 2024 (2 commits)

Daniel Hiltgen authored: This fixes a few bugs in the new sysfs discovery logic. iGPUs are now correctly identified by the <1G VRAM they report. The sysfs IDs are off by one compared to what HIP wants, due to the CPU being reported in amdgpu, but HIP only cares about GPUs.
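
In sketch form, the two fix-ups described might reduce to the following (illustrative only, not the real discovery code):

```go
package amd

// hipIndexFor converts an amdgpu sysfs enumeration index into the index HIP
// expects: amdgpu reports the CPU as the first node, while HIP counts only GPUs.
func hipIndexFor(sysfsID int) int {
	return sysfsID - 1
}

// isIGPU flags integrated GPUs, which show up reporting less than 1 GiB of
// dedicated VRAM and should be skipped during discovery.
func isIGPU(totalVRAMBytes uint64) bool {
	return totalVRAMBytes < 1<<30
}
```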

mofanke authored

- 11 Mar, 2024 (1 commit)

Daniel Hiltgen authored: Putting the rocm symlink next to the runners is risky. This moves the payloads into a subdir to avoid potential clashes.

- 10 Mar, 2024 (1 commit)

Daniel Hiltgen authored: This allows people who package up ollama on their own to place the ROCm dependencies in a peer directory to the ollama executable, much like our Windows install flow.
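
Resolving a peer directory relative to the running executable is a small amount of Go; the `rocm` directory name here is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// rocmDir looks for ROCm dependencies in a directory next to the ollama
// executable, mirroring the Windows install layout.
func rocmDir() (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	dir := filepath.Join(filepath.Dir(exe), "rocm")
	if _, err := os.Stat(dir); err != nil {
		return "", err
	}
	return dir, nil
}

func main() {
	if dir, err := rocmDir(); err == nil {
		fmt.Println("using bundled ROCm at", dir)
	}
}
```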

- 09 Mar, 2024 (2 commits)

Jeffrey Morgan authored

Daniel Hiltgen authored: The recent ROCm change partially removed idempotent payloads, but the ggml-metal.metal file for Mac was still idempotent. This finishes switching to always extract the payloads, and now that idempotency is gone, the version directory is no longer useful.

- 07 Mar, 2024 (2 commits)

Daniel Hiltgen authored: This refines where we extract the LLM libraries to by adding a new OLLAMA_HOME env var that defaults to `~/.ollama`. The logic was already idempotent, so this should speed up startups after the first time a new release is deployed, and it also cleans up after itself. We now build only a single ROCm version (latest major) on both Windows and Linux. Given the large size of ROCm's tensor files, we split the dependency out: it's bundled into the installer on Windows and a separate download on Linux. The Linux install script is now smart: it detects the presence of AMD GPUs, looks to see if ROCm v6 is already present, and if not, downloads our dependency tar file. For Linux discovery, we now use sysfs and check each GPU against what ROCm supports, so we can degrade to CPU gracefully instead of having llama.cpp+rocm assert/crash on us. For Windows, we now use Go's Windows dynamic library loading logic to access the amdhip64.dll APIs to query GPU information.

Daniel Hiltgen authored: Until we get all the memory calculations correct, this can provide an escape valve for users to work around out-of-memory crashes.

- 29 Feb, 2024 (1 commit)

tylinux authored

- 25 Feb, 2024 (1 commit)

peanut256 authored:
* Read iogpu.wired_limit_mb on macOS. Fix for https://github.com/ollama/ollama/issues/1826
* Improved determination of available VRAM on macOS: read the recommended maximum VRAM via the Metal API
* Removed macOS-specific logging
* Removed logging from gpu_darwin.go
* Release the Core Foundation object; fixes a possible memory leak
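
On macOS the wired limit is exposed as a sysctl, so a hedged sketch using golang.org/x/sys/unix follows; this log doesn't confirm the value's width, so the sketch tries 64-bit then 32-bit, and the real change reads the recommended working set via Metal rather than only this sysctl:

```go
//go:build darwin

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// wiredLimitMB reads the user-configurable GPU wired-memory limit on Apple
// Silicon. A value of 0 means the OS default is in effect.
func wiredLimitMB() (uint64, error) {
	v, err := unix.SysctlUint64("iogpu.wired_limit_mb")
	if err != nil {
		// Fall back in case the value is exposed as 32-bit.
		v32, err32 := unix.SysctlUint32("iogpu.wired_limit_mb")
		if err32 != nil {
			return 0, err
		}
		return uint64(v32), nil
	}
	return v, nil
}

func main() {
	if mb, err := wiredLimitMB(); err == nil && mb > 0 {
		fmt.Printf("VRAM capped at %d MiB by iogpu.wired_limit_mb\n", mb)
	}
}
```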

- 17 Feb, 2024 (1 commit)

Daniel Hiltgen authored: It looks like the version file doesn't exist on older(?) drivers.

- 12 Feb, 2024 (1 commit)

Daniel Hiltgen authored: This wires up some new logic to start using sysfs to discover AMD GPU information, and detects old cards we can't yet support so we can fall back to CPU mode.
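
A minimal sysfs probe in this spirit might read the amdgpu driver's mem_info_vram_total node and fall back to CPU when it is absent; the exact paths checked and the fallback policy here are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// vramTotal reads the total VRAM the amdgpu driver reports for a card via
// sysfs; a read failure is treated as "unsupported" in this sketch.
func vramTotal(card int) (uint64, error) {
	path := fmt.Sprintf("/sys/class/drm/card%d/device/mem_info_vram_total", card)
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	if total, err := vramTotal(0); err == nil {
		fmt.Printf("card0 VRAM: %d MiB\n", total>>20)
	} else {
		fmt.Println("no supported AMD GPU found; falling back to CPU")
	}
}
```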