- 07 Jun, 2025 1 commit
  Krzysztof Jeziorny authored
- 07 Mar, 2025 1 commit
  rekcäH nitraM authored
  The problem with default.target is that it always points to whatever target is currently being started, so if you boot into single-user or rescue mode, Ollama still tries to start. I noticed this because it repeatedly tried (and failed) to start during a system update, where Ollama is definitely not wanted.
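  A minimal sketch of the corresponding fix on an installed system, assuming the unit lives at /etc/systemd/system/ollama.service and currently uses WantedBy=default.target:

  ```sh
  # Tie the service to the regular boot target instead of default.target,
  # so rescue/single-user boots no longer pull it in.
  sudo sed -i 's/WantedBy=default.target/WantedBy=multi-user.target/' \
    /etc/systemd/system/ollama.service
  # Re-create the enablement symlinks so the new WantedBy= takes effect.
  sudo systemctl reenable ollama.service
  ```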
- 07 Feb, 2025 1 commit
  Azis Alvriyanto authored
- 06 Feb, 2025 1 commit
  Abhinav Pant authored
- 03 Feb, 2025 1 commit
  Melroy van den Berg authored
- 10 Dec, 2024 1 commit
  Daniel Hiltgen authored
  * llama: wire up builtin runner
    This adds a new entrypoint into the ollama CLI to run the cgo-built runner. On Mac arm64 this will have GPU support, but on all other platforms it will be the lowest-common-denominator CPU build. After we fully transition to the new Go runners, more tech debt can be removed and we can stop building the "default" runner via make and rely on the builtin always.
  * build: Make target improvements
    Add a few new targets and help for building locally. This also adjusts the runner lookup to favor local builds, then runners relative to the executable, and finally payloads.
  * Support customized CPU flags for runners
    This implements a simplified custom CPU flags pattern for the runners. When built without overrides, the runner name contains the vector flag we check for (AVX) to ensure we don't try to run on unsupported systems and crash. If the user builds a customized set, we omit the naming scheme and don't check for compatibility. This avoids checking requirements at runtime, so that logic has been removed as well. This can be used to build GPU runners with no vector flags, or CPU/GPU runners with additional flags (e.g. AVX512) enabled.
  * Use relative paths
    If the user checks out the repo in a path that contains spaces, make gets really confused, so use relative paths for everything in-repo to avoid breakage.
  * Remove payloads from main binary
  * install: clean up prior libraries
    This removes support for v0.3.6 and older versions (before the tar bundle) and ensures we clean up prior libraries before extracting the bundle(s). Without this change, runners and dependent libraries could leak when we update and lead to subtle runtime errors.
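  A rough sketch of the resulting local workflow; the make target and subcommand names below are assumptions for illustration, not taken from the commit:

  ```sh
  # Hypothetical local build-and-run flow; exact target names may differ.
  make help               # the commit adds help for building locally
  make -j"$(nproc)"       # local builds are now favored by the runner lookup
  ./ollama runner --help  # assumed name of the new builtin-runner entrypoint
  ```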
- 17 Nov, 2024 1 commit
  Jeffrey Morgan authored
- 07 Sep, 2024 1 commit
  Jeffrey Morgan authored
  Includes small improvements to document layout and code blocks.
- 04 Sep, 2024 1 commit
  Tomoya Fujita authored
- 27 Aug, 2024 1 commit
  Daniel Hiltgen authored
- 19 Aug, 2024 2 commits
  Daniel Hiltgen authored
  Daniel Hiltgen authored
- 09 Jun, 2024 1 commit
  Napuh authored
  * Added instructions to easily install specific versions to faq.md
  * Small typo
  * Moved instructions on how to install a specific version to linux.md
  * Update docs/linux.md
  * Update docs/linux.md
  Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
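  The documented way to pin a release on Linux looks roughly like this (the version number is a placeholder):

  ```sh
  # Install a specific Ollama version; 0.1.32 is a placeholder release.
  curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.1.32 sh
  ```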
- 06 May, 2024 1 commit
  Mohamed A. Fouad authored
  Add -e to the log-viewing command so it shows the end of the Ollama logs.
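  Assuming the standard systemd setup from linux.md, the resulting command is:

  ```sh
  # -e jumps to the end of the journal; -u filters to the ollama unit.
  journalctl -e -u ollama
  ```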
- 09 Mar, 2024 1 commit
  Daniel Hiltgen authored
  The recent ROCm change partially removed idempotent payloads, but the ggml-metal.metal file for Mac was still idempotent. This finishes the switch to always extracting the payloads, and now that idempotency is gone, the version directory is no longer useful.
- 07 Mar, 2024 1 commit
  Daniel Hiltgen authored
  This refines where we extract the LLM libraries to by adding a new OLLAMA_HOME env var, which defaults to `~/.ollama`. The logic was already idempotent, so this should speed up startups after the first time a new release is deployed. It also cleans up after itself.
  We now build only a single ROCm version (latest major) on both Windows and Linux. Given the large size of ROCm's tensor files, we split the dependency out: it's bundled into the installer on Windows and a separate download on Linux. The Linux install script now detects the presence of AMD GPUs, checks whether ROCm v6 is already present, and if not, downloads our dependency tar file.
  For Linux discovery, we now use sysfs and check each GPU against what ROCm supports so we can degrade to CPU gracefully instead of having llama.cpp+rocm assert/crash on us. For Windows, we now use Go's Windows dynamic-library loading logic to access the amdhip64.dll APIs to query GPU information.
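  A quick sketch of the new knob; the path is an arbitrary example and the usage pattern is assumed, not taken from the commit:

  ```sh
  # Extract the LLM libraries under /opt/ollama instead of the ~/.ollama default.
  OLLAMA_HOME=/opt/ollama ollama serve
  ```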
- 09 Feb, 2024 1 commit
  Jeffrey Morgan authored
- 12 Jan, 2024 1 commit
  Tristram Oaten authored
  After executing the `userdel ollama` command, I saw this message:
  ```sh
  $ sudo userdel ollama
  userdel: group ollama not removed because it has other members.
  ```
  Which reminded me that the dangling group has to be removed too. For completeness, the uninstall instructions should do this as well. Thanks!
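  The added uninstall step, matching the commands above:

  ```sh
  # Remove the ollama user and then the dangling ollama group.
  sudo userdel ollama
  sudo groupdel ollama
  ```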
- 25 Oct, 2023 1 commit
  Michael Yang authored
- 24 Oct, 2023 1 commit
  Bruce MacDonald authored
- 15 Oct, 2023 1 commit
  Jeffrey Morgan authored
- 01 Oct, 2023 1 commit
  Jiayu Liu authored
- 25 Sep, 2023 5 commits
  Jeffrey Morgan authored
  Jeffrey Morgan authored
  Jeffrey Morgan authored
  Jeffrey Morgan authored
  Jeffrey Morgan authored