- 07 Aug, 2024 1 commit
Jesse Gross authored
Currently if the config field is missing in the manifest file (or corrupted), Ollama will crash when it tries to read it. This can happen at startup or when pulling new models. This data is mostly just used for showing model information so we can be tolerant of it not being present - it is not required to run the models. Besides avoiding crashing, this also gives us the ability to restructure the config in the future by pulling it into the main manifest file.
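The tolerant-read idea above can be sketched as follows. This is an illustrative helper, not Ollama's actual code: the struct fields and the `loadConfig` name are assumptions, but the behavior matches the commit — a missing or corrupted config yields a usable zero value instead of a crash.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ConfigV2 loosely mirrors the model config stored alongside a manifest;
// the field names here are illustrative, not Ollama's exact schema.
type ConfigV2 struct {
	ModelFormat string `json:"model_format"`
	ModelFamily string `json:"model_family"`
}

// loadConfig is a hypothetical helper: instead of failing hard when the
// config blob is missing or corrupted, it returns a zero-value config so
// callers that only display model information keep working.
func loadConfig(data []byte) ConfigV2 {
	var c ConfigV2
	if len(data) == 0 {
		return c // missing config: tolerate it
	}
	if err := json.Unmarshal(data, &c); err != nil {
		return ConfigV2{} // corrupted config: tolerate it too
	}
	return c
}

func main() {
	fmt.Println(loadConfig(nil))                               // zero value, no crash
	fmt.Println(loadConfig([]byte(`{not json`)))               // zero value, no crash
	fmt.Println(loadConfig([]byte(`{"model_format":"gguf"}`))) // parsed normally
}
```

Returning a zero value rather than an error keeps display-only call sites simple; code paths that genuinely require the config can still check for empty fields.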
- 20 May, 2024 1 commit
Michael Yang authored
particularly useful for zipfiles and f16s
- 14 May, 2024 1 commit
Michael Yang authored
- 06 May, 2024 1 commit
Michael Yang authored
- FROM /path/to/{safetensors,pytorch}
- FROM /path/to/fp{16,32}.bin
- FROM model:fp{16,32}
- 15 Mar, 2024 1 commit
Blake Mizerany authored
This fixes issues where blob file names containing ':' characters were rejected by file systems that do not support them.
- 05 Dec, 2023 1 commit
Michael Yang authored
Previous layer creation was not ideal because:

1. It required reading the input file multiple times: once to calculate the sha256 checksum, once to write it to disk, and potentially once more to decode the underlying gguf.
2. It used io.ReadSeeker, which is prone to user error. If the file isn't reset correctly or to the right position, it could end up reading an empty file.

There is also some brittleness when reading existing layers; otherwise, writing the inherited layers will error reading an already closed file.

This commit aims to fix these issues by restructuring layer creation:

1. The layer is now written to a temporary file as well as the hash function, and moved to the final location on Commit.
2. Layers are read only once, when copied to the destination. The exception is raw model files, which still require a second read to decode the model metadata.