1. 14 Aug, 2024 2 commits
  2. 08 Aug, 2024 1 commit
    • manifest: Store layers inside manifests consistently as values. · 7edaf6e7
      Jesse Gross authored
      Commit 1829fb61 ("manifest: Fix crash on startup when trying to clean up
      unused files (#5840)") changed the config layer stored in manifests
      from a pointer to a value. This was done to avoid potential nil pointer
      dereferences when the field is missing from the deserialized JSON.

      This changes the Layers slice to also be stored by value, so that the
      two objects are handled consistently.
      7edaf6e7
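      A minimal Go sketch of the idea (illustrative types, not Ollama's actual
      code): with the config layer and the Layers elements stored by value, a
      manifest whose JSON is missing the config field unmarshals to a zero
      value instead of a nil pointer that could be dereferenced later.

        package main

        import (
            "encoding/json"
            "fmt"
        )

        type Layer struct {
            MediaType string `json:"mediaType"`
            Digest    string `json:"digest"`
            Size      int64  `json:"size"`
        }

        type Manifest struct {
            Config Layer   `json:"config"` // stored by value: no nil dereference possible
            Layers []Layer `json:"layers"` // elements also stored by value for consistency
        }

        func main() {
            // "config" is absent from this JSON; with a *Layer field it would
            // stay nil and a later Config.Digest access would panic.
            data := []byte(`{"layers":[{"mediaType":"application/vnd.ollama.image.model","digest":"sha256:abc","size":42}]}`)

            var m Manifest
            if err := json.Unmarshal(data, &m); err != nil {
                panic(err)
            }
            fmt.Println(m.Config.Digest == "") // true: zero value, safe to inspect
        }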
  3. 07 Aug, 2024 3 commits
    • image: Clarify argument to WriteManifest is config · 97ec8cfd
      Jesse Gross authored
      When creating a model, the config layer is appended to the list of
      layers, and then the last layer is used as the config when writing the
      manifest. This change uses the config layer directly to write the
      manifest. There is no behavior change, but it is less error prone.
      97ec8cfd
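      A hypothetical sketch of the calling pattern described above; the
      writeManifest function and Layer type are illustrative, not Ollama's
      actual API. Passing the config layer explicitly avoids relying on it
      being the last element of the layers slice.

        package main

        import "fmt"

        type Layer struct{ Digest string }

        // writeManifest takes the config layer as an explicit argument instead
        // of assuming it is the last element of layers.
        func writeManifest(name string, config Layer, layers []Layer) {
            fmt.Printf("manifest %s: config=%s, %d layers\n", name, config.Digest, len(layers))
        }

        func main() {
            layers := []Layer{{Digest: "sha256:weights"}}
            config := Layer{Digest: "sha256:config"}
            layers = append(layers, config)

            // Previously the call site would recover the config by position,
            // e.g. writeManifest("example", layers[len(layers)-1], layers);
            // passing config directly removes that implicit coupling.
            writeManifest("example", config, layers)
        }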
    • manifest: Fix crash on startup when trying to clean up unused files (#5840) · 1829fb61
      Jesse Gross authored
      Currently, if the config field in the manifest file is missing (or
      corrupted), Ollama will crash when it tries to read it. This can
      happen at startup or when pulling new models.

      This data is mostly used for showing model information, so we can
      tolerate it not being present - it is not required to run the models.
      Besides avoiding crashes, this also gives us the ability to
      restructure the config in the future by pulling it into the main
      manifest file.
      1829fb61
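      A minimal sketch of the tolerant behavior, assuming hypothetical field
      names and helpers (configData and loadConfig are not Ollama's actual
      schema): reading the config blob is best-effort, since its contents are
      only used to show model information.

        package main

        import (
            "encoding/json"
            "fmt"
            "os"
        )

        // configData mirrors only the fields needed for display.
        type configData struct {
            ModelFamily string `json:"model_family"`
            ModelType   string `json:"model_type"`
        }

        // loadConfig never fails hard: a missing or corrupted config blob
        // simply yields empty details, because the config is not required to
        // run the model.
        func loadConfig(path string) configData {
            var cfg configData
            data, err := os.ReadFile(path)
            if err != nil {
                return cfg // blob missing: show nothing rather than crash
            }
            if err := json.Unmarshal(data, &cfg); err != nil {
                return cfg // blob corrupted: likewise degrade gracefully
            }
            return cfg
        }

        func main() {
            cfg := loadConfig("/nonexistent/config.json")
            fmt.Printf("family=%q type=%q\n", cfg.ModelFamily, cfg.ModelType)
        }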
    • manifest: Don't prune layers if we can't open a manifest file · 685a5353
      Jesse Gross authored
      If there is an error when opening a manifest file (corrupted, permission
      denied, etc.), then the referenced layers will not be included in the
      list of active layers. This causes them to be deleted when pruning
      happens at startup or when a model is pulled.

      In such a situation, we should prefer to preserve data in the hope that
      it can be recovered rather than being aggressive about deletion.
      685a5353
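      A minimal sketch of the pruning rule, with an illustrative on-disk
      layout and types rather than Ollama's actual ones: if any manifest
      cannot be read or parsed, pruning is abandoned entirely so that layers
      referenced by the unreadable manifest are never mistaken for garbage.

        package main

        import (
            "encoding/json"
            "fmt"
            "os"
            "path/filepath"
        )

        type Layer struct {
            Digest string `json:"digest"`
        }

        type Manifest struct {
            Config Layer   `json:"config"`
            Layers []Layer `json:"layers"`
        }

        // pruneUnusedLayers deletes blobs not referenced by any manifest, but
        // refuses to delete anything if even one manifest is unreadable.
        func pruneUnusedLayers(blobDir string, manifestPaths []string) error {
            inUse := make(map[string]bool)
            for _, p := range manifestPaths {
                data, err := os.ReadFile(p)
                if err != nil {
                    // Prefer preserving data that might still be recoverable
                    // over aggressively deleting blobs we can no longer
                    // account for.
                    return fmt.Errorf("skipping prune, unreadable manifest %s: %w", p, err)
                }
                var m Manifest
                if err := json.Unmarshal(data, &m); err != nil {
                    return fmt.Errorf("skipping prune, corrupt manifest %s: %w", p, err)
                }
                inUse[m.Config.Digest] = true
                for _, l := range m.Layers {
                    inUse[l.Digest] = true
                }
            }
            entries, err := os.ReadDir(blobDir)
            if err != nil {
                return err
            }
            for _, e := range entries {
                if !e.IsDir() && !inUse[e.Name()] {
                    os.Remove(filepath.Join(blobDir, e.Name()))
                }
            }
            return nil
        }

        func main() {
            if err := pruneUnusedLayers("blobs", []string{"manifests/example"}); err != nil {
                fmt.Println(err)
            }
        }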
  4. 02 Aug, 2024 1 commit
  5. 31 Jul, 2024 1 commit
  6. 26 Jul, 2024 1 commit
  7. 25 Jul, 2024 1 commit
  8. 22 Jul, 2024 1 commit
  9. 19 Jul, 2024 1 commit
  10. 16 Jul, 2024 1 commit
  11. 15 Jul, 2024 1 commit
  12. 05 Jul, 2024 1 commit
  13. 01 Jul, 2024 4 commits
  14. 25 Jun, 2024 1 commit
    • llm: speed up gguf decoding by a lot (#5246) · cb42e607
      Blake Mizerany authored
      Previously, some costly things were causing the loading of GGUF files
      and their metadata and tensor information to be VERY slow:
      
        * Too many allocations when decoding strings
        * Hitting disk for each read of each key and value, resulting in a
          not-okay amount of syscalls/disk I/O.
      
      The show API is now down to 33ms from 800ms+ for llama3 on a MacBook
      Pro M3.

      This commit also allows skipping the collection of large arrays of
      values when decoding GGUFs, if desired. When such keys are encountered,
      their values are null and are encoded as such in JSON.
      
      Also, this fixes a broken test that was not encoding valid GGUF.
      cb42e607
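      A minimal sketch of the two optimizations described above, not Ollama's
      actual GGUF decoder: metadata reads go through a bufio.Reader so each
      key/value no longer costs a separate syscall, and length-prefixed
      strings are decoded by reading straight into one buffer.

        package main

        import (
            "bufio"
            "encoding/binary"
            "fmt"
            "io"
            "os"
        )

        // readString decodes a GGUF-style length-prefixed string (uint64
        // length followed by the bytes) by filling a single buffer, avoiding
        // intermediate copies.
        func readString(r io.Reader) (string, error) {
            var n uint64
            if err := binary.Read(r, binary.LittleEndian, &n); err != nil {
                return "", err
            }
            buf := make([]byte, int(n))
            if _, err := io.ReadFull(r, buf); err != nil {
                return "", err
            }
            return string(buf), nil
        }

        func main() {
            f, err := os.Open("model.gguf") // illustrative path
            if err != nil {
                fmt.Println(err)
                return
            }
            defer f.Close()

            // Buffering the file means header and metadata fields are parsed
            // from memory instead of issuing a small read syscall per field.
            r := bufio.NewReaderSize(f, 1<<20)

            // Fixed GGUF header: magic, version, tensor count, metadata KV
            // count; the first metadata entry then begins with a string key.
            var hdr struct {
                Magic, Version      uint32
                Tensors, MetadataKV uint64
            }
            if err := binary.Read(r, binary.LittleEndian, &hdr); err != nil {
                fmt.Println(err)
                return
            }
            if key, err := readString(r); err == nil {
                fmt.Println("first metadata key:", key)
            }
        }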
  15. 21 Jun, 2024 1 commit
  16. 13 Jun, 2024 1 commit
  17. 12 Jun, 2024 1 commit
  18. 07 Jun, 2024 1 commit
  19. 06 Jun, 2024 1 commit
  20. 05 Jun, 2024 1 commit
  21. 04 Jun, 2024 5 commits
  22. 24 May, 2024 1 commit
  23. 20 May, 2024 4 commits
  24. 14 May, 2024 1 commit
  25. 09 May, 2024 1 commit
  26. 08 May, 2024 2 commits