1. 06 Dec, 2024 1 commit
  2. 05 Dec, 2024 2 commits
  3. 03 Dec, 2024 1 commit
  4. 30 Nov, 2024 1 commit
  5. 26 Nov, 2024 1 commit
  6. 25 Nov, 2024 1 commit
  7. 22 Nov, 2024 2 commits
    • server: remove out of date anonymous access check (#7785) · 7b5585b9
      Bruce MacDonald authored
      In the past the ollama.com server would return a JWT that contained
      information about the authenticated user, which was used to return
      different error messages. This is no longer possible, since the token
      used to authenticate no longer contains information about the user.
      Remove this code, which no longer works.
      
      Follow-up changes will improve the error messages returned here, but
      it's good to clean up first.
    • Be quiet when redirecting output (#7360) · d88972ea
      Daniel Hiltgen authored
      This avoids emitting progress indicators to stderr and interactive
      prompts to the output file or pipe. Running "ollama run model > out.txt"
      now exits immediately, and "echo hello | ollama run model > out.txt"
      produces zero stderr output and a typical response in out.txt.
  8. 21 Nov, 2024 1 commit
  9. 14 Nov, 2024 1 commit
  10. 25 Oct, 2024 1 commit
  11. 22 Oct, 2024 1 commit
  12. 18 Oct, 2024 1 commit
  13. 10 Oct, 2024 1 commit
    • cli: Send all images in conversation history · 7fe39025
      Jesse Gross authored
      Currently the CLI only sends images from the most recent
      image-containing message. This prevents doing things like sending one
      message with an image, then a follow-up message with a second image,
      and asking for a comparison based on information not present in any
      text that was output.
      
      It's possible that some models have trouble with this, but the CLI is
      not the right place to compensate, since any adjustments are
      model-specific and should apply to all clients.
      
      Both llava:34b and minicpm-v do reasonable things with multiple
      images in the history.
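The fix amounts to gathering images from every message in the history rather than only the last image-bearing one. A sketch of that collection step (the `message` type and field names here are illustrative, not ollama's internal types):

```go
package main

import "fmt"

// message mirrors one turn of a chat conversation that may carry
// attached images (illustrative type, not ollama's actual struct).
type message struct {
	Role   string
	Text   string
	Images []string
}

// allImages gathers images from every message in the history, not
// just the most recent image-bearing one, so a follow-up question
// can compare against images sent in earlier turns.
func allImages(history []message) []string {
	var imgs []string
	for _, m := range history {
		imgs = append(imgs, m.Images...)
	}
	return imgs
}

func main() {
	history := []message{
		{Role: "user", Text: "here is photo A", Images: []string{"a.png"}},
		{Role: "assistant", Text: "I see photo A"},
		{Role: "user", Text: "compare with photo B", Images: []string{"b.png"}},
	}
	// Both a.png and b.png are included, not only the latest image.
	fmt.Println(allImages(history))
}
```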
  14. 01 Oct, 2024 1 commit
  15. 11 Sep, 2024 2 commits
  16. 05 Sep, 2024 2 commits
  17. 01 Sep, 2024 1 commit
  18. 23 Aug, 2024 1 commit
  19. 21 Aug, 2024 1 commit
  20. 14 Aug, 2024 1 commit
  21. 12 Aug, 2024 1 commit
  22. 02 Aug, 2024 1 commit
  23. 27 Jul, 2024 1 commit
  24. 26 Jul, 2024 3 commits
  25. 23 Jul, 2024 1 commit
  26. 22 Jul, 2024 3 commits
    • bool · 55cd3ddc
      Michael Yang authored
    • host · 4f1afd57
      Michael Yang authored
    • Remove no longer supported max vram var · cc269ba0
      Daniel Hiltgen authored
      The OLLAMA_MAX_VRAM env var was a temporary workaround for OOM
      scenarios. With concurrency support this was no longer wired up, and a
      single simplistic value doesn't map to multi-GPU setups. Users can
      still set `num_gpu` to limit memory usage and avoid OOM if our
      predictions are wrong.
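The `num_gpu` workaround mentioned above can be supplied per request through the API's `options` field, which controls how many model layers are offloaded to the GPU. A sketch of a request body (model name and value are illustrative; `options.num_gpu` follows Ollama's public API):

```json
{
  "model": "llama3",
  "prompt": "hello",
  "options": {
    "num_gpu": 20
  }
}
```

Lowering `num_gpu` keeps more layers on the CPU, trading speed for a smaller VRAM footprint.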
  27. 14 Jul, 2024 1 commit
  28. 12 Jul, 2024 2 commits
  29. 28 Jun, 2024 2 commits
  30. 27 Jun, 2024 1 commit