  1. 11 Jan, 2025 1 commit
  2. 09 Jan, 2025 1 commit
  3. 01 Jan, 2025 1 commit
  4. 11 Dec, 2024 1 commit
  5. 10 Dec, 2024 1 commit
    • build: Make target improvements (#7499) · 4879a234
      Daniel Hiltgen authored
      * llama: wire up builtin runner
      
      This adds a new entrypoint into the ollama CLI to run the cgo built runner.
      On Mac arm64, this will have GPU support, but on all other platforms it will
      be the lowest common denominator CPU build.  After we fully transition
      to the new Go runners, more tech debt can be removed and we can stop
      building the "default" runner via make, relying instead on the builtin runner.
      
      * build: Make target improvements
      
      Add a few new targets and help for building locally.
      This also adjusts the runner lookup to favor local builds, then
      runners relative to the executable, and finally payloads.
      
      * Support customized CPU flags for runners
      
      This implements a simplified custom CPU flags pattern for the runners.
      When built without overrides, the runner name contains the vector flag
      we check for (AVX) to ensure we don't try to run on unsupported systems
      and crash.  If the user builds a customized set, we omit the naming
      scheme and don't check for compatibility.  This avoids checking
      requirements at runtime, so that logic has been removed as well.  This
      can be used to build GPU runners with no vector flags, or CPU/GPU
      runners with additional flags (e.g. AVX512) enabled.
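
      The compatibility rule described above — check the vector flag only when it
      is encoded in the runner's name, and trust custom builds unconditionally —
      amounts to a suffix check. This is a minimal sketch; the function name and
      the single-flag simplification are assumptions, not the actual ollama code:

```go
package main

import (
	"fmt"
	"strings"
)

// runnerCompatible reports whether a runner named after its required vector
// extension (e.g. "cpu_avx") can run on this host. Custom builds omit the
// suffix and are trusted without a check, so no requirements need to be
// verified at runtime. hostHasAVX would come from CPU feature detection in
// real code; here it is a plain parameter.
func runnerCompatible(runnerName string, hostHasAVX bool) bool {
	if strings.HasSuffix(runnerName, "_avx") {
		return hostHasAVX
	}
	// No vector flag encoded in the name: assume a customized build
	// and trust the user, avoiding any runtime requirement check.
	return true
}

func main() {
	fmt.Println(runnerCompatible("cpu_avx", false)) // false: would crash, skip it
	fmt.Println(runnerCompatible("cpu", false))     // true: lowest common denominator
	fmt.Println(runnerCompatible("custom", false))  // true: user-built, unchecked
}
```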
      
      * Use relative paths
      
      If the user checks out the repo in a path that contains spaces, make gets
      really confused, so use relative paths for everything in-repo to avoid breakage.
      
      * Remove payloads from main binary
      
      * install: clean up prior libraries
      
      This removes support for v0.3.6 and older versions (before the tar bundle)
      and ensures we clean up prior libraries before extracting the bundle(s).
      Without this change, runners and dependent libraries could leak when we
      update and lead to subtle runtime errors.
  6. 06 Dec, 2024 1 commit
  7. 05 Dec, 2024 2 commits
  8. 03 Dec, 2024 1 commit
  9. 25 Nov, 2024 1 commit
  10. 22 Nov, 2024 2 commits
    • server: remove out of date anonymous access check (#7785) · 7b5585b9
      Bruce MacDonald authored
      In the past the ollama.com server would return a JWT that contained
      information about the user being authenticated. This was used to return
      different error messages to the user. This is no longer possible since the
      token used to authenticate no longer contains information about the user,
      so this change removes the code that no longer works.
      
      Follow up changes will improve the error messages returned here, but good to
      clean up first.
    • Be quiet when redirecting output (#7360) · d88972ea
      Daniel Hiltgen authored
      This avoids emitting the progress indicators to stderr, and the interactive
      prompts to the output file or pipe.  Running "ollama run model > out.txt"
      now exits immediately, and "echo hello | ollama run model > out.txt"
      produces zero stderr output and a typical response in out.txt.
  11. 14 Nov, 2024 1 commit
  12. 25 Oct, 2024 1 commit
  13. 22 Oct, 2024 1 commit
  14. 18 Oct, 2024 1 commit
  15. 01 Oct, 2024 1 commit
  16. 11 Sep, 2024 2 commits
  17. 05 Sep, 2024 2 commits
  18. 01 Sep, 2024 1 commit
  19. 23 Aug, 2024 1 commit
  20. 21 Aug, 2024 1 commit
  21. 14 Aug, 2024 1 commit
  22. 12 Aug, 2024 1 commit
  23. 02 Aug, 2024 1 commit
  24. 26 Jul, 2024 2 commits
  25. 23 Jul, 2024 1 commit
  26. 22 Jul, 2024 2 commits
    • host · 4f1afd57
      Michael Yang authored
    • Remove no longer supported max vram var · cc269ba0
      Daniel Hiltgen authored
      The OLLAMA_MAX_VRAM env var was a temporary workaround for OOM
      scenarios.  With concurrency support it was no longer wired up, and the
      simplistic value doesn't map to multi-GPU setups.  Users can still set `num_gpu`
      to limit memory usage and avoid OOM if we get our predictions wrong.
  27. 14 Jul, 2024 1 commit
  28. 28 Jun, 2024 2 commits
  29. 27 Jun, 2024 1 commit
  30. 25 Jun, 2024 1 commit
    • cmd: defer stating model info until necessary (#5248) · 2aa91a93
      Blake Mizerany authored
      This commit changes the 'ollama run' command to defer fetching model
      information until it really needs it. That is, when in interactive mode.
      
      It also removes one case where the model information was fetched in
      duplicate: just before calling generateInteractive and then again, first
      thing, inside generateInteractive.
      
      This positively impacts the performance of the command:
      
          ; time ./before run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./before run llama3 'hi'  0.02s user 0.01s system 2% cpu 1.168 total
          ; time ./before run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./before run llama3 'hi'  0.02s user 0.01s system 2% cpu 1.220 total
          ; time ./before run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./before run llama3 'hi'  0.02s user 0.01s system 2% cpu 1.217 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./after run llama3 'hi'  0.02s user 0.01s system 4% cpu 0.652 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./after run llama3 'hi'  0.01s user 0.01s system 5% cpu 0.498 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with or would you like to chat?
      
          ./after run llama3 'hi'  0.01s user 0.01s system 3% cpu 0.479 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./after run llama3 'hi'  0.02s user 0.01s system 5% cpu 0.507 total
          ; time ./after run llama3 'hi'
          Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?
      
          ./after run llama3 'hi'  0.02s user 0.01s system 5% cpu 0.507 total
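
      The deferral pattern above can be sketched as a memoized lazy fetch: the
      expensive show request runs only on first use, so non-interactive runs never
      pay for it. The type and function names are illustrative, not the actual
      ollama code:

```go
package main

import "fmt"

// lazyModelInfo defers an expensive fetch until the info is actually
// needed, and runs it at most once. fetch stands in for the real API call.
type lazyModelInfo struct {
	fetch  func() string
	cached *string
}

func (l *lazyModelInfo) get() string {
	if l.cached == nil {
		v := l.fetch()
		l.cached = &v
	}
	return *l.cached
}

func main() {
	calls := 0
	info := &lazyModelInfo{fetch: func() string {
		calls++
		return "llama3: details would go here"
	}}

	// Non-interactive path: the info is never requested, the fetch
	// never runs, and the command stays fast.
	fmt.Println("fetches before interactive mode:", calls)

	// Interactive path: the first use triggers exactly one fetch.
	_ = info.get()
	_ = info.get()
	fmt.Println("fetches after two gets:", calls)
}
```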
  31. 19 Jun, 2024 1 commit
    • Extend api/show and ollama show to return more model info (#4881) · fedf7163
      royjhan authored
      * API Show Extended
      
      * Initial Draft of Information
      Co-Authored-By: Patrick Devine <pdevine@sonic.net>
      
      * Clean Up
      
      * Descriptive arg error messages and other fixes
      
      * Second Draft of Show with Projectors Included
      
      * Remove Chat Template
      
      * Touches
      
      * Prevent wrapping from files
      
      * Verbose functionality
      
      * Docs
      
      * Address Feedback
      
      * Lint
      
      * Resolve Conflicts
      
      * Function Name
      
      * Tests for api/show model info
      
      * Show Test File
      
      * Add Projector Test
      
      * Clean routes
      
      * Projector Check
      
      * Move Show Test
      
      * Touches
      
      * Doc update
      
      ---------
      Co-authored-by: Patrick Devine <pdevine@sonic.net>
  32. 12 Jun, 2024 1 commit
  33. 04 Jun, 2024 1 commit