    treat `ollama run model < file` as entire prompt, not prompt-per-line (#1126) · 42386204
    Jeffrey Morgan authored
    
    
    Previously, `ollama run` treated a non-terminal stdin (such as `ollama run model < file`) as containing one prompt per line. To run inference on a multi-line prompt, the only non-API workaround was to run `ollama run` interactively and wrap the prompt in `"""..."""`.
    
    Now, `ollama run` treats a non-terminal stdin as containing a single prompt. For example, if `myprompt.txt` is a multi-line file, then `ollama run model < myprompt.txt` would treat `myprompt.txt`'s entire contents as the prompt.
    Co-authored-by: Quinn Slack <quinn@slack.org>