Unverified commit d4f783cb authored by Timothy Jaeryang Baek, committed by GitHub

Merge branch 'main' into dev

parents d7ae3b1f b69113ef
...@@ -20,9 +20,10 @@ This configuration allows Ollama to accept connections from any source.

Ensure that the Ollama URL is correctly formatted in the application settings. Follow these steps:

- If your Ollama instance runs on a different host than the Web UI, make sure the Ollama host address is provided when running the Web UI container via the `OLLAMA_API_BASE_URL` environment variable. [(e.g. OLLAMA_API_BASE_URL=http://192.168.1.1:11434/api)](https://github.com/ollama-webui/ollama-webui#accessing-external-ollama-on-a-different-server)
- Go to "Settings" within the Ollama WebUI.
- Navigate to the "General" section.
- Verify that the Ollama Server URL is set to: `/ollama/api`.

It is crucial to include the `/api` at the end of the URL to ensure that the Ollama Web UI can communicate with the server.
......
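The `/api` suffix requirement described above can also be enforced programmatically. A minimal sketch — the helper name `normalize_ollama_url` is hypothetical, not part of the project:

```python
def normalize_ollama_url(url: str) -> str:
    """Ensure a user-supplied Ollama base URL ends with /api."""
    url = url.rstrip("/")
    if not url.endswith("/api"):
        url += "/api"
    return url


# A bare host:port gets the required /api suffix appended.
print(normalize_ollama_url("http://192.168.1.1:11434"))
```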
...@@ -59,6 +59,7 @@ def proxy(path):
    else:
        pass

    try:
        # Make a request to the target server
        target_response = requests.request(
            method=request.method,
...@@ -68,6 +69,8 @@ def proxy(path):
            stream=True,  # Enable streaming for server-sent events
        )
        target_response.raise_for_status()

        # Proxy the target server's response to the client
        def generate():
            for chunk in target_response.iter_content(chunk_size=8192):
...@@ -80,6 +83,8 @@ def proxy(path):
            response.headers[key] = value

        return response
    except Exception as e:
        return jsonify({"detail": "Server Connection Error", "message": str(e)}), 400


if __name__ == "__main__":
......
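The effect of the added `try`/`except` and `target_response.raise_for_status()` call can be sketched in isolation: any connection failure or HTTP error status is turned into a JSON error body with a 400 status instead of an unhandled exception. This standalone snippet mimics only the new error path, substituting plain `json.dumps` for Flask's `jsonify`; the names here are illustrative, not the project's:

```python
import json


def error_response(exc: Exception):
    # Mirrors the diff's except branch: report the failure as a JSON
    # body paired with a 400 status code.
    return json.dumps({"detail": "Server Connection Error", "message": str(exc)}), 400


body, status = error_response(ConnectionError("connection refused"))
print(status)  # 400
```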