Commit 77928ae1 authored by Jun Siang Cheah

Merge branch 'dev' of https://github.com/open-webui/open-webui into feat/web-search-toggle

parents 2660a6e5 9a957670
.github/dependabot.yml (before):

version: 2
updates:
  - package-ecosystem: pip
    directory: "/backend"
    schedule:
      interval: daily
      time: "13:00"
    groups:
      python-packages:
        patterns:
          - "*"

.github/dependabot.yml (after):

version: 2
updates:
  - package-ecosystem: pip
    directory: '/backend'
    schedule:
      interval: weekly
  - package-ecosystem: 'github-actions'
    directory: '/'
    schedule:
      # Check for updates to GitHub Actions every week
      interval: 'weekly'
## Pull Request Checklist

- [ ] **Target branch:** Pull requests should target the `dev` branch.
- [ ] **Description:** Briefly describe the changes in this pull request.
- [ ] **Changelog:** Ensure a changelog entry following the format of [Keep a Changelog](https://keepachangelog.com/) is added at the bottom of the PR description.
- [ ] **Documentation:** Have you updated relevant documentation [Open WebUI Docs](https://github.com/open-webui/docs), or other documentation sources?
......
...@@ -5,6 +5,28 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.1.124] - 2024-05-08
### Added
- **🖼️ Improved Chat Sidebar**: Now conveniently displays time ranges and organizes chats by today, yesterday, and more.
- **📜 Citations in RAG Feature**: Easily track the context fed to the LLM with added citations in the RAG feature.
- **🔒 Auth Disable Option**: Introducing the ability to disable authentication. Set 'WEBUI_AUTH' to False to disable authentication. Note: Only applicable for fresh installations without existing users.
- **📹 Enhanced YouTube RAG Pipeline**: Now supports non-English videos for an enriched experience.
- **🔊 Specify OpenAI TTS Models**: Customize your TTS experience by specifying OpenAI TTS models.
- **🔧 Additional Environment Variables**: Discover more environment variables in our comprehensive documentation at Open WebUI Documentation (https://docs.openwebui.com).
- **🌐 Language Support**: Arabic, Finnish, and Hindi added; Improved support for German, Vietnamese, and Chinese.
### Fixed
- **🛠️ Model Selector Styling**: Addressed styling issues for improved user experience.
- **⚠️ Warning Messages**: Resolved backend warning messages.
### Changed
- **📝 Title Generation**: Limited output to 50 tokens.
- **📦 Helm Charts**: Removed Helm charts, now available in a separate repository (https://github.com/open-webui/helm-charts).
## [0.1.123] - 2024-05-02

### Added
......
...@@ -120,22 +120,66 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open

> [!TIP]
> If you wish to utilize Open WebUI with Ollama included or CUDA acceleration, we recommend utilizing our official images tagged with either `:cuda` or `:ollama`. To enable CUDA, you must install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your Linux/WSL system.
### Installation with Default Configuration

- **If Ollama is on your computer**, use this command:

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

- **If Ollama is on a Different Server**, use this command:

To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:

```bash
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
- **To run Open WebUI with Nvidia GPU support**, use this command:
```bash
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```
### Installation for OpenAI API Usage Only
- **If you're only using OpenAI API**, use this command:
```bash
docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
### Installing Open WebUI with Bundled Ollama Support
This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:
- **With GPU Support**:
Utilize GPU resources by running the following command:
```bash
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```
- **For CPU Only**:
If you're not using a GPU, use this command instead:
```bash
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```
Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.
After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
### Other Installation Methods
We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
### Troubleshooting
Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).
#### Open WebUI: Server Connection Error

If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the `--network=host` flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: `http://localhost:8080`.
...@@ -146,14 +190,6 @@ If you're experiencing connection issues, it’s often due to the WebUI docker c
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
### Other Installation Methods
We offer various installation alternatives, including non-Docker methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
### Troubleshooting
Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).
### Keeping Your Docker Installation Up-to-Date

In case you want to update your local Docker installation to the latest version, you can do it with [Watchtower](https://containrrr.dev/watchtower/):
......
...@@ -43,6 +43,8 @@ from config import (
    DEVICE_TYPE,
    AUDIO_OPENAI_API_BASE_URL,
    AUDIO_OPENAI_API_KEY,
    AUDIO_OPENAI_API_MODEL,
    AUDIO_OPENAI_API_VOICE,
)

log = logging.getLogger(__name__)
...@@ -60,6 +62,8 @@ app.add_middleware(

app.state.OPENAI_API_BASE_URL = AUDIO_OPENAI_API_BASE_URL
app.state.OPENAI_API_KEY = AUDIO_OPENAI_API_KEY
app.state.OPENAI_API_MODEL = AUDIO_OPENAI_API_MODEL
app.state.OPENAI_API_VOICE = AUDIO_OPENAI_API_VOICE
# setting device type for whisper model
whisper_device_type = DEVICE_TYPE if DEVICE_TYPE and DEVICE_TYPE == "cuda" else "cpu"

...@@ -72,6 +76,8 @@ SPEECH_CACHE_DIR.mkdir(parents=True, exist_ok=True)
class OpenAIConfigUpdateForm(BaseModel):
    url: str
    key: str
    model: str
    speaker: str


@app.get("/config")
...@@ -79,6 +85,8 @@ async def get_openai_config(user=Depends(get_admin_user)):
    return {
        "OPENAI_API_BASE_URL": app.state.OPENAI_API_BASE_URL,
        "OPENAI_API_KEY": app.state.OPENAI_API_KEY,
        "OPENAI_API_MODEL": app.state.OPENAI_API_MODEL,
        "OPENAI_API_VOICE": app.state.OPENAI_API_VOICE,
    }
...@@ -91,11 +99,15 @@ async def update_openai_config(
    app.state.OPENAI_API_BASE_URL = form_data.url
    app.state.OPENAI_API_KEY = form_data.key
    app.state.OPENAI_API_MODEL = form_data.model
    app.state.OPENAI_API_VOICE = form_data.speaker

    return {
        "status": True,
        "OPENAI_API_BASE_URL": app.state.OPENAI_API_BASE_URL,
        "OPENAI_API_KEY": app.state.OPENAI_API_KEY,
        "OPENAI_API_MODEL": app.state.OPENAI_API_MODEL,
        "OPENAI_API_VOICE": app.state.OPENAI_API_VOICE,
    }
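The two new fields flow from `OpenAIConfigUpdateForm` straight into `app.state`, so an admin client can pick the TTS model and voice at runtime. A minimal sketch of calling the endpoint (assumptions, not shown in this diff: the audio sub-app is mounted at `/audio`, the update route is `POST /config/update`, and `TOKEN` is an admin JWT):

```python
import requests

BASE_URL = "http://localhost:8080/audio"  # assumed mount point of the audio sub-app
TOKEN = "<admin-jwt>"                     # assumed admin token

# Field names match OpenAIConfigUpdateForm above: url, key, model, speaker.
payload = {
    "url": "https://api.openai.com/v1",
    "key": "<openai-api-key>",
    "model": "tts-1-hd",   # stored as app.state.OPENAI_API_MODEL
    "speaker": "nova",     # stored as app.state.OPENAI_API_VOICE
}

resp = requests.post(
    f"{BASE_URL}/config/update",  # assumed route for update_openai_config
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.json())  # echoes OPENAI_API_MODEL / OPENAI_API_VOICE back
```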
......
...@@ -25,6 +25,7 @@ import uuid
import aiohttp
import asyncio
import logging
import time

from urllib.parse import urlparse
from typing import Optional, List, Union
...@@ -1031,6 +1032,75 @@ async def generate_openai_chat_completion(
    )
@app.get("/v1/models")
@app.get("/v1/models/{url_idx}")
async def get_openai_models(
url_idx: Optional[int] = None,
user=Depends(get_verified_user),
):
if url_idx == None:
models = await get_all_models()
if app.state.ENABLE_MODEL_FILTER:
if user.role == "user":
models["models"] = list(
filter(
lambda model: model["name"] in app.state.MODEL_FILTER_LIST,
models["models"],
)
)
return {
"data": [
{
"id": model["model"],
"object": "model",
"created": int(time.time()),
"owned_by": "openai",
}
for model in models["models"]
],
"object": "list",
}
else:
url = app.state.OLLAMA_BASE_URLS[url_idx]
try:
r = requests.request(method="GET", url=f"{url}/api/tags")
r.raise_for_status()
models = r.json()
return {
"data": [
{
"id": model["model"],
"object": "model",
"created": int(time.time()),
"owned_by": "openai",
}
for model in models["models"]
],
"object": "list",
}
except Exception as e:
log.exception(e)
error_detail = "Open WebUI: Server Connection Error"
if r is not None:
try:
res = r.json()
if "error" in res:
error_detail = f"Ollama: {res['error']}"
except:
error_detail = f"Ollama: {e}"
raise HTTPException(
status_code=r.status_code if r else 500,
detail=error_detail,
)
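The new route mirrors the OpenAI `GET /v1/models` response shape on top of Ollama's `/api/tags`, so OpenAI-style clients can enumerate models without special-casing Ollama. A minimal sketch of querying it (assuming the Ollama sub-app is mounted at `/ollama` and `TOKEN` is a valid user JWT; both are assumptions, not shown in this diff):

```python
import requests

BASE_URL = "http://localhost:8080/ollama"  # assumed mount point of the Ollama sub-app
TOKEN = "<user-jwt>"                       # route requires a verified user

resp = requests.get(
    f"{BASE_URL}/v1/models",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Each entry follows the OpenAI schema: id, object, created, owned_by.
for model in resp.json()["data"]:
    print(model["id"])
```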
class UrlForm(BaseModel):
    url: str
......
...@@ -93,6 +93,7 @@ from config import (
    CHUNK_OVERLAP,
    RAG_TEMPLATE,
    ENABLE_RAG_LOCAL_WEB_FETCH,
    YOUTUBE_LOADER_LANGUAGE,
    RAG_WEB_SEARCH_CONCURRENT_REQUESTS,
)
...@@ -126,6 +127,10 @@ app.state.OPENAI_API_KEY = RAG_OPENAI_API_KEY

app.state.PDF_EXTRACT_IMAGES = PDF_EXTRACT_IMAGES

app.state.YOUTUBE_LOADER_LANGUAGE = YOUTUBE_LOADER_LANGUAGE
app.state.YOUTUBE_LOADER_TRANSLATION = None


def update_embedding_model(
    embedding_model: str,
    update_model: bool = False,
...@@ -320,6 +325,10 @@ async def get_rag_config(user=Depends(get_admin_user)):
            "chunk_overlap": app.state.CHUNK_OVERLAP,
        },
        "web_loader_ssl_verification": app.state.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION,
        "youtube": {
            "language": app.state.YOUTUBE_LOADER_LANGUAGE,
            "translation": app.state.YOUTUBE_LOADER_TRANSLATION,
        },
    }
...@@ -328,10 +337,16 @@ class ChunkParamUpdateForm(BaseModel):
    chunk_overlap: int


class YoutubeLoaderConfig(BaseModel):
    language: List[str]
    translation: Optional[str] = None


class ConfigUpdateForm(BaseModel):
    pdf_extract_images: Optional[bool] = None
    chunk: Optional[ChunkParamUpdateForm] = None
    web_loader_ssl_verification: Optional[bool] = None
    youtube: Optional[YoutubeLoaderConfig] = None


@app.post("/config/update")
...@@ -358,6 +373,18 @@ async def update_rag_config(form_data: ConfigUpdateForm, user=Depends(get_admin_
        else app.state.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION
    )

    app.state.YOUTUBE_LOADER_LANGUAGE = (
        form_data.youtube.language
        if form_data.youtube != None
        else app.state.YOUTUBE_LOADER_LANGUAGE
    )

    app.state.YOUTUBE_LOADER_TRANSLATION = (
        form_data.youtube.translation
        if form_data.youtube != None
        else app.state.YOUTUBE_LOADER_TRANSLATION
    )

    return {
        "status": True,
        "pdf_extract_images": app.state.PDF_EXTRACT_IMAGES,
...@@ -366,6 +393,10 @@ async def update_rag_config(form_data: ConfigUpdateForm, user=Depends(get_admin_
            "chunk_overlap": app.state.CHUNK_OVERLAP,
        },
        "web_loader_ssl_verification": app.state.ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION,
        "youtube": {
            "language": app.state.YOUTUBE_LOADER_LANGUAGE,
            "translation": app.state.YOUTUBE_LOADER_TRANSLATION,
        },
    }
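With `YoutubeLoaderConfig` wired through `/config/update`, the preferred transcript languages (and an optional translation target) are runtime-configurable instead of fixed at `en`. A small sketch of setting them (assuming the RAG sub-app is mounted at `/rag` and `TOKEN` is an admin JWT; both are assumptions):

```python
import requests

BASE_URL = "http://localhost:8080/rag"  # assumed mount point of the RAG sub-app
TOKEN = "<admin-jwt>"

# Prefer German transcripts, fall back to English, and translate to English.
resp = requests.post(
    f"{BASE_URL}/config/update",
    json={"youtube": {"language": ["de", "en"], "translation": "en"}},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.json()["youtube"])  # {'language': ['de', 'en'], 'translation': 'en'}
```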
...@@ -492,7 +523,12 @@ def query_collection_handler(

@app.post("/youtube")
def store_youtube_video(form_data: UrlForm, user=Depends(get_current_user)):
    try:
        loader = YoutubeLoader.from_youtube_url(
            form_data.url,
            add_video_info=True,  # was add_video_info=False on a single-line call
            language=app.state.YOUTUBE_LOADER_LANGUAGE,
            translation=app.state.YOUTUBE_LOADER_TRANSLATION,
        )
        data = loader.load()

        collection_name = form_data.collection_name
...@@ -676,7 +712,7 @@ def store_docs_in_vector_db(docs, collection_name, overwrite: bool = False) -> b
    for batch in create_batches(
        api=CHROMA_CLIENT,
        ids=[str(uuid.uuid4()) for _ in texts],  # was uuid.uuid1()
        metadatas=metadatas,
        embeddings=embeddings,
        documents=texts,
......
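For context on the ID change above: `uuid.uuid1()` derives part of the value from the host's MAC address and a timestamp, while `uuid.uuid4()` is purely random, so stored chunk IDs no longer encode anything about the machine that created them. A quick illustration:

```python
import uuid

# uuid1 embeds node (MAC) and time components; uuid4 is random.
print(uuid.uuid1())  # same node-derived tail repeats on a given host
print(uuid.uuid4())  # fully random on every call
```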
...@@ -33,7 +33,7 @@ from utils.utils import (
from utils.misc import parse_duration, validate_email_format
from utils.webhook import post_webhook
from constants import ERROR_MESSAGES, WEBHOOK_MESSAGES
from config import WEBUI_AUTH, WEBUI_AUTH_TRUSTED_EMAIL_HEADER

router = APIRouter()
...@@ -118,6 +118,22 @@ async def signin(request: Request, form_data: SigninForm):
                ),
            )
        user = Auths.authenticate_user_by_trusted_header(trusted_email)
    elif WEBUI_AUTH == False:
        admin_email = "admin@localhost"
        admin_password = "admin"

        if Users.get_user_by_email(admin_email.lower()):
            user = Auths.authenticate_user(admin_email.lower(), admin_password)
        else:
            if Users.get_num_users() != 0:
                raise HTTPException(400, detail=ERROR_MESSAGES.EXISTING_USERS)

            await signup(
                request,
                SignupForm(email=admin_email, password=admin_password, name="User"),
            )

            user = Auths.authenticate_user(admin_email.lower(), admin_password)
    else:
        user = Auths.authenticate_user(form_data.email.lower(), form_data.password)
...@@ -147,7 +163,7 @@ async def signin(request: Request, form_data: SigninForm):

@router.post("/signup", response_model=SigninResponse)
async def signup(request: Request, form_data: SignupForm):
    if not request.app.state.ENABLE_SIGNUP and WEBUI_AUTH:
        raise HTTPException(
            status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.ACCESS_PROHIBITED
        )
......
...@@ -76,8 +76,11 @@ WEBUI_NAME = os.environ.get("WEBUI_NAME", "Open WebUI")
if WEBUI_NAME != "Open WebUI":
    WEBUI_NAME += " (Open WebUI)"

WEBUI_URL = os.environ.get("WEBUI_URL", "http://localhost:3000")

WEBUI_FAVICON_URL = "https://openwebui.com/favicon.png"

####################################
# ENV (dev,test,prod)
####################################
...@@ -151,6 +154,23 @@ for version in soup.find_all("h2"):
CHANGELOG = changelog_json
####################################
# WEBUI_VERSION
####################################
WEBUI_VERSION = os.environ.get("WEBUI_VERSION", "v1.0.0-alpha.100")
####################################
# WEBUI_AUTH (Required for security)
####################################
WEBUI_AUTH = os.environ.get("WEBUI_AUTH", "True").lower() == "true"
WEBUI_AUTH_TRUSTED_EMAIL_HEADER = os.environ.get(
"WEBUI_AUTH_TRUSTED_EMAIL_HEADER", None
)
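Note how the flag is parsed: only the literal string `"true"` (any casing) keeps authentication on; every other value disables it, and `ENABLE_SIGNUP` below is forced off with it. A quick illustration of the parsing rule, using hypothetical values and the same expression as above:

```python
import os

for raw in ["True", "true", "False", "0", "yes"]:
    os.environ["WEBUI_AUTH"] = raw
    webui_auth = os.environ.get("WEBUI_AUTH", "True").lower() == "true"
    print(f"WEBUI_AUTH={raw!r} -> auth enabled: {webui_auth}")
# Only "True"/"true" keep auth on; "0" and "yes" silently disable it too.
```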
####################################
# DATA/FRONTEND BUILD DIR
####################################
...@@ -343,7 +363,11 @@ OPENAI_API_BASE_URL = "https://api.openai.com/v1"
# WEBUI
####################################

ENABLE_SIGNUP = (
    False
    if WEBUI_AUTH == False
    else os.environ.get("ENABLE_SIGNUP", "True").lower() == "true"
)

DEFAULT_MODELS = os.environ.get("DEFAULT_MODELS", None)
...@@ -400,21 +424,6 @@ WEBHOOK_URL = os.environ.get("WEBHOOK_URL", "")
ENABLE_ADMIN_EXPORT = os.environ.get("ENABLE_ADMIN_EXPORT", "True").lower() == "true"
####################################
# WEBUI_VERSION
####################################
WEBUI_VERSION = os.environ.get("WEBUI_VERSION", "v1.0.0-alpha.100")
####################################
# WEBUI_AUTH (Required for security)
####################################
WEBUI_AUTH = True
WEBUI_AUTH_TRUSTED_EMAIL_HEADER = os.environ.get(
"WEBUI_AUTH_TRUSTED_EMAIL_HEADER", None
)
####################################
# WEBUI_SECRET_KEY
####################################
...@@ -490,13 +499,6 @@ RAG_RERANKING_MODEL_TRUST_REMOTE_CODE = (
    os.environ.get("RAG_RERANKING_MODEL_TRUST_REMOTE_CODE", "").lower() == "true"
)
# device type embedding models - "cpu" (default), "cuda" (nvidia gpu required) or "mps" (apple silicon) - choosing this right can lead to better performance
USE_CUDA = os.environ.get("USE_CUDA_DOCKER", "false")
if USE_CUDA.lower() == "true":
    DEVICE_TYPE = "cuda"
else:
    DEVICE_TYPE = "cpu"
if CHROMA_HTTP_HOST != "":
    CHROMA_CLIENT = chromadb.HttpClient(
...@@ -516,6 +518,16 @@ else:
        database=CHROMA_DATABASE,
    )
# device type embedding models - "cpu" (default), "cuda" (nvidia gpu required) or "mps" (apple silicon) - choosing this right can lead to better performance
USE_CUDA = os.environ.get("USE_CUDA_DOCKER", "false")
if USE_CUDA.lower() == "true":
    DEVICE_TYPE = "cuda"
else:
    DEVICE_TYPE = "cpu"
CHUNK_SIZE = int(os.environ.get("CHUNK_SIZE", "1500"))
CHUNK_OVERLAP = int(os.environ.get("CHUNK_OVERLAP", "100"))
...@@ -542,6 +554,8 @@ ENABLE_RAG_LOCAL_WEB_FETCH = (
    os.getenv("ENABLE_RAG_LOCAL_WEB_FETCH", "False").lower() == "true"
)
YOUTUBE_LOADER_LANGUAGE = os.getenv("YOUTUBE_LOADER_LANGUAGE", "en").split(",")
SEARXNG_QUERY_URL = os.getenv("SEARXNG_QUERY_URL", "")
GOOGLE_PSE_API_KEY = os.getenv("GOOGLE_PSE_API_KEY", "")
GOOGLE_PSE_ENGINE_ID = os.getenv("GOOGLE_PSE_ENGINE_ID", "")
...@@ -595,6 +609,8 @@ IMAGE_GENERATION_MODEL = os.getenv("IMAGE_GENERATION_MODEL", "")
AUDIO_OPENAI_API_BASE_URL = os.getenv("AUDIO_OPENAI_API_BASE_URL", OPENAI_API_BASE_URL)
AUDIO_OPENAI_API_KEY = os.getenv("AUDIO_OPENAI_API_KEY", OPENAI_API_KEY)
AUDIO_OPENAI_API_MODEL = os.getenv("AUDIO_OPENAI_API_MODEL", "tts-1")
AUDIO_OPENAI_API_VOICE = os.getenv("AUDIO_OPENAI_API_VOICE", "alloy")
####################################
# LiteLLM
......
...@@ -42,6 +42,9 @@ class ERROR_MESSAGES(str, Enum):
        "The password provided is incorrect. Please check for typos and try again."
    )
    INVALID_TRUSTED_HEADER = "Your provider has not provided a trusted header. Please contact your administrator for assistance."
    EXISTING_USERS = "You can't turn off authentication because there are existing users. If you want to disable WEBUI_AUTH, make sure your web interface doesn't have any existing users and is a fresh installation."
    UNAUTHORIZED = "401 Unauthorized"
    ACCESS_PROHIBITED = "You do not have permission to access this resource. Please contact your administrator for assistance."
    ACTION_PROHIBITED = (
......
...@@ -15,7 +15,7 @@ from fastapi.middleware.wsgi import WSGIMiddleware
from fastapi.middleware.cors import CORSMiddleware
from starlette.exceptions import HTTPException as StarletteHTTPException
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import StreamingResponse, Response

from apps.ollama.main import app as ollama_app
from apps.openai.main import app as openai_app
...@@ -43,6 +43,8 @@ from apps.rag.utils import rag_messages
from config import (
    CONFIG_DATA,
    WEBUI_NAME,
    WEBUI_URL,
    WEBUI_AUTH,
    ENV,
    VERSION,
    CHANGELOG,
...@@ -239,6 +241,7 @@ async def get_app_config():
        "status": True,
        "name": WEBUI_NAME,
        "version": VERSION,
        "auth": WEBUI_AUTH,
        "default_locale": default_locale,
        "images": images_app.state.ENABLED,
        "default_models": webui_app.state.DEFAULT_MODELS,
...@@ -350,6 +353,21 @@ async def get_manifest_json():
    }
@app.get("/opensearch.xml")
async def get_opensearch_xml():
xml_content = rf"""
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:moz="http://www.mozilla.org/2006/browser/search/">
<ShortName>{WEBUI_NAME}</ShortName>
<Description>Search {WEBUI_NAME}</Description>
<InputEncoding>UTF-8</InputEncoding>
<Image width="16" height="16" type="image/x-icon">{WEBUI_URL}/favicon.png</Image>
<Url type="text/html" method="get" template="{WEBUI_URL}/?q={"{searchTerms}"}"/>
<moz:SearchForm>{WEBUI_URL}</moz:SearchForm>
</OpenSearchDescription>
"""
return Response(content=xml_content, media_type="application/xml")
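This descriptor is what lets browsers register Open WebUI as a search engine, so `WEBUI_URL` needs to be an address the browser can actually reach. A small sketch of sanity-checking the output (assuming the server is reachable at http://localhost:8080):

```python
import requests
import xml.etree.ElementTree as ET

resp = requests.get("http://localhost:8080/opensearch.xml")
resp.raise_for_status()

root = ET.fromstring(resp.text)
ns = {"os": "http://a9.com/-/spec/opensearch/1.1/"}
print(root.find("os:ShortName", ns).text)          # WEBUI_NAME
print(root.find("os:Url", ns).attrib["template"])  # "{WEBUI_URL}/?q={searchTerms}"
```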
app.mount("/static", StaticFiles(directory=STATIC_DIR), name="static") app.mount("/static", StaticFiles(directory=STATIC_DIR), name="static")
app.mount("/cache", StaticFiles(directory=CACHE_DIR), name="cache") app.mount("/cache", StaticFiles(directory=CACHE_DIR), name="cache")
......
...@@ -9,7 +9,6 @@ Flask-Cors==4.0.0
python-socketio==5.11.2
python-jose==3.3.0
passlib[bcrypt]==1.7.4
uuid==1.30
requests==2.31.0
aiohttp==3.9.5
...@@ -19,7 +18,6 @@ psycopg2-binary==2.9.9
PyMySQL==1.1.0
bcrypt==4.1.2
litellm==1.35.28
litellm[proxy]==1.35.28
boto3==1.34.95
...@@ -54,9 +52,9 @@ rank-bm25==0.2.2
faster-whisper==1.0.1
PyJWT==2.8.0
PyJWT[crypto]==2.8.0
black==24.4.2
langfuse==2.27.3
youtube-transcript-api==0.6.2
pytube
\ No newline at end of file
...@@ -8,7 +8,7 @@ KEY_FILE=.webui_secret_key
PORT="${PORT:-8080}"
HOST="${HOST:-0.0.0.0}"
if test "$WEBUI_SECRET_KEY $WEBUI_JWT_SECRET_KEY" = " "; then
  echo "Loading WEBUI_SECRET_KEY from file, not provided as an environment variable." # was: "No WEBUI_SECRET_KEY provided"
  if ! [ -e "$KEY_FILE" ]; then
    echo "Generating WEBUI_SECRET_KEY"
......
...@@ -13,7 +13,7 @@ SET "WEBUI_JWT_SECRET_KEY=%WEBUI_JWT_SECRET_KEY%"
:: Check if WEBUI_SECRET_KEY and WEBUI_JWT_SECRET_KEY are not set
IF "%WEBUI_SECRET_KEY%%WEBUI_JWT_SECRET_KEY%" == " " (
    echo Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
    IF NOT EXIST "%KEY_FILE%" (
        echo Generating WEBUI_SECRET_KEY
......
...@@ -38,10 +38,11 @@ def calculate_sha256_string(string):

def validate_email_format(email: str) -> bool:
    if email.endswith("@localhost"):
        return True

    return bool(re.match(r"[^@]+@[^@]+\.[^@]+", email))
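The `@localhost` carve-out keeps the auto-created `admin@localhost` account (used when `WEBUI_AUTH` is disabled) from tripping email validation. A short usage sketch, assuming the function is importable as `utils.misc.validate_email_format`:

```python
from utils.misc import validate_email_format  # assumed import path

assert validate_email_format("admin@localhost")    # special-cased, now accepted
assert validate_email_format("user@example.com")   # normal addresses still pass
assert not validate_email_format("not-an-email")   # still rejected by the regex
```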
def sanitize_filename(file_name):
    # Convert to lowercase
......
apiVersion: v2
name: open-webui
version: 1.0.0
appVersion: "latest"
home: https://www.openwebui.com/
icon: https://raw.githubusercontent.com/open-webui/open-webui/main/static/favicon.png
description: "Open WebUI: A User-Friendly Web Interface for Chat Interactions 👋"
keywords:
- llm
- chat
- web-ui
sources:
- https://github.com/open-webui/open-webui/tree/main/kubernetes/helm
- https://hub.docker.com/r/ollama/ollama
- https://github.com/open-webui/open-webui/pkgs/container/open-webui
annotations:
  licenses: MIT
# Helm Charts
Open WebUI Helm Charts are now hosted in a separate repo, which can be found here: https://github.com/open-webui/helm-charts
The charts are released at https://helm.openwebui.com.
\ No newline at end of file
{{- define "open-webui.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end -}}
{{- define "ollama.name" -}}
ollama
{{- end -}}
{{- define "ollama.url" -}}
{{- if .Values.ollama.externalHost }}
{{- printf .Values.ollama.externalHost }}
{{- else }}
{{- printf "http://%s.%s.svc.cluster.local:%d" (include "ollama.name" .) (.Release.Namespace) (.Values.ollama.service.port | int) }}
{{- end }}
{{- end }}
{{- define "chart.name" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "base.labels" -}}
helm.sh/chart: {{ include "chart.name" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{- define "base.selectorLabels" -}}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- define "open-webui.selectorLabels" -}}
{{ include "base.selectorLabels" . }}
app.kubernetes.io/component: {{ .Chart.Name }}
{{- end }}
{{- define "open-webui.labels" -}}
{{ include "base.labels" . }}
{{ include "open-webui.selectorLabels" . }}
{{- end }}
{{- define "ollama.selectorLabels" -}}
{{ include "base.selectorLabels" . }}
app.kubernetes.io/component: {{ include "ollama.name" . }}
{{- end }}
{{- define "ollama.labels" -}}
{{ include "base.labels" . }}
{{ include "ollama.selectorLabels" . }}
{{- end }}
{{- if not .Values.ollama.externalHost }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "ollama.name" . }}
  labels:
    {{- include "ollama.labels" . | nindent 4 }}
  {{- with .Values.ollama.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  selector:
    {{- include "ollama.selectorLabels" . | nindent 4 }}
  {{- with .Values.ollama.service }}
  type: {{ .type }}
  ports:
    - protocol: TCP
      name: http
      port: {{ .port }}
      targetPort: http
  {{- end }}
{{- end }}