"...git@developer.sourcefind.cn:chenpangpang/open-webui.git" did not exist on "c6f9e74477d7ce43c8f456894ba212ec7ecf08f6"
Unverified commit c869652e authored by Timothy Jaeryang Baek, committed by GitHub

Merge pull request #4322 from open-webui/dev

0.3.12
parents d3146d20 240a3014
@@ -8,36 +8,43 @@ assignees: ''
 # Bug Report
 
-## Description
+## Installation Method
 
-**Bug Summary:**
-[Provide a brief but clear summary of the bug]
-
-**Steps to Reproduce:**
-[Outline the steps to reproduce the bug. Be as detailed as possible.]
-
-**Expected Behavior:**
-[Describe what you expected to happen.]
-
-**Actual Behavior:**
-[Describe what actually happened.]
+[Describe the method you used to install the project, e.g., git clone, Docker, pip, etc.]
 
 ## Environment
 
-- **Open WebUI Version:** [e.g., 0.1.120]
-- **Ollama (if applicable):** [e.g., 0.1.30, 0.1.32-rc1]
+- **Open WebUI Version:** [e.g., v0.3.11]
+- **Ollama (if applicable):** [e.g., v0.2.0, v0.1.32-rc1]
 - **Operating System:** [e.g., Windows 10, macOS Big Sur, Ubuntu 20.04]
 - **Browser (if applicable):** [e.g., Chrome 100.0, Firefox 98.0]
 
-## Reproduction Details
-
 **Confirmation:**
 
 - [ ] I have read and followed all the instructions provided in the README.md.
 - [ ] I am on the latest version of both Open WebUI and Ollama.
 - [ ] I have included the browser console logs.
 - [ ] I have included the Docker container logs.
+- [ ] I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.
+
+## Expected Behavior:
+
+[Describe what you expected to happen.]
+
+## Actual Behavior:
+
+[Describe what actually happened.]
+
+## Description
+
+**Bug Summary:**
+[Provide a brief but clear summary of the bug]
+
+## Reproduction Details
+
+**Steps to Reproduce:**
+[Outline the steps to reproduce the bug. Be as detailed as possible.]
 
 ## Logs and Screenshots
@@ -47,13 +54,9 @@ assignees: ''
 
 **Docker Container Logs:**
 [Include relevant Docker container logs, if applicable]
 
-**Screenshots (if applicable):**
+**Screenshots/Screen Recordings (if applicable):**
 [Attach any relevant screenshots to help illustrate the issue]
 
-## Installation Method
-
-[Describe the method you used to install the project, e.g., manual installation, Docker, package manager, etc.]
-
 ## Additional Information
 
 [Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]
......
@@ -26,6 +26,10 @@ jobs:
             --file docker-compose.a1111-test.yaml \
             up --detach --build
 
+      - name: Delete Docker build cache
+        run: |
+          docker builder prune --all --force
+
       - name: Wait for Ollama to be up
         timeout-minutes: 5
         run: |
@@ -35,10 +39,6 @@ jobs:
           done
           echo "Service is up!"
 
-      - name: Delete Docker build cache
-        run: |
-          docker builder prune --all --force
-
       - name: Preload Ollama model
         run: |
           docker exec ollama ollama pull qwen:0.5b-chat-v1.5-q2_K
......
@@ -5,6 +5,28 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.3.12] - 2024-08-07
+
+### Added
+
+- **🔄 Sidebar Infinite Scroll**: Added an infinite scroll feature in the sidebar for more efficient chat navigation, reducing load times and enhancing user experience.
+- **🚀 Enhanced Markdown Rendering**: Support for rendering all code blocks and making images clickable for preview; codespan styling is also enhanced to improve readability and user interaction.
+- **🔒 Admin Shared Chat Visibility**: Admins no longer have default visibility over shared chats when ENABLE_ADMIN_CHAT_ACCESS is set to false, tightening security and privacy settings for users.
+- **🌍 Language Updates**: Added Malay (Bahasa Malaysia) translation and updated Catalan and Traditional Chinese translations to improve accessibility for more users.
+
+### Fixed
+
+- **📊 Markdown Rendering Issues**: Resolved issues with markdown rendering to ensure consistent and correct display across components.
+- **🛠️ Styling Issues**: Multiple fixes applied to styling throughout the application, improving the overall visual experience and interface consistency.
+- **🗃️ Modal Handling**: Fixed an issue where modals were not closing correctly in various model chat scenarios, enhancing usability and interface reliability.
+- **📄 Missing OpenAI Usage Information**: Resolved issues where usage statistics for OpenAI services were not being correctly displayed, ensuring users have access to crucial data for managing and monitoring their API consumption.
+- **🔧 Non-Streaming Support for Functions Plugin**: Fixed a functionality issue with the Functions plugin where non-streaming operations were not functioning as intended, restoring full capabilities for async and sync integration within the platform.
+- **🔄 Environment Variable Type Correction (COMFYUI_FLUX_FP8_CLIP)**: Corrected the data type of the 'COMFYUI_FLUX_FP8_CLIP' environment variable from string to boolean, ensuring environment settings apply correctly and enhance configuration management.
+
+### Changed
+
+- **🔧 Backend Dependency Updates**: Updated several backend dependencies such as boto3, pypdf, python-pptx, validators, and black, ensuring up-to-date security and performance optimizations.
+
 ## [0.3.11] - 2024-08-02
 
 ### Added
......
-from fastapi import FastAPI, Request, Response, HTTPException, Depends
+from fastapi import FastAPI, Request, HTTPException, Depends
 from fastapi.middleware.cors import CORSMiddleware
-from fastapi.responses import StreamingResponse, JSONResponse, FileResponse
+from fastapi.responses import StreamingResponse, FileResponse
 
 import requests
 import aiohttp
@@ -12,16 +12,12 @@ from pydantic import BaseModel
 from starlette.background import BackgroundTask
 
 from apps.webui.models.models import Models
-from apps.webui.models.users import Users
 from constants import ERROR_MESSAGES
 from utils.utils import (
-    decode_token,
-    get_verified_user,
     get_verified_user,
     get_admin_user,
 )
-from utils.task import prompt_template
-from utils.misc import add_or_update_system_message
+from utils.misc import apply_model_params_to_body, apply_model_system_prompt_to_body
 
 from config import (
     SRC_LOG_LEVELS,
@@ -34,7 +30,7 @@ from config import (
     MODEL_FILTER_LIST,
     AppConfig,
 )
-from typing import List, Optional
+from typing import List, Optional, Literal, overload
 
 import hashlib
@@ -69,8 +65,6 @@ app.state.MODELS = {}
 async def check_url(request: Request, call_next):
     if len(app.state.MODELS) == 0:
         await get_all_models()
-    else:
-        pass
 
     response = await call_next(request)
     return response
@@ -175,7 +169,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
             res = r.json()
             if "error" in res:
                 error_detail = f"External: {res['error']}"
-        except:
+        except Exception:
             error_detail = f"External: {e}"
         raise HTTPException(
@@ -234,64 +228,68 @@ def merge_models_lists(model_lists):
     return merged_list
-async def get_all_models(raw: bool = False):
-    log.info("get_all_models()")
-
-    if (
-        len(app.state.config.OPENAI_API_KEYS) == 1
-        and app.state.config.OPENAI_API_KEYS[0] == ""
-    ) or not app.state.config.ENABLE_OPENAI_API:
-        models = {"data": []}
-    else:
-        # Check if API KEYS length is same than API URLS length
-        if len(app.state.config.OPENAI_API_KEYS) != len(
-            app.state.config.OPENAI_API_BASE_URLS
-        ):
-            # if there are more keys than urls, remove the extra keys
-            if len(app.state.config.OPENAI_API_KEYS) > len(
-                app.state.config.OPENAI_API_BASE_URLS
-            ):
-                app.state.config.OPENAI_API_KEYS = app.state.config.OPENAI_API_KEYS[
-                    : len(app.state.config.OPENAI_API_BASE_URLS)
-                ]
-            # if there are more urls than keys, add empty keys
-            else:
-                app.state.config.OPENAI_API_KEYS += [
-                    ""
-                    for _ in range(
-                        len(app.state.config.OPENAI_API_BASE_URLS)
-                        - len(app.state.config.OPENAI_API_KEYS)
-                    )
-                ]
-
-        tasks = [
-            fetch_url(f"{url}/models", app.state.config.OPENAI_API_KEYS[idx])
-            for idx, url in enumerate(app.state.config.OPENAI_API_BASE_URLS)
-        ]
-
-        responses = await asyncio.gather(*tasks)
-        log.debug(f"get_all_models:responses() {responses}")
-
-        if raw:
-            return responses
-
-        models = {
-            "data": merge_models_lists(
-                list(
-                    map(
-                        lambda response: (
-                            response["data"]
-                            if (response and "data" in response)
-                            else (response if isinstance(response, list) else None)
-                        ),
-                        responses,
-                    )
-                )
-            )
-        }
-
-        log.debug(f"models: {models}")
-
-    app.state.MODELS = {model["id"]: model for model in models["data"]}
-
+def is_openai_api_disabled():
+    api_keys = app.state.config.OPENAI_API_KEYS
+    no_keys = len(api_keys) == 1 and api_keys[0] == ""
+    return no_keys or not app.state.config.ENABLE_OPENAI_API
+
+
+async def get_all_models_raw() -> list:
+    if is_openai_api_disabled():
+        return []
+
+    # Check if API KEYS length is same than API URLS length
+    num_urls = len(app.state.config.OPENAI_API_BASE_URLS)
+    num_keys = len(app.state.config.OPENAI_API_KEYS)
+    if num_keys != num_urls:
+        # if there are more keys than urls, remove the extra keys
+        if num_keys > num_urls:
+            new_keys = app.state.config.OPENAI_API_KEYS[:num_urls]
+            app.state.config.OPENAI_API_KEYS = new_keys
+        # if there are more urls than keys, add empty keys
+        else:
+            app.state.config.OPENAI_API_KEYS += [""] * (num_urls - num_keys)
+
+    tasks = [
+        fetch_url(f"{url}/models", app.state.config.OPENAI_API_KEYS[idx])
+        for idx, url in enumerate(app.state.config.OPENAI_API_BASE_URLS)
+    ]
+
+    responses = await asyncio.gather(*tasks)
+    log.debug(f"get_all_models:responses() {responses}")
+
+    return responses
+
+
+@overload
+async def get_all_models(raw: Literal[True]) -> list: ...
+
+
+@overload
+async def get_all_models(raw: Literal[False] = False) -> dict[str, list]: ...
+
+
+async def get_all_models(raw=False) -> dict[str, list] | list:
+    log.info("get_all_models()")
+    if is_openai_api_disabled():
+        return [] if raw else {"data": []}
+
+    responses = await get_all_models_raw()
+    if raw:
+        return responses
+
+    def extract_data(response):
+        if response and "data" in response:
+            return response["data"]
+        if isinstance(response, list):
+            return response
+        return None
+
+    models = {"data": merge_models_lists(map(extract_data, responses))}
+    log.debug(f"models: {models}")
+
+    app.state.MODELS = {model["id"]: model for model in models["data"]}
+
     return models
@@ -299,7 +297,7 @@ async def get_all_models(raw: bool = False):
 @app.get("/models")
 @app.get("/models/{url_idx}")
 async def get_models(url_idx: Optional[int] = None, user=Depends(get_verified_user)):
-    if url_idx == None:
+    if url_idx is None:
         models = await get_all_models()
         if app.state.config.ENABLE_MODEL_FILTER:
             if user.role == "user":
@@ -340,7 +338,7 @@ async def get_models(url_idx: Optional[int] = None, user=Depends(get_verified_us
             res = r.json()
             if "error" in res:
                 error_detail = f"External: {res['error']}"
-        except:
+        except Exception:
             error_detail = f"External: {e}"
         raise HTTPException(
@@ -358,8 +356,7 @@ async def generate_chat_completion(
 ):
     idx = 0
 
     payload = {**form_data}
-    if "metadata" in payload:
-        del payload["metadata"]
+    payload.pop("metadata")
 
     model_id = form_data.get("model")
     model_info = Models.get_model_by_id(model_id)
@@ -368,70 +365,9 @@ async def generate_chat_completion(
         if model_info.base_model_id:
             payload["model"] = model_info.base_model_id
 
-        model_info.params = model_info.params.model_dump()
-
-        if model_info.params:
-            if (
-                model_info.params.get("temperature", None) is not None
-                and payload.get("temperature") is None
-            ):
-                payload["temperature"] = float(model_info.params.get("temperature"))
-
-            if model_info.params.get("top_p", None) and payload.get("top_p") is None:
-                payload["top_p"] = int(model_info.params.get("top_p", None))
-
-            if (
-                model_info.params.get("max_tokens", None)
-                and payload.get("max_tokens") is None
-            ):
-                payload["max_tokens"] = int(model_info.params.get("max_tokens", None))
-
-            if (
-                model_info.params.get("frequency_penalty", None)
-                and payload.get("frequency_penalty") is None
-            ):
-                payload["frequency_penalty"] = int(
-                    model_info.params.get("frequency_penalty", None)
-                )
-
-            if (
-                model_info.params.get("seed", None) is not None
-                and payload.get("seed") is None
-            ):
-                payload["seed"] = model_info.params.get("seed", None)
-
-            if model_info.params.get("stop", None) and payload.get("stop") is None:
-                payload["stop"] = (
-                    [
-                        bytes(stop, "utf-8").decode("unicode_escape")
-                        for stop in model_info.params["stop"]
-                    ]
-                    if model_info.params.get("stop", None)
-                    else None
-                )
-
-            system = model_info.params.get("system", None)
-            if system:
-                system = prompt_template(
-                    system,
-                    **(
-                        {
-                            "user_name": user.name,
-                            "user_location": (
-                                user.info.get("location") if user.info else None
-                            ),
-                        }
-                        if user
-                        else {}
-                    ),
-                )
-                if payload.get("messages"):
-                    payload["messages"] = add_or_update_system_message(
-                        system, payload["messages"]
-                    )
-
-    else:
-        pass
+        params = model_info.params.model_dump()
+        payload = apply_model_params_to_body(params, payload)
+        payload = apply_model_system_prompt_to_body(params, payload, user)
 
     model = app.state.MODELS[payload.get("model")]
     idx = model["urlIdx"]
@@ -444,13 +380,6 @@ async def generate_chat_completion(
         "role": user.role,
     }
 
-    # Check if the model is "gpt-4-vision-preview" and set "max_tokens" to 4000
-    # This is a workaround until OpenAI fixes the issue with this model
-    if payload.get("model") == "gpt-4-vision-preview":
-        if "max_tokens" not in payload:
-            payload["max_tokens"] = 4000
-        log.debug("Modified payload:", payload)
-
     # Convert the modified body back to JSON
     payload = json.dumps(payload)
@@ -506,7 +435,7 @@ async def generate_chat_completion(
             print(res)
             if "error" in res:
                 error_detail = f"External: {res['error']['message'] if 'message' in res['error'] else res['error']}"
-        except:
+        except Exception:
             error_detail = f"External: {e}"
         raise HTTPException(status_code=r.status if r else 500, detail=error_detail)
     finally:
@@ -569,7 +498,7 @@ async def proxy(path: str, request: Request, user=Depends(get_verified_user)):
             print(res)
             if "error" in res:
                 error_detail = f"External: {res['error']['message'] if 'message' in res['error'] else res['error']}"
-        except:
+        except Exception:
             error_detail = f"External: {e}"
         raise HTTPException(status_code=r.status if r else 500, detail=error_detail)
     finally:
......
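Several hunks in this file replace bare `except:` clauses with `except Exception:`. The distinction matters because a bare `except` also traps `SystemExit` and `KeyboardInterrupt`, which derive from `BaseException` rather than `Exception`. A standalone illustration of the narrowed form, using a hypothetical `parse_error` helper shaped like the handlers above:

```python
import json


def parse_error(raw: str) -> str:
    """Extract an upstream error message, tolerating malformed JSON."""
    try:
        res = json.loads(raw)
        return f"External: {res['error']}"
    except Exception:
        # Unlike a bare `except:`, this still lets SystemExit and
        # KeyboardInterrupt propagate, so Ctrl-C and shutdown work.
        return "External: unknown error"
```

The fallback branch fires for any parsing or lookup failure (invalid JSON, missing `error` key) without swallowing interpreter-control exceptions.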
@@ -44,23 +44,26 @@ async def user_join(sid, data):
     print("user-join", sid, data)
 
     auth = data["auth"] if "auth" in data else None
+    if not auth or "token" not in auth:
+        return
 
-    if auth and "token" in auth:
-        data = decode_token(auth["token"])
-
-        if data is not None and "id" in data:
-            user = Users.get_user_by_id(data["id"])
-
-            if user:
-                SESSION_POOL[sid] = user.id
-                if user.id in USER_POOL:
-                    USER_POOL[user.id].append(sid)
-                else:
-                    USER_POOL[user.id] = [sid]
+    data = decode_token(auth["token"])
+    if data is None or "id" not in data:
+        return
+
+    user = Users.get_user_by_id(data["id"])
+    if not user:
+        return
+
+    SESSION_POOL[sid] = user.id
+    if user.id in USER_POOL:
+        USER_POOL[user.id].append(sid)
+    else:
+        USER_POOL[user.id] = [sid]
 
     print(f"user {user.name}({user.id}) connected with session ID {sid}")
 
     await sio.emit("user-count", {"count": len(set(USER_POOL))})
 
 
 @sio.on("user-count")
......
@@ -22,9 +22,9 @@ from apps.webui.utils import load_function_module_by_id
 from utils.misc import (
     openai_chat_chunk_message_template,
     openai_chat_completion_message_template,
-    add_or_update_system_message,
+    apply_model_params_to_body,
+    apply_model_system_prompt_to_body,
 )
-from utils.task import prompt_template
 
 from config import (
@@ -269,47 +269,6 @@ def get_function_params(function_module, form_data, user, extra_params={}):
     return params
 
-# inplace function: form_data is modified
-def apply_model_params_to_body(params: dict, form_data: dict) -> dict:
-    if not params:
-        return form_data
-
-    mappings = {
-        "temperature": float,
-        "top_p": int,
-        "max_tokens": int,
-        "frequency_penalty": int,
-        "seed": lambda x: x,
-        "stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x],
-    }
-
-    for key, cast_func in mappings.items():
-        if (value := params.get(key)) is not None:
-            form_data[key] = cast_func(value)
-
-    return form_data
-
-
-# inplace function: form_data is modified
-def apply_model_system_prompt_to_body(params: dict, form_data: dict, user) -> dict:
-    system = params.get("system", None)
-    if not system:
-        return form_data
-
-    if user:
-        template_params = {
-            "user_name": user.name,
-            "user_location": user.info.get("location") if user.info else None,
-        }
-    else:
-        template_params = {}
-
-    system = prompt_template(system, **template_params)
-
-    form_data["messages"] = add_or_update_system_message(
-        system, form_data.get("messages", [])
-    )
-
-    return form_data
 
 
 async def generate_function_chat_completion(form_data, user):
     model_id = form_data.get("model")
     model_info = Models.get_model_by_id(model_id)
......
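The `apply_model_params_to_body` helper removed from this file (relocated to `utils.misc` per the import hunks) runs standalone once stripped of its surroundings; it shows the cast-table-plus-walrus pattern that replaced the long chain of per-parameter `if` blocks:

```python
def apply_model_params_to_body(params: dict, form_data: dict) -> dict:
    """In place: copy known sampling params onto the request body, casting each."""
    if not params:
        return form_data

    # One cast function per supported parameter; identity for seed,
    # unicode-unescaping for stop sequences (so "\\n" becomes a newline).
    mappings = {
        "temperature": float,
        "top_p": int,
        "max_tokens": int,
        "frequency_penalty": int,
        "seed": lambda x: x,
        "stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x],
    }

    for key, cast_func in mappings.items():
        # Walrus operator binds and tests in one step; None means "not set".
        if (value := params.get(key)) is not None:
            form_data[key] = cast_func(value)

    return form_data
```

Adding a new tunable parameter becomes a one-line change to `mappings` instead of another multi-line conditional.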
@@ -250,7 +250,7 @@ class ChatTable:
         user_id: str,
         include_archived: bool = False,
         skip: int = 0,
-        limit: int = 50,
+        limit: int = -1,
     ) -> List[ChatTitleIdResponse]:
         with get_db() as db:
             query = db.query(Chat).filter_by(user_id=user_id)
@@ -260,9 +260,10 @@ class ChatTable:
             all_chats = (
                 query.order_by(Chat.updated_at.desc())
                 # limit cols
-                .with_entities(
-                    Chat.id, Chat.title, Chat.updated_at, Chat.created_at
-                ).all()
+                .with_entities(Chat.id, Chat.title, Chat.updated_at, Chat.created_at)
+                .limit(limit)
+                .offset(skip)
+                .all()
             )
             # result has to be destrctured from sqlalchemy `row` and mapped to a dict since the `ChatModel`is not the returned dataclass.
             return [
......
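The new default of `limit=-1` above, combined with unconditionally chaining `.limit()`/`.offset()`, relies on the database treating a negative LIMIT as "no limit". That holds for SQLite (Open WebUI's default backend); a quick check with the stdlib driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chat (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO chat VALUES (?, ?)", [(i, f"t{i}") for i in range(5)])

# SQLite reads LIMIT -1 as "unlimited", so a -1 default returns every row...
all_rows = conn.execute("SELECT id FROM chat ORDER BY id LIMIT -1").fetchall()

# ...while OFFSET still applies, which is exactly what skip/limit pagination needs.
page = conn.execute("SELECT id FROM chat ORDER BY id LIMIT -1 OFFSET 3").fetchall()
```

Other engines may reject a negative LIMIT outright, so this default is backend-specific rather than portable SQL.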
@@ -28,7 +28,7 @@ from apps.webui.models.tags import (
 from constants import ERROR_MESSAGES
 
-from config import SRC_LOG_LEVELS, ENABLE_ADMIN_EXPORT
+from config import SRC_LOG_LEVELS, ENABLE_ADMIN_EXPORT, ENABLE_ADMIN_CHAT_ACCESS
 
 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MODELS"])
@@ -43,9 +43,15 @@ router = APIRouter()
 @router.get("/", response_model=List[ChatTitleIdResponse])
 @router.get("/list", response_model=List[ChatTitleIdResponse])
 async def get_session_user_chat_list(
-    user=Depends(get_verified_user), skip: int = 0, limit: int = 50
+    user=Depends(get_verified_user), page: Optional[int] = None
 ):
-    return Chats.get_chat_title_id_list_by_user_id(user.id, skip=skip, limit=limit)
+    if page is not None:
+        limit = 60
+        skip = (page - 1) * limit
+
+        return Chats.get_chat_title_id_list_by_user_id(user.id, skip=skip, limit=limit)
+    else:
+        return Chats.get_chat_title_id_list_by_user_id(user.id)
 
 
 ############################
@@ -81,6 +87,11 @@ async def get_user_chat_list_by_user_id(
     skip: int = 0,
     limit: int = 50,
 ):
+    if not ENABLE_ADMIN_CHAT_ACCESS:
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
+        )
     return Chats.get_chat_list_by_user_id(
         user_id, include_archived=True, skip=skip, limit=limit
     )
@@ -181,9 +192,9 @@ async def get_shared_chat_by_id(share_id: str, user=Depends(get_verified_user)):
             status_code=status.HTTP_401_UNAUTHORIZED, detail=ERROR_MESSAGES.NOT_FOUND
         )
 
-    if user.role == "user":
+    if user.role == "user" or (user.role == "admin" and not ENABLE_ADMIN_CHAT_ACCESS):
         chat = Chats.get_chat_by_share_id(share_id)
-    elif user.role == "admin":
+    elif user.role == "admin" and ENABLE_ADMIN_CHAT_ACCESS:
         chat = Chats.get_chat_by_id(share_id)
 
     if chat:
......
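The new `page` query parameter in `get_session_user_chat_list` above maps a 1-based page number onto the model layer's skip/limit window with a fixed page size of 60 (this is what feeds the sidebar infinite scroll). The arithmetic in isolation, as a hypothetical helper:

```python
def page_to_window(page: int, page_size: int = 60) -> tuple[int, int]:
    """Map a 1-based page number to a (skip, limit) window.

    Page 1 starts at offset 0; each subsequent page skips one more
    full page of rows.
    """
    limit = page_size
    skip = (page - 1) * limit
    return skip, limit
```

Omitting `page` entirely falls through to the no-argument call, which (with the model's new `limit=-1` default) returns the full list.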
-from fastapi import Depends, FastAPI, HTTPException, status, Request
-from datetime import datetime, timedelta
-from typing import List, Union, Optional
+from fastapi import Depends, HTTPException, status, Request
+from typing import List, Optional
 from fastapi import APIRouter
-from pydantic import BaseModel
-import json
 
-from apps.webui.models.users import Users
 from apps.webui.models.tools import Tools, ToolForm, ToolModel, ToolResponse
 from apps.webui.utils import load_toolkit_module_by_id
@@ -14,7 +10,6 @@ from utils.utils import get_admin_user, get_verified_user
 from utils.tools import get_tools_specs
 from constants import ERROR_MESSAGES
 
-from importlib import util
 import os
 from pathlib import Path
@@ -69,7 +64,7 @@ async def create_new_toolkit(
     form_data.id = form_data.id.lower()
 
     toolkit = Tools.get_tool_by_id(form_data.id)
-    if toolkit == None:
+    if toolkit is None:
         toolkit_path = os.path.join(TOOLS_DIR, f"{form_data.id}.py")
 
         try:
             with open(toolkit_path, "w") as tool_file:
@@ -98,7 +93,7 @@ async def create_new_toolkit(
             print(e)
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,
-                detail=ERROR_MESSAGES.DEFAULT(e),
+                detail=ERROR_MESSAGES.DEFAULT(str(e)),
             )
     else:
         raise HTTPException(
@@ -170,7 +165,7 @@ async def update_toolkit_by_id(
     except Exception as e:
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,
-            detail=ERROR_MESSAGES.DEFAULT(e),
+            detail=ERROR_MESSAGES.DEFAULT(str(e)),
        )
@@ -210,7 +205,7 @@ async def get_toolkit_valves_by_id(id: str, user=Depends(get_admin_user)):
     except Exception as e:
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,
-            detail=ERROR_MESSAGES.DEFAULT(e),
+            detail=ERROR_MESSAGES.DEFAULT(str(e)),
         )
     else:
         raise HTTPException(
@@ -233,7 +228,7 @@ async def get_toolkit_valves_spec_by_id(
     if id in request.app.state.TOOLS:
         toolkit_module = request.app.state.TOOLS[id]
     else:
-        toolkit_module, frontmatter = load_toolkit_module_by_id(id)
+        toolkit_module, _ = load_toolkit_module_by_id(id)
         request.app.state.TOOLS[id] = toolkit_module
 
     if hasattr(toolkit_module, "Valves"):
@@ -261,7 +256,7 @@ async def update_toolkit_valves_by_id(
     if id in request.app.state.TOOLS:
         toolkit_module = request.app.state.TOOLS[id]
     else:
-        toolkit_module, frontmatter = load_toolkit_module_by_id(id)
+        toolkit_module, _ = load_toolkit_module_by_id(id)
         request.app.state.TOOLS[id] = toolkit_module
 
     if hasattr(toolkit_module, "Valves"):
@@ -276,7 +271,7 @@ async def update_toolkit_valves_by_id(
             print(e)
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,
-                detail=ERROR_MESSAGES.DEFAULT(e),
+                detail=ERROR_MESSAGES.DEFAULT(str(e)),
             )
     else:
         raise HTTPException(
@@ -306,7 +301,7 @@ async def get_toolkit_user_valves_by_id(id: str, user=Depends(get_verified_user)
     except Exception as e:
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,
-            detail=ERROR_MESSAGES.DEFAULT(e),
+            detail=ERROR_MESSAGES.DEFAULT(str(e)),
         )
     else:
         raise HTTPException(
@@ -324,7 +319,7 @@ async def get_toolkit_user_valves_spec_by_id(
     if id in request.app.state.TOOLS:
         toolkit_module = request.app.state.TOOLS[id]
     else:
-        toolkit_module, frontmatter = load_toolkit_module_by_id(id)
+        toolkit_module, _ = load_toolkit_module_by_id(id)
         request.app.state.TOOLS[id] = toolkit_module
 
     if hasattr(toolkit_module, "UserValves"):
@@ -348,7 +343,7 @@ async def update_toolkit_user_valves_by_id(
     if id in request.app.state.TOOLS:
         toolkit_module = request.app.state.TOOLS[id]
     else:
-        toolkit_module, frontmatter = load_toolkit_module_by_id(id)
+        toolkit_module, _ = load_toolkit_module_by_id(id)
         request.app.state.TOOLS[id] = toolkit_module
 
     if hasattr(toolkit_module, "UserValves"):
@@ -365,7 +360,7 @@ async def update_toolkit_user_valves_by_id(
             print(e)
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,
-                detail=ERROR_MESSAGES.DEFAULT(e),
+                detail=ERROR_MESSAGES.DEFAULT(str(e)),
             )
     else:
         raise HTTPException(
......
@@ -824,6 +824,10 @@ WEBHOOK_URL = PersistentConfig(
 ENABLE_ADMIN_EXPORT = os.environ.get("ENABLE_ADMIN_EXPORT", "True").lower() == "true"
 
+ENABLE_ADMIN_CHAT_ACCESS = (
+    os.environ.get("ENABLE_ADMIN_CHAT_ACCESS", "True").lower() == "true"
+)
+
 ENABLE_COMMUNITY_SHARING = PersistentConfig(
     "ENABLE_COMMUNITY_SHARING",
     "ui.enable_community_sharing",
@@ -1317,7 +1321,7 @@ COMFYUI_FLUX_WEIGHT_DTYPE = PersistentConfig(
 COMFYUI_FLUX_FP8_CLIP = PersistentConfig(
     "COMFYUI_FLUX_FP8_CLIP",
     "image_generation.comfyui.flux_fp8_clip",
-    os.getenv("COMFYUI_FLUX_FP8_CLIP", ""),
+    os.environ.get("COMFYUI_FLUX_FP8_CLIP", "").lower() == "true",
 )
 
 IMAGES_OPENAI_API_BASE_URL = PersistentConfig(
......
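The two config changes above share one pitfall: `os.environ` values are always strings, so a flag must be compared against `"true"` rather than used directly. The old `COMFYUI_FLUX_FP8_CLIP` default of `""` was a string, and any non-empty value, even `"false"`, would have been truthy. A minimal sketch of the pattern (the helper name and flag names here are hypothetical, not from the codebase):

```python
import os

def env_flag(name: str, default: str = "False") -> bool:
    # Environment variables are strings; compare case-insensitively
    # against "true" instead of relying on truthiness.
    return os.environ.get(name, default).lower() == "true"

os.environ["MY_FEATURE_FLAG"] = "False"       # hypothetical flag name
assert env_flag("MY_FEATURE_FLAG") is False   # bool("False") would be True!
assert env_flag("MISSING_FLAG", "True") is True
```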
@@ -116,6 +116,7 @@ from config import (
     WEBUI_SECRET_KEY,
     WEBUI_SESSION_COOKIE_SAME_SITE,
     WEBUI_SESSION_COOKIE_SECURE,
+    ENABLE_ADMIN_CHAT_ACCESS,
     AppConfig,
 )
@@ -957,7 +958,7 @@ async def get_all_models():
     custom_models = Models.get_all_models()
     for custom_model in custom_models:
-        if custom_model.base_model_id == None:
+        if custom_model.base_model_id is None:
             for model in models:
                 if (
                     custom_model.id == model["id"]
@@ -1662,7 +1663,7 @@ async def get_pipelines_list(user=Depends(get_admin_user)):
     urlIdxs = [
         idx
         for idx, response in enumerate(responses)
-        if response != None and "pipelines" in response
+        if response is not None and "pipelines" in response
     ]

     return {
@@ -1723,7 +1724,7 @@ async def upload_pipeline(
                 res = r.json()
                 if "detail" in res:
                     detail = res["detail"]
-        except:
+        except Exception:
             pass

         raise HTTPException(
@@ -1769,7 +1770,7 @@ async def add_pipeline(form_data: AddPipelineForm, user=Depends(get_admin_user))
                 res = r.json()
                 if "detail" in res:
                     detail = res["detail"]
-        except:
+        except Exception:
             pass

         raise HTTPException(
@@ -1811,7 +1812,7 @@ async def delete_pipeline(form_data: DeletePipelineForm, user=Depends(get_admin_
                 res = r.json()
                 if "detail" in res:
                     detail = res["detail"]
-        except:
+        except Exception:
             pass

         raise HTTPException(
@@ -1844,7 +1845,7 @@ async def get_pipelines(urlIdx: Optional[int] = None, user=Depends(get_admin_use
                 res = r.json()
                 if "detail" in res:
                     detail = res["detail"]
-        except:
+        except Exception:
             pass

         raise HTTPException(
@@ -1859,7 +1860,6 @@ async def get_pipeline_valves(
     pipeline_id: str,
     user=Depends(get_admin_user),
 ):
-    models = await get_all_models()
     r = None
     try:
         url = openai_app.state.config.OPENAI_API_BASE_URLS[urlIdx]
@@ -1898,8 +1898,6 @@ async def get_pipeline_valves_spec(
     pipeline_id: str,
     user=Depends(get_admin_user),
 ):
-    models = await get_all_models()
-
     r = None
     try:
         url = openai_app.state.config.OPENAI_API_BASE_URLS[urlIdx]
@@ -1922,7 +1920,7 @@ async def get_pipeline_valves_spec(
                 res = r.json()
                 if "detail" in res:
                     detail = res["detail"]
-        except:
+        except Exception:
             pass

         raise HTTPException(
@@ -1938,8 +1936,6 @@ async def update_pipeline_valves(
     form_data: dict,
     user=Depends(get_admin_user),
 ):
-    models = await get_all_models()
-
     r = None
     try:
         url = openai_app.state.config.OPENAI_API_BASE_URLS[urlIdx]
@@ -1967,7 +1963,7 @@ async def update_pipeline_valves(
                 res = r.json()
                 if "detail" in res:
                     detail = res["detail"]
-        except:
+        except Exception:
             pass

         raise HTTPException(
@@ -2001,6 +1997,7 @@ async def get_app_config():
             "enable_image_generation": images_app.state.config.ENABLED,
             "enable_community_sharing": webui_app.state.config.ENABLE_COMMUNITY_SHARING,
             "enable_admin_export": ENABLE_ADMIN_EXPORT,
+            "enable_admin_chat_access": ENABLE_ADMIN_CHAT_ACCESS,
         },
         "audio": {
             "tts": {
@@ -2068,7 +2065,7 @@ async def update_webhook_url(form_data: UrlForm, user=Depends(get_admin_user)):
 @app.get("/api/version")
-async def get_app_config():
+async def get_app_version():
     return {
         "version": VERSION,
     }
@@ -2091,7 +2088,7 @@ async def get_app_latest_release_version():
             latest_version = data["tag_name"]
             return {"current": VERSION, "latest": latest_version[1:]}
-    except aiohttp.ClientError as e:
+    except aiohttp.ClientError:
         raise HTTPException(
             status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
             detail=ERROR_MESSAGES.RATE_LIMIT_EXCEEDED,
...
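The `== None` to `is None` changes in this file are more than style: `==` dispatches to `__eq__`, which a class may override (ORM column types, for instance, commonly overload `==` to build query expressions), while `is` always tests object identity. A contrived illustration of why identity checks are the safe choice:

```python
class Sneaky:
    # A custom __eq__ can make equality against None "lie";
    # identity comparison is immune to operator overloading.
    def __eq__(self, other):
        return True

obj = Sneaky()
assert obj == None        # __eq__ says yes, misleadingly
assert obj is not None    # identity says no, correctly
```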
@@ -23,7 +23,7 @@ bcrypt==4.1.3
 pymongo
 redis
-boto3==1.34.110
+boto3==1.34.153
 argon2-cffi==23.1.0
 APScheduler==3.10.4
@@ -41,9 +41,9 @@ langchain-chroma==0.1.2
 fake-useragent==1.5.1
 chromadb==0.5.4
 sentence-transformers==3.0.1
-pypdf==4.2.0
+pypdf==4.3.1
 docx2txt==0.8
-python-pptx==0.6.23
+python-pptx==1.0.0
 unstructured==0.15.0
 Markdown==3.6
 pypandoc==1.13
@@ -51,7 +51,7 @@ pandas==2.2.2
 openpyxl==3.1.5
 pyxlsb==1.0.10
 xlrd==2.0.1
-validators==0.28.1
+validators==0.33.0
 psutil
 opencv-python-headless==4.10.0.84
@@ -65,7 +65,7 @@ faster-whisper==1.0.2
 PyJWT[crypto]==2.8.0
 authlib==1.3.1
-black==24.4.2
+black==24.8.0
 langfuse==2.39.2
 youtube-transcript-api==0.6.2
 pytube==15.0.0
...
@@ -6,6 +6,8 @@ from typing import Optional, List, Tuple
 import uuid
 import time

+from utils.task import prompt_template
+

 def get_last_user_message_item(messages: List[dict]) -> Optional[dict]:
     for message in reversed(messages):
@@ -97,18 +99,60 @@ def openai_chat_message_template(model: str):
     }


-def openai_chat_chunk_message_template(model: str, message: str):
+def openai_chat_chunk_message_template(model: str, message: str) -> dict:
     template = openai_chat_message_template(model)
     template["object"] = "chat.completion.chunk"
     template["choices"][0]["delta"] = {"content": message}
     return template


-def openai_chat_completion_message_template(model: str, message: str):
+def openai_chat_completion_message_template(model: str, message: str) -> dict:
     template = openai_chat_message_template(model)
     template["object"] = "chat.completion"
     template["choices"][0]["message"] = {"content": message, "role": "assistant"}
     template["choices"][0]["finish_reason"] = "stop"
     return template


+# inplace function: form_data is modified
+def apply_model_system_prompt_to_body(params: dict, form_data: dict, user) -> dict:
+    system = params.get("system", None)
+    if not system:
+        return form_data
+
+    if user:
+        template_params = {
+            "user_name": user.name,
+            "user_location": user.info.get("location") if user.info else None,
+        }
+    else:
+        template_params = {}
+    system = prompt_template(system, **template_params)
+    form_data["messages"] = add_or_update_system_message(
+        system, form_data.get("messages", [])
+    )
+    return form_data
+
+
+# inplace function: form_data is modified
+def apply_model_params_to_body(params: dict, form_data: dict) -> dict:
+    if not params:
+        return form_data
+
+    mappings = {
+        "temperature": float,
+        "top_p": int,
+        "max_tokens": int,
+        "frequency_penalty": int,
+        "seed": lambda x: x,
+        "stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x],
+    }
+
+    for key, cast_func in mappings.items():
+        if (value := params.get(key)) is not None:
+            form_data[key] = cast_func(value)
+
+    return form_data


 def get_gravatar_url(email):
...
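The new `apply_model_params_to_body` pairs a dict of cast functions with the walrus operator so that only parameters the caller actually supplied are copied into the request body. A standalone sketch of the same pattern (simplified mapping; the real table lives in the diff above and includes more keys):

```python
def apply_params(params: dict, body: dict) -> dict:
    # Each known key maps to a cast; keys that are absent or None are
    # skipped, so the body is mutated only for supplied parameters.
    mappings = {
        "temperature": float,
        "max_tokens": int,
        "stop": list,
    }
    for key, cast in mappings.items():
        if (value := params.get(key)) is not None:
            body[key] = cast(value)
    return body

body = apply_params({"temperature": "0.7", "seed": 42}, {"stream": True})
# "seed" is not in this simplified mapping, "max_tokens" was never supplied
assert body == {"stream": True, "temperature": 0.7}
```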
@@ -6,7 +6,7 @@ from typing import Optional
 def prompt_template(
-    template: str, user_name: str = None, user_location: str = None
+    template: str, user_name: Optional[str] = None, user_location: Optional[str] = None
 ) -> str:
     # Get the current date
     current_date = datetime.now()
@@ -83,7 +83,6 @@ def title_generation_template(
 def search_query_generation_template(
     template: str, prompt: str, user: Optional[dict] = None
 ) -> str:
-
     def replacement_function(match):
         full_match = match.group(0)
         start_length = match.group(1)
...
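The `prompt_template` signature change follows PEP 484: a parameter that defaults to `None` should be annotated `Optional[str]`, not `str`, since strict type checkers otherwise see a contract the default value immediately violates. A minimal illustration with a hypothetical function:

```python
from typing import Optional

def greet(user_name: Optional[str] = None) -> str:
    # Annotating the None default explicitly keeps strict checkers
    # (mypy --strict, pyright) satisfied and documents intent.
    if user_name is None:
        return "Hello!"
    return f"Hello, {user_name}!"

assert greet() == "Hello!"
assert greet("Ada") == "Hello, Ada!"
```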
 from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
 from fastapi import HTTPException, status, Depends, Request
-from sqlalchemy.orm import Session

 from apps.webui.models.users import Users

-from pydantic import BaseModel
 from typing import Union, Optional

 from constants import ERROR_MESSAGES

 from passlib.context import CryptContext
 from datetime import datetime, timedelta
-import requests
 import jwt
 import uuid
 import logging
@@ -54,7 +51,7 @@ def decode_token(token: str) -> Optional[dict]:
     try:
         decoded = jwt.decode(token, SESSION_SECRET, algorithms=[ALGORITHM])
         return decoded
-    except Exception as e:
+    except Exception:
         return None
@@ -71,7 +68,7 @@ def get_http_authorization_cred(auth_header: str):
     try:
         scheme, credentials = auth_header.split(" ")
         return HTTPAuthorizationCredentials(scheme=scheme, credentials=credentials)
-    except:
+    except Exception:
         raise ValueError(ERROR_MESSAGES.INVALID_TOKEN)
@@ -96,7 +93,7 @@ def get_current_user(
         # auth by jwt token
         data = decode_token(token)
-        if data != None and "id" in data:
+        if data is not None and "id" in data:
             user = Users.get_user_by_id(data["id"])
             if user is None:
                 raise HTTPException(
...
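Replacing bare `except:` with `except Exception:` here is not cosmetic: a bare clause also swallows `KeyboardInterrupt` and `SystemExit`, which derive from `BaseException` rather than `Exception`, so a bare handler can silently eat a Ctrl-C or a shutdown request. A small demonstration:

```python
def run(fn):
    # `except Exception` handles ordinary errors but lets
    # BaseException subclasses (KeyboardInterrupt, SystemExit) propagate.
    try:
        fn()
        return "ok"
    except Exception:
        return "caught"

def boom():
    raise ValueError("bad input")

def interrupt():
    raise KeyboardInterrupt

assert run(boom) == "caught"   # ordinary errors are handled
try:
    run(interrupt)             # Ctrl-C style signals still propagate
    propagated = False
except KeyboardInterrupt:
    propagated = True
assert propagated
```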
@@ -11,10 +11,26 @@ Our primary goal is to ensure the protection and confidentiality of sensitive da
 ## Reporting a Vulnerability

-If you discover a security issue within our system, please notify us immediately via a pull request or contact us on discord.
+We appreciate the community's interest in identifying potential vulnerabilities. However, effective immediately, we will **not** accept low-effort vulnerability reports. To ensure that submissions are constructive and actionable, please adhere to the following guidelines:
+
+1. **No Vague Reports**: Submissions such as "I found a vulnerability" without any details will be treated as spam and will not be accepted.
+2. **In-Depth Understanding Required**: Reports must reflect a clear understanding of the codebase and provide specific details about the vulnerability, including the affected components and potential impacts.
+3. **Proof of Concept (PoC) is Mandatory**: Each submission must include a well-documented proof of concept (PoC) that demonstrates the vulnerability. If confidentiality is a concern, reporters are encouraged to create a private fork of the repository and share access with the maintainers. Reports lacking valid evidence will be disregarded.
+4. **Required Patch Submission**: Along with the PoC, reporters must provide a patch or actionable steps to remediate the identified vulnerability. This helps us evaluate and implement fixes rapidly.
+5. **Streamlined Merging Process**: When vulnerability reports meet the above criteria, we can consider them for immediate merging, similar to regular pull requests. Well-structured and thorough submissions will expedite the process of enhancing our security.
+
+Submissions that do not meet these criteria will be closed, and repeat offenders may face a ban from future submissions. We aim to create a respectful and constructive reporting environment, where high-quality submissions foster better security for everyone.

 ## Product Security

-We regularly audit our internal processes and system's architecture for vulnerabilities using a combination of automated and manual testing techniques.
+We regularly audit our internal processes and system architecture for vulnerabilities using a combination of automated and manual testing techniques. We are also planning to implement SAST and SCA scans in our project soon.
+
+For immediate concerns or detailed reports that meet our guidelines, please create an issue in our [issue tracker](/open-webui/open-webui/issues) or contact us on [Discord](https://discord.gg/5rJgQTnV4s).
+
+---

-We are planning on implementing SAST and SCA scans in our project soon.
+_Last updated on **2024-08-06**._
 {
     "name": "open-webui",
-    "version": "0.3.11",
+    "version": "0.3.12",
     "lockfileVersion": 3,
     "requires": true,
     "packages": {
         "": {
             "name": "open-webui",
-            "version": "0.3.11",
+            "version": "0.3.12",
             "dependencies": {
                 "@codemirror/lang-javascript": "^6.2.2",
                 "@codemirror/lang-python": "^6.1.6",
...
 {
     "name": "open-webui",
-    "version": "0.3.11",
+    "version": "0.3.12",
     "private": true,
     "scripts": {
         "dev": "npm run pyodide:fetch && vite dev --host",
...
@@ -31,14 +31,14 @@ dependencies = [
     "pymongo",
     "redis",
-    "boto3==1.34.110",
+    "boto3==1.34.153",
     "argon2-cffi==23.1.0",
     "APScheduler==3.10.4",

     "openai",
     "anthropic",
-    "google-generativeai==0.5.4",
+    "google-generativeai==0.7.2",
     "tiktoken",

     "langchain==0.2.11",
@@ -48,9 +48,9 @@ dependencies = [
     "fake-useragent==1.5.1",
     "chromadb==0.5.4",
     "sentence-transformers==3.0.1",
-    "pypdf==4.2.0",
+    "pypdf==4.3.1",
    "docx2txt==0.8",
-    "python-pptx==0.6.23",
+    "python-pptx==1.0.0",
     "unstructured==0.15.0",
     "Markdown==3.6",
     "pypandoc==1.13",
@@ -58,7 +58,7 @@ dependencies = [
     "openpyxl==3.1.5",
     "pyxlsb==1.0.10",
     "xlrd==2.0.1",
-    "validators==0.28.1",
+    "validators==0.33.0",
     "psutil",
     "opencv-python-headless==4.10.0.84",
@@ -72,7 +72,7 @@ dependencies = [
     "PyJWT[crypto]==2.8.0",
     "authlib==1.3.1",
-    "black==24.4.2",
+    "black==24.8.0",
     "langfuse==2.39.2",
     "youtube-transcript-api==0.6.2",
     "pytube==15.0.0",
...
@@ -57,13 +57,13 @@ beautifulsoup4==4.12.3
     # via unstructured
 bidict==0.23.1
     # via python-socketio
-black==24.4.2
+black==24.8.0
     # via open-webui
 blinker==1.8.2
     # via flask
-boto3==1.34.110
+boto3==1.34.153
     # via open-webui
-botocore==1.34.110
+botocore==1.34.155
     # via boto3
     # via s3transfer
 build==1.2.1
@@ -179,7 +179,7 @@ frozenlist==1.4.1
 fsspec==2024.3.1
     # via huggingface-hub
     # via torch
-google-ai-generativelanguage==0.6.4
+google-ai-generativelanguage==0.6.6
     # via google-generativeai
 google-api-core==2.19.0
     # via google-ai-generativelanguage
@@ -196,7 +196,7 @@ google-auth==2.29.0
     # via kubernetes
 google-auth-httplib2==0.2.0
     # via google-api-python-client
-google-generativeai==0.5.4
+google-generativeai==0.7.2
     # via open-webui
 googleapis-common-protos==1.63.0
     # via google-api-core
@@ -502,7 +502,7 @@ pypandoc==1.13
 pyparsing==2.4.7
     # via httplib2
     # via oletools
-pypdf==4.2.0
+pypdf==4.3.1
     # via open-webui
     # via unstructured-client
 pypika==0.48.9
@@ -533,7 +533,7 @@ python-magic==0.4.27
 python-multipart==0.0.9
     # via fastapi
     # via open-webui
-python-pptx==0.6.23
+python-pptx==1.0.0
     # via open-webui
 python-socketio==5.11.3
     # via open-webui
@@ -684,6 +684,7 @@ typing-extensions==4.11.0
     # via opentelemetry-sdk
     # via pydantic
     # via pydantic-core
+    # via python-pptx
     # via sqlalchemy
     # via torch
     # via typer
@@ -718,7 +719,7 @@ uvicorn==0.22.0
     # via open-webui
 uvloop==0.19.0
     # via uvicorn
-validators==0.28.1
+validators==0.33.0
     # via open-webui
 watchfiles==0.21.0
     # via uvicorn
...