chenpangpang / open-webui · Commits · adb009f3

Unverified commit adb009f3, authored Apr 24, 2024 by Steven Kreitzer; committed by GitHub on Apr 24, 2024.

    Merge branch 'dev' into buroa/hybrid-search

Parents: c0259aad, 348186c4

Showing 14 changed files with 278 additions and 204 deletions (+278, -204):
- CHANGELOG.md (+14, -3)
- backend/apps/images/main.py (+19, -13)
- backend/apps/litellm/main.py (+23, -8)
- backend/config.py (+15, -0)
- package-lock.json (+2, -2)
- package.json (+1, -1)
- src/lib/apis/images/index.ts (+7, -6)
- src/lib/components/chat/Settings/Audio.svelte (+9, -7)
- src/lib/components/chat/Settings/Images.svelte (+54, -21)
- src/lib/components/chat/Settings/Models.svelte (+119, -129)
- src/lib/i18n/locales/nl-NL/translation.json (+1, -1)
- src/lib/i18n/locales/pl-PL/translation.json (+0, -0, moved)
- src/lib/i18n/locales/ru-RU/translation.json (+13, -13)
- static/manifest.json (+1, -0, new file)
CHANGELOG.md (view file @ adb009f3)

```diff
@@ -5,13 +5,24 @@ All notable changes to this project will be documented in this file.

 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

-## [0.1.121] - 2024-04-22
+## [0.1.122] - 2024-04-24

 ### Added

 - **🛠️ Improved Embedding Model Support**: You can now use any embedding model `sentence_transformers` supports.
 - **🌟 Enhanced RAG Pipeline**: Added `BM25` hybrid searching with reranking model support using `sentence_transformers`.

+## [0.1.121] - 2024-04-24
+
+### Fixed
+
+- **🔧 Translation Issues**: Addressed various translation discrepancies.
+- **🔒 LiteLLM Security Fix**: Updated LiteLLM version to resolve a security vulnerability.
+- **🖥️ HTML Tag Display**: Rectified the issue where the '<br>' tag wasn't displaying correctly.
+- **🔗 WebSocket Connection**: Resolved the failure of WebSocket connection under HTTPS security for ComfyUI server.
+- **📜 FileReader Optimization**: Implemented FileReader initialization per image in multi-file drag & drop to ensure reusability.
+- **🏷️ Tag Display**: Corrected tag display inconsistencies.
+- **📦 Archived Chat Styling**: Fixed styling issues in archived chat.
+- **🔖 Safari Copy Button Bug**: Addressed the bug where the copy button failed to copy links in Safari.
+
 ## [0.1.120] - 2024-04-20

 ### Added
```
backend/apps/images/main.py (view file @ adb009f3)

```diff
@@ -35,8 +35,8 @@ from config import (
     ENABLE_IMAGE_GENERATION,
     AUTOMATIC1111_BASE_URL,
     COMFYUI_BASE_URL,
-    OPENAI_API_BASE_URL,
-    OPENAI_API_KEY,
+    IMAGES_OPENAI_API_BASE_URL,
+    IMAGES_OPENAI_API_KEY,
 )
@@ -58,8 +58,8 @@ app.add_middleware(
 app.state.ENGINE = ""
 app.state.ENABLED = ENABLE_IMAGE_GENERATION

-app.state.OPENAI_API_BASE_URL = OPENAI_API_BASE_URL
-app.state.OPENAI_API_KEY = OPENAI_API_KEY
+app.state.OPENAI_API_BASE_URL = IMAGES_OPENAI_API_BASE_URL
+app.state.OPENAI_API_KEY = IMAGES_OPENAI_API_KEY

 app.state.MODEL = ""
@@ -135,27 +135,33 @@ async def update_engine_url(
     }


-class OpenAIKeyUpdateForm(BaseModel):
+class OpenAIConfigUpdateForm(BaseModel):
+    url: str
     key: str


-@app.get("/key")
-async def get_openai_key(user=Depends(get_admin_user)):
-    return {"OPENAI_API_KEY": app.state.OPENAI_API_KEY}
+@app.get("/openai/config")
+async def get_openai_config(user=Depends(get_admin_user)):
+    return {
+        "OPENAI_API_BASE_URL": app.state.OPENAI_API_BASE_URL,
+        "OPENAI_API_KEY": app.state.OPENAI_API_KEY,
+    }


-@app.post("/key/update")
-async def update_openai_key(
-    form_data: OpenAIKeyUpdateForm, user=Depends(get_admin_user)
+@app.post("/openai/config/update")
+async def update_openai_config(
+    form_data: OpenAIConfigUpdateForm, user=Depends(get_admin_user)
 ):
     if form_data.key == "":
         raise HTTPException(status_code=400, detail=ERROR_MESSAGES.API_KEY_NOT_FOUND)

+    app.state.OPENAI_API_BASE_URL = form_data.url
     app.state.OPENAI_API_KEY = form_data.key

     return {
-        "OPENAI_API_KEY": app.state.OPENAI_API_KEY,
+        "status": True,
+        "OPENAI_API_BASE_URL": app.state.OPENAI_API_BASE_URL,
+        "OPENAI_API_KEY": app.state.OPENAI_API_KEY,
     }
```
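The renamed endpoint now reads and writes both the base URL and the key, instead of the key alone. A minimal framework-free sketch of that state transition (the class name is hypothetical; the real handlers mutate `app.state` in a FastAPI app):

```python
class ImagesOpenAIConfig:
    """In-memory stand-in for app.state in backend/apps/images/main.py (sketch)."""

    def __init__(self, base_url: str, key: str):
        self.OPENAI_API_BASE_URL = base_url
        self.OPENAI_API_KEY = key

    def get_config(self) -> dict:
        # GET /openai/config now returns both fields, not just the key
        return {
            "OPENAI_API_BASE_URL": self.OPENAI_API_BASE_URL,
            "OPENAI_API_KEY": self.OPENAI_API_KEY,
        }

    def update_config(self, url: str, key: str) -> dict:
        # POST /openai/config/update still rejects an empty key, as in the diff
        if key == "":
            raise ValueError("API key not found")
        self.OPENAI_API_BASE_URL = url
        self.OPENAI_API_KEY = key
        return {"status": True, **self.get_config()}
```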
backend/apps/litellm/main.py (view file @ adb009f3)

```diff
+import sys
+
 from fastapi import FastAPI, Depends, HTTPException
 from fastapi.routing import APIRoute
 from fastapi.middleware.cors import CORSMiddleware
@@ -23,7 +25,13 @@ log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["LITELLM"])

-from config import MODEL_FILTER_ENABLED, MODEL_FILTER_LIST, DATA_DIR
+from config import (
+    MODEL_FILTER_ENABLED,
+    MODEL_FILTER_LIST,
+    DATA_DIR,
+    LITELLM_PROXY_PORT,
+    LITELLM_PROXY_HOST,
+)

 from litellm.utils import get_llm_provider
@@ -64,7 +72,7 @@ async def run_background_process(command):
     log.info(f"Executing command: {command}")
     # Execute the command and create a subprocess
     process = await asyncio.create_subprocess_exec(
-        *command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE
+        *command, stdout=subprocess.PIPE, stderr=subprocess.PIPE
     )
     background_process = process
     log.info("Subprocess started successfully.")
@@ -90,9 +98,17 @@ async def run_background_process(command):
 async def start_litellm_background():
     log.info("start_litellm_background")
     # Command to run in the background
-    command = (
-        "litellm --port 14365 --telemetry False --config ./data/litellm/config.yaml"
-    )
+    command = [
+        "litellm",
+        "--port",
+        str(LITELLM_PROXY_PORT),
+        "--host",
+        LITELLM_PROXY_HOST,
+        "--telemetry",
+        "False",
+        "--config",
+        LITELLM_CONFIG_DIR,
+    ]

     await run_background_process(command)
@@ -109,7 +125,6 @@ async def shutdown_litellm_background():
 @app.on_event("startup")
 async def startup_event():
     log.info("startup_event")
     # TODO: Check config.yaml file and create one
     asyncio.create_task(start_litellm_background())
@@ -186,7 +201,7 @@ async def get_models(user=Depends(get_current_user)):
     while not background_process:
         await asyncio.sleep(0.1)
-    url = "http://localhost:14365/v1"
+    url = f"http://localhost:{LITELLM_PROXY_PORT}/v1"

     r = None
     try:
         r = requests.request(method="GET", url=f"{url}/models")
@@ -289,7 +304,7 @@ async def delete_model_from_config(
 async def proxy(path: str, request: Request, user=Depends(get_verified_user)):
     body = await request.body()

-    url = "http://localhost:14365"
+    url = f"http://localhost:{LITELLM_PROXY_PORT}"
     target_url = f"{url}/{path}"
```
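The switch from one shell-style string (split with `.split()`) to an argv list matters because `asyncio.create_subprocess_exec(*command, ...)` passes each element as a separate argument, so a config path containing spaces no longer breaks the invocation. A sketch of the list construction, mirroring the values in the diff (the function name is an illustration, not part of the codebase):

```python
def build_litellm_command(port: int, host: str, config_path: str) -> list:
    """Build the litellm proxy invocation as an argv-style list.

    Unlike a single shell string, each element reaches the subprocess
    verbatim, with no quoting or word-splitting concerns.
    """
    return [
        "litellm",
        "--port", str(port),
        "--host", host,
        "--telemetry", "False",
        "--config", config_path,
    ]
```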
backend/config.py (view file @ adb009f3)

```diff
@@ -499,9 +499,24 @@ AUTOMATIC1111_BASE_URL = os.getenv("AUTOMATIC1111_BASE_URL", "")
 COMFYUI_BASE_URL = os.getenv("COMFYUI_BASE_URL", "")

+IMAGES_OPENAI_API_BASE_URL = os.getenv("IMAGES_OPENAI_API_BASE_URL", OPENAI_API_BASE_URL)
+IMAGES_OPENAI_API_KEY = os.getenv("IMAGES_OPENAI_API_KEY", OPENAI_API_KEY)

 ####################################
 # Audio
 ####################################

 AUDIO_OPENAI_API_BASE_URL = os.getenv("AUDIO_OPENAI_API_BASE_URL", OPENAI_API_BASE_URL)
 AUDIO_OPENAI_API_KEY = os.getenv("AUDIO_OPENAI_API_KEY", OPENAI_API_KEY)

+####################################
+# LiteLLM
+####################################
+
+LITELLM_PROXY_PORT = int(os.getenv("LITELLM_PROXY_PORT", "14365"))
+if LITELLM_PROXY_PORT < 0 or LITELLM_PROXY_PORT > 65535:
+    raise ValueError("Invalid port number for LITELLM_PROXY_PORT")
+
+LITELLM_PROXY_HOST = os.getenv("LITELLM_PROXY_HOST", "127.0.0.1")
```
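The new config reads an environment variable, falls back to port 14365, and fails fast on out-of-range values. A self-contained sketch of that validation, with the environment passed as a parameter so it can be exercised without mutating `os.environ` (the function name is illustrative):

```python
import os


def read_litellm_proxy_port(environ=os.environ, default: str = "14365") -> int:
    """Mirror config.py's LITELLM_PROXY_PORT parsing and range check (sketch)."""
    port = int(environ.get("LITELLM_PROXY_PORT", default))
    # TCP ports are 0..65535; anything else is a configuration error
    if port < 0 or port > 65535:
        raise ValueError("Invalid port number for LITELLM_PROXY_PORT")
    return port
```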
package-lock.json (view file @ adb009f3)

```diff
 {
 	"name": "open-webui",
-	"version": "0.1.120",
+	"version": "0.1.121",
 	"lockfileVersion": 3,
 	"requires": true,
 	"packages": {
 		"": {
 			"name": "open-webui",
-			"version": "0.1.120",
+			"version": "0.1.121",
 			"dependencies": {
 				"@sveltejs/adapter-node": "^1.3.1",
 				"async": "^3.2.5",
```
package.json (view file @ adb009f3)

```diff
 {
 	"name": "open-webui",
-	"version": "0.1.120",
+	"version": "0.1.121",
 	"private": true,
 	"scripts": {
 		"dev": "vite dev --host",
```
src/lib/apis/images/index.ts (view file @ adb009f3)

```diff
@@ -72,10 +72,10 @@ export const updateImageGenerationConfig = async (
 	return res;
 };

-export const getOpenAIKey = async (token: string = '') => {
+export const getOpenAIConfig = async (token: string = '') => {
 	let error = null;

-	const res = await fetch(`${IMAGES_API_BASE_URL}/key`, {
+	const res = await fetch(`${IMAGES_API_BASE_URL}/openai/config`, {
 		method: 'GET',
 		headers: {
 			Accept: 'application/json',
@@ -101,13 +101,13 @@ export const getOpenAIKey = async (token: string = '') => {
 		throw error;
 	}

-	return res.OPENAI_API_KEY;
+	return res;
 };

-export const updateOpenAIKey = async (token: string = '', key: string) => {
+export const updateOpenAIConfig = async (token: string = '', url: string, key: string) => {
 	let error = null;

-	const res = await fetch(`${IMAGES_API_BASE_URL}/key/update`, {
+	const res = await fetch(`${IMAGES_API_BASE_URL}/openai/config/update`, {
 		method: 'POST',
 		headers: {
 			Accept: 'application/json',
@@ -115,6 +115,7 @@ export const updateOpenAIKey = async (token: string = '', key: string) => {
 			...(token && { authorization: `Bearer ${token}` })
 		},
 		body: JSON.stringify({
+			url: url,
 			key: key
 		})
 	})
@@ -136,7 +137,7 @@ export const updateOpenAIKey = async (token: string = '', key: string) => {
 		throw error;
 	}

-	return res.OPENAI_API_KEY;
+	return res;
 };

 export const getImageGenerationEngineUrls = async (token: string = '') => {
```
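Relative to `updateOpenAIKey`, the client now posts an extra `url` field and hits the new `/openai/config/update` path, and both functions return the whole response object rather than the bare key. A Python sketch of the request shape the frontend builds (the helper is hypothetical, for illustration only):

```python
import json


def openai_config_update_request(api_base: str, token: str, url: str, key: str) -> dict:
    """Describe the request updateOpenAIConfig now sends (illustrative sketch)."""
    headers = {"Accept": "application/json", "Content-Type": "application/json"}
    if token:
        # same conditional authorization header as the TypeScript client
        headers["authorization"] = f"Bearer {token}"
    return {
        "method": "POST",
        "url": f"{api_base}/openai/config/update",
        "headers": headers,
        # the `url` field is the addition in this commit
        "body": json.dumps({"url": url, "key": key}),
    }
```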
src/lib/components/chat/Settings/Audio.svelte (view file @ adb009f3)

```diff
@@ -75,14 +75,16 @@
 	};

 	const updateConfigHandler = async () => {
-		const res = await updateAudioConfig(localStorage.token, {
-			url: OpenAIUrl,
-			key: OpenAIKey
-		});
+		if (TTSEngine === 'openai') {
+			const res = await updateAudioConfig(localStorage.token, {
+				url: OpenAIUrl,
+				key: OpenAIKey
+			});

-		if (res) {
-			OpenAIUrl = res.OPENAI_API_BASE_URL;
-			OpenAIKey = res.OPENAI_API_KEY;
-		}
+			if (res) {
+				OpenAIUrl = res.OPENAI_API_BASE_URL;
+				OpenAIKey = res.OPENAI_API_KEY;
+			}
+		}
 	};
```
src/lib/components/chat/Settings/Images.svelte (view file @ adb009f3)

```diff
@@ -15,8 +15,8 @@
 		updateImageSize,
 		getImageSteps,
 		updateImageSteps,
-		getOpenAIKey,
-		updateOpenAIKey
+		getOpenAIConfig,
+		updateOpenAIConfig
 	} from '$lib/apis/images';
 	import { getBackendConfig } from '$lib/apis';

 	const dispatch = createEventDispatcher();
@@ -33,6 +33,7 @@
 	let AUTOMATIC1111_BASE_URL = '';
 	let COMFYUI_BASE_URL = '';
+	let OPENAI_API_BASE_URL = '';
 	let OPENAI_API_KEY = '';

 	let selectedModel = '';
@@ -131,7 +132,10 @@
 			AUTOMATIC1111_BASE_URL = URLS.AUTOMATIC1111_BASE_URL;
 			COMFYUI_BASE_URL = URLS.COMFYUI_BASE_URL;

-			OPENAI_API_KEY = await getOpenAIKey(localStorage.token);
+			const config = await getOpenAIConfig(localStorage.token);
+
+			OPENAI_API_KEY = config.OPENAI_API_KEY;
+			OPENAI_API_BASE_URL = config.OPENAI_API_BASE_URL;

 			imageSize = await getImageSize(localStorage.token);
 			steps = await getImageSteps(localStorage.token);
@@ -149,7 +153,7 @@
 		loading = true;

 		if (imageGenerationEngine === 'openai') {
-			await updateOpenAIKey(localStorage.token, OPENAI_API_KEY);
+			await updateOpenAIConfig(localStorage.token, OPENAI_API_BASE_URL, OPENAI_API_KEY);
 		}

 		await updateDefaultImageGenerationModel(localStorage.token, selectedModel);
@@ -300,13 +304,22 @@
 				</button>
 			</div>
 		{:else if imageGenerationEngine === 'openai'}
-			<div class=" mb-2.5 text-sm font-medium">{$i18n.t('OpenAI API Key')}</div>
-			<div class="flex w-full">
-				<div class="flex-1 mr-2">
-					<input
-						class="w-full rounded-lg py-2 px-4 text-sm dark:text-gray-300 dark:bg-gray-850 outline-none"
-						placeholder={$i18n.t('Enter API Key')}
-						bind:value={OPENAI_API_KEY}
-					/>
-				</div>
-			</div>
+			<div>
+				<div class=" mb-1.5 text-sm font-medium">{$i18n.t('OpenAI API Config')}</div>
+
+				<div class="flex gap-2 mb-1">
+					<input
+						class="w-full rounded-lg py-2 px-4 text-sm dark:text-gray-300 dark:bg-gray-850 outline-none"
+						placeholder={$i18n.t('API Base URL')}
+						bind:value={OPENAI_API_BASE_URL}
+						required
+					/>
+
+					<input
+						class="w-full rounded-lg py-2 px-4 text-sm dark:text-gray-300 dark:bg-gray-850 outline-none"
+						placeholder={$i18n.t('API Key')}
+						bind:value={OPENAI_API_KEY}
+						required
+					/>
+				</div>
+			</div>
@@ -319,19 +332,39 @@
 		<div class=" mb-2.5 text-sm font-medium">{$i18n.t('Set Default Model')}</div>
 		<div class="flex w-full">
 			<div class="flex-1 mr-2">
-				<select
-					class="w-full rounded-lg py-2 px-4 text-sm dark:text-gray-300 dark:bg-gray-850 outline-none"
-					bind:value={selectedModel}
-					placeholder={$i18n.t('Select a model')}
-					required
-				>
-					{#if !selectedModel}
-						<option value="" disabled selected>{$i18n.t('Select a model')}</option>
-					{/if}
-					{#each models ?? [] as model}
-						<option value={model.id} class="bg-gray-100 dark:bg-gray-700">{model.name}</option>
-					{/each}
-				</select>
+				{#if imageGenerationEngine === 'openai' && !OPENAI_API_BASE_URL.includes('https://api.openai.com')}
+					<div class="flex w-full">
+						<div class="flex-1">
+							<input
+								list="model-list"
+								class="w-full rounded-lg py-2 px-4 text-sm dark:text-gray-300 dark:bg-gray-850 outline-none"
+								bind:value={selectedModel}
+								placeholder="Select a model"
+							/>
+
+							<datalist id="model-list">
+								{#each models ?? [] as model}
+									<option value={model.id}>{model.name}</option>
+								{/each}
+							</datalist>
+						</div>
+					</div>
+				{:else}
+					<select
+						class="w-full rounded-lg py-2 px-4 text-sm dark:text-gray-300 dark:bg-gray-850 outline-none"
+						bind:value={selectedModel}
+						placeholder={$i18n.t('Select a model')}
+						required
+					>
+						{#if !selectedModel}
+							<option value="" disabled selected>{$i18n.t('Select a model')}</option>
+						{/if}
+						{#each models ?? [] as model}
+							<option value={model.id} class="bg-gray-100 dark:bg-gray-700">{model.name}</option
+							>
+						{/each}
+					</select>
+				{/if}
 			</div>
 		</div>
 	</div>
```
src/lib/components/chat/Settings/Models.svelte (view file @ adb009f3)

```diff
@@ -13,7 +13,7 @@
 		uploadModel
 	} from '$lib/apis/ollama';
 	import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';
-	import { WEBUI_NAME, models, user } from '$lib/stores';
+	import { WEBUI_NAME, models, MODEL_DOWNLOAD_POOL, user } from '$lib/stores';
 	import { splitStream } from '$lib/utils';
 	import { onMount, getContext } from 'svelte';
 	import { addLiteLLMModel, deleteLiteLLMModel, getLiteLLMModelInfo } from '$lib/apis/litellm';
@@ -50,12 +50,6 @@
 	let showExperimentalOllama = false;
 	let ollamaVersion = '';
 	const MAX_PARALLEL_DOWNLOADS = 3;
-	const modelDownloadQueue = queue(
-		(task: { modelName: string }, cb) =>
-			pullModelHandlerProcessor({ modelName: task.modelName, callback: cb }),
-		MAX_PARALLEL_DOWNLOADS
-	);
-	let modelDownloadStatus: Record<string, any> = {};

 	let modelTransferring = false;
 	let modelTag = '';
@@ -140,7 +134,8 @@
 	const pullModelHandler = async () => {
 		const sanitizedModelTag = modelTag.trim().replace(/^ollama\s+(run|pull)\s+/, '');

-		if (modelDownloadStatus[sanitizedModelTag]) {
+		console.log($MODEL_DOWNLOAD_POOL);
+		if ($MODEL_DOWNLOAD_POOL[sanitizedModelTag]) {
 			toast.error(
 				$i18n.t(`Model '{{modelTag}}' is already in queue for downloading.`, {
 					modelTag: sanitizedModelTag
@@ -148,40 +143,117 @@
 			);
 			return;
 		}
-		if (Object.keys(modelDownloadStatus).length === 3) {
+		if (Object.keys($MODEL_DOWNLOAD_POOL).length === MAX_PARALLEL_DOWNLOADS) {
 			toast.error(
 				$i18n.t('Maximum of 3 models can be downloaded simultaneously. Please try again later.')
 			);
 			return;
 		}

 		modelTransferring = true;

-		modelDownloadQueue.push(
-			{ modelName: sanitizedModelTag },
-			async (data: { modelName: string; success: boolean; error?: Error }) => {
-				const { modelName } = data;
-				// Remove the downloaded model
-				delete modelDownloadStatus[modelName];
-				modelDownloadStatus = { ...modelDownloadStatus };
-
-				if (!data.success) {
-					toast.error(data.error);
-				} else {
-					toast.success(
-						$i18n.t(`Model '{{modelName}}' has been successfully downloaded.`, { modelName })
-					);
-
-					const notification = new Notification($WEBUI_NAME, {
-						body: $i18n.t(`Model '{{modelName}}' has been successfully downloaded.`, { modelName }),
-						icon: `${WEBUI_BASE_URL}/static/favicon.png`
-					});
-				}
-			}
-		);
+		const res = await pullModel(localStorage.token, sanitizedModelTag, '0').catch((error) => {
+			toast.error(error);
+			return null;
+		});
+
+		if (res) {
+			const reader = res.body
+				.pipeThrough(new TextDecoderStream())
+				.pipeThrough(splitStream('\n'))
+				.getReader();
+
+			while (true) {
+				try {
+					const { value, done } = await reader.read();
+					if (done) break;
+
+					let lines = value.split('\n');
+
+					for (const line of lines) {
+						if (line !== '') {
+							let data = JSON.parse(line);
+							console.log(data);
+							if (data.error) {
+								throw data.error;
+							}
+							if (data.detail) {
+								throw data.detail;
+							}
+
+							if (data.id) {
+								MODEL_DOWNLOAD_POOL.set({
+									...$MODEL_DOWNLOAD_POOL,
+									[sanitizedModelTag]: {
+										...$MODEL_DOWNLOAD_POOL[sanitizedModelTag],
+										requestId: data.id,
+										reader,
+										done: false
+									}
+								});
+								console.log(data);
+							}
+
+							if (data.status) {
+								if (data.digest) {
+									let downloadProgress = 0;
+									if (data.completed) {
+										downloadProgress = Math.round((data.completed / data.total) * 1000) / 10;
+									} else {
+										downloadProgress = 100;
+									}
+
+									MODEL_DOWNLOAD_POOL.set({
+										...$MODEL_DOWNLOAD_POOL,
+										[sanitizedModelTag]: {
+											...$MODEL_DOWNLOAD_POOL[sanitizedModelTag],
+											pullProgress: downloadProgress,
+											digest: data.digest
+										}
+									});
+								} else {
+									toast.success(data.status);
+
+									MODEL_DOWNLOAD_POOL.set({
+										...$MODEL_DOWNLOAD_POOL,
+										[sanitizedModelTag]: {
+											...$MODEL_DOWNLOAD_POOL[sanitizedModelTag],
+											done: data.status === 'success'
+										}
+									});
+								}
+							}
+						}
+					}
+				} catch (error) {
+					console.log(error);
+					if (typeof error !== 'string') {
+						error = error.message;
+					}
+					toast.error(error);
+					// opts.callback({ success: false, error, modelName: opts.modelName });
+				}
+			}
+
+			console.log($MODEL_DOWNLOAD_POOL[sanitizedModelTag]);
+
+			if ($MODEL_DOWNLOAD_POOL[sanitizedModelTag].done) {
+				toast.success(
+					$i18n.t(`Model '{{modelName}}' has been successfully downloaded.`, {
+						modelName: sanitizedModelTag
+					})
+				);
+
+				models.set(await getModels(localStorage.token));
+			} else {
+				toast.error('Download canceled');
+			}
+
+			delete $MODEL_DOWNLOAD_POOL[sanitizedModelTag];
+
+			MODEL_DOWNLOAD_POOL.set({
+				...$MODEL_DOWNLOAD_POOL
+			});
+		}

 		modelTag = '';
 		modelTransferring = false;
@@ -352,88 +424,18 @@
 		models.set(await getModels());
 	};

-	const pullModelHandlerProcessor = async (opts: { modelName: string; callback: Function }) => {
-		const res = await pullModel(localStorage.token, opts.modelName, selectedOllamaUrlIdx).catch(
-			(error) => {
-				opts.callback({ success: false, error, modelName: opts.modelName });
-				return null;
-			}
-		);
-
-		if (res) {
-			const reader = res.body
-				.pipeThrough(new TextDecoderStream())
-				.pipeThrough(splitStream('\n'))
-				.getReader();
-
-			while (true) {
-				try {
-					const { value, done } = await reader.read();
-					if (done) break;
-
-					let lines = value.split('\n');
-
-					for (const line of lines) {
-						if (line !== '') {
-							let data = JSON.parse(line);
-							console.log(data);
-							if (data.error) {
-								throw data.error;
-							}
-							if (data.detail) {
-								throw data.detail;
-							}
-							if (data.id) {
-								modelDownloadStatus[opts.modelName] = {
-									...modelDownloadStatus[opts.modelName],
-									requestId: data.id,
-									reader,
-									done: false
-								};
-								console.log(data);
-							}
-
-							if (data.status) {
-								if (data.digest) {
-									let downloadProgress = 0;
-									if (data.completed) {
-										downloadProgress = Math.round((data.completed / data.total) * 1000) / 10;
-									} else {
-										downloadProgress = 100;
-									}
-									modelDownloadStatus[opts.modelName] = {
-										...modelDownloadStatus[opts.modelName],
-										pullProgress: downloadProgress,
-										digest: data.digest
-									};
-								} else {
-									toast.success(data.status);
-									modelDownloadStatus[opts.modelName] = {
-										...modelDownloadStatus[opts.modelName],
-										done: data.status === 'success'
-									};
-								}
-							}
-						}
-					}
-				} catch (error) {
-					console.log(error);
-					if (typeof error !== 'string') {
-						error = error.message;
-					}
-					opts.callback({ success: false, error, modelName: opts.modelName });
-				}
-			}
-
-			console.log(modelDownloadStatus[opts.modelName]);
-
-			if (modelDownloadStatus[opts.modelName].done) {
-				opts.callback({ success: true, modelName: opts.modelName });
-			} else {
-				opts.callback({ success: false, error: 'Download canceled', modelName: opts.modelName });
-			}
-		}
+	const cancelModelPullHandler = async (model: string) => {
+		const { reader, requestId } = $MODEL_DOWNLOAD_POOL[model];
+		if (reader) {
+			await reader.cancel();
+
+			await cancelOllamaRequest(localStorage.token, requestId);
+			delete $MODEL_DOWNLOAD_POOL[model];
+			MODEL_DOWNLOAD_POOL.set({
+				...$MODEL_DOWNLOAD_POOL
+			});
+			await deleteModel(localStorage.token, model);
+			toast.success(`${model} download has been canceled`);
+		}
 	};
@@ -503,18 +505,6 @@
 		ollamaVersion = await getOllamaVersion(localStorage.token).catch((error) => false);
 		liteLLMModelInfo = await getLiteLLMModelInfo(localStorage.token);
 	});
-
-	const cancelModelPullHandler = async (model: string) => {
-		const { reader, requestId } = modelDownloadStatus[model];
-		if (reader) {
-			await reader.cancel();
-			await cancelOllamaRequest(localStorage.token, requestId);
-			delete modelDownloadStatus[model];
-			await deleteModel(localStorage.token, model);
-			toast.success(`${model} download has been canceled`);
-		}
-	};
 </script>

 <div class="flex flex-col h-full justify-between text-sm">
@@ -643,9 +633,9 @@
 							>
 						</div>

-						{#if Object.keys(modelDownloadStatus).length > 0}
-							{#each Object.keys(modelDownloadStatus) as model}
-								{#if 'pullProgress' in modelDownloadStatus[model]}
+						{#if Object.keys($MODEL_DOWNLOAD_POOL).length > 0}
+							{#each Object.keys($MODEL_DOWNLOAD_POOL) as model}
+								{#if 'pullProgress' in $MODEL_DOWNLOAD_POOL[model]}
 									<div class="flex flex-col">
 										<div class="font-medium mb-1">{model}</div>
 										<div class="">
@@ -655,10 +645,10 @@
 												class="dark:bg-gray-600 bg-gray-500 text-xs font-medium text-gray-100 text-center p-0.5 leading-none rounded-full"
 												style="width: {Math.max(
 													15,
-													modelDownloadStatus[model].pullProgress ?? 0
+													$MODEL_DOWNLOAD_POOL[model].pullProgress ?? 0
 												)}%"
 											>
-												{modelDownloadStatus[model].pullProgress ?? 0}%
+												{$MODEL_DOWNLOAD_POOL[model].pullProgress ?? 0}%
 											</div>
 										</div>
@@ -689,9 +679,9 @@
 											</button>
 										</Tooltip>
 									</div>
-									{#if 'digest' in modelDownloadStatus[model]}
+									{#if 'digest' in $MODEL_DOWNLOAD_POOL[model]}
 										<div class="mt-1 text-xs dark:text-gray-500" style="font-size: 0.5rem;">
-											{modelDownloadStatus[model].digest}
+											{$MODEL_DOWNLOAD_POOL[model].digest}
 										</div>
 									{/if}
 								</div>
```
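The refactor replaces a component-local `modelDownloadStatus` record and async queue with a shared `MODEL_DOWNLOAD_POOL` store, while keeping the same two admission checks: reject a tag that is already downloading, and reject new downloads once `MAX_PARALLEL_DOWNLOADS` are in flight. A language-neutral sketch of that admission logic in Python (the function is illustrative; the real checks live inline in `pullModelHandler`):

```python
MAX_PARALLEL_DOWNLOADS = 3


def try_admit_download(pool: dict, model_tag: str) -> bool:
    """Admission check mirroring pullModelHandler's guards on the pool."""
    if model_tag in pool:
        return False  # already in queue for downloading
    if len(pool) >= MAX_PARALLEL_DOWNLOADS:
        return False  # at the parallel-download limit
    # placeholder entry; the real pool stores requestId, reader, and progress
    pool[model_tag] = {"done": False}
    return True
```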
src/lib/i18n/locales/nl-NL/translation.json (view file @ adb009f3)

```diff
@@ -62,7 +62,7 @@
 	"Click here to check other modelfiles.": "Klik hier om andere modelfiles te controleren.",
 	"Click here to select": "Klik hier om te selecteren",
 	"Click here to select documents.": "Klik hier om documenten te selecteren",
-	"click here.": "click here.",
+	"click here.": "klik hier.",
 	"Click on the user role button to change a user's role.": "Klik op de gebruikersrol knop om de rol van een gebruiker te wijzigen.",
 	"Close": "Sluiten",
 	"Collection": "Verzameling",
```
src/lib/i18n/locales/pl-pl/translation.json → src/lib/i18n/locales/pl-PL/translation.json (file moved; view file @ adb009f3)
src/lib/i18n/locales/ru-RU/translation.json (view file @ adb009f3)

```diff
@@ -2,39 +2,39 @@
 	"'s', 'm', 'h', 'd', 'w' or '-1' for no expiration.": "'s', 'm', 'h', 'd', 'w' или '-1' для не истечение.",
 	"(Beta)": "(бета)",
 	"(e.g. `sh webui.sh --api`)": "(например: `sh webui.sh --api`)",
-	"(latest)": "(новый)",
+	"(latest)": "(последний)",
-	"{{modelName}} is thinking...": "{{modelName}} это думает...",
+	"{{modelName}} is thinking...": "{{modelName}} думает...",
 	"{{webUIName}} Backend Required": "{{webUIName}} бэкенд требуемый",
-	"a user": "юзер",
+	"a user": "пользователь",
-	"About": "Относительно",
+	"About": "Об",
 	"Account": "Аккаунт",
 	"Action": "Действие",
 	"Add a model": "Добавьте модель",
-	"Add a model tag name": "Добавьте тэг модели имя",
+	"Add a model tag name": "Добавьте имя тэга модели",
-	"Add a short description about what this modelfile does": "Добавьте краткое описание, что делает этот моделифайл",
+	"Add a short description about what this modelfile does": "Добавьте краткое описание, что делает этот моделфайл",
-	"Add a short title for this prompt": "Добавьте краткое название для этого взаимодействия",
+	"Add a short title for this prompt": "Добавьте краткий заголовок для этого ввода",
 	"Add a tag": "Добавьте тэг",
 	"Add Docs": "Добавьте документы",
 	"Add Files": "Добавьте файлы",
-	"Add message": "Добавьте message",
+	"Add message": "Добавьте сообщение",
 	"add tags": "Добавьте тэгы",
-	"Adjusting these settings will apply changes universally to all users.": "Регулирующий этих настроек приведет к изменениям для все юзеры.",
+	"Adjusting these settings will apply changes universally to all users.": "Регулирующий этих настроек приведет к изменениям для все пользователей.",
 	"admin": "админ",
 	"Admin Panel": "Панель админ",
 	"Admin Settings": "Настройки админ",
 	"Advanced Parameters": "Расширенные Параметры",
 	"all": "всё",
-	"All Users": "Всё юзеры",
+	"All Users": "Все пользователи",
-	"Allow": "Дозволять",
+	"Allow": "Разрешить",
 	"Allow Chat Deletion": "Дозволять удаление чат",
 	"alphanumeric characters and hyphens": "буквенно цифровые символы и дефисы",
-	"Already have an account?": "у вас есть аккаунт уже?",
+	"Already have an account?": "у вас уже есть аккаунт?",
 	"an assistant": "ассистент",
 	"and": "и",
 	"API Base URL": "Базовый адрес API",
 	"API Key": "Ключ API",
 	"API RPM": "API RPM",
-	"are allowed - Activate this command by typing": "разрешено - активируйте эту команду набором",
+	"are allowed - Activate this command by typing": "разрешено - активируйте эту команду вводом",
 	"Are you sure?": "Вы уверены?",
 	"Audio": "Аудио",
 	"Auto-playback response": "Автоматическое воспроизведение ответа",
```
static/manifest.json (new file, mode 100644; view file @ adb009f3)

```diff
+{}
```