OpenDAS / ollama / Commits / abe67acf

Commit abe67acf (unverified), authored Dec 15, 2025 by Daniel Hiltgen, committed by GitHub on Dec 15, 2025
Revert "Enable Ollama engine by default" (#13481)
This reverts commit 56f754f46b87749581f73ef3625314bb0e51bfed.
parent 4ff8a691

Changes: 2 changed files with 3 additions and 3 deletions

- envconfig/config.go  +2 −2
- llm/server.go        +1 −1
envconfig/config.go

```diff
@@ -199,7 +199,7 @@ var (
 	// MultiUserCache optimizes prompt caching for multi-user scenarios
 	MultiUserCache = Bool("OLLAMA_MULTIUSER_CACHE")
 	// Enable the new Ollama engine
-	NewEngine = BoolWithDefault("OLLAMA_NEW_ENGINE")
+	NewEngine = Bool("OLLAMA_NEW_ENGINE")
 	// ContextLength sets the default context length
 	ContextLength = Uint("OLLAMA_CONTEXT_LENGTH", 4096)
 	// Auth enables authentication between the Ollama client and server
@@ -291,7 +291,7 @@ func AsMap() map[string]EnvVar {
 		"OLLAMA_SCHED_SPREAD":    {"OLLAMA_SCHED_SPREAD", SchedSpread(), "Always schedule model across all GPUs"},
 		"OLLAMA_MULTIUSER_CACHE": {"OLLAMA_MULTIUSER_CACHE", MultiUserCache(), "Optimize prompt caching for multi-user scenarios"},
 		"OLLAMA_CONTEXT_LENGTH":  {"OLLAMA_CONTEXT_LENGTH", ContextLength(), "Context length to use unless otherwise specified (default: 4096)"},
-		"OLLAMA_NEW_ENGINE":      {"OLLAMA_NEW_ENGINE", NewEngine(true), "Enable the new Ollama engine"},
+		"OLLAMA_NEW_ENGINE":      {"OLLAMA_NEW_ENGINE", NewEngine(), "Enable the new Ollama engine"},
 		"OLLAMA_REMOTES":         {"OLLAMA_REMOTES", Remotes(), "Allowed hosts for remote models (default \"ollama.com\")"},
 		// Informational
```
llm/server.go

```diff
@@ -143,7 +143,7 @@ func NewLlamaServer(systemInfo ml.SystemInfo, gpus []ml.DeviceInfo, modelPath st
 	var llamaModel *llama.Model
 	var textProcessor model.TextProcessor
 	var err error
-	if envconfig.NewEngine(true) || f.KV().OllamaEngineRequired() {
+	if envconfig.NewEngine() || f.KV().OllamaEngineRequired() {
 		if len(projectors) == 0 {
 			textProcessor, err = model.NewTextProcessor(modelPath)
 		} else {
```