OpenDAS / ollama / Commits / f804e8a4
Unverified commit f804e8a4, authored Aug 18, 2025 by Michael Yang; committed by GitHub on Aug 18, 2025.

disable output_all (#11959)

parent 9cfbffaf
Changes: 3 changed files with 25 additions and 3 deletions (+25 −3)
llama/llama.cpp/src/llama-context.cpp (+1 −2)
llama/patches/0019-Enable-CUDA-Graphs-for-gemma3n.patch (+1 −1)
llama/patches/0023-decode-disable-output_all.patch (+23 −0)
llama/llama.cpp/src/llama-context.cpp
...
@@ -962,8 +962,7 @@ int llama_context::decode(const llama_batch & batch_inp) {
     const int64_t n_vocab = vocab.n_tokens();
     const int64_t n_embd  = hparams.n_embd;
 
-    // when computing embeddings, all tokens are output
-    const bool output_all = cparams.embeddings;
+    const bool output_all = false;
 
     if (!balloc->init(batch_inp, vocab, memory.get(), n_embd, cparams.kv_unified ? LLAMA_MAX_SEQ : cparams.n_seq_max, output_all)) {
         LLAMA_LOG_ERROR("%s: failed to initialize batch\n", __func__);
...
llama/patches/0019-Enable-CUDA-Graphs-for-gemma3n.patch
...
@@ -13,7 +13,7 @@ checks.
  1 file changed, 18 insertions(+)
 
  diff --git a/ggml/src/ggml-cuda/ggml-cuda.cu b/ggml/src/ggml-cuda/ggml-cuda.cu
-index 57eae461..9db0c8b5 100644
+index 57eae461..c7f9dc3a 100644
  --- a/ggml/src/ggml-cuda/ggml-cuda.cu
  +++ b/ggml/src/ggml-cuda/ggml-cuda.cu
  @@ -2671,12 +2671,24 @@ static bool check_node_graph_compatibility_and_refresh_copy_ops(ggml_backend_cud
...
llama/patches/0023-decode-disable-output_all.patch (new file, mode 100644)
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Michael Yang <git@mxy.ng>
Date: Mon, 18 Aug 2025 16:58:39 -0700
Subject: [PATCH] decode: disable output_all
---
src/llama-context.cpp | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/llama-context.cpp b/src/llama-context.cpp
index 26a5cf9c..6ece5263 100644
--- a/src/llama-context.cpp
+++ b/src/llama-context.cpp
@@ -962,8 +962,7 @@ int llama_context::decode(const llama_batch & batch_inp) {
     const int64_t n_vocab = vocab.n_tokens();
     const int64_t n_embd  = hparams.n_embd;
 
-    // when computing embeddings, all tokens are output
-    const bool output_all = cparams.embeddings;
+    const bool output_all = false;
 
     if (!balloc->init(batch_inp, vocab, memory.get(), n_embd, cparams.kv_unified ? LLAMA_MAX_SEQ : cparams.n_seq_max, output_all)) {
         LLAMA_LOG_ERROR("%s: failed to initialize batch\n", __func__);