ox696c / ktransformers

Commit f74c2d1d (unverified)
Authored Feb 15, 2025 by MuWinds; committed by GitHub on Feb 15, 2025

    Fix deprecated `torch.backends.cuda.sdp_kernel()` by switching to `torch.nn.attention.sdpa_kernel()`.
Parent: 1548c992
Changes: 1 changed file, with 2 additions and 1 deletion (+2 -1)
ktransformers/server/backend/interfaces/transformers.py

@@ -13,6 +13,7 @@ from transformers import (
 from ktransformers.server.config.config import Config
 from ktransformers.server.schemas.base import ObjectID
 from ktransformers.server.utils.multi_timer import Profiler
+from torch.nn.attention import SDPBackend
 import torch
 import sys, os
 from ..base import ThreadContext, BackendInterfaceBase
@@ -292,7 +293,7 @@ class TransformersInterface(BackendInterfaceBase):
     def generate(self):
         self.profiler.set_counter("decode", 0)
         for _ in range(1, self.args.max_new_tokens):
-            with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_mem_efficient=False, enable_math=True):
+            with torch.nn.attention.sdpa_kernel(backends=[SDPBackend.FLASH_ATTENTION, SDPBackend.MATH, SDPBackend.EFFICIENT_ATTENTION]):
                 next_token = self.decode_one_tokens()
                 self.profiler.inc("decode")
                 if next_token == self.tokenizer.eos_token_id: