sglang / Commits

Commit 2614adf9 (Unverified)
Authored by Antonin Vidon on Oct 17, 2025; committed via GitHub on Oct 17, 2025

[Fix] Skip visual layers when applying LoRA to Qwen2VL modules (#11519)
Parent: fdd7c69d
Showing 1 changed file with 4 additions and 1 deletion.

python/sglang/srt/models/qwen2_vl.py (+4, -1)
python/sglang/srt/models/qwen2_vl.py @ 2614adf9

@@ -28,7 +28,6 @@ from typing import Iterable, List, Optional, Tuple, Type, TypedDict
 import torch
 import torch.nn as nn
 import torch.nn.functional as F
 from einops import rearrange
 from transformers import Qwen2VLConfig
 from transformers.models.qwen2_vl.configuration_qwen2_vl import Qwen2VLVisionConfig
@@ -514,6 +513,10 @@ class Qwen2VLForConditionalGeneration(nn.Module):
     def get_input_embeddings(self):
         return self.model.embed_tokens
 
+    def should_apply_lora(self, module_name: str) -> bool:
+        # skip visual tower
+        return not module_name.startswith("visual")
+
     def forward(
         self,
         input_ids: torch.Tensor,
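For context, a minimal sketch of what the new should_apply_lora gate does. The function body is taken verbatim from the diff; the module names in the assertions are illustrative assumptions, not taken from the commit:

    def should_apply_lora(module_name: str) -> bool:
        # Skip the visual tower: LoRA weights are applied only to
        # language-model modules, whose names do not start with "visual".
        return not module_name.startswith("visual")

    # A language-model module passes the gate (hypothetical name):
    assert should_apply_lora("model.layers.0.self_attn.q_proj")
    # A vision-tower module is skipped (hypothetical name):
    assert not should_apply_lora("visual.blocks.0.attn.qkv")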