norm / vllm · Commits

Commit 6634a0e0
Authored May 30, 2024 by zhuwenwen

support llama model tn/nn

Parent: a10e9cee
Showing 1 changed file with 2 additions and 1 deletion:

vllm/model_executor/model_loader.py (+2 / -1)
vllm/model_executor/model_loader.py @ 6634a0e0
@@ -24,7 +24,8 @@ def _set_default_torch_dtype(dtype: torch.dtype):
 def _get_model_architecture(model_config: ModelConfig) -> Type[nn.Module]:
     architectures = getattr(model_config.hf_config, "architectures", [])
     if architectures == ['LlamaForCausalLM']:
-        os.environ['LLAMA_NN'] = '1'
+        if os.getenv('LLAMA_NN') != '0':
+            os.environ['LLAMA_NN'] = '1'
     # Special handling for quantized Mixtral.
     # FIXME(woosuk): This is a temporary hack.
     if (model_config.quantization is not None
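With this change, LLAMA_NN is only forced to '1' for Llama checkpoints when the user has not already set LLAMA_NN=0, so the tn/nn path can be switched off from the environment. Below is a minimal standalone sketch of that gating logic; the LLAMA_NN variable and the 'LlamaForCausalLM' check come from the diff above, while the helper name maybe_enable_llama_nn and the example calls are illustrative and not part of vLLM's API.

import os

def maybe_enable_llama_nn(architectures):
    """Sketch of the commit's gating logic (not the real vLLM API).

    LLAMA_NN is forced on for Llama checkpoints unless the user has
    explicitly exported LLAMA_NN=0 before the model is loaded.
    """
    if architectures == ['LlamaForCausalLM']:
        if os.getenv('LLAMA_NN') != '0':
            os.environ['LLAMA_NN'] = '1'
    return os.environ.get('LLAMA_NN') == '1'

# Opting out: export LLAMA_NN=0 (or set it in-process) before loading.
os.environ['LLAMA_NN'] = '0'
print(maybe_enable_llama_nn(['LlamaForCausalLM']))  # False -> user override respected
os.environ.pop('LLAMA_NN')
print(maybe_enable_llama_nn(['LlamaForCausalLM']))  # True  -> default: NN path enabled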