norm / vllm · Commit 2acd76f3 (unverified)
Authored Dec 15, 2023 by Woosuk Kwon; committed by GitHub, Dec 15, 2023
Parent: b81a6a6b

[ROCm] Temporarily remove GPTQ ROCm support (#2138)
Changes: 2 changed files, with 2 additions and 2 deletions

  setup.py        +1 -1
  vllm/config.py  +1 -1
setup.py

@@ -219,13 +219,13 @@ vllm_extension_sources = [
     "csrc/activation_kernels.cu",
     "csrc/layernorm_kernels.cu",
     "csrc/quantization/squeezellm/quant_cuda_kernel.cu",
-    "csrc/quantization/gptq/q_gemm.cu",
     "csrc/cuda_utils_kernels.cu",
     "csrc/pybind.cpp",
 ]

 if _is_cuda():
     vllm_extension_sources.append("csrc/quantization/awq/gemm_kernels.cu")
+    vllm_extension_sources.append("csrc/quantization/gptq/q_gemm.cu")

 vllm_extension = CUDAExtension(
     name="vllm._C",
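The hunk above moves the GPTQ kernel source from the unconditional source list into the CUDA-only branch, so ROCm builds no longer try to compile it. A minimal sketch of that gating logic (the function name and the boolean parameter are illustrative, not the actual setup.py API, which checks the build platform internally via `_is_cuda()`):

```python
def build_extension_sources(is_cuda: bool) -> list:
    """Sketch: collect extension sources, gating CUDA-only kernels."""
    # Kernels that compile on both CUDA and ROCm.
    sources = [
        "csrc/activation_kernels.cu",
        "csrc/layernorm_kernels.cu",
        "csrc/quantization/squeezellm/quant_cuda_kernel.cu",
        "csrc/cuda_utils_kernels.cu",
        "csrc/pybind.cpp",
    ]
    if is_cuda:
        # AWQ was already CUDA-only; this commit makes GPTQ CUDA-only too.
        sources.append("csrc/quantization/awq/gemm_kernels.cu")
        sources.append("csrc/quantization/gptq/q_gemm.cu")
    return sources
```

On a ROCm build the GPTQ and AWQ kernels are simply absent from the compiled extension, which is why the config-side check below must also reject those methods.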
vllm/config.py

@@ -143,7 +143,7 @@ class ModelConfig:
     def _verify_quantization(self) -> None:
         supported_quantization = ["awq", "gptq", "squeezellm"]
-        rocm_not_supported_quantization = ["awq"]
+        rocm_not_supported_quantization = ["awq", "gptq"]
         if self.quantization is not None:
             self.quantization = self.quantization.lower()
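This hunk adds "gptq" to the list of quantization methods rejected on ROCm, matching the kernel removal in setup.py. A self-contained sketch of the verification flow, assuming the rest of the method follows the visible pattern (the standalone function, the `is_hip` parameter, and the error wording are assumptions, not the exact vLLM code):

```python
from typing import Optional


def verify_quantization(quantization: Optional[str], is_hip: bool) -> Optional[str]:
    """Sketch of ModelConfig._verify_quantization after this commit."""
    supported_quantization = ["awq", "gptq", "squeezellm"]
    rocm_not_supported_quantization = ["awq", "gptq"]  # "gptq" added here
    if quantization is None:
        return None
    # Method names are compared case-insensitively.
    quantization = quantization.lower()
    if quantization not in supported_quantization:
        raise ValueError(f"Unknown quantization method: {quantization}")
    if is_hip and quantization in rocm_not_supported_quantization:
        raise ValueError(
            f"{quantization} quantization is currently not supported on ROCm.")
    return quantization
```

With this change, requesting GPTQ on a ROCm build fails fast at config time instead of failing later when the missing kernel is invoked.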