gaoqiong / lm-evaluation-harness · Commit a87fe425

Unverified commit a87fe425, authored Feb 27, 2025 by Baber Abbasi; committed via GitHub on Feb 27, 2025.

fix vllm data parallel (#2746)

* remove ray.remote resources
* remove kobest tag (registered as group)
parent af2d2f3e

Showing 7 changed files, with 5 additions and 15 deletions.
lm_eval/models/vllm_causallms.py            +3 -3
lm_eval/models/vllm_vlms.py                 +2 -2
lm_eval/tasks/kobest/kobest_boolq.yaml      +0 -2
lm_eval/tasks/kobest/kobest_copa.yaml       +0 -2
lm_eval/tasks/kobest/kobest_hellaswag.yaml  +0 -2
lm_eval/tasks/kobest/kobest_sentineg.yaml   +0 -2
lm_eval/tasks/kobest/kobest_wic.yaml        +0 -2
lm_eval/models/vllm_causallms.py

@@ -243,13 +243,13 @@ class VLLM(TemplateLM):
             temperature=0, prompt_logprobs=1, max_tokens=1, detokenize=False
         )
         if self.data_parallel_size > 1:
-            # vLLM hangs if tensor_parallel > 1 and resources are set in ray.remote
+            # vLLM hangs if resources are set in ray.remote
             # also seems to only work with decorator and not with ray.remote() fn
             # see https://github.com/vllm-project/vllm/issues/973
-            @ray.remote(num_gpus=1 if self.tensor_parallel_size == 1 else None)
+            @ray.remote
             def run_inference_one_model(
                 model_args: dict,
-                sampling_params,
+                sampling_params: SamplingParams,
                 requests: List[List[int]],
                 lora_request: LoRARequest,
             ):
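For context on the code path this diff touches: when `data_parallel_size > 1`, the harness splits the request list into one shard per Ray worker and dispatches each shard to a `run_inference_one_model` replica. A minimal sketch of that sharding step, assuming a balanced split (the helper name `shard_requests` is hypothetical, not the harness's own API):

```python
from itertools import islice
from typing import List, TypeVar

T = TypeVar("T")


def shard_requests(requests: List[T], num_shards: int) -> List[List[T]]:
    """Split `requests` into `num_shards` nearly equal chunks, one per
    data-parallel worker (hypothetical helper, not the harness's API).

    The first `len(requests) % num_shards` shards get one extra item,
    so shard sizes differ by at most one.
    """
    it = iter(requests)
    size, extra = divmod(len(requests), num_shards)
    return [
        list(islice(it, size + (1 if i < extra else 0)))
        for i in range(num_shards)
    ]
```

Each resulting shard would then be passed as the `requests` argument of one remote `run_inference_one_model` call.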
lm_eval/models/vllm_vlms.py

@@ -109,10 +109,10 @@ class VLLM_VLM(VLLM):
             temperature=0, prompt_logprobs=1, max_tokens=1, detokenize=False
         )
         if self.data_parallel_size > 1:
-            # vLLM hangs if tensor_parallel > 1 and resources are set in ray.remote
+            # vLLM hangs if resources are set in ray.remote
             # also seems to only work with decorator and not with ray.remote() fn
             # see https://github.com/vllm-project/vllm/issues/973
-            @ray.remote(num_gpus=1 if self.tensor_parallel_size == 1 else None)
+            @ray.remote
             def run_inference_one_model(
                 model_args: dict, sampling_params, requests: List[List[dict]]
             ):
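The decorator change in both files hinges on `ray.remote` supporting two call patterns: bare (`@ray.remote`, no resource reservation) and as a factory taking options (`@ray.remote(num_gpus=...)`, which reserves resources through Ray's scheduler). A toy stand-in showing how one function can support both forms (`remote` below is a mock for illustration, not Ray's implementation):

```python
def remote(func=None, *, num_gpus=None):
    """Toy stand-in for ray.remote's dual calling convention.

    Bare use (@remote) receives the function directly; factory use
    (@remote(num_gpus=...)) is first called with only options and must
    return the actual decorator.
    """
    def wrap(f):
        f.options = {"num_gpus": num_gpus}  # record requested resources
        return f

    if func is not None:
        return wrap(func)  # bare form: @remote
    return wrap            # factory form: @remote(num_gpus=...)


@remote
def plain():
    return "no resources pinned"


@remote(num_gpus=1)
def pinned():
    return "one GPU reserved"
```

The fix moves from the factory form, which pins GPUs via Ray's scheduler, to the bare form, leaving GPU placement to vLLM itself, which per the linked issue avoids the hang.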
lm_eval/tasks/kobest/kobest_boolq.yaml

-tag:
-  - kobest
 task: kobest_boolq
 dataset_path: skt/kobest_v1
 dataset_name: boolq
lm_eval/tasks/kobest/kobest_copa.yaml

-tag:
-  - kobest
 task: kobest_copa
 dataset_path: skt/kobest_v1
 dataset_name: copa
lm_eval/tasks/kobest/kobest_hellaswag.yaml

-tag:
-  - kobest
 task: kobest_hellaswag
 dataset_path: skt/kobest_v1
 dataset_name: hellaswag
lm_eval/tasks/kobest/kobest_sentineg.yaml

-tag:
-  - kobest
 task: kobest_sentineg
 dataset_path: skt/kobest_v1
 dataset_name: sentineg
lm_eval/tasks/kobest/kobest_wic.yaml

-tag:
-  - kobest
 task: kobest_wic
 dataset_path: skt/kobest_v1
 dataset_name: wic
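The `tag: kobest` entries are removed because, per the commit message, `kobest` is already registered as a group, and the same name cannot be both a tag and a group. A sketch of what such a group registration could look like in the harness's YAML task-config conventions (the file name `_kobest.yaml` and exact keys here are an assumption for illustration):

```yaml
# _kobest.yaml (illustrative): registers the group "kobest",
# so the individual task configs no longer carry a `tag:` entry.
group: kobest
task:
  - kobest_boolq
  - kobest_copa
  - kobest_hellaswag
  - kobest_sentineg
  - kobest_wic
```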