ModelZoo / Qwen2_pytorch · Commits · 5333d0ba

Commit 5333d0ba authored Sep 12, 2024 by luopl

Update Qwen2-7B_inference.py

parent 032b90a1
Pipeline #1685 canceled with stages
Changes: 1 · Pipelines: 1

Showing 1 changed file with 1 addition and 1 deletion:

inference_vllm/Qwen2-7B_inference.py (+1, −1)
inference_vllm/Qwen2-7B_inference.py — view file @ 5333d0ba

...
@@ -11,7 +11,7 @@ prompts = [
 sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
 # Create an LLM.
-llm = LLM(model="/data/model/Qwen2-0.5B-Instruct/", trust_remote_code=True, dtype="float16", enforce_eager=True)
+llm = LLM(model="/data/model/Qwen2-7B-Instruct/", trust_remote_code=True, dtype="float16", enforce_eager=True)
 # Generate texts from the prompts. The output is a list of RequestOutput objects
 # that contain the prompt, generated text, and other information.
 outputs = llm.generate(prompts, sampling_params)
...
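The one-line change in this commit swaps a hard-coded checkpoint path (Qwen2-0.5B-Instruct → Qwen2-7B-Instruct). A minimal sketch of an alternative that avoids editing the script for each checkpoint: make the model path and sampling parameters CLI flags, and defer the heavy vLLM import into `main()`. The `LLM(...)` and `SamplingParams(...)` calls mirror the diff above; the flag names (`--model`, `--temperature`, `--top-p`) and the prompt list here are hypothetical additions, not part of the committed file.

```python
# Sketch: parameterize Qwen2-7B_inference.py so switching checkpoints
# (e.g. 0.5B vs. 7B) is a command-line flag, not a code edit.
import argparse


def build_parser():
    # Defaults match the values committed in 5333d0ba.
    parser = argparse.ArgumentParser(description="Qwen2 inference with vLLM")
    parser.add_argument("--model", default="/data/model/Qwen2-7B-Instruct/")
    parser.add_argument("--temperature", type=float, default=0.8)
    parser.add_argument("--top-p", dest="top_p", type=float, default=0.95)
    return parser


def main(argv=None):
    args = build_parser().parse_args(argv)
    # Deferred import: argument parsing stays testable without a GPU
    # or vLLM installed.
    from vllm import LLM, SamplingParams

    sampling_params = SamplingParams(temperature=args.temperature,
                                     top_p=args.top_p)
    llm = LLM(model=args.model, trust_remote_code=True,
              dtype="float16", enforce_eager=True)
    prompts = ["Hello, my name is"]  # placeholder prompt list
    for output in llm.generate(prompts, sampling_params):
        print(output.outputs[0].text)
```

Invoked as, e.g., `main(["--model", "/data/model/Qwen2-0.5B-Instruct/"])`, this reproduces the pre-commit behavior without touching the source.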