ModelZoo / Qwen1.5_pytorch

Commit aa2a1b8c authored Apr 30, 2024 by luopl

Update Qwen1.5-7b_single_dcu_inference.py

Parent: 7504ebd1
Showing 1 changed file with 2 additions and 3 deletions.
inference/inference_vllm/Qwen1.5-7b_single_dcu_inference.py  (+2, -3)  View file @ aa2a1b8c

 from vllm import LLM, SamplingParams
 # Sample prompts.
 prompts = [
     "The capital of France is",
 ]
 # Create a sampling params object.
-sampling_params = SamplingParams(temperature=0.8, top_p=0.95, top_k=50, stop="</s>")
+sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
 # Create an LLM.
 llm = LLM(model="./qwen2/Qwen1.5-7B-Chat", trust_remote_code=True, dtype="float16", enforce_eager=True)
...
@@ -16,4 +15,4 @@ outputs = llm.generate(prompts, sampling_params)
 for output in outputs:
     prompt = output.prompt
     generated_text = output.outputs[0].text
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
\ No newline at end of file
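
For reference, the full script after this commit can be reassembled from the visible hunks. This is a sketch only: it assumes the collapsed region (shown as "..." above) contains just the generate call named in the hunk header, and the blank lines and extra comments are added here for readability.

from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "The capital of France is",
]

# Create a sampling params object.
# This commit drops top_k=50 and stop="</s>", so vLLM's defaults apply for both.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM on a single DCU, running the model in float16 with eager execution.
llm = LLM(
    model="./qwen2/Qwen1.5-7B-Chat",
    trust_remote_code=True,
    dtype="float16",
    enforce_eager=True,
)

# Generate completions for the prompts (the line referenced by the @@ hunk header).
outputs = llm.generate(prompts, sampling_params)

# Print each prompt alongside its generated continuation.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")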