ModelZoo / Baichuan-M3_pytorch
Commit 53129bac authored Mar 09, 2026 by shihm
update readme
parent a9ee04b7
Showing 1 changed file with 36 additions and 37 deletions
README.md
...
...
@@ -56,6 +56,42 @@ docker run -it \
## Inference
### transformers
#### Single-node inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import os
import torch

# Run fully offline; the weights are loaded from the local path below.
os.environ['TRANSFORMERS_OFFLINE'] = '1'
os.environ['MODELSCOPE_OFFLINE'] = '1'

model_path = "/path/to/Baichuan-M3-235B"

# Load the model and tokenizer; device_map="auto" shards the model across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

messages = [{"role": "user", "content": "I've been having headaches lately, especially worse in the afternoon. What should I do?"}]

# Build the chat prompt with the model's thinking mode enabled.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    thinking_mode='on'
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=32768, temperature=0.6)

# Strip the prompt tokens and decode only the newly generated response.
response = tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True)
print(response)
```
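With `thinking_mode='on'` the decoded text may contain the model's reasoning trace ahead of the final answer. Purely as an illustrative sketch, and assuming the trace is wrapped in `<think>...</think>` tags (that delimiter is an assumption, not something stated in this README), the two parts could be separated like this:

```python
# Assumption: the reasoning trace is delimited by <think>...</think>.
# Adjust the delimiters to whatever the model actually emits.
def split_thinking(response: str):
    start, end = "<think>", "</think>"
    if start in response and end in response:
        thinking = response.split(start, 1)[1].split(end, 1)[0].strip()
        answer = response.split(end, 1)[1].strip()
        return thinking, answer
    return None, response.strip()

thinking, answer = split_thinking(response)
print(answer)
```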
### vllm
#### Multi-node inference
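The multi-node launch steps are elided in this excerpt; the later hunk header shows the served endpoint is queried via `curl http://localhost:8000/v1/chat/completions`. Purely as a minimal client-side sketch, assuming a vLLM OpenAI-compatible server is already running on `localhost:8000` and serves the model under the name `Baichuan-M3-235B` (both assumptions), a request could look like:

```python
# Minimal sketch: query a running vLLM OpenAI-compatible server.
# Assumptions: server listening on localhost:8000, served model name "Baichuan-M3-235B".
import json
import urllib.request

payload = {
    "model": "Baichuan-M3-235B",
    "messages": [{"role": "user",
                  "content": "I've been having headaches lately, especially worse in the afternoon. What should I do?"}],
    "temperature": 0.6,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))
print(result["choices"][0]["message"]["content"])
```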
...
...
@@ -145,43 +181,6 @@ curl http://localhost:8000/v1/chat/completions \
### transformers
#### Single-node inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import os
import torch

os.environ['TRANSFORMERS_OFFLINE'] = '1'
os.environ['MODELSCOPE_OFFLINE'] = '1'

model_path = "/path/to/Baichuan-M3-235B"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

messages = [{"role": "user", "content": "I've been having headaches lately, especially worse in the afternoon. What should I do?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    thinking_mode='on'
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=32768, temperature=0.6)
response = tokenizer.decode(generated_ids[0][len(model_inputs.input_ids[0]):], skip_special_tokens=True)
print(response)
```
### Accuracy
`DCU accuracy is consistent with GPU; inference frameworks: vllm, transformers`
...
...