Commit 7f8094a3 authored by zhaoying1

added baichuan2
f14r1n19 slots=4
f14r2n00 slots=4
f14r2n01 slots=4
f14r2n02 slots=4
f14r2n03 slots=4
f14r2n04 slots=4
f14r2n05 slots=4
f14r2n06 slots=4
f14r2n07 slots=4
f14r2n08 slots=4
f14r2n09 slots=4
f14r2n10 slots=4
f14r2n11 slots=4
f14r2n12 slots=4
f14r2n13 slots=4
f14r2n14 slots=4
f14r2n15 slots=4
f14r2n16 slots=4
f14r2n17 slots=4
f14r2n18 slots=4
f14r2n19 slots=4
f14r3n00 slots=4
f14r3n01 slots=4
f14r3n02 slots=4
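The list above is an MPI/DeepSpeed-style hostfile: 24 nodes with 4 GPU slots each, 96 ranks in total. As a quick sanity check, here is a minimal Python sketch that totals the declared slots; the filename "hostfile" is an assumption, since the diff does not name the file.

# Count total ranks declared in an MPI/DeepSpeed-style hostfile.
# The filename "hostfile" is an assumption; the diff does not name the file.
total = 0
with open("hostfile") as f:
    for line in f:
        host, _, slots = line.strip().partition(" slots=")
        if slots:
            total += int(slots)
print(total)  # 24 nodes * 4 slots = 96 ranks for this list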
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

# Local path to the fine-tuned Baichuan2 checkpoint.
model_path = "/public/home/zhaoying1/work/Baichuan2-main/fine-tune/slurm_script/output/checkpoint-420"

# Baichuan2 ships custom modeling code, so trust_remote_code=True is required.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained(model_path)

# Single-turn chat; the prompt asks the model to explain the idiom
# "温故而知新" ("review the old to learn the new").
messages = []
messages.append({"role": "user", "content": "解释一下“温故而知新”"})
response = model.chat(tokenizer, messages)
print(response)
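For multi-turn use, Baichuan2's chat helper takes the running history in the same messages list. A minimal continuation sketch; the follow-up question is illustrative, not part of the original commit.

# Continue the conversation: append the assistant reply, then the next
# user turn, and call chat() again with the full history.
# The follow-up question below is illustrative, not from the original commit.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Can you give an example from daily life?"})
response = model.chat(tokenizer, messages)
print(response)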