Commit cd34ba56 authored by chenych

Remove the resume-from-checkpoint fine-tuning method and update the README

parent f6f5b435
@@ -208,6 +208,7 @@ cd finetune
# For Chat Fine-tune
export HIP_VISIBLE_DEVICES=1 # change to the GPU ID you want to use
export HSA_FORCE_FINE_GRAIN_PCIE=1
export HF_ENDPOINT=https://hf-mirror.com
python finetune.py ../data/AdvertiseGen/saves/ THUDM/GLM-4-9B-0414 configs/lora.yaml
```
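Before launching a long run, it can help to confirm that the exported device is actually visible. A minimal sanity-check sketch (not from the original README), assuming the ROCm CLI tools and a ROCm build of PyTorch are installed:

```shell
# List ROCm device IDs, then confirm PyTorch sees exactly the exported card.
rocm-smi --showid
export HIP_VISIBLE_DEVICES=1  # the same ID used in the fine-tune command
python -c "import torch; print(torch.cuda.device_count())"  # expect: 1
```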
@@ -218,23 +218,11 @@ python finetune.py ../data/AdvertiseGen/saves/ THUDM/GLM-4-9B-0414 configs/lor
# For Chat Fine-tune
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 # change to the GPU IDs you want to use
export HSA_FORCE_FINE_GRAIN_PCIE=1
export HF_ENDPOINT=https://hf-mirror.com
OMP_NUM_THREADS=1 torchrun --standalone --nnodes=1 --nproc_per_node=8 finetune.py ../data/AdvertiseGen/saves THUDM/GLM-4-9B-0414 configs/lora.yaml
```
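With fewer cards, keep `HIP_VISIBLE_DEVICES` and `--nproc_per_node` in sync, one rank per listed device. A minimal four-GPU variant (a sketch, not from the original README):

```shell
export HIP_VISIBLE_DEVICES=0,1,2,3  # one entry per torchrun rank
export HSA_FORCE_FINE_GRAIN_PCIE=1
export HF_ENDPOINT=https://hf-mirror.com
OMP_NUM_THREADS=1 torchrun --standalone --nnodes=1 --nproc_per_node=4 finetune.py ../data/AdvertiseGen/saves THUDM/GLM-4-9B-0414 configs/lora.yaml
```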
#### Fine-tuning from a saved checkpoint
With the commands above, every fine-tuning run starts from scratch. To resume from a partially trained model, pass a fourth argument, which accepts two forms:
1. `yes`: automatically resume from the **last saved checkpoint**, for example:
```shell
python finetune.py ../data/AdvertiseGen/saves/ THUDM/GLM-4-9B-0414 configs/lora.yaml yes
```
2. `XX`: a checkpoint step number; for example, `600` resumes training from **checkpoint 600** (see the sketch after this list for how to find these numbers), for example:
```shell
python finetune.py ../data/AdvertiseGen/saves/ THUDM/GLM-4-9B-0414 configs/lora.yaml 600
```
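To find a valid checkpoint number, list the saved checkpoints. A sketch assuming the Hugging Face Trainer convention of `checkpoint-<step>` directories; the actual output directory is set in `configs/lora.yaml`, and `output/` here is only an assumption:

```shell
# List saved checkpoints, sorted by step number.
ls -d output/checkpoint-* 2>/dev/null | sort -t- -k2 -n
# e.g. output/checkpoint-600 -> pass 600 as the fourth argument
```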
### Fine-tuning with Llama Factory (recommended)
Install the `Llama-Factory` training library (**outside the glm-4_pytorch directory**), at a version **greater than v0.9.2**; see that repository's README for detailed installation steps.
```
......
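# The install commands are elided by this diff hunk. A minimal sketch,
# assuming the upstream hiyouga/LLaMA-Factory repository; follow that
# repo's README for the authoritative steps and optional extras.
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .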
@@ -6,4 +6,3 @@ rouge_chinese==1.0.3
ruamel.yaml>=0.18.6
typer>=0.13.0
tqdm>=4.67.0
mpi4py
\ No newline at end of file