```shell
python tools/checkpoint/convert.py \
--loader llama_mistral \
--saver megatron \
--target-tensor-parallel-size 1 \
--target-pipeline-parallel-size 2 \
--checkpoint-type hf \
--model-size llama2-7Bf \
--load-dir /data/model_weights/Llama-2-7b-hf/ \
--save-dir ./tmp_modelconvert \
--tokenizer-model /data/model_weights/Llama-2-7b-hf/
```
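The converted checkpoint is written under `--save-dir`, sharded for the parallel sizes requested above (tensor parallel 1, pipeline parallel 2), so the training run that loads it should use the same sizes. A minimal sketch of wiring the output into the next step; `CHECKPOINT_PATH` and `MODEL_PARALLEL_ARGS` are illustrative variable names, not from the original:

```shell
# Point the training script at the converter's output directory.
CHECKPOINT_PATH=./tmp_modelconvert

# Parallel sizes must match what the checkpoint was converted to.
MODEL_PARALLEL_ARGS=(
--tensor-model-parallel-size 1
--pipeline-model-parallel-size 2
)
```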
Then add the fine-tuning arguments to the training script:
```shell
FINETUNE_ARGS=(
# --finetune                                # alternative: reset iteration to 0 and skip optimizer/RNG state
# --pretrained-checkpoint $CHECKPOINT_PATH  # with --finetune, start from the converted pretrained weights
--load $CHECKPOINT_PATH    # load the converted Megatron checkpoint
--no-load-optim            # do not restore optimizer state from the checkpoint
--no-load-rng              # do not restore RNG state from the checkpoint
)
```
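These arguments are then expanded into the usual Megatron-LM launch. A minimal sketch, assuming `pretrain_gpt.py` as the entry point and `torchrun` as the launcher; `GPT_ARGS` and `DATA_ARGS` are placeholders standing in for the remaining model and data argument groups:

```shell
# Illustrative launch: 2 processes to cover pipeline-parallel size 2.
torchrun --nproc_per_node 2 pretrain_gpt.py \
"${MODEL_PARALLEL_ARGS[@]}" \
"${FINETUNE_ARGS[@]}" \
"${GPT_ARGS[@]}" \
"${DATA_ARGS[@]}"
```

With `--no-load-optim` and `--no-load-rng`, fine-tuning reuses only the model weights and starts from fresh optimizer and RNG state; the commented `--finetune` flag additionally resets the iteration counter to 0.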
# References