Commit 40e5db84 authored by chenzk

v1.0.2

parent c7e1ce0f
@@ -19,7 +19,7 @@ The llama2 algorithm mainly feeds the tokens, converted to vectors, through qkv self-attention and fully connected layers to
## Environment Setup
```
-mv TinyLlama_pytorch TinyLlama # drop the framework-name suffix
+mv tinyllama_pytorch TinyLlama # drop the framework-name suffix
```
### Docker (Method 1)
@@ -27,7 +27,7 @@ mv TinyLlama_pytorch TinyLlama # drop the framework-name suffix
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-centos7.6-dtk23.10-py38
# Replace <your IMAGE ID> with the image ID of the docker image pulled above; for this image it is ffa1f63239fc
docker run -it --shm-size=32G -v $PWD/TinyLlama:/home/TinyLlama -v /opt/hyhal:/opt/hyhal --privileged=true --device=/dev/kfd --device=/dev/dri/ --group-add video --name tinyllama <your IMAGE ID> bash
-cd TinyLlama
+cd /home/TinyLlama
pip install -r requirements.txt
```
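As a quick sanity check after `pip install -r requirements.txt`, the container's PyTorch build can be probed for visible DCU devices. This is a minimal sketch, not part of the original README; it assumes the DTK PyTorch wheel exposes the DCUs through the usual `torch.cuda` interface:

```
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available(), torch.cuda.device_count())"
```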
### Dockerfile (Method 2)
@@ -4,7 +4,7 @@
# We hope the community can explore finetuning TinyLlama and come up with better chat models. I will include community-finetuned models in this repo.
# V0.1
-# CUDA_VISIBLE_DEVICES=0 accelerate launch --main_process_port 1234 sft/finetune.py \
+# HIP_VISIBLE_DEVICES=0 accelerate launch --main_process_port 1234 sft/finetune.py \
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --multi_gpu --num_processes 4 --main_process_port 1234 sft/finetune.py \
--model_name_or_path PY007/TinyLlama-1.1B-intermediate-step-240k-503b \
--output_dir ./output/503B_FT_lr1e-5_ep5 \
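The change above replaces `CUDA_VISIBLE_DEVICES` with `HIP_VISIBLE_DEVICES`, the device-selection variable honored by the ROCm/HIP runtime on DCU cards; the four-card `accelerate launch` command itself is unchanged. A minimal sketch of how the variable restricts which cards a process sees (assuming the same DTK PyTorch build as above):

```
# With only card 0 exposed, device_count() should report 1.
HIP_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.device_count())"
```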