# GRPO Training Repository Based on llama-factory

## Environment Setup
### Docker (Method 1)
```bash
docker pull image.sourcefind.cn:5000/dcu/admin/base/vllm:0.8.5-ubuntu22.04-dtk25.04.1-rc5-das1.6-py3.10-20250724
docker run -it --shm-size 200g --network=host --name {docker_name} --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro {imageID} bash

cd /your_code_path/llama-grpo
pip install -e .
pip uninstall trl -y
cd ../
git clone -b v0.19.0 https://github.com/huggingface/trl.git
mv trl trl-v0.19.0
cd trl-v0.19.0
pip install -e .
cd ../llama-grpo
pip install transformers==4.51.3
```

### Dockerfile (Method 2)
```bash
cd docker
docker build --no-cache -t llama-grpo:latest .

docker run -it --shm-size 200g --network=host --name {docker_name} --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /path/your_code_data/:/path/your_code_data/ -v /opt/hyhal/:/opt/hyhal/:ro {imageID} bash

cd /your_code_path/llama-grpo
pip install -e .
pip uninstall trl -y
cd ../
git clone -b v0.19.0 https://github.com/huggingface/trl.git
mv trl trl-v0.19.0
cd trl-v0.19.0
pip install -e .
cd ../llama-grpo
pip install transformers==4.51.3
```

### Anaconda (Method 3)
The DCU-specific deep learning libraries required by this project can be downloaded from the [光合](https://developer.sourcefind.cn/tool/) developer community.
```text
DTK: 25.04.1
python: 3.10.12
torch: 2.4.1+das.opt1.dtk25041
vllm: 0.8.5
transformers: 4.51.3
deepspeed: 0.14.2+das.opt1.dtk25041
```
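For the Anaconda route, the environment setup can be sketched as follows. The environment name and wheel filenames below are assumptions for illustration; the actual DAS wheel names come from the 光合 downloads.

```shell
# Create and activate a Python 3.10 environment (name is illustrative)
conda create -n llama-grpo python=3.10.12 -y
conda activate llama-grpo

# Install the DCU-specific wheels downloaded from the 光合 community;
# the filenames are placeholders for the actual DAS packages
pip install ./torch-2.4.1+das.opt1.dtk25041-*.whl
pip install ./vllm-0.8.5-*.whl
pip install ./deepspeed-0.14.2+das.opt1.dtk25041-*.whl
pip install transformers==4.51.3
```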

## Training
### Multi-Node Launch
1. Start `trl vllm-serve`
```bash
bash start_vllm_serve.sh
```
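The script wraps the `trl vllm-serve` CLI; a rough sketch of the underlying command is shown below. The model path, parallelism degree, host, and port are placeholders, so check `start_vllm_serve.sh` for the flags actually used.

```shell
# Serve the policy model with vLLM for GRPO rollout generation;
# adjust the model path and tensor parallel degree to your hardware
trl vllm-serve \
    --model /path/to/your_model \
    --tensor_parallel_size 4 \
    --host 0.0.0.0 \
    --port 8000
```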

2. Start training
Run the script below on each server separately: pass x=0 on the first server, x=1 on the second, and so on.
Configure the remaining parameters according to the descriptions inside the script.
```bash
bash train.sh x
```
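For example, on a two-server setup the launch sequence looks like:

```shell
# On the first server (node rank 0)
bash train.sh 0

# On the second server (node rank 1)
bash train.sh 1
```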

### Slurm Launch
1. Start vllm-serve
```bash
sbatch sbatch_vllm.sh
```

2. Start training
Edit the parameters inside the `sbatch_train.sh` script, then submit the job:
```bash
sbatch sbatch_train.sh
```
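As a hedged sketch of what such a submission script might contain (every directive, path, and the use of `SLURM_NODEID` here is an assumption; rely on the parameters documented inside the actual `sbatch_train.sh`):

```shell
#!/bin/bash
#SBATCH --job-name=grpo-train       # job name (illustrative)
#SBATCH --nodes=2                   # number of training nodes
#SBATCH --ntasks-per-node=1         # one launcher process per node
#SBATCH --output=train_%j.log       # combined stdout/stderr log

# srun assigns each node a distinct SLURM_NODEID, which maps onto
# the node-rank argument that train.sh expects
srun bash -c 'bash train.sh ${SLURM_NODEID}'
```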

## Known Issues
1. If you see the error `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method`, fix it as follows:
Edit the `trl-v0.19.0/trl/scripts/vllm_serve.py` file:
```python
# os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"  # comment out this environment variable
# and add the following code instead:
from multiprocessing import set_start_method
try:
    set_start_method('spawn')
except RuntimeError:
    pass
```
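The reason for the `try/except` wrapper: `set_start_method` may only be called once per process and raises `RuntimeError` on later calls, so catching it makes the patch safe even if vLLM or trl has already fixed a start method. A standalone illustration (the `ensure_spawn` helper is hypothetical, for demonstration only):

```python
from multiprocessing import get_start_method, set_start_method


def ensure_spawn() -> str:
    """Request the 'spawn' start method, ignoring the RuntimeError
    raised when a start method has already been set for this process."""
    try:
        set_start_method("spawn")
    except RuntimeError:
        # Already set elsewhere; keep whatever method is in effect
        pass
    return get_start_method()


# Calling it twice is safe: the second call hits the except branch
print(ensure_spawn())
print(ensure_spawn())
```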