# Contents
- [Contents](#contents)
- [Environment Setup](#environment-setup)
- [Download the Vocabulary Files](#download-the-vocabulary-files)
- [Download the Training Data](#download-the-training-data)
- [Training](#training)
  - [Data Preprocessing](#data-preprocessing)
  - [GPT Pretraining](#gpt-pretraining)
    - [Distributed Multi-GPU Training](#distributed-multi-gpu-training)
- [GPT Text Generation](#gpt-text-generation)
- [References](#references)

# Environment Setup
1. Pull a suitable image
<pre>
docker pull nvcr.io/nvidia/pytorch:24.06-py3
</pre>
2. Create a container from the image and enter it

<pre>
docker run -it --name xx --gpus all --network=host --ipc=host --privileged -v /path_to_work/:/path_to_work/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined nvcr.io/nvidia/pytorch:24.06-py3 /bin/bash
docker exec -it xx bash
</pre>


# Download the Vocabulary Files

<pre>
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
</pre>
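Together these two files define the GPT-2 BPE tokenizer: `gpt2-vocab.json` maps token strings to ids, and `gpt2-merges.txt` lists byte-pair merge rules in priority order. A minimal sketch of how the two formats interact, using a toy vocab and merge list rather than the downloaded files:

```python
import json

# Toy stand-ins for gpt2-vocab.json / gpt2-merges.txt (assumption: the real
# files follow the same shape — a token->id JSON map and one merge rule per line).
vocab = json.loads('{"h": 0, "e": 1, "he": 2}')
merges = ["h e"]  # merge rule: "h" + "e" -> "he"

# Apply each merge rule greedily over a toy word
tokens = list("he")
for rule in merges:
    a, b = rule.split()
    i = 0
    while i < len(tokens) - 1:
        if tokens[i] == a and tokens[i + 1] == b:
            tokens[i:i + 2] = [a + b]
        else:
            i += 1

ids = [vocab[t] for t in tokens]
print(ids)  # -> [2]
```

Real BPE applies merges by rank across the whole corpus; this sketch only illustrates why both the vocab and the merges file are needed.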


# Download the Training Data
The training data is a 1 GB JSONL dataset with about 79K records.
<pre>
wget https://huggingface.co/bigscience/misc-test-data/resolve/main/stas/oscar-1GB.jsonl.xz
xz -d oscar-1GB.jsonl.xz
</pre>
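The preprocessing script expects one JSON object per line, with the document text stored under the key given by `--json-keys` (default `text`). A quick sanity check of that format, sketched on an in-memory sample rather than the real `oscar-1GB.jsonl`:

```python
import io
import json

# In-memory stand-in for oscar-1GB.jsonl (assumption: one JSON object per
# line, document text under the "text" key).
sample = io.StringIO(
    '{"id": 1, "text": "first document"}\n'
    '{"id": 2, "text": "second document"}\n'
)

n_docs = 0
for line in sample:
    doc = json.loads(line)
    assert "text" in doc, "every record needs a text field"
    n_docs += 1
print(n_docs)  # -> 2
```

To check the real file, replace the `io.StringIO` object with `open("oscar-1GB.jsonl")`.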

# Training

## Data Preprocessing

<pre>
python tools/preprocess_data.py \
    --input oscar-1GB.jsonl \
    --output-prefix ./dataset/my-gpt2 \
    --vocab-file gpt2-vocab.json \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file gpt2-merges.txt \
    --append-eod \
    --workers 8
</pre>

Parameter descriptions:
- `--input`: path to the input dataset, i.e. the file obtained by decompressing oscar-1GB.jsonl.xz
- `--output-prefix`: output path prefix; the suffix `_text_document` is appended automatically
- `--vocab-file`: path to the downloaded gpt2-vocab.json vocabulary file
- `--tokenizer-type`: the tokenizer type
- `--merge-file`: path to the downloaded gpt2-merges.txt file
- `--append-eod`: append an end-of-document token to each document
- `--workers`: number of worker processes
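Preprocessing writes an indexed binary dataset whose base name is the output prefix plus the automatically appended `_text_document` suffix, in a `.bin`/`.idx` pair. A small sketch reconstructing the names produced by the flags above:

```python
# Reconstruct the output file names for --output-prefix ./dataset/my-gpt2
prefix = "./dataset/my-gpt2"
key = "text"  # default --json-keys value
for ext in ("bin", "idx"):
    print(f"{prefix}_{key}_document.{ext}")
# -> ./dataset/my-gpt2_text_document.bin
# -> ./dataset/my-gpt2_text_document.idx
```

The common base name (without the extension) is what `DATA_PATH` must point at in the training step below.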

## GPT Pretraining

### Distributed Multi-GPU Training
- Set DATA_PATH to the preprocessed dataset

  ```bash
  VOCAB_FILE=gpt2-vocab.json
  MERGE_FILE=gpt2-merges.txt
  DATA_PATH="./dataset/my-gpt2_text_document"
  ```

- Run multi-GPU training

  ```bash
  # -np is the number of processes to launch; set -np and hostfile to match your setup
  mpirun -np 4 --hostfile hostfile single.sh localhost  # single node, four GPUs
  ```
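  mpirun reads the node list from the file passed to `--hostfile`. A minimal sketch of a hostfile matching the single-node, four-GPU example above (hostname and slot count are placeholders to adapt to your cluster; add one line per node for multi-node runs):

  ```
  localhost slots=4
  ```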

# References

- [README_ORIGIN](README_ORIGIN.md)