We provide diverse examples of fine-tuning LLMs.

Make sure to execute these commands in the `LLaMA-Factory` directory.

## Table of Contents

- [LoRA Fine-Tuning](#lora-fine-tuning)
- [QLoRA Fine-Tuning](#qlora-fine-tuning)
- [Full-Parameter Fine-Tuning](#full-parameter-fine-tuning)
- [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
- [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
- [Extras](#extras)

Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose computing devices.

By default, LLaMA-Factory uses all visible computing devices.
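
For example, a minimal sketch of pinning a run to specific Ascend NPUs (the device indices here are illustrative):

```bash
# Restrict training to the first two Ascend NPUs.
ASCEND_RT_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/qwen3_lora_sft.yaml
```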

Basic usage:

```bash
llamafactory-cli train examples/train_lora/qwen3_lora_sft.yaml
```

Advanced usage (trailing `key=value` pairs override the corresponding entries in the YAML config):

```bash
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/qwen3_lora_sft.yaml \
    learning_rate=1e-5 \
    logging_steps=1
```

```bash
bash examples/train_lora/qwen3_lora_sft.sh
```

## Examples

### LoRA Fine-Tuning

#### (Continuous) Pre-Training

```bash
llamafactory-cli train examples/train_lora/qwen3_lora_pretrain.yaml
```

#### Supervised Fine-Tuning

```bash
llamafactory-cli train examples/train_lora/qwen3_lora_sft.yaml
```

#### Multimodal Supervised Fine-Tuning

```bash
llamafactory-cli train examples/train_lora/qwen3vl_lora_sft.yaml
```

#### DPO/ORPO/SimPO Training

```bash
llamafactory-cli train examples/train_lora/qwen3_lora_dpo.yaml
```

#### Multimodal DPO/ORPO/SimPO Training

```bash
llamafactory-cli train examples/train_lora/qwen3vl_lora_dpo.yaml
```

#### Reward Modeling

```bash
llamafactory-cli train examples/train_lora/qwen3_lora_reward.yaml
```

#### KTO Training

```bash
llamafactory-cli train examples/train_lora/qwen3_lora_kto.yaml
```

#### Preprocess Dataset

This is useful for large datasets: set `tokenized_path` in the config to load the preprocessed dataset.

```bash
llamafactory-cli train examples/train_lora/qwen3_preprocess.yaml
```
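
A minimal sketch of the two-step flow, with a hypothetical cache path passed as a CLI override (the same key can instead be set in the YAML config):

```bash
# First run: tokenize the dataset and save it (the path is illustrative).
llamafactory-cli train examples/train_lora/qwen3_preprocess.yaml tokenized_path=saves/qwen3/tokenized-sft

# Later runs: load the cached tokens instead of re-tokenizing.
llamafactory-cli train examples/train_lora/qwen3_lora_sft.yaml tokenized_path=saves/qwen3/tokenized-sft
```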

#### Supervised Fine-Tuning on Multiple Nodes

```bash
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/qwen3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/qwen3_lora_sft.yaml
```

#### Supervised Fine-Tuning with DeepSpeed ZeRO-3 (Weight Sharding)

```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/qwen3_lora_sft_ds3.yaml
```

#### Supervised Fine-Tuning with Ray on 4 GPUs

```bash
USE_RAY=1 llamafactory-cli train examples/train_lora/qwen3_lora_sft_ray.yaml
```

### QLoRA Fine-Tuning

#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended)

```bash
llamafactory-cli train examples/train_qlora/qwen3_lora_sft_otfq.yaml
```

#### Supervised Fine-Tuning with 4-bit Bitsandbytes Quantization on Ascend NPU

```bash
llamafactory-cli train examples/train_qlora/qwen3_lora_sft_bnb_npu.yaml
```

#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization

```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
```

#### Supervised Fine-Tuning with 4-bit AWQ Quantization

```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
```

#### Supervised Fine-Tuning with 2-bit AQLM Quantization

```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
```

### Full-Parameter Fine-Tuning

#### Supervised Fine-Tuning on Single Node

```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen3_full_sft.yaml
```

#### Supervised Fine-Tuning on Multiple Nodes

```bash
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/qwen3_full_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/qwen3_full_sft.yaml
```

#### Elastic and Fault-Tolerant Supervised Fine-Tuning on Multiple Nodes

To launch an elastic job that tolerates up to `MAX_RESTARTS` restarts on failure, run the following command on at least `MIN_NNODES` and at most `MAX_NNODES` nodes. `RDZV_ID` should be set to a unique job ID shared by all nodes participating in the job. See also [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html).

```bash
FORCE_TORCHRUN=1 MIN_NNODES=1 MAX_NNODES=3 MAX_RESTARTS=3 RDZV_ID=llamafactory MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/qwen3_full_sft.yaml
```

#### Multimodal Supervised Fine-Tuning

```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen3vl_full_sft.yaml
```

### Merging LoRA Adapters and Quantization

#### Merge LoRA Adapters

Note: DO NOT use a quantized model or set `quantization_bit` when merging LoRA adapters.

```bash
llamafactory-cli export examples/merge_lora/qwen3_lora_sft.yaml
```

#### Quantizing a Model Using AutoGPTQ

```bash
llamafactory-cli export examples/merge_lora/qwen3_gptq.yaml
```

#### Save Ollama Modelfile

```bash
llamafactory-cli export examples/merge_lora/qwen3_full_sft.yaml
```
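
A hypothetical follow-up, assuming Ollama is installed and the export wrote a `Modelfile` into the export directory configured in the YAML:

```bash
# Register the exported model with a local Ollama instance
# (the model name and path below are illustrative).
ollama create qwen3-full-sft -f path/to/export_dir/Modelfile
```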

### Inferring LoRA Fine-Tuned Models

#### Evaluation using vLLM's Multi-GPU Inference

```bash
python scripts/vllm_infer.py --model_name_or_path Qwen/Qwen3-4B-Instruct-2507 --template qwen3_nothink --dataset alpaca_en_demo
python scripts/eval_bleu_rouge.py generated_predictions.jsonl
```
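
The command above follows the device-selection convention from the top of this document; a sketch of a two-GPU run, assuming the script derives vLLM's tensor-parallel size from the visible devices:

```bash
# Hypothetical two-GPU inference run; device indices are illustrative.
CUDA_VISIBLE_DEVICES=0,1 python scripts/vllm_infer.py \
    --model_name_or_path Qwen/Qwen3-4B-Instruct-2507 \
    --template qwen3_nothink \
    --dataset alpaca_en_demo
```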

#### Use CLI ChatBox

```bash
llamafactory-cli chat examples/inference/qwen3_lora_sft.yaml
```

#### Use Web UI ChatBox

```bash
llamafactory-cli webchat examples/inference/qwen3_lora_sft.yaml
```

#### Launch OpenAI-style API

```bash
llamafactory-cli api examples/inference/qwen3_lora_sft.yaml
```
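
Once the server is up, a minimal usage sketch with `curl`, assuming the default address `http://localhost:8000` (the `model` field value is illustrative):

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "qwen3_lora_sft",
        "messages": [{"role": "user", "content": "Hello!"}]
    }'
```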

### Extras

#### Full-Parameter Fine-Tuning using GaLore

```bash
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
```

#### Full-Parameter Fine-Tuning using APOLLO

```bash
llamafactory-cli train examples/extras/apollo/llama3_full_sft.yaml
```

#### Full-Parameter Fine-Tuning using BAdam

```bash
llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
```

#### Full-Parameter Fine-Tuning using Adam-mini

```bash
llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml
```

#### Full-Parameter Fine-Tuning using Muon

```bash
llamafactory-cli train examples/extras/muon/qwen2_full_sft.yaml
```

#### LoRA+ Fine-Tuning

```bash
llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
```

#### PiSSA Fine-Tuning

```bash
llamafactory-cli train examples/extras/pissa/llama3_lora_sft.yaml
```

#### Mixture-of-Depths Fine-Tuning

```bash
llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
```

#### LLaMA-Pro Fine-Tuning

```bash
bash examples/extras/llama_pro/expand.sh
llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
```

#### FSDP+QLoRA Fine-Tuning

```bash
bash examples/extras/fsdp_qlora/train.sh
```

#### OFT Fine-Tuning

```bash
llamafactory-cli train examples/extras/oft/llama3_oft_sft.yaml
```

#### QOFT Fine-Tuning

```bash
llamafactory-cli train examples/extras/qoft/llama3_oft_sft_bnb_npu.yaml
```