ModelZoo / LLaMA-Factory-Llama3.2_pytorch

Commit 12d5cbac, authored Oct 21, 2024 by chenzk: v1.0
Pipeline #1780 canceled with stages.

The commit changes 259 files; this page shows 20 of them, with 526 additions and 0 deletions.
examples/inference/llama3_vllm.yaml              +4  -0
examples/inference/llava1_5.yaml                 +2  -0
examples/inference/qwen2_vl.yaml                 +2  -0
examples/merge_lora/llama3_gptq.yaml             +11 -0
examples/merge_lora/llama3_lora_sft.yaml         +13 -0
examples/merge_lora/qwen2vl_lora_sft.yaml        +13 -0
examples/train_full/llama3_full_predict.yaml     +23 -0
examples/train_full/llama3_full_sft_ds3.yaml     +39 -0
examples/train_full/qwen2vl_full_sft.yaml        +39 -0
examples/train_lora/llama3_lora_dpo.yaml         +41 -0
examples/train_lora/llama3_lora_eval.yaml        +18 -0
examples/train_lora/llama3_lora_kto.yaml         +40 -0
examples/train_lora/llama3_lora_ppo.yaml         +39 -0
examples/train_lora/llama3_lora_predict.yaml     +25 -0
examples/train_lora/llama3_lora_pretrain.yaml    +38 -0
examples/train_lora/llama3_lora_reward.yaml      +39 -0
examples/train_lora/llama3_lora_sft.yaml         +39 -0
examples/train_lora/llama3_lora_sft_ds0.yaml     +40 -0
examples/train_lora/llama3_lora_sft_ds3.yaml     +40 -0
examples/train_lora/llama3_preprocess.yaml       +21 -0
examples/inference/llama3_vllm.yaml (new file)

model_name_or_path: meta-llama/Llama-3.2-3B-Instruct
template: llama3
infer_backend: vllm
vllm_enforce_eager: true
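For orientation, here is a minimal sketch of the same inference setup driven through the vLLM Python API directly. The model id and the enforce-eager flag mirror the config above; the prompt and sampling values are made up for illustration, and this is not necessarily how LLaMA-Factory wires vLLM internally (configs like this one are typically passed to the CLI, e.g. llamafactory-cli chat examples/inference/llama3_vllm.yaml).

# Minimal sketch: direct vLLM generation mirroring llama3_vllm.yaml.
# Assumes vllm is installed and the model is available from the Hugging Face Hub.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.2-3B-Instruct", enforce_eager=True)  # vllm_enforce_eager: true
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)
outputs = llm.generate(["Summarize what LoRA fine-tuning does."], params)
print(outputs[0].outputs[0].text)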
examples/inference/llava1_5.yaml (new file)

model_name_or_path: llava-hf/llava-1.5-7b-hf
template: llava
examples/inference/qwen2_vl.yaml (new file)

model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
template: qwen2_vl
examples/merge_lora/llama3_gptq.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3

### export
export_dir: models/llama3_gptq
export_quantization_bit: 4
export_quantization_dataset: data/c4_demo.json
export_size: 2
export_device: cpu
export_legacy_format: false
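This config post-quantizes the exported model to 4-bit GPTQ, using data/c4_demo.json as the calibration set and sharding the export into roughly 2 GB files. As a hedged sketch of what happens afterwards, an exported checkpoint like this would usually be loadable with plain transformers, provided a GPTQ backend is installed:

# Minimal sketch: loading the GPTQ export produced by this config.
# Assumes transformers + accelerate plus a GPTQ backend (optimum/auto-gptq or gptqmodel) are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("models/llama3_gptq")
model = AutoModelForCausalLM.from_pretrained("models/llama3_gptq", device_map="auto")
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))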
examples/merge_lora/llama3_lora_sft.yaml (new file)

### Note: DO NOT use quantized model or quantization_bit when merging lora adapters

### model
model_name_or_path: meta-llama/Llama-3.2-3B-Instruct
adapter_name_or_path: saves/Llama-3.2-3B/lora/sft
template: llama3
finetuning_type: lora

### export
export_dir: models/llama3_lora_sft
export_size: 2
export_device: cpu
export_legacy_format: false
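The export step here essentially folds the LoRA deltas back into the base weights and saves a standalone checkpoint, which is why the note forbids a quantized base. In LLaMA-Factory the merge is typically run via llamafactory-cli export on this file; a rough equivalent with plain PEFT, for illustration only (paths taken from the config above), looks like this:

# Minimal sketch: merging a LoRA adapter into its base model with PEFT,
# roughly what this merge_lora config asks LLaMA-Factory to do.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "saves/Llama-3.2-3B/lora/sft")
merged = model.merge_and_unload()  # fold W + BA back into a single weight matrix
merged.save_pretrained("models/llama3_lora_sft")
AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct").save_pretrained("models/llama3_lora_sft")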
examples/merge_lora/qwen2vl_lora_sft.yaml (new file)

### Note: DO NOT use quantized model or quantization_bit when merging lora adapters

### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
adapter_name_or_path: saves/qwen2_vl-7b/lora/sft
template: qwen2_vl
finetuning_type: lora

### export
export_dir: models/qwen2_vl_lora_sft
export_size: 2
export_device: cpu
export_legacy_format: false
examples/train_full/llama3_full_predict.yaml (new file)

### model
model_name_or_path: saves/llama3-8b/full/sft

### method
stage: sft
do_predict: true
finetuning_type: full

### dataset
eval_dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/full/predict
overwrite_output_dir: true

### eval
per_device_eval_batch_size: 1
predict_with_generate: true
examples/train_full/llama3_full_sft_ds3.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
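A note on the batch-size arithmetic used throughout these training configs: the effective batch size per optimizer step is per_device_train_batch_size × gradient_accumulation_steps × number of data-parallel GPUs. With the values above and, say, 8 GPUs under DeepSpeed ZeRO-3 (the GPU count is an assumption set by the launcher, not by this YAML):

# Effective batch size for llama3_full_sft_ds3.yaml (assuming 8 GPUs; adjust to your setup).
per_device_train_batch_size = 1
gradient_accumulation_steps = 2
num_gpus = 8  # assumption: not specified in the YAML
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 16 sequences per optimizer step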
examples/train_full/qwen2vl_full_sft.yaml (new file)

### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: mllm_demo,identity
template: qwen2_vl
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/qwen2_vl-7b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
examples/train_lora/llama3_lora_dpo.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: dpo
do_train: true
finetuning_type: lora
lora_target: all
pref_beta: 0.1
pref_loss: sigmoid  # choices: [sigmoid (dpo), orpo, simpo]

### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/dpo
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 5.0e-6
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
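For reference, pref_loss: sigmoid selects the standard DPO objective, with pref_beta playing the role of β. Given a prompt x with chosen response y_w and rejected response y_l from dpo_en_demo, the per-pair loss is (standard DPO formulation, stated here for context rather than copied from this repo):

\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\left(\beta\left[\log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right]\right)

where \pi_{\mathrm{ref}} is the frozen reference model (here the base Meta-Llama-3-8B-Instruct) and β = 0.1 controls how far the policy may drift from it.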
examples/train_lora/llama3_lora_eval.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft

### method
finetuning_type: lora

### dataset
task: mmlu_test  # choices: [mmlu_test, ceval_validation, cmmlu_test]
template: fewshot
lang: en
n_shot: 5

### output
save_dir: saves/llama3-8b/lora/eval

### eval
batch_size: 4
examples/train_lora/llama3_lora_kto.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: kto
do_train: true
finetuning_type: lora
lora_target: all
pref_beta: 0.1

### dataset
dataset: kto_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/kto
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 5.0e-6
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
examples/train_lora/llama3_lora_ppo.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
reward_model: saves/llama3-8b/lora/reward

### method
stage: ppo
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/ppo
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### generate
max_new_tokens: 512
top_k: 0
top_p: 0.9
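Note that reward_model points at the LoRA reward adapter produced by llama3_lora_reward.yaml further down in this commit. The ### generate block controls the rollouts sampled during PPO: top_k: 0 disables top-k filtering and top_p: 0.9 keeps nucleus sampling. Outside of LLaMA-Factory, the same sampling settings expressed with the plain transformers generate API look roughly like this (the prompt is illustrative, and the checkpoint would normally be the current policy rather than the base model):

# Minimal sketch: the PPO rollout sampling settings via transformers' generate().
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype="auto")
inputs = tok("Write a haiku about gradient descent.", return_tensors="pt")
out = model.generate(**inputs, do_sample=True, top_k=0, top_p=0.9, max_new_tokens=512)
print(tok.decode(out[0], skip_special_tokens=True))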
examples/train_lora/llama3_lora_predict.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft

### method
stage: sft
do_predict: true
finetuning_type: lora

### dataset
eval_dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/predict
overwrite_output_dir: true

### eval
per_device_eval_batch_size: 1
predict_with_generate: true
ddp_timeout: 180000000
examples/train_lora/llama3_lora_pretrain.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: pt
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: c4_demo
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/pretrain
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
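Here stage: pt is continued pre-training: c4_demo is raw text, which is why no chat template is set and the objective is ordinary next-token prediction over the corpus. For context, the standard causal language-modeling loss over a sequence x is:

\mathcal{L}_{\mathrm{PT}} = -\sum_{t} \log p_\theta\left(x_t \mid x_{<t}\right)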
examples/train_lora/llama3_lora_reward.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: rm
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/reward
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
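stage: rm trains a pairwise reward model on the same dpo_en_demo preference data; its output_dir is exactly what llama3_lora_ppo.yaml above references as reward_model. The usual pairwise (Bradley-Terry style) objective for a reward model r_θ on a chosen/rejected pair is (standard formulation, stated for context rather than taken from this repo):

\mathcal{L}_{\mathrm{RM}} = -\log \sigma\left(r_\theta(x, y_w) - r_\theta(x, y_l)\right)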
examples/train_lora/llama3_lora_sft.yaml (new file)

### model
model_name_or_path: meta-llama/Llama-3.2-3B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/Llama-3.2-3B/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
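This config is typically run with llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml, and the adapter it writes to saves/Llama-3.2-3B/lora/sft is what merge_lora/llama3_lora_sft.yaml earlier in this commit consumes. lora_target: all attaches LoRA adapters to every linear projection in the transformer rather than a hand-picked subset. A rough PEFT equivalent for illustration only (the rank and alpha below are assumed values, not read from this YAML; LLaMA-Factory has its own defaults):

# Minimal sketch: a PEFT LoraConfig roughly corresponding to lora_target: all.
# r and lora_alpha are illustrative, not taken from this config.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct", torch_dtype="auto")
config = LoraConfig(r=8, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable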
examples/train_lora/llama3_lora_sft_ds0.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
deepspeed: examples/deepspeed/ds_z0_config.json

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
examples/train_lora/llama3_lora_sft_ds3.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
examples/train_lora/llama3_preprocess.yaml (new file)

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
tokenized_path: saves/llama3-8b/dataset/sft

### output
output_dir: saves/llama3-8b/lora/sft
overwrite_output_dir: true
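This last config differs from the plain LoRA SFT recipe mainly by tokenized_path: running it once materializes the tokenized dataset on disk so later runs can skip preprocessing. Assuming the cache is written in the Hugging Face datasets on-disk format (an assumption worth verifying against your local output), a quick sanity check could look like:

# Minimal sketch: inspecting the tokenized cache written to tokenized_path.
# Assumes the cache is in Hugging Face `datasets` save_to_disk format.
from datasets import load_from_disk

ds = load_from_disk("saves/llama3-8b/dataset/sft")
print(ds)  # dataset (or dict of splits) with columns such as input_ids / labels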