zhougaofeng / internlm2-math-7B

Commit 77d22308, authored Jun 11, 2024 by zhougaofeng: Upload New File (parent 47f1dd37). Pipeline #1098 canceled.
Showing 1 changed file with 26 additions and 0 deletions: finetune/multi_node.sh (new file, mode 100644).
#!/bin/bash
# also launch it on the slave machine using slave_config.yaml
# torchrun rendezvous settings: 2 processes (GPUs) per node, this node is rank 0
NPROC_PER_NODE=2
NNODES=1
RANK=0
MASTER_ADDR=127.0.0.1
MASTER_PORT=17170
# LoRA SFT of internlm2-math-7b, restricted to GPUs 6 and 7
CUDA_VISIBLE_DEVICES=6,7 torchrun \
    --nproc_per_node $NPROC_PER_NODE \
    --nnodes $NNODES \
    --node_rank $RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT \
    src/train.py \
    --stage sft \
    --do_train \
    --model_name_or_path /home/practice/internlm2-math-7b \
    --dataset alpaca_en_demo \
    --template intern2 \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir saves/intern2/lora/sft \
    --overwrite_output_dir \
    --overwrite_cache \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 32 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-4 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
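With these settings, the effective global batch size is the per-device batch size times the gradient-accumulation steps times the world size (NPROC_PER_NODE × NNODES). A quick sanity check of that arithmetic:

```shell
# effective global batch size for this launch (illustrative calculation only)
PER_DEVICE=2    # --per_device_train_batch_size
GRAD_ACCUM=32   # --gradient_accumulation_steps
WORLD_SIZE=2    # NPROC_PER_NODE * NNODES = 2 * 1
echo $(( PER_DEVICE * GRAD_ACCUM * WORLD_SIZE ))  # prints 128
```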
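The comment at the top says to launch a matching command on the slave machine. A minimal sketch of that second-node invocation, assuming a two-node setup; the node count, rank, and master address shown here (10.0.0.1 as a stand-in for the master's real IP) are illustrative assumptions, not values from this commit:

```shell
# HYPOTHETICAL slave-node launch: NNODES must be 2 on BOTH nodes,
# RANK becomes 1 here, and MASTER_ADDR must point at the rank-0 machine.
NPROC_PER_NODE=2
NNODES=2
RANK=1
MASTER_ADDR=10.0.0.1   # assumption: replace with the master node's address
MASTER_PORT=17170
CUDA_VISIBLE_DEVICES=6,7 torchrun \
    --nproc_per_node $NPROC_PER_NODE \
    --nnodes $NNODES \
    --node_rank $RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT \
    src/train.py \
    # ... same training arguments as on the master node
```

Note that the script as committed sets NNODES=1, which makes torchrun run a single-node job; for a true multi-node run both machines would need the same NNODES=2.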