chenpangpang / transformers

Commit 3f91338b, authored Sep 06, 2019 by LysandreJik

Patched a few outdated parameters

Parent: f47f9a58
Showing 1 changed file with 9 additions and 9 deletions:
examples/README.md (+9, -9)
@@ -133,7 +133,7 @@ python run_glue.py \
     --do_lower_case \
     --data_dir $GLUE_DIR/$TASK_NAME \
     --max_seq_length 128 \
-    --train_batch_size 32 \
+    --per_gpu_train_batch_size 32 \
     --learning_rate 2e-5 \
     --num_train_epochs 3.0 \
     --output_dir /tmp/$TASK_NAME/
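Every hunk in this commit makes the same kind of substitution: the examples used to pass the outdated --train_batch_size and now pass --per_gpu_train_batch_size, and the SQuAD hunks additionally replace --do_predict with --do_eval. For orientation, a patched single-task GLUE run would look roughly like the sketch below; the flags above --do_lower_case are not part of this hunk and are assumptions based on the surrounding README rather than something this diff shows.

# Hedged sketch of a patched GLUE command; GLUE_DIR and TASK_NAME follow the
# conventions already used in the hunk, while the leading flags are assumed.
export GLUE_DIR=/path/to/glue
export TASK_NAME=MRPC

python run_glue.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --task_name $TASK_NAME \
    --do_train \
    --do_eval \
    --do_lower_case \
    --data_dir $GLUE_DIR/$TASK_NAME \
    --max_seq_length 128 \
    --per_gpu_train_batch_size 32 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/$TASK_NAME/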
@@ -174,7 +174,7 @@ python run_glue.py \
     --do_lower_case \
     --data_dir $GLUE_DIR/MRPC/ \
     --max_seq_length 128 \
-    --train_batch_size 32 \
+    --per_gpu_train_batch_size 32 \
     --learning_rate 2e-5 \
     --num_train_epochs 3.0 \
     --output_dir /tmp/mrpc_output/
@@ -201,7 +201,7 @@ python run_glue.py \
     --do_lower_case \
     --data_dir $GLUE_DIR/MRPC/ \
     --max_seq_length 128 \
-    --train_batch_size 32 \
+    --per_gpu_train_batch_size 32 \
     --learning_rate 2e-5 \
     --num_train_epochs 3.0 \
     --output_dir /tmp/mrpc_output/ \
@@ -226,7 +226,7 @@ python -m torch.distributed.launch \
     --do_lower_case \
     --data_dir $GLUE_DIR/MRPC/ \
     --max_seq_length 128 \
-    --train_batch_size 8 \
+    --per_gpu_train_batch_size 8 \
     --learning_rate 2e-5 \
     --num_train_epochs 3.0 \
     --output_dir /tmp/mrpc_output/
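The hunk above and the MNLI one below come from the multi-GPU examples, which wrap the same script in torch.distributed.launch so that each process, typically one per GPU, trains with its own per-GPU batch, which is exactly where the per_gpu_ prefix matters. The hunks only show the trailing arguments, so the launcher line (including the --nproc_per_node value) and the leading flags in the sketch below are assumptions for illustration.

# Hedged sketch of the patched multi-GPU MRPC run; --nproc_per_node and the
# leading flags are assumed, only the trailing flags appear in the hunk above.
python -m torch.distributed.launch --nproc_per_node 8 run_glue.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --task_name MRPC \
    --do_train \
    --do_eval \
    --do_lower_case \
    --data_dir $GLUE_DIR/MRPC/ \
    --max_seq_length 128 \
    --per_gpu_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mrpc_output/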
@@ -260,7 +260,7 @@ python -m torch.distributed.launch \
     --do_lower_case \
     --data_dir $GLUE_DIR/MNLI/ \
     --max_seq_length 128 \
-    --train_batch_size 8 \
+    --per_gpu_train_batch_size 8 \
     --learning_rate 2e-5 \
     --num_train_epochs 3.0 \
     --output_dir output_dir \
@@ -303,11 +303,11 @@ python run_squad.py \
     --model_type bert \
     --model_name_or_path bert-base-cased \
     --do_train \
-    --do_predict \
+    --do_eval \
     --do_lower_case \
     --train_file $SQUAD_DIR/train-v1.1.json \
     --predict_file $SQUAD_DIR/dev-v1.1.json \
-    --train_batch_size 12 \
+    --per_gpu_train_batch_size 12 \
     --learning_rate 3e-5 \
     --num_train_epochs 2.0 \
     --max_seq_length 384 \
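Besides the batch-size rename, the SQuAD hunk above also swaps --do_predict for --do_eval, the evaluation flag the patched example expects. Reassembled, a patched single-GPU SQuAD run would read roughly as follows; --doc_stride and --output_dir do not appear in this hunk and are assumed values.

# Hedged sketch of the patched SQuAD example; everything except --doc_stride
# and --output_dir (assumed here) mirrors the hunk above.
export SQUAD_DIR=/path/to/squad

python run_squad.py \
    --model_type bert \
    --model_name_or_path bert-base-cased \
    --do_train \
    --do_eval \
    --do_lower_case \
    --train_file $SQUAD_DIR/train-v1.1.json \
    --predict_file $SQUAD_DIR/dev-v1.1.json \
    --per_gpu_train_batch_size 12 \
    --learning_rate 3e-5 \
    --num_train_epochs 2.0 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir /tmp/debug_squad/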
@@ -332,7 +332,7 @@ python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
     --model_type bert \
     --model_name_or_path bert-base-cased \
     --do_train \
-    --do_predict \
+    --do_eval \
     --do_lower_case \
     --train_file $SQUAD_DIR/train-v1.1.json \
     --predict_file $SQUAD_DIR/dev-v1.1.json \
@@ -341,7 +341,7 @@ python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
     --max_seq_length 384 \
     --doc_stride 128 \
     --output_dir ../models/wwm_uncased_finetuned_squad/ \
-    --train_batch_size 24 \
+    --per_gpu_train_batch_size 24 \
     --gradient_accumulation_steps 12
 ```
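In the whole-word-masking hunk above, the renamed flag sits next to --gradient_accumulation_steps and an eight-process launcher, so the per-step numbers multiply. Assuming the usual per-process semantics of the flag (not something this diff states), the number of examples consumed per optimizer step can be estimated as in the sketch below.

# Hedged back-of-the-envelope estimate for the distributed whole-word-masking
# SQuAD example above; values are taken from the hunk and its @@ context line.
PER_GPU_TRAIN_BATCH_SIZE=24
NPROC_PER_NODE=8
GRADIENT_ACCUMULATION_STEPS=12
# examples per optimizer step = per-GPU batch * processes * accumulation steps
echo $(( PER_GPU_TRAIN_BATCH_SIZE * NPROC_PER_NODE * GRADIENT_ACCUMULATION_STEPS ))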