chenpangpang / transformers · Commits
Commit 71a38231, authored Jan 30, 2020 by Jared Nielsen, committed by Lysandre Debut on Jan 30, 2020
Correct documentation
parent 01a14ebd
Showing 1 changed file with 6 additions and 6 deletions

examples/README.md (+6, -6)
@@ -404,12 +404,12 @@ exact_match = 81.22
 #### Distributed training
 
-Here is an example using distributed training on 8 V100 GPUs and Bert Whole Word Masking uncased model to reach a F1 > 93 on SQuAD1.0:
+Here is an example using distributed training on 8 V100 GPUs and Bert Whole Word Masking uncased model to reach a F1 > 93 on SQuAD1.1:
 
 ```bash
-python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
+python -m torch.distributed.launch --nproc_per_node=8 ./examples/run_squad.py \
     --model_type bert \
-    --model_name_or_path bert-base-cased \
+    --model_name_or_path bert-large-uncased-whole-word-masking \
     --do_train \
     --do_eval \
     --do_lower_case \
@@ -419,9 +419,9 @@ python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
     --num_train_epochs 2 \
     --max_seq_length 384 \
     --doc_stride 128 \
-    --output_dir ../models/wwm_uncased_finetuned_squad/ \
-    --per_gpu_train_batch_size 24 \
-    --gradient_accumulation_steps 12
+    --output_dir ./examples/models/wwm_uncased_finetuned_squad/ \
+    --per_gpu_eval_batch_size=3 \
+    --per_gpu_train_batch_size=3 \
 ```
 
 Training with the previously defined hyper-parameters yields the following results:
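For context on what the corrected command relies on: `torch.distributed.launch` spawns one worker process per GPU (`--nproc_per_node=8`), passes each worker a `--local_rank` argument, and sets the rendezvous environment variables (`MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`, `RANK`) that `init_process_group` reads. The sketch below is a minimal, hypothetical illustration of that pattern, not code from `run_squad.py` or from this commit.

```python
# Minimal sketch of the pattern a script follows when launched with
# `python -m torch.distributed.launch --nproc_per_node=8 ...`.
# The launcher injects --local_rank for every worker; the script uses it
# to pick its GPU and join the NCCL process group via env:// rendezvous.
import argparse

import torch
import torch.distributed as dist


def main():
    parser = argparse.ArgumentParser()
    # Injected by torch.distributed.launch; -1 means a non-distributed run.
    parser.add_argument("--local_rank", type=int, default=-1)
    args = parser.parse_args()

    if args.local_rank != -1:
        # Distributed run: bind this process to its GPU and initialize NCCL.
        torch.cuda.set_device(args.local_rank)
        dist.init_process_group(backend="nccl")
        device = torch.device("cuda", args.local_rank)
    else:
        # Single-process fallback.
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    print(f"local_rank={args.local_rank}, device={device}")


if __name__ == "__main__":
    main()
```

With the corrected flags, the effective training batch size is `per_gpu_train_batch_size` multiplied by the number of GPUs, i.e. 3 × 8 = 24 examples per optimizer step (assuming the script's default gradient accumulation of 1, since the corrected command no longer passes `--gradient_accumulation_steps`).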