chenpangpang / transformers · Commits

Commit 1135f238, authored Apr 15, 2019 by thomwolf
Parent: cc433070

clean up logger in examples for distributed case

Showing 3 changed files, with 20 additions and 12 deletions:

- README.md (+10, -6)
- examples/run_classifier.py (+5, -3)
- examples/run_squad.py (+5, -3)
README.md (view file @ 1135f238)

````diff
@@ -1274,18 +1274,20 @@ To get these results we used a combination of:
 Here is the full list of hyper-parameters for this run:
 
 ```bash
+export SQUAD_DIR=/path/to/SQUAD
+
 python ./run_squad.py \
   --bert_model bert-large-uncased \
   --do_train \
   --do_predict \
   --do_lower_case \
-  --train_file $SQUAD_TRAIN \
-  --predict_file $SQUAD_EVAL \
+  --train_file $SQUAD_DIR/train-v1.1.json \
+  --predict_file $SQUAD_DIR/dev-v1.1.json \
   --learning_rate 3e-5 \
   --num_train_epochs 2 \
   --max_seq_length 384 \
   --doc_stride 128 \
-  --output_dir $OUTPUT_DIR \
+  --output_dir /tmp/debug_squad/ \
   --train_batch_size 24 \
   --gradient_accumulation_steps 2
 ```
@@ -1294,18 +1296,20 @@ If you have a recent GPU (starting from NVIDIA Volta series), you should try **1
 Here is an example of hyper-parameters for a FP16 run we tried:
 
 ```bash
+export SQUAD_DIR=/path/to/SQUAD
+
 python ./run_squad.py \
   --bert_model bert-large-uncased \
   --do_train \
   --do_predict \
   --do_lower_case \
-  --train_file $SQUAD_TRAIN \
-  --predict_file $SQUAD_EVAL \
+  --train_file $SQUAD_DIR/train-v1.1.json \
+  --predict_file $SQUAD_DIR/dev-v1.1.json \
   --learning_rate 3e-5 \
   --num_train_epochs 2 \
   --max_seq_length 384 \
   --doc_stride 128 \
-  --output_dir $OUTPUT_DIR \
+  --output_dir /tmp/debug_squad/ \
   --train_batch_size 24 \
   --fp16 \
   --loss_scale 128
````
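The README commands above are single-process invocations; the logger cleanup in this commit only changes behavior once the script runs with one process per GPU. As a sketch only (the GPU count, paths, and the choice to reuse the SQuAD hyper-parameters above are assumptions, not part of this commit), such a launch with PyTorch's `torch.distributed.launch` helper looks like:

```bash
export SQUAD_DIR=/path/to/SQUAD

# The launcher spawns one worker per GPU and passes --local_rank=<n> to each;
# that argument is what the rank-conditional logging.basicConfig(...) added in
# this commit keys on.
python -m torch.distributed.launch --nproc_per_node=8 ./run_squad.py \
  --bert_model bert-large-uncased \
  --do_train \
  --do_predict \
  --do_lower_case \
  --train_file $SQUAD_DIR/train-v1.1.json \
  --predict_file $SQUAD_DIR/dev-v1.1.json \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/debug_squad/ \
  --train_batch_size 24
```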
examples/run_classifier.py (view file @ 1135f238)

```diff
@@ -40,9 +40,6 @@ from pytorch_pretrained_bert.modeling import BertForSequenceClassification, Bert
 from pytorch_pretrained_bert.tokenization import BertTokenizer
 from pytorch_pretrained_bert.optimization import BertAdam, warmup_linear
 
-logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
-                    datefmt='%m/%d/%Y %H:%M:%S',
-                    level=logging.INFO)
 logger = logging.getLogger(__name__)
@@ -697,6 +694,11 @@ def main():
         n_gpu = 1
         # Initializes the distributed backend which will take care of sychronizing nodes/GPUs
         torch.distributed.init_process_group(backend='nccl')
+
+    logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
+                        datefmt='%m/%d/%Y %H:%M:%S',
+                        level=logging.INFO if args.local_rank in [-1, 0] else logging.WARN)
+
     logger.info("device: {} n_gpu: {}, distributed training: {}, 16-bits training: {}".format(
         device, n_gpu, bool(args.local_rank != -1), args.fp16))
```
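The conditional log level above is the heart of the commit and is easy to demonstrate in isolation. A minimal sketch, assuming the same convention as the scripts (`local_rank == -1` for non-distributed runs, `0` for the main worker); `setup_logger` is a hypothetical helper, not part of the diff:

```python
import logging

def setup_logger(local_rank):
    # Hypothetical helper. The main process (rank 0) and non-distributed runs
    # (rank -1) log at INFO; every other worker is raised to WARN, so N GPUs
    # do not print N copies of each progress message, while warnings and
    # errors still get through everywhere.
    logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
                        datefmt='%m/%d/%Y %H:%M:%S',
                        level=logging.INFO if local_rank in [-1, 0] else logging.WARN)
    return logging.getLogger(__name__)

logger = setup_logger(local_rank=0)
logger.info("visible: this process is the main worker")
# With local_rank=1, the info() call above would be filtered out,
# but logger.warning(...) would still be printed.
```

Moving the `basicConfig` call from import time into `main()` matters for two reasons: `args.local_rank` is only known after argument parsing, and `logging.basicConfig` does nothing once the root logger already has handlers, so the old module-level call would have pinned every process to INFO.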
examples/run_squad.py (view file @ 1135f238)

```diff
@@ -46,9 +46,6 @@ if sys.version_info[0] == 2:
 else:
     import pickle
 
-logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
-                    datefmt='%m/%d/%Y %H:%M:%S',
-                    level=logging.INFO)
 logger = logging.getLogger(__name__)
@@ -848,6 +845,11 @@ def main():
         n_gpu = 1
         # Initializes the distributed backend which will take care of sychronizing nodes/GPUs
         torch.distributed.init_process_group(backend='nccl')
+
+    logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
+                        datefmt='%m/%d/%Y %H:%M:%S',
+                        level=logging.INFO if args.local_rank in [-1, 0] else logging.WARN)
+
     logger.info("device: {} n_gpu: {}, distributed training: {}, 16-bits training: {}".format(
         device, n_gpu, bool(args.local_rank != -1), args.fp16))
```
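For readers wondering why the check is `args.local_rank in [-1, 0]` rather than `== 0`: in these example scripts `--local_rank` defaults to `-1`, marking a plain single-process run, so both that case and the rank-0 worker stay verbose. A paraphrased sketch of the relevant argument definition (not the verbatim source):

```python
import argparse

parser = argparse.ArgumentParser()
# -1 means "not distributed"; torch.distributed.launch overrides it with the
# worker's actual local rank (0 .. nproc_per_node-1).
parser.add_argument("--local_rank", type=int, default=-1,
                    help="local_rank for distributed training on gpus")
args = parser.parse_args()

# args.local_rank in [-1, 0] -> True for a single-process run and for the
# main distributed worker; False for every other worker.
```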