chenpangpang / transformers · Commits · 1d7d01c0

Unverified commit 1d7d01c0, authored Jul 23, 2019 by Thomas Wolf, committed by GitHub on Jul 23, 2019.

Merge pull request #847 from lpq29743/master

typos

Parents: c4bc6688, 76be189b
Showing 2 changed files with 4 additions and 4 deletions:

  examples/run_glue.py   +2 −2
  examples/run_squad.py  +2 −2
examples/run_glue.py

@@ -116,8 +116,8 @@ def train(args, train_dataset, model, tokenizer):
               'attention_mask':  batch[1],
               'token_type_ids':  batch[2] if args.model_type in ['bert', 'xlnet'] else None,  # XLM don't use segment_ids
               'labels':          batch[3]}
-    ouputs = model(**inputs)
-    loss = ouputs[0]  # model outputs are always tuple in pytorch-transformers (see doc)
+    outputs = model(**inputs)
+    loss = outputs[0]  # model outputs are always tuple in pytorch-transformers (see doc)
     if args.n_gpu > 1:
         loss = loss.mean()  # mean() to average on multi-gpu parallel training
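The renamed variable matters at runtime: models in pytorch-transformers return a plain tuple whose first element is the loss whenever `labels` is passed, so the misspelled `ouputs` would raise a `NameError` on the very next line. A minimal pure-Python sketch of that calling convention (the `StubModel` class is a hypothetical stand-in for a real model, not part of the library):

```python
# Sketch of the pytorch-transformers calling convention: when 'labels'
# is present in the inputs dict, the model returns a tuple whose first
# element is the loss. StubModel is a hypothetical stand-in.

class StubModel:
    def __call__(self, **inputs):
        loss = float(len(inputs.get("labels", [])))  # pretend loss
        logits = [0.0, 0.0, 0.0]                     # placeholder logits
        if "labels" in inputs:
            return (loss, logits)   # (loss, logits) when labels are given
        return (logits,)            # (logits,) otherwise

model = StubModel()
inputs = {"input_ids": [1, 2, 3],
          "attention_mask": [1, 1, 1],
          "labels": [0, 1]}
outputs = model(**inputs)           # note: 'outputs', not 'ouputs'
loss = outputs[0]                   # model outputs are always a tuple
print(loss)  # → 2.0
```

The same pattern explains the identical fix in run_squad.py below: both scripts unpack the loss as element 0 of the returned tuple.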
examples/run_squad.py

@@ -129,8 +129,8 @@ def train(args, train_dataset, model, tokenizer):
     if args.model_type in ['xlnet', 'xlm']:
         inputs.update({'cls_index': batch[5],
                        'p_mask':    batch[6]})
-    ouputs = model(**inputs)
-    loss = ouputs[0]  # model outputs are always tuple in pytorch-transformers (see doc)
+    outputs = model(**inputs)
+    loss = outputs[0]  # model outputs are always tuple in pytorch-transformers (see doc)
     if args.n_gpu > 1:
         loss = loss.mean()  # mean() to average on multi-gpu parallel (not distributed) training
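The `loss.mean()` line surviving unchanged in both hunks is worth a note: under PyTorch's `DataParallel`, each GPU replica computes its own scalar loss, so the gathered result is a vector with one entry per GPU, and averaging it recovers a single scalar to backpropagate. A sketch of that behavior under stated assumptions (`FakeLossVector` is a hypothetical stand-in for a torch tensor, so the example runs without PyTorch installed):

```python
# Sketch of why loss.mean() is needed when n_gpu > 1: DataParallel
# gathers one scalar loss per replica into a vector; mean() collapses
# it back to a single scalar. FakeLossVector is a hypothetical stand-in
# for a torch tensor.

class FakeLossVector:
    def __init__(self, values):
        self.values = values          # one loss per GPU replica

    def mean(self):
        return sum(self.values) / len(self.values)

n_gpu = 2
loss = FakeLossVector([0.8, 1.2])     # losses from two replicas
if n_gpu > 1:
    loss = loss.mean()                # average across replicas
print(loss)  # → 1.0
```

On a single GPU the `if` branch is skipped and `loss` is already a scalar, which is why the guard on `args.n_gpu > 1` exists in both scripts.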