chenpangpang / transformers · Commits

Commit f18ac4c2 authored Dec 24, 2019 by patrickvonplaten

fix sequence length for prepare_inputs for xlnet

parent 359dc438
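For context, prepare_inputs_for_generation is the hook the generation loop calls to pad the prompt with a dummy token and build the matching permutation mask, which is the code path this commit fixes. A hedged usage sketch follows, assuming a transformers version where generate() is available for XLNetLMHeadModel; the checkpoint name, prompt, and max_length are illustrative assumptions, not part of this commit:

from transformers import XLNetLMHeadModel, XLNetTokenizer

# Illustrative only: any XLNet checkpoint would do.
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = tokenizer.encode("The weather today is", return_tensors="pt")
# generate() calls prepare_inputs_for_generation at each step, which is where
# the sequence-length fix in this commit applies.
output_ids = model.generate(input_ids, max_length=20)
print(tokenizer.decode(output_ids[0]))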
Showing 1 changed file with 1 addition and 1 deletion

src/transformers/modeling_xlnet.py  (+1, -1)
@@ -1012,11 +1012,11 @@ class XLNetLMHeadModel(XLNetPreTrainedModel):
         # Add dummy token at the end (no attention on this one)
         effective_batch_size = input_ids.shape[0]
-        sequence_length = input_ids.shape[1]
         dummy_token = torch.zeros((effective_batch_size, 1), dtype=torch.long, device=input_ids.device)
         input_ids = torch.cat([input_ids, dummy_token], dim=1)

         # Build permutation mask so that previous tokens don't see last token
+        sequence_length = input_ids.shape[1]
         perm_mask = torch.zeros(
             (effective_batch_size, sequence_length, sequence_length), dtype=torch.float, device=input_ids.device
         )
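Why the move matters: torch.cat appends one dummy token to every sequence, so input_ids.shape[1] grows by one, and measuring it before the concatenation left perm_mask one position short of the padded input. A minimal sketch of the shape bookkeeping, using standalone toy tensors rather than the model's real inputs:

import torch

# Toy batch standing in for input_ids inside prepare_inputs_for_generation.
input_ids = torch.randint(0, 32000, (2, 5))    # batch of 2 sequences, 5 tokens each
effective_batch_size = input_ids.shape[0]

stale_sequence_length = input_ids.shape[1]     # 5: measured before padding (the removed line)

# Append the dummy token that XLNet should never attend to.
dummy_token = torch.zeros((effective_batch_size, 1), dtype=torch.long, device=input_ids.device)
input_ids = torch.cat([input_ids, dummy_token], dim=1)

sequence_length = input_ids.shape[1]           # 6: measured after padding (the added line)
assert sequence_length == stale_sequence_length + 1

# The permutation mask must cover the padded length; with the stale value it
# would be a (2, 5, 5) tensor facing a (2, 6) input.
perm_mask = torch.zeros(
    (effective_batch_size, sequence_length, sequence_length), dtype=torch.float, device=input_ids.device
)
print(input_ids.shape, perm_mask.shape)        # torch.Size([2, 6]) torch.Size([2, 6, 6])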