chenpangpang / transformers · Commits

Unverified commit 94785906, authored Sep 27, 2019 by Denny, committed by GitHub on Sep 27, 2019.
Update run_lm_finetuning.py
The previously used method, as spelled, does not exist in the class.
Parent: ca559826
Showing 1 changed file with 1 addition and 1 deletion.

examples/run_lm_finetuning.py (+1, -1)
@@ -75,7 +75,7 @@ class TextDataset(Dataset):
             tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
 
             for i in range(0, len(tokenized_text)-block_size+1, block_size):  # Truncate in block of block_size
-                self.examples.append(tokenizer.add_special_tokens_single_sentence(tokenized_text[i:i+block_size]))
+                self.examples.append(tokenizer.add_special_tokens_single_sequence(tokenized_text[i:i+block_size]))
             # Note that we are loosing the last truncated example here for the sake of simplicity (no padding)
             # If your dataset is small, first you should loook for a bigger one :-) and second you
             # can change this behavior by adding (model specific) padding.
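For context, the corrected call is the one that chunks the tokenized corpus into fixed-size blocks and wraps each block with the tokenizer's special tokens before it is stored as a training example. Below is a minimal, runnable sketch of that loop; DummyTokenizer and build_examples are hypothetical stand-ins introduced only so the snippet is self-contained, and the token id values are made up for illustration. Only add_special_tokens_single_sequence mirrors the contract implied by the diff (a list of token ids in, the same list wrapped with special token ids out).

# Minimal sketch of the chunking logic touched by this commit.
# DummyTokenizer is a hypothetical stand-in for the real tokenizer object;
# all id values below are made up for illustration only.

class DummyTokenizer:
    bos_id = 0
    eos_id = 1

    def tokenize(self, text):
        return text.split()

    def convert_tokens_to_ids(self, tokens):
        # Made-up mapping: hash each token into a small fake vocabulary.
        return [2 + (hash(tok) % 100) for tok in tokens]

    def add_special_tokens_single_sequence(self, ids):
        # Wrap a block of token ids with special tokens, as the renamed call does.
        return [self.bos_id] + ids + [self.eos_id]


def build_examples(text, tokenizer, block_size):
    # Same structure as the loop in TextDataset.__init__ shown above.
    tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
    examples = []
    for i in range(0, len(tokenized_text) - block_size + 1, block_size):
        examples.append(
            tokenizer.add_special_tokens_single_sequence(tokenized_text[i:i + block_size])
        )
    # As in the original file, the trailing partial block is dropped (no padding).
    return examples


if __name__ == "__main__":
    blocks = build_examples("one two three four five six seven", DummyTokenizer(), block_size=3)
    print(blocks)  # two blocks of 3 ids each, each wrapped in bos/eos ids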