gaoqiong / lm-evaluation-harness — commit fd33fb97

comment on padding method for encoder

Authored Jun 11, 2023 by Benjamin Fattori; committed Jun 22, 2023 by lintangsutawika
Parent commit: 26a9a445
Showing 1 changed file with 2 additions and 1 deletion

lm_eval/models/seq2seq.py (+2 −1)
@@ -186,11 +186,12 @@ class Seq2SeqHFLM(LM):
                 )
             )
+        # TODO: Right now, we pass single EOT token to the Encoder and the full context to the decoder
         rolling_token_windows = [(None,) + x for x in rolling_token_windows]

         pad_amnt = 0
         if self.world_size > 1:
-            # TODO: Comment on what we do here
+            # We pad out the external document-level iterator so the inner iterator doesn't hang
             mytensor = torch.tensor(len(rolling_token_windows), device=self.device)
             gathered = (
                 self.accelerator.gather(mytensor).cpu().detach().numpy().tolist()
             )
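The `(None,) + x` line in the hunk is the mechanism the new TODO describes: each rolling window gains a leading empty slot for the encoder context, which the seq2seq path then fills with a single EOT token while the decoder receives the full context. A minimal sketch with made-up token values, assuming (as elsewhere in the harness) that each window is an `(input_tokens, pred_tokens)` pair:

```python
# Illustrative sketch of the `(None,) + x` transformation from the hunk.
# Window contents are made up; each window is assumed to be an
# (input_tokens, pred_tokens) pair, and prepending None marks "no encoder
# input yet" for the seq2seq code path to handle downstream.
rolling_token_windows = [([0, 1, 2], [1, 2, 3]), ([3, 4], [4, 5])]
rolling_token_windows = [(None,) + x for x in rolling_token_windows]
# every window is now a 3-tuple with None in the encoder slot
print(rolling_token_windows[0])  # (None, [0, 1, 2], [1, 2, 3])
```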
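The comment added in the hunk explains why the `gather` exists: under `world_size > 1`, ranks can hold different numbers of rolling windows, and a rank that runs out early would leave the others hanging inside collective calls. A dependency-free sketch of that padding idea, using hypothetical names (`pad_rolling_windows` is not a harness function) and plain lists in place of the `accelerator.gather` of per-rank counts:

```python
def pad_rolling_windows(windows_per_rank):
    """Pad each rank's window list up to the global maximum so that all
    ranks make the same number of collective calls and none of them hangs.
    Returns the padded lists and the pad amount applied to each rank."""
    # stands in for gathering len(rolling_token_windows) from every rank
    counts = [len(w) for w in windows_per_rank]
    target = max(counts)
    padded, pad_amnts = [], []
    for w, n in zip(windows_per_rank, counts):
        pad = target - n
        # reuse the last real window as a dummy; its results are discarded
        padded.append(w + [w[-1]] * pad if w else w)
        pad_amnts.append(pad)
    return padded, pad_amnts

# three ranks with uneven numbers of windows
ranks = [["w0", "w1", "w2"], ["w0"], ["w0", "w1"]]
padded, pads = pad_rolling_windows(ranks)
print([len(p) for p in padded])  # [3, 3, 3]
print(pads)                      # [0, 2, 1]
```

In the real code, `pad_amnt` is remembered per rank precisely so those dummy results can be sliced off after the gathered computation completes.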