"vscode:/vscode.git/clone" did not exist on "80ae224fd0aeaad738d2b4da1de895f436074251"
Commit 55eccc29 authored by Benjamin Fattori

comment on padding method for encoder

parent 5486050f
@@ -186,11 +186,12 @@ class Seq2SeqHFLM(LM):
                 )
             )
             # TODO: Right now, we pass a single EOT token to the encoder and the full context to the decoder
             rolling_token_windows = [(None,) + x for x in rolling_token_windows]
             pad_amnt = 0
             if self.world_size > 1:
-                # TODO: Comment on what we do here
+                # We pad out the external document-level iterator so the inner iterator doesn't hang
                 mytensor = torch.tensor(len(rolling_token_windows), device=self.device)
                 gathered = (
                     self.accelerator.gather(mytensor).cpu().detach().numpy().tolist()
...
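Aside: the comment added in this hunk describes a standard pitfall in multi-GPU evaluation. When each rank slices its documents into a different number of rolling windows, a collective call such as accelerator.gather deadlocks once the ranks with fewer windows stop participating. Below is a minimal, self-contained sketch of the pad-to-max idea; the `windows` list and its dummy payload are hypothetical stand-ins, and it assumes Hugging Face accelerate, as the hunk itself does.

import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Hypothetical per-rank workload: in the real code each element is a
# rolling token window, and ranks can end up with different counts.
windows = [f"window-{i}" for i in range(accelerator.process_index + 3)]

# Gather every rank's window count so all ranks learn the maximum.
my_count = torch.tensor([len(windows)], device=accelerator.device)
all_counts = accelerator.gather(my_count).cpu().tolist()

# Pad the shorter ranks by repeating a dummy window so every rank
# performs the same number of collective calls and none of them hangs.
pad_amnt = max(all_counts) - len(windows)
if pad_amnt > 0:
    windows = windows + [windows[-1]] * pad_amnt

The padded entries would be discarded after results are gathered, which is the bookkeeping the `pad_amnt` variable in the hunk exists for.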