Commit 8694c7b0 authored by rprenger

Found a bug: without this change, asking for 1 token returns 2, asking for 2 returns 3, and so on.

parent 055a673e
@@ -241,7 +241,7 @@ def sample_sequence_batch(model, context_tokens, context_lengths,
     lengths = torch.ones([batch_size]).long().cuda() * maxlen
-    while context_length <= (maxlen):
+    while context_length < maxlen:
         types2use = None
         if counter == 0:
             tokens2use = tokens[:, :context_length]
...
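The fix is a classic off-by-one: with the old `<=` condition the loop body runs one extra iteration, so every request yields one more token than asked for. Below is a minimal sketch, not the actual Megatron code, that illustrates the difference; it assumes `maxlen` is the prompt length plus the number of tokens requested, and the helper name is hypothetical.

```python
# Hypothetical sketch of the loop-bound bug: count how many new tokens each
# condition produces, assuming maxlen = prompt length + tokens requested.

def count_generated(context_length, tokens_to_generate, fixed):
    maxlen = context_length + tokens_to_generate  # assumption about how maxlen is set
    generated = 0
    if fixed:
        while context_length < maxlen:   # new condition
            generated += 1
            context_length += 1
    else:
        while context_length <= maxlen:  # old condition: runs one extra time
            generated += 1
            context_length += 1
    return generated

print(count_generated(5, 1, fixed=False))  # 2 -> the bug: asked for 1, got 2
print(count_generated(5, 1, fixed=True))   # 1 -> fixed
```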