Unverified Commit 07ebe0fd authored by Thomas Wolf, committed by GitHub

Merge pull request #292 from sam-qordoba/patch-3

Fix typo in `GPT2Model` code sample
parents a25d056b 1cb9c76e
@@ -428,7 +428,7 @@ with torch.no_grad():
     hidden_states_1, past = model(tokens_tensor_1)
     # past can be used to reuse precomputed hidden states in subsequent predictions
     # (see the beam-search examples in the run_gpt2.py example)
-    hidden_states-2, past = model(tokens_tensor_2, past=past)
+    hidden_states_2, past = model(tokens_tensor_2, past=past)
```
And how to use `GPT2LMHeadModel`
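For context, here is a minimal, self-contained sketch of the pattern the fixed sample illustrates: calling `GPT2Model` a second time with the `past` returned by the first call so the earlier hidden states are not recomputed. The setup lines (imports, tokenizer, example texts) are assumed from the library's usual GPT-2 quickstart and are not part of the diff itself.

```python
# Hedged sketch of the GPT2Model `past` reuse pattern (pytorch-pretrained-bert style API).
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2Model

# Load the pretrained tokenizer and model (checkpoint name assumed to be 'gpt2').
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
model.eval()

# Encode two consecutive chunks of text.
text_1 = "Who was Jim Henson ?"
text_2 = "Jim Henson was a puppeteer"
tokens_tensor_1 = torch.tensor([tokenizer.encode(text_1)])
tokens_tensor_2 = torch.tensor([tokenizer.encode(text_2)])

with torch.no_grad():
    hidden_states_1, past = model(tokens_tensor_1)
    # `past` carries the precomputed key/value states, so the second call
    # only processes the new tokens instead of re-encoding text_1.
    hidden_states_2, past = model(tokens_tensor_2, past=past)
```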