Unverified commit 144cea25, authored by Sylvain Gugger, committed via GitHub

Fix multiple choice doc examples (#12679)

parent 5dd0c956
@@ -982,7 +982,7 @@ TF_MULTIPLE_CHOICE_SAMPLE = r"""
     >>> choice0 = "It is eaten with a fork and a knife."
     >>> choice1 = "It is eaten while held in the hand."
-    >>> encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='tf', padding=True)
+    >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='tf', padding=True)
     >>> inputs = {{k: tf.expand_dims(v, 0) for k, v in encoding.items()}}
     >>> outputs = model(inputs)  # batch size is 1
@@ -1099,7 +1099,7 @@ FLAX_MULTIPLE_CHOICE_SAMPLE = r"""
     >>> choice0 = "It is eaten with a fork and a knife."
     >>> choice1 = "It is eaten while held in the hand."
-    >>> encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='jax', padding=True)
+    >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='jax', padding=True)
     >>> outputs = model(**{{k: v[None, :] for k,v in encoding.items()}})
     >>> logits = outputs.logits
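The change passes the prompts and the choices as two parallel lists, so the tokenizer encodes each (prompt, choice) pair as a sentence pair; wrapping both lists in a single outer list is not interpreted that way and yields the wrong encoding for multiple choice. Below is a minimal runnable sketch of the corrected TF call pattern, outside the doc-sample template (so single braces instead of the escaped double braces). The bert-base-uncased checkpoint and the prompt text are illustrative assumptions, not part of the diff.

    # Sketch of the corrected multiple-choice encoding (TF variant).
    # Assumes bert-base-uncased; prompt/choices mirror the doc example.
    import tensorflow as tf
    from transformers import BertTokenizer, TFBertForMultipleChoice

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = TFBertForMultipleChoice.from_pretrained("bert-base-uncased")

    prompt = "In Italy, pizza served in formal settings is presented unsliced."
    choice0 = "It is eaten with a fork and a knife."
    choice1 = "It is eaten while held in the hand."

    # Two parallel lists: each (prompt, choice) pair is encoded as a sentence pair.
    encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)

    # Add the batch dimension: shapes become (1, num_choices, seq_len).
    inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}

    outputs = model(inputs)   # batch size is 1
    logits = outputs.logits   # shape (1, 2): one score per choice

The Flax sample in the diff follows the same pattern, with return_tensors='jax' and v[None, :] used to add the batch dimension instead of tf.expand_dims.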