chenpangpang / transformers · Commit 5eca742f
"test/vscode:/vscode.git/clone" did not exist on "7a1aecb9389cb5928f4595af4a1fb5f88e85b5f8"
Unverified commit 5eca742f, authored Dec 10, 2021 by Sylvain Gugger; committed by GitHub, Dec 10, 2021
Fix special character in MDX (#14721)
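A minimal illustration of what this commit addresses: fenced code blocks in MDX reproduce their text verbatim, so an HTML entity such as `&lt;` written in the source shows up literally in the rendered docs instead of as `<`. The snippet below (a sketch, not part of the repo) uses Python's stdlib `html` module to show the escaped form that was in `quicktour.mdx` and the raw form the diff switches to:

```python
import html

# The escaped form present in the code blocks before this commit:
escaped = "grad_fn=&lt;AddmmBackward>"

# MDX does not decode entities inside a code fence, so readers saw the
# string above verbatim. The commit rewrites it to the raw character,
# which is exactly what html.unescape() yields:
raw = html.unescape(escaped)
print(raw)  # grad_fn=<AddmmBackward>
```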
Parent: 63c284c2
Showing 1 changed file with 6 additions and 6 deletions:

docs/source/quicktour.mdx (+6, −6)
docs/source/quicktour.mdx @ 5eca742f
@@ -246,10 +246,10 @@ objects are described in greater detail [here](main_classes/output). For now, le
 ```py
 >>> print(pt_outputs)
 SequenceClassifierOutput(loss=None, logits=tensor([[-4.0833,  4.3364],
-        [ 0.0818, -0.0418]], grad_fn=&lt;AddmmBackward>), hidden_states=None, attentions=None)
+        [ 0.0818, -0.0418]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
 ===PT-TF-SPLIT===
 >>> print(tf_outputs)
-TFSequenceClassifierOutput(loss=None, logits=&lt;tf.Tensor: shape=(2, 2), dtype=float32, numpy=
+TFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
 array([[-4.0833 ,  4.3364 ],
        [ 0.0818, -0.0418]], dtype=float32)>, hidden_states=None, attentions=None)
 ```
@@ -278,7 +278,7 @@ We can see we get the numbers from before:
 ```py
 >>> print(pt_predictions)
 tensor([[2.2043e-04, 9.9978e-01],
-        [5.3086e-01, 4.6914e-01]], grad_fn=&lt;SoftmaxBackward>)
+        [5.3086e-01, 4.6914e-01]], grad_fn=<SoftmaxBackward>)
 ===PT-TF-SPLIT===
 >>> print(tf_predictions)
 tf.Tensor(
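As a sanity check on the values in the hunk above: `pt_predictions` is just the softmax of the logits printed earlier (`[[-4.0833, 4.3364], [0.0818, -0.0418]]`). A minimal stdlib-only sketch (no PyTorch or transformers needed) reproduces the numbers:

```python
import math

def softmax(row):
    """Numerically stable softmax over one row of logits."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

# Logits from the quicktour output shown in the diff above.
logits = [[-4.0833, 4.3364], [0.0818, -0.0418]]
probs = [softmax(row) for row in logits]
# Matches the printed tensor: ≈ [[2.2043e-04, 9.9978e-01],
#                                [5.3086e-01, 4.6914e-01]]
```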
@@ -293,13 +293,13 @@ attribute:
 >>> import torch
 >>> pt_outputs = pt_model(**pt_batch, labels = torch.tensor([1, 0]))
 >>> print(pt_outputs)
-SequenceClassifierOutput(loss=tensor(0.3167, grad_fn=&lt;NllLossBackward>), logits=tensor([[-4.0833,  4.3364],
-        [ 0.0818, -0.0418]], grad_fn=&lt;AddmmBackward>), hidden_states=None, attentions=None)
+SequenceClassifierOutput(loss=tensor(0.3167, grad_fn=<NllLossBackward>), logits=tensor([[-4.0833,  4.3364],
+        [ 0.0818, -0.0418]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
 ===PT-TF-SPLIT===
 >>> import tensorflow as tf
 >>> tf_outputs = tf_model(tf_batch, labels = tf.constant([1, 0]))
 >>> print(tf_outputs)
-TFSequenceClassifierOutput(loss=&lt;tf.Tensor: shape=(2,), dtype=float32, numpy=array([2.2051e-04, 6.3326e-01], dtype=float32)>, logits=&lt;tf.Tensor: shape=(2, 2), dtype=float32, numpy=
+TFSequenceClassifierOutput(loss=<tf.Tensor: shape=(2,), dtype=float32, numpy=array([2.2051e-04, 6.3326e-01], dtype=float32)>, logits=<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
 array([[-4.0833 ,  4.3364 ],
        [ 0.0818, -0.0418]], dtype=float32)>, hidden_states=None, attentions=None)
 ```
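This class of bug is easy to reintroduce, since any converted doc can carry escaped entities into a code fence. A hypothetical lint along the following lines could flag HTML entities left inside fenced code blocks of an `.mdx` file (the function name and behavior are assumptions for illustration, not part of the transformers repo):

```python
import re

# Common HTML entities that render literally inside an MDX code fence.
ENTITY = re.compile(r"&(lt|gt|amp|quot|#\d+);")

def find_entities_in_code_blocks(mdx_text):
    """Return (line_number, line) pairs where an HTML entity appears
    inside a fenced code block -- likely rendered literally by MDX."""
    hits = []
    in_fence = False
    for i, line in enumerate(mdx_text.splitlines(), start=1):
        if line.lstrip().startswith("```"):
            in_fence = not in_fence  # toggle on every fence marker
            continue
        if in_fence and ENTITY.search(line):
            hits.append((i, line))
    return hits

sample = "```py\n>>> print(x)\nloss=&lt;tf.Tensor>\n```\nOutside &lt; is fine.\n"
print(find_entities_in_code_blocks(sample))  # flags only line 3
```

Entities outside code fences are left alone, since MDX decodes them correctly there.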