chenpangpang / transformers · Commits · 04028317
"tests/git@developer.sourcefind.cn:OpenDAS/apex.git" did not exist on "6af5980e7acaa715c06da8477f535686bed1b464"
Unverified commit 04028317, authored Jun 14, 2021 by Stas Bekman and committed by GitHub on Jun 14, 2021

consistent nn. and nn.functional: part 5 docs (#12161)

parent 88e84186
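This commit continues the series that standardizes the PyTorch import style across the docs: module aliases such as `import torch.nn as nn` and `import torch.nn.functional as F` are replaced by a single `from torch import nn`, with the functional API written out as `nn.functional.*`. A minimal sketch of the two equivalent spellings (only the access path changes; `nn.functional` and the old `F` alias refer to the same module):

    # old spelling being removed from the docs in this series
    import torch
    import torch.nn.functional as F

    x = torch.randn(2, 3)
    old = F.relu(x)

    # spelling the docs standardize on
    from torch import nn

    new = nn.functional.relu(x)
    assert torch.equal(old, new)  # same underlying function, different name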
Showing 5 changed files with 9 additions and 9 deletions (+9 -9):

  docs/source/add_new_model.rst           +1 -1
  docs/source/main_classes/trainer.rst    +2 -2
  docs/source/migration.md                +2 -2
  docs/source/quicktour.rst               +2 -2
  docs/source/task_summary.rst            +2 -2
docs/source/add_new_model.rst

@@ -518,7 +518,7 @@ PyTorch, called ``SimpleModel`` as follows:
 .. code:: python
-    import torch.nn as nn
+    from torch import nn
     class SimpleModel(nn.Module):
         def __init__(self):
 ...
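For context, the snippet this hunk edits defines a toy ``SimpleModel``; a minimal runnable sketch with the new import is below. Only the import line and the class header are visible in the hunk, so the layer definitions here are illustrative rather than the doc's exact ones.

    from torch import nn


    class SimpleModel(nn.Module):
        def __init__(self):
            super().__init__()
            # illustrative layers; the doc's own example may differ
            self.dense = nn.Linear(10, 10)
            self.layer_norm = nn.LayerNorm(10)

        def forward(self, x):
            return self.layer_norm(self.dense(x))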
docs/source/main_classes/trainer.rst

@@ -59,7 +59,7 @@ classification:
 .. code-block:: python
-    import torch
+    from torch import nn
     from transformers import Trainer
     class MultilabelTrainer(Trainer):
 ...
@@ -67,7 +67,7 @@ classification:
             labels = inputs.pop("labels")
             outputs = model(**inputs)
             logits = outputs.logits
-            loss_fct = torch.nn.BCEWithLogitsLoss()
+            loss_fct = nn.BCEWithLogitsLoss()
             loss = loss_fct(logits.view(-1, self.model.config.num_labels),
                             labels.float().view(-1, self.model.config.num_labels))
             return (loss, outputs) if return_outputs else loss
 ...
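Putting the two hunks together, the updated example reads roughly as follows; the elided lines between them are assumed to be the standard ``compute_loss(self, model, inputs, return_outputs=False)`` override that ``Trainer`` documents.

    from torch import nn
    from transformers import Trainer


    class MultilabelTrainer(Trainer):
        def compute_loss(self, model, inputs, return_outputs=False):
            labels = inputs.pop("labels")
            outputs = model(**inputs)
            logits = outputs.logits
            # BCEWithLogitsLoss treats each label as an independent binary problem
            loss_fct = nn.BCEWithLogitsLoss()
            loss = loss_fct(logits.view(-1, self.model.config.num_labels),
                            labels.float().view(-1, self.model.config.num_labels))
            return (loss, outputs) if return_outputs else loss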
docs/source/migration.md

@@ -23,7 +23,7 @@ expected changes:
 #### 1. AutoTokenizers and pipelines now use fast (rust) tokenizers by default.
 The python and rust tokenizers have roughly the same API, but the rust tokenizers have a more complete feature set.
 This introduces two breaking changes:
 - The handling of overflowing tokens between the python and rust tokenizers is different.
 ...
@@ -85,7 +85,7 @@ This is a breaking change as importing intermediary layers using a model's modul
 ##### How to obtain the same behavior as v3.x in v4.x
 In order to obtain the same behavior as version `v3.x`, you should update the path used to access the layers.
 In version `v3.x`:
 ```bash
 ...
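The first hunk's context describes the v4.x switch to fast (rust) tokenizers by default. A minimal sketch of keeping the v3.x behavior by requesting the python tokenizer explicitly (the checkpoint name is just an example):

    from transformers import AutoTokenizer

    # v4.x default: fast (rust-backed) tokenizer
    fast_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

    # opt back into the python ("slow") tokenizer, as in v3.x
    slow_tok = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)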
docs/source/quicktour.rst

@@ -265,8 +265,8 @@ Let's apply the SoftMax activation to get predictions.
 .. code-block::
     >>> ## PYTORCH CODE
-    >>> import torch.nn.functional as F
+    >>> from torch import nn
-    >>> pt_predictions = F.softmax(pt_outputs.logits, dim=-1)
+    >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
     >>> ## TENSORFLOW CODE
     >>> import tensorflow as tf
     >>> tf.nn.softmax(tf_outputs.logits, axis=-1)
 ...
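A toy check of what the updated line computes: softmax over the last dimension turns the logits into per-row probabilities (the tensor below stands in for ``pt_outputs.logits``).

    import torch
    from torch import nn

    logits = torch.tensor([[1.0, 2.0, 0.5]])       # stand-in for pt_outputs.logits
    pt_predictions = nn.functional.softmax(logits, dim=-1)
    print(pt_predictions)                          # per-class probabilities
    print(pt_predictions.sum(dim=-1))              # each row sums to 1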
docs/source/task_summary.rst

@@ -451,7 +451,7 @@ of tokens.
     >>> ## PYTORCH CODE
     >>> from transformers import AutoModelWithLMHead, AutoTokenizer, top_k_top_p_filtering
     >>> import torch
-    >>> from torch.nn import functional as F
+    >>> from torch import nn
     >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
     >>> model = AutoModelWithLMHead.from_pretrained("gpt2")
 ...
@@ -467,7 +467,7 @@ of tokens.
     >>> filtered_next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=50, top_p=1.0)
     >>> # sample
-    >>> probs = F.softmax(filtered_next_token_logits, dim=-1)
+    >>> probs = nn.functional.softmax(filtered_next_token_logits, dim=-1)
     >>> next_token = torch.multinomial(probs, num_samples=1)
     >>> generated = torch.cat([input_ids, next_token], dim=-1)
 ...
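Read end to end, the example these hunks touch samples one next token from GPT-2 with top-k/top-p filtering. The lines between the two hunks (the prompt and the extraction of the last-position logits) are not shown above, so this sketch fills them in; the prompt string is illustrative.

    import torch
    from torch import nn
    from transformers import AutoModelWithLMHead, AutoTokenizer, top_k_top_p_filtering

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelWithLMHead.from_pretrained("gpt2")

    # illustrative prompt; any short string works
    input_ids = tokenizer.encode("Hugging Face is based in DUMBO, New York City, and", return_tensors="pt")

    # the logits at the last position score the candidate next tokens
    next_token_logits = model(input_ids).logits[:, -1, :]

    # keep only the top-k / nucleus candidates, then sample one token
    filtered_next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=50, top_p=1.0)
    probs = nn.functional.softmax(filtered_next_token_logits, dim=-1)
    next_token = torch.multinomial(probs, num_samples=1)
    generated = torch.cat([input_ids, next_token], dim=-1)

    print(tokenizer.decode(generated[0]))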