chenpangpang/transformers, commit 345a1371 (unverified)
Authored Apr 24, 2023 by Matt; committed by GitHub on Apr 24, 2023
Fix TF example in quicktour (#22960)
* Fix TF example in quicktour
* Fix model.fit() and the dataset section too
Parent: 503e8c8b
2 changed files with 3 additions and 3 deletions:

* docs/source/en/quicktour.mdx (+2, -2)
* docs/source/en/training.mdx (+1, -1)
docs/source/en/quicktour.mdx
@@ -528,7 +528,7 @@ All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs
 ```py
 >>> dataset = dataset.map(tokenize_dataset)  # doctest: +SKIP
 >>> tf_dataset = model.prepare_tf_dataset(
-...     dataset, batch_size=16, shuffle=True, tokenizer=tokenizer
+...     dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer
 ... )  # doctest: +SKIP
 ```
@@ -538,7 +538,7 @@ All models are a standard [`tf.keras.Model`](https://www.tensorflow.org/api_docs
 >>> from tensorflow.keras.optimizers import Adam
 >>> model.compile(optimizer=Adam(3e-5))
->>> model.fit(dataset)  # doctest: +SKIP
+>>> model.fit(tf_dataset)  # doctest: +SKIP
 ```

 ## What's next?
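For readers following along, a minimal end-to-end sketch of the corrected quicktour flow is below. The checkpoint name, dataset name, and `tokenize_dataset` helper are illustrative assumptions and not part of this commit; only the `prepare_tf_dataset`, `compile`, and `fit` calls mirror the fixed lines. The point of the fix is that `load_dataset` returns a `DatasetDict` of splits, so a single split such as `dataset["train"]` has to be handed to `prepare_tf_dataset`, and `model.fit` should consume the converted `tf_dataset`, not the raw dataset.

```py
# Minimal sketch of the corrected TF quicktour flow.
# The checkpoint and dataset names below are illustrative assumptions,
# not taken from this commit.
from datasets import load_dataset
from tensorflow.keras.optimizers import Adam
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)

dataset = load_dataset("rotten_tomatoes")  # a DatasetDict with train/validation/test splits


def tokenize_dataset(examples):
    # Hypothetical helper mirroring the quicktour: tokenize the text column.
    return tokenizer(examples["text"])


dataset = dataset.map(tokenize_dataset, batched=True)

# Pass a single split (not the whole DatasetDict) plus the tokenizer,
# as in the fixed example above.
tf_dataset = model.prepare_tf_dataset(
    dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer
)

model.compile(optimizer=Adam(3e-5))  # Transformers TF models supply a default task loss
model.fit(tf_dataset)  # train on the converted tf.data.Dataset
```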
docs/source/en/training.mdx
@@ -247,7 +247,7 @@ reduces the number of padding tokens compared to padding the entire dataset.
 ```py
->>> tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, tokenizer=tokenizer)
+>>> tf_dataset = model.prepare_tf_dataset(dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer)
 ```
 Note that in the code sample above, you need to pass the tokenizer to `prepare_tf_dataset` so it can correctly pad batches as they're loaded.
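As a small aside on that sentence: passing the tokenizer lets `prepare_tf_dataset` pad each batch to the length of its longest sample as batches are loaded, instead of padding the whole dataset up front. A sketch of the same idea made explicit with a data collator is below; the `collate_fn` form is an illustrative alternative, not part of this commit, and it assumes the `model`, `tokenizer`, and tokenized `dataset` objects from the example above.

```py
# Sketch only: per-batch dynamic padding spelled out with an explicit collator.
from transformers import DataCollatorWithPadding

data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="np")

tf_dataset = model.prepare_tf_dataset(
    dataset["train"],
    batch_size=16,
    shuffle=True,
    collate_fn=data_collator,  # pads each batch to its longest sequence
)
```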