chenpangpang / transformers / Commits

Commit 698f9e3d
Authored Dec 22, 2019 by Aymeric Augustin
Parent: c11b3e29

Remove trailing whitespace in README.

Changes: 1 changed file (README.md) with 6 additions and 6 deletions.

README.md @ 698f9e3d
...
@@ -251,7 +251,7 @@ valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer,
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)

# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
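For orientation, the hunk above only shows the data-pipeline and optimizer/loss/metric lines; the model itself is built earlier in the README example that this diff does not display. The following sketch is not part of the commit, and it assumes the `bert-base-cased` TF 2.0 setup used elsewhere in that example along with the `train_dataset`/`valid_dataset` from the hunk. It shows how these pieces are typically wired together with the standard tf.keras compile/fit loop:

```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

# Assumed setup, mirroring the README context this hunk omits
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')

# ... build train_dataset / valid_dataset as in the hunk above, then:
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')

# Compile the model and run the usual Keras training loop
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.fit(train_dataset, epochs=2, validation_data=valid_dataset)
```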
...
@@ -281,7 +281,7 @@ print("sentence_2 is", "a paraphrase" if pred_2 else "not a paraphrase", "of sen

## Quick tour of the fine-tuning/usage scripts

**Important**
Before running the fine-tuning scripts, please read the
[instructions](#run-the-examples) on how to
set up your environment to run the examples.
...
@@ -442,7 +442,7 @@ python ./examples/run_generation.py \
    --model_name_or_path=gpt2 \
```
and from the Salesforce CTRL model:
```shell
python ./examples/run_generation.py \
    --model_type=ctrl \
...
@@ -495,13 +495,13 @@ transformers-cli ls

## Quick tour of pipelines

New in version `v2.3`: `Pipeline`s are high-level objects which automatically handle tokenization, run your data through a transformers model,
and output the result as a structured object.

You can create `Pipeline` objects for the following downstream tasks:

- `feature-extraction`: Generates a tensor representation for the input sequence.
- `ner`: Generates a named-entity mapping for each word in the input sequence.
- `sentiment-analysis`: Gives the polarity (positive / negative) of the whole input sequence.
- `question-answering`: Provided some context and a question referring to the context, it extracts the answer to the question
  from the context.
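As a brief illustration of the pattern these tasks share (a sketch, not part of this diff), a sentiment-analysis pipeline is allocated and called in exactly the same way as the question-answering example in the next hunk; the input sentence below is the one the README itself uses for this task:

```python
from transformers import pipeline

# Allocate a pipeline for sentiment analysis; the task name selects a default
# pretrained model and tokenizer behind the scenes.
nlp = pipeline('sentiment-analysis')

# Calling the pipeline on raw text returns a structured result
# (for this task, a polarity label with a confidence score).
print(nlp('We are very happy to include pipeline into the transformers repository.'))
```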
...
@@ -516,7 +516,7 @@ nlp('We are very happy to include pipeline into the transformers repository.')

# Allocate a pipeline for question-answering
nlp = pipeline('question-answering')
nlp({
    'question': 'What is the name of the repository ?',
    'context': 'Pipeline have been included in the huggingface/transformers repository'
})
>>> {'score': 0.28756016668193496, 'start': 35, 'end': 59, 'answer': 'huggingface/transformers'}
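For reference (a small sketch outside this diff), `start` and `end` in the returned dictionary are character offsets into the supplied `context` string, so slicing recovers the reported `answer`:

```python
# Offsets 35:59 come from the pipeline output shown above.
context = 'Pipeline have been included in the huggingface/transformers repository'
print(context[35:59])  # -> 'huggingface/transformers'
```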
...