Commit 7f522437 authored Sep 02, 2019 by LysandreJik

Updated documentation for LM finetuning script

parent 3fbf301b
Showing 2 changed files with 10 additions and 6 deletions (+10 -6):

docs/source/examples.rst (+5 -1)
docs/source/model_doc/distilbert.rst (+5 -5)
docs/source/examples.rst (view file @ 7f522437)
...
...
@@ -459,7 +459,7 @@ The same option as in the original scripts are provided, please refer to the cod
Causal LM fine-tuning on GPT/GPT-2, Masked LM fine-tuning on BERT/RoBERTa
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before running the following examples you should download the `WikiText-2 dataset <https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/>`__ and unpack it to some directory `$WIKITEXT_2_DATASET`.

The following results were obtained using the `raw` WikiText-2 (no tokens were replaced before the tokenization).
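For reference, a minimal sketch of the download-and-unpack step described above. The S3 mirror URL and the unpacked directory layout are assumptions on my part, not part of this commit; any copy of the raw dataset works.

.. code-block:: bash

    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset
    mkdir -p "$WIKITEXT_2_DATASET"

    # Assumed mirror of the raw (untokenized) WikiText-2 archive;
    # verify the URL before relying on it.
    wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip
    unzip wikitext-2-raw-v1.zip

    # The archive is assumed to unpack into wikitext-2-raw/ containing
    # wiki.train.raw, wiki.valid.raw and wiki.test.raw.
    mv wikitext-2-raw/* "$WIKITEXT_2_DATASET"/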
...
...
@@ -467,6 +467,8 @@ The following results were obtained using the `raw` WikiText-2 (no tokens were r
This example fine-tunes GPT-2 on the WikiText-2 dataset. The loss function is a causal language modeling loss (perplexity).
.. code-block:: bash

    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset

    python run_lm_finetuning.py
...
...
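The diff collapses the rest of the command. For illustration only, a plausible full invocation under the flags `run_lm_finetuning.py` exposed around this release; the argument names (`--output_dir`, `--model_type`, `--model_name_or_path`, `--do_train`, `--train_data_file`, `--do_eval`, `--eval_data_file`) are a hedged reconstruction, not the elided content of this commit.

.. code-block:: bash

    # Illustrative sketch only; check `python run_lm_finetuning.py --help`
    # for the authoritative argument list of your checkout.
    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset
    export TRAIN_FILE=$WIKITEXT_2_DATASET/wiki.train.raw
    export TEST_FILE=$WIKITEXT_2_DATASET/wiki.test.raw

    python run_lm_finetuning.py \
        --output_dir=output \
        --model_type=gpt2 \
        --model_name_or_path=gpt2 \
        --do_train \
        --train_data_file=$TRAIN_FILE \
        --do_eval \
        --eval_data_file=$TEST_FILE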
@@ -485,6 +487,8 @@ This example fine-tunes RoBERTa on the WikiText-2 dataset. The loss function is
The `--mlm` flag is necessary to fine-tune BERT/RoBERTa on masked language modeling.
.. code-block:: bash

    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset

    python run_lm_finetuning.py
...
...
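The RoBERTa command is likewise collapsed in the diff. The same sketch, adapted for masked LM fine-tuning with the `--mlm` flag from the paragraph above; again, the flag names beyond `--mlm` are assumptions, not the commit's elided text.

.. code-block:: bash

    # Illustrative sketch only; --mlm switches the loss from causal LM
    # to masked LM, as the documentation above states.
    export WIKITEXT_2_DATASET=/path/to/wikitext_dataset
    export TRAIN_FILE=$WIKITEXT_2_DATASET/wiki.train.raw
    export TEST_FILE=$WIKITEXT_2_DATASET/wiki.test.raw

    python run_lm_finetuning.py \
        --output_dir=output \
        --model_type=roberta \
        --model_name_or_path=roberta-base \
        --do_train \
        --train_data_file=$TRAIN_FILE \
        --do_eval \
        --eval_data_file=$TEST_FILE \
        --mlm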
docs/source/model_doc/distilbert.rst (view file @ 7f522437)
...
...
@@ -2,35 +2,35 @@ DistilBERT
----------------------------------------------------
``DistilBertConfig``
~~~~~~~~~~~~~~~~~~~~
.. autoclass:: pytorch_transformers.DistilBertConfig
:members:
``DistilBertTokenizer``
~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: pytorch_transformers.DistilBertTokenizer
:members:
``DistilBertModel``
~~~~~~~~~~~~~~~~~~~
.. autoclass:: pytorch_transformers.DistilBertModel
:members:
``DistilBertForMaskedLM``
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: pytorch_transformers.DistilBertForMaskedLM
:members:
``DistilBertForSequenceClassification``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: pytorch_transformers.DistilBertForSequenceClassification
:members:
...
...