chenpangpang / transformers / Commits / fa8ee8e8
Unverified commit fa8ee8e8, authored Aug 26, 2020 by Patrick von Platen, committed by GitHub on Aug 26, 2020

fix torchscript docs (#6740)

parent 64c7c2bc
Showing 1 changed file with 9 additions and 10 deletions (+9 −10): docs/source/serialization.rst
docs/source/serialization.rst @ fa8ee8e8
@@ -130,13 +130,12 @@ Pytorch's two modules `JIT and TRACE <https://pytorch.org/docs/stable/jit.html>`
 their model to be re-used in other programs, such as efficiency-oriented C++ programs.

 We have provided an interface that allows the export of 🤗 Transformers models to TorchScript so that they can
-be reused in a different environment than a Pytorch-based python program. Here we explain how to use our models so
-that they can be exported, and what to be mindful of when using these models with TorchScript.
+be reused in a different environment than a Pytorch-based python program. Here we explain how to export and use our
+models using TorchScript.

-Exporting a model needs two things:
+Exporting a model requires two things:

-* dummy inputs to execute a model forward pass.
-* the model needs to be instantiated with the ``torchscript`` flag.
+* a forward pass with dummy inputs.
+* model instantiation with the ``torchscript`` flag.

 These necessities imply several things developers should be careful about. These are detailed below.
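The two requirements in the hunk above can be sketched with a toy module standing in for a 🤗 Transformers model; a real model would be instantiated with the ``torchscript`` flag (e.g. ``torchscript=True`` in ``from_pretrained``), and all names below are illustrative:

```python
import torch
import torch.nn as nn

# Toy stand-in for a model; a real 🤗 Transformers model would be
# created with the ``torchscript`` flag instead.
model = nn.Linear(4, 2)
model.eval()

# Requirement 1: dummy inputs — tracing records the operations
# executed during one forward pass on these inputs.
dummy_input = torch.randn(1, 4)
traced = torch.jit.trace(model, dummy_input)

# The traced module behaves like the original on same-shaped inputs.
out = traced(torch.randn(3, 4))
print(out.shape)
```

Because tracing only records the ops seen for the dummy inputs, control flow that depends on input values would be frozen into the trace.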
@@ -147,8 +146,8 @@ Implications

 TorchScript flag and tied weights
 ------------------------------------------------

 This flag is necessary because most of the language models in this repository have tied weights between their
-``Embedding`` layer and their ``Decoding`` layer. TorchScript does not allow the export of models that have tied
-weights, it is therefore necessary to untie the weights beforehand.
+``Embedding`` layer and their ``Decoding`` layer. TorchScript does not allow the export of models that have tied
+weights, therefore it is necessary to untie and clone the weights beforehand.

 This implies that models instantiated with the ``torchscript`` flag have their ``Embedding`` layer and ``Decoding``
 layer separate, which means that they should not be trained down the line. Training would de-synchronize the two
 layers,
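The untie-and-clone step the corrected hunk describes can be sketched with a toy tied-weight model; ``TinyLM`` and its fields are hypothetical, not the library's API, and the manual cloning only approximates what the ``torchscript`` flag arranges internally:

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy language model whose input embedding and output decoder
    share one weight tensor, like the tied models described above."""
    def __init__(self, vocab=10, dim=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.decode = nn.Linear(dim, vocab, bias=False)
        self.decode.weight = self.embed.weight  # tied: same tensor object

    def forward(self, ids):
        return self.decode(self.embed(ids))

model = TinyLM()
assert model.decode.weight is model.embed.weight  # weights are tied

# Untie and clone: give the decoder its own copy of the weights.
# From here on the two layers can drift apart if trained — which is
# why the docs warn against training a ``torchscript`` model.
model.decode.weight = nn.Parameter(model.embed.weight.clone())
assert model.decode.weight is not model.embed.weight  # untied

dummy_ids = torch.tensor([[1, 2, 3]])
traced = torch.jit.trace(model, dummy_ids)
print(traced(dummy_ids).shape)
```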
@@ -181,7 +180,7 @@ when exporting varying sequence-length models.

 Using TorchScript in Python
 -------------------------------------------------

-Below are examples of using the Python to save, load models as well as how to use the trace for inference.
+Below is an example, showing how to save, load models as well as how to use the trace for inference.

 Saving a model
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -237,10 +236,10 @@ We are re-using the previously initialised ``dummy_input``.

 .. code-block:: python

-    loaded_model = torch.jit.load("traced_model.pt")
+    loaded_model = torch.jit.load("traced_bert.pt")
     loaded_model.eval()

-    all_encoder_layers, pooled_output = loaded_model(dummy_input)
+    all_encoder_layers, pooled_output = loaded_model(*dummy_input)

 Using a traced model for inference
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
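End to end, the save/load flow that the last hunk corrects looks roughly like this; a toy two-input module stands in for the traced BERT model, and ``TwoInput`` and the file path are illustrative:

```python
import os
import tempfile

import torch
import torch.nn as nn

class TwoInput(nn.Module):
    """Toy stand-in for a model taking (input_ids, attention_mask)."""
    def forward(self, input_ids, attention_mask):
        return input_ids * attention_mask

dummy_input = (torch.ones(1, 4, dtype=torch.long),
               torch.ones(1, 4, dtype=torch.long))
traced_model = torch.jit.trace(TwoInput(), dummy_input)

path = os.path.join(tempfile.mkdtemp(), "traced_toy.pt")
torch.jit.save(traced_model, path)

loaded_model = torch.jit.load(path)
loaded_model.eval()

# ``dummy_input`` is a tuple of tensors, so it must be unpacked with
# ``*`` when calling the loaded model — the second fix in the hunk above.
output = loaded_model(*dummy_input)
print(output.shape)
```

Calling ``loaded_model(dummy_input)`` without the ``*`` would pass the whole tuple as the first argument and fail, which is exactly the mistake the diff removes.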