chenpangpang / transformers · Commit 33d033d6 (unverified)

fix typos in llama.mdx (#22223)

Authored Mar 17, 2023 by Kevin Turner; committed by GitHub on Mar 17, 2023.
Parent: 97a3d16a

Showing 1 changed file with 2 additions and 2 deletions (+2, -2).

docs/source/en/model_doc/llama.mdx @ 33d033d6
...
@@ -22,7 +22,7 @@ The abstract from the paper is the following:
 Tips:
-- Weights for the LLaMA models can be obtained from by filling out [this form]()https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form)
+- Weights for the LLaMA models can be obtained from by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form)
 - After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the [conversion script](/src/transformers/models/llama/convert_llama_weights_to_hf.py). The script can be called with the following (example) command:
 ```bash
...
@@ -37,7 +37,7 @@ tokenizer = transformers.LlamaTokenizer.from_pretrained("/output/path/tokenizer/
 model = transformers.LlamaForCausalLM.from_pretrained("/output/path/llama-7b/")
 ```
-- The LLaMA tokenizer is based on [sentencepiece](https://github.com/google/sentencepiece). One quick of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string. To have the tokenizer output the prefix space, set `decode_with_prefix_space=True` in the `LlamaTokenizer` object or in the tokenizer configuration.
+- The LLaMA tokenizer is based on [sentencepiece](https://github.com/google/sentencepiece). One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string. To have the tokenizer output the prefix space, set `decode_with_prefix_space=True` in the `LlamaTokenizer` object or in the tokenizer configuration.
 This model was contributed by [zphang](https://huggingface.co/zphang) with contributions from [BlackSamorez](https://huggingface.co/BlackSamorez). The code of the implementation in Hugging Face is based on GPT-NeoX [here](https://github.com/EleutherAI/gpt-neox). The original code of the authors can be found [here](https://github.com/facebookresearch/llama).
...
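The bash example referenced by the first changed tip is collapsed out of this diff view. As a rough sketch of that conversion step, with the `--input_dir`, `--model_size`, and `--output_dir` option names assumed rather than taken from this diff, and purely placeholder paths:

```python
# Hedged sketch of the weight-conversion step described in the first changed tip.
# The option names and paths below are assumptions for illustration; check the
# conversion script's --help for its actual command-line interface.
import subprocess

subprocess.run(
    [
        "python",
        "src/transformers/models/llama/convert_llama_weights_to_hf.py",
        "--input_dir", "/path/to/downloaded/llama/weights",  # original LLaMA checkpoint directory
        "--model_size", "7B",                                 # which checkpoint size to convert
        "--output_dir", "/output/path",                       # where the Hugging Face format files go
    ],
    check=True,  # raise if the conversion script exits with an error
)
```

After a successful run, the `/output/path` locations line up with the `from_pretrained` calls shown as context in the second hunk.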
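On the second changed tip (the sentencepiece quirk): a minimal sketch of the decoding behavior it describes, using the placeholder tokenizer path from the tips above and taking the `decode_with_prefix_space` setting on faith from the doc text rather than from a verified API:

```python
# Hedged sketch of the prefix-space quirk described in the second changed tip.
# The tokenizer path is a placeholder, and decode_with_prefix_space is used only
# because the doc text mentions it; it is not verified against the tokenizer API.
import transformers

tokenizer = transformers.LlamaTokenizer.from_pretrained("/output/path/tokenizer/")

ids = tokenizer("Banana", add_special_tokens=False)["input_ids"]
print(tokenizer.decode(ids))  # per the tip: "Banana", with no prefix space prepended

tokenizer_with_space = transformers.LlamaTokenizer.from_pretrained(
    "/output/path/tokenizer/",
    decode_with_prefix_space=True,  # assumption: kwarg accepted as described in the tip
)
print(tokenizer_with_space.decode(ids))  # expected per the tip: " Banana"
```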