renzhc / diffusers_dcu · Commits · 066ea374

Commit 066ea374 (unverified)
Authored Sep 26, 2024 by Álvaro Somoza; committed by GitHub, Sep 25, 2024

[Tests] Fix ChatGLMTokenizer (#9536)

fix

Parent: 9cd37557
Showing 1 changed file with 4 additions and 0 deletions.

src/diffusers/pipelines/kolors/tokenizer.py (+4, -0):
@@ -277,6 +277,7 @@ class ChatGLMTokenizer(PreTrainedTokenizer):
         padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
         pad_to_multiple_of: Optional[int] = None,
         return_attention_mask: Optional[bool] = None,
+        padding_side: Optional[bool] = None,
     ) -> dict:
         """
         Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
@@ -298,6 +299,9 @@ class ChatGLMTokenizer(PreTrainedTokenizer):
             pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
                 This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
                 `>= 7.5` (Volta).
+            padding_side (`str`, *optional*):
+                The side on which the model should have padding applied. Should be selected between ['right', 'left'].
+                Default value is picked from the class attribute of the same name.
             return_attention_mask:
                 (optional) Set to False to avoid returning attention mask (default: set to model specifics)
         """
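For context: the new keyword most likely tracks recent `transformers` releases, whose base tokenizer began forwarding a `padding_side` argument when calling `_pad`, so an override without that keyword would fail with a TypeError during padding. Note that the hunk annotates the argument as `Optional[bool]` while the docstring documents a `str` ('right' or 'left'); the sketch below follows the `str` reading. It illustrates the left/right padding semantics the docstring describes; `pad_sequence` is a hypothetical stand-alone helper, not the pipeline's actual implementation.

from typing import List, Optional


def pad_sequence(
    input_ids: List[int],
    max_length: Optional[int] = None,
    pad_token_id: int = 0,
    padding_side: str = "left",
    pad_to_multiple_of: Optional[int] = None,
) -> List[int]:
    # Hypothetical helper: pads `input_ids` on `padding_side` up to
    # `max_length` (or leaves the length unchanged when `max_length` is None).
    target = max_length if max_length is not None else len(input_ids)
    # Round the padded length up to a multiple of `pad_to_multiple_of`,
    # e.g. so tensor dims suit Tensor Cores (compute capability >= 7.5).
    if pad_to_multiple_of is not None and target % pad_to_multiple_of != 0:
        target = ((target // pad_to_multiple_of) + 1) * pad_to_multiple_of
    padding = [pad_token_id] * max(0, target - len(input_ids))
    # 'left' prepends pad tokens; 'right' appends them.
    return padding + input_ids if padding_side == "left" else input_ids + padding


# Example: a 3-token prompt padded to a multiple of 4 on the left.
assert pad_sequence([5, 6, 7], pad_to_multiple_of=4) == [0, 5, 6, 7]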