renzhc / diffusers_dcu · Commits · 0fb70683

Commit 0fb70683 (unverified), authored Feb 20, 2025 by Sayak Paul, committed by GitHub on Feb 20, 2025
Parent: f8b54cf0

[tests] use proper gemma class and config in lumina2 tests. (#10828)

use proper gemma class and config in lumina2 tests.
Changes: 1 changed file, with 8 additions and 7 deletions.

tests/pipelines/lumina2/test_pipeline_lumina2.py (+8, -7)
@@ -2,7 +2,7 @@ import unittest

 import numpy as np
 import torch
-from transformers import AutoTokenizer, GemmaConfig, GemmaForCausalLM
+from transformers import AutoTokenizer, Gemma2Config, Gemma2Model

 from diffusers import (
     AutoencoderKL,
@@ -81,15 +81,16 @@ class Lumina2Text2ImgPipelinePipelineFastTests(unittest.TestCase, PipelineTester
         tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/dummy-gemma")

         torch.manual_seed(0)
-        config = GemmaConfig(
-            head_dim=2,
+        config = Gemma2Config(
+            head_dim=4,
             hidden_size=8,
-            intermediate_size=37,
-            num_attention_heads=4,
+            intermediate_size=8,
+            num_attention_heads=2,
             num_hidden_layers=2,
-            num_key_value_heads=4,
+            num_key_value_heads=2,
+            sliding_window=2,
         )
-        text_encoder = GemmaForCausalLM(config)
+        text_encoder = Gemma2Model(config)

         components = {
             "transformer": transformer.eval(),
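For reference, below is a minimal, self-contained sketch of how the updated test now builds its tiny text encoder, using the Gemma 2 classes (Gemma2Config, Gemma2Model) in place of GemmaConfig/GemmaForCausalLM. The config values are copied from the diff above; the trailing encode call is illustrative only and is not part of the test. It assumes a transformers release that ships the Gemma 2 classes (4.42 or later) and network access for the dummy tokenizer checkpoint.

# Sketch only (not part of the commit): mirrors how the updated test constructs
# the dummy Gemma 2 text encoder. Assumes transformers>=4.42 (Gemma2Config /
# Gemma2Model) and network access for the dummy tokenizer checkpoint.
import torch
from transformers import AutoTokenizer, Gemma2Config, Gemma2Model

tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/dummy-gemma")

torch.manual_seed(0)
config = Gemma2Config(
    head_dim=4,
    hidden_size=8,
    intermediate_size=8,
    num_attention_heads=2,
    num_hidden_layers=2,
    num_key_value_heads=2,
    sliding_window=2,
)
text_encoder = Gemma2Model(config)

# Illustrative usage (not in the test): encode a short prompt and inspect the
# encoder's last hidden state.
inputs = tokenizer("a small test prompt", return_tensors="pt")
with torch.no_grad():
    hidden_states = text_encoder(**inputs).last_hidden_state
print(hidden_states.shape)  # (1, sequence_length, hidden_size=8)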