fengzch-das / nunchaku · Commits · 336d56ef

Commit 336d56ef (unverified), authored May 02, 2025 by Muyang Li, committed via GitHub on May 02, 2025.

feat: support FP8 LoRAs (#342)

* feat: support FP8 LoRAs
* fix the int4 expected lpips

parent 37a27712
Showing 2 changed files with 7 additions and 1 deletion:

- nunchaku/lora/flux/diffusers_converter.py (+6, -0)
- tests/flux/test_flux_teacache.py (+1, -1)
nunchaku/lora/flux/diffusers_converter.py

```diff
@@ -14,6 +14,12 @@ def to_diffusers(input_lora: str | dict[str, torch.Tensor], output_path: str | N
         tensors = load_state_dict_in_safetensors(input_lora, device="cpu")
     else:
         tensors = {k: v for k, v in input_lora.items()}
+
+    ### convert the FP8 tensors to BF16
+    for k, v in tensors.items():
+        if v.dtype not in [torch.float64, torch.float32, torch.bfloat16, torch.float16]:
+            tensors[k] = v.to(torch.bfloat16)
+
     new_tensors, alphas = FluxLoraLoaderMixin.lora_state_dict(tensors, return_alphas=True)
     if alphas is not None and len(alphas) > 0:
```
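The added loop upcasts any tensor whose dtype is outside the standard float set (which is how FP8 LoRA weights are caught) to BF16 before handing the state dict to the converter. A minimal, torch-free sketch of that filtering logic, with dtypes modeled as plain strings and all names hypothetical:

```python
# Hypothetical sketch of the FP8 -> BF16 coercion step; torch dtypes are
# modeled as strings so the filtering logic stands alone.
HIGH_PRECISION_DTYPES = {"float64", "float32", "bfloat16", "float16"}

def coerce_to_bf16(dtypes_by_key: dict[str, str]) -> dict[str, str]:
    """Leave standard float dtypes alone; map everything else (e.g. FP8) to bfloat16."""
    return {
        k: (dtype if dtype in HIGH_PRECISION_DTYPES else "bfloat16")
        for k, dtype in dtypes_by_key.items()
    }

state = {"lora_A.weight": "float8_e4m3fn", "lora_B.weight": "float16"}
print(coerce_to_bf16(state))
# {'lora_A.weight': 'bfloat16', 'lora_B.weight': 'float16'}
```

In the real code the check runs on `torch.Tensor.dtype` and the cast is `v.to(torch.bfloat16)`; the allow-list (rather than an FP8 deny-list) means any future low-precision dtype is also upcast.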
tests/flux/test_flux_teacache.py

```diff
@@ -34,7 +34,7 @@ from .utils import already_generate, compute_lpips, offload_pipeline
         "fox",
         1234,
         0.7,
-        0.349 if get_precision() == "int4" else 0.349,
+        0.417 if get_precision() == "int4" else 0.349,
     ),
     (
         1024,
```
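The one-line test change raises the expected LPIPS threshold for int4 from 0.349 to 0.417, while other precisions keep 0.349. As a hypothetical helper (the function name is mine, not the test suite's), the selection reads:

```python
# Hypothetical helper mirroring the corrected test parameter: int4 now
# expects an LPIPS of 0.417; every other precision keeps 0.349.
def expected_lpips(precision: str) -> float:
    return 0.417 if precision == "int4" else 0.349

print(expected_lpips("int4"))  # 0.417
print(expected_lpips("fp4"))   # 0.349
```

In the test itself this expression is inlined into the parameter tuple and evaluated via `get_precision()` at collection time.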