Fix the bug that `joint_attention_kwargs` is not passed to FLUX's transformer attention processors (#9517)

* Update transformer_flux.py
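A minimal sketch of the pattern this fix restores: extra attention kwargs received by the transformer's `forward` must be threaded through each block down to the attention processor call, rather than silently dropped. The names below (`SimpleProcessor`, `SimpleBlock`, `run_transformer`, the `scale` kwarg) are illustrative stand-ins, not the actual classes in `transformer_flux.py`.

```python
from typing import Any, Dict, Optional

import torch


class SimpleProcessor:
    """Stand-in for an attention processor that may consume extra kwargs."""

    def __call__(self, hidden_states: torch.Tensor, *, scale: float = 1.0, **kwargs: Any) -> torch.Tensor:
        # A custom processor might use e.g. a per-call scale passed via kwargs.
        return hidden_states * scale


class SimpleBlock(torch.nn.Module):
    """Stand-in for a transformer block that wraps a processor."""

    def __init__(self) -> None:
        super().__init__()
        self.processor = SimpleProcessor()

    def forward(
        self,
        hidden_states: torch.Tensor,
        joint_attention_kwargs: Optional[Dict[str, Any]] = None,
    ) -> torch.Tensor:
        joint_attention_kwargs = joint_attention_kwargs or {}
        # The bug: dropping joint_attention_kwargs here.
        # The fix: forward them to the processor call.
        return self.processor(hidden_states, **joint_attention_kwargs)


def run_transformer(blocks, hidden_states, joint_attention_kwargs=None):
    # Thread the same kwargs through every block.
    for block in blocks:
        hidden_states = block(hidden_states, joint_attention_kwargs=joint_attention_kwargs)
    return hidden_states


if __name__ == "__main__":
    blocks = [SimpleBlock() for _ in range(2)]
    x = torch.ones(1, 4)
    out = run_transformer(blocks, x, joint_attention_kwargs={"scale": 0.5})
    print(out)  # each block scales by 0.5, so values end up at 0.25
```

Without the forwarding step, custom attention processors that rely on `joint_attention_kwargs` would receive none of the caller-supplied values, even though the pipeline accepts and passes them to the transformer.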