[LoRA] feat: support loading loras into 4bit quantized Flux models. (#10578)
* feat: support loading loras into 4bit quantized models.
* updates
* update
* remove weight check.
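A minimal sketch of what this change enables: loading LoRA weights on top of a Flux pipeline whose transformer has been 4bit-quantized with bitsandbytes through the diffusers `BitsAndBytesConfig` integration. The LoRA repository id and weight filename below are placeholders, not part of this PR.

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# Quantize the Flux transformer to 4 bits (NF4) with bitsandbytes.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# With this PR, loading a LoRA into the 4bit-quantized transformer works.
# Placeholder LoRA repo; substitute any Flux LoRA checkpoint.
pipe.load_lora_weights(
    "some-user/flux-lora",
    weight_name="pytorch_lora_weights.safetensors",
)

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=28).images[0]
image.save("out.png")
```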