Unverified commit 03bcf5ae, authored by Teriks, committed by GitHub

RFInversionFluxPipeline, small fix for enable_model_cpu_offload & enable_sequential_cpu_offload compatibility (#10480)

RFInversionFluxPipeline.encode_image, device fix

Use self._execution_device instead of self.device when selecting
a device for the input image tensor.

This allows compatibility with enable_model_cpu_offload and
enable_sequential_cpu_offload.
Co-authored-by: Teriks <Teriks@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
parent e0b96ba7
@@ -419,7 +419,7 @@ class RFInversionFluxPipeline(
         )
         image = image.to(dtype)
-        x0 = self.vae.encode(image.to(self.device)).latent_dist.sample()
+        x0 = self.vae.encode(image.to(self._execution_device)).latent_dist.sample()
         x0 = (x0 - self.vae.config.shift_factor) * self.vae.config.scaling_factor
         x0 = x0.to(dtype)
         return x0, resized
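To illustrate why the change matters, here is a simplified, hypothetical sketch (not the actual diffusers implementation) of the device mismatch that CPU offloading introduces: when `enable_model_cpu_offload` is active, a pipeline's nominal `device` stays `cpu` while computation actually runs on the accelerator, which `_execution_device` reports.

```python
# Simplified sketch (hypothetical, not the real diffusers classes) of the
# difference between `self.device` and `self._execution_device` under offload.
import torch


class OffloadedPipelineSketch:
    def __init__(self):
        # With CPU offloading enabled, modules rest on CPU between calls,
        # so the pipeline's nominal device is "cpu"...
        self.device = torch.device("cpu")
        # ...while accelerate-style hooks move each module to the accelerator
        # just in time; `_execution_device` reports where compute actually runs.
        self._execution_device = (
            torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
        )

    def encode_image(self, image: torch.Tensor) -> torch.Tensor:
        # Moving the input to `self.device` would strand it on CPU even when
        # the VAE is executing on GPU, causing a device-mismatch error.
        # Using `_execution_device` keeps input and weights on the same device.
        return image.to(self._execution_device)


pipe = OffloadedPipelineSketch()
x = pipe.encode_image(torch.zeros(1, 3, 8, 8))
print(x.device.type)
```

On a CUDA machine the input lands on `cuda`; on a CPU-only machine both devices coincide and the call is a no-op, which is exactly the behavior the patch relies on.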