"...resnet50_tensorflow.git" did not exist on "0c963cba5306d19d381869604957ddc423469dd7"
Commit 48ec26d1 authored by Paper99

Update PhotoMaker V2

parent 1e78aa65
@@ -10,17 +10,17 @@
## PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding [![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-md-dark.svg)](https://huggingface.co/papers/2312.04461)
[[Paper](https://huggingface.co/papers/2312.04461)] &emsp; [[Project Page](https://photo-maker.github.io)] &emsp; [[Model Card](https://huggingface.co/TencentARC/PhotoMaker)] <br>
[[🤗 Demo (Realistic)](https://huggingface.co/spaces/TencentARC/PhotoMaker)] &emsp; [[🤗 Demo (Stylization)](https://huggingface.co/spaces/TencentARC/PhotoMaker-Style)] <br>
[[💥New 🤗 Demo (PhotoMaker V2)](https://huggingface.co/spaces/TencentARC/PhotoMaker-V2)] &emsp; [[🤗 Demo (Realistic)](https://huggingface.co/spaces/TencentARC/PhotoMaker)] &emsp; [[🤗 Demo (Stylization)](https://huggingface.co/spaces/TencentARC/PhotoMaker-Style)] <br>
[[Replicate Demo (Realistic)](https://replicate.com/jd7h/photomaker)] &emsp; [[Replicate Demo (Stylization)](https://replicate.com/yorickvp/photomaker-style)] <br>
If the ID fidelity is not enough for you, please try our [stylization application](https://huggingface.co/spaces/TencentARC/PhotoMaker-Style), you may be pleasantly surprised.
If the ID fidelity is not enough for you, please try our [PhotoMaker V2](https://huggingface.co/spaces/TencentARC/PhotoMaker-V2) or [stylization application](https://huggingface.co/spaces/TencentARC/PhotoMaker-Style); you may be pleasantly surprised.
🥳 We release PhotoMaker V2. Please refer to [comparisons](./README_pmv2.md) between PhotoMaker V1, PhotoMaker V2, IP-Adapter-FaceID-plus-V2, and InstantID. Please watch [this video](https://photo-maker.github.io/assets/demo_pm_v2_full.mp4) to see how to use our demo.
</div>
---
Official implementation of **[PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding](https://huggingface.co/papers/2312.04461)**.
### 🌠 **Key Features:**
@@ -41,8 +41,9 @@ Now we know the implementation of **Replicate**, **Windows**, **ComfyUI**, and *
## 🚩 **New Features/Updates**
- ✅ Jan. 20, 2024. An **important** note: For those GPUs that do not support bfloat16, please change [this line](https://github.com/TencentARC/PhotoMaker/blob/6ec44fc13909d64a65c635b9e3b6f238eb1de9fe/gradio_demo/app.py#L39) to `torch_dtype = torch.float16`, the speed will be **greatly improved** (1min/img (before) vs. 14s/img (after) on V100). The minimum GPU memory requirement for PhotoMaker is **11G** (Please refer to [this link](https://github.com/TencentARC/PhotoMaker/discussions/114) for saving GPU memory).
- ✅ Jan. 15, 2024. We release PhotoMaker.
- ✅ July 20, 2024. 💥 We release PhotoMaker V2 with **improved ID fidelity**. At the same time, it still maintains the generation quality, editability, and compatibility with any plugins that PhotoMaker V1 offers. We have also provided scripts for integration with [ControlNet](./inference_scripts/inference_pmv2_contronet.py), [T2I-Adapter](./inference_scripts/inference_pmv2_t2i_adapter.py), and [IP-Adapter](./inference_scripts/inference_pmv2_ip_adapter.py) to offer excellent control capabilities. Users can further customize these scripts, for example by combining them with LCM for acceleration or with IP-Adapter-FaceID or InstantID to further improve ID fidelity. We will release the technical report of PhotoMaker V2 soon. Please refer to [this doc](./README_pmv2.md) for a quick preview.
- ✅ January 20, 2024. An **important** note: For those GPUs that do not support bfloat16, please change [this line](https://github.com/TencentARC/PhotoMaker/blob/6ec44fc13909d64a65c635b9e3b6f238eb1de9fe/gradio_demo/app.py#L39) to `torch_dtype = torch.float16`; the speed will be **greatly improved** (1 min/img (before) vs. 14 s/img (after) on V100; see the dtype sketch after this list). The minimum GPU memory requirement for PhotoMaker is **11G** (please refer to [this link](https://github.com/TencentARC/PhotoMaker/discussions/114) for saving GPU memory).
- ✅ January 15, 2024. We release PhotoMaker.
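For reference, below is a minimal sketch of the dtype selection described in the note above. It mirrors the demo code later in this commit; bfloat16 is used only when the GPU supports it:

```python
import torch

# fall back to float16 on GPUs without bfloat16 support
# (on V100 this cuts generation time from ~1 min/img to ~14 s/img)
torch_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
```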
---
@@ -240,6 +241,7 @@ Provided by [@Gradio](https://twitter.com/Gradio/status/1747683500495691942)
# 🤗 Acknowledgements
- PhotoMaker is co-hosted by Tencent ARC Lab and Nankai University [MCG-NKU](https://mmcheng.net/cmm/).
- Inspired from many excellent demos and repos, including [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter), [multimodalart/Ip-Adapter-FaceID](https://huggingface.co/spaces/multimodalart/Ip-Adapter-FaceID), [FastComposer](https://github.com/mit-han-lab/fastcomposer), and [T2I-Adapter](https://github.com/TencentARC/T2I-Adapter). Thanks for their great work!
- Thanks to the [HunyuanDiT](https://github.com/Tencent/HunyuanDiT) team for their generous support and suggestions!
- Thanks to the Venus team in Tencent PCG for their feedback and suggestions.
- Thanks to the HuggingFace team for their generous support!
<p align="center">
<img src="https://photo-maker.github.io/assets/logo.png" height=70>
</p>
<!-- ## <div align="center"><b>PhotoMaker</b></div> -->
<div align="center">
## PhotoMaker V2: Improved ID Fidelity and Better Controllability Compared to PhotoMaker V1
[[🤗 Demo](https://huggingface.co/spaces/TencentARC/PhotoMaker-V2)]
</div>
When training PhotoMaker V2, we focused on improving ID fidelity. Compared to PhotoMaker V1, we introduced 1️⃣ new training strategies, incorporated 2️⃣ more portrait datasets, and utilized 3️⃣ a more powerful ID extraction encoder. We will release a technical report soon. Thank you all for your attention.
### 🌠 **Key improvements in PhotoMaker V2:**
1. **ID fidelity** has been **further improved**, especially for single image input and Asian facial inputs. Of course, feeding more facial images can still yield better results.
2. By integrating [ControlNet](./inference_scripts/inference_pmv2_contronet.py), [T2I-Adapter](./inference_scripts/inference_pmv2_t2i_adapter.py), and [IP-Adapter](./inference_scripts/inference_pmv2_ip_adapter.py), the generation process becomes **more controllable**. We provide corresponding scripts for reference. Additionally, PhotoMaker V2 allows users to achieve better ID consistency by combining it with IP-Adapter-FaceID, InstantID, and [character LoRA](https://github.com/TencentARC/PhotoMaker/discussions/14).
3. PhotoMaker V2 **inherits the promising features of PhotoMaker V1**, such as high-quality and diverse generation capabilities, and powerful text control. Additionally, it can still integrate previous applications like bringing characters from old photos or paintings back to reality, identity mixing, and changing age or gender.
## Comparisons with PhotoMaker V1, IP-Adapter-FaceID and InstantID
We selected the three most prevalent methods in ID personalization generation, namely PhotoMaker V1, [IP-Adapter-FaceID-Plus-V2](https://huggingface.co/h94/IP-Adapter-FaceID) ([best of IP-Adapter-FaceID](https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/195)), and [InstantID](https://github.com/InstantID/InstantID).
To ensure a fair comparison, we used the same base model ([RealVisXL-V4.0](https://huggingface.co/SG161222/RealVisXL_V4.0)) and scheduler ([Euler](https://huggingface.co/docs/diffusers/api/schedulers/euler)), and selected the best out of four randomly generated images from each method for visualization. The prompts and negative prompts were consistent:
Prompt: `instagram photo, portrait photo of a woman img holding two cats, colorful, perfect face, natural skin, hard shadows, film grain`
Negative Prompt: `(asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth`
We can see that our method has **advantages** in maintaining ID fidelity and in the quality of the generated images.
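For reference, a minimal sketch of the shared setup on the PhotoMaker V2 side (the `id_images` list and `id_embeds` tensor are placeholders; their extraction is shown in the inference scripts later in this commit):

```python
import os
import torch
from diffusers import EulerDiscreteScheduler
from huggingface_hub import hf_hub_download
from photomaker import PhotoMakerStableDiffusionXLPipeline

# same base model and scheduler for every method under comparison
pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16
).to("cuda")
ckpt = hf_hub_download(repo_id="TencentARC/PhotoMaker",
                       filename="photomaker-v2.bin", repo_type="model")
pipe.load_photomaker_adapter(os.path.dirname(ckpt), subfolder="",
                             weight_name=os.path.basename(ckpt), trigger_word="img")
pipe.fuse_lora()
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = "instagram photo, portrait photo of a woman img holding two cats, colorful, perfect face, natural skin, hard shadows, film grain"
negative_prompt = "(asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth"

# four random candidates per method; the best one is shown in the figures below
images = pipe(prompt, negative_prompt=negative_prompt,
              input_id_images=id_images, id_embeds=id_embeds,
              num_images_per_prompt=4, start_merge_step=10).images
```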
![comp_pm_v2_reba](https://github.com/user-attachments/assets/b978ffa2-97c9-4910-ab23-a2b2edd3be1d)
![comp_pm_v2_musk](https://github.com/user-attachments/assets/6b96d65b-813a-45e0-8f7a-25041dc4dc10)
![comp_pm_v2_yanzu](https://github.com/user-attachments/assets/b788b2b0-9166-4c9d-aa46-24ef1fb4e5a9)
![comp_pm_v2_yifei](https://github.com/user-attachments/assets/66fa8a73-8973-4e40-a094-c4cb3eec8d8a)
## Cooperation with ControlNet / T2I-Adapter / IP-Adapter
PhotoMaker V2 can collaborate with [T2I-Adapter’s doodle mode](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0),
allowing for controlled image generation based on user drawings and prompts.
This feature can be experienced in [[🤗 our official demo]](https://huggingface.co/spaces/TencentARC/PhotoMaker-V2).
The following video shows an example of the workflow:
https://github.com/user-attachments/assets/1303d684-89e4-49d2-8e8c-4b659c8b48e7
Additionally, PhotoMaker V2 can work with [ControlNet](https://github.com/lllyasviel/ControlNet) and [T2I-Adapter](https://github.com/TencentARC/T2I-Adapter) for layout control, such as edge, pose, depth, and more.
We provide two example scripts:
1. [inference_pmv2_contronet.py](./inference_scripts/inference_pmv2_contronet.py)
2. [inference_pmv2_t2i_adapter.py](./inference_scripts/inference_pmv2_t2i_adapter.py)
The image below is an example of controlled generation using pose through ControlNet:
![pm_v2_controlnet](https://github.com/user-attachments/assets/57767447-192c-4606-af2a-4206b5dbccf9)
For cooperation with [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter), please refer to our sample script:
[inference_pmv2_ip_adapter.py](./inference_scripts/inference_pmv2_ip_adapter.py)
The image below is an example:
![pm_v2_ipadapter](https://github.com/user-attachments/assets/89f95604-6cfa-4dde-b563-2d052bac14cc)
PhotoMaker V2, as a plugin, works well with other plugins, such as IP-Adapter-FaceID or InstantID, to further improve ID fidelity, or with LCM for acceleration. We look forward to your exploration of more features, and welcome you to **provide PRs** or **contribute to the open-source community**.
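As one concrete example of the LCM combination mentioned above, here is a hedged sketch, assuming the public `latent-consistency/lcm-lora-sdxl` LoRA and a `pipe` built as in the inference scripts (the adapter weights are illustrative):

```python
from diffusers import LCMScheduler

# stack an LCM LoRA on top of the PhotoMaker adapter for faster sampling
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm-lora")
pipe.set_adapters(["photomaker", "lcm-lora"], adapter_weights=[1.0, 0.5])
pipe.fuse_lora()
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
```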
🥳 If you have built or known repositories or applications around PhotoMaker V2, please leave us a message in the discussion. We will include them in our README.
## LICENSE
Since PhotoMaker V2 relies on [InsightFace](https://github.com/deepinsight/insightface), it also needs to comply with its [license](https://github.com/deepinsight/insightface?tab=readme-ov-file#license).
@@ -17,7 +17,7 @@ from style_template import styles
from aspect_ratio_template import aspect_ratios
# global variable
base_model_path = 'SG161222/RealVisXL_V3.0'
base_model_path = 'SG161222/RealVisXL_V4.0'
try:
if torch.cuda.is_available():
device = "cuda"
@@ -37,10 +37,9 @@ DEFAULT_ASPECT_RATIO = ASPECT_RATIO_LABELS[0]
# download PhotoMaker checkpoint to cache
photomaker_ckpt = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v1.bin", repo_type="model")
torch_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
if device == "mps":
torch_dtype = torch.float16
else:
torch_dtype = torch.bfloat16
pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
base_model_path,
@@ -54,15 +53,17 @@ pipe.load_photomaker_adapter(
os.path.dirname(photomaker_ckpt),
subfolder="",
weight_name=os.path.basename(photomaker_ckpt),
trigger_word="img"
trigger_word="img",
pm_version="v1",
)
pipe.id_encoder.to(device)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
# pipe.set_adapters(["photomaker"], adapter_weights=[1.0])
pipe.fuse_lora()
pipe.to(device)
@spaces.GPU(enable_queue=True)
@spaces.GPU
def generate_image(upload_images, prompt, negative_prompt, aspect_ratio_name, style_name, num_steps, style_strength_ratio, num_outputs, guidance_scale, seed, progress=gr.Progress(track_tqdm=True)):
# check the trigger word
image_token_id = pipe.tokenizer.convert_tokens_to_ids(pipe.trigger_word)
@@ -185,8 +186,8 @@ If our work is useful for your research, please consider citing:
@inproceedings{li2023photomaker,
title={PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding},
author={Li, Zhen and Cao, Mingdeng and Wang, Xintao and Qi, Zhongang and Cheng, Ming-Ming and Shan, Ying},
booktitle={arXiv preprint arxiv:2312.04461},
year={2023}
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}
```
📋 **License**
@@ -310,4 +311,4 @@ with gr.Blocks(css=css) as demo:
gr.Markdown(article)
demo.launch(share=False)
demo.launch()
import torch
import torchvision.transforms.functional as TF
import numpy as np
import random
import os
import sys
from diffusers.utils import load_image
from diffusers import EulerDiscreteScheduler, T2IAdapter
from huggingface_hub import hf_hub_download
import spaces
import gradio as gr
from photomaker import PhotoMakerStableDiffusionXLAdapterPipeline
from photomaker import FaceAnalysis2, analyze_faces
from style_template import styles
from aspect_ratio_template import aspect_ratios
# global variable
base_model_path = 'SG161222/RealVisXL_V4.0'
face_detector = FaceAnalysis2(providers=['CUDAExecutionProvider'], allowed_modules=['detection', 'recognition'])
face_detector.prepare(ctx_id=0, det_size=(640, 640))
try:
if torch.cuda.is_available():
device = "cuda"
elif sys.platform == "darwin" and torch.backends.mps.is_available():
device = "mps"
else:
device = "cpu"
except:
device = "cpu"
MAX_SEED = np.iinfo(np.int32).max
STYLE_NAMES = list(styles.keys())
DEFAULT_STYLE_NAME = "Photographic (Default)"
ASPECT_RATIO_LABELS = list(aspect_ratios)
DEFAULT_ASPECT_RATIO = ASPECT_RATIO_LABELS[0]
enable_doodle_arg = False
photomaker_ckpt = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v2.bin", repo_type="model")
torch_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
if device == "mps":
torch_dtype = torch.float16
# load adapter
adapter = T2IAdapter.from_pretrained(
"TencentARC/t2i-adapter-sketch-sdxl-1.0", torch_dtype=torch_dtype, variant="fp16"
).to(device)
pipe = PhotoMakerStableDiffusionXLAdapterPipeline.from_pretrained(
base_model_path,
adapter=adapter,
torch_dtype=torch_dtype,
use_safetensors=True,
variant="fp16",
).to(device)
pipe.load_photomaker_adapter(
os.path.dirname(photomaker_ckpt),
subfolder="",
weight_name=os.path.basename(photomaker_ckpt),
trigger_word="img",
pm_version="v2",
)
pipe.id_encoder.to(device)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
# pipe.set_adapters(["photomaker"], adapter_weights=[1.0])
pipe.fuse_lora()
pipe.to(device)
@spaces.GPU
def generate_image(
upload_images,
prompt,
negative_prompt,
aspect_ratio_name,
style_name,
num_steps,
style_strength_ratio,
num_outputs,
guidance_scale,
seed,
use_doodle,
sketch_image,
adapter_conditioning_scale,
adapter_conditioning_factor,
progress=gr.Progress(track_tqdm=True)
):
if use_doodle:
sketch_image = sketch_image["composite"]
r, g, b, a = sketch_image.split()
sketch_image = a.convert("RGB")
sketch_image = TF.to_tensor(sketch_image) > 0.5 # Inversion
sketch_image = TF.to_pil_image(sketch_image.to(torch.float32))
# keep the user-specified adapter conditioning values when doodle is enabled
else:
adapter_conditioning_scale = 0.
adapter_conditioning_factor = 0.
sketch_image = None
# check the trigger word
image_token_id = pipe.tokenizer.convert_tokens_to_ids(pipe.trigger_word)
input_ids = pipe.tokenizer.encode(prompt)
if image_token_id not in input_ids:
raise gr.Error(f"Cannot find the trigger word '{pipe.trigger_word}' in text prompt! Please refer to step 2️⃣")
if input_ids.count(image_token_id) > 1:
raise gr.Error(f"Cannot use multiple trigger words '{pipe.trigger_word}' in text prompt!")
# determine output dimensions by the aspect ratio
output_w, output_h = aspect_ratios[aspect_ratio_name]
print(f"[Debug] Generate image using aspect ratio [{aspect_ratio_name}] => {output_w} x {output_h}")
# apply the style template
prompt, negative_prompt = apply_style(style_name, prompt, negative_prompt)
if upload_images is None:
raise gr.Error(f"Cannot find any input face image! Please refer to step 1️⃣")
input_id_images = []
for img in upload_images:
input_id_images.append(load_image(img))
id_embed_list = []
for img in input_id_images:
img = np.array(img)
img = img[:, :, ::-1]
faces = analyze_faces(face_detector, img)
if len(faces) > 0:
id_embed_list.append(torch.from_numpy((faces[0]['embedding'])))
if len(id_embed_list) == 0:
raise gr.Error(f"No face detected, please update the input face image(s)")
id_embeds = torch.stack(id_embed_list)
generator = torch.Generator(device=device).manual_seed(seed)
print("Start inference...")
print(f"[Debug] Seed: {seed}")
print(f"[Debug] Prompt: {prompt}, \n[Debug] Neg Prompt: {negative_prompt}")
start_merge_step = int(float(style_strength_ratio) / 100 * num_steps)
if start_merge_step > 30:
start_merge_step = 30
print(f"[Debug] Start merge step: {start_merge_step}")
images = pipe(
prompt=prompt,
width=output_w,
height=output_h,
input_id_images=input_id_images,
negative_prompt=negative_prompt,
num_images_per_prompt=num_outputs,
num_inference_steps=num_steps,
start_merge_step=start_merge_step,
generator=generator,
guidance_scale=guidance_scale,
id_embeds=id_embeds,
image=sketch_image,
adapter_conditioning_scale=adapter_conditioning_scale,
adapter_conditioning_factor=adapter_conditioning_factor,
).images
return images, gr.update(visible=True)
def swap_to_gallery(images):
return gr.update(value=images, visible=True), gr.update(visible=True), gr.update(visible=False)
def upload_example_to_gallery(images, prompt, style, negative_prompt):
return gr.update(value=images, visible=True), gr.update(visible=True), gr.update(visible=False)
def remove_back_to_files():
return gr.update(visible=False), gr.update(visible=False), gr.update(visible=True)
def change_doodle_space(use_doodle):
if use_doodle:
return gr.update(visible=True)
else:
return gr.update(visible=False)
def remove_tips():
return gr.update(visible=False)
def randomize_seed_fn(seed: int, randomize_seed: bool) -> int:
if randomize_seed:
seed = random.randint(0, MAX_SEED)
return seed
def apply_style(style_name: str, positive: str, negative: str = "") -> tuple[str, str]:
p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
return p.replace("{prompt}", positive), n + ' ' + negative
def get_image_path_list(folder_name):
image_basename_list = os.listdir(folder_name)
image_path_list = sorted([os.path.join(folder_name, basename) for basename in image_basename_list])
return image_path_list
def get_example():
case = [
[
get_image_path_list('./examples/scarletthead_woman'),
"instagram photo, portrait photo of a woman img, colorful, perfect face, natural skin, hard shadows, film grain",
"(No style)",
"(asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth",
],
[
get_image_path_list('./examples/newton_man'),
"sci-fi, closeup portrait photo of a man img wearing the sunglasses in Iron man suit, face, slim body, high quality, film grain",
"(No style)",
"(asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth",
],
]
return case
### Description and style
logo = r"""
<center><img src='https://photo-maker.github.io/assets/logo.png' alt='PhotoMaker logo' style="width:80px; margin-bottom:10px"></center>
"""
title = r"""
<h1 align="center">PhotoMaker V2: Improved ID Fidelity and Better Controllability than PhotoMaker V1</h1>
"""
description = r"""
<b>Official 🤗 Gradio demo</b> for <a href='https://github.com/TencentARC/PhotoMaker' target='_blank'><b>PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding</b></a>.<br>
The details of PhotoMaker V2 can be found in [this doc](https://github.com/TencentARC/PhotoMaker/blob/main/README_pmv2.md).
<br>
<br>
For the previous version of PhotoMaker, you can use our original Gradio demos: [PhotoMaker](https://huggingface.co/spaces/TencentARC/PhotoMaker) and [PhotoMaker-Style](https://huggingface.co/spaces/TencentARC/PhotoMaker-Style).
<br>
❗️❗️❗️[<b>Important</b>] Personalization steps:<br>
1️⃣ Upload images of someone you want to customize. One image is ok, but more is better. Although we do not perform face detection, the face in the uploaded image should <b>occupy the majority of the image</b>.<br>
2️⃣ Enter a text prompt, making sure to <b>follow the class word</b> you want to customize with the <b>trigger word</b>: `img`, such as: `man img` or `woman img` or `girl img`.<br>
3️⃣ Choose your preferred style template.<br>
4️⃣ <b>(Optional: but new feature)</b> Select the ‘Enable Drawing Doodle...’ option and draw on the canvas<br>
5️⃣ Click the <b>Submit</b> button to start customizing.
"""
article = r"""
If PhotoMaker V2 is helpful, please help to ⭐ the <a href='https://github.com/TencentARC/PhotoMaker' target='_blank'>Github Repo</a>. Thanks!
[![GitHub Stars](https://img.shields.io/github/stars/TencentARC/PhotoMaker?style=social)](https://github.com/TencentARC/PhotoMaker)
---
📝 **Citation**
<br>
If our work is useful for your research, please consider citing:
```bibtex
@inproceedings{li2023photomaker,
title={PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding},
author={Li, Zhen and Cao, Mingdeng and Wang, Xintao and Qi, Zhongang and Cheng, Ming-Ming and Shan, Ying},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}
```
📋 **License**
<br>
Apache-2.0 LICENSE. Please refer to the [LICENSE file](https://huggingface.co/TencentARC/PhotoMaker/blob/main/LICENSE) for details.
📧 **Contact**
<br>
If you have any questions, please feel free to reach me out at <b>zhenli1031@gmail.com</b>.
"""
tips = r"""
### Usage tips of PhotoMaker
1. Upload **more photos** of the person to be customized to **improve ID fidelity**.
2. If you find that the image quality is poor when using doodle for control, you can reduce the conditioning scale and factor of the adapter.
Besides, you can increase the adapter scale for stronger consistency with your doodle. <br>
If you run into any issues, please leave a message on the discussion page of the Space. For a more stable (queue-free) experience, you can duplicate the Space.
"""
# We have provided some generate examples and comparisons at: [this website]().
css = '''
.gradio-container {width: 85% !important}
'''
with gr.Blocks(css=css) as demo:
gr.Markdown(logo)
gr.Markdown(title)
gr.Markdown(description)
# gr.DuplicateButton(
# value="Duplicate Space for private use ",
# elem_id="duplicate-button",
# visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1",
# )
with gr.Row():
with gr.Column():
files = gr.Files(
label="Drag (Select) 1 or more photos of your face",
file_types=["image"]
)
uploaded_files = gr.Gallery(label="Your images", visible=False, columns=5, rows=1, height=200)
with gr.Column(visible=False) as clear_button:
remove_and_reupload = gr.ClearButton(value="Remove and upload new ones", components=files, size="sm")
prompt = gr.Textbox(label="Prompt",
info="Try something like 'a photo of a man/woman img', 'img' is the trigger word.",
placeholder="A photo of a [man/woman img]...")
style = gr.Dropdown(label="Style template", choices=STYLE_NAMES, value=DEFAULT_STYLE_NAME)
aspect_ratio = gr.Dropdown(label="Output aspect ratio", choices=ASPECT_RATIO_LABELS, value=DEFAULT_ASPECT_RATIO)
submit = gr.Button("Submit")
enable_doodle = gr.Checkbox(
label="Enable Drawing Doodle for Control", value=enable_doodle_arg,
info="After enabling this option, PhotoMaker will generate content based on your doodle on the canvas, driven by the T2I-Adapter (Quality may be decreased)",
)
with gr.Accordion("T2I-Adapter-Doodle (Optional)", visible=False) as doodle_space:
with gr.Row():
sketch_image = gr.Sketchpad(
label="Canvas",
type="pil",
crop_size=[1024,1024],
layers=False,
canvas_size=(350, 350),
brush=gr.Brush(default_size=5, colors=["#000000"], color_mode="fixed")
)
with gr.Group():
adapter_conditioning_scale = gr.Slider(
label="Adapter conditioning scale",
minimum=0.5,
maximum=1,
step=0.1,
value=0.7,
)
adapter_conditioning_factor = gr.Slider(
label="Adapter conditioning factor",
info="Fraction of timesteps for which adapter should be applied",
minimum=0.5,
maximum=1,
step=0.1,
value=0.8,
)
with gr.Accordion(open=False, label="Advanced Options"):
negative_prompt = gr.Textbox(
label="Negative Prompt",
placeholder="low quality",
value="nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry",
)
num_steps = gr.Slider(
label="Number of sample steps",
minimum=20,
maximum=100,
step=1,
value=50,
)
style_strength_ratio = gr.Slider(
label="Style strength (%)",
minimum=15,
maximum=50,
step=1,
value=20,
)
num_outputs = gr.Slider(
label="Number of output images",
minimum=1,
maximum=4,
step=1,
value=2,
)
guidance_scale = gr.Slider(
label="Guidance scale",
minimum=0.1,
maximum=10.0,
step=0.1,
value=5,
)
seed = gr.Slider(
label="Seed",
minimum=0,
maximum=MAX_SEED,
step=1,
value=0,
)
randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
with gr.Column():
gallery = gr.Gallery(label="Generated Images")
usage_tips = gr.Markdown(label="Usage tips of PhotoMaker", value=tips, visible=False)
files.upload(fn=swap_to_gallery, inputs=files, outputs=[uploaded_files, clear_button, files])
remove_and_reupload.click(fn=remove_back_to_files, outputs=[uploaded_files, clear_button, files])
enable_doodle.select(fn=change_doodle_space, inputs=enable_doodle, outputs=doodle_space)
input_list = [
files,
prompt,
negative_prompt,
aspect_ratio,
style,
num_steps,
style_strength_ratio,
num_outputs,
guidance_scale,
seed,
enable_doodle,
sketch_image,
adapter_conditioning_scale,
adapter_conditioning_factor
]
submit.click(
fn=remove_tips,
outputs=usage_tips,
).then(
fn=randomize_seed_fn,
inputs=[seed, randomize_seed],
outputs=seed,
queue=False,
api_name=False,
).then(
fn=generate_image,
inputs=input_list,
outputs=[gallery, usage_tips]
)
gr.Examples(
examples=get_example(),
inputs=[files, prompt, style, negative_prompt],
run_on_click=True,
fn=upload_example_to_gallery,
outputs=[uploaded_files, clear_button, files],
)
gr.Markdown(article)
demo.launch()
# !pip install opencv-python transformers accelerate
import os
import sys
import numpy as np
import torch
from diffusers.utils import load_image
from diffusers import EulerDiscreteScheduler
from huggingface_hub import hf_hub_download
from photomaker import PhotoMakerStableDiffusionXLPipeline
from photomaker import FaceAnalysis2, analyze_faces
face_detector = FaceAnalysis2(providers=['CUDAExecutionProvider'], allowed_modules=['detection', 'recognition'])
face_detector.prepare(ctx_id=0, det_size=(640, 640))
try:
if torch.cuda.is_available():
device = "cuda"
elif sys.platform == "darwin" and torch.backends.mps.is_available():
device = "mps"
else:
device = "cpu"
except:
device = "cpu"
torch_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
if device == "mps":
torch_dtype = torch.float16
output_dir = "./outputs"
os.makedirs(output_dir, exist_ok=True)
photomaker_path = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v2.bin", repo_type="model")
prompt = "instagram photo, portrait photo of a woman img, colorful, perfect face, natural skin, hard shadows, film grain, best quality"
negative_prompt = "(asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth"
# initialize the models and pipeline
### Load base model
pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
"SG161222/RealVisXL_V4.0", torch_dtype=torch_dtype
).to("cuda")
### Load PhotoMaker checkpoint
pipe.load_photomaker_adapter(
os.path.dirname(photomaker_path),
subfolder="",
weight_name=os.path.basename(photomaker_path),
trigger_word="img" # define the trigger word
)
### Also can cooperate with other LoRA modules
# pipe.load_lora_weights(os.path.dirname(lora_path), weight_name=lora_model_name, adapter_name="lcm-lora")
# pipe.set_adapters(["photomaker", "lcm-lora"], adapter_weights=[1.0, 0.5])
pipe.fuse_lora()
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
### define the input ID images
input_folder_name = './examples/scarletthead_woman'
image_basename_list = os.listdir(input_folder_name)
image_path_list = sorted([os.path.join(input_folder_name, basename) for basename in image_basename_list])
input_id_images = []
for image_path in image_path_list:
input_id_images.append(load_image(image_path))
id_embed_list = []
for img in input_id_images:
img = np.array(img)
img = img[:, :, ::-1]
faces = analyze_faces(face_detector, img)
if len(faces) > 0:
id_embed_list.append(torch.from_numpy((faces[0]['embedding'])))
if len(id_embed_list) == 0:
raise ValueError(f"No face detected in input image pool")
id_embeds = torch.stack(id_embed_list)
# generate image
images = pipe(
prompt,
negative_prompt=negative_prompt,
input_id_images=input_id_images,
id_embeds=id_embeds,
num_images_per_prompt=2,
start_merge_step=10,
).images
for idx, img in enumerate(images):
img.save(os.path.join(output_dir, f"output_pmv2_{idx}.jpg"))
\ No newline at end of file
# !pip install opencv-python transformers accelerate
import os
import sys
import numpy as np
import torch
from diffusers.utils import load_image
from diffusers import EulerDiscreteScheduler, ControlNetModel
from huggingface_hub import hf_hub_download
from controlnet_aux import OpenposeDetector
from photomaker import PhotoMakerStableDiffusionXLControlNetPipeline
from photomaker import FaceAnalysis2, analyze_faces
face_detector = FaceAnalysis2(providers=['CUDAExecutionProvider'], allowed_modules=['detection', 'recognition'])
face_detector.prepare(ctx_id=0, det_size=(640, 640))
try:
if torch.cuda.is_available():
device = "cuda"
elif sys.platform == "darwin" and torch.backends.mps.is_available():
device = "mps"
else:
device = "cpu"
except:
device = "cpu"
torch_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
if device == "mps":
torch_dtype = torch.float16
output_dir = "./outputs"
os.makedirs(output_dir, exist_ok=True)
photomaker_path = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v2.bin", repo_type="model")
openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
controlnet_pose_model = "thibaud/controlnet-openpose-sdxl-1.0"
controlnet_pose = ControlNetModel.from_pretrained(
controlnet_pose_model, torch_dtype=torch_dtype,
).to("cuda")
prompt = "instagram photo, a photo of a woman img, colorful, perfect face, natural skin, hard shadows, film grain, best quality"
negative_prompt = "(asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth"
# download an image
pose_image = load_image(
"./examples/pos_ref.png"
)
pose_image = openpose(pose_image, detect_resolution=512, image_resolution=1024)
# initialize the models and pipeline
controlnet_conditioning_scale = 1.0 # recommended for good generalization
### Load base model
pipe = PhotoMakerStableDiffusionXLControlNetPipeline.from_pretrained(
"SG161222/RealVisXL_V4.0",
controlnet=controlnet_pose,
torch_dtype=torch_dtype,
).to("cuda")
### Load PhotoMaker checkpoint
pipe.load_photomaker_adapter(
os.path.dirname(photomaker_path),
subfolder="",
weight_name=os.path.basename(photomaker_path),
trigger_word="img" # define the trigger word
)
### Also can cooperate with other LoRA modules
# pipe.load_lora_weights(os.path.dirname(lora_path), weight_name=lora_model_name, adapter_name="lcm-lora")
# pipe.set_adapters(["photomaker", "lcm-lora"], adapter_weights=[1.0, 0.5])
pipe.fuse_lora()
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
### define the input ID images
input_folder_name = './examples/scarletthead_woman'
image_basename_list = os.listdir(input_folder_name)
image_path_list = sorted([os.path.join(input_folder_name, basename) for basename in image_basename_list])
input_id_images = []
for image_path in image_path_list:
input_id_images.append(load_image(image_path))
### extract insightface embedding
id_embed_list = []
for img in input_id_images:
img = np.array(img)
img = img[:, :, ::-1]
faces = analyze_faces(face_detector, img)
if len(faces) > 0:
id_embed_list.append(torch.from_numpy((faces[0]['embedding'])))
if len(id_embed_list) == 0:
raise ValueError(f"No face detected in input image pool")
id_embeds = torch.stack(id_embed_list)
# generate image
images = pipe(
prompt,
negative_prompt=negative_prompt,
input_id_images=input_id_images,
id_embeds=id_embeds,
controlnet_conditioning_scale=controlnet_conditioning_scale,
image=pose_image,
num_images_per_prompt=2,
start_merge_step=10,
).images
for idx, img in enumerate(images):
img.save(os.path.join(output_dir, f"output_pmv2_cn_{idx}.jpg"))
\ No newline at end of file
# !pip install opencv-python transformers accelerate
import os
import sys
import numpy as np
import torch
from diffusers.utils import load_image
from diffusers import EulerDiscreteScheduler
from huggingface_hub import hf_hub_download
from photomaker import PhotoMakerStableDiffusionXLPipeline
from photomaker import FaceAnalysis2, analyze_faces
face_detector = FaceAnalysis2(providers=['CUDAExecutionProvider'], allowed_modules=['detection', 'recognition'])
face_detector.prepare(ctx_id=0, det_size=(640, 640))
try:
if torch.cuda.is_available():
device = "cuda"
elif sys.platform == "darwin" and torch.backends.mps.is_available():
device = "mps"
else:
device = "cpu"
except:
device = "cpu"
torch_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
if device == "mps":
torch_dtype = torch.float16
output_dir = "./outputs"
os.makedirs(output_dir, exist_ok=True)
photomaker_path = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v2.bin", repo_type="model")
prompt = "portrait photo of a woman img, colorful, perfect face, best quality"
negative_prompt = "(asymmetry, worst quality, low quality, illustration), open mouth"
# # initialize the models and pipeline
### Load base model
pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
"SG161222/RealVisXL_V4.0", torch_dtype=torch_dtype,
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.7)
print("Loading images...")
style_images = [load_image(f"./examples/statue.png")]
### Load PhotoMaker checkpoint
pipe.load_photomaker_adapter(
os.path.dirname(photomaker_path),
subfolder="",
weight_name=os.path.basename(photomaker_path),
trigger_word="img" # define the trigger word
)
### Also can cooperate with other LoRA modules
# pipe.load_lora_weights(os.path.dirname(lora_path), weight_name=lora_model_name, adapter_name="lcm-lora")
# pipe.set_adapters(["photomaker", "lcm-lora"], adapter_weights=[1.0, 0.5])
pipe.fuse_lora()
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
### define the input ID images
input_folder_name = './examples/scarletthead_woman'
image_basename_list = os.listdir(input_folder_name)
image_path_list = sorted([os.path.join(input_folder_name, basename) for basename in image_basename_list])
input_id_images = []
for image_path in image_path_list:
input_id_images.append(load_image(image_path))
id_embed_list = []
for img in input_id_images:
img = np.array(img)
img = img[:, :, ::-1]
faces = analyze_faces(face_detector, img)
if len(faces) > 0:
id_embed_list.append(torch.from_numpy((faces[0]['embedding'])))
if len(id_embed_list) == 0:
raise ValueError(f"No face detected in input image pool")
id_embeds = torch.stack(id_embed_list)
# generate image
images = pipe(
prompt,
negative_prompt=negative_prompt,
input_id_images=input_id_images,
id_embeds=id_embeds,
ip_adapter_image=[style_images],
num_images_per_prompt=2,
start_merge_step=10,
).images
for idx, img in enumerate(images):
img.save(os.path.join(output_dir, f"output_pmv2_ipa_{idx}.jpg"))
\ No newline at end of file
# !pip install opencv-python transformers accelerate
import os
import sys
import numpy as np
import torch
from diffusers.utils import load_image
from diffusers import EulerDiscreteScheduler, T2IAdapter
from huggingface_hub import hf_hub_download
from controlnet_aux import OpenposeDetector
from photomaker import PhotoMakerStableDiffusionXLAdapterPipeline
from photomaker import FaceAnalysis2, analyze_faces
face_detector = FaceAnalysis2(providers=['CUDAExecutionProvider'], allowed_modules=['detection', 'recognition'])
face_detector.prepare(ctx_id=0, det_size=(640, 640))
try:
if torch.cuda.is_available():
device = "cuda"
elif sys.platform == "darwin" and torch.backends.mps.is_available():
device = "mps"
else:
device = "cpu"
except:
device = "cpu"
torch_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
if device == "mps":
torch_dtype = torch.float16
output_dir = "./outputs"
os.makedirs(output_dir, exist_ok=True)
photomaker_path = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v2.bin", repo_type="model")
openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
# load adapter
adapter = T2IAdapter.from_pretrained(
"TencentARC/t2i-adapter-openpose-sdxl-1.0", torch_dtype=torch_dtype,
).to("cuda")
prompt = "instagram photo, a photo of a woman img, colorful, perfect face, natural skin, hard shadows, film grain, best quality"
negative_prompt = "(asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth"
# download an image
pose_image = load_image(
"./examples/pos_ref.png"
)
pose_image = openpose(pose_image, detect_resolution=512, image_resolution=1024)
# initialize the models and pipeline
adapter_conditioning_scale = 0.8 # recommended for good generalization
adapter_conditioning_factor = 0.8
### Load base model
pipe = PhotoMakerStableDiffusionXLAdapterPipeline.from_pretrained(
"SG161222/RealVisXL_V4.0",
adapter=adapter,
torch_dtype=torch_dtype,
).to("cuda")
### Load PhotoMaker checkpoint
pipe.load_photomaker_adapter(
os.path.dirname(photomaker_path),
subfolder="",
weight_name=os.path.basename(photomaker_path),
trigger_word="img" # define the trigger word
)
### Also can cooperate with other LoRA modules
# pipe.load_lora_weights(os.path.dirname(lora_path), weight_name=lora_model_name, adapter_name="lcm-lora")
# pipe.set_adapters(["photomaker", "lcm-lora"], adapter_weights=[1.0, 0.5])
pipe.fuse_lora()
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
### define the input ID images
input_folder_name = './examples/scarletthead_woman'
image_basename_list = os.listdir(input_folder_name)
image_path_list = sorted([os.path.join(input_folder_name, basename) for basename in image_basename_list])
input_id_images = []
for image_path in image_path_list:
input_id_images.append(load_image(image_path))
id_embed_list = []
for img in input_id_images:
img = np.array(img)
img = img[:, :, ::-1]
faces = analyze_faces(face_detector, img)
if len(faces) > 0:
id_embed_list.append(torch.from_numpy((faces[0]['embedding'])))
if len(id_embed_list) == 0:
raise ValueError(f"No face detected in input image pool")
id_embeds = torch.stack(id_embed_list)
# generate image
images = pipe(
prompt,
negative_prompt=negative_prompt,
input_id_images=input_id_images,
id_embeds=id_embeds,
adapter_conditioning_scale=adapter_conditioning_scale,
image=pose_image,
num_images_per_prompt=2,
start_merge_step=10,
).images
for idx, img in enumerate(images):
img.save(os.path.join(output_dir, f"output_pmv2_t2ia_{idx}.jpg"))
\ No newline at end of file
from .model import PhotoMakerIDEncoder
from .model_v2 import PhotoMakerIDEncoder_CLIPInsightfaceExtendtoken
from .resampler import FacePerceiverResampler
from .pipeline import PhotoMakerStableDiffusionXLPipeline
from .pipeline_controlnet import PhotoMakerStableDiffusionXLControlNetPipeline
from .pipeline_t2i_adapter import PhotoMakerStableDiffusionXLAdapterPipeline
# InsightFace Package
from .insightface_package import FaceAnalysis2, analyze_faces
__all__ = [
"FaceAnalysis2",
"analyze_faces",
"FacePerceiverResampler",
"PhotoMakerIDEncoder",
"PhotoMakerIDEncoder_CLIPInsightfaceExtendtoken",
"PhotoMakerStableDiffusionXLPipeline",
"PhotoMakerStableDiffusionXLControlNetPipeline",
"PhotoMakerStableDiffusionXLAdapterPipeline",
]
\ No newline at end of file
import numpy as np
# pip install insightface==0.7.3
from insightface.app import FaceAnalysis
from insightface.data import get_image as ins_get_image
###
# https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/165#issue-2055829543
###
class FaceAnalysis2(FaceAnalysis):
# NOTE: allows setting det_size for each detection call.
# the model allows it but the wrapping code from insightface
# doesn't show it, and people end up loading duplicate models
# for different sizes where there is absolutely no need to
def get(self, img, max_num=0, det_size=(640, 640)):
if det_size is not None:
self.det_model.input_size = det_size
return super().get(img, max_num)
def analyze_faces(face_analysis: FaceAnalysis, img_data: np.ndarray, det_size=(640, 640)):
# NOTE: try detect faces, if no faces detected, lower det_size until it does
detection_sizes = [None] + [(size, size) for size in range(640, 256, -64)] + [(256, 256)]
for size in detection_sizes:
faces = face_analysis.get(img_data, det_size=size)
if len(faces) > 0:
return faces
return []
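# Usage sketch (assumption: a BGR numpy image, as used elsewhere in this repo;
# `img_bgr` is illustrative):
#
#   face_detector = FaceAnalysis2(providers=['CUDAExecutionProvider'],
#                                 allowed_modules=['detection', 'recognition'])
#   face_detector.prepare(ctx_id=0, det_size=(640, 640))
#   faces = analyze_faces(face_detector, img_bgr)  # retries smaller det_size if none found
#   id_embedding = faces[0]['embedding'] if faces else None  # 512-d insightface ID embedding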
# Merge the image encoder and fuse module to create an ID encoder:
# sending multiple ID images directly yields updated prompt embeddings containing a stacked ID embedding
import torch
import torch.nn as nn
from transformers.models.clip.modeling_clip import CLIPVisionModelWithProjection
from transformers.models.clip.configuration_clip import CLIPVisionConfig
from . import FacePerceiverResampler
VISION_CONFIG_DICT = {
"hidden_size": 1024,
"intermediate_size": 4096,
"num_attention_heads": 16,
"num_hidden_layers": 24,
"patch_size": 14,
"projection_dim": 768
}
class MLP(nn.Module):
def __init__(self, in_dim, out_dim, hidden_dim, use_residual=True):
super().__init__()
if use_residual:
assert in_dim == out_dim
self.layernorm = nn.LayerNorm(in_dim)
self.fc1 = nn.Linear(in_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, out_dim)
self.use_residual = use_residual
self.act_fn = nn.GELU()
def forward(self, x):
residual = x
x = self.layernorm(x)
x = self.fc1(x)
x = self.act_fn(x)
x = self.fc2(x)
if self.use_residual:
x = x + residual
return x
class QFormerPerceiver(nn.Module):
def __init__(self, id_embeddings_dim, cross_attention_dim, num_tokens, embedding_dim=1024, use_residual=True, ratio=4):
super().__init__()
self.num_tokens = num_tokens
self.cross_attention_dim = cross_attention_dim
self.use_residual = use_residual
self.token_proj = nn.Sequential(
nn.Linear(id_embeddings_dim, id_embeddings_dim*ratio),
nn.GELU(),
nn.Linear(id_embeddings_dim*ratio, cross_attention_dim*num_tokens),
)
self.token_norm = nn.LayerNorm(cross_attention_dim)
self.perceiver_resampler = FacePerceiverResampler(
dim=cross_attention_dim,
depth=4,
dim_head=128,
heads=cross_attention_dim // 128,
embedding_dim=embedding_dim,
output_dim=cross_attention_dim,
ff_mult=4,
)
def forward(self, x, last_hidden_state):
x = self.token_proj(x)
x = x.reshape(-1, self.num_tokens, self.cross_attention_dim)
x = self.token_norm(x) # cls token
out = self.perceiver_resampler(x, last_hidden_state) # retrieve from patch tokens
if self.use_residual: # TODO: if use_residual is not true
out = x + 1.0 * out
return out
class FuseModule(nn.Module):
def __init__(self, embed_dim):
super().__init__()
self.mlp1 = MLP(embed_dim * 2, embed_dim, embed_dim, use_residual=False)
self.mlp2 = MLP(embed_dim, embed_dim, embed_dim, use_residual=True)
self.layer_norm = nn.LayerNorm(embed_dim)
def fuse_fn(self, prompt_embeds, id_embeds):
stacked_id_embeds = torch.cat([prompt_embeds, id_embeds], dim=-1)
stacked_id_embeds = self.mlp1(stacked_id_embeds) + prompt_embeds
stacked_id_embeds = self.mlp2(stacked_id_embeds)
stacked_id_embeds = self.layer_norm(stacked_id_embeds)
return stacked_id_embeds
def forward(
self,
prompt_embeds,
id_embeds,
class_tokens_mask,
) -> torch.Tensor:
# id_embeds shape: [b, max_num_inputs, 1, 2048]
id_embeds = id_embeds.to(prompt_embeds.dtype)
num_inputs = class_tokens_mask.sum().unsqueeze(0) # TODO: check for training case
batch_size, max_num_inputs = id_embeds.shape[:2]
# seq_length: 77
seq_length = prompt_embeds.shape[1]
# flat_id_embeds shape: [b*max_num_inputs, 1, 2048]
flat_id_embeds = id_embeds.view(
-1, id_embeds.shape[-2], id_embeds.shape[-1]
)
# valid_id_mask [b*max_num_inputs]
valid_id_mask = (
torch.arange(max_num_inputs, device=flat_id_embeds.device)[None, :]
< num_inputs[:, None]
)
valid_id_embeds = flat_id_embeds[valid_id_mask.flatten()]
prompt_embeds = prompt_embeds.view(-1, prompt_embeds.shape[-1])
class_tokens_mask = class_tokens_mask.view(-1)
valid_id_embeds = valid_id_embeds.view(-1, valid_id_embeds.shape[-1])
# slice out the image token embeddings
image_token_embeds = prompt_embeds[class_tokens_mask]
stacked_id_embeds = self.fuse_fn(image_token_embeds, valid_id_embeds)
assert class_tokens_mask.sum() == stacked_id_embeds.shape[0], f"{class_tokens_mask.sum()} != {stacked_id_embeds.shape[0]}"
prompt_embeds.masked_scatter_(class_tokens_mask[:, None], stacked_id_embeds.to(prompt_embeds.dtype))
updated_prompt_embeds = prompt_embeds.view(batch_size, seq_length, -1)
return updated_prompt_embeds
class PhotoMakerIDEncoder_CLIPInsightfaceExtendtoken(CLIPVisionModelWithProjection):
def __init__(self, id_embeddings_dim=512):
super().__init__(CLIPVisionConfig(**VISION_CONFIG_DICT))
self.fuse_module = FuseModule(2048)
self.visual_projection_2 = nn.Linear(1024, 1280, bias=False)
cross_attention_dim = 2048
# projection
self.num_tokens = 2
self.cross_attention_dim = cross_attention_dim
self.qformer_perceiver = QFormerPerceiver(
id_embeddings_dim,
cross_attention_dim,
self.num_tokens,
)
def forward(self, id_pixel_values, prompt_embeds, class_tokens_mask, id_embeds):
b, num_inputs, c, h, w = id_pixel_values.shape
id_pixel_values = id_pixel_values.view(b * num_inputs, c, h, w)
last_hidden_state = self.vision_model(id_pixel_values)[0]
id_embeds = id_embeds.view(b * num_inputs, -1)
id_embeds = self.qformer_perceiver(id_embeds, last_hidden_state)
id_embeds = id_embeds.view(b, num_inputs, self.num_tokens, -1)
updated_prompt_embeds = self.fuse_module(prompt_embeds, id_embeds, class_tokens_mask)
return updated_prompt_embeds
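# Shape sketch (assuming 224x224 CLIP inputs): with num_inputs ID images,
#   id_pixel_values (b, num_inputs, 3, 224, 224) -> CLIP tokens (b*num_inputs, 257, 1024)
#   insightface id_embeds (b, num_inputs, 512) -> QFormerPerceiver -> (b, num_inputs, 2, 2048)
# fuse_module then scatters these ID tokens into prompt_embeds at the positions
# marked by class_tokens_mask (the trigger-word class tokens).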
if __name__ == "__main__":
PhotoMakerIDEncoder_CLIPInsightfaceExtendtoken()
\ No newline at end of file
#### Borrowed from https://github.com/tencent-ailab/IP-Adapter/blob/main/ip_adapter/resampler.py
# modified from https://github.com/mlfoundations/open_flamingo/blob/main/open_flamingo/src/helpers.py
# and https://github.com/lucidrains/imagen-pytorch/blob/main/imagen_pytorch/imagen_pytorch.py
import math
import torch
import torch.nn as nn
from einops import rearrange
from einops.layers.torch import Rearrange
class FacePerceiverResampler(torch.nn.Module):
def __init__(
self,
*,
dim=768,
depth=4,
dim_head=64,
heads=16,
embedding_dim=1280,
output_dim=768,
ff_mult=4,
):
super().__init__()
self.proj_in = torch.nn.Linear(embedding_dim, dim)
self.proj_out = torch.nn.Linear(dim, output_dim)
self.norm_out = torch.nn.LayerNorm(output_dim)
self.layers = torch.nn.ModuleList([])
for _ in range(depth):
self.layers.append(
torch.nn.ModuleList(
[
PerceiverAttention(dim=dim, dim_head=dim_head, heads=heads),
FeedForward(dim=dim, mult=ff_mult),
]
)
)
def forward(self, latents, x):
x = self.proj_in(x)
for attn, ff in self.layers:
latents = attn(x, latents) + latents
latents = ff(latents) + latents
latents = self.proj_out(latents)
return self.norm_out(latents)
# FFN
def FeedForward(dim, mult=4):
inner_dim = int(dim * mult)
return nn.Sequential(
nn.LayerNorm(dim),
nn.Linear(dim, inner_dim, bias=False),
nn.GELU(),
nn.Linear(inner_dim, dim, bias=False),
)
def reshape_tensor(x, heads):
bs, length, width = x.shape
# (bs, length, width) --> (bs, length, n_heads, dim_per_head)
x = x.view(bs, length, heads, -1)
# (bs, length, n_heads, dim_per_head) --> (bs, n_heads, length, dim_per_head)
x = x.transpose(1, 2)
# (bs, n_heads, length, dim_per_head) --> (bs*n_heads, length, dim_per_head)
x = x.reshape(bs, heads, length, -1)
return x
class PerceiverAttention(nn.Module):
def __init__(self, *, dim, dim_head=64, heads=8):
super().__init__()
self.scale = dim_head**-0.5
self.dim_head = dim_head
self.heads = heads
inner_dim = dim_head * heads
self.norm1 = nn.LayerNorm(dim)
self.norm2 = nn.LayerNorm(dim)
self.to_q = nn.Linear(dim, inner_dim, bias=False)
self.to_kv = nn.Linear(dim, inner_dim * 2, bias=False)
self.to_out = nn.Linear(inner_dim, dim, bias=False)
def forward(self, x, latents):
"""
Args:
x (torch.Tensor): image features
shape (b, n1, D)
latents (torch.Tensor): latent features
shape (b, n2, D)
"""
x = self.norm1(x)
latents = self.norm2(latents)
b, l, _ = latents.shape
q = self.to_q(latents)
kv_input = torch.cat((x, latents), dim=-2)
k, v = self.to_kv(kv_input).chunk(2, dim=-1)
q = reshape_tensor(q, self.heads)
k = reshape_tensor(k, self.heads)
v = reshape_tensor(v, self.heads)
# attention
scale = 1 / math.sqrt(math.sqrt(self.dim_head))
weight = (q * scale) @ (k * scale).transpose(-2, -1) # More stable with f16 than dividing afterwards
weight = torch.softmax(weight.float(), dim=-1).type(weight.dtype)
out = weight @ v
out = out.permute(0, 2, 1, 3).reshape(b, l, -1)
return self.to_out(out)
class Resampler(nn.Module):
def __init__(
self,
dim=1024,
depth=8,
dim_head=64,
heads=16,
num_queries=8,
embedding_dim=768,
output_dim=1024,
ff_mult=4,
max_seq_len: int = 257, # CLIP tokens + CLS token
apply_pos_emb: bool = False,
num_latents_mean_pooled: int = 0, # number of latents derived from mean pooled representation of the sequence
):
super().__init__()
self.pos_emb = nn.Embedding(max_seq_len, embedding_dim) if apply_pos_emb else None
self.latents = nn.Parameter(torch.randn(1, num_queries, dim) / dim**0.5)
self.proj_in = nn.Linear(embedding_dim, dim)
self.proj_out = nn.Linear(dim, output_dim)
self.norm_out = nn.LayerNorm(output_dim)
self.to_latents_from_mean_pooled_seq = (
nn.Sequential(
nn.LayerNorm(dim),
nn.Linear(dim, dim * num_latents_mean_pooled),
Rearrange("b (n d) -> b n d", n=num_latents_mean_pooled),
)
if num_latents_mean_pooled > 0
else None
)
self.layers = nn.ModuleList([])
for _ in range(depth):
self.layers.append(
nn.ModuleList(
[
PerceiverAttention(dim=dim, dim_head=dim_head, heads=heads),
FeedForward(dim=dim, mult=ff_mult),
]
)
)
def forward(self, x):
if self.pos_emb is not None:
n, device = x.shape[1], x.device
pos_emb = self.pos_emb(torch.arange(n, device=device))
x = x + pos_emb
latents = self.latents.repeat(x.size(0), 1, 1)
x = self.proj_in(x)
if self.to_latents_from_mean_pooled_seq:
meanpooled_seq = masked_mean(x, dim=1, mask=torch.ones(x.shape[:2], device=x.device, dtype=torch.bool))
meanpooled_latents = self.to_latents_from_mean_pooled_seq(meanpooled_seq)
latents = torch.cat((meanpooled_latents, latents), dim=-2)
for attn, ff in self.layers:
latents = attn(x, latents) + latents
latents = ff(latents) + latents
latents = self.proj_out(latents)
return self.norm_out(latents)
def masked_mean(t, *, dim, mask=None):
if mask is None:
return t.mean(dim=dim)
denom = mask.sum(dim=dim, keepdim=True)
mask = rearrange(mask, "b n -> b n 1")
masked_t = t.masked_fill(~mask, 0.0)
return masked_t.sum(dim=dim) / denom.clamp(min=1e-5)
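# Usage sketch for FacePerceiverResampler, matching its instantiation in
# QFormerPerceiver (model_v2.py): ID tokens (b, num_tokens, dim) attend to
# CLIP patch tokens (b, seq_len, embedding_dim) and return (b, num_tokens, output_dim).
#
#   resampler = FacePerceiverResampler(dim=2048, depth=4, dim_head=128, heads=16,
#                                      embedding_dim=1024, output_dim=2048, ff_mult=4)
#   out = resampler(torch.randn(1, 2, 2048), torch.randn(1, 257, 1024))  # (1, 2, 2048)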
[tool.poetry]
name = "photomaker"
version = "0.1.0"
version = "0.2.0"
description = "PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding"
authors = ["Li, Zhen", "Cao, Mingdeng", "Wang, Xintao", "Qi, Zhongang", "Cheng, Ming-Ming", "Shan, Ying"]
license = "Apache-2.0"