Commit 9cfc6603 authored by dongchy920's avatar dongchy920

instruct first commit

Copyright 2023 Timothy Brooks, Aleksander Holynski, Alexei A. Efros
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Portions of code and models (such as pretrained checkpoints, which are fine-tuned starting from released Stable Diffusion checkpoints) are derived from the Stable Diffusion codebase (https://github.com/CompVis/stable-diffusion). Further restrictions may apply. Please consult the Stable Diffusion license `stable_diffusion/LICENSE`. Modified code is denoted as such in comments at the start of each file.
# Instruct-Pix2Pix
## Paper
- https://arxiv.org/abs/2211.09800
## Model Architecture
InstructPix2Pix is an instruction-tuned image-editing model extended and fine-tuned from Stable Diffusion. Its core task is to edit images according to natural-language instructions:
<div align=center>
<img src="./imgs/sd.png"/>
</div>
Here, the conditioning input is the image-editing instruction.
## Algorithm
InstructPix2Pix can edit an image given only an edit instruction (e.g., "turn the bicycle into a motorcycle"), whereas other methods such as SDEdit and Text2Live require a full description of the image. It builds on GPT-3, Stable Diffusion, Prompt-to-Prompt, and classifier-free guidance; Prompt-to-Prompt provides the principled basis for localized image edits and is one of the keys to data generation.
<div align=center>
<img src="./imgs/model.png"/>
</div>
The algorithm consists of two stages:
- 1. Generate a multimodal training dataset
The generation proceeds in two steps:
First, fine-tune GPT-3 to produce paired text edits: given an image caption (Input Caption), generate an instruction (Instruction) describing the change to make, together with the corresponding edited caption (Edited Caption) (left half of the diagram above).
The paper fine-tunes GPT-3 on a small hand-made dataset containing (1) captions before editing, (2) image-editing instructions, and (3) captions after editing. The authors sampled 700 input captions from the LAION-Aesthetics V2 6.5+ dataset and wrote the instructions and output captions by hand.
Second, use a text-to-image model (Stable Diffusion + Prompt-to-Prompt) to generate a pair of corresponding images from the two text prompts (the captions before and after editing).
- 2. Train a conditional diffusion model on the generated data
The image-generation model is trained on top of the Stable Diffusion framework.
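At inference time, the trained model applies classifier-free guidance over two conditions (the input image and the text instruction). The combination below mirrors `CFGDenoiser.forward` in `edit_app.py`/`edit_cli.py`, written here as a standalone arithmetic sketch over plain numbers rather than latent tensors:

```python
def dual_cfg(out_uncond, out_img_cond, out_cond, s_text, s_image):
    """Combine the three denoiser predictions: fully unconditional,
    image-conditioned only, and image+text-conditioned, using the
    text guidance scale s_text and image guidance scale s_image."""
    return (out_uncond
            + s_text * (out_cond - out_img_cond)
            + s_image * (out_img_cond - out_uncond))

# With both scales at 1, this reduces to the fully conditioned prediction.
```

Raising `s_text` pushes the output toward the instruction; raising `s_image` pushes it toward preserving the input image (the defaults in the demo are 7.5 and 1.5).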
## Environment Setup
### Docker (option 1)
Pull the Docker image from [光源](https://www.sourcefind.cn/#/service-list):
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.2-py3.10
```
Create a container and mount a directory for development:
```
docker run -it --name {name} --shm-size=1024G --device=/dev/kfd --device=/dev/dri/ --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v /opt/hyhal:/opt/hyhal:ro -v {}:{} {docker_image} /bin/bash
# 1. Replace {name} with a custom container name
# 2. Replace {docker_image} with the name of the image used to create the container
# 3. Use -v to mount a host path to the given path inside the container
pip install -r requirements.txt
```
### Dockerfile (option 2)
```
cd docker
docker build --no-cache -t instruct_pytorch:1.0 .
docker run -it --name {name} --shm-size=1024G --device=/dev/kfd --device=/dev/dri/ --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v /opt/hyhal:/opt/hyhal:ro -v {}:{} {docker_image} /bin/bash
pip install -r requirements.txt
```
### Anaconda (option 3)
conda is the recommended way to set up the environment on online compute nodes.
Create and activate a conda environment with Python 3.10:
```
conda create -n instruct python=3.10
conda activate instruct
```
The DCU-specific deep-learning libraries required by this project can be downloaded from the [光合](https://developer.hpccube.com/tool/) developer community.
```
DTK driver: dtk24.04.2
python: 3.10
pytorch: 2.1.0
torchvision: 0.16.0
```
Install the remaining dependencies:
```
pip install -r requirements.txt
```
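To sanity-check that the installed packages match the versions pinned above, a small checker like the following can help. This helper is not part of the repository; the package names and pinned versions are assumptions taken from the table above.

```python
# Hypothetical version checker for the pins listed in this README.
from importlib.metadata import version, PackageNotFoundError

PINNED = {"torch": "2.1.0", "torchvision": "0.16.0"}


def check_pins(pins):
    """Return {name: (installed, pinned, ok)} for each pinned package."""
    report = {}
    for name, pinned in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            installed = None
        # Compare only the release segment, so local builds such as
        # "2.1.0+dtk" still match the pin "2.1.0".
        ok = installed is not None and installed.split("+")[0].startswith(pinned)
        report[name] = (installed, pinned, ok)
    return report


if __name__ == "__main__":
    for name, (installed, pinned, ok) in check_pins(PINNED).items():
        print(f"{name}: installed={installed} pinned={pinned} {'OK' if ok else 'MISMATCH'}")
```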
## Dataset
## Training
## Inference
Download and extract the pretrained weight files:
instruct_pix2pix pretrained weights: [official download](http://instruct-pix2pix.eecs.berkeley.edu/instruct-pix2pix-00-22000.ckpt)
Fast download via SCNet: [SCNet download](http://113.200.138.88:18080/aimodels/findsource-dependency/instruct_pix2pix); save the ckpt file into the checkpoints folder.
Commands to reassemble and extract the split archive:
```
cat instruct-pix2pix-00-22000_a* > instruct-pix2pix-00-22000.tar.gz
tar -zxf instruct-pix2pix-00-22000.tar.gz
```
clip-vit-large-patch14 weights:
[Hugging Face download](https://huggingface.co/openai/clip-vit-large-patch14)
Fast download via SCNet: [SCNet download](http://113.200.138.88:18080/aimodels/findsource-dependency/clip-vit-large-patch14); save all downloaded files into the openai/clip-vit-large-patch14 folder.
```
# Generate the edited image from an input image plus an edit instruction
python edit_cli.py --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
```
## Results
The input image:
<div align=center>
<img src="imgs/example.jpg"/>
</div>
The edit instruction:
```
turn him into a cyborg
```
The generated image:
<div align=center>
<img src="imgs/output.jpg"/>
</div>
The edited image is saved to imgs/output.jpg.
## Accuracy
## Application Scenarios
### Algorithm Category
Multimodal
### Target Industries
AIGC, design, education
## Source Repository & Issue Reporting
[https://developer.sourcefind.cn/codes/dongchy920/instruct_pix2pix](https://developer.sourcefind.cn/codes/dongchy920/instruct_pix2pix)
## References
[https://github.com/timothybrooks/instruct-pix2pix](https://github.com/timothybrooks/instruct-pix2pix)
# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
# See more details in LICENSE.

model:
  base_learning_rate: 1.0e-04
  target: ldm.models.diffusion.ddpm_edit.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: edited
    cond_stage_key: edit
    # image_size: 64
    # image_size: 32
    image_size: 16
    channels: 4
    cond_stage_trainable: false   # Note: different from the one we trained before
    conditioning_key: hybrid
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: true
    load_ema: true

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 0 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 8
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 128
    num_workers: 1
    wrap: false
    validation:
      target: edit_dataset.EditDataset
      params:
        path: data/clip-filtered-dataset
        cache_dir: data/
        cache_name: data_10k
        split: val
        min_text_sim: 0.2
        min_image_sim: 0.75
        min_direction_sim: 0.2
        max_samples_per_prompt: 1
        min_resize_res: 512
        max_resize_res: 512
        crop_res: 512
        output_as_edit: False
        real_input: True
# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
# See more details in LICENSE.

model:
  base_learning_rate: 1.0e-04
  target: ldm.models.diffusion.ddpm_edit.LatentDiffusion
  params:
    ckpt_path: stable_diffusion/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: edited
    cond_stage_key: edit
    image_size: 32
    channels: 4
    cond_stage_trainable: false   # Note: different from the one we trained before
    conditioning_key: hybrid
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: true
    load_ema: false

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 0 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 8
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 32
    num_workers: 2
    train:
      target: edit_dataset.EditDataset
      params:
        path: data/clip-filtered-dataset
        split: train
        min_resize_res: 256
        max_resize_res: 256
        crop_res: 256
        flip_prob: 0.5
    validation:
      target: edit_dataset.EditDataset
      params:
        path: data/clip-filtered-dataset
        split: val
        min_resize_res: 256
        max_resize_res: 256
        crop_res: 256

lightning:
  callbacks:
    image_logger:
      target: main.ImageLogger
      params:
        batch_frequency: 2000
        max_images: 2
        increase_log_steps: False

  trainer:
    max_epochs: 2000
    benchmark: True
    accumulate_grad_batches: 4
    check_val_every_n_epoch: 4
import argparse
import json
import sys
from pathlib import Path

import k_diffusion
import numpy as np
import torch
import torch.nn as nn
from einops import rearrange, repeat
from omegaconf import OmegaConf
from PIL import Image
from pytorch_lightning import seed_everything
from tqdm import tqdm

sys.path.append("./")
sys.path.append("./stable_diffusion")

from ldm.modules.attention import CrossAttention
from ldm.util import instantiate_from_config
from metrics.clip_similarity import ClipSimilarity


################################################################################
# Modified K-diffusion Euler ancestral sampler with prompt-to-prompt.
# https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py


def append_dims(x, target_dims):
    """Appends dimensions to the end of a tensor until it has target_dims dimensions."""
    dims_to_append = target_dims - x.ndim
    if dims_to_append < 0:
        raise ValueError(f"input has {x.ndim} dims but target_dims is {target_dims}, which is less")
    return x[(...,) + (None,) * dims_to_append]


def to_d(x, sigma, denoised):
    """Converts a denoiser output to a Karras ODE derivative."""
    return (x - denoised) / append_dims(sigma, x.ndim)


def get_ancestral_step(sigma_from, sigma_to):
    """Calculates the noise level (sigma_down) to step down to and the amount
    of noise to add (sigma_up) when doing an ancestral sampling step."""
    sigma_up = min(sigma_to, (sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2) ** 0.5)
    sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5
    return sigma_down, sigma_up


def sample_euler_ancestral(model, x, sigmas, prompt2prompt_threshold=0.0, **extra_args):
    """Ancestral sampling with Euler method steps."""
    s_in = x.new_ones([x.shape[0]])
    for i in range(len(sigmas) - 1):
        prompt_to_prompt = prompt2prompt_threshold > i / (len(sigmas) - 2)
        for m in model.modules():
            if isinstance(m, CrossAttention):
                m.prompt_to_prompt = prompt_to_prompt
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
        d = to_d(x, sigmas[i], denoised)
        # Euler method
        dt = sigma_down - sigmas[i]
        x = x + d * dt
        if sigmas[i + 1] > 0:
            # Make noise the same across all samples in batch.
            x = x + torch.randn_like(x[:1]) * sigma_up
    return x


################################################################################


def load_model_from_config(config, ckpt, vae_ckpt=None, verbose=False):
    print(f"Loading model from {ckpt}")
    pl_sd = torch.load(ckpt, map_location="cpu")
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    sd = pl_sd["state_dict"]
    if vae_ckpt is not None:
        print(f"Loading VAE from {vae_ckpt}")
        vae_sd = torch.load(vae_ckpt, map_location="cpu")["state_dict"]
        sd = {
            k: vae_sd[k[len("first_stage_model.") :]] if k.startswith("first_stage_model.") else v
            for k, v in sd.items()
        }
    model = instantiate_from_config(config.model)
    m, u = model.load_state_dict(sd, strict=False)
    if len(m) > 0 and verbose:
        print("missing keys:")
        print(m)
    if len(u) > 0 and verbose:
        print("unexpected keys:")
        print(u)
    return model


class CFGDenoiser(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.inner_model = model

    def forward(self, x, sigma, uncond, cond, cfg_scale):
        x_in = torch.cat([x] * 2)
        sigma_in = torch.cat([sigma] * 2)
        cond_in = torch.cat([uncond, cond])
        uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
        return uncond + (cond - uncond) * cfg_scale


def to_pil(image: torch.Tensor) -> Image.Image:
    image = 255.0 * rearrange(image.cpu().numpy(), "c h w -> h w c")
    image = Image.fromarray(image.astype(np.uint8))
    return image


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--out_dir",
        type=str,
        required=True,
        help="Path to output dataset directory.",
    )
    parser.add_argument(
        "--prompts_file",
        type=str,
        required=True,
        help="Path to prompts .jsonl file.",
    )
    parser.add_argument(
        "--ckpt",
        type=str,
        default="stable_diffusion/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt",
        help="Path to stable diffusion checkpoint.",
    )
    parser.add_argument(
        "--vae-ckpt",
        type=str,
        default="stable_diffusion/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt",
        help="Path to vae checkpoint.",
    )
    parser.add_argument(
        "--steps",
        type=int,
        default=100,
        help="Number of sampling steps.",
    )
    parser.add_argument(
        "--n-samples",
        type=int,
        default=100,
        help="Number of samples to generate per prompt (before CLIP filtering).",
    )
    parser.add_argument(
        "--max-out-samples",
        type=int,
        default=4,
        help="Max number of output samples to save per prompt (after CLIP filtering).",
    )
    parser.add_argument(
        "--n-partitions",
        type=int,
        default=1,
        help="Number of total partitions.",
    )
    parser.add_argument(
        "--partition",
        type=int,
        default=0,
        help="Partition index.",
    )
    parser.add_argument(
        "--min-p2p",
        type=float,
        default=0.1,
        help="Min prompt2prompt threshold (portion of denoising for which to fix self attention maps).",
    )
    parser.add_argument(
        "--max-p2p",
        type=float,
        default=0.9,
        help="Max prompt2prompt threshold (portion of denoising for which to fix self attention maps).",
    )
    parser.add_argument(
        "--min-cfg",
        type=float,
        default=7.5,
        help="Min classifier free guidance scale.",
    )
    parser.add_argument(
        "--max-cfg",
        type=float,
        default=15,
        help="Max classifier free guidance scale.",
    )
    parser.add_argument(
        "--clip-threshold",
        type=float,
        default=0.2,
        help="CLIP threshold for text-image similarity of each image.",
    )
    parser.add_argument(
        "--clip-dir-threshold",
        type=float,
        default=0.2,
        help="Directional CLIP threshold for similarity of change between pairs of text and pairs of images.",
    )
    parser.add_argument(
        "--clip-img-threshold",
        type=float,
        default=0.7,
        help="CLIP threshold for image-image similarity.",
    )
    opt = parser.parse_args()

    global_seed = torch.randint(1 << 32, ()).item()
    print(f"Global seed: {global_seed}")
    seed_everything(global_seed)

    model = load_model_from_config(
        OmegaConf.load("stable_diffusion/configs/stable-diffusion/v1-inference.yaml"),
        ckpt=opt.ckpt,
        vae_ckpt=opt.vae_ckpt,
    )
    model.cuda().eval()
    model_wrap = k_diffusion.external.CompVisDenoiser(model)

    clip_similarity = ClipSimilarity().cuda()

    out_dir = Path(opt.out_dir)
    out_dir.mkdir(exist_ok=True, parents=True)

    with open(opt.prompts_file) as fp:
        prompts = [json.loads(line) for line in fp]

    print(f"Partition index {opt.partition} ({opt.partition + 1} / {opt.n_partitions})")
    prompts = np.array_split(list(enumerate(prompts)), opt.n_partitions)[opt.partition]

    with torch.no_grad(), torch.autocast("cuda"), model.ema_scope():
        uncond = model.get_learned_conditioning(2 * [""])
        sigmas = model_wrap.get_sigmas(opt.steps)

        for i, prompt in tqdm(prompts, desc="Prompts"):
            prompt_dir = out_dir.joinpath(f"{i:07d}")
            prompt_dir.mkdir(exist_ok=True)

            with open(prompt_dir.joinpath("prompt.json"), "w") as fp:
                json.dump(prompt, fp)

            cond = model.get_learned_conditioning([prompt["caption"], prompt["output"]])
            results = {}

            with tqdm(total=opt.n_samples, desc="Samples") as progress_bar:
                while len(results) < opt.n_samples:
                    seed = torch.randint(1 << 32, ()).item()
                    if seed in results:
                        continue
                    torch.manual_seed(seed)

                    x = torch.randn(1, 4, 512 // 8, 512 // 8, device="cuda") * sigmas[0]
                    x = repeat(x, "1 ... -> n ...", n=2)

                    model_wrap_cfg = CFGDenoiser(model_wrap)
                    p2p_threshold = opt.min_p2p + torch.rand(()).item() * (opt.max_p2p - opt.min_p2p)
                    cfg_scale = opt.min_cfg + torch.rand(()).item() * (opt.max_cfg - opt.min_cfg)
                    extra_args = {"cond": cond, "uncond": uncond, "cfg_scale": cfg_scale}
                    samples_ddim = sample_euler_ancestral(model_wrap_cfg, x, sigmas, p2p_threshold, **extra_args)
                    x_samples_ddim = model.decode_first_stage(samples_ddim)
                    x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)

                    x0 = x_samples_ddim[0]
                    x1 = x_samples_ddim[1]

                    clip_sim_0, clip_sim_1, clip_sim_dir, clip_sim_image = clip_similarity(
                        x0[None], x1[None], [prompt["caption"]], [prompt["output"]]
                    )

                    results[seed] = dict(
                        image_0=to_pil(x0),
                        image_1=to_pil(x1),
                        p2p_threshold=p2p_threshold,
                        cfg_scale=cfg_scale,
                        clip_sim_0=clip_sim_0[0].item(),
                        clip_sim_1=clip_sim_1[0].item(),
                        clip_sim_dir=clip_sim_dir[0].item(),
                        clip_sim_image=clip_sim_image[0].item(),
                    )

                    progress_bar.update()

            # CLIP filter to get best samples for each prompt.
            metadata = [
                (result["clip_sim_dir"], seed)
                for seed, result in results.items()
                if result["clip_sim_image"] >= opt.clip_img_threshold
                and result["clip_sim_dir"] >= opt.clip_dir_threshold
                and result["clip_sim_0"] >= opt.clip_threshold
                and result["clip_sim_1"] >= opt.clip_threshold
            ]
            metadata.sort(reverse=True)
            for _, seed in metadata[: opt.max_out_samples]:
                result = results[seed]
                image_0 = result.pop("image_0")
                image_1 = result.pop("image_1")
                image_0.save(prompt_dir.joinpath(f"{seed}_0.jpg"), quality=100)
                image_1.save(prompt_dir.joinpath(f"{seed}_1.jpg"), quality=100)
                with open(prompt_dir.joinpath("metadata.jsonl"), "a") as fp:
                    fp.write(f"{json.dumps(dict(seed=seed, **result))}\n")

    print("Done.")


if __name__ == "__main__":
    main()
from __future__ import annotations

import json
import time
from argparse import ArgumentParser
from pathlib import Path
from typing import Optional

import datasets
import numpy as np
import openai
from tqdm.auto import tqdm

DELIMITER_0 = "\n##\n"
DELIMITER_1 = "\n%%\n"
STOP = "\nEND"


def generate(
    openai_model: str,
    caption: str,
    num_retries: int = 3,
    max_tokens: int = 256,
    temperature: float = 0.7,
    top_p: float = 1.0,
    frequency_penalty: float = 0.1,
    presence_penalty: float = 0.0,
    sleep_on_error: float = 1.0,
) -> Optional[tuple[str, str]]:
    for _ in range(1 + num_retries):
        try:
            response = openai.Completion.create(
                model=openai_model,
                prompt=caption + DELIMITER_0,
                temperature=temperature,
                max_tokens=max_tokens,
                top_p=top_p,
                frequency_penalty=frequency_penalty,
                presence_penalty=presence_penalty,
                stop=[STOP],
            )
        except Exception as e:
            print(e)
            time.sleep(sleep_on_error)
            continue
        output = response["choices"][0]["text"].split(DELIMITER_1)
        if len(output) == 2:
            instruction, edited_caption = output
            results = openai.Moderation.create([instruction, edited_caption])["results"]
            if results[0]["flagged"] or results[1]["flagged"]:
                continue
            if caption.strip().strip(".!?").lower() != edited_caption.strip().strip(".!?").lower():
                return instruction, edited_caption


def main(openai_model: str, num_samples: int, num_partitions: int, partition: int, seed: int):
    dataset = datasets.load_dataset("ChristophSchuhmann/improved_aesthetics_6.5plus", split="train")
    # Other datasets we considered that may be worth trying:
    # dataset = datasets.load_dataset("ChristophSchuhmann/MS_COCO_2017_URL_TEXT", split="train")
    # dataset = datasets.load_dataset("laion/laion-coco", split="train")
    np.random.seed(seed)
    permutation = np.array_split(np.random.permutation(len(dataset)), num_partitions)[partition]
    dataset = dataset[permutation]
    captions = dataset["TEXT"]
    urls = dataset["URL"]
    output_path = f"data/dataset=laion-aesthetics-6.5_model={openai_model}_samples={num_samples}_partition={partition}.jsonl"  # fmt: skip
    print(f"Prompt file path: {output_path}")

    count = 0
    caption_set = set()
    url_set = set()

    if Path(output_path).exists():
        with open(output_path, "r") as f:
            for line in tqdm(f, desc="Resuming from existing prompts"):
                prompt = json.loads(line)
                if prompt["caption"] not in caption_set and prompt["url"] not in url_set:
                    caption_set.add(prompt["caption"])
                    url_set.add(prompt["url"])
                    count += 1

    with open(output_path, "a") as fp:
        with tqdm(total=num_samples - count, desc="Generating instructions and edited captions") as progress_bar:
            for caption, url in zip(captions, urls):
                if caption in caption_set or url in url_set:
                    continue
                if openai.Moderation.create(caption)["results"][0]["flagged"]:
                    continue
                edit_output = generate(openai_model, caption)
                if edit_output is not None:
                    edit, output = edit_output
                    fp.write(f"{json.dumps(dict(caption=caption, edit=edit, output=output, url=url))}\n")
                    count += 1
                    progress_bar.update()
                    caption_set.add(caption)
                    url_set.add(url)
                    if count == num_samples:
                        break


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--openai-api-key", required=True, type=str)
    parser.add_argument("--openai-model", required=True, type=str)
    parser.add_argument("--num-samples", default=10000, type=int)
    parser.add_argument("--num-partitions", default=1, type=int)
    parser.add_argument("--partition", default=0, type=int)
    parser.add_argument("--seed", default=0, type=int)
    args = parser.parse_args()
    openai.api_key = args.openai_api_key
    main(args.openai_model, args.num_samples, args.num_partitions, args.partition, args.seed)
import json
from argparse import ArgumentParser
from pathlib import Path

from tqdm.auto import tqdm


def main():
    parser = ArgumentParser()
    parser.add_argument("dataset_dir")
    args = parser.parse_args()
    dataset_dir = Path(args.dataset_dir)

    seeds = []
    with tqdm(desc="Listing dataset image seeds") as progress_bar:
        for prompt_dir in dataset_dir.iterdir():
            if prompt_dir.is_dir():
                prompt_seeds = [image_path.name.split("_")[0] for image_path in sorted(prompt_dir.glob("*_0.jpg"))]
                if len(prompt_seeds) > 0:
                    seeds.append((prompt_dir.name, prompt_seeds))
                    progress_bar.update()
    seeds.sort()

    with open(dataset_dir.joinpath("seeds.json"), "w") as f:
        json.dump(seeds, f)


if __name__ == "__main__":
    main()
import json
from argparse import ArgumentParser

from generate_txt_dataset import DELIMITER_0, DELIMITER_1, STOP


def main(input_path: str, output_path: str):
    with open(input_path) as f:
        prompts = [json.loads(l) for l in f]

    with open(output_path, "w") as f:
        for prompt in prompts:
            prompt_for_gpt = {
                "prompt": f"{prompt['input']}{DELIMITER_0}",
                "completion": f"{prompt['edit']}{DELIMITER_1}{prompt['output']}{STOP}",
            }
            f.write(f"{json.dumps(prompt_for_gpt)}\n")


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--input-path", required=True, type=str)
    parser.add_argument("--output-path", required=True, type=str)
    args = parser.parse_args()
    main(args.input_path, args.output_path)
FROM image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.2-py3.10
RUN source /opt/dtk/env.sh
from __future__ import annotations

import math
import random
import sys
from argparse import ArgumentParser

import einops
import gradio as gr
import k_diffusion as K
import numpy as np
import torch
import torch.nn as nn
from einops import rearrange
from omegaconf import OmegaConf
from PIL import Image, ImageOps
from torch import autocast

sys.path.append("./stable_diffusion")

from stable_diffusion.ldm.util import instantiate_from_config


help_text = """
If you're not getting what you want, there may be a few reasons:
1. Is the image not changing enough? Your Image CFG weight may be too high. This value dictates how similar the output should be to the input. It's possible your edit requires larger changes from the original image, and your Image CFG weight isn't allowing that. Alternatively, your Text CFG weight may be too low. This value dictates how much to listen to the text instruction. The default Image CFG of 1.5 and Text CFG of 7.5 are a good starting point, but aren't necessarily optimal for each edit. Try:
    * Decreasing the Image CFG weight, or
    * Increasing the Text CFG weight, or
2. Conversely, is the image changing too much, such that the details in the original image aren't preserved? Try:
    * Increasing the Image CFG weight, or
    * Decreasing the Text CFG weight
3. Try generating results with different random seeds by setting "Randomize Seed" and running generation multiple times. You can also try setting "Randomize CFG" to sample new Text CFG and Image CFG values each time.
4. Rephrasing the instruction sometimes improves results (e.g., "turn him into a dog" vs. "make him a dog" vs. "as a dog").
5. Increasing the number of steps sometimes improves results.
6. Do faces look weird? The Stable Diffusion autoencoder has a hard time with faces that are small in the image. Try:
    * Cropping the image so the face takes up a larger portion of the frame.
"""


example_instructions = [
    "Make it a picasso painting",
    "as if it were by modigliani",
    "convert to a bronze statue",
    "Turn it into an anime.",
    "have it look like a graphic novel",
    "make him gain weight",
    "what would he look like bald?",
    "Have him smile",
    "Put him in a cocktail party.",
    "move him at the beach.",
    "add dramatic lighting",
    "Convert to black and white",
    "What if it were snowing?",
    "Give him a leather jacket",
    "Turn him into a cyborg!",
    "make him wear a beanie",
]


class CFGDenoiser(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.inner_model = model

    def forward(self, z, sigma, cond, uncond, text_cfg_scale, image_cfg_scale):
        cfg_z = einops.repeat(z, "1 ... -> n ...", n=3)
        cfg_sigma = einops.repeat(sigma, "1 ... -> n ...", n=3)
        cfg_cond = {
            "c_crossattn": [torch.cat([cond["c_crossattn"][0], uncond["c_crossattn"][0], uncond["c_crossattn"][0]])],
            "c_concat": [torch.cat([cond["c_concat"][0], cond["c_concat"][0], uncond["c_concat"][0]])],
        }
        out_cond, out_img_cond, out_uncond = self.inner_model(cfg_z, cfg_sigma, cond=cfg_cond).chunk(3)
        return out_uncond + text_cfg_scale * (out_cond - out_img_cond) + image_cfg_scale * (out_img_cond - out_uncond)


def load_model_from_config(config, ckpt, vae_ckpt=None, verbose=False):
    print(f"Loading model from {ckpt}")
    pl_sd = torch.load(ckpt, map_location="cpu")
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    sd = pl_sd["state_dict"]
    if vae_ckpt is not None:
        print(f"Loading VAE from {vae_ckpt}")
        vae_sd = torch.load(vae_ckpt, map_location="cpu")["state_dict"]
        sd = {
            k: vae_sd[k[len("first_stage_model.") :]] if k.startswith("first_stage_model.") else v
            for k, v in sd.items()
        }
    model = instantiate_from_config(config.model)
    m, u = model.load_state_dict(sd, strict=False)
    if len(m) > 0 and verbose:
        print("missing keys:")
        print(m)
    if len(u) > 0 and verbose:
        print("unexpected keys:")
        print(u)
    return model


def main():
    parser = ArgumentParser()
    parser.add_argument("--resolution", default=512, type=int)
    parser.add_argument("--config", default="configs/generate.yaml", type=str)
    parser.add_argument("--ckpt", default="checkpoints/instruct-pix2pix-00-22000.ckpt", type=str)
    parser.add_argument("--vae-ckpt", default=None, type=str)
    args = parser.parse_args()

    config = OmegaConf.load(args.config)
    model = load_model_from_config(config, args.ckpt, args.vae_ckpt)
    model.eval().cuda()
    model_wrap = K.external.CompVisDenoiser(model)
    model_wrap_cfg = CFGDenoiser(model_wrap)
    null_token = model.get_learned_conditioning([""])
    example_image = Image.open("imgs/example.jpg").convert("RGB")

    def load_example(
        steps: int,
        randomize_seed: bool,
        seed: int,
        randomize_cfg: bool,
        text_cfg_scale: float,
        image_cfg_scale: float,
    ):
        example_instruction = random.choice(example_instructions)
        return [example_image, example_instruction] + generate(
            example_image,
            example_instruction,
            steps,
            randomize_seed,
            seed,
            randomize_cfg,
            text_cfg_scale,
            image_cfg_scale,
        )

    def generate(
        input_image: Image.Image,
        instruction: str,
        steps: int,
        randomize_seed: bool,
        seed: int,
        randomize_cfg: bool,
        text_cfg_scale: float,
        image_cfg_scale: float,
    ):
        seed = random.randint(0, 100000) if randomize_seed else seed
        text_cfg_scale = round(random.uniform(6.0, 9.0), ndigits=2) if randomize_cfg else text_cfg_scale
        image_cfg_scale = round(random.uniform(1.2, 1.8), ndigits=2) if randomize_cfg else image_cfg_scale

        width, height = input_image.size
        factor = args.resolution / max(width, height)
        factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
        width = int((width * factor) // 64) * 64
        height = int((height * factor) // 64) * 64
        input_image = ImageOps.fit(input_image, (width, height), method=Image.Resampling.LANCZOS)

        if instruction == "":
            return [input_image, seed]

        with torch.no_grad(), autocast("cuda"), model.ema_scope():
            cond = {}
            cond["c_crossattn"] = [model.get_learned_conditioning([instruction])]
            input_image = 2 * torch.tensor(np.array(input_image)).float() / 255 - 1
            input_image = rearrange(input_image, "h w c -> 1 c h w").to(model.device)
            cond["c_concat"] = [model.encode_first_stage(input_image).mode()]

            uncond = {}
            uncond["c_crossattn"] = [null_token]
            uncond["c_concat"] = [torch.zeros_like(cond["c_concat"][0])]

            sigmas = model_wrap.get_sigmas(steps)

            extra_args = {
                "cond": cond,
                "uncond": uncond,
                "text_cfg_scale": text_cfg_scale,
                "image_cfg_scale": image_cfg_scale,
            }
            torch.manual_seed(seed)
            z = torch.randn_like(cond["c_concat"][0]) * sigmas[0]
            z = K.sampling.sample_euler_ancestral(model_wrap_cfg, z, sigmas, extra_args=extra_args)
            x = model.decode_first_stage(z)
            x = torch.clamp((x + 1.0) / 2.0, min=0.0, max=1.0)
            x = 255.0 * rearrange(x, "1 c h w -> h w c")
            edited_image = Image.fromarray(x.type(torch.uint8).cpu().numpy())
            return [seed, text_cfg_scale, image_cfg_scale, edited_image]

    def reset():
        return [0, "Randomize Seed", 1371, "Fix CFG", 7.5, 1.5, None]

    with gr.Blocks(css="footer {visibility: hidden}") as demo:
        with gr.Row():
            with gr.Column(scale=1, min_width=100):
                generate_button = gr.Button("Generate")
            with gr.Column(scale=1, min_width=100):
                load_button = gr.Button("Load Example")
            with gr.Column(scale=1, min_width=100):
                reset_button = gr.Button("Reset")
            with gr.Column(scale=3):
                instruction = gr.Textbox(lines=1, label="Edit Instruction", interactive=True)

        with gr.Row():
            input_image = gr.Image(label="Input Image", type="pil", interactive=True)
            edited_image = gr.Image(label="Edited Image", type="pil", interactive=False)
            input_image.style(height=512, width=512)
            edited_image.style(height=512, width=512)

        with gr.Row():
            steps = gr.Number(value=100, precision=0, label="Steps", interactive=True)
            randomize_seed = gr.Radio(
                ["Fix Seed", "Randomize Seed"],
                value="Randomize Seed",
                type="index",
                show_label=False,
                interactive=True,
            )
            seed = gr.Number(value=1371, precision=0, label="Seed", interactive=True)
            randomize_cfg = gr.Radio(
                ["Fix CFG", "Randomize CFG"],
                value="Fix CFG",
                type="index",
                show_label=False,
                interactive=True,
            )
            text_cfg_scale = gr.Number(value=7.5, label="Text CFG", interactive=True)
            image_cfg_scale = gr.Number(value=1.5, label="Image CFG", interactive=True)

        gr.Markdown(help_text)

        load_button.click(
            fn=load_example,
            inputs=[
                steps,
                randomize_seed,
                seed,
                randomize_cfg,
                text_cfg_scale,
                image_cfg_scale,
            ],
            outputs=[input_image, instruction, seed, text_cfg_scale, image_cfg_scale, edited_image],
        )
        generate_button.click(
            fn=generate,
            inputs=[
                input_image,
                instruction,
                steps,
                randomize_seed,
                seed,
                randomize_cfg,
                text_cfg_scale,
                image_cfg_scale,
            ],
            outputs=[seed, text_cfg_scale, image_cfg_scale, edited_image],
        )
        reset_button.click(
            fn=reset,
            inputs=[],
            outputs=[steps, randomize_seed, seed, randomize_cfg, text_cfg_scale, image_cfg_scale, edited_image],
        )

    demo.queue(concurrency_count=1)
    demo.launch(share=True)


if __name__ == "__main__":
    main()
from __future__ import annotations
import math
import random
import sys
from argparse import ArgumentParser
import einops
import k_diffusion as K
import numpy as np
import torch
import torch.nn as nn
from einops import rearrange
from omegaconf import OmegaConf
from PIL import Image, ImageOps
from torch import autocast
sys.path.append("./stable_diffusion")
from stable_diffusion.ldm.util import instantiate_from_config
class CFGDenoiser(nn.Module):
def __init__(self, model):
super().__init__()
self.inner_model = model
def forward(self, z, sigma, cond, uncond, text_cfg_scale, image_cfg_scale):
cfg_z = einops.repeat(z, "1 ... -> n ...", n=3)
cfg_sigma = einops.repeat(sigma, "1 ... -> n ...", n=3)
cfg_cond = {
"c_crossattn": [torch.cat([cond["c_crossattn"][0], uncond["c_crossattn"][0], uncond["c_crossattn"][0]])],
"c_concat": [torch.cat([cond["c_concat"][0], cond["c_concat"][0], uncond["c_concat"][0]])],
}
out_cond, out_img_cond, out_uncond = self.inner_model(cfg_z, cfg_sigma, cond=cfg_cond).chunk(3)
return out_uncond + text_cfg_scale * (out_cond - out_img_cond) + image_cfg_scale * (out_img_cond - out_uncond)
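The three-way batch and `chunk(3)` above implement the two-scale classifier-free guidance from the InstructPix2Pix paper: one prediction conditioned on both image and text, one on the image only, and one fully unconditional, combined as

```latex
\tilde{e}_\theta(z_t, c_I, c_T) = e_\theta(z_t, \varnothing, \varnothing)
  + s_I \cdot \left( e_\theta(z_t, c_I, \varnothing) - e_\theta(z_t, \varnothing, \varnothing) \right)
  + s_T \cdot \left( e_\theta(z_t, c_I, c_T) - e_\theta(z_t, c_I, \varnothing) \right)
```

where `out_cond`, `out_img_cond`, and `out_uncond` are $e_\theta(z_t, c_I, c_T)$, $e_\theta(z_t, c_I, \varnothing)$, and $e_\theta(z_t, \varnothing, \varnothing)$ respectively, $s_T$ is `text_cfg_scale`, and $s_I$ is `image_cfg_scale`.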
def load_model_from_config(config, ckpt, vae_ckpt=None, verbose=False):
print(f"Loading model from {ckpt}")
pl_sd = torch.load(ckpt, map_location="cpu")
if "global_step" in pl_sd:
print(f"Global Step: {pl_sd['global_step']}")
sd = pl_sd["state_dict"]
if vae_ckpt is not None:
print(f"Loading VAE from {vae_ckpt}")
vae_sd = torch.load(vae_ckpt, map_location="cpu")["state_dict"]
sd = {
k: vae_sd[k[len("first_stage_model.") :]] if k.startswith("first_stage_model.") else v
for k, v in sd.items()
}
model = instantiate_from_config(config.model)
m, u = model.load_state_dict(sd, strict=False)
if len(m) > 0 and verbose:
print("missing keys:")
print(m)
if len(u) > 0 and verbose:
print("unexpected keys:")
print(u)
return model
def main():
parser = ArgumentParser()
parser.add_argument("--resolution", default=512, type=int)
parser.add_argument("--steps", default=100, type=int)
parser.add_argument("--config", default="configs/generate.yaml", type=str)
parser.add_argument("--ckpt", default="checkpoints/instruct-pix2pix-00-22000.ckpt", type=str)
parser.add_argument("--vae-ckpt", default=None, type=str)
parser.add_argument("--input", required=True, type=str)
parser.add_argument("--output", required=True, type=str)
parser.add_argument("--edit", required=True, type=str)
parser.add_argument("--cfg-text", default=7.5, type=float)
parser.add_argument("--cfg-image", default=1.5, type=float)
parser.add_argument("--seed", type=int)
args = parser.parse_args()
config = OmegaConf.load(args.config)
model = load_model_from_config(config, args.ckpt, args.vae_ckpt)
model.eval().cuda()
model_wrap = K.external.CompVisDenoiser(model)
model_wrap_cfg = CFGDenoiser(model_wrap)
null_token = model.get_learned_conditioning([""])
seed = random.randint(0, 100000) if args.seed is None else args.seed
input_image = Image.open(args.input).convert("RGB")
width, height = input_image.size
factor = args.resolution / max(width, height)
factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
width = int((width * factor) // 64) * 64
height = int((height * factor) // 64) * 64
input_image = ImageOps.fit(input_image, (width, height), method=Image.Resampling.LANCZOS)
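The resizing arithmetic above scales the image so its longer side is near `--resolution` while forcing both final dimensions to multiples of 64 (the U-Net's spatial requirement after the 8x VAE downsample). A standalone sketch of that arithmetic, using a hypothetical helper name `fit_dims`:

```python
import math

def fit_dims(width: int, height: int, resolution: int = 512) -> tuple[int, int]:
    # Scale so the longer side is at most `resolution`.
    factor = resolution / max(width, height)
    # Nudge the factor so the shorter side rounds up to a multiple of 64.
    factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
    # Floor both sides to multiples of 64.
    return int((width * factor) // 64) * 64, int((height * factor) // 64) * 64

print(fit_dims(640, 480))   # -> (512, 384)
print(fit_dims(1000, 700))  # -> (512, 384)
```

Note the aspect ratio is only approximately preserved; `ImageOps.fit` then crops to the rounded dimensions.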
if args.edit == "":
input_image.save(args.output)
return
with torch.no_grad(), autocast("cuda"), model.ema_scope():
cond = {}
cond["c_crossattn"] = [model.get_learned_conditioning([args.edit])]
input_image = 2 * torch.tensor(np.array(input_image)).float() / 255 - 1
input_image = rearrange(input_image, "h w c -> 1 c h w").to(model.device)
cond["c_concat"] = [model.encode_first_stage(input_image).mode()]
uncond = {}
uncond["c_crossattn"] = [null_token]
uncond["c_concat"] = [torch.zeros_like(cond["c_concat"][0])]
sigmas = model_wrap.get_sigmas(args.steps)
extra_args = {
"cond": cond,
"uncond": uncond,
"text_cfg_scale": args.cfg_text,
"image_cfg_scale": args.cfg_image,
}
torch.manual_seed(seed)
z = torch.randn_like(cond["c_concat"][0]) * sigmas[0]
z = K.sampling.sample_euler_ancestral(model_wrap_cfg, z, sigmas, extra_args=extra_args)
x = model.decode_first_stage(z)
x = torch.clamp((x + 1.0) / 2.0, min=0.0, max=1.0)
x = 255.0 * rearrange(x, "1 c h w -> h w c")
edited_image = Image.fromarray(x.type(torch.uint8).cpu().numpy())
edited_image.save(args.output)
if __name__ == "__main__":
main()
from __future__ import annotations
import json
import math
from pathlib import Path
from typing import Any
import numpy as np
import torch
import torchvision
from einops import rearrange
from PIL import Image
from torch.utils.data import Dataset
class EditDataset(Dataset):
def __init__(
self,
path: str,
split: str = "train",
splits: tuple[float, float, float] = (0.9, 0.05, 0.05),
min_resize_res: int = 256,
max_resize_res: int = 256,
crop_res: int = 256,
flip_prob: float = 0.0,
):
assert split in ("train", "val", "test")
        assert math.isclose(sum(splits), 1)
self.path = path
self.min_resize_res = min_resize_res
self.max_resize_res = max_resize_res
self.crop_res = crop_res
self.flip_prob = flip_prob
with open(Path(self.path, "seeds.json")) as f:
self.seeds = json.load(f)
split_0, split_1 = {
"train": (0.0, splits[0]),
"val": (splits[0], splits[0] + splits[1]),
"test": (splits[0] + splits[1], 1.0),
}[split]
idx_0 = math.floor(split_0 * len(self.seeds))
idx_1 = math.floor(split_1 * len(self.seeds))
self.seeds = self.seeds[idx_0:idx_1]
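The split bookkeeping above can be checked in isolation: with the default `(0.9, 0.05, 0.05)` split, the floor-based indices partition the seed list without gaps or overlap. A sketch with a hypothetical helper `split_range`:

```python
import math

def split_range(n: int, splits=(0.9, 0.05, 0.05)) -> dict[str, tuple[int, int]]:
    # Fractional boundaries per split, mirroring the dataset's lookup table.
    bounds = {
        "train": (0.0, splits[0]),
        "val": (splits[0], splits[0] + splits[1]),
        "test": (splits[0] + splits[1], 1.0),
    }
    # Convert fractions to integer slice indices with math.floor.
    return {k: (math.floor(a * n), math.floor(b * n)) for k, (a, b) in bounds.items()}

print(split_range(100))  # train (0, 90), val (90, 95), test (95, 100)
```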
def __len__(self) -> int:
return len(self.seeds)
def __getitem__(self, i: int) -> dict[str, Any]:
        name, seeds = self.seeds[i]
        prompt_dir = Path(self.path, name)
        seed = seeds[torch.randint(0, len(seeds), ()).item()]
        with open(prompt_dir.joinpath("prompt.json")) as fp:
            prompt = json.load(fp)["edit"]
        image_0 = Image.open(prompt_dir.joinpath(f"{seed}_0.jpg"))
        image_1 = Image.open(prompt_dir.joinpath(f"{seed}_1.jpg"))
        resize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item()
        image_0 = image_0.resize((resize_res, resize_res), Image.Resampling.LANCZOS)
        image_1 = image_1.resize((resize_res, resize_res), Image.Resampling.LANCZOS)
image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w")
image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w")
crop = torchvision.transforms.RandomCrop(self.crop_res)
flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob))
image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2)
return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt))
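Both images above are mapped from `[0, 255]` uint8 pixels to `[-1, 1]` floats, the input range the diffusion model's VAE expects. A minimal sketch of that normalization (the function name is illustrative):

```python
def to_model_range(values: list[float]) -> list[float]:
    # Map pixel intensities in [0, 255] linearly onto [-1, 1].
    return [2 * v / 255 - 1 for v in values]

print(to_model_range([0, 127.5, 255]))  # [-1.0, 0.0, 1.0]
```

The inverse mapping, `(x + 1) / 2` followed by scaling to 255, appears in the sampling scripts when decoded latents are converted back to an image.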
class EditDatasetEval(Dataset):
def __init__(
self,
path: str,
split: str = "train",
splits: tuple[float, float, float] = (0.9, 0.05, 0.05),
res: int = 256,
):
assert split in ("train", "val", "test")
        assert math.isclose(sum(splits), 1)
self.path = path
self.res = res
with open(Path(self.path, "seeds.json")) as f:
self.seeds = json.load(f)
split_0, split_1 = {
"train": (0.0, splits[0]),
"val": (splits[0], splits[0] + splits[1]),
"test": (splits[0] + splits[1], 1.0),
}[split]
idx_0 = math.floor(split_0 * len(self.seeds))
idx_1 = math.floor(split_1 * len(self.seeds))
self.seeds = self.seeds[idx_0:idx_1]
def __len__(self) -> int:
return len(self.seeds)
def __getitem__(self, i: int) -> dict[str, Any]:
        name, seeds = self.seeds[i]
        prompt_dir = Path(self.path, name)
        seed = seeds[torch.randint(0, len(seeds), ()).item()]
        with open(prompt_dir.joinpath("prompt.json")) as fp:
            prompt = json.load(fp)
            edit = prompt["edit"]
            input_prompt = prompt["input"]
            output_prompt = prompt["output"]
        image_0 = Image.open(prompt_dir.joinpath(f"{seed}_0.jpg"))
        image_0 = image_0.resize((self.res, self.res), Image.Resampling.LANCZOS)
image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w")
return dict(image_0=image_0, input_prompt=input_prompt, edit=edit, output_prompt=output_prompt)
# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
# See more details in LICENSE.
name: ip2p
channels:
- pytorch
- defaults
dependencies:
- python=3.8.5
- pip=20.3
- cudatoolkit=11.3
- pytorch=1.11.0
- torchvision=0.12.0
- numpy=1.19.2
- pip:
- albumentations==0.4.3
- datasets==2.8.0
- diffusers
- opencv-python==4.1.2.30
- pudb==2019.2
- invisible-watermark
- imageio==2.9.0
- imageio-ffmpeg==0.4.2
- pytorch-lightning==1.4.2
- omegaconf==2.1.1
- test-tube>=0.7.5
- streamlit>=0.73.1
- einops==0.3.0
- torch-fidelity==0.3.0
- transformers==4.19.2
- torchmetrics==0.6.0
- kornia==0.6
- -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
- -e git+https://github.com/openai/CLIP.git@main#egg=clip
- openai
- gradio
- seaborn
- git+https://github.com/crowsonkb/k-diffusion.git