
News | Quick Start | Usage Tips | Online Demos | Citation | License

**OmniGen2** is a powerful and efficient unified multimodal model. Its architecture is composed of two key components: a 3B Vision-Language Model (VLM) and a 4B diffusion model. In this design, the frozen 3B VLM ([Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct)) interprets both visual signals and user instructions, while the 4B diffusion model leverages this understanding to perform high-quality image generation. This dual-component architecture enables strong performance across four primary capabilities:

- **Visual Understanding**: Inherits the robust ability to interpret and analyze image content from its Qwen2.5-VL foundation.
- **Text-to-Image Generation**: Creates high-fidelity and aesthetically pleasing images from textual prompts.
- **Instruction-guided Image Editing**: Executes complex, instruction-based image modifications with high precision, achieving state-of-the-art performance among open-source models.
- **In-context Generation**: A versatile capability to process and flexibly combine diverse inputs, including humans, reference objects, and scenes, to produce novel and coherent visual outputs.

As an open-source project, OmniGen2 provides a powerful yet resource-efficient foundation for researchers and developers exploring the frontiers of controllable and personalized generative AI. **We will release the training code, dataset, and data construction pipeline soon. Stay tuned!**


Demonstration of OmniGen2's overall capabilities.


Demonstration of OmniGen2's image editing capabilities.


Demonstration of OmniGen2's in-context generation capabilities.

## 🔥 News

- **2025-07-05**: Training datasets [X2I2](https://huggingface.co/datasets/OmniGen2/X2I2) are available.
- **2025-07-03**: OmniGen2 now supports [TeaCache](https://github.com/ali-vilab/TeaCache) and [TaylorSeer](https://github.com/Shenyi-Z/TaylorSeer) for faster inference; see [Usage Tips](#-usage-tips) for details. Thanks @legitnull for the great [TeaCache PR](https://github.com/VectorSpaceLab/OmniGen2/pull/52) and [TaylorSeer PR](https://github.com/VectorSpaceLab/OmniGen2/pull/76).
- **2025-07-01**: OmniGen2 is supported by [ComfyUI official](https://comfyanonymous.github.io/ComfyUI_examples/omnigen), thanks!
- **2025-06-30**: Training code is available; see [fine-tuning](docs/FINETUNE.md) for details.
- **2025-06-28**: We release the [OmniContext](https://huggingface.co/datasets/OmniGen2/OmniContext) benchmark. The evaluation code is in [omnicontext](https://github.com/VectorSpaceLab/OmniGen2/tree/main/omnicontext).
- **2025-06-24**: The [Technical Report](https://arxiv.org/abs/2506.18871) is available.
- **2025-06-23**: We've updated our code and HF model: OmniGen2 now runs *without* `flash-attn`. Users can still install it for optimal performance.
- **2025-06-20**: Updated [resource requirements](#-resources-requirement), adding CPU offload support for devices with limited VRAM.
- **2025-06-16**: [Gradio](https://github.com/VectorSpaceLab/OmniGen2?tab=readme-ov-file#-gradio-demo) and [Jupyter](https://github.com/VectorSpaceLab/OmniGen2/blob/main/example.ipynb) demos are available. Online Gradio demos: [Demo1](https://9c4426d27c3b9ecbed.gradio.live); [Chat-Demo1](https://0351497834a4d7226c.gradio.live); see more demo links in the [gradio section](https://github.com/VectorSpaceLab/OmniGen2?tab=readme-ov-file#-gradio-demo).
- **2025-06-16**: We release **OmniGen2**, a multimodal generation model; model weights can be accessed on [Hugging Face](https://huggingface.co/OmniGen2/OmniGen2) and [ModelScope](https://www.modelscope.cn/models/OmniGen2/OmniGen2).
## 📌 TODO

- [x] Technical report.
- [x] Support CPU offload and improve inference efficiency.
- [x] In-context generation benchmark: **OmniContext**.
- [ ] Integration of diffusers.
- [x] Training datasets.
- [ ] Training data construction pipeline.
- [ ] ComfyUI Demo (**community support will be greatly appreciated!**).

## 🚀 Quick Start

### 🛠️ Environment Setup

#### ✅ Recommended Setup

```bash
# 1. Clone the repo
git clone git@github.com:VectorSpaceLab/OmniGen2.git
cd OmniGen2

# 2. (Optional) Create a clean Python environment
conda create -n omnigen2 python=3.11
conda activate omnigen2

# 3. Install dependencies
# 3.1 Install PyTorch (choose the correct CUDA version)
pip install torch==2.6.0 torchvision --extra-index-url https://download.pytorch.org/whl/cu124

# 3.2 Install other required packages
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
```

#### 🌏 For users in Mainland China

```bash
# Install PyTorch from a domestic mirror
pip install torch==2.6.0 torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu124

# Install other dependencies from the Tsinghua mirror
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install flash-attn --no-build-isolation -i https://pypi.tuna.tsinghua.edu.cn/simple
```

---

### 🧪 Run Examples

```bash
# Visual understanding
bash example_understanding.sh

# Text-to-image generation
bash example_t2i.sh

# Instruction-guided image editing
bash example_edit.sh

# Subject-driven image editing
bash example_subject_driven_edit.sh
```

---

### 🌐 Gradio Demo

* **Online Demo**: We are temporarily providing 8 GPUs to support the online demos.
  If you notice a long queue for a particular link, please try the other links:
  [Demo1](https://be5916033313307354.gradio.live), [Demo2](https://281efc44b736406f42.gradio.live), [Demo3](https://a27912fbaef54294f8.gradio.live), [Demo4](https://bbf305e391bc769d22.gradio.live);
  [Chat-Demo1](https://a79e0445bb498554e8.gradio.live), [Chat-Demo2](https://7f922fdca66e47c427.gradio.live), [Chat-Demo3](https://6568f4b2a8353be3ae.gradio.live), [Chat-Demo4](https://f38c30ed99f0f6caab.gradio.live)
* **Run Locally**:

```bash
pip install gradio
python app.py
# Optional: Share the demo with a public link (you need to be able to access huggingface)
python app.py --share
```

## 💡 Usage Tips

To achieve optimal results with OmniGen2, you can adjust the following key hyperparameters based on your specific use case.

- `num_inference_step`: The number of sampling steps per generation. Higher values generally improve quality but increase generation time.
  - Recommended range: 28 to 50
- `text_guidance_scale`: Controls how strictly the output adheres to the text prompt (classifier-free guidance).
  - **For text-to-image**: Use a higher value (e.g., 6-7) for simple or less detailed prompts, and a lower value (e.g., 4) for complex and highly detailed prompts.
  - **For editing/composition**: A moderate value around 4-5 is recommended.
- `image_guidance_scale`: Controls how closely the final image should resemble the input reference image.
  - **The trade-off**: A higher value (~2.0) makes the output more faithful to the reference image's structure and style, but it might ignore parts of your text prompt. A lower value (~1.5) gives the text prompt more influence.
  - **Tip**: Start with 1.5 and increase it if you need more consistency with the reference image. For image editing tasks, we recommend setting it between 1.3 and 2.0; for in-context generation tasks, a higher `image_guidance_scale` preserves more detail from the input images, and we recommend setting it between 2.5 and 3.0.
- `max_pixels`: Automatically resizes an image when its total pixel count (width × height) exceeds this limit, while maintaining its aspect ratio. This helps manage performance and memory usage.
- `max_input_image_side_length`: The maximum side length for input images.
- `negative_prompt`: Tells the model what you don't want to see in the image.
  - **Example**: blurry, low quality, text, watermark
  - **Tip**: For the best results, try experimenting with different negative prompts. If you're not sure, just leave it blank.

## ❤️ Citing Us

If you find this repository or our work useful, please consider giving it a star ⭐ and a citation 🦖, which would be greatly appreciated (the OmniGen2 report will be available as soon as possible):

```bibtex
@article{xiao2024omnigen,
  title={OmniGen: Unified image generation},
  author={Xiao, Shitao and Wang, Yueze and Zhou, Junjie and Yuan, Huaying and Xing, Xingrun and Yan, Ruiran and Wang, Shuting and Huang, Tiejun and Liu, Zheng},
  journal={arXiv preprint arXiv:2409.11340},
  year={2024}
}
```

## License

This work is licensed under the Apache 2.0 License.
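As a small illustration of the `max_pixels` behavior described in Usage Tips, an aspect-ratio-preserving downscale can be sketched as below. This is a minimal sketch of the documented rule, not the repository's actual resizing code, which may differ in rounding details:

```python
import math

def fit_max_pixels(width: int, height: int, max_pixels: int = 1024 * 1024) -> tuple[int, int]:
    """Scale (width, height) down so that width * height <= max_pixels,
    preserving the aspect ratio. Images already under the limit are
    returned unchanged. Note: the 1-pixel floor means extreme aspect
    ratios may not satisfy the limit exactly."""
    if width * height <= max_pixels:
        return width, height
    # Uniform scale factor: applying it to both sides keeps the aspect
    # ratio and brings the pixel count down to roughly max_pixels.
    scale = math.sqrt(max_pixels / (width * height))
    return max(1, int(width * scale)), max(1, int(height * scale))
```

For example, a 4000×3000 input with the default 1024×1024 budget is reduced to roughly 1182×886, keeping the 4:3 ratio while fitting under the pixel limit.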