"vscode:/vscode.git/clone" did not exist on "c47c4480774d6dd0639862a0ef259cd564d9359b"
Commit 30af93f2 authored by chenpangpang's avatar chenpangpang
Browse files

feat: initial GPU commit

parent 68e98ab8
.idea
chenyh
FROM image.sourcefind.cn:5000/gpu/admin/base/jupyterlab-pytorch:2.3.1-py3.10-cuda11.8-ubuntu22.04-devel as base
ARG IMAGE=nvcomposer
ARG IMAGE_UPPER=NVComposer
ARG BRANCH=gpu
RUN cd /root && git clone -b $BRANCH http://developer.hpccube.com/codes/chenpangpang/$IMAGE.git
WORKDIR /root/$IMAGE/$IMAGE_UPPER
RUN pip install -r requirements.txt
#########
# Prod #
#########
FROM image.sourcefind.cn:5000/gpu/admin/base/jupyterlab-pytorch:2.3.1-py3.10-cuda11.8-ubuntu22.04-devel
ARG IMAGE=nvcomposer
ARG IMAGE_UPPER=NVComposer
COPY chenyh/$IMAGE/frpc_linux_amd64_* /opt/conda/lib/python3.10/site-packages/gradio/
RUN chmod +x /opt/conda/lib/python3.10/site-packages/gradio/frpc_linux_amd64_*
COPY chenyh/nvcomposer/NVComposer-V0.1.ckpt /root/$IMAGE_UPPER/NVComposer-V0.1.ckpt
COPY --from=base /opt/conda/lib/python3.10/site-packages /opt/conda/lib/python3.10/site-packages
COPY --from=base /root/$IMAGE/$IMAGE_UPPER /root/$IMAGE_UPPER
COPY --from=base /root/$IMAGE/启动器.ipynb /root/$IMAGE/start.sh /root/
COPY --from=base /root/$IMAGE/assets/ /root/assets/
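# Example build invocation (a sketch; the image tag is an assumption, adjust ARGs to your environment):
#   docker build --build-arg IMAGE=nvcomposer --build-arg IMAGE_UPPER=NVComposer --build-arg BRANCH=gpu -t nvcomposer:gpu .
# The "base" stage clones the repository and installs its requirements; this final stage copies the resulting
# site-packages, the NVComposer checkpoint, the launcher notebook, start.sh, and the assets into a clean runtime image.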
.idea
__pycache__
.git
*.pyc
.DS_Store
._*
cache
Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved. The below software and/or models in this distribution may have been modified by THL A29 Limited ("Tencent Modifications"). All Tencent Modifications are Copyright (C) THL A29 Limited.
License Terms of the NVComposer:
--------------------------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this Software and associated documentation files, to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sublicense copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
- You agree to use the NVComposer only for academic, research and education purposes, and refrain from using it for any commercial or production purposes under any circumstances.
- The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
For avoidance of doubts, "Software" means the NVComposer model inference-enabling code, parameters and weights made available under this license.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Other dependencies and licenses:
Open Source Model Licensed under the CreativeML OpenRAIL M license:
The below model in this distribution may have been modified by THL A29 Limited ("Tencent Modifications"), as model weights provided for the NVComposer Project hereunder is fine-tuned with the assistance of below model.
All Tencent Modifications are Copyright (C) 2024 THL A29 Limited.
--------------------------------------------------------------------
1. stable-diffusion-v1-5
This stable-diffusion-v1-5 is licensed under the CreativeML OpenRAIL M license, Copyright (c) 2022 Robin Rombach and Patrick Esser and contributors
The original model is available at: https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5
Terms of the CreativeML OpenRAIL M license:
--------------------------------------------------------------------
Copyright (c) 2022 Robin Rombach and Patrick Esser and contributors
CreativeML Open RAIL-M
dated August 22, 2022
Section I: PREAMBLE
Multimodal generative models are being widely adopted and used, and have the potential to transform the way artists, among other individuals, conceive and benefit from AI or ML technologies as a tool for content creation.
Notwithstanding the current and potential benefits that these artifacts can bring to society at large, there are also concerns about potential misuses of them, either due to their technical limitations or ethical considerations.
In short, this license strives for both the open and responsible downstream use of the accompanying model. When it comes to the open character, we took inspiration from open source permissive licenses regarding the grant of IP rights. Referring to the downstream responsible use, we added use-based restrictions not permitting the use of the Model in very specific scenarios, in order for the licensor to be able to enforce the license in case potential misuses of the Model may occur. At the same time, we strive to promote open and responsible research on generative models for art and content generation.
Even though downstream derivative versions of the model could be released under different licensing terms, the latter will always have to include - at minimum - the same use-based restrictions as the ones in the original license (this license). We believe in the intersection between open and responsible AI development; thus, this License aims to strike a balance between both in order to enable responsible open-science in the field of AI.
This License governs the use of the model (and its derivatives) and is informed by the model card associated with the model.
NOW THEREFORE, You and Licensor agree as follows:
1. Definitions
- "License" means the terms and conditions for use, reproduction, and Distribution as defined in this document.
- "Data" means a collection of information and/or content extracted from the dataset used with the Model, including to train, pretrain, or otherwise evaluate the Model. The Data is not licensed under this License.
- "Output" means the results of operating a Model as embodied in informational content resulting therefrom.
- "Model" means any accompanying machine-learning based assemblies (including checkpoints), consisting of learnt weights, parameters (including optimizer states), corresponding to the model architecture as embodied in the Complementary Material, that have been trained or tuned, in whole or in part on the Data, using the Complementary Material.
- "Derivatives of the Model" means all modifications to the Model, works based on the Model, or any other model which is created or initialized by transfer of patterns of the weights, parameters, activations or output of the Model, to the other model, in order to cause the other model to perform similarly to the Model, including - but not limited to - distillation methods entailing the use of intermediate data representations or methods based on the generation of synthetic data by the Model for training the other model.
- "Complementary Material" means the accompanying source code and scripts used to define, run, load, benchmark or evaluate the Model, and used to prepare data for training or evaluation, if any. This includes any accompanying documentation, tutorials, examples, etc, if any.
- "Distribution" means any transmission, reproduction, publication or other sharing of the Model or Derivatives of the Model to a third party, including providing the Model as a hosted service made available by electronic or other remote means - e.g. API-based or web access.
- "Licensor" means the copyright owner or entity authorized by the copyright owner that is granting the License, including the persons or entities that may have rights in the Model and/or distributing the Model.
- "You" (or "Your") means an individual or Legal Entity exercising permissions granted by this License and/or making use of the Model for whichever purpose and in any field of use, including usage of the Model in an end-use application - e.g. chatbot, translator, image generator.
- "Third Parties" means individuals or legal entities that are not under common control with Licensor or You.
- "Contribution" means any work of authorship, including the original version of the Model and any modifications or additions to that Model or Derivatives of the Model thereof, that is intentionally submitted to Licensor for inclusion in the Model by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Model, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
- "Contributor" means Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Model.
Section II: INTELLECTUAL PROPERTY RIGHTS
Both copyright and patent grants apply to the Model, Derivatives of the Model and Complementary Material. The Model and Derivatives of the Model are subject to additional terms as described in Section III.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform, sublicense, and distribute the Complementary Material, the Model, and Derivatives of the Model.
3. Grant of Patent License. Subject to the terms and conditions of this License and where and as applicable, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this paragraph) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Model and the Complementary Material, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Model to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model and/or Complementary Material or a Contribution incorporated within the Model and/or Complementary Material constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for the Model and/or Work shall terminate as of the date such litigation is asserted or filed.
Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
4. Distribution and Redistribution. You may host for Third Party remote access purposes (e.g. software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in any medium, with or without modifications, provided that You meet the following conditions:
Use-based restrictions as referenced in paragraph 5 MUST be included as an enforceable provision by You in any type of legal agreement (e.g. a license) governing the use and/or distribution of the Model or Derivatives of the Model, and You shall give notice to subsequent users You Distribute to, that the Model or Derivatives of the Model are subject to paragraph 5. This provision does not apply to the use of Complementary Material.
You must give any Third Party recipients of the Model or Derivatives of the Model a copy of this License;
You must cause any modified files to carry prominent notices stating that You changed the files;
You must retain all copyright, patent, trademark, and attribution notices excluding those notices that do not pertain to any part of the Model, Derivatives of the Model.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions - respecting paragraph 4.a. - for use, reproduction, or Distribution of Your modifications, or for any such Derivatives of the Model as a whole, provided Your use, reproduction, and Distribution of the Model otherwise complies with the conditions stated in this License.
5. Use-based restrictions. The restrictions set forth in Attachment A are considered Use-based restrictions. Therefore You cannot use the Model and the Derivatives of the Model for the specified restricted uses. You may use the Model subject to this License, including only for lawful purposes and in accordance with the License. Use may include creating any content with, finetuning, updating, running, training, evaluating and/or reparametrizing the Model. You shall require all of Your users who use the Model or a Derivative of the Model to comply with the terms of this paragraph (paragraph 5).
6. The Output You Generate. Except as set forth herein, Licensor claims no rights in the Output You generate using the Model. You are accountable for the Output you generate and its subsequent uses. No use of the output can contravene any provision as stated in the License.
Section IV: OTHER PROVISIONS
7. Updates and Runtime Restrictions. To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this License, update the Model through electronic means, or modify the Output of the Model based on updates. You shall undertake reasonable efforts to use the latest version of the Model.
8. Trademarks and related. Nothing in this License permits You to make use of Licensors’ trademarks, trade names, logos or to otherwise suggest endorsement or misrepresent the relationship between the parties; and any rights not expressly granted herein are reserved by the Licensors.
9. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Model and the Complementary Material (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Model, Derivatives of the Model, and the Complementary Material and assume any risks associated with Your exercise of permissions under this License.
10. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Model and the Complementary Material (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
11. Accepting Warranty or Additional Liability. While redistributing the Model, Derivatives of the Model and the Complementary Material thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
12. If any provision of this License is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.
END OF TERMS AND CONDITIONS
Attachment A
Use Restrictions
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national, federal, state, local or international law or regulation;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate personal identifiable information that can be used to harm an individual;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
- To provide medical advice and medical results interpretation;
- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
Open Source Software Licensed under the Apache License Version 2.0:
--------------------------------------------------------------------
1. pytorch_lightning
Copyright 2018-2021 William Falcon
2. gradio
Copyright (c) gradio original author and authors
Terms of the Apache License Version 2.0:
--------------------------------------------------------------------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Open Source Software Licensed under the BSD 3-Clause License:
--------------------------------------------------------------------
1. torchvision
Copyright (c) Soumith Chintala 2016,
All rights reserved.
2. scikit-learn
Copyright (c) 2007-2024 The scikit-learn developers.
All rights reserved.
Terms of the BSD 3-Clause License:
--------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Open Source Software Licensed under the BSD 3-Clause License and Other Licenses of the Third-Party Components therein:
--------------------------------------------------------------------
1. torch
Copyright (c) 2016- Facebook, Inc (Adam Paszke)
Copyright (c) 2014- Facebook, Inc (Soumith Chintala)
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
A copy of the BSD 3-Clause is included in this file.
For the license of other third party components, please refer to the following URL:
https://github.com/pytorch/pytorch/tree/v2.1.2/third_party
Open Source Software Licensed under the BSD 3-Clause License and Other Licenses of the Third-Party Components therein:
--------------------------------------------------------------------
1. numpy
Copyright (c) 2005-2023, NumPy Developers.
All rights reserved.
A copy of the BSD 3-Clause is included in this file.
For the license of other third party components, please refer to the following URL:
https://github.com/numpy/numpy/blob/v1.26.3/LICENSES_bundled.txt
Open Source Software Licensed under the HPND License:
--------------------------------------------------------------------
1. Pillow
Copyright © 2010-2024 by Jeffrey A. Clark (Alex) and contributors.
Terms of the HPND License:
--------------------------------------------------------------------
By obtaining, using, and/or copying this software and/or its associated
documentation, you agree that you have read, understood, and will comply
with the following terms and conditions:
Permission to use, copy, modify and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appears in all copies, and that
both that copyright notice and this permission notice appear in supporting
documentation, and that the name of Secret Labs AB or the author not be
used in advertising or publicity pertaining to distribution of the software
without specific, written prior permission.
SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS.
IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR ANY SPECIAL,
INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
Open Source Software Licensed under the MIT License:
--------------------------------------------------------------------
1. einops
Copyright (c) 2018 Alex Rogozhnikov
Terms of the MIT License:
--------------------------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Open Source Software Licensed under the MIT License and Other Licenses of the Third-Party Components therein:
--------------------------------------------------------------------
1. opencv-python
Copyright (c) Olli-Pekka Heinisuo
A copy of the MIT is included in this file.
For the license of other third party components, please refer to the following URL:
https://github.com/opencv/opencv-python/blob/4.x/LICENSE-3RD-PARTY.txt
---
title: NVComposer
emoji: 📸
colorFrom: indigo
colorTo: gray
sdk: gradio
sdk_version: 4.38.1
app_file: app.py
pinned: false
python_version: 3.10
---
import datetime
import json
import os
import gradio as gr
import PIL.Image
import numpy as np
import torch
import torchvision.transforms.functional
from numpy import deg2rad
from omegaconf import OmegaConf
from core.data.camera_pose_utils import convert_w2c_between_c2w
from core.data.combined_multi_view_dataset import (
get_ray_embeddings,
normalize_w2c_camera_pose_sequence,
crop_and_resize,
)
from main.evaluation.funcs import load_model_checkpoint
from main.evaluation.pose_interpolation import (
move_pose,
interpolate_camera_poses,
generate_spherical_trajectory,
)
from main.evaluation.utils_eval import process_inference_batch
from utils.utils import instantiate_from_config
from core.models.samplers.ddim import DDIMSampler
torch.set_float32_matmul_precision("medium")
gpu_no = 0
config = "./configs/dual_stream/nvcomposer.yaml"
ckpt = "NVComposer-V0.1.ckpt"
model_resolution_height, model_resolution_width = 576, 1024
num_views = 16
dtype = torch.float16
config = OmegaConf.load(config)
model_config = config.pop("model", OmegaConf.create())
model_config.params.train_with_multi_view_feature_alignment = False
model = instantiate_from_config(model_config).cuda(gpu_no).to(dtype=dtype)
assert os.path.exists(ckpt), f"Error: checkpoint [{ckpt}] Not Found!"
print(f"Loading checkpoint from {ckpt}...")
model = load_model_checkpoint(model, ckpt)
model.eval()
latent_h, latent_w = (
model_resolution_height // 8,
model_resolution_width // 8,
)
channels = model.channels
sampler = DDIMSampler(model)
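# Each row in EXAMPLES follows the input order of the gr.Examples component defined further below:
# [condition_image_1, condition_image_2, camera_mode, spherical_angle_x, spherical_angle_y,
#  spherical_radius, rotation_x, rotation_y, rotation_z, translation_x, translation_y,
#  translation_z, cfg_scale, extra_cfg_scale, sample_steps, output_video, num_images_slider]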
EXAMPLES = [
[
"./assets/sample1.jpg",
None,
1,
0,
0,
1,
0,
0,
0,
0,
0,
-0.2,
3,
1.5,
20,
"./assets/sample1.mp4",
1,
],
[
"./assets/sample2.jpg",
None,
0,
0,
25,
1,
0,
0,
0,
0,
0,
0,
3,
1.5,
20,
"./assets/sample2.mp4",
1,
],
[
"./assets/sample3.jpg",
None,
0,
0,
15,
1,
0,
0,
0,
0,
0,
0,
3,
1.5,
20,
"./assets/sample3.mp4",
1,
],
[
"./assets/sample4.jpg",
None,
0,
0,
-15,
1,
0,
0,
0,
0,
0,
0,
3,
1.5,
20,
"./assets/sample4.mp4",
1,
],
[
"./assets/sample5-1.png",
"./assets/sample5-2.png",
0,
0,
-30,
1,
0,
0,
0,
0,
0,
0,
3,
1.5,
20,
"./assets/sample5.mp4",
2,
],
]
def compose_data_item(
num_views,
cond_pil_image_list,
caption="",
camera_mode=False,
input_pose_format="c2w",
model_pose_format="c2w",
x_rotation_angle=10,
y_rotation_angle=10,
z_rotation_angle=10,
x_translation=0.5,
y_translation=0.5,
z_translation=0.5,
image_size=None,
spherical_angle_x=10,
spherical_angle_y=10,
spherical_radius=10,
):
if image_size is None:
image_size = [512, 512]
latent_size = [image_size[0] // 8, image_size[1] // 8]
def image_processing_function(x):
return (
torch.from_numpy(
np.array(
crop_and_resize(
x, target_height=image_size[0], target_width=image_size[1]
)
).transpose((2, 0, 1))
).float()
/ 255.0
)
resizer_image_to_latent_size = torchvision.transforms.Resize(
size=latent_size,
interpolation=torchvision.transforms.InterpolationMode.BILINEAR,
antialias=True,
)
num_cond_views = len(cond_pil_image_list)
print(f"Number of received condition images: {num_cond_views}.")
num_target_views = num_views - num_cond_views
if camera_mode == 1:
print("Camera Mode: Movement with Rotation and Translation.")
start_pose = torch.tensor(
[
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
]
).float()
end_pose = move_pose(
start_pose,
x_angle=torch.tensor(deg2rad(x_rotation_angle)),
y_angle=torch.tensor(deg2rad(y_rotation_angle)),
z_angle=torch.tensor(deg2rad(z_rotation_angle)),
translation=torch.tensor([x_translation, y_translation, z_translation]),
)
target_poses = interpolate_camera_poses(
start_pose, end_pose, num_steps=num_target_views
)
elif camera_mode == 0:
print("Camera Mode: Spherical Movement.")
target_poses = generate_spherical_trajectory(
end_angles=(spherical_angle_x, spherical_angle_y),
radius=spherical_radius,
num_steps=num_target_views,
)
print("Target pose sequence (before normalization): \n ", target_poses)
cond_poses = [
torch.tensor(
[
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
]
).float()
] * num_cond_views
target_poses = torch.stack(target_poses, dim=0).float()
cond_poses = torch.stack(cond_poses, dim=0).float()
    if camera_mode == 0 and input_pose_format != "w2c":
# c2w to w2c. Input for normalize_camera_pose_sequence() should be w2c
target_poses = convert_w2c_between_c2w(target_poses)
cond_poses = convert_w2c_between_c2w(cond_poses)
target_poses, cond_poses = normalize_w2c_camera_pose_sequence(
target_poses,
cond_poses,
output_c2w=model_pose_format == "c2w",
translation_norm_mode="disabled",
)
target_and_condition_camera_poses = torch.cat([target_poses, cond_poses], dim=0)
print("Target pose sequence (after normalization): \n ", target_poses)
fov_xy = [80, 45]
target_rays = get_ray_embeddings(
target_poses,
size_h=image_size[0],
size_w=image_size[1],
fov_xy_list=[fov_xy for _ in range(num_target_views)],
)
condition_rays = get_ray_embeddings(
cond_poses,
size_h=image_size[0],
size_w=image_size[1],
fov_xy_list=[fov_xy for _ in range(num_cond_views)],
)
target_images_tensor = torch.zeros(
num_target_views, 3, image_size[0], image_size[1]
)
condition_images = [image_processing_function(x) for x in cond_pil_image_list]
condition_images_tensor = torch.stack(condition_images, dim=0) * 2.0 - 1.0
target_images_tensor[0, :, :, :] = condition_images_tensor[0, :, :, :]
target_and_condition_images_tensor = torch.cat(
[target_images_tensor, condition_images_tensor], dim=0
)
target_and_condition_rays_tensor = torch.cat([target_rays, condition_rays], dim=0)
target_and_condition_rays_tensor = resizer_image_to_latent_size(
target_and_condition_rays_tensor * 5.0
)
mask_preserving_target = torch.ones(size=[num_views, 1], dtype=torch.float16)
mask_preserving_target[num_target_views:] = 0.0
combined_fovs = torch.stack([torch.tensor(fov_xy)] * num_views, dim=0)
mask_only_preserving_first_target = torch.zeros_like(mask_preserving_target)
mask_only_preserving_first_target[0] = 1.0
mask_only_preserving_first_condition = torch.zeros_like(mask_preserving_target)
mask_only_preserving_first_condition[num_target_views] = 1.0
test_data = {
# T, C, H, W
"combined_images": target_and_condition_images_tensor.unsqueeze(0),
"mask_preserving_target": mask_preserving_target.unsqueeze(0), # T, 1
# T, 1
"mask_only_preserving_first_target": mask_only_preserving_first_target.unsqueeze(
0
),
# T, 1
"mask_only_preserving_first_condition": mask_only_preserving_first_condition.unsqueeze(
0
),
# T, C, H//8, W//8
"combined_rays": target_and_condition_rays_tensor.unsqueeze(0),
"combined_fovs": combined_fovs.unsqueeze(0),
"target_and_condition_camera_poses": target_and_condition_camera_poses.unsqueeze(
0
),
"num_target_images": torch.tensor([num_target_views]),
"num_cond_images": torch.tensor([num_cond_views]),
"num_cond_images_str": [str(num_cond_views)],
"item_idx": [0],
"subset_key": ["evaluation"],
"caption": [caption],
"fov_xy": torch.tensor(fov_xy).float().unsqueeze(0),
}
return test_data
def tensor_to_mp4(video, savepath, fps, nrow=None):
"""
video: torch.Tensor, b,t,c,h,w, value range: 0-1
"""
n = video.shape[0]
print("Video shape=", video.shape)
video = video.permute(1, 0, 2, 3, 4) # t,n,c,h,w
nrow = int(np.sqrt(n)) if nrow is None else nrow
frame_grids = [
torchvision.utils.make_grid(framesheet, nrow=nrow) for framesheet in video
] # [3, grid_h, grid_w]
# stack in temporal dim [T, 3, grid_h, grid_w]
grid = torch.stack(frame_grids, dim=0)
grid = torch.clamp(grid.float(), -1.0, 1.0)
# [T, 3, grid_h, grid_w] -> [T, grid_h, grid_w, 3]
grid = (grid * 255).to(torch.uint8).permute(0, 2, 3, 1)
# print(f'Save video to {savepath}')
torchvision.io.write_video(
savepath, grid, fps=fps, video_codec="h264", options={"crf": "10"}
)
def parse_to_np_array(input_string):
try:
# Try to parse the input as JSON first
data = json.loads(input_string)
arr = np.array(data)
except json.JSONDecodeError:
# If JSON parsing fails, assume it's a multi-line string and handle accordingly
lines = input_string.strip().splitlines()
data = []
for line in lines:
# Split the line by spaces and convert to floats
data.append([float(x) for x in line.split()])
arr = np.array(data)
# Check if the resulting array is 3x4
if arr.shape != (3, 4):
raise ValueError(f"Expected array shape (3, 4), but got {arr.shape}")
return arr
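# Example inputs accepted by parse_to_np_array (both parse to a 3x4 camera pose matrix):
#   '[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]'   (JSON)
#   "1 0 0 0\n0 1 0 0\n0 0 1 0"                    (whitespace-separated rows)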
def run_inference(
camera_mode,
input_cond_image1=None,
input_cond_image2=None,
input_cond_image3=None,
input_cond_image4=None,
input_pose_format="c2w",
model_pose_format="c2w",
x_rotation_angle=None,
y_rotation_angle=None,
z_rotation_angle=None,
x_translation=None,
y_translation=None,
z_translation=None,
trajectory_extension_factor=1,
cfg_scale=1.0,
cfg_scale_extra=1.0,
sample_steps=50,
num_images_slider=None,
spherical_angle_x=10,
spherical_angle_y=10,
spherical_radius=10,
random_seed=1,
):
cfg_scale_extra = 1.0 # Disable Extra CFG due to time limit of ZeroGPU
os.makedirs("./cache/", exist_ok=True)
with torch.no_grad():
with torch.cuda.amp.autocast(dtype=dtype):
torch.manual_seed(random_seed)
input_cond_images = []
for _cond_image in [
input_cond_image1,
input_cond_image2,
input_cond_image3,
input_cond_image4,
]:
if _cond_image is not None:
if isinstance(_cond_image, np.ndarray):
_cond_image = PIL.Image.fromarray(_cond_image)
input_cond_images.append(_cond_image)
num_condition_views = len(input_cond_images)
            assert (
                num_images_slider == num_condition_views
            ), f"`num_images_slider`={num_images_slider} does not match the number of provided condition images ({num_condition_views})."
input_caption = ""
num_target_views = num_views - num_condition_views
data_item = compose_data_item(
num_views=num_views,
cond_pil_image_list=input_cond_images,
caption=input_caption,
camera_mode=camera_mode,
input_pose_format=input_pose_format,
model_pose_format=model_pose_format,
x_rotation_angle=x_rotation_angle,
y_rotation_angle=y_rotation_angle,
z_rotation_angle=z_rotation_angle,
x_translation=x_translation,
y_translation=y_translation,
z_translation=z_translation,
image_size=[model_resolution_height, model_resolution_width],
spherical_angle_x=spherical_angle_x,
spherical_angle_y=spherical_angle_y,
spherical_radius=spherical_radius,
)
batch = data_item
if trajectory_extension_factor == 1:
print("No trajectory extension.")
else:
print(f"Trajectory is enabled: {trajectory_extension_factor}.")
full_x_samples = []
for repeat_idx in range(int(trajectory_extension_factor)):
if repeat_idx != 0:
batch["combined_images"][:, 0, :, :, :] = full_x_samples[-1][
:, -1, :, :, :
]
batch["combined_images"][:, num_target_views, :, :, :] = (
full_x_samples[-1][:, -1, :, :, :]
)
cond, uc, uc_extra, x_rec = process_inference_batch(
cfg_scale, batch, model, with_uncondition_extra=True
)
batch_size = x_rec.shape[0]
shape_without_batch = (num_views, channels, latent_h, latent_w)
samples, _ = sampler.sample(
sample_steps,
batch_size=batch_size,
shape=shape_without_batch,
conditioning=cond,
verbose=True,
unconditional_conditioning=uc,
unconditional_guidance_scale=cfg_scale,
unconditional_conditioning_extra=uc_extra,
unconditional_guidance_scale_extra=cfg_scale_extra,
x_T=None,
expand_mode=False,
num_target_views=num_views - num_condition_views,
num_condition_views=num_condition_views,
dense_expansion_ratio=None,
pred_x0_post_process_function=None,
pred_x0_post_process_function_kwargs=None,
)
if samples.size(2) > 4:
image_samples = samples[:, :num_target_views, :4, :, :]
else:
image_samples = samples
per_instance_decoding = False
if per_instance_decoding:
x_samples = []
                    for item_idx in range(image_samples.shape[0]):
                        image_sample = image_samples[
                            item_idx : item_idx + 1, :, :, :, :
                        ]
                        x_sample = model.decode_first_stage(image_sample)
                        x_samples.append(x_sample)
x_samples = torch.cat(x_samples, dim=0)
else:
x_samples = model.decode_first_stage(image_samples)
full_x_samples.append(x_samples[:, :num_target_views, ...])
full_x_samples = torch.concat(full_x_samples, dim=1)
x_samples = full_x_samples
x_samples = torch.clamp((x_samples + 1.0) / 2.0, 0.0, 1.0)
video_name = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S") + ".mp4"
video_path = "./cache/" + video_name
tensor_to_mp4(x_samples.detach().cpu(), fps=6, savepath=video_path)
return video_path
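# Headless usage sketch (not part of the original script; values are illustrative and assume the
# bundled sample assets are present):
#   import numpy as np, PIL.Image
#   anchor = np.array(PIL.Image.open("./assets/sample1.jpg"))
#   video_path = run_inference(
#       camera_mode=0, input_cond_image1=anchor,
#       spherical_angle_x=0, spherical_angle_y=5, spherical_radius=1,
#       cfg_scale=3.0, sample_steps=18, num_images_slider=1, random_seed=1024,
#   )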
with gr.Blocks() as demo:
gr.HTML(
"""
<div style="text-align: center;">
<h1 style="text-align: center; color: #333333;">📸 NVComposer</h1>
<h3 style="text-align: center; color: #333333;">Generative Novel View Synthesis with Sparse and
Unposed Images</h3>
<p style="text-align: center; font-weight: bold">
<a href="https://lg-li.github.io/project/nvcomposer">🌍 Project Page</a> |
<a href="https://arxiv.org/abs/2412.03517">📃 ArXiv Preprint</a> |
<a href="https://github.com/TencentARC/NVComposer">🧑‍💻 Github Repository</a>
</p>
<p style="text-align: left; font-size: 1.1em;">
Welcome to the demo of <strong>NVComposer</strong>. Follow the steps below to explore its capabilities:
</p>
</div>
<div style="text-align: left; margin: 0 auto; ">
<ol style="font-size: 1.1em;">
<li><strong>Choose camera movement mode:</strong> Spherical Mode or Rotation & Translation Mode.</li>
<li><strong>Customize the camera trajectory:</strong> Adjust the spherical parameters or rotation/translations along the X, Y,
and Z axes.</li>
<li><strong>Upload images:</strong> You can upload up to 4 images as input conditions.</li>
<li><strong>Set sampling parameters (optional):</strong> Tweak the settings and click the <b>Generate</b> button.</li>
</ol>
<p>
    ⏱️ <b>ZeroGPU Time Limit</b>: Hugging Face ZeroGPU has an inference time limit of 180 seconds.
    You may need to <b>log in with a free account</b> to use this demo.
    Large sampling steps might lead to a timeout (GPU abort).
    In that case, please consider logging in with a Pro account or running the demo on your local machine.
</p>
<p style="text-align: left; font-size: 1.1em;">🤗 Please 🌟 star our <a href="https://github.com/TencentARC/NVComposer"> GitHub repo </a>
and click on the ❤️ like button above if you find our work helpful. <br>
<a href="https://github.com/TencentARC/NVComposer"><img src="https://img.shields.io/github/stars/TencentARC%2FNVComposer"/></a> </p>
</div>
"""
)
with gr.Row():
with gr.Column(scale=1):
with gr.Accordion("Camera Movement Settings", open=True):
camera_mode = gr.Radio(
choices=[("Spherical Mode", 0), ("Rotation & Translation Mode", 1)],
label="Camera Mode",
value=0,
interactive=True,
)
with gr.Group(visible=True) as group_spherical:
                    # Additional options for spherical mode can be added here in the future.
gr.HTML(
"""<p style="padding: 10px">
<b>Spherical Mode</b> allows you to control the camera's movement by specifying its position on a sphere centered around the scene.
Adjust the Polar Angle (vertical rotation), Azimuth Angle (horizontal rotation), and Radius (distance from the center of the anchor view) to define the camera's viewpoint.
The anchor view is considered located on the sphere at the specified radius, aligned with a zero polar angle and zero azimuth angle, oriented toward the origin.
</p>
"""
)
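                    # Note (based on the description above): these three sliders are passed to
                    # generate_spherical_trajectory(end_angles=(theta, phi), radius=r, ...) inside
                    # compose_data_item, which builds the target camera poses on a sphere around the
                    # anchor view (the slider ranges suggest the angles are given in degrees).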
spherical_angle_x = gr.Slider(
minimum=-30,
maximum=30,
step=1,
value=0,
label="Polar Angle (Theta)",
)
spherical_angle_y = gr.Slider(
minimum=-30,
maximum=30,
step=1,
value=5,
label="Azimuth Angle (Phi)",
)
spherical_radius = gr.Slider(
minimum=0.5, maximum=1.5, step=0.1, value=1, label="Radius"
)
with gr.Group(visible=False) as group_move_rotation_translation:
gr.HTML(
"""<p style="padding: 10px">
<b>Rotation & Translation Mode</b> lets you directly define how the camera moves and rotates in the 3D space.
Use Rotation X/Y/Z to control the camera's orientation and Translation X/Y/Z to shift its position.
The anchor view serves as the starting point, with no initial rotation or translation applied.
</p>
"""
)
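                    # Note (based on the description above): these six sliders define the end pose via
                    # move_pose(start_pose, x_angle=deg2rad(rx), y_angle=deg2rad(ry), z_angle=deg2rad(rz),
                    # translation=[tx, ty, tz]); compose_data_item then interpolates from the identity
                    # anchor pose to that end pose with interpolate_camera_poses(...).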
rotation_x = gr.Slider(
minimum=-20, maximum=20, step=1, value=0, label="Rotation X"
)
rotation_y = gr.Slider(
minimum=-20, maximum=20, step=1, value=0, label="Rotation Y"
)
rotation_z = gr.Slider(
minimum=-20, maximum=20, step=1, value=0, label="Rotation Z"
)
translation_x = gr.Slider(
minimum=-1, maximum=1, step=0.1, value=0, label="Translation X"
)
translation_y = gr.Slider(
minimum=-1, maximum=1, step=0.1, value=0, label="Translation Y"
)
translation_z = gr.Slider(
minimum=-1,
maximum=1,
step=0.1,
value=-0.2,
label="Translation Z",
)
input_camera_pose_format = gr.Radio(
choices=["W2C", "C2W"],
value="C2W",
label="Input Camera Pose Format",
visible=False,
)
model_camera_pose_format = gr.Radio(
choices=["W2C", "C2W"],
value="C2W",
label="Model Camera Pose Format",
visible=False,
)
def on_change_selected_camera_settings(_id):
return [gr.update(visible=_id == 0), gr.update(visible=_id == 1)]
camera_mode.change(
fn=on_change_selected_camera_settings,
inputs=camera_mode,
outputs=[group_spherical, group_move_rotation_translation],
)
with gr.Accordion("Advanced Sampling Settings"):
cfg_scale = gr.Slider(
value=3.0,
label="Classifier-Free Guidance Scale",
minimum=1,
maximum=10,
step=0.1,
)
extra_cfg_scale = gr.Slider(
value=1.0,
label="Extra Classifier-Free Guidance Scale",
minimum=1,
maximum=10,
step=0.1,
visible=False,
)
sample_steps = gr.Slider(
value=18, label="DDIM Sample Steps", minimum=0, maximum=25, step=1
)
trajectory_extension_factor = gr.Slider(
value=1,
label="Trajectory Extension (proportional to runtime)",
minimum=1,
maximum=3,
step=1,
)
random_seed = gr.Slider(
value=1024, minimum=1, maximum=9999, step=1, label="Random Seed"
)
def on_change_trajectory_extension_factor(_val):
if _val == 1:
return [
gr.update(minimum=-30, maximum=30),
gr.update(minimum=-30, maximum=30),
gr.update(minimum=0.5, maximum=1.5),
gr.update(minimum=-20, maximum=20),
gr.update(minimum=-20, maximum=20),
gr.update(minimum=-20, maximum=20),
gr.update(minimum=-1, maximum=1),
gr.update(minimum=-1, maximum=1),
gr.update(minimum=-1, maximum=1),
]
elif _val == 2:
return [
gr.update(minimum=-15, maximum=15),
gr.update(minimum=-15, maximum=15),
gr.update(minimum=0.5, maximum=1.5),
gr.update(minimum=-10, maximum=10),
gr.update(minimum=-10, maximum=10),
gr.update(minimum=-10, maximum=10),
gr.update(minimum=-0.5, maximum=0.5),
gr.update(minimum=-0.5, maximum=0.5),
gr.update(minimum=-0.5, maximum=0.5),
]
elif _val == 3:
return [
gr.update(minimum=-10, maximum=10),
gr.update(minimum=-10, maximum=10),
gr.update(minimum=0.5, maximum=1.5),
gr.update(minimum=-6, maximum=6),
gr.update(minimum=-6, maximum=6),
gr.update(minimum=-6, maximum=6),
gr.update(minimum=-0.3, maximum=0.3),
gr.update(minimum=-0.3, maximum=0.3),
gr.update(minimum=-0.3, maximum=0.3),
]
trajectory_extension_factor.change(
fn=on_change_trajectory_extension_factor,
inputs=trajectory_extension_factor,
outputs=[
spherical_angle_x,
spherical_angle_y,
spherical_radius,
rotation_x,
rotation_y,
rotation_z,
translation_x,
translation_y,
translation_z,
],
)
with gr.Column(scale=1):
with gr.Accordion("Input Image(s)", open=True):
num_images_slider = gr.Slider(
minimum=1,
maximum=4,
step=1,
value=1,
label="Number of Input Image(s)",
)
condition_image_1 = gr.Image(label="Input Image 1 (Anchor View)")
condition_image_2 = gr.Image(label="Input Image 2", visible=False)
condition_image_3 = gr.Image(label="Input Image 3", visible=False)
condition_image_4 = gr.Image(label="Input Image 4", visible=False)
with gr.Column(scale=1):
with gr.Accordion("Output Video", open=True):
output_video = gr.Video(label="Output Video")
run_btn = gr.Button("Generate")
with gr.Accordion("Notes", open=True):
gr.HTML(
"""
<p style="font-size: 1.1em; line-height: 1.6; color: #555;">
🧐 <b>Reminder</b>:
As a generative model, NVComposer may occasionally produce unexpected outputs.
Try adjusting the random seed, sampling steps, or CFG scales to explore different results.
<br>
🤔 <b>Longer Generation</b>:
            If you need a longer video, you can increase the trajectory extension value in the advanced sampling settings and run the demo on your own GPU.
This extends the defined camera trajectory by repeating it, allowing for a longer output.
This also requires using smaller rotation or translation scales to maintain smooth transitions and will increase the generation time. <br>
🤗 <b>Limitation</b>:
This is the initial beta version of NVComposer.
Its generalizability may be limited in certain scenarios, and artifacts can appear with large camera motions due to the current foundation model's constraints.
We’re actively working on an improved version with enhanced datasets and a more powerful foundation model,
and we are looking for <b>collaboration opportunities from the community</b>. <br>
✨ We welcome your feedback and questions. Thank you! </p>
"""
)
with gr.Row():
gr.Examples(
label="Quick Examples",
examples=EXAMPLES,
inputs=[
condition_image_1,
condition_image_2,
camera_mode,
spherical_angle_x,
spherical_angle_y,
spherical_radius,
rotation_x,
rotation_y,
rotation_z,
translation_x,
translation_y,
translation_z,
cfg_scale,
extra_cfg_scale,
sample_steps,
output_video,
num_images_slider,
],
examples_per_page=5,
cache_examples=False,
)
# Update visibility of condition images based on the slider
def update_visible_images(num_images):
return [
gr.update(visible=num_images >= 2),
gr.update(visible=num_images >= 3),
gr.update(visible=num_images >= 4),
]
# Trigger visibility update when the slider value changes
num_images_slider.change(
fn=update_visible_images,
inputs=num_images_slider,
outputs=[condition_image_2, condition_image_3, condition_image_4],
)
run_btn.click(
fn=run_inference,
inputs=[
camera_mode,
condition_image_1,
condition_image_2,
condition_image_3,
condition_image_4,
input_camera_pose_format,
model_camera_pose_format,
rotation_x,
rotation_y,
rotation_z,
translation_x,
translation_y,
translation_z,
trajectory_extension_factor,
cfg_scale,
extra_cfg_scale,
sample_steps,
num_images_slider,
spherical_angle_x,
spherical_angle_y,
spherical_radius,
random_seed,
],
outputs=output_video,
)
demo.launch(share=True, server_name="0.0.0.0")
num_frames: &num_frames 16
resolution: &resolution [576, 1024]
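# The anchors above are re-used later in this config via *num_frames
# (e.g. unet_config.params.temporal_length and image_proj_model_config.params.video_length).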
model:
base_learning_rate: 1.0e-5
scale_lr: false
target: core.models.diffusion.DualStreamMultiViewDiffusionModel
params:
use_task_embedding: false
ray_as_image: false
apply_condition_mask_in_training_loss: true
separate_noise_and_condition: true
condition_padding_with_anchor: false
use_ray_decoder_loss_high_frequency_isolation: false
train_with_multi_view_feature_alignment: true
use_text_cross_attention_condition: false
linear_start: 0.00085
linear_end: 0.012
num_time_steps_cond: 1
log_every_t: 200
time_steps: 1000
data_key_images: combined_images
data_key_rays: combined_rays
data_key_text_condition: caption
cond_stage_trainable: false
image_size: [72, 128]
channels: 10
monitor: global_step
scale_by_std: false
scale_factor: 0.18215
use_dynamic_rescale: true
base_scale: 0.3
use_ema: false
uncond_prob: 0.05
uncond_type: 'empty_seq'
use_camera_pose_query_transformer: false
random_cond: false
cond_concat: true
frame_mask: false
padding: true
per_frame_auto_encoding: true
parameterization: "v"
rescale_betas_zero_snr: true
use_noise_offset: false
scheduler_config:
target: utils.lr_scheduler.LambdaLRScheduler
interval: 'step'
frequency: 100
params:
start_step: 0
final_decay_ratio: 0.1
decay_steps: 100
bd_noise: false
unet_config:
target: core.modules.networks.unet_modules.UNetModel
params:
in_channels: 20
out_channels: 10
model_channels: 320
attention_resolutions:
- 4
- 2
- 1
num_res_blocks: 2
channel_mult:
- 1
- 2
- 4
- 4
dropout: 0.1
num_head_channels: 64
transformer_depth: 1
context_dim: 1024
use_linear: true
use_checkpoint: true
temporal_conv: true
temporal_attention: true
temporal_selfatt_only: true
use_relative_position: false
use_causal_attention: false
temporal_length: *num_frames
addition_attention: true
image_cross_attention: true
image_cross_attention_scale_learnable: true
default_fs: 3
fs_condition: false
use_spatial_temporal_attention: true
use_addition_ray_output_head: true
ray_channels: 6
use_lora_for_rays_in_output_blocks: false
use_task_embedding: false
use_ray_decoder: true
use_ray_decoder_residual: true
full_spatial_temporal_attention: true
enhance_multi_view_correspondence: false
camera_pose_condition: true
use_feature_alignment: true
first_stage_config:
target: core.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [1, 2, 4, 4]
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_img_config:
target: core.modules.encoders.condition.FrozenOpenCLIPImageEmbedderV2
params:
freeze: true
image_proj_model_config:
target: core.modules.encoders.resampler.Resampler
params:
dim: 1024
depth: 4
dim_head: 64
heads: 12
num_queries: 16
embedding_dim: 1280
output_dim: 1024
ff_mult: 4
video_length: *num_frames
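A minimal sketch (not part of the repository) of how a `target`/`params` config like the one above is typically consumed. It assumes OmegaConf is used for the YAML and that `utils.utils.instantiate_from_config` follows the usual latent-diffusion convention of resolving `target` and forwarding `params`:

from omegaconf import OmegaConf  # assumption: OmegaConf is the config loader
from utils.utils import instantiate_from_config

config = OmegaConf.load("configs/train.yaml")  # hypothetical config path
model = instantiate_from_config(config.model)  # builds DualStreamMultiViewDiffusionModel with `params`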
import torch.nn as nn
from utils.utils import instantiate_from_config
def disabled_train(self, mode=True):
"""Overwrite model.train with this function to make sure train/eval mode
does not change anymore."""
return self
def zero_module(module):
"""
Zero out the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().zero_()
return module
def scale_module(module, scale):
"""
Scale the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().mul_(scale)
return module
def conv_nd(dims, *args, **kwargs):
"""
Create a 1D, 2D, or 3D convolution module.
"""
if dims == 1:
return nn.Conv1d(*args, **kwargs)
elif dims == 2:
return nn.Conv2d(*args, **kwargs)
elif dims == 3:
return nn.Conv3d(*args, **kwargs)
raise ValueError(f"unsupported dimensions: {dims}")
def linear(*args, **kwargs):
"""
Create a linear module.
"""
return nn.Linear(*args, **kwargs)
def avg_pool_nd(dims, *args, **kwargs):
"""
Create a 1D, 2D, or 3D average pooling module.
"""
if dims == 1:
return nn.AvgPool1d(*args, **kwargs)
elif dims == 2:
return nn.AvgPool2d(*args, **kwargs)
elif dims == 3:
return nn.AvgPool3d(*args, **kwargs)
raise ValueError(f"unsupported dimensions: {dims}")
def nonlinearity(type="silu"):
    if type == "silu":
        return nn.SiLU()
    elif type == "leaky_relu":
        return nn.LeakyReLU()
    raise ValueError(f"unsupported nonlinearity type: {type}")
class GroupNormSpecific(nn.GroupNorm):
def forward(self, x):
return super().forward(x.float()).type(x.dtype)
def normalization(channels, num_groups=32):
"""
Make a standard normalization layer.
:param channels: number of input channels.
:param num_groups: number of groups.
:return: an nn.Module for normalization.
"""
return GroupNormSpecific(num_groups, channels)
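# Hedged illustration (not part of the original module): these helpers typically compose
# into a norm -> nonlinearity -> conv block, with the final conv zero-initialized as is
# common in diffusion UNets. This function is a hypothetical example and is not used
# elsewhere in this file.
def example_norm_act_conv(dims=2, channels=64):
    return nn.Sequential(
        normalization(channels),
        nonlinearity("silu"),
        zero_module(conv_nd(dims, channels, channels, 3, padding=1)),
    )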
class HybridConditioner(nn.Module):
def __init__(self, c_concat_config, c_crossattn_config):
super().__init__()
self.concat_conditioner = instantiate_from_config(c_concat_config)
self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
def forward(self, c_concat, c_crossattn):
c_concat = self.concat_conditioner(c_concat)
c_crossattn = self.crossattn_conditioner(c_crossattn)
return {"c_concat": [c_concat], "c_crossattn": [c_crossattn]}
import math
from inspect import isfunction
import torch
import torch.distributed as dist
from torch import nn
def gather_data(data, return_np=True):
"""gather data from multiple processes to one list"""
data_list = [torch.zeros_like(data) for _ in range(dist.get_world_size())]
dist.all_gather(data_list, data) # gather not supported with NCCL
if return_np:
data_list = [data.cpu().numpy() for data in data_list]
return data_list
def autocast(f):
def do_autocast(*args, **kwargs):
with torch.cuda.amp.autocast(
enabled=True,
dtype=torch.get_autocast_gpu_dtype(),
cache_enabled=torch.is_autocast_cache_enabled(),
):
return f(*args, **kwargs)
return do_autocast
def extract_into_tensor(a, t, x_shape):
b, *_ = t.shape
out = a.gather(-1, t)
return out.reshape(b, *((1,) * (len(x_shape) - 1)))
def noise_like(shape, device, repeat=False):
def repeat_noise():
return torch.randn((1, *shape[1:]), device=device).repeat(
shape[0], *((1,) * (len(shape) - 1))
)
def noise():
return torch.randn(shape, device=device)
return repeat_noise() if repeat else noise()
def default(val, d):
if exists(val):
return val
return d() if isfunction(d) else d
def exists(val):
return val is not None
def identity(*args, **kwargs):
return nn.Identity()
def uniq(arr):
return {el: True for el in arr}.keys()
def mean_flat(tensor):
"""
Take the mean over all non-batch dimensions.
"""
return tensor.mean(dim=list(range(1, len(tensor.shape))))
def ismap(x):
if not isinstance(x, torch.Tensor):
return False
return (len(x.shape) == 4) and (x.shape[1] > 3)
def isimage(x):
if not isinstance(x, torch.Tensor):
return False
return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)
def max_neg_value(t):
return -torch.finfo(t.dtype).max
def shape_to_str(x):
shape_str = "x".join([str(x) for x in x.shape])
return shape_str
def init_(tensor):
dim = tensor.shape[-1]
std = 1 / math.sqrt(dim)
tensor.uniform_(-std, std)
return tensor
# USE_DEEP_SPEED_CHECKPOINTING = False
# if USE_DEEP_SPEED_CHECKPOINTING:
# import deepspeed
#
# _gradient_checkpoint_function = deepspeed.checkpointing.checkpoint
# else:
_gradient_checkpoint_function = torch.utils.checkpoint.checkpoint
def gradient_checkpoint(func, inputs, params, flag):
"""
Evaluate a function without caching intermediate activations, allowing for
reduced memory at the expense of extra compute in the backward pass.
:param func: the function to evaluate.
:param inputs: the argument sequence to pass to `func`.
:param params: a sequence of parameters `func` depends on but does not
explicitly take as arguments.
:param flag: if False, disable gradient checkpointing.
"""
if flag:
# args = tuple(inputs) + tuple(params)
# return CheckpointFunction.apply(func, len(inputs), *args)
if isinstance(inputs, tuple):
return _gradient_checkpoint_function(func, *inputs, use_reentrant=False)
else:
return _gradient_checkpoint_function(func, inputs, use_reentrant=False)
else:
return func(*inputs)
class CheckpointFunction(torch.autograd.Function):
@staticmethod
@torch.cuda.amp.custom_fwd
def forward(ctx, run_function, length, *args):
ctx.run_function = run_function
ctx.input_tensors = list(args[:length])
ctx.input_params = list(args[length:])
with torch.no_grad():
output_tensors = ctx.run_function(*ctx.input_tensors)
return output_tensors
@staticmethod
@torch.cuda.amp.custom_bwd # add this
def backward(ctx, *output_grads):
"""
for x in ctx.input_tensors:
if isinstance(x, int):
print('-----------------', ctx.run_function)
"""
ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
with torch.enable_grad():
# Fixes a bug where the first op in run_function modifies the
# Tensor storage in place, which is not allowed for detach()'d
# Tensors.
shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
output_tensors = ctx.run_function(*shallow_copies)
input_grads = torch.autograd.grad(
output_tensors,
ctx.input_tensors + ctx.input_params,
output_grads,
allow_unused=True,
)
del ctx.input_tensors
del ctx.input_params
del output_tensors
return (None, None) + input_grads
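# Hedged usage sketch (illustrative only, not part of the original module): wrapping a
# small block with gradient_checkpoint so its activations are recomputed during the
# backward pass instead of being stored. The block and input here are hypothetical.
if __name__ == "__main__":
    block = nn.Sequential(nn.Linear(8, 8), nn.SiLU(), nn.Linear(8, 8))
    x = torch.randn(4, 8, requires_grad=True)
    out = gradient_checkpoint(block, (x,), tuple(block.parameters()), flag=True)
    out.sum().backward()
    print(x.grad.shape)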