<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Shap-E

The Shap-E model was proposed in [Shap-E: Generating Conditional 3D Implicit Functions](https://huggingface.co/papers/2305.02463) by Alex Nichol and Heewoo Jun from [OpenAI](https://github.com/openai).

The abstract from the paper is:

*We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space.*

The original codebase can be found at [openai/shap-e](https://github.com/openai/shap-e).
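
Below is a minimal text-to-3D sketch with [`ShapEPipeline`]. The `openai/shap-e` checkpoint name, the prompt, and the generation settings are illustrative choices rather than values prescribed by this page; the call returns rendered view frames that can be assembled into a GIF:

```py
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif

# Load the text-to-3D pipeline in half precision and move it to the GPU.
pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")

# Generate a 3D asset from a prompt; the output is a list of rendered view frames.
frames = pipe(
    "a shark",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,
).images[0]

# Assemble the rendered frames into a turntable-style GIF.
export_to_gif(frames, "shark_3d.gif")
```

The pipeline can also return mesh outputs (`output_type="mesh"`), which can then be saved with `export_to_ply` from `diffusers.utils`.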

<Tip>

See the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>
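
As a rough sketch of that pattern (the checkpoint names and the shared component below are only illustrative), an already-instantiated module can be passed to a second pipeline's `from_pretrained` call so it is not loaded from disk a second time:

```py
import torch
from diffusers import ShapEPipeline, ShapEImg2ImgPipeline

# Load the text-to-3D pipeline once.
pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16)

# Reuse an already-loaded component by handing it to the second pipeline's
# from_pretrained call; any compatible module can be shared the same way.
img2img_pipe = ShapEImg2ImgPipeline.from_pretrained(
    "openai/shap-e-img2img",
    scheduler=pipe.scheduler,
    torch_dtype=torch.float16,
)
```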

## ShapEPipeline
[[autodoc]] ShapEPipeline
	- all
	- __call__

## ShapEImg2ImgPipeline
[[autodoc]] ShapEImg2ImgPipeline
	- all
	- __call__
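
A hedged sketch of image-conditioned generation follows; the checkpoint name, the placeholder image path, and the generation settings are illustrative:

```py
import torch
from diffusers import ShapEImg2ImgPipeline
from diffusers.utils import export_to_gif, load_image

# Load the image-to-3D pipeline in half precision and move it to the GPU.
pipe = ShapEImg2ImgPipeline.from_pretrained(
    "openai/shap-e-img2img", torch_dtype=torch.float16
).to("cuda")

# Any image of a single object works as conditioning; the path is a placeholder.
image = load_image("path/to/object.png")

# Generate a 3D asset conditioned on the image and render its view frames.
frames = pipe(
    image,
    guidance_scale=3.0,
    num_inference_steps=64,
    frame_size=256,
).images[0]

export_to_gif(frames, "object_3d.gif")
```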

## ShapEPipelineOutput
[[autodoc]] pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput