Commit d8362053 authored by sangwz

Adaptation and optimization for the HIP environment

parent f34104cf
Pipeline #804 failed in 0 seconds
<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/pytorch3dlogo.png" width="900"/> ## 简介
[![CircleCI](https://circleci.com/gh/facebookresearch/pytorch3d.svg?style=svg)](https://circleci.com/gh/facebookresearch/pytorch3d) **PyTorch3D** 是一个用于深度学习中处理**3D数据**的库。它提供了一系列功能,方便处理和操作三维数据,
[![Anaconda-Server Badge](https://anaconda.org/pytorch3d/pytorch3d/badges/version.svg)](https://anaconda.org/pytorch3d/pytorch3d)
# Introduction ## 环境依赖
PyTorch3D provides efficient, reusable components for 3D Computer Vision research with [PyTorch](https://pytorch.org). 环境以及版本依赖:
Key features include: * Python 3.8,3.9 or 3.10
- Data structure for storing and manipulating triangle meshes - dtk-23.10
- Efficient operations on triangle meshes (projective transformations, graph convolution, sampling, loss functions) - pytorch-2.1
- A differentiable mesh renderer
- Implicitron, see [its README](projects/implicitron_trainer), a framework for new-view synthesis via implicit representations. ([blog post](https://ai.facebook.com/blog/implicitron-a-new-modular-extensible-framework-for-neural-implicit-representations-in-pytorch3d/))
PyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. ## 安装
For this reason, all operators in PyTorch3D:
- Are implemented using PyTorch tensors ### 环境准备
- Can handle minibatches of hetereogenous data
- Can be differentiated
- Can utilize GPUs for acceleration
Within FAIR, PyTorch3D has been used to power research projects such as [Mesh R-CNN](https://arxiv.org/abs/1906.02739). *[开发者社区](https://developer.hpccube.com/tool/#sdk) DCU Toolkit 中下载 DTK-23.10 解压至 /opt/ 路径下,并建立软链接
See our [blog post](https://ai.facebook.com/blog/-introducing-pytorch3d-an-open-source-library-for-3d-deep-learning/) to see more demos and learn about PyTorch3D. ```plaintext
cd /opt && ln -s dtk-23.10 dtk
```
## Installation * 在光合[光合开发者社区](https://developer.hpccube.com/tool/#sdk) AI 生态包中获取对应的 pytorch Release 版本(需对应 DCU Toolkit 版本与 python 版本)
For detailed instructions refer to [INSTALL.md](INSTALL.md). ```shell
wget https://cancon.hpccube.com:65024/directlink/4/pytorch/dtk23.10/torch-2.1.0a0+git793d2b5.abi0.dtk2310-cp38-cp38-manylinux2014_x86_64.whl
## License python3 -m pip install torch-2.1xxxx.whl
```
PyTorch3D is released under the [BSD License](LICENSE). * 导入环境变量以及安装必要依赖库
## Tutorials ```shell
source /opt/dtk/env.sh
Get started with PyTorch3D by trying one of the tutorial notebooks. pip3 install wheel -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn
```
|<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/dolphin_deform.gif" width="310"/>|<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/bundle_adjust.gif" width="310"/>| ### pip安装
|:-----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|
| [Deform a sphere mesh to dolphin](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb)| [Bundle adjustment](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/bundle_adjustment.ipynb) |
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/render_textured_mesh.gif" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/camera_position_teapot.gif" width="310" height="310"/> 可以在光合[光合开发者社区](https://developer.hpccube.com/tool/#sdk) AI 生态包中获取最新的 `pytorch3d` Release 版本.
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Render textured meshes](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_textured_meshes.ipynb)| [Camera position optimization](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/camera_position_optimization_with_differentiable_rendering.ipynb)|
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/pointcloud_render.png" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/cow_deform.gif" width="310" height="310"/> ```shell
|:------------------------------------------------------------:|:--------------------------------------------------:| wget http://10.6.10.68:8000/customized/pytorch3d/23.10/pytorch3d-0.7.6-cp38-cp38-linux_x86_64.whl
| [Render textured pointclouds](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_colored_points.ipynb)| [Fit a mesh with texture](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/fit_textured_mesh.ipynb)| python -m pip install pytorch3d-0.7.6-cp38-cp38-linux_x86_64.whl.whl
### Building from source

* Clone the code:

```shell
git clone http://developer.hpccube.com/codes/aicomponent/pytorch3d.git
```

* For GPU support, set the environment variable `FORCE_CUDA` to 1:

```shell
export FORCE_CUDA=1
```

* Run the build:

```shell
cd path/to/pytorch3d
pip install -e .
```

* If the build fails, refer to the notes below:
  * If the error reports missing `intel-mkl` libraries, install `intel-mkl`:

    ```shell
    yum-config-manager --add-repo https://yum.repos.intel.com/mkl/setup/intel-mkl.repo
    yum install intel-mkl-2020.0-088 -y --nogpgcheck
    # Add the library path to the environment
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/mkl/lib/intel64
    ```

  * If `magma` is missing, install the corresponding library:

    ```shell
    # The default dtk installation path is /opt/dtk
    cd /opt/dtk
    wget http://10.6.10.68:8000/debug/pytorch/third_party/magma_v2.7.2-hip_nfs3.2_DTK23.10_intel-2020.1.217_07Oct2023.tar.gz
    tar -zxf magma_v2.7.2-hip_nfs3.2_DTK23.10_intel-2020.1.217_07Oct2023.tar.gz
    mv magma_v2.7.2-hip_nfs3.2_DTK23.10_intel-2020.1.217_07Oct2023 magma
    cd magma/lib/
    # Add to the environment variables
    export LD_LIBRARY_PATH=${ROCM_PATH}/magma/lib:$LD_LIBRARY_PATH
    ```
<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/pytorch3dlogo.png" width="900"/>
[![CircleCI](https://circleci.com/gh/facebookresearch/pytorch3d.svg?style=svg)](https://circleci.com/gh/facebookresearch/pytorch3d)
[![Anaconda-Server Badge](https://anaconda.org/pytorch3d/pytorch3d/badges/version.svg)](https://anaconda.org/pytorch3d/pytorch3d)
# Introduction
PyTorch3D provides efficient, reusable components for 3D Computer Vision research with [PyTorch](https://pytorch.org).
Key features include:
- Data structure for storing and manipulating triangle meshes
- Efficient operations on triangle meshes (projective transformations, graph convolution, sampling, loss functions)
- A differentiable mesh renderer
- Implicitron, a framework for new-view synthesis via implicit representations; see [its README](projects/implicitron_trainer) and the [blog post](https://ai.facebook.com/blog/implicitron-a-new-modular-extensible-framework-for-neural-implicit-representations-in-pytorch3d/)
PyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data.
For this reason, all operators in PyTorch3D:
- Are implemented using PyTorch tensors
- Can handle minibatches of heterogeneous data
- Can be differentiated
- Can utilize GPUs for acceleration
Within FAIR, PyTorch3D has been used to power research projects such as [Mesh R-CNN](https://arxiv.org/abs/1906.02739).
See our [blog post](https://ai.facebook.com/blog/-introducing-pytorch3d-an-open-source-library-for-3d-deep-learning/) to see more demos and learn about PyTorch3D.
## Installation
For detailed instructions refer to [INSTALL.md](INSTALL.md).
## License
PyTorch3D is released under the [BSD License](LICENSE).
## Tutorials
Get started with PyTorch3D by trying one of the tutorial notebooks.
|<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/dolphin_deform.gif" width="310"/>|<img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/bundle_adjust.gif" width="310"/>|
|:-----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|
| [Deform a sphere mesh to dolphin](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/deform_source_mesh_to_target_mesh.ipynb)| [Bundle adjustment](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/bundle_adjustment.ipynb) |
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/render_textured_mesh.gif" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/camera_position_teapot.gif" width="310" height="310"/>
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Render textured meshes](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_textured_meshes.ipynb)| [Camera position optimization](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/camera_position_optimization_with_differentiable_rendering.ipynb)|
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/pointcloud_render.png" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/cow_deform.gif" width="310" height="310"/>
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Render textured pointclouds](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_colored_points.ipynb)| [Fit a mesh with texture](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/fit_textured_mesh.ipynb)|
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/densepose_render.png" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/shapenet_render.png" width="310" height="310"/>
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Render DensePose data](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/render_densepose.ipynb)| [Load & Render ShapeNet data](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/dataloaders_ShapeNetCore_R2N2.ipynb)|
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/fit_textured_volume.gif" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/fit_nerf.gif" width="310" height="310"/>
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Fit Textured Volume](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/fit_textured_volume.ipynb)| [Fit A Simple Neural Radiance Field](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/fit_simple_neural_radiance_field.ipynb)|
| <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/fit_textured_volume.gif" width="310"/> | <img src="https://raw.githubusercontent.com/facebookresearch/pytorch3d/main/.github/implicitron_config.gif" width="310" height="310"/>
|:------------------------------------------------------------:|:--------------------------------------------------:|
| [Fit Textured Volume in Implicitron](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/implicitron_volumes.ipynb)| [Implicitron Config System](https://github.com/facebookresearch/pytorch3d/blob/main/docs/tutorials/implicitron_config_system.ipynb)|
## Documentation
Learn more about the API by reading the PyTorch3D [documentation](https://pytorch3d.readthedocs.org/).
We also have deep dive notes on several API components:
- [Heterogeneous Batching](https://github.com/facebookresearch/pytorch3d/tree/main/docs/notes/batching.md)
- [Mesh IO](https://github.com/facebookresearch/pytorch3d/tree/main/docs/notes/meshes_io.md)
- [Differentiable Rendering](https://github.com/facebookresearch/pytorch3d/tree/main/docs/notes/renderer_getting_started.md)
### Overview Video
We have created a short (~14 min) video tutorial providing an overview of the PyTorch3D codebase including several code examples. Click on the image below to watch the video on YouTube:
<a href="http://www.youtube.com/watch?v=Pph1r-x9nyY"><img src="http://img.youtube.com/vi/Pph1r-x9nyY/0.jpg" height="225" ></a>
## Development
We welcome new contributions to PyTorch3D and we will be actively maintaining this library! Please refer to [CONTRIBUTING.md](./.github/CONTRIBUTING.md) for full instructions on how to run the code, tests and linter, and submit your pull requests.
## Development and Compatibility
- `main` branch: actively developed, with no guarantees; anything can break at any time
- REMARK: this includes nightly builds which are built from `main`
- HINT: the commit history can help locate regressions or changes
- backward-compatibility between releases: no guarantee. Best efforts to communicate breaking changes and facilitate migration of code or data (incl. models).
## Contributors
PyTorch3D is written and maintained by the Facebook AI Research Computer Vision Team.
In alphabetical order:
* Amitav Baruah
* Steve Branson
* Krzysztof Chalupka
* Jiali Duan
* Luya Gao
* Georgia Gkioxari
* Taylor Gordon
* Justin Johnson
* Patrick Labatut
* Christoph Lassner
* Wan-Yen Lo
* David Novotny
* Nikhila Ravi
* Jeremy Reizenstein
* Dave Schnizlein
* Roman Shapovalov
* Olivia Wiles
## Citation
If you find PyTorch3D useful in your research, please cite our tech report:
```bibtex
@article{ravi2020pytorch3d,
author = {Nikhila Ravi and Jeremy Reizenstein and David Novotny and Taylor Gordon
and Wan-Yen Lo and Justin Johnson and Georgia Gkioxari},
title = {Accelerating 3D Deep Learning with PyTorch3D},
journal = {arXiv:2007.08501},
year = {2020},
}
```
If you are using the pulsar backend for sphere-rendering (the `PulsarPointRenderer` or `pytorch3d.renderer.points.pulsar.Renderer`), please cite the tech report:
```bibtex
@article{lassner2020pulsar,
author = {Christoph Lassner and Michael Zollh\"ofer},
title = {Pulsar: Efficient Sphere-based Neural Rendering},
journal = {arXiv:2004.07484},
year = {2020},
}
```
## News
Please see below for a timeline of the codebase updates in reverse chronological order. We are sharing updates on the releases as well as research projects which are built with PyTorch3D. The changelogs for the releases are available under [`Releases`](https://github.com/facebookresearch/pytorch3d/releases), and the builds can be installed using `conda` as per the instructions in [INSTALL.md](INSTALL.md).
**[Oct 31st 2023]:** PyTorch3D [v0.7.5](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.5) released.
**[May 10th 2023]:** PyTorch3D [v0.7.4](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.4) released.
**[Apr 5th 2023]:** PyTorch3D [v0.7.3](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.3) released.
**[Dec 19th 2022]:** PyTorch3D [v0.7.2](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.2) released.
**[Oct 23rd 2022]:** PyTorch3D [v0.7.1](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.1) released.
**[Aug 10th 2022]:** PyTorch3D [v0.7.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.7.0) released with Implicitron and MeshRasterizerOpenGL.
**[Apr 28th 2022]:** PyTorch3D [v0.6.2](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.6.2) released
**[Dec 16th 2021]:** PyTorch3D [v0.6.1](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.6.1) released
**[Oct 6th 2021]:** PyTorch3D [v0.6.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.6.0) released
**[Aug 5th 2021]:** PyTorch3D [v0.5.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.5.0) released
**[Feb 9th 2021]:** PyTorch3D [v0.4.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.4.0) released with support for implicit functions, volume rendering and a [reimplementation of NeRF](https://github.com/facebookresearch/pytorch3d/tree/main/projects/nerf).
**[November 2nd 2020]:** PyTorch3D [v0.3.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.3.0) released, integrating the pulsar backend.
**[Aug 28th 2020]:** PyTorch3D [v0.2.5](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.2.5) released
**[July 17th 2020]:** PyTorch3D tech report published on ArXiv: https://arxiv.org/abs/2007.08501
**[April 24th 2020]:** PyTorch3D [v0.2.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.2.0) released
**[March 25th 2020]:** [SynSin](https://arxiv.org/abs/1912.08804) codebase released using PyTorch3D: https://github.com/facebookresearch/synsin
**[March 8th 2020]:** PyTorch3D [v0.1.1](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.1.1) bug fix release
**[Jan 23rd 2020]:** PyTorch3D [v0.1.0](https://github.com/facebookresearch/pytorch3d/releases/tag/v0.1.0) released. [Mesh R-CNN](https://arxiv.org/abs/1906.02739) codebase released: https://github.com/facebookresearch/meshrcnn
```diff
@@ -68,6 +68,7 @@ DEVICE T
 WARP_CUMSUM(const cg::coalesced_group& group, const uint& mask, const T& base) {
   T ret = base;
   T shfl_val;
+#ifndef __HIP_PLATFORM_HCC__
   shfl_val = __shfl_down_sync(mask, ret, 1u); // Deactivate the rightmost lane.
   ret += (group.thread_rank() < 31) * shfl_val;
   shfl_val = __shfl_down_sync(mask, ret, 2u);
@@ -78,6 +79,20 @@ WARP_CUMSUM(const cg::coalesced_group& group, const uint& mask, const T& base) {
   ret += (group.thread_rank() < 24) * shfl_val;
   shfl_val = __shfl_down_sync(mask, ret, 16u); // ...16
   ret += (group.thread_rank() < 16) * shfl_val;
+#else
+  shfl_val = __shfl_down(ret, 1u); // Deactivate the rightmost lane.
+  ret += (group.thread_rank() < 63) * shfl_val;
+  shfl_val = __shfl_down(ret, 2u);
+  ret += (group.thread_rank() < 62) * shfl_val;
+  shfl_val = __shfl_down(ret, 4u); // ...4
+  ret += (group.thread_rank() < 60) * shfl_val;
+  shfl_val = __shfl_down(ret, 8u); // ...8
+  ret += (group.thread_rank() < 56) * shfl_val;
+  shfl_val = __shfl_down(ret, 16u); // ...16
+  ret += (group.thread_rank() < 48) * shfl_val;
+  shfl_val = __shfl_down(ret, 32u); // ...32
+  ret += (group.thread_rank() < 32) * shfl_val;
+#endif
   return ret;
 }
```
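The guarded ladder above reflects two platform differences: CUDA warps are 32 lanes wide and, since CUDA 9, shuffles must use the `_sync` variants with an explicit participation mask, while AMD wavefronts under HIP are 64 lanes wide and expose the mask-free `__shfl_down`. The `(group.thread_rank() < N) * shfl_val` factor zeroes contributions from lanes whose shuffle source falls off the end of the warp, which is why the bounds change from 31/62/60/... on the pattern 31, 30, 28, 24, 16 to 63, 62, 60, 56, 48, 32, and why the HIP ladder gains an extra offset-32 step. A shim like the following can hide the difference at call sites; this is an illustrative sketch, with `PULSAR_WARP_SIZE` and `PULSAR_SHFL_DOWN` as made-up names rather than code from this commit:

```cuda
// Hypothetical portability shim (names not in the codebase).
#ifdef __HIP_PLATFORM_HCC__
#define PULSAR_WARP_SIZE 64 // AMD wavefront width
// HIP's __shfl_down takes no mask: wavefront lanes execute in lockstep.
#define PULSAR_SHFL_DOWN(mask, var, delta) __shfl_down((var), (delta))
#else
#define PULSAR_WARP_SIZE 32 // NVIDIA warp width
// CUDA 9+ requires the _sync shuffle variants with a lane mask.
#define PULSAR_SHFL_DOWN(mask, var, delta) __shfl_down_sync((mask), (var), (delta))
#endif
```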
```diff
@@ -85,11 +100,20 @@ template <typename T>
 DEVICE T
 WARP_MAX(const cg::coalesced_group& group, const uint& mask, const T& base) {
   T ret = base;
+#ifndef __HIP_PLATFORM_HCC__
   ret = max(ret, __shfl_down_sync(mask, ret, 16u));
   ret = max(ret, __shfl_down_sync(mask, ret, 8u));
   ret = max(ret, __shfl_down_sync(mask, ret, 4u));
   ret = max(ret, __shfl_down_sync(mask, ret, 2u));
   ret = max(ret, __shfl_down_sync(mask, ret, 1u));
+#else
+  ret = max(ret, __shfl_down(ret, 32u));
+  ret = max(ret, __shfl_down(ret, 16u));
+  ret = max(ret, __shfl_down(ret, 8u));
+  ret = max(ret, __shfl_down(ret, 4u));
+  ret = max(ret, __shfl_down(ret, 2u));
+  ret = max(ret, __shfl_down(ret, 1u));
+#endif
   return ret;
 }
@@ -97,11 +121,20 @@ template <typename T>
 DEVICE T
 WARP_SUM(const cg::coalesced_group& group, const uint& mask, const T& base) {
   T ret = base;
+#ifndef __HIP_PLATFORM_HCC__
   ret = ret + __shfl_down_sync(mask, ret, 16u);
   ret = ret + __shfl_down_sync(mask, ret, 8u);
   ret = ret + __shfl_down_sync(mask, ret, 4u);
   ret = ret + __shfl_down_sync(mask, ret, 2u);
   ret = ret + __shfl_down_sync(mask, ret, 1u);
+#else
+  ret = ret + __shfl_down(ret, 32u);
+  ret = ret + __shfl_down(ret, 16u);
+  ret = ret + __shfl_down(ret, 8u);
+  ret = ret + __shfl_down(ret, 4u);
+  ret = ret + __shfl_down(ret, 2u);
+  ret = ret + __shfl_down(ret, 1u);
+#endif
   return ret;
 }
```
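`WARP_MAX` and `WARP_SUM` get the same treatment: the unrolled shuffle ladder is duplicated with mask-free `__shfl_down` calls plus one extra offset-32 step for the 64-lane wavefront. With the shim sketched earlier, both unrolled variants collapse into a single width-agnostic loop; `warp_reduce_sum` below is a hypothetical helper, not the committed code:

```cuda
// Width-agnostic tree reduction over one warp/wavefront (sketch).
// Offsets halve from PULSAR_WARP_SIZE / 2 down to 1, i.e. 16,8,4,2,1 on
// CUDA and 32,16,8,4,2,1 on HIP; lane 0 ends up holding the full sum.
template <typename T>
__device__ T warp_reduce_sum(const unsigned int mask, T val) {
  for (int offset = PULSAR_WARP_SIZE / 2; offset > 0; offset >>= 1) {
    val += PULSAR_SHFL_DOWN(mask, val, offset);
  }
  return val;
}
```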
```diff
@@ -142,6 +175,7 @@ INLINE DEVICE float3 WARP_SUM_FLOAT3(
 #define FMA(x, y, z) __fmaf_rn((x), (y), (z))
 #define I2F(a) __int2float_rn(a)
 #define FRCP(x) __frcp_rn(x)
+#ifndef __HIP_PLATFORM_HCC__
 __device__ static float atomicMax(float* address, float val) {
   int* address_as_i = (int*)address;
   int old = *address_as_i, assumed;
@@ -166,6 +200,7 @@ __device__ static float atomicMin(float* address, float val) {
   } while (assumed != old);
   return __int_as_float(old);
 }
+#endif
 #define DMAX(a, b) FMAX(a, b)
 #define DMIN(a, b) FMIN(a, b)
 #define DSQRT(a) sqrt(a)
```
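CUDA provides no native `atomicMax`/`atomicMin` overloads for `float`, so this file emulates them with a compare-and-swap loop over the value's bit pattern; under HIP the definitions are compiled out, presumably because the HIP headers already supply float overloads and redefining them would clash. The diff truncates the loop body, so for reference here is the canonical CAS idiom in full, renamed `atomic_max_float` to make clear it is a sketch rather than the committed symbol:

```cuda
// Canonical CAS emulation of atomic max for float (sketch; assumes no NaNs).
__device__ static float atomic_max_float(float* address, float val) {
  int* address_as_i = (int*)address;
  int old = *address_as_i, assumed;
  do {
    assumed = old;
    // Reinterpret the stored bits as float, take the max, and swap the
    // result back only if no other thread changed the value meanwhile.
    old = atomicCAS(
        address_as_i,
        assumed,
        __float_as_int(fmaxf(val, __int_as_float(assumed))));
  } while (assumed != old);
  return __int_as_float(old);
}
```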
```diff
@@ -20,6 +20,14 @@ IHD CamGradInfo::CamGradInfo() {
   pixel_dir_x = make_float3(0.f, 0.f, 0.f);
   pixel_dir_y = make_float3(0.f, 0.f, 0.f);
 }
+#ifdef __HIP_PLATFORM_HCC__
+IHD CamGradInfo::CamGradInfo(float val) {
+  cam_pos = make_float3(val, val, val);
+  pixel_0_0_center = make_float3(val, val, val);
+  pixel_dir_x = make_float3(val, val, val);
+  pixel_dir_y = make_float3(val, val, val);
+}
+#endif
 } // namespace pulsar
 #endif
```

```diff
@@ -64,6 +64,9 @@ inline bool operator==(const CamInfo& a, const CamInfo& b) {
 struct CamGradInfo {
   HOST DEVICE CamGradInfo();
+#ifdef __HIP_PLATFORM_HCC__
+  HOST DEVICE CamGradInfo(float val);
+#endif
   float3 cam_pos;
   float3 pixel_0_0_center;
   float3 pixel_dir_x;
@@ -72,6 +75,10 @@ struct CamGradInfo {
 // TODO: remove once https://github.com/NVlabs/cub/issues/172 is resolved.
 struct IntWrapper {
+#ifdef __HIP_PLATFORM_HCC__
+  HOST DEVICE IntWrapper() {}
+  HOST DEVICE explicit IntWrapper(int ival) : val{0} {}
+#endif
   int val;
 };
```
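On the HIP toolchain, NVIDIA's cub is replaced by hipCUB/rocPRIM, whose reduction paths apparently construct accumulator values from a scalar; that is the most plausible reason these aggregate types gain scalar constructors under the HIP guard, though the commit does not state the motivation. Note that `IntWrapper(int ival)` ignores `ival` and zero-initializes `val`, suggesting only the zero identity is ever requested. Generic code of the following shape is what such constructors unlock; `sum_with_identity` is a hypothetical name, and it assumes `T` defines `operator+`:

```cuda
// Sketch: generic reductions often materialize the identity element as T(0).
// Without CamGradInfo(float), instantiating this with T = CamGradInfo would
// fail to compile.
template <typename T>
__host__ __device__ T sum_with_identity(const T* data, int n) {
  T acc = T(0.f); // requires a constructor taking a scalar
  for (int i = 0; i < n; ++i) {
    acc = acc + data[i]; // requires operator+ for T
  }
  return acc;
}
```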
```diff
@@ -46,6 +46,7 @@ IHD float3 outer_product_sum(const float3& a) {
 }
 // TODO: put intrinsics here.
+#ifndef __HIP_PLATFORM_HCC__
 IHD float3 operator+(const float3& a, const float3& b) {
   return make_float3(a.x + b.x, a.y + b.y, a.z + b.z);
 }
@@ -93,6 +94,7 @@ IHD float3 operator*(const float3& a, const float3& b) {
 IHD float3 operator*(const float& a, const float3& b) {
   return b * a;
 }
+#endif
 INLINE DEVICE float length(const float3& v) {
   // TODO: benchmark what's faster.
```
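HIP ships its vector types (`float2`, `float3`, ...) with arithmetic operators already defined, so the project's own overloads become redefinitions under hipcc; the `#ifndef` bracket keeps them CUDA-only, and the same bracket is applied to the `float2` operators in a later hunk below. Here is a smoke test showing that the expressions remain valid on both platforms; this kernel is illustrative, not part of the codebase:

```cuda
// Compiles on both platforms: under hipcc it resolves to HIP's built-in
// vector operators, under nvcc to the guarded project-defined ones.
__global__ void float3_ops_smoke_test(float3* out) {
  const float3 a = make_float3(1.f, 2.f, 3.f);
  const float3 b = make_float3(4.f, 5.f, 6.f);
  *out = a + 2.f * b; // exercises operator+ and float * float3
}
```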
```diff
@@ -53,10 +53,17 @@ struct IntersectInfoMinMax {
     return b;
   }
   IntersectInfo result;
+#ifndef __HIP_PLATFORM_HCC__
   result.min.x = std::min<ushort>(a.min.x, b.min.x);
   result.min.y = std::min<ushort>(a.min.y, b.min.y);
   result.max.x = std::max<ushort>(a.max.x, b.max.x);
   result.max.y = std::max<ushort>(a.max.y, b.max.y);
+#else
+  result.min.x = std::min<ushort>(static_cast<ushort>(a.min.x), static_cast<ushort>(b.min.x));
+  result.min.y = std::min<ushort>(static_cast<ushort>(a.min.y), static_cast<ushort>(b.min.y));
+  result.max.x = std::max<ushort>(static_cast<ushort>(a.max.x), static_cast<ushort>(b.max.x));
+  result.max.y = std::max<ushort>(static_cast<ushort>(a.max.y), static_cast<ushort>(b.max.y));
+#endif
   return result;
 }
 };
```
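The explicit `static_cast<ushort>` on the HIP side most likely works around reference binding: `std::min<ushort>` takes its arguments as `const ushort&`, and elements of HIP's vector types (built on Clang's extended vector types) do not always bind to such references, whereas the cast materializes an ordinary temporary that does. The commit does not state the root cause, so treat this as an educated guess; `min_x` below is a hypothetical helper showing the pattern in isolation:

```cuda
#include <algorithm>

// Materialize plain ushort temporaries before handing them to std::min,
// sidestepping reference binding to vector-type elements (sketch).
__host__ __device__ inline ushort min_x(const ushort2& a, const ushort2& b) {
  return std::min<ushort>(static_cast<ushort>(a.x), static_cast<ushort>(b.x));
}
```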
```diff
@@ -18,6 +18,7 @@ const auto vEpsilon = 1e-8;
 // Common functions and operators for float2.
+#ifndef __HIP_PLATFORM_HCC__
 __device__ inline float2 operator-(const float2& a, const float2& b) {
   return make_float2(a.x - b.x, a.y - b.y);
 }
@@ -41,6 +42,7 @@ __device__ inline float2 operator*(const float2& a, const float2& b) {
 __device__ inline float2 operator*(const float a, const float2& b) {
   return make_float2(a * b.x, a * b.y);
 }
+#endif
 __device__ inline float FloatMin3(const float a, const float b, const float c) {
   return fminf(a, fminf(b, c));
```

```diff
@@ -23,37 +23,49 @@ WarpReduceMin(scalar_t* min_dists, int64_t* min_idxs, const size_t tid) {
     min_idxs[tid] = min_idxs[tid + 32];
     min_dists[tid] = min_dists[tid + 32];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   // s = 16
   if (min_dists[tid] > min_dists[tid + 16]) {
     min_idxs[tid] = min_idxs[tid + 16];
     min_dists[tid] = min_dists[tid + 16];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   // s = 8
   if (min_dists[tid] > min_dists[tid + 8]) {
     min_idxs[tid] = min_idxs[tid + 8];
     min_dists[tid] = min_dists[tid + 8];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   // s = 4
   if (min_dists[tid] > min_dists[tid + 4]) {
     min_idxs[tid] = min_idxs[tid + 4];
     min_dists[tid] = min_dists[tid + 4];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   // s = 2
   if (min_dists[tid] > min_dists[tid + 2]) {
     min_idxs[tid] = min_idxs[tid + 2];
     min_dists[tid] = min_dists[tid + 2];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   // s = 1
   if (min_dists[tid] > min_dists[tid + 1]) {
     min_idxs[tid] = min_idxs[tid + 1];
     min_dists[tid] = min_dists[tid + 1];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
 }
 template <typename scalar_t>
@@ -65,30 +77,42 @@ __device__ void WarpReduceMax(
     dists[tid] = dists[tid + 32];
     dists_idx[tid] = dists_idx[tid + 32];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 16]) {
     dists[tid] = dists[tid + 16];
     dists_idx[tid] = dists_idx[tid + 16];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 8]) {
     dists[tid] = dists[tid + 8];
     dists_idx[tid] = dists_idx[tid + 8];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 4]) {
     dists[tid] = dists[tid + 4];
     dists_idx[tid] = dists_idx[tid + 4];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 2]) {
     dists[tid] = dists[tid + 2];
     dists_idx[tid] = dists_idx[tid + 2];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
   if (dists[tid] < dists[tid + 1]) {
     dists[tid] = dists[tid + 1];
     dists_idx[tid] = dists_idx[tid + 1];
   }
+#ifndef __HIP_PLATFORM_HCC__
   __syncwarp();
+#endif
 }
```
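`__syncwarp()` is a CUDA-only primitive with no HIP counterpart; on AMD hardware the 64 lanes of a wavefront execute in lockstep, so the commit simply compiles the calls out instead of substituting anything. Guarding each call site is verbose; a single macro is an alternative sketch (`PORTABLE_SYNCWARP` is a made-up name, not the committed approach), with the caveat that on the lockstep path the shared-memory buffers should be declared `volatile` so the compiler does not cache values in registers between reduction steps:

```cuda
// Hypothetical portability macro for intra-warp synchronization.
#ifdef __HIP_PLATFORM_HCC__
#define PORTABLE_SYNCWARP() ((void)0) // wavefront lockstep: nothing to sync
#else
#define PORTABLE_SYNCWARP() __syncwarp()
#endif
```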