- 02 Oct, 2024 2 commits
-
-
Stanislav Pidhorskyi authored
Summary: Added another tutorial Reviewed By: una-dinosauria Differential Revision: D63753117 fbshipit-source-id: d1fc0f4ca85b97da341b22c3bda71adb787a7140
-
Stanislav Pidhorskyi authored
Summary: Fixing Automated checkup https://www.internalfb.com/intern/opensource/github/repo/978273877280726/checkup/ Reviewed By: una-dinosauria Differential Revision: D63753101 fbshipit-source-id: 550464583a0a81c79a87c351de0e0aae2041db5f
-
- 01 Oct, 2024 1 commit
-
-
Stanislav Pidhorskyi authored
Summary: Added sphinx docs and github workflow to build gh-pages Reviewed By: HapeMask Differential Revision: D63498319 fbshipit-source-id: 401aabb6dc1624a26c2c9231ab2a1e6b98458190
-
- 27 Sep, 2024 2 commits
-
-
Chris Klaiber authored
Summary: This unblocks building DRTK in a Docker build since BuildKit doesn't yet support GPUs: https://github.com/moby/buildkit/issues/1436 This works because CUDA is needed at run-time but not at build-time. When CUDA is present at build-time, the currently installed cards set which archs the extensions are built for. However, the archs are manually fixed in setup.py, so this commit also adds support for respecting TORCH_CUDA_ARCH_LIST which controls archs within torch.utils.cpp_extension.CUDAExtension Pull Request resolved: https://github.com/facebookresearch/DRTK/pull/5 Test Plan: Build without TORCH_CUDA_ARCH_LIST and observe using `ps` that compilation gets the flags specified in setup.py. Build with TORCH_CUDA_ARCH_LIST=Turing and TORCH_CUDA_ARCH_LIST=8.6 and observe using `ps` that compilation gets only the flags for the requested archs. Build within a Dockerfile using `docker-compose up --build`, which is using BuildKit, and observe that compilation succeeds and the library is usable when later run with CUDA GPUs available. Reviewed By: HapeMask Differential Revision: D63513797 Pulled By: podgorskiy fbshipit-source-id: 54ce6765ccc37317999f18690a73823eb074f08f
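The pattern the commit describes can be sketched as follows. This is a hypothetical, minimal sketch (not the repo's actual setup.py): the helper `cuda_arch_flags`, the `DEFAULT_ARCHS` list, and the exact flag strings are assumptions for illustration; `torch.utils.cpp_extension` itself honors `TORCH_CUDA_ARCH_LIST` when no explicit flags are pinned.

```python
import os

# Assumed fallback arch list for illustration only.
DEFAULT_ARCHS = ["7.0", "7.5", "8.0", "8.6"]


def cuda_arch_flags() -> list:
    """Return explicit -gencode flags only when no user override is present."""
    if os.environ.get("TORCH_CUDA_ARCH_LIST"):
        # Leave the flag list empty so torch.utils.cpp_extension.CUDAExtension
        # derives the arch flags from TORCH_CUDA_ARCH_LIST itself, instead of
        # this setup.py pinning them.
        return []
    flags = []
    for arch in DEFAULT_ARCHS:
        num = arch.replace(".", "")
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
    return flags
```

With this shape, `TORCH_CUDA_ARCH_LIST=8.6 pip install .` builds only for the requested arch, while an unset variable falls back to the hard-coded defaults.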
-
Stanislav Pidhorskyi authored
Summary: This is necessary if we want to run building docs in github workflow Differential Revision: D63497765 fbshipit-source-id: 018157c205a66584dd040882124588e499439893
-
- 26 Sep, 2024 6 commits
-
-
Stanislav Pidhorskyi authored
Summary: As title says. This is for the Sphinx documentation. Reviewed By: HapeMask Differential Revision: D63440496 fbshipit-source-id: 483fdfc6cbc14ce8f88e6d048553488f1a0f8ed3

-
Stanislav Pidhorskyi authored
Summary: Sequence is not TorchScript compatible, so it is replaced with a List. The original annotation `Optional[Sequence[str]]` was also not correct, as the implementation was actually allowing usage of the `str` type. Wrapped `project_points` with the decorator `torch.jit.ignore` as it violates TorchScript too much (uses sets, changes types of variables, aggressively mixes None with other types) Differential Revision: D63444533 fbshipit-source-id: 46604cbd239ed8b9051ad371779fb81106987dc5
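The distinction the commit relies on can be sketched like this. The function names `pick` and `project_points_like` are hypothetical stand-ins, not DRTK APIs: TorchScript accepts `List[str]` but not `Sequence[str]`, and a function too dynamic to script can be excluded with `torch.jit.ignore`.

```python
from typing import List, Optional

import torch


# TorchScript-compatible: Optional[List[str]] scripts cleanly, where
# Optional[Sequence[str]] would not.
@torch.jit.script
def pick(names: Optional[List[str]]) -> int:
    if names is None:
        return 0
    return len(names)


# A function that uses sets and duck typing (like project_points, per the
# commit) can be kept out of compilation; in eager mode it runs normally.
@torch.jit.ignore
def project_points_like(values) -> int:  # hypothetical stand-in
    seen = set()
    for v in values:
        seen.add(v)
    return len(seen)
```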
-
Stanislav Pidhorskyi authored
Summary: Make `rasterize_with_depth` and `transform_with_v_cam` accessible from the root drtk module as:

```
from drtk import rasterize_with_depth, transform_with_v_cam
```

instead of

```
from drtk.rasterize import rasterize_with_depth
from drtk.transform import transform_with_v_cam
```

This has greater importance for DGX and GitHub than for prod, since, because of how Buck is set up, it still won't work there. Reviewed By: una-dinosauria Differential Revision: D63440390 fbshipit-source-id: 7f6c8c16fb8ef5aa10bef3491fd59727f1000d0b
-
Stanislav Pidhorskyi authored
Differential Revision: D63440224 fbshipit-source-id: 0a3647fd710f2d980b22f1954e439fa27a1b4864
-
Stanislav Pidhorskyi authored
Summary: Got legal approval 🥳 Reviewed By: una-dinosauria Differential Revision: D63428775 fbshipit-source-id: 7568ef2861ef10c2bd0367a7195cbbedf96ec8be
-
Facebook GitHub Bot authored
The internal and external repositories are out of sync. This Pull Request attempts to bring them back in sync by patching the GitHub repository. Please carefully review this patch. You must disable ShipIt for your project in order to merge this pull request. DO NOT IMPORT this pull request. Instead, merge it directly on GitHub using the MERGE BUTTON. Re-enable ShipIt after merging.
-
- 06 Sep, 2024 1 commit
-
-
Patrick Snape authored
Summary: This isn't the real fix; it just moves the files into a folder we can exclude in ShipIt. See D62289505, where we exclude this directory from ShipIt. This project is exported to open source (GitHub), so we must not ship these files -- I didn't realize this when I wrote the tests. Reviewed By: jermenkoo Differential Revision: D62289557 fbshipit-source-id: 60afb00fd8a6aa54aabe12a8a3fcc161a65b0495
-
- 17 Aug, 2024 2 commits
-
-
Patrick Snape authored
Summary: Currently on master we have a bug: we undistort images into a pinhole frame but then optimize in Rosetta using fisheye. Undistorting to pinhole is totally fine, but then we should also be optimizing in pinhole in Rosetta. So this is the "other" version of the fix I did in D58979100. In D58979100 I "fixed" the cameras by defining the undistort method to be fisheye rather than pinhole. In this diff I did it the other way round: I've left the CameraOperator alone and fixed the distortion model we propagate after undistortion to be pinhole. Reviewed By: AlexRadionovMeta Differential Revision: D60927099 fbshipit-source-id: 316aa0d99396c49d61c31d743112f1aa0eef85cb
-
Patrick Snape authored
Summary: I wrote the unit test attached to triple check we were matching the perception cameras - but we were not. So I ported the code exactly and now we exactly match the perception cameras behavior. Looking at the code it seems to be a legit bug - and when I read the code my mental model says this may make a significant difference if the radial distortion parameters are high (which they are in BTL). However - Rosetta continues to be an enigma to me and it seems to hardly make a difference... Differential Revision: D60927097 fbshipit-source-id: 15e04b50634d8c236bfaf89fcf6f43aeff1ede7d
-
- 12 Aug, 2024 1 commit
-
-
Stanislav Pidhorskyi authored
Summary: Adds a `grid_scatter` op that is similar to `grid_sample`, but the grid points to the destination location instead of the source. `grid_scatter` is the dual of `grid_sample`: the forward of `grid_scatter` is the backward of `grid_sample`, and the backward of `grid_scatter` is the forward of `grid_sample` (with the exception of the gradient with respect to the grid), which is reflected in the reference implementation in `drtk/grid_scatter.py`.

```python
def grid_scatter(
    input: th.Tensor,
    grid: th.Tensor,
    output_height: int,
    output_width: int,
    mode: str = "bilinear",
    padding_mode: str = "border",
    align_corners: Optional[bool] = None,
) -> th.Tensor:
```

Where:
* `input` [N x C x H x W]: the input tensor whose values will be transferred to the result.
* `grid` [N x H x W x 2]: the grid tensor that points to the locations where the values from the input tensor should be copied to. The `W`, `H` sizes of the grid should match the corresponding sizes of the `input` tensor.
* `output_height`, `output_width`: size of the output, which will be [N x C x `output_height` x `output_width`]. In contrast to `grid_sample`, we can no longer rely on the sizes of the `grid` for this information.
* `mode`, `padding_mode`, `align_corners`: same as for `grid_sample`, but now for the reverse operation - splatting (or scattering). At the moment it does not support "nearest" mode, which is rarely needed; maybe it will be added later.

Ideally, we would also want to support autocast mode, where the `input` and output tensors are float16 while the `grid` is float32. This is not the case at the moment, but I'll add that later.

## Example usage

Let's assume that we loaded a mesh into `v, vi, vt, vti`, have defined `image_width, image_height`, `cam_pos`, `cam_rot`, `focal`, `princpt`, and computed normals for the mesh as `normals`. We also define a shading function, e.g.:

```python
def shade(
    vn_img: th.Tensor,
    light_dir: th.Tensor,
    ambient_intensity: float = 1.0,
    direct_intensity: float = 1.0,
    shadow_img: Optional[th.Tensor] = None,
):
    ambient = (vn_img[:, 1:2] * 0.5 + 0.5) * th.as_tensor([0.45, 0.5, 0.7]).cuda()[
        None, :, None, None
    ]
    direct = (
        th.sum(vn_img.mul(thf.normalize(light_dir, dim=1)), dim=1, keepdim=True).clamp(
            min=0.0
        )
        * th.as_tensor([0.65, 0.6, 0.5]).cuda()[None, :, None, None]
    )
    if shadow_img is not None:
        direct = direct * shadow_img
    return th.pow(ambient * ambient_intensity + direct * direct_intensity, 1 / 2.2)
```

And we can render the image as:

```python
v_pix = transform(v, cam_pos, cam_rot, focal, princpt)
index_img = rasterize(v_pix, vi, image_height, image_width)
_, bary_img = render(v_pix, vi, index_img)

# mask image
mask: th.Tensor = (index_img != -1)[:, None]

# compute vt image
vt_img = interpolate(vt.mul(2.0).sub(1.0)[None], vti, index_img, bary_img)

# compute normals
vn_img = interpolate(normals, vi, index_img, bary_img)

diffuse = (
    shade(vn_img, th.as_tensor([0.5, 0.5, 0.0]).cuda()[None, :, None, None]) * mask
)
```

{F1801805545}

## Shadow mapping

We can use `grid_scatter` to compute mesh visibility from the camera view:

```python
texel_weight = grid_scatter(
    mask.float(),
    vt_img.permute(0, 2, 3, 1),
    output_width=512,
    output_height=512,
    mode="bilinear",
    padding_mode="border",
    align_corners=False,
)
threshold = 0.1
# texel_weight is proportional to how much pixel area the texel covers. We can
# specify a threshold for how much covered pixel area counts as visible.
visibility = (texel_weight > threshold).float()
```

{F1801810094}

Now we can render the scene from a different angle and use the visibility mask for shadows:

```python
v_pix = transform(v, cam_pos_new, cam_rot_new, focal, princpt)
index_img = rasterize(v_pix, vi, image_height, image_width)
_, bary_img = render(v_pix, vi, index_img)

# mask image
mask: th.Tensor = (index_img != -1)[:, None]

# compute vt image
vt_img = interpolate(vt.mul(2.0).sub(1.0)[None], vti, index_img, bary_img)

# compute v image (for near-field)
v_img = interpolate(v, vi, index_img, bary_img)

# shadow
shadow_img = thf.grid_sample(
    visibility,
    vt_img.permute(0, 2, 3, 1),
    mode="bilinear",
    padding_mode="border",
    align_corners=False,
)

# compute normals
vn_img = interpolate(normals, vi, index_img, bary_img)

diffuse = shade(vn_img, cam_pos[:, :, None, None] - v_img, 0.05, 0.4, shadow_img) * mask
```

{F1801811232}

## Texture projection

Let's load a test image:

```python
import skimage

test_image = (
    th.as_tensor(skimage.data.coffee(), dtype=th.float32)
    .permute(2, 0, 1)[None, ...]
    .mul(1 / 255)
    .contiguous()
    .cuda()
)
test_image = thf.interpolate(
    test_image, scale_factor=2.0, mode="bilinear", align_corners=False
)
```

{F1801814094}

We can use `grid_scatter` to project the image onto the uv space:

```python
camera_image_extended = (
    th.cat([test_image, th.ones_like(test_image[:, :1])], dim=1) * mask
)
texture_weight = grid_scatter(
    camera_image_extended,
    vt_img.permute(0, 2, 3, 1),
    output_width=512,
    output_height=512,
    mode="bilinear",
    padding_mode="border",
    align_corners=False,
)
texture = texture_weight[:, :3] / texture_weight[:, 3:4].clamp(min=1e-4)
```

{F1801816367}

And here we render the scene from a different angle using the projected texture:

{F1801817130}

Reviewed By: HapeMask Differential Revision: D61006613 fbshipit-source-id: 98c83ba4eda531e9d73cb9e533176286dc699f63
-
- 07 Aug, 2024 1 commit
-
-
Stanislav Pidhorskyi authored
Summary: For unused warps we always write zeros to `bary_img_grad`. However, it is possible that a warp is used but only a portion of its threads are; in this case, for the unused threads we do not write zeros to `bary_img_grad`. For efficiency, `bary_img_grad` is created with `at::empty`, thus the aforementioned entries will still have uninitialized values. This is not an issue, because the `render` function will never read those entries; however, it is possible that the uninitialized values will coincide with `nan`, which triggers a false positive in autograd anomaly detection. Please see D60904848 for more details about the issue. Differential Revision: D60912474 fbshipit-source-id: 6eda5a07789db456c17eb60de222dd4c7b1c53d2
-
- 21 Jul, 2024 1 commit
-
-
Owen Wang authored
Summary: Apply fisheye62 distortion to HQLP camera renders. Visualization of results. 4 rows are in order: HMC, drtk render, pixel render, pinhole img. https://www.internalfb.com/manifold/explorer/codec-avatars-scratch/tree/hmc_nvs/20240516_rosetta_nearfield/runs/TEST_run_hqlp_rosetta_render_ojwang_v0719_002/dome_hmc_nvs_render/MLU130_20220509--0000_codec/vrs_comparison/000002__MLU130-brows_lowered-2022-05-09-16-08-12.png Debug distortion document, tracked the work: https://docs.google.com/document/d/1nSnFMrGWXVoICDEstpL9JZAnRtiuSMnWjnfGDzIfrNE/edit?pli=1 Reviewed By: euclid1767 Differential Revision: D58825272 fbshipit-source-id: 463f8c60014bd9cde898d72875bec49ffc75e453
-
- 18 Jul, 2024 1 commit
-
-
Gabe Schwartz authored
Summary: In order to work properly with `torch.compile()`, we need to decorate any function that calls a custom C++/CUDA extension with `torch.compiler.disable` so that it knows to insert a graph break. Reviewed By: podgorskiy Differential Revision: D59776177 fbshipit-source-id: d80eb43858836f8b8647d2a35b30d0b863989e94
-
- 11 Jun, 2024 1 commit
-
-
Christian Richardt authored
Summary: Created from CodeHub with https://fburl.com/edit-in-codehub Reviewed By: una-dinosauria Differential Revision: D58369133 fbshipit-source-id: cf6a0e4710be579f60f6d569d23c50183dfd6018
-
- 10 Jun, 2024 1 commit
-
-
Gabe Schwartz authored
Summary: Created from CodeHub with https://fburl.com/edit-in-codehub Reviewed By: tsimk Differential Revision: D58368652 fbshipit-source-id: 8f3f426f0825255e5e6d6b32b8f1bdb2cde6e161
-
- 08 Jun, 2024 1 commit
-
-
facebook-github-bot authored
fbshipit-source-id: afc575e8e7d8e2796a3f77d8b1c6c4fcb999558d
-