Commit 124bb5e3 authored by Jeremy Reizenstein, committed by Facebook GitHub Bot

spelling

Summary: Collection of spelling fixes, mostly in docs / tutorials.

Reviewed By: gkioxari

Differential Revision: D26101323

fbshipit-source-id: 652f62bc9d71a4ff872efa21141225e43191353a
parent c2e62a50
@@ -266,7 +266,7 @@ class RadianceFieldRenderer(torch.nn.Module):
         image: torch.Tensor,
     ) -> Tuple[dict, dict]:
         """
-        Performs the coarse and fine rendering passees of the radiance field
+        Performs the coarse and fine rendering passes of the radiance field
         from the viewpoint of the input `camera`.
         Afterwards, both renders are compared to the input ground truth `image`
         by evaluating the peak signal-to-noise ratio and the mean-squared error.
...
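The PSNR referenced in this docstring is a simple function of the mean-squared error. A minimal sketch (not the code from this file), assuming images scaled to [0, 1] so the peak signal is 1:

```
import torch

def calc_psnr(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # With peak signal 1.0, PSNR = -10 * log10(MSE).
    mse = torch.mean((x - y) ** 2)
    return -10.0 * torch.log10(mse)
```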
@@ -36,7 +36,7 @@ class EmissionAbsorptionNeRFRaymarcher(EmissionAbsorptionRaymarcher):
             rays_features: Per-ray feature values represented with a tensor
                 of shape `(..., n_points_per_ray, feature_dim)`.
             eps: A lower bound added to `rays_densities` before computing
-                the absorbtion function (cumprod of `1-rays_densities` along
+                the absorption function (cumprod of `1-rays_densities` along
                 each ray). This prevents the cumprod to yield exact 0
                 which would inhibit any gradient-based learning.
@@ -44,7 +44,7 @@ class EmissionAbsorptionNeRFRaymarcher(EmissionAbsorptionRaymarcher):
             features: A tensor of shape `(..., feature_dim)` containing
                 the rendered features for each ray.
             weights: A tensor of shape `(..., n_points_per_ray)` containing
-                the ray-specific emission-absorbtion distribution.
+                the ray-specific emission-absorption distribution.
                 Each ray distribution `(..., :)` is a valid probability
                 distribution, i.e. it contains non-negative values that integrate
                 to 1, such that `weights.sum(dim=-1)==1).all()` yields `True`.
...
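To see where `eps` enters, here is a minimal emission-absorption weight computation; this is an illustration of the docstring above, not this class's actual implementation:

```
import torch

def ea_weights(rays_densities: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    # rays_densities: (..., n_points_per_ray), values in [0, 1].
    # Transmittance: cumprod of (1 - density) over preceding points.
    # Adding eps keeps the cumprod away from exact 0 so gradients survive.
    transmittance = torch.cumprod(1.0 - rays_densities + eps, dim=-1)
    # Shift right: point i is attenuated by points 0..i-1 only.
    transmittance = torch.cat(
        [torch.ones_like(transmittance[..., :1]), transmittance[..., :-1]],
        dim=-1,
    )
    return rays_densities * transmittance
```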
@@ -60,7 +60,7 @@ def main(cfg: DictConfig):
         print(f"Loading checkpoint {checkpoint_path}.")
         loaded_data = torch.load(checkpoint_path)
         # Do not load the cached xy grid.
-        # - this allows to set an arbitrary evaluation image size.
+        # - this allows setting an arbitrary evaluation image size.
         state_dict = {
             k: v
             for k, v in loaded_data["model"].items()
@@ -121,7 +121,7 @@ def main(cfg: DictConfig):
         test_image = test_image.to(device)
         test_camera = test_camera.to(device)
-        # Activate eval mode of the model (allows to do a full rendering pass).
+        # Activate eval mode of the model (lets us do a full rendering pass).
         model.eval()
         with torch.no_grad():
             test_nerf_out, test_metrics = model(
...
@@ -70,7 +70,7 @@ class TestRaysampler(unittest.TestCase):
     def test_probabilistic_raysampler(self, batch_size=1, n_pts_per_ray=60):
         """
-        Check that the probabilisitc ray sampler does not crash for various
+        Check that the probabilistic ray sampler does not crash for various
         settings.
         """
...
@@ -31,7 +31,7 @@ def main(cfg: DictConfig):
     else:
         warnings.warn(
             "Please note that although executing on CPU is supported,"
-            + "the training is unlikely to finish in resonable time."
+            + "the training is unlikely to finish in reasonable time."
         )
         device = "cpu"
@@ -109,7 +109,7 @@ def main(cfg: DictConfig):
         optimizer, lr_lambda, last_epoch=start_epoch - 1, verbose=False
     )
-    # Initialize the cache for storing variables needed for visulization.
+    # Initialize the cache for storing variables needed for visualization.
     visuals_cache = collections.deque(maxlen=cfg.visualization.history_size)
     # Init the visualization visdom env.
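A note on the `visuals_cache` context line above: `collections.deque(maxlen=...)` silently evicts its oldest entry once full, which is what makes it a fixed-size visualization history:

```
import collections

cache = collections.deque(maxlen=3)
for i in range(5):
    cache.append(i)
print(list(cache))  # [2, 3, 4] -- the two oldest entries were evicted
```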
@@ -194,7 +194,7 @@ def main(cfg: DictConfig):
             if iteration % cfg.stats_print_interval == 0:
                 stats.print(stat_set="train")
-            # Update the visualisatioon cache.
+            # Update the visualization cache.
             visuals_cache.append(
                 {
                     "camera": camera.cpu(),
@@ -219,7 +219,7 @@ def main(cfg: DictConfig):
             val_image = val_image.to(device)
             val_camera = val_camera.to(device)
-            # Activate eval mode of the model (allows to do a full rendering pass).
+            # Activate eval mode of the model (lets us do a full rendering pass).
             model.eval()
             with torch.no_grad():
                 val_nerf_out, val_metrics = model(
...
@@ -105,7 +105,7 @@ __device__ bool CheckPointOutsideBoundingBox(
 // which contains Pixel structs with the indices of the faces which intersect
 // with this pixel sorted by closest z distance. If the point pxy lies in the
 // face, the list (q) is updated and re-orderered in place. In addition
-// the auxillary variables q_size, q_max_z and q_max_idx are also modified.
+// the auxiliary variables q_size, q_max_z and q_max_idx are also modified.
 // This code is shared between RasterizeMeshesNaiveCudaKernel and
 // RasterizeMeshesFineCudaKernel.
 template <typename FaceQ>
@@ -275,7 +275,7 @@ __global__ void RasterizeMeshesNaiveCudaKernel(
     const int yi = H - 1 - pix_idx / W;
     const int xi = W - 1 - pix_idx % W;
-    // screen coordinates to ndc coordiantes of pixel.
+    // screen coordinates to ndc coordinates of pixel.
     const float xf = PixToNonSquareNdc(xi, W, H);
     const float yf = PixToNonSquareNdc(yi, H, W);
     const float2 pxy = make_float2(xf, yf);
...
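The `PixToNonSquareNdc` calls in this hunk map a pixel index to NDC so that the shorter image side spans [-1, 1] and the longer side spans proportionally more. A hedged Python rendering of that mapping (the CUDA helper may differ in detail):

```
def pix_to_non_square_ndc(i: int, s1: int, s2: int) -> float:
    # s1: size of the axis being converted; s2: size of the other axis.
    # Pixels stay square in NDC: the longer side gets a wider range.
    ndc_range = 2.0 * s1 / s2 if s1 > s2 else 2.0
    half = ndc_range / 2.0
    # Index i maps to the center of its pixel.
    return -half + (ndc_range * i + half) / s1
```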
@@ -27,7 +27,7 @@ __device__ inline bool operator<(const Pix& a, const Pix& b) {
 // which contains Pixel structs with the indices of the points which intersect
 // with this pixel sorted by closest z distance. If the pixel pxy lies in the
 // point, the list (q) is updated and re-orderered in place. In addition
-// the auxillary variables q_size, q_max_z and q_max_idx are also modified.
+// the auxiliary variables q_size, q_max_z and q_max_idx are also modified.
 // This code is shared between RasterizePointsNaiveCudaKernel and
 // RasterizePointsFineCudaKernel.
 template <typename PointQ>
@@ -104,7 +104,7 @@ __global__ void RasterizePointsNaiveCudaKernel(
   const int yi = H - 1 - pix_idx / W;
   const int xi = W - 1 - pix_idx % W;
-  // screen coordinates to ndc coordiantes of pixel.
+  // screen coordinates to ndc coordinates of pixel.
   const float xf = PixToNonSquareNdc(xi, W, H);
   const float yf = PixToNonSquareNdc(yi, H, W);
...
@@ -224,7 +224,7 @@ BarycentricPerspectiveCorrectionBackward(
 // Clip negative barycentric coordinates to 0.0 and renormalize so
 // the barycentric coordinates for a point sum to 1. When the blur_radius
 // is greater than 0, a face will still be recorded as overlapping a pixel
-// if the pixel is outisde the face. In this case at least one of the
+// if the pixel is outside the face. In this case at least one of the
 // barycentric coordinates for the pixel relative to the face will be negative.
 // Clipping will ensure that the texture and z buffer are interpolated
 // correctly.
...
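The clipping this comment describes, as a small PyTorch sketch (an illustration of the idea, not the CUDA code):

```
import torch

def clip_barycentric(bary: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # bary: (..., 3). Coordinates can be negative when blur_radius > 0
    # records a face as overlapping a pixel that lies outside it.
    clipped = bary.clamp(min=0.0)
    # Renormalize so each triple sums to 1 again.
    return clipped / clipped.sum(dim=-1, keepdim=True).clamp(min=eps)
```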
@@ -245,7 +245,7 @@ inline std::tuple<vec3<T>, T, T, T> BarycentricPerspectiveCorrectionBackward(
 // Clip negative barycentric coordinates to 0.0 and renormalize so
 // the barycentric coordinates for a point sum to 1. When the blur_radius
 // is greater than 0, a face will still be recorded as overlapping a pixel
-// if the pixel is outisde the face. In this case at least one of the
+// if the pixel is outside the face. In this case at least one of the
 // barycentric coordinates for the pixel relative to the face will be negative.
 // Clipping will ensure that the texture and z buffer are interpolated
 // correctly.
...
@@ -99,7 +99,7 @@ class R2N2(ShapeNetBase):
             path.join(SYNSET_DICT_DIR, "r2n2_synset_dict.json"), "r"
         ) as read_dict:
             self.synset_dict = json.load(read_dict)
-        # Inverse dicitonary mapping synset labels to corresponding offsets.
+        # Inverse dictionary mapping synset labels to corresponding offsets.
         self.synset_inv = {label: offset for offset, label in self.synset_dict.items()}
         # Store synset and model ids of objects mentioned in the splits_file.
@@ -383,7 +383,7 @@ class R2N2(ShapeNetBase):
             view_idxs: each model will be rendered with the orientation(s) of the specified
                 views. Only render by view_idxs if no camera or args for BlenderCamera is
                 supplied.
-            Accepts any of the args of the render function in ShapnetBase:
+            Accepts any of the args of the render function in ShapeNetBase:
             model_ids: List[str] of model_ids of models intended to be rendered.
             categories: List[str] of categories intended to be rendered. categories
                 and sample_nums must be specified at the same time. categories can be given
...
@@ -97,8 +97,8 @@ def compute_extrinsic_matrix(azimuth, elevation, distance):
     Copied from meshrcnn codebase:
     https://github.com/facebookresearch/meshrcnn/blob/master/shapenet/utils/coords.py#L96
-    Compute 4x4 extrinsic matrix that converts from homogenous world coordinates
-    to homogenous camera coordinates. We assume that the camera is looking at the
+    Compute 4x4 extrinsic matrix that converts from homogeneous world coordinates
+    to homogeneous camera coordinates. We assume that the camera is looking at the
     origin.
     Used in R2N2 Dataset when computing calibration matrices.
@@ -189,7 +189,7 @@ def _compute_idxs(vals, counts):
     Args:
         vals: tensor of binary values indicating voxel presence in a dense format.
-        counts: tensor of number of occurence of each value in vals.
+        counts: tensor of number of occurrence of each value in vals.
     Returns:
         idxs: A tensor of shape (N), where N is the number of nonzero voxels.
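The `vals`/`counts` pair documented here is a run-length encoding of the dense voxel grid. Decoding it to nonzero indices can be sketched as follows; this is an assumption about the format based on the docstring, not the actual `_compute_idxs` body:

```
import torch

def rle_nonzero_idxs(vals: torch.Tensor, counts: torch.Tensor) -> torch.Tensor:
    # Run i contributes counts[i] consecutive voxels with value vals[i].
    starts = torch.cumsum(counts, dim=0) - counts  # flat start of each run
    runs = [
        torch.arange(s, s + c, dtype=torch.long)
        for v, s, c in zip(vals.tolist(), starts.tolist(), counts.tolist())
        if v != 0
    ]
    return torch.cat(runs) if runs else torch.empty(0, dtype=torch.long)
```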
@@ -379,7 +379,7 @@ def project_verts(verts, P, eps=1e-1):
     Copied from meshrcnn codebase:
     https://github.com/facebookresearch/meshrcnn/blob/master/shapenet/utils/coords.py#L159
-    Project verticies using a 4x4 transformation matrix.
+    Project vertices using a 4x4 transformation matrix.
     Args:
         verts: FloatTensor of shape (N, V, 3) giving a batch of vertex positions or of
@@ -403,7 +403,7 @@ def project_verts(verts, P, eps=1e-1):
     # Add an extra row of ones to the world-space coordinates of verts before
     # multiplying by the projection matrix. We could avoid this allocation by
-    # instead multiplying by a 4x3 submatrix of the projectio matrix, then
+    # instead multiplying by a 4x3 submatrix of the projection matrix, then
    # adding the remaining 4x1 vector. Not sure whether there will be much
    # performance difference between the two.
     ones = torch.ones(N, V, 1, dtype=dtype, device=device)
...
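The homogeneous-coordinate step discussed in this comment, in a compact sketch (a plausible reading of the function, not its verbatim body; the row-vector convention is assumed):

```
import torch

def project_verts_sketch(verts, P, eps: float = 1e-1):
    # verts: (N, V, 3); P: (N, 4, 4) projection matrix.
    N, V, _ = verts.shape
    ones = torch.ones(N, V, 1, dtype=verts.dtype, device=verts.device)
    verts_hom = torch.cat([verts, ones], dim=2)               # (N, V, 4)
    verts_cam_hom = torch.bmm(verts_hom, P.transpose(1, 2))   # (N, V, 4)
    w = verts_cam_hom[:, :, 3:].clamp(min=eps)                # avoid divide-by-zero
    return verts_cam_hom[:, :, :3] / w
```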
@@ -37,7 +37,7 @@ class ShapeNetCore(ShapeNetBase):
             synset offsets or labels. A combination of both is also accepted.
             When no category is specified, all categories in data_dir are loaded.
         version: (int) version of ShapeNetCore data in data_dir, 1 or 2.
-            Default is set to be 1. Version 1 has 57 categories and verions 2 has 55
+            Default is set to be 1. Version 1 has 57 categories and version 2 has 55
             categories.
             Note: version 1 has two categories 02858304(boat) and 02992529(cellphone)
             that are hyponyms of categories 04530566(watercraft) and 04401088(telephone)
@@ -63,7 +63,7 @@ class ShapeNetCore(ShapeNetBase):
         dict_file = "shapenet_synset_dict_v%d.json" % version
         with open(path.join(SYNSET_DICT_DIR, dict_file), "r") as read_dict:
             self.synset_dict = json.load(read_dict)
-        # Inverse dicitonary mapping synset labels to corresponding offsets.
+        # Inverse dictionary mapping synset labels to corresponding offsets.
         self.synset_inv = {label: offset for offset, label in self.synset_dict.items()}
         # If categories are specified, check if each category is in the form of either
...
@@ -250,7 +250,7 @@ class ShapeNetBase(torch.utils.data.Dataset):
         Helper function for sampling a number of indices from the given category.
         Args:
-            sample_num: number of indicies to be sampled from the given category.
+            sample_num: number of indices to be sampled from the given category.
             category: category synset of the category to be sampled from. If not
                 specified, sample from all models in the loaded dataset.
         """
...
@@ -28,7 +28,7 @@ def make_mesh_texture_atlas(
     Args:
         material_properties: dict of properties for each material. If a material
-            does not have any properties it will have an emtpy dict.
+            does not have any properties it will have an empty dict.
         texture_images: dict of material names and texture images
         face_material_names: numpy array of the material name corresponding to each
             face. Faces which don't have an associated material will be an empty string.
@@ -220,13 +220,13 @@ def make_material_atlas(
     For each grid cell we can now calculate the centroid `(c_y, c_x)`
     of the corresponding texture triangle:
-    - if `(x + y) < R`, then offsett the centroid of
+    - if `(x + y) < R`, then offset the centroid of
       triangle 0 by `(y, x) * (1/R)`
     - if `(x + y) > R`, then offset the centroid of
       triangle 8 by `((R-1-y), (R-1-x)) * (1/R)`.
     This is equivalent to updating the portion of Grid 1
-    above the diagnonal, replacing `(y, x)` with `((R-1-y), (R-1-x))`:
+    above the diagonal, replacing `(y, x)` with `((R-1-y), (R-1-x))`:
     ..code-block::python
...
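The centroid rule spelled out in this docstring can be written down directly. In this hedged sketch the two base centroids `c0`/`c1` (for "triangle 0" and "triangle 8") are assumptions about the atlas layout:

```
import torch

def grid_cell_centroids(R: int) -> torch.Tensor:
    # Returns an (R, R, 2) tensor of per-cell (c_y, c_x) centroids.
    c0 = torch.tensor([1.0, 1.0]) / (3 * R)  # assumed centroid of triangle 0
    c1 = torch.tensor([2.0, 2.0]) / (3 * R)  # assumed centroid of triangle 8
    y, x = torch.meshgrid(
        torch.arange(R, dtype=torch.float32),
        torch.arange(R, dtype=torch.float32),
        indexing="ij",
    )
    lower = ((x + y) < R)[..., None]
    offset = torch.where(
        lower,
        torch.stack([y, x], dim=-1),
        torch.stack([R - 1 - y, R - 1 - x], dim=-1),
    ) / R
    return torch.where(lower, c0, c1) + offset
```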
@@ -109,13 +109,13 @@ def load_obj(
     Faces are interpreted as follows:
     ::
-        5/2/1 describes the first vertex of the first triange
+        5/2/1 describes the first vertex of the first triangle
         - 5: index of vertex [1.000000 1.000000 -1.000000]
         - 2: index of texture coordinate [0.749279 0.501284]
         - 1: index of normal [0.000000 0.000000 -1.000000]
     If there are faces with more than 3 vertices
-    they are subdivided into triangles. Polygonal faces are assummed to have
+    they are subdivided into triangles. Polygonal faces are assumed to have
     vertices ordered counter-clockwise so the (right-handed) normal points
     out of the screen e.g. a proper rectangular face would be specified like this:
     ::
@@ -368,7 +368,7 @@ def _parse_face(
                 face_normals.append(int(vert_props[2]))
             if len(vert_props) > 3:
                 raise ValueError(
-                    "Face vertices can ony have 3 properties. \
+                    "Face vertices can only have 3 properties. \
                         Face vert %s, Line: %s"
                     % (str(vert_props), str(line))
                 )
...
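The `v/vt/vn` convention this docstring describes, and the 3-property limit enforced in `_parse_face`, in a standalone sketch:

```
def parse_face_vertex(token: str):
    # "5/2/1" -> (vertex, texture, normal) indices; empty slots become
    # None, e.g. "5//1" has no texture coordinate.
    props = token.split("/")
    if len(props) > 3:
        raise ValueError("Face vertices can only have 3 properties.")
    vert = int(props[0])
    tex = int(props[1]) if len(props) > 1 and props[1] else None
    norm = int(props[2]) if len(props) > 2 and props[2] else None
    return vert, tex, norm

print(parse_face_vertex("5/2/1"))  # (5, 2, 1)
```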
@@ -33,7 +33,7 @@ and to save a point cloud you might do
 ```
 pcl = Pointclouds(...)
-IO().save_pointcloud(pcl, "output_poincloud.obj")
+IO().save_pointcloud(pcl, "output_pointcloud.obj")
 ```
 """
@@ -43,7 +43,7 @@ class IO:
     """
     This class is the interface to flexible loading and saving of meshes and point clouds.
-    In simple cases the user will just initialise an instance of this class as `IO()`
+    In simple cases the user will just initialize an instance of this class as `IO()`
     and then use its load and save functions. The arguments of the initializer are not
     usually needed.
@@ -53,7 +53,7 @@ class IO:
     Args:
         include_default_formats: If False, the built-in file formats will not be available.
             Then only user-registered formats can be used.
-        path_manager: Used to customise how paths given as strings are interpreted.
+        path_manager: Used to customize how paths given as strings are interpreted.
     """
     def __init__(
...
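A typical round trip with this class might look as follows (the file paths are hypothetical):

```
from pytorch3d.io import IO

io = IO()
mesh = io.load_mesh("model.obj")       # hypothetical input path
io.save_mesh(mesh, "model_copy.ply")   # hypothetical output path
```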
@@ -380,7 +380,7 @@ def _try_read_ply_constant_list_ascii(f, definition: _PlyElementType):
     return data[:, 1:]
-def _parse_heterogenous_property_ascii(datum, line_iter, property: _Property):
+def _parse_heterogeneous_property_ascii(datum, line_iter, property: _Property):
     """
     Read a general data property from an ascii .ply file.
@@ -431,7 +431,7 @@ def _read_ply_element_ascii(f, definition: _PlyElementType):
         In simple cases where every element has the same size, 2D numpy array
         corresponding to the data. The rows are the different values.
         Otherwise a list of lists of values, where the outer list is
-        each occurence of the element, and the inner lists have one value per
+        each occurrence of the element, and the inner lists have one value per
         property.
     """
     if not definition.count:
@@ -454,7 +454,7 @@ def _read_ply_element_ascii(f, definition: _PlyElementType):
         datum = []
         line_iter = iter(line_string.strip().split())
         for property in definition.properties:
-            _parse_heterogenous_property_ascii(datum, line_iter, property)
+            _parse_heterogeneous_property_ascii(datum, line_iter, property)
         data.append(datum)
         if next(line_iter, None) is not None:
             raise ValueError("Too much data for an element.")
@@ -669,7 +669,7 @@ def _read_ply_element_binary(f, definition: _PlyElementType, big_endian: bool) -
         In simple cases where every element has the same size, 2D numpy array
         corresponding to the data. The rows are the different values.
         Otherwise a list of lists/tuples of values, where the outer list is
-        each occurence of the element, and the inner lists have one value per
+        each occurrence of the element, and the inner lists have one value per
         property.
     """
     if not definition.count:
@@ -1027,7 +1027,7 @@ def _save_ply(
     Args:
         f: File object to which the 3D data should be written.
         verts: FloatTensor of shape (V, 3) giving vertex coordinates.
-        faces: LongTensor of shsape (F, 3) giving faces.
+        faces: LongTensor of shape (F, 3) giving faces.
         verts_normals: FloatTensor of shape (V, 3) giving vertex normals.
         ascii: (bool) whether to use the ascii ply format.
         decimal_places: Number of decimal places for saving if ascii=True.
...
@@ -9,7 +9,7 @@ def mesh_laplacian_smoothing(meshes, method: str = "uniform"):
     Computes the laplacian smoothing objective for a batch of meshes.
     This function supports three variants of Laplacian smoothing,
     namely with uniform weights("uniform"), with cotangent weights ("cot"),
-    and cotangent cuvature ("cotcurv").For more details read [1, 2].
+    and cotangent curvature ("cotcurv").For more details read [1, 2].
     Args:
         meshes: Meshes object with a batch of meshes.
...
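Typical usage of the three variants named in this docstring, on a minimal tetrahedron:

```
import torch
from pytorch3d.loss import mesh_laplacian_smoothing
from pytorch3d.structures import Meshes

verts = torch.tensor(
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
)
faces = torch.tensor([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
meshes = Meshes(verts=[verts], faces=[faces])

loss_uniform = mesh_laplacian_smoothing(meshes, method="uniform")
loss_cot = mesh_laplacian_smoothing(meshes, method="cot")
loss_cotcurv = mesh_laplacian_smoothing(meshes, method="cotcurv")
```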
@@ -91,7 +91,7 @@ class _FacePointDistance(Function):
         max_tris: Scalar equal to maximum number of faces in the batch
     Returns:
         dists: FloatTensor of shape `(T,)`, where `dists[t]` is the squared
-            euclidean distance of `t`-th trianguar face to the closest point in the
+            euclidean distance of `t`-th triangular face to the closest point in the
             corresponding example in the batch
         idxs: LongTensor of shape `(T,)` indicating the closest point in the
             corresponding example in the batch.
...
@@ -21,7 +21,7 @@ def interpolate_face_attributes(
             pixel in the image. A value < 0 indicates that the pixel does not
             overlap any face and should be skipped.
         barycentric_coords: FloatTensor of shape (N, H, W, K, 3) specifying
-            the barycentric coordianates of each pixel
+            the barycentric coordinates of each pixel
             relative to the faces (in the packed
             representation) which overlap the pixel.
         face_attributes: packed attributes of shape (total_faces, 3, D),
...
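What this function computes, as a minimal pure-PyTorch sketch of the signature documented above (an illustration, not the library kernel):

```
import torch

def interp_face_attrs(pix_to_face, barycentric_coords, face_attributes):
    # pix_to_face: (N, H, W, K) long; barycentric_coords: (N, H, W, K, 3);
    # face_attributes: (total_faces, 3, D).
    mask = (pix_to_face >= 0)[..., None]               # (N, H, W, K, 1)
    attrs = face_attributes[pix_to_face.clamp(min=0)]  # (N, H, W, K, 3, D)
    vals = (barycentric_coords[..., None] * attrs).sum(dim=-2)
    return vals * mask                                 # zero out empty pixels
```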