Commit 124bb5e3 authored by Jeremy Reizenstein, committed by Facebook GitHub Bot

spelling

Summary: Collection of spelling things, mostly in docs / tutorials.

Reviewed By: gkioxari

Differential Revision: D26101323

fbshipit-source-id: 652f62bc9d71a4ff872efa21141225e43191353a
parent c2e62a50
......@@ -147,7 +147,7 @@ def knn_points(
p2_nn = knn_gather(p2, p1_idx, lengths2)
which is a helper function that allows indexing any tensor of shape (N, P2, U) with
- the indices `p1_idx` returned by `knn_points`. The outout is a tensor
+ the indices `p1_idx` returned by `knn_points`. The output is a tensor
of shape (N, P1, K, U).
"""
......
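For context, the corrected docstring pairs `knn_points` with the `knn_gather` helper roughly as follows (a minimal sketch; the `pytorch3d.ops` import path and shapes are assumed):

.. code-block:: python

    import torch
    from pytorch3d.ops import knn_gather, knn_points

    N, P1, P2, U, K = 2, 128, 256, 8, 4
    p1 = torch.rand(N, P1, 3)
    p2 = torch.rand(N, P2, 3)
    # idx: (N, P1, K) indices into p2 of the K nearest neighbors of each p1 point.
    dists, idx, _ = knn_points(p1, p2, K=K)
    # Index any (N, P2, U) tensor with those indices; the output is (N, P1, K, U).
    feats = torch.rand(N, P2, U)
    feats_nn = knn_gather(feats, idx)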
......@@ -184,7 +184,7 @@ def _gen_pairs(input, dim=-2, reducer=lambda a, b: ((a - b) ** 2).sum(dim=-1)):
def _kernel_vec_distances(v):
"""Computes the coefficients for linearisation of the quadratic system
"""Computes the coefficients for linearization of the quadratic system
to match all pairwise distances between 4 control points (dim=1).
The last dimension corresponds to the coefficients for quadratic terms
Bij = Bi * Bj, where Bi and Bj correspond to kernel vectors.
......
......@@ -28,7 +28,7 @@ def estimate_pointcloud_normals(
**neighborhood_size**: The size of the neighborhood used to estimate the
geometry around each point.
**disambiguate_directions**: If `True`, uses the algorithm from [1] to
- ensure sign consistency of the normals of neigboring points.
+ ensure sign consistency of the normals of neighboring points.
Returns:
**normals**: A tensor of normals for each input point
......@@ -83,7 +83,7 @@ def estimate_pointcloud_local_coord_frames(
**neighborhood_size**: The size of the neighborhood used to estimate the
geometry around each point.
**disambiguate_directions**: If `True`, uses the algorithm from [1] to
- ensure sign consistency of the normals of neigboring points.
+ ensure sign consistency of the normals of neighboring points.
Returns:
**curvatures**: The three principal curvatures of each point
......
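For reference, the estimator described in these docstrings can be called along these lines (a hedged sketch; the signature is assumed from the documented arguments):

.. code-block:: python

    import torch
    from pytorch3d.ops import estimate_pointcloud_normals

    points = torch.rand(2, 1000, 3)  # (minibatch, num_point, 3)
    normals = estimate_pointcloud_normals(
        points, neighborhood_size=30, disambiguate_directions=True
    )  # (2, 1000, 3), with sign-consistent directions per [1]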
......@@ -140,7 +140,7 @@ def add_points_features_to_volume_densities_features(
volume_features: Batch of input feature volumes of shape
`(minibatch, feature_dim, D, H, W)`
If set to `None`, the `volume_features` will be automatically
- instantiatied with a correct size and filled with 0s.
+ instantiated with a correct size and filled with 0s.
mode: The mode of the conversion of individual points into the volume.
Set either to `nearest` or `trilinear`:
`nearest`: Each 3D point is first rounded to the volumetric
......@@ -310,7 +310,7 @@ def splat_points_to_volumes(
# minibatch x n_points x feature_dim -> minibatch x feature_dim x n_points
points_features = points_features.permute(0, 2, 1).contiguous()
- # XYZ = the upper-left volume index of the 8-neigborhood of every point
+ # XYZ = the upper-left volume index of the 8-neighborhood of every point
# grid_sizes is of the form (minibatch, depth-height-width)
grid_sizes_xyz = grid_sizes[:, [2, 1, 0]]
......
......@@ -25,8 +25,9 @@ def sample_points_from_meshes(
Tuple[torch.Tensor, torch.Tensor, torch.Tensor],
]:
"""
- Convert a batch of meshes to a pointcloud by uniformly sampling points on
- the surface of the mesh with probability proportional to the face area.
+ Convert a batch of meshes to a batch of pointclouds by uniformly sampling
+ points on the surface of the mesh with probability proportional to the
+ face area.
Args:
meshes: A Meshes object with a batch of N meshes.
......@@ -54,7 +55,7 @@ def sample_points_from_meshes(
.. code-block:: python
- Poinclouds(samples, normals=normals, features=textures)
+ Pointclouds(samples, normals=normals, features=textures)
"""
if meshes.isempty():
raise ValueError("Meshes are empty.")
......@@ -71,7 +72,7 @@ def sample_points_from_meshes(
num_meshes = len(meshes)
num_valid_meshes = torch.sum(meshes.valid) # Non empty meshes.
- # Intialize samples tensor with fill value 0 for empty meshes.
+ # Initialize samples tensor with fill value 0 for empty meshes.
samples = torch.zeros((num_meshes, num_samples, 3), device=meshes.device)
# Only compute samples for non empty meshes
......@@ -104,7 +105,7 @@ def sample_points_from_meshes(
samples[meshes.valid] = w0[:, :, None] * a + w1[:, :, None] * b + w2[:, :, None] * c
if return_normals:
- # Intialize normals tensor with fill value 0 for empty meshes.
+ # Initialize normals tensor with fill value 0 for empty meshes.
# Normals for the sampled points are face normals computed from
# the vertices of the face in which the sampled point lies.
normals = torch.zeros((num_meshes, num_samples, 3), device=meshes.device)
......
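A typical use of the sampler together with the `Pointclouds` constructor shown in the docstring (a sketch; `meshes` is assumed to be an existing `Meshes` batch):

.. code-block:: python

    from pytorch3d.ops import sample_points_from_meshes
    from pytorch3d.structures import Pointclouds

    samples, normals = sample_points_from_meshes(
        meshes, num_samples=10000, return_normals=True
    )
    pointclouds = Pointclouds(points=samples, normals=normals)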
......@@ -27,7 +27,7 @@ def wmean(
the last (spatial) dimension are assumed same;
dim: dimension(s) in `x` to average over;
keepdim: tells whether to keep the resulting singleton dimension.
- eps: minumum clamping value in the denominator.
+ eps: minimum clamping value in the denominator.
Returns:
the mean tensor:
* if `weights` is None => `mean(x, dim)`,
......
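The clamped denominator documented for `wmean` amounts to the following (a standalone sketch of the described behaviour for `x` of shape `(..., N, D)` with weights `(..., N)`, not the library code itself):

.. code-block:: python

    import torch

    def weighted_mean(x, weights=None, eps=1e-9):
        # mean(x, dim=-2) when weights is None; otherwise a weighted mean
        # whose denominator is clamped from below by eps.
        if weights is None:
            return x.mean(dim=-2, keepdim=True)
        w = weights[..., None]  # (..., N, 1)
        return (x * w).sum(dim=-2, keepdim=True) / w.sum(dim=-2, keepdim=True).clamp(eps)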
......@@ -15,7 +15,7 @@ def vert_align(
) -> torch.Tensor:
"""
Sample vertex features from a feature map. This operation is called
"perceptual feaure pooling" in [1] or "vert align" in [2].
"perceptual feature pooling" in [1] or "vert align" in [2].
[1] Wang et al, "Pixel2Mesh: Generating 3D Mesh Models from Single
RGB Images", ECCV 2018.
......@@ -45,7 +45,7 @@ def vert_align(
Returns:
feats_sampled: FloatTensor of shape (N, V, C) giving sampled features for each
- vertex. If feats is a list, we return concatentated features in axis=2 of
+ vertex. If feats is a list, we return concatenated features in axis=2 of
shape (N, V, sum(C_n)) where C_n = feats[n].shape[1].
If return_packed = True, the features are transformed to a packed
representation of shape (sum(V), C)
......
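For context, `vert_align` samples a `(N, C, H, W)` feature map at the XY location of each vertex (a minimal sketch; shapes assumed from the docstring):

.. code-block:: python

    import torch
    from pytorch3d.ops import vert_align

    feats = torch.rand(2, 256, 32, 32)      # (N, C, H, W) feature map
    verts = torch.rand(2, 100, 3) * 2 - 1   # (N, V, 3); XY assumed in [-1, 1]
    feats_sampled = vert_align(feats, verts)  # (N, V, C)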
......@@ -30,7 +30,7 @@ class CamerasBase(TensorProperties):
The transformation from world -> view happens after applying a rotation (R)
and translation (T)
- NDC coordinate system: This is the normalized coordinate system that confines
- in a volume the renderered part of the object or scene. Also known as view volume.
+ in a volume the rendered part of the object or scene. Also known as view volume.
Given the PyTorch3D convention, (+1, +1, znear) is the top left near corner,
and (-1, -1, zfar) is the bottom right far corner of the volume.
The transformation from view -> NDC happens after applying the camera
......@@ -78,7 +78,7 @@ class CamerasBase(TensorProperties):
def unproject_points(self):
"""
- Transform input points from NDC coodinates
+ Transform input points from NDC coordinates
to the world / camera coordinates.
Each of the input points `xy_depth` of shape (..., 3) is
......@@ -210,7 +210,7 @@ class CamerasBase(TensorProperties):
For `CamerasBase.transform_points`, setting `eps > 0`
stabilizes gradients since it leads to avoiding division
- by excessivelly low numbers for points close to the
+ by excessively low numbers for points close to the
camera plane.
Returns
......@@ -235,7 +235,7 @@ class CamerasBase(TensorProperties):
For `CamerasBase.transform_points`, setting `eps > 0`
stabilizes gradients since it leads to avoiding division
- by excessivelly low numbers for points close to the
+ by excessively low numbers for points close to the
camera plane.
Returns
......@@ -318,7 +318,7 @@ def OpenGLPerspectiveCameras(
class FoVPerspectiveCameras(CamerasBase):
"""
A class which stores a batch of parameters to generate a batch of
- projection matrices by specifiying the field of view.
+ projection matrices by specifying the field of view.
The definition of the parameters follow the OpenGL perspective camera.
The extrinsics of the camera (R and T matrices) can also be set in the
......@@ -405,7 +405,7 @@ class FoVPerspectiveCameras(CamerasBase):
degrees: bool, set to True if fov is specified in degrees.
Returns:
- torch.floatTensor of the calibration matrix with shape (N, 4, 4)
+ torch.FloatTensor of the calibration matrix with shape (N, 4, 4)
"""
K = torch.zeros((self._N, 4, 4), device=self.device, dtype=torch.float32)
ones = torch.ones((self._N), dtype=torch.float32, device=self.device)
......@@ -421,7 +421,7 @@ class FoVPerspectiveCameras(CamerasBase):
min_x = -max_x
# NOTE: In OpenGL the projection matrix changes the handedness of the
- # coordinate frame. i.e the NDC space postive z direction is the
+ # coordinate frame. i.e the NDC space positive z direction is the
# camera space negative z direction. This is because the sign of the z
# in the projection matrix is set to -1.0.
# In pytorch3d we maintain a right handed coordinate system throughout
......@@ -444,7 +444,7 @@ class FoVPerspectiveCameras(CamerasBase):
def get_projection_transform(self, **kwargs) -> Transform3d:
"""
- Calculate the perpective projection matrix with a symmetric
+ Calculate the perspective projection matrix with a symmetric
viewing frustrum. Use column major order.
The viewing frustrum will be projected into ndc, s.t.
(max_x, max_y) -> (+1, +1)
......@@ -586,7 +586,7 @@ def OpenGLOrthographicCameras(
class FoVOrthographicCameras(CamerasBase):
"""
A class which stores a batch of parameters to generate a batch of
- projection matrices by specifiying the field of view.
+ projection matrices by specifying the field of view.
The definition of the parameters follow the OpenGL orthographic camera.
"""
......@@ -612,7 +612,7 @@ class FoVOrthographicCameras(CamerasBase):
max_y: maximum y coordinate of the frustrum.
min_y: minimum y coordinate of the frustrum.
max_x: maximum x coordinate of the frustrum.
- min_x: minumum x coordinage of the frustrum
+ min_x: minimum x coordinate of the frustrum
scale_xyz: scale factors for each axis of shape (N, 3).
R: Rotation matrix of shape (N, 3, 3).
T: Translation of shape (N, 3).
......@@ -649,7 +649,7 @@ class FoVOrthographicCameras(CamerasBase):
znear: near clipping plane of the view frustrum.
zfar: far clipping plane of the view frustrum.
max_x: maximum x coordinate of the frustrum.
- min_x: minumum x coordinage of the frustrum
+ min_x: minimum x coordinate of the frustrum
max_y: maximum y coordinate of the frustrum.
min_y: minimum y coordinate of the frustrum.
scale_xyz: scale factors for each axis of shape (N, 3).
......@@ -693,7 +693,7 @@ class FoVOrthographicCameras(CamerasBase):
scale_z = 2 / (far-near)
mid_x = (max_x + min_x) / (max_x - min_x)
mix_y = (max_y + min_y) / (max_y - min_y)
- mid_z = (far + near) / (farnear)
+ mid_z = (far + near) / (far - near)
K = [
[scale_x, 0, 0, -mid_x],
......@@ -811,7 +811,7 @@ class PerspectiveCameras(CamerasBase):
If you wish to provide parameters in screen space, you NEED to provide
the image_size = (imwidth, imheight).
If you wish to provide parameters in NDC space, you should NOT provide
- image_size. Providing valid image_size will triger a screen space to
+ image_size. Providing valid image_size will trigger a screen space to
NDC space transformation in the camera.
For example, here is how to define cameras on the two spaces.
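A sketch of the two definitions (`fcl_ndc`, `prp_ndc`, `fcl_screen`, `prp_screen`, `image_width` and `image_height` are placeholder names, and the exact `image_size` format is assumed):

.. code-block:: python

    from pytorch3d.renderer import PerspectiveCameras

    # NDC space: do NOT pass image_size.
    cameras_ndc = PerspectiveCameras(focal_length=fcl_ndc, principal_point=prp_ndc)

    # Screen space: a valid image_size triggers the screen -> NDC transform.
    cameras_screen = PerspectiveCameras(
        focal_length=fcl_screen,
        principal_point=prp_screen,
        image_size=((image_width, image_height),),
    )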
......@@ -978,7 +978,7 @@ class OrthographicCameras(CamerasBase):
If you wish to provide parameters in screen space, you NEED to provide
the image_size = (imwidth, imheight).
If you wish to provide parameters in NDC space, you should NOT provide
- image_size. Providing valid image_size will triger a screen space to
+ image_size. Providing valid image_size will trigger a screen space to
NDC space transformation in the camera.
For example, here is how to define cameras on the two spaces.
......@@ -1120,7 +1120,7 @@ def _get_sfm_calibration_matrix(
image_size=None,
) -> torch.Tensor:
"""
- Returns a calibration matrix of a perspective/orthograpic camera.
+ Returns a calibration matrix of a perspective/orthographic camera.
Args:
N: Number of cameras.
......@@ -1355,7 +1355,7 @@ def look_at_view_transform(
Args:
dist: distance of the camera from the object
- elev: angle in degres or radians. This is the angle between the
+ elev: angle in degrees or radians. This is the angle between the
vector from the object to the camera, and the horizontal plane y = 0 (xz-plane).
azim: angle in degrees or radians. The vector from the object to
the camera is projected onto a horizontal plane y = 0.
......@@ -1365,7 +1365,7 @@ def look_at_view_transform(
degrees: boolean flag to indicate if the elevation and azimuth
angles are specified in degrees or radians.
eye: the position of the camera(s) in world coordinates. If eye is not
- None, it will overide the camera position derived from dist, elev, azim.
+ None, it will override the camera position derived from dist, elev, azim.
up: the direction of the x axis in the world coordinate system.
at: the position of the object(s) in world coordinates.
eye, up and at can be of shape (1, 3) or (N, 3).
......
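The dist/elev/azim parametrization is typically used like this (a short sketch using the public API):

.. code-block:: python

    from pytorch3d.renderer import FoVPerspectiveCameras, look_at_view_transform

    # Camera 2.7 units from the object, 10 degrees above the xz-plane,
    # rotated 20 degrees about the vertical axis.
    R, T = look_at_view_transform(dist=2.7, elev=10, azim=20)
    cameras = FoVPerspectiveCameras(R=R, T=T)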
......@@ -67,13 +67,13 @@ class EmissionAbsorptionRaymarcher(torch.nn.Module):
rays_features: Per-ray feature values represented with a tensor
of shape `(..., n_points_per_ray, feature_dim)`.
eps: A lower bound added to `rays_densities` before computing
- the absorbtion function (cumprod of `1-rays_densities` along
+ the absorption function (cumprod of `1-rays_densities` along
each ray). This prevents the cumprod to yield exact 0
which would inhibit any gradient-based learning.
Returns:
features_opacities: A tensor of shape `(..., feature_dim+1)`
- that concatenates two tensors alonng the last dimension:
+ that concatenates two tensors along the last dimension:
1) features: A tensor of per-ray renders
of shape `(..., feature_dim)`.
2) opacities: A tensor of per-ray opacity values
......
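The eps-stabilized absorption described above boils down to roughly the following (a simplified sketch of emission-absorption raymarching, not the module's exact code):

.. code-block:: python

    import torch

    def emission_absorption(rays_densities, rays_features, eps=1e-10):
        d = rays_densities[..., 0]  # (..., n_points_per_ray)
        # Transmittance: cumprod of (1 - density); eps keeps it away from exact 0.
        absorption = torch.cumprod(1 - d + eps, dim=-1)
        # Shift so each point is attenuated by the transmittance before it.
        ones = torch.ones_like(absorption[..., :1])
        weights = d * torch.cat([ones, absorption[..., :-1]], dim=-1)
        features = (weights[..., None] * rays_features).sum(dim=-2)
        opacities = weights.sum(dim=-1, keepdim=True)
        return torch.cat([features, opacities], dim=-1)  # (..., feature_dim + 1)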
......@@ -16,7 +16,7 @@ This file defines three raysampling techniques:
class GridRaysampler(torch.nn.Module):
"""
- Samples a fixed number of points along rays which are regulary distributed
+ Samples a fixed number of points along rays which are regularly distributed
in a batch of rectangular image grids. Points along each ray
have uniformly-spaced z-coordinates between a predefined
minimum and maximum depth.
......@@ -129,7 +129,7 @@ class GridRaysampler(torch.nn.Module):
class NDCGridRaysampler(GridRaysampler):
"""
- Samples a fixed number of points along rays which are regulary distributed
+ Samples a fixed number of points along rays which are regularly distributed
in a batch of rectangular image grids. Points along each ray
have uniformly-spaced z-coordinates between a predefined minimum and maximum depth.
......
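For context, a raysampler of this kind is constructed roughly as follows (a hedged sketch; argument names are assumed from the class documentation, and `cameras` is a placeholder):

.. code-block:: python

    from pytorch3d.renderer import NDCGridRaysampler

    raysampler = NDCGridRaysampler(
        image_width=64,
        image_height=64,
        n_pts_per_ray=32,
        min_depth=0.1,
        max_depth=3.0,
    )
    ray_bundle = raysampler(cameras=cameras)  # rays in world coordinates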
......@@ -18,7 +18,7 @@ from .utils import _validate_ray_bundle_variables, ray_bundle_variables_to_ray_p
# 1) The raysampler:
# - samples rays from input cameras
# - transforms the rays to world coordinates
- # 2) The volumetric_function (which is a callable argument of the forwad pass)
+ # 2) The volumetric_function (which is a callable argument of the forward pass)
# evaluates ray_densities and ray_features at the sampled ray-points.
# 3) The raymarcher takes ray_densities and ray_features and uses a raymarching
# algorithm to render each ray.
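The three-step pipeline above composes roughly like this (a sketch; `raysampler`, `raymarcher`, `cameras` and `my_volumetric_function` are placeholders):

.. code-block:: python

    from pytorch3d.renderer import ImplicitRenderer

    renderer = ImplicitRenderer(raysampler=raysampler, raymarcher=raymarcher)
    # volumetric_function returns (rays_densities, rays_features) at the ray points.
    images, ray_bundle = renderer(
        cameras=cameras, volumetric_function=my_volumetric_function
    )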
......@@ -64,7 +64,7 @@ class ImplicitRenderer(torch.nn.Module):
the an feature vector for each ray point.
Note that, in order to increase flexibility of the API, we allow multiple
- other arguments to enter the volumentric function via additional
+ other arguments to enter the volumetric function via additional
(optional) keyword arguments `**kwargs`.
A typical use-case is passing a `CamerasBase` object as an additional
keyword argument, which can allow the volumetric function to adjust its
......@@ -131,7 +131,7 @@ class ImplicitRenderer(torch.nn.Module):
Args:
cameras: A batch of cameras that render the scene. A `self.raysampler`
takes the cameras as input and samples rays that pass through the
- domain of the volumentric function.
+ domain of the volumetric function.
volumetric_function: A `Callable` that accepts the parametrizations
of the rendering rays and returns the densities and features
at the respective 3D of the rendering rays. Please refer to
......@@ -229,7 +229,7 @@ class VolumeRenderer(torch.nn.Module):
Args:
cameras: A batch of cameras that render the scene. A `self.raysampler`
takes the cameras as input and samples rays that pass through the
domain of the volumentric function.
domain of the volumetric function.
volumes: An instance of the `Volumes` class representing a
batch of volumes that are being rendered.
......@@ -247,7 +247,7 @@ class VolumeRenderer(torch.nn.Module):
class VolumeSampler(torch.nn.Module):
"""
- A class that allows to sample a batch of volumes `Volumes`
+ A module to sample a batch of volumes `Volumes`
at 3D points sampled along projection rays.
"""
......@@ -255,7 +255,7 @@ class VolumeSampler(torch.nn.Module):
"""
Args:
volumes: An instance of the `Volumes` class representing a
- batch if volumes that are being rendered.
+ batch of volumes that are being rendered.
sample_mode: Defines the algorithm used to sample the volumetric
voxel grid. Can be either "bilinear" or "nearest".
"""
......@@ -300,7 +300,7 @@ class VolumeSampler(torch.nn.Module):
Returns:
rays_densities: A tensor of shape
`(minibatch, ..., num_points_per_ray, opacity_dim)` containing the
- densitity vectors sampled from the volume at the locations of
+ density vectors sampled from the volume at the locations of
the ray points.
rays_features: A tensor of shape
`(minibatch, ..., num_points_per_ray, feature_dim)` containing the
......
......@@ -44,7 +44,7 @@ def diffuse(normals, color, direction) -> torch.Tensor:
average/interpolated face coordinates.
"""
# TODO: handle multiple directional lights per batch element.
- # TODO: handle attentuation.
+ # TODO: handle attenuation.
# Ensure color and location have same batch dimension as normals
normals, color, direction = convert_to_tensors_and_broadcast(
......@@ -107,7 +107,7 @@ def specular(
meshes.verts_packed_to_mesh_idx() or meshes.faces_packed_to_mesh_idx().
"""
# TODO: handle multiple directional lights
- # TODO: attentuate based on inverse squared distance to the light source
+ # TODO: attenuate based on inverse squared distance to the light source
if points.shape != normals.shape:
msg = "Expected points and normals to have the same shape: got %r, %r"
......
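The diffuse term being documented is standard Lambertian cosine shading; a self-contained sketch (not the library's exact implementation, which also broadcasts batch dimensions):

.. code-block:: python

    import torch
    import torch.nn.functional as F

    def lambertian_diffuse(normals, color, direction):
        # Intensity proportional to the cosine between unit normal and unit
        # light direction, clamped at zero for back-facing points.
        normals = F.normalize(normals, p=2, dim=-1)
        direction = F.normalize(direction, p=2, dim=-1)
        angle = F.relu((normals * direction).sum(dim=-1))
        return color * angle[..., None]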
......@@ -17,7 +17,7 @@ from .clip import (
# TODO make the epsilon user configurable
kEpsilon = 1e-8
- # Maxinum number of faces per bins for
+ # Maximum number of faces per bins for
# coarse-to-fine rasterization
kMaxFacesPerBin = 22
......@@ -68,7 +68,7 @@ def rasterize_meshes(
set it heuristically based on the shape of the input. This should not
affect the output, but can affect the speed of the forward pass.
faces_per_bin: Only applicable when using coarse-to-fine rasterization
- (bin_size > 0); this is the maxiumum number of faces allowed within each
+ (bin_size > 0); this is the maximum number of faces allowed within each
bin. If more than this many faces actually fall into a bin, an error
will be raised. This should not affect the output values, but can affect
the memory usage in the forward pass.
......@@ -138,7 +138,7 @@ def rasterize_meshes(
num_faces_per_mesh = meshes.num_faces_per_mesh()
# In the case that H != W use the max image size to set the bin_size
- # to accommodate the num bins constraint in the coarse rasteizer.
+ # to accommodate the num bins constraint in the coarse rasterizer.
# If the ratio of H:W is large this might cause issues as the smaller
# dimension will have fewer bins.
# TODO: consider a better way of setting the bin size.
......@@ -453,7 +453,7 @@ def rasterize_meshes_python(
mesh_to_face_first_idx = clipped_faces.mesh_to_face_first_idx
num_faces_per_mesh = clipped_faces.num_faces_per_mesh
- # Intialize output tensors.
+ # Initialize output tensors.
face_idxs = torch.full(
(N, H, W, K), fill_value=-1, dtype=torch.int64, device=device
)
......@@ -662,7 +662,7 @@ def barycentric_coordinates_clip(bary):
Clip negative barycentric coordinates to 0.0 and renormalize so
the barycentric coordinates for a point sum to 1. When the blur_radius
is greater than 0, a face will still be recorded as overlapping a pixel
- if the pixel is outisde the face. In this case at least one of the
+ if the pixel is outside the face. In this case at least one of the
barycentric coordinates for the pixel relative to the face will be negative.
Clipping will ensure that the texture and z buffer are interpolated correctly.
......
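The clipping step described here is essentially (a sketch of the documented behaviour):

.. code-block:: python

    import torch

    def clip_barycentric(bary):
        # Clamp negative coordinates to 0 and renormalize to sum to 1.
        clipped = bary.clamp(min=0.0)
        return clipped / clipped.sum(dim=-1, keepdim=True).clamp(min=1e-8)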
......@@ -60,7 +60,7 @@ class RasterizationSettings:
class MeshRasterizer(nn.Module):
"""
- This class implements methods for rasterizing a batch of heterogenous
+ This class implements methods for rasterizing a batch of heterogeneous
Meshes.
"""
......
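For context, the rasterizer is usually driven through `RasterizationSettings` (a sketch; `cameras` and `meshes` are placeholders):

.. code-block:: python

    from pytorch3d.renderer import MeshRasterizer, RasterizationSettings

    raster_settings = RasterizationSettings(
        image_size=512,
        blur_radius=0.0,
        faces_per_pixel=1,
        bin_size=None,           # let the heuristic pick the bin size
        max_faces_per_bin=None,  # cap on faces per bin; overflow raises an error
    )
    rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
    fragments = rasterizer(meshes)  # pix_to_face, zbuf, bary_coords, dists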
......@@ -240,7 +240,7 @@ class TexturesBase:
number of faces in the i-th mesh and C is the dimensional of
the feature (C = 3 for RGB textures).
You can use the utils function in structures.utils to convert the
- packed respresentation to a list or padded.
+ packed representation to a list or padded.
"""
raise NotImplementedError()
......@@ -261,10 +261,10 @@ class TexturesBase:
def __getitem__(self, index):
"""
Each texture class should implement a method
- to get the texture properites for the
+ to get the texture properties for the
specified elements in the batch.
The TexturesBase._getitem(i) method
- can be used as a helper funtion to retrieve the
+ can be used as a helper function to retrieve the
class attributes for item i. Then, a new
instance of the child class can be created with
the attributes.
......@@ -496,7 +496,7 @@ class TexturesAtlas(TexturesBase):
of the faces (in the packed representation) which
overlap each pixel in the image.
- barycentric_coords: FloatTensor of shape (N, H, W, K, 3) specifying
- the barycentric coordianates of each pixel
+ the barycentric coordinates of each pixel
relative to the faces (in the packed
representation) which overlap the pixel.
......@@ -536,7 +536,7 @@ class TexturesAtlas(TexturesBase):
For N meshes with {Fi} number of faces, it returns a
tensor of shape sum(Fi)x3xD (D = 3 for RGB).
You can use the utils function in structures.utils to convert the
packed respresentation to a list or padded.
packed representation to a list or padded.
"""
atlas_packed = self.atlas_packed()
# assume each face consists of (v0, v1, v2).
......@@ -892,7 +892,7 @@ class TexturesUV(TexturesBase):
of the faces (in the packed representation) which
overlap each pixel in the image.
- barycentric_coords: FloatTensor of shape (N, H, W, K, 3) specifying
the barycentric coordianates of each pixel
the barycentric coordinates of each pixel
relative to the faces (in the packed
representation) which overlap the pixel.
......@@ -1233,7 +1233,7 @@ class TexturesVertex(TexturesBase):
Args:
verts_features: list of (Vi, D) or (N, V, D) tensor giving a feature
- vector with artbitrary dimensions for each vertex.
+ vector with arbitrary dimensions for each vertex.
"""
if isinstance(verts_features, (tuple, list)):
correct_shape = all(
......@@ -1356,7 +1356,7 @@ class TexturesVertex(TexturesBase):
def sample_textures(self, fragments, faces_packed=None) -> torch.Tensor:
"""
- Detemine the color for each rasterized face. Interpolate the colors for
+ Determine the color for each rasterized face. Interpolate the colors for
vertices which form the face using the barycentric coordinates.
Args:
fragments:
......@@ -1366,7 +1366,7 @@ class TexturesVertex(TexturesBase):
of the faces (in the packed representation) which
overlap each pixel in the image.
- barycentric_coords: FloatTensor of shape (N, H, W, K, 3) specifying
the barycentric coordianates of each pixel
the barycentric coordinates of each pixel
relative to the faces (in the packed
representation) which overlap the pixel.
......@@ -1389,7 +1389,7 @@ class TexturesVertex(TexturesBase):
For N meshes with {Fi} number of faces, it returns a
tensor of shape sum(Fi)x3xC (C = 3 for RGB).
You can use the utils function in structures.utils to convert the
packed respresentation to a list or padded.
packed representation to a list or padded.
"""
verts_features_packed = self.verts_features_packed()
faces_verts_features = verts_features_packed[faces_packed]
......
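A short sketch of constructing vertex textures and attaching them to a mesh batch (shapes assumed from the docstrings above):

.. code-block:: python

    import torch
    from pytorch3d.renderer import TexturesVertex
    from pytorch3d.structures import Meshes

    verts = torch.rand(100, 3)
    faces = torch.randint(0, 100, (50, 3))
    verts_rgb = torch.rand(1, 100, 3)  # (N, V, D) with D = 3 for RGB
    textures = TexturesVertex(verts_features=verts_rgb)
    meshes = Meshes(verts=[verts], faces=[faces], textures=textures)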
......@@ -44,7 +44,7 @@ def _interpolate_zbuf(
of the faces (in the packed representation) which
overlap each pixel in the image.
barycentric_coords: FloatTensor of shape (N, H, W, K, 3) specifying
the barycentric coordianates of each pixel
the barycentric coordinates of each pixel
relative to the faces (in the packed
representation) which overlap the pixel.
meshes: Meshes object representing a batch of meshes.
......@@ -98,7 +98,7 @@ def _try_place_rectangle(
Example:
(We always have placed the first rectangle horizontally and other
rectangles above it.)
- Let's say the placed boxes 1-4 are layed out like this.
+ Let's say the placed boxes 1-4 are laid out like this.
The coordinates of the points marked X are stored in occupied.
It is to the right of the X's that we seek to place rect.
......
......@@ -8,7 +8,7 @@ from pytorch3d import _C # pyre-fixme[21]: Could not find name `_C` in `pytorch
from pytorch3d.renderer.mesh.rasterize_meshes import pix_to_non_square_ndc
- # Maxinum number of faces per bins for
+ # Maximum number of faces per bins for
# coarse-to-fine rasterization
kMaxPointsPerBin = 22
......@@ -59,7 +59,7 @@ def rasterize_points(
set it heuristically based on the shape of the input. This should not
affect the output, but can affect the speed of the forward pass.
points_per_bin: Only applicable when using coarse-to-fine rasterization
- (bin_size > 0); this is the maxiumum number of points allowed within each
+ (bin_size > 0); this is the maximum number of points allowed within each
bin. If more than this many points actually fall into a bin, an error
will be raised. This should not affect the output values, but can affect
the memory usage in the forward pass.
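For context, a typical call looks like this (a sketch; the import path and return order are assumed from this module's documentation, and `pointclouds` is a placeholder):

.. code-block:: python

    from pytorch3d.renderer.points.rasterize_points import rasterize_points

    idx, zbuf, dists = rasterize_points(
        pointclouds, image_size=256, radius=0.01, points_per_pixel=8
    )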
......@@ -95,7 +95,7 @@ def rasterize_points(
radius = _format_radius(radius, pointclouds)
# In the case that H != W use the max image size to set the bin_size
- # to accommodate the num bins constraint in the coarse rasteizer.
+ # to accommodate the num bins constraint in the coarse rasterizer.
# If the ratio of H:W is large this might cause issues as the smaller
# dimension will have fewer bins.
# TODO: consider a better way of setting the bin size.
......@@ -276,7 +276,7 @@ def rasterize_points_python(
# Support variable size radius for each point in the batch
radius = _format_radius(radius, pointclouds)
- # Intialize output tensors.
+ # Initialize output tensors.
point_idxs = torch.full(
(N, H, W, K), fill_value=-1, dtype=torch.int32, device=device
)
......
......@@ -76,7 +76,7 @@ class TensorAccessor(nn.Module):
if hasattr(self.class_object, name):
return self.class_object.__dict__[name][self.index]
else:
msg = "Attribue %s not found on %r"
msg = "Attribute %s not found on %r"
return AttributeError(msg % (name, self.class_object.__name__))
......
......@@ -22,13 +22,13 @@ class Meshes(object):
- has specific batch dimension.
Packed
- no batch dimension.
- - has auxillary variables used to index into the padded representation.
+ - has auxiliary variables used to index into the padded representation.
Example:
Input list of verts V_n = [[V_1], [V_2], ... , [V_N]]
where V_1, ... , V_N are the number of verts in each mesh and N is the
- numer of meshes.
+ number of meshes.
Input list of faces F_n = [[F_1], [F_2], ... , [F_N]]
where F_1, ... , F_N are the number of faces in each mesh.
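The three representations can be seen side by side in a small sketch (standard `Meshes` accessors):

.. code-block:: python

    import torch
    from pytorch3d.structures import Meshes

    verts1, verts2 = torch.rand(4, 3), torch.rand(5, 3)
    faces1, faces2 = torch.randint(0, 4, (3, 3)), torch.randint(0, 5, (6, 3))
    meshes = Meshes(verts=[verts1, verts2], faces=[faces1, faces2])

    meshes.verts_list()    # list of per-mesh tensors
    meshes.verts_padded()  # (N, max(V_n), 3), zero-padded
    meshes.verts_packed()  # (sum(V_n), 3), no batch dimension
    meshes.verts_packed_to_mesh_idx()  # auxiliary index back into the batch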
......@@ -100,7 +100,7 @@ class Meshes(object):
| ]) |
-----------------------------------------------------------------------------
- Auxillary variables for packed representation
+ Auxiliary variables for packed representation
Name | Size | Example from above
-------------------------------|---------------------|-----------------------
......@@ -139,7 +139,7 @@ class Meshes(object):
# SPHINX IGNORE
From the faces, edges are computed and have packed and padded
- representations with auxillary variables.
+ representations with auxiliary variables.
E_n = [[E_1], ... , [E_N]]
where E_1, ... , E_N are the number of unique edges in each mesh.
......@@ -894,7 +894,7 @@ class Meshes(object):
def _compute_packed(self, refresh: bool = False):
"""
Computes the packed version of the meshes from verts_list and faces_list
- and sets the values of auxillary tensors.
+ and sets the values of auxiliary tensors.
Args:
refresh: Set to True to force recomputation of packed representations.
......@@ -1022,7 +1022,7 @@ class Meshes(object):
# Remove duplicate edges: convert each edge (v0, v1) into an
# integer hash = V * v0 + v1; this allows us to use the scalar version of
# unique which is much faster than edges.unique(dim=1) which is very slow.
- # After finding the unique elements reconstruct the vertex indicies as:
+ # After finding the unique elements reconstruct the vertex indices as:
# (v0, v1) = (hash / V, hash % V)
# The inverse maps from unique_edges back to edges:
# unique_edges[inverse_idxs] == edges
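The hashing trick in this comment can be reproduced in isolation (a standalone sketch of the described technique):

.. code-block:: python

    import torch

    V = 100  # number of vertices
    edges = torch.tensor([[0, 1], [2, 3], [0, 1]])  # (E, 2), with v0 < v1
    hashed = V * edges[:, 0] + edges[:, 1]
    u, inverse_idxs = torch.unique(hashed, return_inverse=True)
    unique_edges = torch.stack([u // V, u % V], dim=1)
    # The inverse maps from unique_edges back to edges:
    assert torch.equal(unique_edges[inverse_idxs], edges)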
......
......@@ -18,7 +18,7 @@ class Pointclouds(object):
- has specific batch dimension.
Packed
- no batch dimension.
- - has auxillary variables used to index into the padded representation.
+ - has auxiliary variables used to index into the padded representation.
Example
......@@ -61,7 +61,7 @@ class Pointclouds(object):
| ]) |
-----------------------------------------------------------------------------
- Auxillary variables for packed representation
+ Auxiliary variables for packed representation
Name | Size | Example from above
-------------------------------|---------------------|-----------------------
......@@ -265,7 +265,7 @@ class Pointclouds(object):
)
if d.device != self.device:
raise ValueError(
"All auxillary inputs must be on the same device as the points."
"All auxiliary inputs must be on the same device as the points."
)
if p > 0:
if d.dim() != 2:
......@@ -291,7 +291,7 @@ class Pointclouds(object):
)
if aux_input.device != self.device:
raise ValueError(
"All auxillary inputs must be on the same device as the points."
"All auxiliary inputs must be on the same device as the points."
)
aux_input_C = aux_input.shape[2]
return None, aux_input, aux_input_C
......@@ -508,7 +508,7 @@ class Pointclouds(object):
def padded_to_packed_idx(self):
"""
Return a 1D tensor x with length equal to the total number of points
- such that points_packed()[i] is element x[i] of the  flattened padded
+ such that points_packed()[i] is element x[i] of the flattened padded
representation.
The packed representation can be calculated as follows.
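(The rest of that docstring is elided in this diff; illustratively, the stated relationship amounts to the following, with `pointclouds` a hypothetical `Pointclouds` instance:)

.. code-block:: python

    import torch

    x = pointclouds.padded_to_packed_idx()
    flat = pointclouds.points_padded().reshape(-1, 3)
    assert torch.equal(pointclouds.points_packed(), flat[x])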
......@@ -573,7 +573,7 @@ class Pointclouds(object):
def _compute_packed(self, refresh: bool = False):
"""
Computes the packed version from points_list, normals_list and
- features_list and sets the values of auxillary tensors.
+ features_list and sets the values of auxiliary tensors.
Args:
refresh: Set to True to force recomputation of packed
......@@ -910,7 +910,7 @@ class Pointclouds(object):
**neighborhood_size**: The size of the neighborhood used to estimate the
geometry around each point.
**disambiguate_directions**: If `True`, uses the algorithm from [1] to
- ensure sign consistency of the normals of neigboring points.
+ ensure sign consistency of the normals of neighboring points.
**normals**: A tensor of normals for each input point
of shape `(minibatch, num_point, 3)`.
If `pointclouds` are of `Pointclouds` class, returns a padded tensor.
......@@ -985,7 +985,7 @@ class Pointclouds(object):
Args:
new_points_padded: FloatTensor of shape (N, P, 3)
new_normals_padded: (optional) FloatTensor of shape (N, P, 3)
- new_features_padded: (optional) FloatTensors of shape (N, P, C)
+ new_features_padded: (optional) FloatTensor of shape (N, P, C)
Returns:
Pointcloud with updated padded representations
......