Unverified Commit 387ae76d authored by Hongzhi (Steve), Chen, committed by GitHub

[Doc] Update with Mono font to keep doc string consistent. (#5213)



* part1

* more

* more

* spmatrix

* code
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
parent d61012e0
@@ -181,7 +181,7 @@ class DiagMatrix:
>>> val = torch.ones(2)
>>> D = dglsp.diag(val)
>>> D.to(device='cuda:0', dtype=torch.int32)
>>> D.to(device="cuda:0", dtype=torch.int32)
DiagMatrix(values=tensor([1, 1], device='cuda:0', dtype=torch.int32),
shape=(2, 2))
"""
@@ -198,7 +198,7 @@ class DiagMatrix:
def cuda(self):
"""Moves the matrix to GPU. If the matrix is already on GPU, the
original matrix will be returned. If multiple GPU devices exist,
'cuda:0' will be selected.
``cuda:0`` will be selected.
Returns
-------
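As a quick, untested sketch of the two DiagMatrix device helpers touched in the hunks above (it assumes a CUDA device is available, otherwise the GPU calls would raise):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    # A 2 x 2 diagonal matrix built from a length-2 value tensor.
    D = dglsp.diag(torch.ones(2))

    # Cast the dtype and move to the first GPU in one call.
    D_gpu = D.to(device="cuda:0", dtype=torch.int32)

    # Device move only; cuda:0 is selected when several GPUs exist.
    D_gpu2 = D.cuda()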
@@ -383,7 +383,7 @@ def identity(
Shape of the matrix.
d : int, optional
If None, the diagonal entries will be scalar 1. Otherwise, the diagonal
entries will be a 1-valued tensor of shape (d).
entries will be a 1-valued tensor of shape ``(d)``.
dtype : torch.dtype, optional
The data type of the matrix
device : torch.device, optional
@@ -399,6 +399,8 @@ def identity(
Case1: 3-by-3 matrix with scalar diagonal values
.. code::
[[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]
@@ -409,6 +411,8 @@ def identity(
Case2: 3-by-5 matrix with scalar diagonal values
.. code::
[[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0]]
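For reference, a minimal sketch of the ``identity()`` signature documented above; the rectangular call mirrors the 3-by-5 case in the docstring, and the concrete shapes are illustrative only:

.. code:: python

    import torch
    import dgl.sparse as dglsp

    # 3-by-3 identity with scalar diagonal entries.
    I = dglsp.identity((3, 3))

    # 3-by-5 variant: ones on the main diagonal, zeros elsewhere.
    I_rect = dglsp.identity((3, 5))

    # d=2: each diagonal entry becomes a 1-valued tensor of shape (2,).
    I_vec = dglsp.identity((3, 3), d=2, dtype=torch.float32)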
@@ -254,10 +254,9 @@ def matmul(
* The operator supports batched sparse-dense matrix multiplication. In \
this case, the sparse or diagonal matrix :attr:`A` should have shape \
:math:`(L, M)`, where the non-zero values have a batch dimension \
:math:`K`. The dense matrix :attr:`B` should have shape \
:math:`(M, N, K)`. The output is a dense matrix of shape \
:math:`(L, N, K)`.
``(L, M)``, where the non-zero values have a batch dimension ``K``. \
The dense matrix :attr:`B` should have shape ``(M, N, K)``. The output \
is a dense matrix of shape ``(L, N, K)``.
* Sparse-sparse matrix multiplication does not support batched computation.
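A hedged sketch of the batched sparse-dense case described above, using the shapes from the docstring (the concrete sizes and indices are illustrative only):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    L, M, N, K = 3, 4, 2, 5

    # Sparse A of shape (L, M) whose non-zero values carry a batch dimension K.
    indices = torch.tensor([[0, 1, 2], [1, 2, 3]])   # (2, nnz)
    val = torch.randn(indices.shape[1], K)           # (nnz, K)
    A = dglsp.spmatrix(indices, val, shape=(L, M))

    # Dense B of shape (M, N, K); the product is a dense (L, N, K) tensor.
    B = torch.randn(M, N, K)
    C = dglsp.matmul(A, B)
    assert C.shape == (L, N, K)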
@@ -9,8 +9,8 @@ from .sparse_matrix import SparseMatrix
def reduce(input: SparseMatrix, dim: Optional[int] = None, rtype: str = "sum"):
"""Computes the reduction of non-zero values of the ``input`` sparse matrix
along the given dimension :attr:`dim`.
"""Computes the reduction of non-zero values of the :attr:`input` sparse
matrix along the given dimension :attr:`dim`.
The reduction does not count zero elements. If the row or column to be
reduced does not have any non-zero elements, the result will be 0.
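For context, a small sketch of the generic ``reduce()`` entry point described above, assuming it is re-exported under the ``dglsp`` namespace like the named reducers (the indices and values are illustrative):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    indices = torch.tensor([[0, 0, 1], [0, 2, 2]])
    A = dglsp.spmatrix(indices, torch.tensor([1., 2., 3.]), shape=(3, 3))

    dglsp.reduce(A, rtype="sum")         # reduce over all non-zero values
    dglsp.reduce(A, dim=0, rtype="sum")  # one value per column
    dglsp.reduce(A, dim=1, rtype="sum")  # one value per row; row 2 has no
                                         # non-zeros, so its entry is 0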
@@ -83,8 +83,8 @@ def reduce(input: SparseMatrix, dim: Optional[int] = None, rtype: str = "sum"):
def sum(input: SparseMatrix, dim: Optional[int] = None):
"""Computes the sum of non-zero values of the ``input`` sparse matrix along
the given dimension :attr:`dim`.
"""Computes the sum of non-zero values of the :attr:`input` sparse matrix
along the given dimension :attr:`dim`.
Parameters
----------
@@ -137,8 +137,8 @@ def sum(input: SparseMatrix, dim: Optional[int] = None):
def smax(input: SparseMatrix, dim: Optional[int] = None):
"""Computes the maximum of non-zero values of the ``input`` sparse matrix
along the given dimension :attr:`dim`.
"""Computes the maximum of non-zero values of the :attr:`input` sparse
matrix along the given dimension :attr:`dim`.
The reduction does not count zero values. If the row or column to be
reduced does not have any non-zero value, the result will be 0.
@@ -195,8 +195,8 @@ def smax(input: SparseMatrix, dim: Optional[int] = None):
def smin(input: SparseMatrix, dim: Optional[int] = None):
"""Computes the minimum of non-zero values of the ``input`` sparse matrix
along the given dimension :attr:`dim`.
"""Computes the minimum of non-zero values of the :attr:`input` sparse
matrix along the given dimension :attr:`dim`.
The reduction does not count zero values. If the row or column to be reduced
does not have any non-zero value, the result will be 0.
@@ -257,8 +257,8 @@ def smin(input: SparseMatrix, dim: Optional[int] = None):
def smean(input: SparseMatrix, dim: Optional[int] = None):
"""Computes the mean of non-zero values of the ``input`` sparse matrix along
the given dimension :attr:`dim`.
"""Computes the mean of non-zero values of the :attr:`input` sparse matrix
along the given dimension :attr:`dim`.
The reduction does not count zero values. If the row or column to be reduced
does not have any non-zero value, the result will be 0.
@@ -319,8 +319,8 @@ def smean(input: SparseMatrix, dim: Optional[int] = None):
def sprod(input: SparseMatrix, dim: Optional[int] = None):
"""Computes the product of non-zero values of the ``input`` sparse matrix
along the given dimension :attr:`dim`.
"""Computes the product of non-zero values of the :attr:`input` sparse
matrix along the given dimension :attr:`dim`.
The reduction does not count zero values. If the row or column to be reduced
does not have any non-zero value, the result will be 0.
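The five named reducers above share the same calling convention; a combined sketch, assuming they are all exposed on the ``dglsp`` namespace:

.. code:: python

    import torch
    import dgl.sparse as dglsp

    indices = torch.tensor([[0, 0, 1], [0, 2, 2]])
    A = dglsp.spmatrix(indices, torch.tensor([1., 2., 3.]), shape=(3, 3))

    dglsp.sum(A, dim=1)    # per-row sum of non-zero values
    dglsp.smax(A, dim=1)   # per-row max; rows without non-zeros give 0
    dglsp.smin(A, dim=1)   # per-row min over non-zero values only
    dglsp.smean(A, dim=1)  # per-row mean, zero entries not counted
    dglsp.sprod(A)         # product of all non-zero values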
@@ -14,7 +14,7 @@ def softmax(input: SparseMatrix) -> SparseMatrix:
Equivalently, applies softmax to the non-zero elements of the sparse
matrix along the column (``dim=1``) dimension.
If :attr:`input.val` takes shape :attr:`(nnz, D)`, then the output matrix
If :attr:`input.val` takes shape ``(nnz, D)``, then the output matrix
:attr:`output` and :attr:`output.val` take the same shape as :attr:`input`
and :attr:`input.val`. :attr:`output.val[:, i]` is calculated based on
:attr:`input.val[:, i]`.
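A short sketch of the behaviour described above: when ``A.val`` has shape ``(nnz, D)``, softmax is applied to each of the ``D`` channels independently, row by row (the indices below are illustrative):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    indices = torch.tensor([[0, 0, 1, 2], [1, 2, 2, 0]])
    val = torch.randn(4, 2)              # (nnz, D) with D = 2
    A = dglsp.spmatrix(indices, val, shape=(3, 3))

    Y = dglsp.softmax(A)                 # same sparsity pattern as A
    # Each row's non-zero entries now sum to 1 in every channel.
    print(Y.val.shape)                   # torch.Size([4, 2])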
@@ -246,7 +246,7 @@ class SparseMatrix:
>>> indices = torch.tensor([[1, 1, 2], [1, 2, 0]])
>>> A = dglsp.spmatrix(indices, shape=(3, 4))
>>> A.to(device='cuda:0', dtype=torch.int32)
>>> A.to(device="cuda:0", dtype=torch.int32)
SparseMatrix(indices=tensor([[1, 1, 2],
[1, 2, 0]], device='cuda:0'),
values=tensor([1, 1, 1], device='cuda:0',
@@ -274,7 +274,7 @@ class SparseMatrix:
def cuda(self):
"""Moves the matrix to GPU. If the matrix is already on GPU, the
original matrix will be returned. If multiple GPU devices exist,
'cuda:0' will be selected.
``cuda:0`` will be selected.
Returns
-------
@@ -308,6 +308,7 @@ class SparseMatrix:
>>> indices = torch.tensor([[1, 1, 2], [1, 2, 0]]).to("cuda")
>>> A = dglsp.spmatrix(indices, shape=(3, 4))
>>> A.cpu()
SparseMatrix(indices=tensor([[1, 1, 2],
[1, 2, 0]]),
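Taken together, the three SparseMatrix device helpers touched in this file behave like their dense-tensor counterparts; a minimal sketch (the GPU calls assume CUDA is available):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    indices = torch.tensor([[1, 1, 2], [1, 2, 0]])
    A = dglsp.spmatrix(indices, shape=(3, 4))

    A_gpu = A.to(device="cuda:0", dtype=torch.int32)  # move and cast in one call
    A_gpu = A.cuda()                                  # cuda:0 is picked by default
    A_cpu = A_gpu.cpu()                               # back to host memory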
@@ -466,12 +467,12 @@ def spmatrix(
which should have shape of ``(2, N)`` where the first row is the row
indices and the second row is the column indices of non-zero elements.
val : tensor.Tensor, optional
The values of shape (nnz) or (nnz, D). If None, it will be a tensor of
shape (nnz) filled by 1.
The values of shape ``(nnz)`` or ``(nnz, D)``. If None, it will be a
tensor of shape ``(nnz)`` filled by 1.
shape : tuple[int, int], optional
If not specified, it will be inferred from :attr:`row` and :attr:`col`,
i.e., (row.max() + 1, col.max() + 1). Otherwise, :attr:`shape` should
be no smaller than this.
i.e., ``(row.max() + 1, col.max() + 1)``. Otherwise, :attr:`shape`
should be no smaller than this.
Returns
-------
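The shape-inference rule above can be summarised in a short sketch: when ``shape`` is omitted it defaults to ``(row.max() + 1, col.max() + 1)`` (the indices and values are illustrative):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    indices = torch.tensor([[1, 1, 2], [1, 2, 0]])

    A = dglsp.spmatrix(indices)                # inferred shape: (3, 3)
    B = dglsp.spmatrix(indices, shape=(3, 4))  # explicit; no smaller than inferred

    # Vector-valued non-zeros: val of shape (nnz, D).
    val = torch.randn(3, 8)
    C = dglsp.spmatrix(indices, val, shape=(3, 4))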
@@ -540,16 +541,16 @@ def from_coo(
Parameters
----------
row : torch.Tensor
The row indices of shape (nnz)
The row indices of shape ``(nnz)``
col : torch.Tensor
The column indices of shape (nnz)
The column indices of shape ``(nnz)``
val : torch.Tensor, optional
The values of shape (nnz) or (nnz, D). If None, it will be a tensor of
shape (nnz) filled by 1.
The values of shape ``(nnz)`` or ``(nnz, D)``. If None, it will be a
tensor of shape ``(nnz)`` filled by 1.
shape : tuple[int, int], optional
If not specified, it will be inferred from :attr:`row` and :attr:`col`,
i.e., (row.max() + 1, col.max() + 1). Otherwise, :attr:`shape` should
be no smaller than this.
i.e., ``(row.max() + 1, col.max() + 1)``. Otherwise, :attr:`shape`
should be no smaller than this.
Returns
-------
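A sketch of the COO constructor with and without explicit values; the defaults match ``spmatrix``, only the indices are passed as two separate 1-D tensors (the numbers are illustrative):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    row = torch.tensor([1, 1, 2])
    col = torch.tensor([1, 2, 0])

    A = dglsp.from_coo(row, col)                # val defaults to ones of shape (nnz)
    B = dglsp.from_coo(row, col, shape=(3, 4))  # must cover the inferred (3, 3)

    val = torch.tensor([0.1, 0.2, 0.3])
    C = dglsp.from_coo(row, col, val, shape=(3, 4))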
@@ -630,17 +631,17 @@ def from_csr(
Parameters
----------
indptr : torch.Tensor
Pointer to the column indices of shape (N + 1), where N is the number
of rows
Pointer to the column indices of shape ``(N + 1)``, where ``N`` is the
number of rows
indices : torch.Tensor
The column indices of shape (nnz)
The column indices of shape ``(nnz)``
val : torch.Tensor, optional
The values of shape (nnz) or (nnz, D). If None, it will be a tensor of
shape (nnz) filled by 1.
The values of shape ``(nnz)`` or ``(nnz, D)``. If None, it will be a
tensor of shape ``(nnz)`` filled by 1.
shape : tuple[int, int], optional
If not specified, it will be inferred from :attr:`indptr` and
:attr:`indices`, i.e., (len(indptr) - 1, indices.max() + 1). Otherwise,
:attr:`shape` should be no smaller than this.
:attr:`indices`, i.e., ``(len(indptr) - 1, indices.max() + 1)``.
Otherwise, :attr:`shape` should be no smaller than this.
Returns
-------
@@ -652,6 +653,8 @@ def from_csr(
Case1: Sparse matrix without values
.. code::
[[0, 1, 0],
[0, 0, 1],
[1, 1, 1]]
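For the 3-by-3 example matrix shown in the docstring above, a CSR construction sketch looks like this (``indptr`` and ``indices`` are derived by hand from that matrix):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    # CSR form of [[0, 1, 0],
    #              [0, 0, 1],
    #              [1, 1, 1]]
    indptr = torch.tensor([0, 1, 2, 5])
    indices = torch.tensor([1, 2, 0, 1, 2])

    A = dglsp.from_csr(indptr, indices)              # shape inferred as (3, 3)

    val = torch.arange(1, 6, dtype=torch.float32)    # one value per non-zero
    B = dglsp.from_csr(indptr, indices, val, shape=(3, 3))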
@@ -725,12 +728,12 @@ def from_csc(
indices : torch.Tensor
The row indices of shape nnz
val : torch.Tensor, optional
The values of shape (nnz) or (nnz, D). If None, it will be a tensor of
shape (nnz) filled by 1.
The values of shape ``(nnz)`` or ``(nnz, D)``. If None, it will be a
tensor of shape ``(nnz)`` filled by 1.
shape : tuple[int, int], optional
If not specified, it will be inferred from :attr:`indptr` and
:attr:`indices`, i.e., (indices.max() + 1, len(indptr) - 1). Otherwise,
:attr:`shape` should be no smaller than this.
:attr:`indices`, i.e., ``(indices.max() + 1, len(indptr) - 1)``.
Otherwise, :attr:`shape` should be no smaller than this.
Returns
-------
@@ -742,6 +745,8 @@ def from_csc(
Case1: Sparse matrix without values
.. code::
[[0, 1, 0],
[0, 0, 1],
[1, 1, 1]]
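The same 3-by-3 example matrix in CSC form, again with hand-derived ``indptr`` and row ``indices``; a minimal sketch:

.. code:: python

    import torch
    import dgl.sparse as dglsp

    # CSC form of [[0, 1, 0],
    #              [0, 0, 1],
    #              [1, 1, 1]]
    indptr = torch.tensor([0, 1, 3, 5])
    indices = torch.tensor([2, 0, 2, 1, 2])   # row indices, column by column

    A = dglsp.from_csc(indptr, indices)       # shape inferred as (3, 3)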
@@ -801,7 +806,8 @@ def val_like(mat: SparseMatrix, val: torch.Tensor) -> SparseMatrix:
mat : SparseMatrix
An existing sparse matrix with non-zero values
val : torch.Tensor
The new values of the non-zero elements, a tensor of shape (nnz) or (nnz, D)
The new values of the non-zero elements, a tensor of shape ``(nnz)`` or
``(nnz, D)``
Returns
-------
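A quick sketch of ``val_like``, which keeps the sparsity pattern of an existing matrix and only swaps its values (the replacement values are illustrative):

.. code:: python

    import torch
    import dgl.sparse as dglsp

    indices = torch.tensor([[1, 1, 2], [1, 2, 0]])
    A = dglsp.spmatrix(indices, shape=(3, 4))        # values default to ones

    # New scalar values, one per non-zero element.
    B = dglsp.val_like(A, torch.tensor([10., 20., 30.]))

    # A vector-valued replacement of shape (nnz, D) is also accepted.
    C = dglsp.val_like(A, torch.randn(3, 4))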