Unverified commit 37bd0925 authored by Mufei Li, committed by GitHub

[Sparse] Namespace migration and doc update (#5088)



* update

* lint

* lint

* comments
Co-authored-by: Ubuntu <ubuntu@ip-172-31-36-188.ap-northeast-1.compute.internal>
parent e65fc9ff
.. _apibackend:
-dgl.mock_sparse
+dgl.sparse
=================================
-`dgl_sparse` is a library for sparse operators that are commonly used in GNN models.
+`dgl.sparse` is a library for sparse operators that are commonly used in GNN models.
.. warning::
This is an experimental package. The sparse operators provided in this library do not guarantee the same performance as their message-passing api counterparts.
Sparse matrix class
-------------------------
-.. currentmodule:: dgl.mock_sparse
+.. currentmodule:: dgl.sparse
.. class:: SparseMatrix
-Class for creating a sparse matrix representation. The row and column indices of the sparse matrix can be the source
-(row) and destination (column) indices of a homogeneous or heterogeneous graph.
+Class for creating a sparse matrix representation
There are a few ways to create a sparse matrix:
@@ -23,24 +22,24 @@ Sparse matrix class
* In CSR format using row pointers and col indices, use :func:`create_from_csr`.
* In CSC format using col pointers and row indices, use :func:`create_from_csc`.
-For example, we can create COO matrix as follows:
+For example, one can create COO matrices as follows:
-Case1: Sparse matrix with row and column indices without values.
+Case1: Sparse matrix with row and column indices without values
->>> src = torch.tensor([1, 1, 2])
->>> dst = torch.tensor([2, 4, 3])
->>> A = create_from_coo(src, dst)
+>>> row = torch.tensor([1, 1, 2])
+>>> col = torch.tensor([2, 4, 3])
+>>> A = create_from_coo(row, col)
>>> A
SparseMatrix(indices=tensor([[1, 1, 2],
[2, 4, 3]]),
values=tensor([1., 1., 1.]),
shape=(3, 5), nnz=3)
-Case2: Sparse matrix with scalar/vector values. Following example is with
-vector data.
+Case2: Sparse matrix with scalar/vector values
+>>> # vector values
>>> val = torch.tensor([[1, 1], [2, 2], [3, 3]])
->>> A = create_from_coo(src, dst, val)
+>>> A = create_from_coo(row, col, val)
SparseMatrix(indices=tensor([[1, 1, 2],
[2, 4, 3]]),
values=tensor([[1, 1],
@@ -48,7 +47,7 @@ Sparse matrix class
[3, 3]]),
shape=(3, 5), nnz=3)
-Similarly, we can create CSR matrix as follows:
+Similarly, one can create a CSR matrix as follows:
>>> indptr = torch.tensor([0, 1, 2, 5])
>>> indices = torch.tensor([1, 2, 0, 1, 2])
@@ -56,31 +55,45 @@ Sparse matrix class
>>> A = create_from_csr(indptr, indices, val)
>>> A
SparseMatrix(indices=tensor([[0, 1, 2, 2, 2],
[1, 2, 0, 1, 2]]),
values=tensor([[1, 1],
[2, 2],
[3, 3],
[4, 4],
[5, 5]]),
shape=(3, 3), nnz=5)
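As a plain-Python aside (illustrative only, not DGL code), the CSR example above can be sanity-checked by expanding the row pointers into COO row indices: `indptr = [0, 1, 2, 5]` spans one stored nonzero for rows 0 and 1 and three for row 2.

```python
# Plain-Python sketch (not DGL code): expand CSR row pointers into COO row
# indices. indptr[i]:indptr[i+1] spans the nonzeros stored for row i.
def csr_to_coo_rows(indptr):
    rows = []
    for i in range(len(indptr) - 1):
        rows.extend([i] * (indptr[i + 1] - indptr[i]))
    return rows

print(csr_to_coo_rows([0, 1, 2, 5]))  # [0, 1, 2, 2, 2]
```

Pairing these rows with `indices = [1, 2, 0, 1, 2]` reproduces exactly the COO indices printed in the doctest output.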
Sparse matrix class attributes
------------------------------
Creators
````````
.. autosummary::
:toctree: ../../generated/
create_from_coo
create_from_csr
create_from_csc
val_like
Attributes and methods
``````````````````````
.. autosummary::
:toctree: ../../generated/
SparseMatrix.shape
SparseMatrix.nnz
SparseMatrix.dtype
SparseMatrix.device
SparseMatrix.val
SparseMatrix.__repr__
SparseMatrix.row
SparseMatrix.col
-SparseMatrix.val
-__call__
SparseMatrix.indices
SparseMatrix.coo
SparseMatrix.csr
SparseMatrix.csc
SparseMatrix.coalesce
SparseMatrix.has_duplicate
SparseMatrix.dense
SparseMatrix.t
SparseMatrix.T
@@ -90,64 +103,67 @@ Sparse matrix class attributes
SparseMatrix.smax
SparseMatrix.smin
SparseMatrix.smean
-SparseMatrix.__neg__
SparseMatrix.inv
+SparseMatrix.neg
SparseMatrix.softmax
SparseMatrix.__matmul__
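To make `coalesce` and `has_duplicate` concrete, here is a plain-Python sketch (illustrative only, not DGL's implementation), assuming duplicate `(row, col)` coordinates are merged by summing their values:

```python
# Plain-Python sketch (not DGL code) of coalescing a COO matrix: duplicate
# (row, col) coordinates are merged; here we assume duplicates sum their values.
def coalesce(rows, cols, vals):
    merged = {}
    for r, c, v in zip(rows, cols, vals):
        merged[(r, c)] = merged.get((r, c), 0) + v
    keys = sorted(merged)
    return ([k[0] for k in keys], [k[1] for k in keys], [merged[k] for k in keys])

def has_duplicate(rows, cols):
    # True if any (row, col) coordinate is stored more than once.
    return len(set(zip(rows, cols))) < len(rows)

rows, cols, vals = coalesce([1, 1, 2], [2, 2, 3], [10, 20, 30])
print(rows, cols, vals)  # [1, 2] [2, 3] [30, 30]
```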
-.. autosummary::
-:toctree: ../../generated/
-create_from_coo
-create_from_csr
-create_from_csc
Diagonal matrix class
-------------------------
-.. currentmodule:: dgl.mock_sparse
+.. currentmodule:: dgl.sparse
-.. autoclass:: DiagMatrix
+.. class:: DiagMatrix
Diagonal matrix class attributes
--------------------------------
Creators
````````
.. autosummary::
:toctree: ../../generated/
diag
identity
Attributes and methods
``````````````````````
.. autosummary::
:toctree: ../../generated/
-DiagMatrix.val
DiagMatrix.shape
-DiagMatrix.__call__
DiagMatrix.nnz
DiagMatrix.dtype
DiagMatrix.device
DiagMatrix.val
DiagMatrix.__repr__
DiagMatrix.as_sparse
DiagMatrix.dense
DiagMatrix.t
DiagMatrix.T
DiagMatrix.transpose
DiagMatrix.reduce
DiagMatrix.sum
DiagMatrix.smax
DiagMatrix.smin
DiagMatrix.smean
-DiagMatrix.__neg__
+DiagMatrix.neg
DiagMatrix.inv
DiagMatrix.softmax
DiagMatrix.__matmul__
-.. autosummary::
-:toctree: ../../generated/
-diag
-identity
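To illustrate the `diag` creator, here is a plain-Python sketch (illustrative only, not DGL code) of materializing a diagonal matrix from its values, as `DiagMatrix.dense` conceptually does:

```python
# Plain-Python sketch (not DGL code): a diagonal matrix is fully described by
# its values; densifying it places val[i] at position (i, i) and zeros elsewhere.
def diag_dense(val, shape=None):
    n = len(val)
    rows, cols = shape if shape is not None else (n, n)
    out = [[0] * cols for _ in range(rows)]
    for i, v in enumerate(val):
        out[i][i] = v
    return out

print(diag_dense([1, 2, 3]))  # [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
```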
Operators
---------
-.. currentmodule:: dgl.mock_sparse
+.. currentmodule:: dgl.sparse
.. autosummary::
:toctree: ../../generated/
-sp_add
-sp_mul
-sp_power
-diag_add
-diag_sub
-diag_mul
-diag_div
-diag_power
+add
+power
spmm
-spspmm
bspmm
bspspmm
+spspmm
+mm
sddmm
bsddmm
softmax
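The semantics of `spmm` (sparse-dense matrix multiplication) can be sketched in plain Python (illustrative only, not DGL's kernel): only the stored nonzeros of the sparse operand contribute to the product.

```python
# Plain-Python sketch (not DGL code) of spmm: multiply a COO sparse matrix A
# by a dense matrix X, touching only A's stored nonzeros.
def spmm(rows, cols, vals, shape, X):
    n_rows, _ = shape
    n_out = len(X[0])
    Y = [[0.0] * n_out for _ in range(n_rows)]
    for r, c, v in zip(rows, cols, vals):
        for k in range(n_out):
            Y[r][k] += v * X[c][k]
    return Y

# A is 2x2 with A[0][1] = 2 and A[1][0] = 3; X is the 2x2 identity.
print(spmm([0, 1], [1, 0], [2.0, 3.0], (2, 2), [[1.0, 0.0], [0.0, 1.0]]))
# [[0.0, 2.0], [3.0, 0.0]]
```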
@@ -3,7 +3,7 @@
(https://arxiv.org/abs/1810.05997)
"""
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
import torch.nn.functional as F
@@ -2,7 +2,7 @@
[Combining Label Propagation and Simple Models Out-performs
Graph Neural Networks](https://arxiv.org/abs/2010.13993)
"""
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
import torch.nn.functional as F
@@ -3,7 +3,7 @@
(https://arxiv.org/abs/1710.10903)
"""
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
import torch.nn.functional as F
@@ -3,7 +3,7 @@
(https://arxiv.org/abs/1609.02907)
"""
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
import torch.nn.functional as F
@@ -5,7 +5,7 @@
import math
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
"""
Hypergraph Neural Networks (https://arxiv.org/pdf/1809.09401.pdf)
"""
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
import torch.nn.functional as F
@@ -4,7 +4,7 @@ Hypergraph Convolution and Hypergraph Attention
"""
import argparse
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
@@ -3,7 +3,7 @@
(https://arxiv.org/abs/1902.07153)
"""
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
import torch.nn.functional as F
@@ -6,7 +6,7 @@ This example shows a simplified version of SIGN: a precomputed 2-hops diffusion
operator on top of symmetrically normalized adjacency matrix A_hat.
"""
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
import torch.nn.functional as F
@@ -10,7 +10,7 @@ with attention.
import argparse
-import dgl.mock_sparse2 as dglsp
+import dgl.sparse as dglsp
import torch
import torch.nn as nn
"""DGL elementwise operators for diagonal matrix module."""
from typing import Union
-from .diag_matrix import DiagMatrix, diag
+from .diag_matrix import diag, DiagMatrix
from .sparse_matrix import SparseMatrix
__all__ = ["diag_add", "diag_sub", "diag_mul", "diag_div", "diag_power"]
@@ -57,4 +57,5 @@ def softmax(A: SparseMatrix) -> SparseMatrix:
"""
return SparseMatrix(torch.ops.dgl_sparse.softmax(A.c_sparse_matrix))
+SparseMatrix.softmax = softmax
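The `softmax` above dispatches to a compiled `dgl_sparse` kernel; its row-wise semantics can be sketched in plain Python (illustrative only, not DGL's implementation): softmax is computed over each row's stored nonzeros, not over the implicit zeros.

```python
# Plain-Python sketch (not DGL code) of sparse softmax semantics: for a COO
# matrix, normalize the values of each row's stored nonzeros to sum to 1.
import math

def sparse_softmax(rows, vals):
    row_max = {}
    for r, v in zip(rows, vals):
        row_max[r] = max(row_max.get(r, v), v)  # stabilize with the row max
    exp = [math.exp(v - row_max[r]) for r, v in zip(rows, vals)]
    row_sum = {}
    for r, e in zip(rows, exp):
        row_sum[r] = row_sum.get(r, 0.0) + e
    return [e / row_sum[r] for r, e in zip(rows, exp)]

# Row 0 holds two equal nonzeros, row 1 holds one.
print(sparse_softmax([0, 0, 1], [1.0, 1.0, 5.0]))  # [0.5, 0.5, 1.0]
```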