# OpenDAS / torch-sparse: Commit bab40181

Unverified commit `bab40181`, authored Jun 22, 2019 by ekka, committed by GitHub on Jun 22, 2019.

Merge pull request #1 from rusty1s/master: Sync

Parents: `6244606f`, `15b84133`

1 changed file, 11 additions and 11 deletions: `README.md` (+11, −11)
````diff
@@ -56,7 +56,7 @@ Be sure to import `torch` first before using this package to resolve symbols the
 torch_sparse.coalesce(index, value, m, n, op="add", fill_value=0) -> (torch.LongTensor, torch.Tensor)
 ```
-Row-wise sorts `value` and removes duplicate entries.
+Row-wise sorts `index` and removes duplicate entries.
 Duplicate entries are removed by scattering them together.
 For scattering, any operation of [`torch_scatter`](https://github.com/rusty1s/pytorch_scatter) can be used.
````
```diff
@@ -64,8 +64,8 @@ For scattering, any operation of [`torch_scatter`](https://github.com/rusty1s/py
 * **index** *(LongTensor)* - The index tensor of sparse matrix.
 * **value** *(Tensor)* - The value tensor of sparse matrix.
-* **m** *(int)* - The first dimension of sparse matrix.
-* **n** *(int)* - The second dimension of sparse matrix.
+* **m** *(int)* - The first dimension of corresponding dense matrix.
+* **n** *(int)* - The second dimension of corresponding dense matrix.
 * **op** *(string, optional)* - The scatter operation to use. (default: `"add"`)
 * **fill_value** *(int, optional)* - The initial fill value of scatter operation. (default: `0`)
```
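The coalescing semantics described in this hunk (row-wise sort of `index`, then scatter-merging duplicate entries with `"add"`) can be sketched in pure Python. This is an illustrative stand-in, not the library's implementation: it uses `(row, col)` tuples in place of the library's `2 x nnz` LongTensor, and the helper name `coalesce` is chosen to mirror the API.

```python
def coalesce(index, value, op=sum):
    """Sketch of coalescing: sort COO entries row-major and merge
    duplicates with a scatter operation (default: sum, i.e. "add")."""
    # index: list of (row, col) pairs; value: parallel list of scalars.
    merged = {}
    for rc, v in zip(index, value):
        merged.setdefault(rc, []).append(v)
    pairs = sorted(merged.items())           # row-wise (row-major) sort
    out_index = [rc for rc, _ in pairs]
    out_value = [op(vs) for _, vs in pairs]  # duplicates scattered together
    return out_index, out_value

index = [(1, 0), (0, 1), (0, 1)]
value = [3.0, 1.0, 2.0]
print(coalesce(index, value))  # ([(0, 1), (1, 0)], [3.0, 3.0])
```

Passing a different reduction (e.g. `max`) in place of `sum` corresponds to choosing a different `torch_scatter` operation via the `op` argument.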
```diff
@@ -109,8 +109,8 @@ Transposes dimensions 0 and 1 of a sparse matrix.
 * **index** *(LongTensor)* - The index tensor of sparse matrix.
 * **value** *(Tensor)* - The value tensor of sparse matrix.
-* **m** *(int)* - The first dimension of sparse matrix.
-* **n** *(int)* - The second dimension of sparse matrix.
+* **m** *(int)* - The first dimension of corresponding dense matrix.
+* **n** *(int)* - The second dimension of corresponding dense matrix.
 ### Returns
```
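Transposing a COO sparse matrix amounts to swapping the row and column of every entry and re-sorting so the result is coalesced again. A minimal pure-Python sketch, under the same `(row, col)`-pair assumption as above (the `m`/`n` arguments only swap roles for the output shape and are not otherwise used here):

```python
def transpose(index, value, m, n):
    """Sketch: swap dimensions 0 and 1 of a COO sparse matrix, then
    re-sort row-wise so the output is coalesced. Output shape is n x m."""
    swapped = sorted(zip([(c, r) for r, c in index], value))
    return [rc for rc, _ in swapped], [v for _, v in swapped]

index = [(0, 1), (1, 0)]
value = [2.0, 3.0]
print(transpose(index, value, 2, 2))  # ([(0, 1), (1, 0)], [3.0, 2.0])
```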
````diff
@@ -143,7 +143,7 @@ tensor([[7.0, 9.0],
 ## Sparse Dense Matrix Multiplication
 ```
-torch_sparse.spmm(index, value, m, matrix) -> torch.Tensor
+torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor
 ```
 Matrix product of a sparse matrix with a dense matrix.
````
```diff
@@ -152,8 +152,8 @@ Matrix product of a sparse matrix with a dense matrix.
 * **index** *(LongTensor)* - The index tensor of sparse matrix.
 * **value** *(Tensor)* - The value tensor of sparse matrix.
-* **m** *(int)* - The first dimension of sparse matrix.
-* **n** *(int)* - The second dimension of sparse matrix.
+* **m** *(int)* - The first dimension of corresponding dense matrix.
+* **n** *(int)* - The second dimension of corresponding dense matrix.
 * **matrix** *(Tensor)* - The dense matrix.
 ### Returns
```
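The `spmm` product of an `m x n` sparse matrix with a dense `n x p` matrix can be sketched as a scatter of scaled dense rows. Again a pure-Python illustration of the semantics, not the library's kernel; dense matrices are plain lists of lists:

```python
def spmm(index, value, m, n, matrix):
    """Sketch of sparse-dense matmul: for each nonzero A[r, c] = v,
    accumulate v * matrix[c] into output row r. Returns a dense m x p result."""
    assert len(matrix) == n  # second sparse dimension must match dense rows
    p = len(matrix[0])
    out = [[0.0] * p for _ in range(m)]
    for (r, c), v in zip(index, value):
        for j in range(p):
            out[r][j] += v * matrix[c][j]
    return out

# A = [[1, 2], [0, 4]] in COO form, times a dense 2x2 matrix:
index = [(0, 0), (0, 1), (1, 1)]
value = [1.0, 2.0, 4.0]
print(spmm(index, value, 2, 2, [[1.0, 2.0], [3.0, 4.0]]))
# [[7.0, 10.0], [12.0, 16.0]]
```

This also shows why the new `n` argument in the signature matters: it is needed to validate the inner dimension against the dense operand.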
```diff
@@ -195,9 +195,9 @@ Both input sparse matrices need to be **coalesced**.
 * **valueA** *(Tensor)* - The value tensor of first sparse matrix.
 * **indexB** *(LongTensor)* - The index tensor of second sparse matrix.
 * **valueB** *(Tensor)* - The value tensor of second sparse matrix.
-* **m** *(int)* - The first dimension of first sparse matrix.
-* **k** *(int)* - The second dimension of first sparse matrix and first dimension of second sparse matrix.
-* **n** *(int)* - The second dimension of second sparse matrix.
+* **m** *(int)* - The first dimension of first corresponding dense matrix.
+* **k** *(int)* - The second dimension of first corresponding dense matrix and first dimension of second corresponding dense matrix.
+* **n** *(int)* - The second dimension of second corresponding dense matrix.
 ### Returns
```
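Sparse-sparse multiplication over coalesced COO inputs can be sketched by grouping B's entries by inner index and accumulating products into a dictionary, then sorting so the result is itself coalesced. As before, this is an illustrative pure-Python stand-in for the semantics, using `(row, col)` pairs rather than the library's index tensors:

```python
from collections import defaultdict

def spspmm(indexA, valueA, indexB, valueB, m, k, n):
    """Sketch of sparse-sparse matmul (m x k times k x n):
    accumulate C[r, c] += A[r, t] * B[t, c] over the shared inner index t.
    Both inputs are assumed coalesced; the output is returned coalesced."""
    rows_b = defaultdict(list)               # group B's entries by row t
    for (t, c), v in zip(indexB, valueB):
        rows_b[t].append((c, v))
    acc = defaultdict(float)
    for (r, t), va in zip(indexA, valueA):
        for c, vb in rows_b[t]:
            acc[(r, c)] += va * vb
    pairs = sorted(acc.items())              # row-wise sort -> coalesced
    return [rc for rc, _ in pairs], [v for _, v in pairs]

# A = diag(1, 2), B = anti-diag(3, 4): C = A @ B
indexA, valueA = [(0, 0), (1, 1)], [1.0, 2.0]
indexB, valueB = [(0, 1), (1, 0)], [3.0, 4.0]
print(spspmm(indexA, valueA, indexB, valueB, 2, 2, 2))
# ([(0, 1), (1, 0)], [3.0, 8.0])
```

The coalesced-input requirement noted in the hunk header matters here: duplicate entries in either input would be counted multiple times by this accumulation.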