Commit a43bd4e4 authored by limm

Modified README.md to support fastpt compilation

parent adfff486
# <div align="center"><strong>PyTorch Sparse</strong></div>
## Introduction
PyTorch Sparse is a small extension library of optimized sparse matrix operations with autograd support. It currently provides the following methods: Coalesce, Transpose, Sparse Dense Matrix Multiplication, and Sparse Sparse Matrix Multiplication. All included operations work on varying data types and are implemented for both CPU and GPU. The PyTorch Sparse build in the DAS software stack not only guarantees that the component's core functionality is available on DCU accelerator cards, but is also deeply optimized for the DCU hardware architecture, so developers can migrate applications to DCU accelerators and gain performance at very low cost. Supported PyTorch versions: 1.13, 2.1, 2.4.1, and 2.5.1.
### Installing with pip
torch-sparse wheel packages can be downloaded from the [光合开发者社区](https://das.sourcefind.cn:55011/portal/#/home). Only Python 3.10 packages are currently provided (if none are available, build from source following the steps below).
```shell
pip install torch_sparse*  # the downloaded torch_sparse wheel
```
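A quick way to confirm the wheel installed correctly (a minimal check; the reported version should match the downloaded wheel):
```shell
python -c "import torch_sparse; print(torch_sparse.__version__)"
```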
- Install the build dependencies:
```shell
pip install 'urllib3==1.26.14'
pip install pytest
pip install wheel
```
- Download dtk25.04 from the home page of the 光合开发者社区, extract it under /opt/, and create a symlink, for example:
```shell
cd /opt && ln -s dtk-25.04 dtk
```
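Earlier revisions of these instructions also sourced the toolkit's environment script; if the DTK toolchain is not already on your paths, sourcing it (path taken from those earlier instructions) sets the required variables:
```shell
source /opt/dtk/env.sh
```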
- Install PyTorch. PyTorch wheel packages can be downloaded from the [光合开发者社区](https://das.sourcefind.cn:55011/portal/#/home). Download the build for your system and install it:
```shell
pip install torch*  # the downloaded torch wheel
```
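Before building, it may help to confirm that PyTorch imports and sees the accelerator (on DAS builds the DCU is typically exposed through the CUDA-compatible interface; treat this as a hedged check):
```shell
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```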
- Install fastpt. fastpt wheel packages can be downloaded from the [光合开发者社区](https://das.sourcefind.cn:55011/portal/#/home). Download the build for your system and install it:
```shell
pip install fastpt*  # the downloaded fastpt wheel
```
#### Building and installing from source
```shell
git clone -b 0.6.16-fastpt http://developer.hpccube.com/codes/aicomponent/torch-sparce.git
export FORCE_CUDA=1
source /usr/local/bin/fastpt -C
cd torch-sparce
python setup.py bdist_wheel
pip install dist/*.whl
```
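After installing the built wheel, you can sanity-check the build; the bundled test suite (see "Running tests" below) gives fuller coverage:
```shell
python -c "import torch_sparse; print(torch_sparse.__version__)"
pytest  # optional: run the full test suite
```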
[pypi-image]: https://badge.fury.io/py/torch-sparse.svg
[pypi-url]: https://pypi.python.org/pypi/torch-sparse
[testing-image]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/testing.yml/badge.svg
[testing-url]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/testing.yml
[linting-image]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/linting.yml/badge.svg
[linting-url]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/linting.yml
[coverage-image]: https://codecov.io/gh/rusty1s/pytorch_sparse/branch/master/graph/badge.svg
[coverage-url]: https://codecov.io/github/rusty1s/pytorch_sparse?branch=master
# PyTorch Sparse
[![PyPI Version][pypi-image]][pypi-url]
[![Testing Status][testing-image]][testing-url]
[![Linting Status][linting-image]][linting-url]
[![Code Coverage][coverage-image]][coverage-url]
--------------------------------------------------------------------------------
This package consists of a small extension library of optimized sparse matrix operations with autograd support.
This package currently consists of the following methods:
* **[Coalesce](#coalesce)**
* **[Transpose](#transpose)**
* **[Sparse Dense Matrix Multiplication](#sparse-dense-matrix-multiplication)**
* **[Sparse Sparse Matrix Multiplication](#sparse-sparse-matrix-multiplication)**
All included operations work on varying data types and are implemented both for CPU and GPU.
To avoid the hassle of creating [`torch.sparse_coo_tensor`](https://pytorch.org/docs/stable/torch.html?highlight=sparse_coo_tensor#torch.sparse_coo_tensor), this package defines operations on sparse tensors by simply passing `index` and `value` tensors as arguments ([with same shapes as defined in PyTorch](https://pytorch.org/docs/stable/sparse.html)).
Note that only `value` comes with autograd support, as `index` is discrete and therefore not differentiable.
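For instance, a minimal sketch using `spmm` (documented below) shows gradients flowing into `value` while `index` stays fixed:
```python
import torch
from torch_sparse import spmm

index = torch.tensor([[0, 0, 1], [0, 1, 1]])  # (row, col) coordinates of nonzeros
value = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)  # only `value` is differentiable
dense = torch.ones(2, 3)

out = spmm(index, value, 2, 2, dense)  # (2 x 2 sparse) @ (2 x 3 dense)
out.sum().backward()
print(value.grad)  # tensor([3., 3., 3.])
```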
## Installation
### Anaconda
**Update:** You can now install `pytorch-sparse` via [Anaconda](https://anaconda.org/pyg/pytorch-sparse) for all major OS/PyTorch/CUDA combinations 🤗
Given that you have [`pytorch >= 1.8.0` installed](https://pytorch.org/get-started/locally/), simply run
```
conda install pytorch-sparse -c pyg
```
#### Note: Conda packages are not published for PyTorch 1.12 yet
### Binaries
We alternatively provide pip wheels for all major OS/PyTorch/CUDA combinations, see [here](https://data.pyg.org/whl).
#### PyTorch 1.12
To install the binaries for PyTorch 1.12.0, simply run
```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.12.0+${CUDA}.html
```
where `${CUDA}` should be replaced by either `cpu`, `cu102`, `cu113`, or `cu116` depending on your PyTorch installation.
| | `cpu` | `cu102` | `cu113` | `cu116` |
|-------------|-------|---------|---------|---------|
| **Linux** | ✅ | ✅ | ✅ | ✅ |
| **Windows** | ✅ | | ✅ | ✅ |
| **macOS** | ✅ | | | |
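For example, a CUDA 11.3 installation uses:
```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.12.0+cu113.html
```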
#### PyTorch 1.11
To install the binaries for PyTorch 1.11.0, simply run
```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.11.0+${CUDA}.html
```
where `${CUDA}` should be replaced by either `cpu`, `cu102`, `cu113`, or `cu115` depending on your PyTorch installation.
| | `cpu` | `cu102` | `cu113` | `cu115` |
|-------------|-------|---------|---------|---------|
| **Linux** | ✅ | ✅ | ✅ | ✅ |
| **Windows** | ✅ | | ✅ | ✅ |
| **macOS** | ✅ | | | |
**Note:** Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, and PyTorch 1.10.0/1.10.1/1.10.2 (following the same procedure).
For older versions, you might need to explicitly specify the latest supported version number in order to prevent a manual installation from source.
You can look up the latest supported version number [here](https://data.pyg.org/whl).
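For example (both version numbers here are illustrative only; consult the link above for the actual latest supported combination):
```
pip install torch-sparse==0.6.12 -f https://data.pyg.org/whl/torch-1.9.0+cu111.html
```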
### From source
Ensure that at least PyTorch 1.7.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, *e.g.*:
```
$ python -c "import torch; print(torch.__version__)"
>>> 1.7.0
$ echo $PATH
>>> /usr/local/cuda/bin:...
$ echo $CPATH
>>> /usr/local/cuda/include:...
```
If you want to additionally build `torch-sparse` with METIS support, *e.g.* for partitioning, please download and install the [METIS library](https://web.archive.org/web/20211119110155/http://glaros.dtc.umn.edu/gkhome/metis/metis/download) by following the instructions in the `Install.txt` file.
Note that METIS needs to be installed with 64 bit `IDXTYPEWIDTH` by changing `include/metis.h`.
Afterwards, set the environment variable `WITH_METIS=1`.
Then run:
```
pip install torch-scatter torch-sparse
```
When running in a docker container without NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
In this case, ensure that the compute capabilities are set via `TORCH_CUDA_ARCH_LIST`, *e.g.*:
```
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
```
## Functions
### Coalesce
```
torch_sparse.coalesce(index, value, m, n, op="add") -> (torch.LongTensor, torch.Tensor)
```
Row-wise sorts `index` and removes duplicate entries.
Duplicate entries are removed by scattering them together.
For scattering, any operation of [`torch_scatter`](https://github.com/rusty1s/pytorch_scatter) can be used.
#### Parameters
* **index** *(LongTensor)* - The index tensor of sparse matrix.
* **value** *(Tensor)* - The value tensor of sparse matrix.
* **m** *(int)* - The first dimension of sparse matrix.
* **n** *(int)* - The second dimension of sparse matrix.
* **op** *(string, optional)* - The scatter operation to use. (default: `"add"`)
#### Returns
* **index** *(LongTensor)* - The coalesced index tensor of sparse matrix.
* **value** *(Tensor)* - The coalesced value tensor of sparse matrix.
#### Example
```python
import torch
from torch_sparse import coalesce
index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])
index, value = coalesce(index, value, m=3, n=2)
```
```
print(index)
tensor([[0, 1, 1, 2],
        [1, 0, 1, 0]])
print(value)
tensor([[6.0, 8.0],
        [7.0, 9.0],
        [3.0, 4.0],
        [5.0, 6.0]])
```
### Transpose
```
torch_sparse.transpose(index, value, m, n) -> (torch.LongTensor, torch.Tensor)
```
Transposes dimensions 0 and 1 of a sparse matrix.
#### Parameters
* **index** *(LongTensor)* - The index tensor of sparse matrix.
* **value** *(Tensor)* - The value tensor of sparse matrix.
* **m** *(int)* - The first dimension of sparse matrix.
* **n** *(int)* - The second dimension of sparse matrix.
* **coalesced** *(bool, optional)* - If set to `False`, will not coalesce the output. (default: `True`)
#### Returns
* **index** *(LongTensor)* - The transposed index tensor of sparse matrix.
* **value** *(Tensor)* - The transposed value tensor of sparse matrix.
#### Example
```python
import torch
from torch_sparse import transpose
index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])
index, value = transpose(index, value, 3, 2)
```
```
print(index)
tensor([[0, 0, 1, 1],
        [1, 2, 0, 1]])
print(value)
tensor([[7.0, 9.0],
        [5.0, 6.0],
        [6.0, 8.0],
        [3.0, 4.0]])
```
### Sparse Dense Matrix Multiplication
```
torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor
```
Matrix product of a sparse matrix with a dense matrix.
#### Parameters
* **index** *(LongTensor)* - The index tensor of sparse matrix.
* **value** *(Tensor)* - The value tensor of sparse matrix.
* **m** *(int)* - The first dimension of sparse matrix.
* **n** *(int)* - The second dimension of sparse matrix.
* **matrix** *(Tensor)* - The dense matrix.
#### Returns
* **out** *(Tensor)* - The dense output matrix.
#### Example
```python
import torch
from torch_sparse import spmm
index = torch.tensor([[0, 0, 1, 2, 2],
                      [0, 2, 1, 0, 1]])
value = torch.Tensor([1, 2, 4, 1, 3])
matrix = torch.Tensor([[1, 4], [2, 5], [3, 6]])
out = spmm(index, value, 3, 3, matrix)
```
```
print(out)
tensor([[7.0, 16.0],
        [8.0, 20.0],
        [7.0, 19.0]])
```
### Sparse Sparse Matrix Multiplication
```
torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n) -> (torch.LongTensor, torch.Tensor)
```
Matrix product of two sparse tensors.
Both input sparse matrices need to be **coalesced** (use the `coalesced` attribute to force).
#### Parameters
* **indexA** *(LongTensor)* - The index tensor of first sparse matrix.
* **valueA** *(Tensor)* - The value tensor of first sparse matrix.
* **indexB** *(LongTensor)* - The index tensor of second sparse matrix.
* **valueB** *(Tensor)* - The value tensor of second sparse matrix.
* **m** *(int)* - The first dimension of first sparse matrix.
* **k** *(int)* - The second dimension of first sparse matrix and first dimension of second sparse matrix.
* **n** *(int)* - The second dimension of second sparse matrix.
* **coalesced** *(bool, optional)*: If set to `True`, will coalesce both input sparse matrices. (default: `False`)
#### Returns
* **index** *(LongTensor)* - The output index tensor of sparse matrix.
* **value** *(Tensor)* - The output value tensor of sparse matrix.
#### Example
```python
import torch
from torch_sparse import spspmm
indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.Tensor([1, 2, 3, 4, 5])
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.Tensor([2, 4])
indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
```
```
print(indexC)
tensor([[0, 1, 2],
        [0, 1, 1]])
print(valueC)
tensor([8.0, 6.0, 8.0])
```
## Running tests
```
pytest
```
## C++ API
`torch-sparse` also offers a C++ API that contains C++ equivalents of the Python functions.
For this, we need to add `TorchLib` to the `-DCMAKE_PREFIX_PATH` (*e.g.*, it may exist in `{CONDA}/lib/python{X.X}/site-packages/torch` if installed via `conda`):
```
mkdir build
cd build
# Add -DWITH_CUDA=on support for CUDA support
cmake -DCMAKE_PREFIX_PATH="..." ..
make
make install
```
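One way to obtain a suitable prefix path, assuming PyTorch is installed in the active Python environment, is `torch.utils.cmake_prefix_path`:
```
python -c "import torch; print(torch.utils.cmake_prefix_path)"
cmake -DCMAKE_PREFIX_PATH="$(python -c 'import torch; print(torch.utils.cmake_prefix_path)')" ..
```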
## Build issues
### Building torch_sparse==0.6.15 against PyTorch 2.1
torch_sparse 0.6.15 does not support PyTorch 2.1 out of the box; building it against PyTorch 2.1 requires a small change to `pytorch_sparse/csrc/version.cpp`:
```cpp
// Before:
static auto registry = torch::RegisterOperators()
    .op("torch_sparse::cuda_version", &sparse::cuda_version);

// After: wrap the function pointer in a lambda
static auto registry = torch::RegisterOperators()
    .op("torch_sparse::cuda_version", [] { return sparse::cuda_version(); });
```
Note: The content above supplements the upstream README.md.