Commit ecb8c688 authored by rusty1s

build wheels

parent 0b9019dd
language: shell
os:
- linux
- osx
- windows
env:
global:
- CUDA_HOME=/usr/local/cuda
jobs:
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cpu
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cu92
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cu100
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cu101
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cpu
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cu92
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cu100
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cu101
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cpu
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cu92
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cu100
- TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cu101
jobs:
include:
- os: linux
language: python
python: 3.7
addons:
apt:
sources:
- ubuntu-toolchain-r-test
packages:
- gcc-5
- g++-5
env:
- CC=gcc-5
- CXX=g++-5
exclude: # Exclude *all* macOS CUDA jobs and Windows CUDA 9.2/10.0 jobs.
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cu92
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cu100
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cu101
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cu92
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cu100
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cu101
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cu92
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cu100
- os: osx
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cu101
- os: windows
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cu92
- os: windows
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.8 IDX=cu100
- os: windows
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cu92
- os: windows
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.7 IDX=cu100
- os: windows
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cu92
- os: windows
env: TORCH_VERSION=1.4.0 PYTHON_VERSION=3.6 IDX=cu100
install:
- pip install numpy
- pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
- pip install pycodestyle
- pip install flake8
- pip install codecov
- pip install torch-scatter
- source script/cuda.sh
- source script/conda.sh
- conda create --yes -n test python="${PYTHON_VERSION}"
- source activate test
- conda install pytorch=${TORCH_VERSION} ${TOOLKIT} -c pytorch --yes
- source script/torch.sh
- pip install torch-scatter==latest+${IDX} -f http://pytorch-scatter.s3-website.eu-central-1.amazonaws.com/whl/torch-${TORCH_VERSION}.html --trusted-host pytorch-scatter.s3-website.eu-central-1.amazonaws.com
- pip install flake8 codecov
- python setup.py install
script:
- python -c "import torch; print(torch.__version__)"
- pycodestyle --ignore=E731,W504 .
- flake8 .
- python setup.py install
- python setup.py test
after_success:
- python setup.py bdist_wheel --dist-dir=dist/torch-${TORCH_VERSION}
- python script/rename_wheel.py ${IDX}
- codecov
deploy:
provider: s3
region: eu-central-1
edge: true
access_key_id: AKIAJB7S6NJ5OM5MAAGA
secret_access_key: ${S3_SECRET_ACCESS_KEY}
bucket: pytorch-sparse
local_dir: dist/torch-${TORCH_VERSION}
upload_dir: whl/torch-${TORCH_VERSION}
acl: public_read
on:
repo: rusty1s/pytorch_sparse
branch: master
notifications:
email: false
include README.md
include LICENSE
recursive-exclude test *
recursive-include csrc *
@@ -13,8 +13,7 @@
--------------------------------------------------------------------------------
[PyTorch](http://pytorch.org/) currently lacks autograd support for sparse tensors and operations such as sparse-sparse matrix multiplication, but is actively working on improvements (*cf.* [this issue](https://github.com/pytorch/pytorch/issues/9674)).
In the meantime, this package consists of a small extension library of optimized sparse matrix operations with autograd support.
This package consists of a small extension library of optimized sparse matrix operations with autograd support.
This package currently consists of the following methods:
* **[Coalesce](#coalesce)**
@@ -28,6 +27,26 @@ Note that only `value` comes with autograd support, as `index` is discrete and t
## Installation
### Binaries
We provide pip wheels for all major OS/PyTorch/CUDA combinations, see [here](http://pytorch-sparse.s3-website.eu-central-1.amazonaws.com/whl).
To install from binaries, simply run
```
pip install torch-scatter==latest+${CUDA} -f http://pytorch-scatter.s3-website.eu-central-1.amazonaws.com/whl/torch-1.4.0.html --trusted-host pytorch-scatter.s3-website.eu-central-1.amazonaws.com
pip install torch-sparse==latest+${CUDA} -f http://pytorch-sparse.s3-website.eu-central-1.amazonaws.com/whl/torch-1.4.0.html --trusted-host pytorch-sparse.s3-website.eu-central-1.amazonaws.com
```
where `${CUDA}` should be replaced by either `cpu`, `cu92`, `cu100` or `cu101` depending on your PyTorch installation.
| | `cpu` | `cu92` | `cu100` | `cu101` |
|-------------|-------|--------|---------|---------|
| **Linux** | ✅ | ✅ | ✅ | ✅ |
| **Windows** | ✅ | ❌ | ❌ | ✅ |
| **macOS** | ✅ | ❌ | ❌ | ❌ |
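
For example, assuming a PyTorch 1.4.0 installation built against CUDA 10.1 (`cu101`), the two commands become:

```
pip install torch-scatter==latest+cu101 -f http://pytorch-scatter.s3-website.eu-central-1.amazonaws.com/whl/torch-1.4.0.html --trusted-host pytorch-scatter.s3-website.eu-central-1.amazonaws.com
pip install torch-sparse==latest+cu101 -f http://pytorch-sparse.s3-website.eu-central-1.amazonaws.com/whl/torch-1.4.0.html --trusted-host pytorch-sparse.s3-website.eu-central-1.amazonaws.com
```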
### From source
Ensure that at least PyTorch 1.4.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, *e.g.*:
```
@@ -47,10 +66,16 @@ Then run:
pip install torch-scatter torch-sparse
```
If you are running into any installation problems, please create an [issue](https://github.com/rusty1s/pytorch_sparse/issues).
Be sure to import `torch` before importing this package, so that the dynamic linker can resolve the symbols it needs.
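As a quick sanity check (a minimal sketch; the printed version depends on the installed release), the following should run without errors:

```
python -c "import torch; import torch_sparse; print(torch_sparse.__version__)"
```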
When running in a Docker container without an NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
In this case, make sure the compute capabilities are set explicitly via `TORCH_CUDA_ARCH_LIST`, *e.g.*:
```
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
```
## Functions
## Coalesce
### Coalesce
```
torch_sparse.coalesce(index, value, m, n, op="add") -> (torch.LongTensor, torch.Tensor)
@@ -60,7 +85,7 @@ Row-wise sorts `index` and removes duplicate entries.
Duplicate entries are removed by scattering them together.
For scattering, any operation of [`torch_scatter`](https://github.com/rusty1s/pytorch_scatter) can be used.
### Parameters
#### Parameters
* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
@@ -68,12 +93,12 @@ For scattering, any operation of [`torch_scatter`](https://github.com/rusty1s/py
* **n** *(int)* - The second dimension of the corresponding dense matrix.
* **op** *(string, optional)* - The scatter operation to use. (default: `"add"`)
### Returns
#### Returns
* **index** *(LongTensor)* - The coalesced index tensor of sparse matrix.
* **value** *(Tensor)* - The coalesced value tensor of sparse matrix.
### Example
#### Example
```python
import torch
@@ -97,7 +122,7 @@ tensor([[6.0, 8.0],
[5.0, 6.0]])
```
## Transpose
### Transpose
```
torch_sparse.transpose(index, value, m, n) -> (torch.LongTensor, torch.Tensor)
@@ -105,7 +130,7 @@ torch_sparse.transpose(index, value, m, n) -> (torch.LongTensor, torch.Tensor)
Transposes dimensions 0 and 1 of a sparse matrix.
### Parameters
#### Parameters
* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
@@ -113,12 +138,12 @@ Transposes dimensions 0 and 1 of a sparse matrix.
* **n** *(int)* - The second dimension of the corresponding dense matrix.
* **coalesced** *(bool, optional)* - If set to `False`, will not coalesce the output. (default: `True`)
### Returns
#### Returns
* **index** *(LongTensor)* - The transposed index tensor of sparse matrix.
* **value** *(Tensor)* - The transposed value tensor of sparse matrix.
### Example
#### Example
```python
import torch
@@ -142,7 +167,7 @@ tensor([[7.0, 9.0],
[3.0, 4.0]])
```
## Sparse Dense Matrix Multiplication
### Sparse Dense Matrix Multiplication
```
torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor
@@ -150,7 +175,7 @@ torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor
Matrix product of a sparse matrix with a dense matrix.
### Parameters
#### Parameters
* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
@@ -158,11 +183,11 @@ Matrix product of a sparse matrix with a dense matrix.
* **n** *(int)* - The second dimension of the corresponding dense matrix.
* **matrix** *(Tensor)* - The dense matrix.
### Returns
#### Returns
* **out** *(Tensor)* - The dense output matrix.
### Example
#### Example
```python
import torch
@@ -183,7 +208,7 @@ tensor([[7.0, 16.0],
[7.0, 19.0]])
```
## Sparse Sparse Matrix Multiplication
### Sparse Sparse Matrix Multiplication
```
torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n) -> (torch.LongTensor, torch.Tensor)
@@ -192,7 +217,7 @@ torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n) -> (torch.LongTenso
Matrix product of two sparse tensors.
Both input sparse matrices need to be **coalesced** (use the `coalesced` argument to force coalescing).
### Parameters
#### Parameters
* **indexA** *(LongTensor)* - The index tensor of the first sparse matrix.
* **valueA** *(Tensor)* - The value tensor of the first sparse matrix.
@@ -203,12 +228,12 @@ Both input sparse matrices need to be **coalesced** (use the `coalesced` attribu
* **n** *(int)* - The second dimension of the second corresponding dense matrix.
* **coalesced** *(bool, optional)* - If set to `True`, will coalesce both input sparse matrices. (default: `False`)
### Returns
#### Returns
* **index** *(LongTensor)* - The output index tensor of sparse matrix.
* **value** *(Tensor)* - The output value tensor of sparse matrix.
### Example
#### Example
```python
import torch
......
#include <Python.h>
#include <torch/script.h>
#ifdef WITH_CUDA
#include <cuda.h>
#endif
#ifdef _WIN32
PyMODINIT_FUNC PyInit__version(void) { return NULL; }
#endif
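// Report the CUDA version this extension was compiled against, or -1 for CPU-only builds.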
int64_t cuda_version() {
#ifdef WITH_CUDA
return CUDA_VERSION;
#else
return -1;
#endif
}
static auto registry =
torch::RegisterOperators().op("torch_sparse::cuda_version", &cuda_version);
#!/bin/bash
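# Install a Miniconda distribution matching the current Travis OS and create a fresh `test` environment.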
if [ "${TRAVIS_OS_NAME}" = "linux" ]; then
wget -nv https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
chmod +x miniconda.sh
./miniconda.sh -b
PATH=/home/travis/miniconda3/bin:${PATH}
fi
if [ "${TRAVIS_OS_NAME}" = "osx" ]; then
wget -nv https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O miniconda.sh
chmod +x miniconda.sh
./miniconda.sh -b
PATH=/Users/travis/miniconda3/bin:${PATH}
fi
if [ "${TRAVIS_OS_NAME}" = "windows" ]; then
choco install openssl.light
choco install miniconda3
PATH=/c/tools/miniconda3/Scripts:$PATH
fi
conda update --yes conda
conda create --yes -n test python="${PYTHON_VERSION}"
#!/bin/bash
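# Map the build-matrix IDX (cpu, cu92, cu100, cu101) to the matching CUDA toolkit version, installer
# location and conda TOOLKIT spec, and install the CUDA toolchain on Linux and Windows.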
if [ "${TRAVIS_OS_NAME}" = "linux" ] && [ "$IDX" = "cpu" ]; then
export TOOLKIT=cpuonly
fi
if [ "${TRAVIS_OS_NAME}" = "linux" ] && [ "$IDX" = "cu92" ]; then
export CUDA_SHORT=9.2
export CUDA=9.2.148-1
export UBUNTU_VERSION=ubuntu1604
export CUBLAS=cuda-cublas-dev-9-2
export TOOLKIT="cudatoolkit=${CUDA_SHORT}"
fi
if [ "${TRAVIS_OS_NAME}" = "linux" ] && [ "$IDX" = "cu100" ]; then
export CUDA_SHORT=10.0
export CUDA=10.0.130-1
export UBUNTU_VERSION=ubuntu1804
export CUBLAS=cuda-cublas-dev-10-0
export TOOLKIT="cudatoolkit=${CUDA_SHORT}"
fi
if [ "${TRAVIS_OS_NAME}" = "linux" ] && [ "$IDX" = "cu101" ]; then
export IDX=cu101
export CUDA_SHORT=10.1
export CUDA=10.1.105-1
export UBUNTU_VERSION=ubuntu1804
export CUBLAS=libcublas-dev
export TOOLKIT="cudatoolkit=${CUDA_SHORT}"
fi
if [ "${TRAVIS_OS_NAME}" = "windows" ] && [ "$IDX" = "cpu" ]; then
export TOOLKIT=cpuonly
fi
if [ "${TRAVIS_OS_NAME}" = "windows" ] && [ "$IDX" = "cu92" ]; then
export CUDA_SHORT=9.2
export CUDA_URL=https://developer.nvidia.com/compute/cuda/${CUDA_SHORT}/Prod2/local_installers2
export CUDA_FILE=cuda_${CUDA_SHORT}.148_win10
export TOOLKIT="cudatoolkit=${CUDA_SHORT}"
fi
if [ "${TRAVIS_OS_NAME}" = "windows" ] && [ "$IDX" = "cu100" ]; then
export CUDA_SHORT=10.0
export CUDA_URL=https://developer.nvidia.com/compute/cuda/${CUDA_SHORT}/Prod/local_installers
export CUDA_FILE=cuda_${CUDA_SHORT}.130_411.31_win10
export TOOLKIT="cudatoolkit=${CUDA_SHORT}"
fi
if [ "${TRAVIS_OS_NAME}" = "windows" ] && [ "$IDX" = "cu101" ]; then
export CUDA_SHORT=10.1
export CUDA_URL=https://developer.nvidia.com/compute/cuda/${CUDA_SHORT}/Prod/local_installers
export CUDA_FILE=cuda_${CUDA_SHORT}.105_418.96_win10.exe
export TOOLKIT="cudatoolkit=${CUDA_SHORT}"
fi
if [ "${TRAVIS_OS_NAME}" = "osx" ] && [ "$IDX" = "cpu" ]; then
export TOOLKIT=""
fi
if [ "${IDX}" = "cpu" ]; then
export FORCE_CPU=1
else
export FORCE_CUDA=1
fi
if [ "${TRAVIS_OS_NAME}" = "linux" ] && [ "${IDX}" != "cpu" ]; then
INSTALLER=cuda-repo-${UBUNTU_VERSION}_${CUDA}_amd64.deb
wget -nv "http://developer.download.nvidia.com/compute/cuda/repos/${UBUNTU_VERSION}/x86_64/${INSTALLER}"
sudo dpkg -i "${INSTALLER}"
wget -nv "https://developer.download.nvidia.com/compute/cuda/repos/${UBUNTU_VERSION}/x86_64/7fa2af80.pub"
sudo apt-key add 7fa2af80.pub
sudo apt update -qq
sudo apt install -y "cuda-core-${CUDA_SHORT/./-}" "cuda-cudart-dev-${CUDA_SHORT/./-}" "${CUBLAS}" "cuda-cusparse-dev-${CUDA_SHORT/./-}"
sudo apt clean
CUDA_HOME=/usr/local/cuda-${CUDA_SHORT}
LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
PATH=${CUDA_HOME}/bin:${PATH}
nvcc --version
fi
if [ "${TRAVIS_OS_NAME}" = "windows" ] && [ "${IDX}" != "cpu" ]; then
wget -nv "${CUDA_URL}/${CUDA_FILE}"
PowerShell -Command "Start-Process -FilePath \"${CUDA_FILE}\" -ArgumentList \"-s nvcc_${CUDA_SHORT} cublas_dev_${CUDA_SHORT} cusparse_dev_${CUDA_SHORT}\" -Wait -NoNewWindow"
CUDA_HOME=/c/Program\ Files/NVIDIA\ GPU\ Computing\ Toolkit/CUDA/v${CUDA_SHORT}
PATH=${CUDA_HOME}/bin:$PATH
PATH=/c/Program\ Files\ \(x86\)/Microsoft\ Visual\ Studio/2017/BuildTools/MSBuild/15.0/Bin:$PATH
nvcc --version
fi
# Fix CUDA 9.2 on Windows: https://github.com/pytorch/pytorch/issues/6109
if [ "${TRAVIS_OS_NAME}" = "windows" ] && [ "${IDX}" = "cu92" ]; then
sed -i.bak -e '129,141d' "${CUDA_HOME}/include/crt/host_config.h"
fi
import sys
import os
import os.path as osp
import glob
import shutil
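# For each wheel under dist/ that does not already carry the CUDA suffix, copy it to
# '<package>-latest+<idx>-*.whl' and rename the original to '<package>-<version>+<idx>-*.whl',
# so the wheel index can serve both a pinned and a 'latest' URL per CUDA variant.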
idx = sys.argv[1]
assert idx in ['cpu', 'cu92', 'cu100', 'cu101']
dist_dir = osp.join(osp.dirname(osp.abspath(__file__)), '..', 'dist')
wheels = glob.glob(osp.join('dist', '**', '*.whl'), recursive=True)
for wheel in wheels:
if idx in wheel:
continue
paths = wheel.split(osp.sep)
names = paths[-1].split('-')
name = '-'.join(names[:-4] + ['latest+' + idx] + names[-3:])
shutil.copyfile(wheel, osp.join(*paths[:-1], name))
name = '-'.join(names[:-4] + [names[-4] + '+' + idx] + names[-3:])
os.rename(wheel, osp.join(*paths[:-1], name))
#!/bin/bash
# Fix "member may not be initialized" error on Windows: https://github.com/pytorch/pytorch/issues/27958
if [ "${TRAVIS_OS_NAME}" = "windows" ] && [ "${IDX}" != "cpu" ]; then
sed -i.bak -e 's/constexpr/const/g' /c/tools/miniconda3/envs/test/lib/site-packages/torch/include/torch/csrc/jit/script/module.h
sed -i.bak -e 's/constexpr/const/g' /c/tools/miniconda3/envs/test/lib/site-packages/torch/include/torch/csrc/jit/argument_spec.h
sed -i.bak -e 's/return \*(this->value)/return \*((type\*)this->value)/g' /c/tools/miniconda3/envs/test/lib/site-packages/torch/include/pybind11/cast.h
fi
import boto3
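# Rebuild the static wheel index: list all *.whl objects in the 'pytorch-sparse' S3 bucket,
# group them by PyTorch version, and upload one HTML page per version plus a top-level whl/index.html.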
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket(name="pytorch-sparse")
objects = bucket.objects.all()
wheels = sorted([obj.key for obj in objects if obj.key[-3:] == 'whl'])
wheels_dict = {}
for torch_version in list(set([wheel.split('/')[1] for wheel in wheels])):
wheels_dict[torch_version] = []
for wheel in wheels:
torch_version = wheel.split('/')[1]
wheels_dict[torch_version].append(wheel)
html = '<!DOCTYPE html>\n<html>\n<body>\n{}\n</body>\n</html>'
href = '<a href="{}">{}</a><br/>'
url = 'http://pytorch-sparse.s3-website.eu-central-1.amazonaws.com/{}.html'
index_html = html.format('\n'.join([
href.format(url.format('whl/' + key), key) for key in wheels_dict.keys()
]))
with open('index.html', 'w') as f:
f.write(index_html)
bucket.Object('whl/index.html').upload_file(
Filename='index.html', ExtraArgs={
'ContentType': 'text/html',
'ACL': 'public-read'
})
url = 'https://pytorch-sparse.s3.eu-central-1.amazonaws.com/{}'
for key, item in wheels_dict.items():
version_html = html.format('\n'.join([
href.format(url.format(i), '/'.join(i.split('/')[2:])) for i in item
]))
with open('{}.html'.format(key), 'w') as f:
f.write(version_html)
bucket.Object('whl/{}.html'.format(key)).upload_file(
Filename='{}.html'.format(key), ExtraArgs={
'ContentType': 'text/html',
'ACL': 'public-read'
})
@@ -20,7 +20,7 @@ BUILD_DOCS = os.getenv('BUILD_DOCS', '0') == '1'
def get_extensions():
Extension = CppExtension
define_macros = []
extra_compile_args = {'cxx': [], 'nvcc': []}
extra_compile_args = {'cxx': []}
extra_link_args = []
if WITH_CUDA:
@@ -29,7 +29,7 @@ def get_extensions():
nvcc_flags = os.getenv('NVCC_FLAGS', '')
nvcc_flags = [] if nvcc_flags == '' else nvcc_flags.split(' ')
nvcc_flags += ['-arch=sm_35', '--expt-relaxed-constexpr']
extra_compile_args['nvcc'] += nvcc_flags
extra_compile_args['nvcc'] = nvcc_flags
if sys.platform == 'win32':
extra_link_args = ['cusparse.lib']
@@ -44,11 +44,11 @@ def get_extensions():
sources = [main]
path = osp.join(extensions_dir, 'cpu', name + '_cpu.cpp')
path = osp.join(extensions_dir, 'cpu', f'{name}_cpu.cpp')
if osp.exists(path):
sources += [path]
path = osp.join(extensions_dir, 'cuda', name + '_cuda.cu')
path = osp.join(extensions_dir, 'cuda', f'{name}_cuda.cu')
if WITH_CUDA and osp.exists(path):
sources += [path]
@@ -79,7 +79,7 @@ setup(
'Matrix Operations'),
keywords=['pytorch', 'sparse', 'sparse-matrices', 'autograd'],
license='MIT',
python_requires='>=3.5',
python_requires='>=3.6',
install_requires=install_requires,
setup_requires=setup_requires,
tests_require=tests_require,
......
@@ -7,7 +7,7 @@ grad_dtypes = [torch.float, torch.double]
devices = [torch.device('cpu')]
if torch.cuda.is_available():
devices += [torch.device('cuda:{}'.format(torch.cuda.current_device()))]
devices += [torch.device(f'cuda:{torch.cuda.current_device()}')]
def tensor(x, dtype, device):
......
# flake8: noqa
import importlib
import os.path as osp
import torch
__version__ = '0.5.1'
expected_torch_version = (1, 4)
try:
torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
'_version', [osp.dirname(__file__)]).origin)
except OSError as e:
if 'undefined symbol' in str(e):
major, minor = [int(x) for x in torch.__version__.split('.')[:2]]
t_major, t_minor = expected_torch_version
if major != t_major or (major == t_major and minor != t_minor):
raise RuntimeError(
f'Expected PyTorch version {t_major}.{t_minor} but found '
f'version {major}.{minor}.')
raise OSError(e)
cuda_version = torch.ops.torch_sparse.cuda_version()
if cuda_version != -1 and torch.version.cuda is not None: # pragma: no cover
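# cuda_version is an integer such as 9020 (CUDA 9.2) or 10010 (CUDA 10.1); decode it into
# major/minor to compare against the CUDA version PyTorch was built with.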
if cuda_version < 10000:
major, minor = int(str(cuda_version)[0]), int(str(cuda_version)[2])
else:
major, minor = int(str(cuda_version)[0:2]), int(str(cuda_version)[3])
t_major, t_minor = [int(x) for x in torch.version.cuda.split('.')]
cuda_version = str(major) + '.' + str(minor)
if t_major != major or t_minor != minor:
raise RuntimeError(
f'Detected that PyTorch and torch_sparse were compiled with '
f'different CUDA versions. PyTorch has CUDA version '
f'{t_major}.{t_minor} and torch_sparse has CUDA version '
f'{major}.{minor}. Please reinstall the torch_sparse that '
f'matches your PyTorch install.')
from .storage import SparseStorage
from .tensor import SparseTensor
from .transpose import t
@@ -19,8 +59,6 @@ from .eye import eye
from .spmm import spmm
from .spspmm import spspmm
__version__ = '0.5.0'
__all__ = [
'SparseStorage',
'SparseTensor',
......
@@ -15,9 +15,8 @@ def add(src: SparseTensor, other: torch.Tensor) -> SparseTensor:
other = other.squeeze(0)[col]
else:
raise ValueError(
'Size mismatch: Expected size ({}, 1, ...) or (1, {}, ...), but '
'got size {}.'.format(src.size(0), src.size(1), other.size()))
f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
f'(1, {src.size(1)}, ...), but got size {other.size()}.')
if value is not None:
value = other.to(value.dtype).add_(value)
else:
@@ -35,8 +34,8 @@ def add_(src: SparseTensor, other: torch.Tensor) -> SparseTensor:
other = other.squeeze(0)[col]
else:
raise ValueError(
'Size mismatch: Expected size ({}, 1, ...) or (1, {}, ...), but '
'got size {}.'.format(src.size(0), src.size(1), other.size()))
f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
f'(1, {src.size(1)}, ...), but got size {other.size()}.')
if value is not None:
value = value.add_(other.to(value.dtype))
......
@@ -138,8 +138,8 @@ def cat(tensors: List[SparseTensor], dim: int) -> SparseTensor:
return tensors[0].set_value(value, layout='coo')
else:
raise IndexError(
'Dimension out of range: Expected to be in range of [{}, {}], but '
'got {}.'.format(-tensors[0].dim(), tensors[0].dim() - 1, dim))
f'Dimension out of range: Expected to be in range of '
f'[{-tensors[0].dim()}, {tensors[0].dim() - 1}], but got {dim}.')
@torch.jit.script
......
import warnings
import importlib
import os.path as osp
from typing import Optional
@@ -6,18 +6,8 @@ import torch
from torch_sparse.storage import SparseStorage
from torch_sparse.tensor import SparseTensor
try:
torch.ops.load_library(
osp.join(osp.dirname(osp.abspath(__file__)), '_diag.so'))
except OSError:
warnings.warn('Failed to load `diag` binaries.')
def non_diag_mask_placeholder(row: torch.Tensor, col: torch.Tensor, M: int,
N: int, k: int) -> torch.Tensor:
raise ImportError
return row
torch.ops.torch_sparse.non_diag_mask = non_diag_mask_placeholder
torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
'_diag', [osp.dirname(__file__)]).origin)
@torch.jit.script
......
import warnings
import importlib
import os.path as osp
from typing import Optional, Union, Tuple
from typing import Union, Tuple
import torch
from torch_sparse.tensor import SparseTensor
try:
torch.ops.load_library(
osp.join(osp.dirname(osp.abspath(__file__)), '_spmm.so'))
except OSError:
warnings.warn('Failed to load `spmm` binaries.')
def spmm_sum_placeholder(row: Optional[torch.Tensor], rowptr: torch.Tensor,
col: torch.Tensor, value: Optional[torch.Tensor],
colptr: Optional[torch.Tensor],
csr2csc: Optional[torch.Tensor],
mat: torch.Tensor) -> torch.Tensor:
raise ImportError
return mat
def spmm_mean_placeholder(row: Optional[torch.Tensor],
rowptr: torch.Tensor, col: torch.Tensor,
value: Optional[torch.Tensor],
rowcount: Optional[torch.Tensor],
colptr: Optional[torch.Tensor],
csr2csc: Optional[torch.Tensor],
mat: torch.Tensor) -> torch.Tensor:
raise ImportError
return mat
def spmm_min_max_placeholder(rowptr: torch.Tensor, col: torch.Tensor,
value: Optional[torch.Tensor],
mat: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor]:
raise ImportError
return mat, mat
torch.ops.torch_sparse.spmm_sum = spmm_sum_placeholder
torch.ops.torch_sparse.spmm_mean = spmm_mean_placeholder
torch.ops.torch_sparse.spmm_min = spmm_min_max_placeholder
torch.ops.torch_sparse.spmm_max = spmm_min_max_placeholder
try:
torch.ops.load_library(
osp.join(osp.dirname(osp.abspath(__file__)), '_spspmm.so'))
except OSError:
warnings.warn('Failed to load `spspmm` binaries.')
def spspmm_sum_placeholder(
rowptrA: torch.Tensor, colA: torch.Tensor,
valueA: Optional[torch.Tensor], rowptrB: torch.Tensor,
colB: torch.Tensor, valueB: Optional[torch.Tensor], K: int
) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]:
raise ImportError
return rowptrA, colA, valueA
torch.ops.torch_sparse.spspmm_sum = spspmm_sum_placeholder
torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
'_spmm', [osp.dirname(__file__)]).origin)
torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
'_spspmm', [osp.dirname(__file__)]).origin)
@torch.jit.script
......
@@ -15,8 +15,8 @@ def mul(src: SparseTensor, other: torch.Tensor) -> SparseTensor:
other = other.squeeze(0)[col]
else:
raise ValueError(
'Size mismatch: Expected size ({}, 1, ...) or (1, {}, ...), but '
'got size {}.'.format(src.size(0), src.size(1), other.size()))
f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
f'(1, {src.size(1)}, ...), but got size {other.size()}.')
if value is not None:
value = other.to(value.dtype).mul_(value)
@@ -35,8 +35,8 @@ def mul_(src: SparseTensor, other: torch.Tensor) -> SparseTensor:
other = other.squeeze(0)[col]
else:
raise ValueError(
'Size mismatch: Expected size ({}, 1, ...) or (1, {}, ...), but '
'got size {}.'.format(src.size(0), src.size(1), other.size()))
f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
f'(1, {src.size(1)}, ...), but got size {other.size()}.')
if value is not None:
value = value.mul_(other.to(value.dtype))
......
import warnings
import importlib
import os.path as osp
from typing import Optional, List
@@ -6,22 +7,8 @@ import torch
from torch_scatter import segment_csr, scatter_add
from torch_sparse.utils import Final
try:
torch.ops.load_library(
osp.join(osp.dirname(osp.abspath(__file__)), '_convert.so'))
except OSError:
warnings.warn('Failed to load `convert` binaries.')
def ind2ptr_placeholder(ind: torch.Tensor, M: int) -> torch.Tensor:
raise ImportError
return ind
def ptr2ind_placeholder(ptr: torch.Tensor, E: int) -> torch.Tensor:
raise ImportError
return ptr
torch.ops.torch_sparse.ind2ptr = ind2ptr_placeholder
torch.ops.torch_sparse.ptr2ind = ptr2ind_placeholder
torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
'_convert', [osp.dirname(__file__)]).origin)
layouts: Final[List[str]] = ['coo', 'csr', 'csc']
......