Commit e19cd62e authored by Quan (Andy) Gan, committed by Minjie Wang

[Test] Unify tests for different backends (#333)

* test basics

* batched graph & filter, mxnet filter fix

* frame and function; bugfix

* test graph adj and inc matrices

* fixing start = 0 for mxnet

* test index

* inplace update & line graph

* multi send recv

* more tests

* oops

* more tests

* removing old test files; readonly graphs for mxnet still kept

* modifying test scripts

* adding a placeholder for pytorch to reserve directory

* torch 0.4.1 compat fixes

* moving backend out of compute to avoid nose detection

* tests guide

* mx sparse-to-dense/sparse-to-numpy is buggy

* oops

* contribution guide for unit tests

* printing incmat

* printing dlpack

* small push

* typo

* fixing duplicate entries that causes undefined behavior

* move equal comparison to backend
parent 3edcaa1e
......@@ -4,7 +4,7 @@ Contribution is always welcomed. A good starting place is the roadmap issue, whe
you can find our current milestones. All contributions must go through pull requests
and be reviewed by the committors. See our [contribution guide](https://docs.dgl.ai/contribute.html) for more details.
Once your contribution is accepted and merged, congratulation, you are now an contributor to the DGL project.
Once your contribution is accepted and merged, congratulations, you are now a contributor to the DGL project.
We will put your name in the list below and also on our [website](https://www.dgl.ai/ack).
Contributors
......
......@@ -118,7 +118,51 @@ You could test the build by running the following command and see the path of yo
python -c 'import dgl; print(dgl.__path__)'
TBD by Quan about how to run and write unittests.
Unit tests
``````````
Currently, we use ``nose`` for unit tests. The organization goes as follows:
* ``backend``: Additional unified tensor interface for supported frameworks.
The functions there are only used in unit tests, not in DGL itself. Note that
the code there is not itself a set of unit tests. The additional backend can
be imported with
.. code-block:: python
import backend
The additional backend contains the following files:
- ``backend/backend_unittest.py``: stub file for all additional tensor
functions.
- ``backend/${DGLBACKEND}/__init__.py``: implementations of the stubs
for the backend ``${DGLBACKEND}``.
- ``backend/__init__.py``: when imported, it replaces the stub implementations
with the framework-specific code, depending on the selected backend. It
also changes the signature of some existing backend functions to automatically
select dtypes and contexts.
* ``compute``: All framework-agnostic computation-related unit tests go there.
Anything inside should not depend on a specific tensor library. Tensor
functions not provided by the DGL unified tensor interface (i.e. ``dgl.backend``)
should go into the ``backend`` directory.
* ``${DGLBACKEND}`` (e.g. ``pytorch`` and ``mxnet``): All framework-specific
computation-related unit tests go there.
* ``graph_index``: All unit tests for C++ graph structure implementation go
there. The Python API being tested in this directory, if any, should be
as minimal as possible (usually simple wrappers of corresponding C++
functions).
* ``lint``: Pylint-related files.
* ``scripts``: Automated test scripts for CI.
To run unit tests, run
.. code-block:: bash
sh tests/scripts/task_unit_test.sh <your-backend>
where ``<your-backend>`` can be any supported backend (i.e. ``pytorch`` or ``mxnet``).
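For example, a framework-agnostic test written against this setup might look
as follows (a minimal sketch for illustration; the helper names come from the
stub interface, and the test itself is not part of the suite):

.. code-block:: python

    import backend as F

    def test_zeros_like():
        x = F.randn((3, 4))
        z = F.zeros_like(x)
        assert tuple(F.shape(z)) == (3, 4)
        assert F.allclose(z, F.zeros((3, 4)))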
Building documents
------------------
......
......@@ -9,7 +9,7 @@ The principles of this interface:
* Argument type should be easier to understand.
It is recommended the frameworks implement all the interfaces. However, it is
also OK to skip some. The generated backend module has an ``is_enbaled`` function
also OK to skip some. The generated backend module has an ``is_enabled`` function
that returns whether the interface is supported by the framework or not.
"""
......@@ -507,6 +507,22 @@ def zeros(shape, dtype, ctx):
"""
pass
def zeros_like(input):
"""Create a zero tensor with the same shape, dtype and context of the
given tensor.
Parameters
----------
input : Tensor
The input
Returns
-------
Tensor
The result
"""
pass
def ones(shape, dtype, ctx):
"""Create a one tensor.
......@@ -595,6 +611,54 @@ def unsorted_1d_segment_mean(input, seg_id, n_segs, dim):
"""
pass
def boolean_mask(input, mask):
"""Selects elements in x according to the given mask from the first
dimension.
Parameters
----------
input : Tensor
The input tensor
mask : Boolean Tensor
The mask
Returns
-------
Tensor
The result
"""
pass
def equal(x, y):
"""Compares whether the elements are equal.
Parameters
----------
x, y : Tensor
The two tensors
Returns
-------
Boolean tensor
The result, with the same shape as input.
"""
pass
def logical_not(input):
"""Perform a logical not operation. Equivalent to np.logical_not
Parameters
----------
input : Tensor
The input
Returns
-------
Tensor
The result
"""
pass
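# Illustrative NumPy-style semantics for the three helpers above (a sketch,
# not a normative spec):
#   equal(x, y)        -> elementwise (x == y), a boolean tensor of the same shape
#   logical_not(mask)  -> elementwise negation, like np.logical_not(mask)
#   boolean_mask(x, m) -> the rows of x where m is True, like x[m] in NumPy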
###############################################################################
# Tensor functions used *only* on index tensor
# ----------------
......
......@@ -3,6 +3,7 @@ from __future__ import absolute_import
import numpy as np
import mxnet as mx
import mxnet.ndarray as nd
import numbers
def data_type_dict():
return {'float16' : np.float16,
......@@ -18,6 +19,13 @@ def cpu():
return mx.cpu()
def tensor(data, dtype=None):
# MXNet always returns a float tensor regardless of type inside data.
# This is a workaround.
if dtype is None:
if isinstance(data[0], numbers.Integral):
dtype = np.int64
else:
dtype = np.float32
return nd.array(data, dtype=dtype)
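# For example, with this workaround tensor([1, 2, 3]) yields an int64 NDArray
# and tensor([1., 2., 3.]) yields float32, whereas bare nd.array(data) would
# return float32 for both.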
def sparse_matrix(data, index, shape, force_format=False):
......@@ -90,7 +98,7 @@ def cat(seq, dim):
return nd.concat(*seq, dim=dim)
def stack(seq, dim):
return nd.stack(*seq, dim=dim)
return nd.stack(*seq, axis=dim)
def split(x, sizes_or_sections, dim):
if isinstance(sizes_or_sections, list) or isinstance(sizes_or_sections, np.ndarray):
......@@ -103,13 +111,17 @@ def split(x, sizes_or_sections, dim):
return nd.split(x, sizes_or_sections, axis=dim)
def gather_row(data, row_index):
# MXNet workaround for empty row index
if len(row_index) == 0:
return data[0:0]
if isinstance(row_index, nd.NDArray):
return nd.take(data, row_index)
else:
return data[row_index,]
def narrow_row(data, start, stop):
return nd.slice(data, begin=start, end=stop)
return data[start:stop]
def scatter_row(data, row_index, value):
return mx.nd.contrib.index_copy(data, row_index, value)
......@@ -130,6 +142,9 @@ def reshape(input, shape):
def zeros(shape, dtype, ctx):
return nd.zeros(shape, dtype=dtype, ctx=ctx)
def zeros_like(input):
return nd.zeros_like(input)
def ones(shape, dtype, ctx):
return nd.ones(shape, dtype=dtype, ctx=ctx)
......@@ -165,6 +180,15 @@ def unsorted_1d_segment_mean(input, seg_id, n_segs, dim):
y /= w.reshape((-1,) + (1,) * (y.ndim - 1))
return y
def boolean_mask(input, mask):
return mx.contrib.nd.boolean_mask(input, mask)
def equal(x, y):
return x == y
def logical_not(input):
return nd.logical_not(input)
def unique(input):
# TODO: fallback to numpy is unfortunate
tmp = input.asnumpy()
......
......@@ -118,6 +118,9 @@ def reshape(input, shape):
def zeros(shape, dtype, ctx):
return th.zeros(shape, dtype=dtype, device=ctx)
def zeros_like(input):
return th.zeros_like(input)
def ones(shape, dtype, ctx):
return th.ones(shape, dtype=dtype, device=ctx)
......@@ -137,6 +140,15 @@ def unsorted_1d_segment_mean(input, seg_id, n_segs, dim):
y /= w.view((-1,) + (1,) * (y.dim() - 1))
return y
def boolean_mask(input, mask):
return input[mask]
def equal(x, y):
return x == y
def logical_not(input):
return ~input
def unique(input):
return th.unique(input)
......@@ -144,7 +156,8 @@ def full_1d(length, fill_value, dtype, ctx):
return th.full((length,), fill_value, dtype=dtype, device=ctx)
def nonzero_1d(input):
return th.nonzero(input).squeeze()
x = th.nonzero(input).squeeze()
return x if x.dim() == 1 else x.view(-1)
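# (th.nonzero(...).squeeze() on a tensor with a single nonzero entry collapses
# to a 0-dim tensor; view(-1) restores the expected 1-D shape)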
def sort_1d(input):
return th.sort(input)
......
......@@ -138,10 +138,12 @@ class Column(object):
elif idx.slice_data() is not None:
# for contiguous indices narrow+concat is usually faster than scatter row
slc = idx.slice_data()
part1 = F.narrow_row(self.data, 0, slc.start)
part2 = feats
part3 = F.narrow_row(self.data, slc.stop, len(self))
self.data = F.cat([part1, part2, part3], dim=0)
parts = [feats]
if slc.start > 0:
parts.insert(0, F.narrow_row(self.data, 0, slc.start))
if slc.stop < len(self):
parts.append(F.narrow_row(self.data, slc.stop, len(self)))
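# a zero-length narrow at either boundary is skipped above; empty slices are
# not handled gracefully by every backend (cf. the MXNet empty-index
# workaround in gather_row)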
self.data = F.cat(parts, dim=0)
else:
idx = idx.tousertensor(F.context(self.data))
self.data = F.scatter_row(self.data, idx, feats)
......
......@@ -1120,12 +1120,12 @@ class DGLGraph(object):
if node_attrs is not None:
for nid, attr in nx_graph.nodes(data=True):
feat_dict = self.get_n_repr(nid)
attr.update({key: feat_dict[key].squeeze(0) for key in node_attrs})
attr.update({key: F.squeeze(feat_dict[key], 0) for key in node_attrs})
if edge_attrs is not None:
for _, _, attr in nx_graph.edges(data=True):
eid = attr['id']
feat_dict = self.get_e_repr(eid)
attr.update({key: feat_dict[key].squeeze(0) for key in edge_attrs})
attr.update({key: F.squeeze(feat_dict[key], 0) for key in edge_attrs})
return nx_graph
def from_networkx(self, nx_graph, node_attrs=None, edge_attrs=None):
......@@ -2830,7 +2830,7 @@ class DGLGraph(object):
return F.nonzero_1d(n_mask)
else:
nodes = F.tensor(nodes)
return nodes[n_mask]
return F.boolean_mask(nodes, n_mask)
def filter_edges(self, predicate, edges=ALL):
"""Return a tensor of edge IDs that satisfy the given predicate.
......@@ -2903,7 +2903,7 @@ class DGLGraph(object):
return F.nonzero_1d(e_mask)
else:
edges = F.tensor(edges)
return edges[e_mask]
return F.boolean_mask(edges, e_mask)
def __repr__(self):
ret = ('DGLGraph(num_nodes={node}, num_edges={edge},\n'
......
......@@ -611,18 +611,22 @@ class GraphIndex(object):
dat = F.ones((m,), dtype=F.float32, ctx=ctx)
inc, shuffle_idx = F.sparse_matrix(dat, ('coo', idx), (n, m))
elif typestr == 'both':
# first remove entries for self loops
mask = F.logical_not(F.equal(src, dst))
src = F.boolean_mask(src, mask)
dst = F.boolean_mask(dst, mask)
eid = F.boolean_mask(eid, mask)
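# a self-loop would otherwise contribute both a +1 and a -1 at the same
# (node, edge) coordinate, i.e. duplicate COO entries with undefined behavior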
n_entries = F.shape(src)[0]
# create index
row = F.unsqueeze(F.cat([src, dst], dim=0), 0)
col = F.unsqueeze(F.cat([eid, eid], dim=0), 0)
idx = F.cat([row, col], dim=0)
# create data
diagonal = (src == dst)
# FIXME(minjie): data type
x = -F.ones((m,), dtype=F.float32, ctx=ctx)
y = F.ones((m,), dtype=F.float32, ctx=ctx)
x[diagonal] = 0
y[diagonal] = 0
x = -F.ones((n_entries,), dtype=F.float32, ctx=ctx)
y = F.ones((n_entries,), dtype=F.float32, ctx=ctx)
dat = F.cat([x, y], dim=0)
print(idx)
print(dat)
inc, shuffle_idx = F.sparse_matrix(dat, ('coo', idx), (n, m))
else:
raise DGLError('Invalid incidence matrix type: %s' % str(typestr))
......
Unit test
===
The code organization goes as follows:
* `backend`: Additional unified tensor interface for supported frameworks.
The functions there are only used in unit tests, not in DGL itself. Note that
the code there is not itself a set of unit tests.
* `compute`: All framework-agnostic computation-related unit tests go there.
* `${DGLBACKEND}` (e.g. `pytorch` and `mxnet`): All framework-specific
computation-related unit tests go there.
* `graph_index`: All unit tests for C++ graph structure implementation go
there. The Python API being tested in this directory, if any, should be
as minimal as possible (usually simple wrappers of corresponding C++
functions).
* `lint`: Pylint-related files.
* `scripts`: Automated test scripts for CI.
from dgl.backend import *
from . import backend_unittest
import os
import importlib
import sys
import numpy as np
mod_name = os.environ.get('DGLBACKEND', 'pytorch').lower()
mod = importlib.import_module('.%s' % mod_name, __name__)
thismod = sys.modules[__name__]
for api in backend_unittest.__dict__.keys():
if api.startswith('__'):
continue
elif callable(mod.__dict__[api]):
# Tensor APIs used in unit tests MUST be supported across all backends
globals()[api] = mod.__dict__[api]
# Tensor creation with default dtype and context
_zeros = zeros
_ones = ones
_randn = randn
_tensor = tensor
_arange = arange
_full = full
_full_1d = full_1d
_default_context_str = os.getenv('DGLTESTDEV', 'cpu')
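# default device for every tensor the tests create; override with e.g.
# DGLTESTDEV=cuda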
_context_dict = {
'cpu': cpu(),
'cuda': cuda(),
}
_default_context = _context_dict[_default_context_str]
def zeros(shape, dtype=float32, ctx=_default_context):
return _zeros(shape, dtype, ctx)
def ones(shape, dtype=float32, ctx=_default_context):
return _ones(shape, dtype, ctx)
def randn(shape):
return copy_to(_randn(shape), _default_context)
def tensor(data, dtype=None):
if dtype is None:
data = np.array(data)
dtype = int64 if np.issubdtype(data.dtype, np.integer) else float32
return copy_to(_tensor(data, dtype), _default_context)
def arange(start, stop):
return copy_to(_arange(start, stop), _default_context)
def full(shape, fill_value, dtype, ctx=_default_context):
return _full(shape, fill_value, dtype, ctx)
def full_1d(length, fill_value, dtype, ctx=_default_context):
return _full_1d(length, fill_value, dtype, ctx)
"""This file defines the unified tensor framework interface required by DGL
unit testing, other than the ones used in the framework itself.
"""
###############################################################################
# Tensor, data type and context interfaces
def cuda():
"""Context object for CUDA."""
pass
###############################################################################
# Tensor functions on feature data
# --------------------------------
# These functions are performance critical, so it's better to have efficient
# implementation in each framework.
def array_equal(a, b):
"""Check whether the two tensors are *exactly* equal."""
pass
def allclose(a, b):
"""Check whether the two tensors are numerically close to each other."""
pass
def randn(shape):
"""Generate a tensor with elements from standard normal distribution."""
pass
def attach_grad(x):
"""Flag the tensor *in-place* to have its gradient computed in backward
pass.
If the flag is already set, reset the gradient buffer as well.
"""
pass
def backward(x, head_gradient=None):
"""Invoke backward computation with an optional head gradient.
Returns nothing."""
pass
def grad(x):
"""Fetches the gradient from the tensor after backward computation."""
pass
def is_no_grad(x):
"""Check whether a tensor has its gradient computed."""
pass
def full(shape, fill_value, dtype, ctx):
pass
def narrow_row_set(x, start, stop, new):
"""Set a slice of the given tensor to a new value."""
pass
def sparse_to_numpy(x):
"""Convert a sparse tensor to a numpy array."""
pass
def clone(x):
pass
def reduce_sum(x):
"""Sums all the elements into a single scalar."""
pass
###############################################################################
# Tensor functions used *only* on index tensor
# ----------------
# These operators are lightweight, so it is acceptable to fall back to
# numpy operators if they are currently missing in the framework. Ideally, in the future,
# DGL should contain all the operations on index, so this set of operators
# should be gradually removed.
###############################################################################
# Other interfaces
# ----------------
# These are not related to tensors. Some of them are temporary workarounds that
# should be included in DGL in the future.
class record_grad(object):
"""Context manager that records the gradients"""
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
class no_grad(object):
"""Context manager that explicitly disables gradient computation"""
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
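# A typical gradient check assembled from these primitives (an illustrative
# sketch only; tensor constructors such as randn/full and dtype/context
# helpers come from the generated backend module):
#
#     x = attach_grad(randn((3, 4)))
#     with record_grad():
#         y = x * 2
#     backward(y, full((3, 4), 1, float32, cpu()))
#     assert allclose(grad(x), full((3, 4), 2, float32, cpu()))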
from __future__ import absolute_import
import numpy as np
import mxnet as mx
import mxnet.ndarray as nd
import mxnet.autograd as autograd
def cuda():
return mx.gpu()
def array_equal(a, b):
return nd.equal(a, b).asnumpy().all()
def allclose(a, b):
return np.allclose(a.asnumpy(), b.asnumpy(), rtol=1e-4, atol=1e-4)
def randn(shape):
return nd.random.randn(*shape)
def attach_grad(x):
x.attach_grad()
return x
def backward(x, head_gradient=None):
x.backward(head_gradient)
def grad(x):
return x.grad
def is_no_grad(x):
# check the attached gradient buffer, consistent with the PyTorch backend
return x.grad is None or (x.grad != 0).sum() == 0
def full(shape, fill_value, dtype, ctx):
return nd.full(shape, fill_value, dtype=dtype, ctx=ctx)
def narrow_row_set(x, start, stop, new):
x[start:stop] = new
def sparse_to_numpy(x):
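# MXNet's own sparse-to-dense/sparse-to-numpy conversion is buggy, so convert
# through scipy instead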
return x.asscipy().todense().A
def clone(x):
return x.copy()
def reduce_sum(x):
return x.sum()
record_grad = autograd.record
class no_grad(object):
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
from __future__ import absolute_import
import torch as th
def cuda():
return th.device('cuda')
def array_equal(a, b):
return th.equal(a, b)
def allclose(a, b):
return th.allclose(a.float(), b.float(), rtol=1e-4, atol=1e-4)
def randn(shape):
return th.randn(*shape)
def attach_grad(x):
if x.grad is not None:
x.grad.zero_()
return x
else:
return x.requires_grad_()
def backward(x, head_gradient=None):
x.backward(head_gradient)
def grad(x):
return x.grad
def is_no_grad(x):
return x.grad is None or (x.grad == 0).all()
def full(shape, fill_value, dtype, ctx):
return th.full(shape, fill_value, dtype=dtype, device=ctx)
def narrow_row_set(x, start, stop, new):
x[start:stop] = new
def sparse_to_numpy(x):
return x.to_dense().numpy()
def clone(x):
return x.clone()
def reduce_sum(x):
return x.sum()
class record_grad(object):
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
no_grad = th.no_grad
import torch as th
from torch.autograd import Variable
import numpy as np
# Currently readonly graph construction only accepts sparse tensors in MXNet,
# and pytorch doesn't support readonly graphs or graph creation from sparse
# tensors. For now, readonly graph tests are postponed until we have better
# readonly graph support.
import backend as F
import dgl
from dgl.graph import DGLGraph
import utils as U
from dgl import DGLGraph
from collections import defaultdict as ddict
D = 5
reduce_msg_shapes = set()
def check_eq(a, b):
assert a.shape == b.shape
assert th.sum(a == b) == int(np.prod(list(a.shape)))
def message_func(edges):
assert len(edges.src['h'].shape) == 2
assert edges.src['h'].shape[1] == D
assert F.ndim(edges.src['h']) == 2
assert F.shape(edges.src['h'])[1] == D
return {'m' : edges.src['h']}
def reduce_func(nodes):
msgs = nodes.mailbox['m']
reduce_msg_shapes.add(tuple(msgs.shape))
assert len(msgs.shape) == 3
assert msgs.shape[2] == D
return {'accum' : th.sum(msgs, 1)}
assert F.ndim(msgs) == 3
assert F.shape(msgs)[2] == D
return {'accum' : F.sum(msgs, 1)}
def apply_node_func(nodes):
return {'h' : nodes.data['h'] + nodes.data['accum']}
def generate_graph(grad=False):
g = DGLGraph()
g.add_nodes(10) # 10 nodes.
g.add_nodes(10) # 10 nodes
# create a graph where 0 is the source and 9 is the sink
# 17 edges
for i in range(1, 9):
......@@ -38,8 +35,12 @@ def generate_graph(grad=False):
g.add_edge(i, 9)
# add a back flow from 9 to 0
g.add_edge(9, 0)
ncol = Variable(th.randn(10, D), requires_grad=grad)
ecol = Variable(th.randn(17, D), requires_grad=grad)
ncol = F.randn((10, D))
ecol = F.randn((17, D))
if grad:
ncol = F.attach_grad(ncol)
ecol = F.attach_grad(ecol)
g.ndata['h'] = ncol
g.edata['w'] = ecol
g.set_n_initializer(dgl.init.zero_initializer)
......@@ -48,22 +49,22 @@ def generate_graph(grad=False):
def test_batch_setter_getter():
def _pfc(x):
return list(x.numpy()[:,0])
return list(F.zerocopy_to_numpy(x)[:,0])
g = generate_graph()
# set all nodes
g.ndata['h'] = th.zeros((10, D))
assert U.allclose(g.ndata['h'], th.zeros((10, D)))
g.ndata['h'] = F.zeros((10, D))
assert F.allclose(g.ndata['h'], F.zeros((10, D)))
# pop nodes
old_len = len(g.ndata)
assert _pfc(g.pop_n_repr('h')) == [0.] * 10
assert len(g.ndata) == old_len - 1
g.ndata['h'] = th.zeros((10, D))
g.ndata['h'] = F.zeros((10, D))
# set partial nodes
u = th.tensor([1, 3, 5])
g.nodes[u].data['h'] = th.ones((3, D))
u = F.tensor([1, 3, 5])
g.nodes[u].data['h'] = F.ones((3, D))
assert _pfc(g.ndata['h']) == [0., 1., 0., 1., 0., 1., 0., 0., 0., 0.]
# get partial nodes
u = th.tensor([1, 2, 3])
u = F.tensor([1, 2, 3])
assert _pfc(g.nodes[u].data['h']) == [1., 0., 1.]
'''
......@@ -87,56 +88,57 @@ def test_batch_setter_getter():
9, 0, 16
'''
# set all edges
g.edata['l'] = th.zeros((17, D))
g.edata['l'] = F.zeros((17, D))
assert _pfc(g.edata['l']) == [0.] * 17
# pop edges
old_len = len(g.edata)
assert _pfc(g.pop_e_repr('l')) == [0.] * 17
assert len(g.edata) == old_len - 1
g.edata['l'] = th.zeros((17, D))
g.edata['l'] = F.zeros((17, D))
# set partial edges (many-many)
u = th.tensor([0, 0, 2, 5, 9])
v = th.tensor([1, 3, 9, 9, 0])
g.edges[u, v].data['l'] = th.ones((5, D))
u = F.tensor([0, 0, 2, 5, 9])
v = F.tensor([1, 3, 9, 9, 0])
g.edges[u, v].data['l'] = F.ones((5, D))
truth = [0.] * 17
truth[0] = truth[4] = truth[3] = truth[9] = truth[16] = 1.
assert _pfc(g.edata['l']) == truth
# set partial edges (many-one)
u = th.tensor([3, 4, 6])
v = th.tensor([9])
g.edges[u, v].data['l'] = th.ones((3, D))
u = F.tensor([3, 4, 6])
v = F.tensor([9])
g.edges[u, v].data['l'] = F.ones((3, D))
truth[5] = truth[7] = truth[11] = 1.
assert _pfc(g.edata['l']) == truth
# set partial edges (one-many)
u = th.tensor([0])
v = th.tensor([4, 5, 6])
g.edges[u, v].data['l'] = th.ones((3, D))
u = F.tensor([0])
v = F.tensor([4, 5, 6])
g.edges[u, v].data['l'] = F.ones((3, D))
truth[6] = truth[8] = truth[10] = 1.
assert _pfc(g.edata['l']) == truth
# get partial edges (many-many)
u = th.tensor([0, 6, 0])
v = th.tensor([6, 9, 7])
u = F.tensor([0, 6, 0])
v = F.tensor([6, 9, 7])
assert _pfc(g.edges[u, v].data['l']) == [1., 1., 0.]
# get partial edges (many-one)
u = th.tensor([5, 6, 7])
v = th.tensor([9])
u = F.tensor([5, 6, 7])
v = F.tensor([9])
assert _pfc(g.edges[u, v].data['l']) == [1., 1., 0.]
# get partial edges (one-many)
u = th.tensor([0])
v = th.tensor([3, 4, 5])
u = F.tensor([0])
v = F.tensor([3, 4, 5])
assert _pfc(g.edges[u, v].data['l']) == [1., 1., 1.]
def test_batch_setter_autograd():
g = generate_graph(grad=True)
h1 = g.ndata['h']
# partial set
v = th.tensor([1, 2, 8])
hh = Variable(th.zeros((len(v), D)), requires_grad=True)
v = F.tensor([1, 2, 8])
hh = F.attach_grad(F.zeros((len(v), D)))
with F.record_grad():
g.nodes[v].data['h'] = hh
h2 = g.ndata['h']
h2.backward(th.ones((10, D)) * 2)
check_eq(h1.grad[:,0], th.tensor([2., 0., 0., 2., 2., 2., 2., 2., 0., 2.]))
check_eq(hh.grad[:,0], th.tensor([2., 2., 2.]))
F.backward(h2, F.ones((10, D)) * 2)
assert F.array_equal(F.grad(h1)[:,0], F.tensor([2., 0., 0., 2., 2., 2., 2., 2., 0., 2.]))
assert F.array_equal(F.grad(hh)[:,0], F.tensor([2., 2., 2.]))
def test_nx_conversion():
# check conversion between networkx and DGLGraph
......@@ -151,10 +153,10 @@ def test_nx_conversion():
for nid, attr in nxg.nodes(data=True):
assert len(attr) == len(nf)
for k in nxg.nodes[nid]:
node_feat[k].append(attr[k].unsqueeze(0))
node_feat[k].append(F.unsqueeze(attr[k], 0))
for k in node_feat:
feat = th.cat(node_feat[k], dim=0)
assert U.allclose(feat, nf[k])
feat = F.cat(node_feat[k], 0)
assert F.allclose(feat, nf[k])
else:
assert len(nf) == 0
if num_edges > 0:
......@@ -163,18 +165,18 @@ def test_nx_conversion():
assert len(attr) == len(ef) + 1 # extra id
eid = attr['id']
for k in ef:
edge_feat[k][eid] = attr[k].unsqueeze(0)
edge_feat[k][eid] = F.unsqueeze(attr[k], 0)
for k in edge_feat:
feat = th.cat(edge_feat[k], dim=0)
assert U.allclose(feat, ef[k])
feat = F.cat(edge_feat[k], 0)
assert F.allclose(feat, ef[k])
else:
assert len(ef) == 0
n1 = th.randn(5, 3)
n2 = th.randn(5, 10)
n3 = th.randn(5, 4)
e1 = th.randn(4, 5)
e2 = th.randn(4, 7)
n1 = F.randn((5, 3))
n2 = F.randn((5, 10))
n3 = F.randn((5, 4))
e1 = F.randn((4, 5))
e2 = F.randn((4, 7))
g = DGLGraph(multigraph=True)
g.add_nodes(5)
g.add_edges([0,1,3,4], [2,4,0,3])
......@@ -198,20 +200,20 @@ def test_nx_conversion():
assert len(g.ndata) == 1
assert len(g.edata) == 2
# check feature values
assert U.allclose(g.ndata['n1'], n1)
assert F.allclose(g.ndata['n1'], n1)
# with id in nx edge feature, e1 should follow original order
assert U.allclose(g.edata['e1'], e1)
assert th.equal(g.get_e_repr()['id'], th.arange(4))
assert F.allclose(g.edata['e1'], e1)
assert F.array_equal(g.get_e_repr()['id'], F.arange(0, 4))
# test conversion after modifying DGLGraph
g.pop_e_repr('id') # pop id so we don't need to provide id when adding edges
new_n = th.randn(2, 3)
new_e = th.randn(3, 5)
new_n = F.randn((2, 3))
new_e = F.randn((3, 5))
g.add_nodes(2, data={'n1': new_n})
# add three edges, one is a multi-edge
g.add_edges([3, 6, 0], [4, 5, 2], data={'e1': new_e})
n1 = th.cat((n1, new_n), dim=0)
e1 = th.cat((e1, new_e), dim=0)
n1 = F.cat((n1, new_n), 0)
e1 = F.cat((e1, new_e), 0)
# convert to networkx again
nxg = g.to_networkx(node_attrs=['n1'], edge_attrs=['e1'])
assert len(nxg) == 7
......@@ -232,31 +234,31 @@ def test_nx_conversion():
assert len(g.ndata) == 1
assert len(g.edata) == 1
# check feature values
assert U.allclose(g.ndata['n1'], n1)
assert F.allclose(g.ndata['n1'], n1)
# edge feature order follows nxg.edges()
edge_feat = []
for _, _, attr in nxg.edges(data=True):
edge_feat.append(attr['e1'].unsqueeze(0))
edge_feat = th.cat(edge_feat, dim=0)
assert U.allclose(g.edata['e1'], edge_feat)
edge_feat.append(F.unsqueeze(attr['e1'], 0))
edge_feat = F.cat(edge_feat, 0)
assert F.allclose(g.edata['e1'], edge_feat)
def test_batch_send():
g = generate_graph()
def _fmsg(edges):
assert edges.src['h'].shape == (5, D)
assert tuple(F.shape(edges.src['h'])) == (5, D)
return {'m' : edges.src['h']}
g.register_message_func(_fmsg)
# many-many send
u = th.tensor([0, 0, 0, 0, 0])
v = th.tensor([1, 2, 3, 4, 5])
u = F.tensor([0, 0, 0, 0, 0])
v = F.tensor([1, 2, 3, 4, 5])
g.send((u, v))
# one-many send
u = th.tensor([0])
v = th.tensor([1, 2, 3, 4, 5])
u = F.tensor([0])
v = F.tensor([1, 2, 3, 4, 5])
g.send((u, v))
# many-one send
u = th.tensor([1, 2, 3, 4, 5])
v = th.tensor([9])
u = F.tensor([1, 2, 3, 4, 5])
v = F.tensor([9])
g.send((u, v))
def test_batch_recv():
......@@ -265,11 +267,11 @@ def test_batch_recv():
g.register_message_func(message_func)
g.register_reduce_func(reduce_func)
g.register_apply_node_func(apply_node_func)
u = th.tensor([0, 0, 0, 4, 5, 6])
v = th.tensor([1, 2, 3, 9, 9, 9])
u = F.tensor([0, 0, 0, 4, 5, 6])
v = F.tensor([1, 2, 3, 9, 9, 9])
reduce_msg_shapes.clear()
g.send((u, v))
g.recv(th.unique(v))
g.recv(F.unique(v))
assert(reduce_msg_shapes == {(1, 3, D), (3, 1, D)})
reduce_msg_shapes.clear()
......@@ -280,10 +282,10 @@ def test_apply_nodes():
g.register_apply_node_func(_upd)
old = g.ndata['h']
g.apply_nodes()
assert U.allclose(old * 2, g.ndata['h'])
u = th.tensor([0, 3, 4, 6])
assert F.allclose(old * 2, g.ndata['h'])
u = F.tensor([0, 3, 4, 6])
g.apply_nodes(lambda nodes : {'h' : nodes.data['h'] * 0.}, u)
assert U.allclose(g.ndata['h'][u], th.zeros((4, D)))
assert F.allclose(F.gather_row(g.ndata['h'], u), F.zeros((4, D)))
def test_apply_edges():
def _upd(edges):
......@@ -292,12 +294,12 @@ def test_apply_edges():
g.register_apply_edge_func(_upd)
old = g.edata['w']
g.apply_edges()
assert U.allclose(old * 2, g.edata['w'])
u = th.tensor([0, 0, 0, 4, 5, 6])
v = th.tensor([1, 2, 3, 9, 9, 9])
assert F.allclose(old * 2, g.edata['w'])
u = F.tensor([0, 0, 0, 4, 5, 6])
v = F.tensor([1, 2, 3, 9, 9, 9])
g.apply_edges(lambda edges : {'w' : edges.data['w'] * 0.}, (u, v))
eid = g.edge_ids(u, v)
assert U.allclose(g.edata['w'][eid], th.zeros((6, D)))
assert F.allclose(F.gather_row(g.edata['w'], eid), F.zeros((6, D)))
def test_update_routines():
g = generate_graph()
......@@ -319,14 +321,14 @@ def test_update_routines():
pass
# pull
v = th.tensor([1, 2, 3, 9])
v = F.tensor([1, 2, 3, 9])
reduce_msg_shapes.clear()
g.pull(v)
assert(reduce_msg_shapes == {(1, 8, D), (3, 1, D)})
reduce_msg_shapes.clear()
# push
v = th.tensor([0, 1, 2, 3])
v = F.tensor([0, 1, 2, 3])
reduce_msg_shapes.clear()
g.push(v)
assert(reduce_msg_shapes == {(1, 3, D), (8, 1, D)})
......@@ -346,36 +348,36 @@ def test_recv_0deg():
def _message(edges):
return {'m' : edges.src['h']}
def _reduce(nodes):
return {'h' : nodes.data['h'] + nodes.mailbox['m'].sum(1)}
return {'h' : nodes.data['h'] + F.sum(nodes.mailbox['m'], 1)}
def _apply(nodes):
return {'h' : nodes.data['h'] * 2}
def _init2(shape, dtype, ctx, ids):
return 2 + th.zeros(shape, dtype=dtype, device=ctx)
return 2 + F.zeros(shape, dtype, ctx)
g.register_message_func(_message)
g.register_reduce_func(_reduce)
g.register_apply_node_func(_apply)
g.set_n_initializer(_init2, 'h')
# test#1: recv both 0deg and non-0deg nodes
old = th.randn((2, 5))
old = F.randn((2, 5))
g.ndata['h'] = old
g.send((0, 1))
g.recv([0, 1])
new = g.ndata.pop('h')
# 0deg check: initialized with the func and got applied
assert U.allclose(new[0], th.full((5,), 4))
assert F.allclose(new[0], F.full_1d(5, 4, F.float32))
# non-0deg check
assert U.allclose(new[1], th.sum(old, 0) * 2)
assert F.allclose(new[1], F.sum(old, 0) * 2)
# test#2: recv only 0deg node is equal to apply
old = th.randn((2, 5))
old = F.randn((2, 5))
g.ndata['h'] = old
g.send((0, 1))
g.recv(0)
new = g.ndata.pop('h')
# 0deg check: equal to apply_nodes
assert U.allclose(new[0], 2 * old[0])
assert F.allclose(new[0], 2 * old[0])
# non-0deg check: untouched
assert U.allclose(new[1], old[1])
assert F.allclose(new[1], old[1])
def test_recv_0deg_newfld():
# test recv with 0deg nodes; the reducer also creates a new field
......@@ -385,37 +387,37 @@ def test_recv_0deg_newfld():
def _message(edges):
return {'m' : edges.src['h']}
def _reduce(nodes):
return {'h1' : nodes.data['h'] + nodes.mailbox['m'].sum(1)}
return {'h1' : nodes.data['h'] + F.sum(nodes.mailbox['m'], 1)}
def _apply(nodes):
return {'h1' : nodes.data['h1'] * 2}
def _init2(shape, dtype, ctx, ids):
return 2 + th.zeros(shape, dtype=dtype, device=ctx)
return 2 + F.zeros(shape, dtype=dtype, ctx=ctx)
g.register_message_func(_message)
g.register_reduce_func(_reduce)
g.register_apply_node_func(_apply)
# test#1: recv both 0deg and non-0deg nodes
old = th.randn((2, 5))
old = F.randn((2, 5))
g.set_n_initializer(_init2, 'h1')
g.ndata['h'] = old
g.send((0, 1))
g.recv([0, 1])
new = g.ndata.pop('h1')
# 0deg check: initialized with the func and got applied
assert U.allclose(new[0], th.full((5,), 4))
assert F.allclose(new[0], F.full_1d(5, 4, dtype=F.float32))
# non-0deg check
assert U.allclose(new[1], th.sum(old, 0) * 2)
assert F.allclose(new[1], F.sum(old, 0) * 2)
# test#2: recv only 0deg node
old = th.randn((2, 5))
old = F.randn((2, 5))
g.ndata['h'] = old
g.ndata['h1'] = th.full((2, 5), -1) # this is necessary
g.ndata['h1'] = F.full((2, 5), -1, F.int64) # this is necessary
g.send((0, 1))
g.recv(0)
new = g.ndata.pop('h1')
# 0deg check: fallback to apply
assert U.allclose(new[0], th.full((5,), -2))
assert F.allclose(new[0], F.full_1d(5, -2, F.int64))
# non-0deg check: not changed
assert U.allclose(new[1], th.full((5,), -1))
assert F.allclose(new[1], F.full_1d(5, -1, F.int64))
def test_update_all_0deg():
# test#1
......@@ -428,21 +430,21 @@ def test_update_all_0deg():
def _message(edges):
return {'m' : edges.src['h']}
def _reduce(nodes):
return {'h' : nodes.data['h'] + nodes.mailbox['m'].sum(1)}
return {'h' : nodes.data['h'] + F.sum(nodes.mailbox['m'], 1)}
def _apply(nodes):
return {'h' : nodes.data['h'] * 2}
def _init2(shape, dtype, ctx, ids):
return 2 + th.zeros(shape, dtype=dtype, device=ctx)
return 2 + F.zeros(shape, dtype, ctx)
g.set_n_initializer(_init2, 'h')
old_repr = th.randn(5, 5)
old_repr = F.randn((5, 5))
g.ndata['h'] = old_repr
g.update_all(_message, _reduce, _apply)
new_repr = g.ndata['h']
# the first row of the new_repr should be the sum of all the node
# features; while the 0-deg nodes should be initialized by the
# initializer and applied with UDF.
assert U.allclose(new_repr[1:], 2*(2+th.zeros((4,5))))
assert U.allclose(new_repr[0], 2 * old_repr.sum(0))
assert F.allclose(new_repr[1:], 2*(2+F.zeros((4,5))))
assert F.allclose(new_repr[0], 2 * F.sum(old_repr, 0))
# test#2: graph with no edge
g = DGLGraph()
......@@ -452,7 +454,7 @@ def test_update_all_0deg():
g.update_all(_message, _reduce, _apply)
new_repr = g.ndata['h']
# should fallback to apply
assert U.allclose(new_repr, 2*old_repr)
assert F.allclose(new_repr, 2*old_repr)
def test_pull_0deg():
g = DGLGraph()
......@@ -461,34 +463,34 @@ def test_pull_0deg():
def _message(edges):
return {'m' : edges.src['h']}
def _reduce(nodes):
return {'h' : nodes.data['h'] + nodes.mailbox['m'].sum(1)}
return {'h' : nodes.data['h'] + F.sum(nodes.mailbox['m'], 1)}
def _apply(nodes):
return {'h' : nodes.data['h'] * 2}
def _init2(shape, dtype, ctx, ids):
return 2 + th.zeros(shape, dtype=dtype, device=ctx)
return 2 + F.zeros(shape, dtype, ctx)
g.register_message_func(_message)
g.register_reduce_func(_reduce)
g.register_apply_node_func(_apply)
g.set_n_initializer(_init2, 'h')
# test#1: pull both 0deg and non-0deg nodes
old = th.randn((2, 5))
old = F.randn((2, 5))
g.ndata['h'] = old
g.pull([0, 1])
new = g.ndata.pop('h')
# 0deg check: initialized with the func and got applied
assert U.allclose(new[0], th.full((5,), 4))
assert F.allclose(new[0], F.full_1d(5, 4, dtype=F.float32))
# non-0deg check
assert U.allclose(new[1], th.sum(old, 0) * 2)
assert F.allclose(new[1], F.sum(old, 0) * 2)
# test#2: pull only 0deg node
old = th.randn((2, 5))
old = F.randn((2, 5))
g.ndata['h'] = old
g.pull(0)
new = g.ndata.pop('h')
# 0deg check: fallback to apply
assert U.allclose(new[0], 2*old[0])
assert F.allclose(new[0], 2*old[0])
# non-0deg check: not touched
assert U.allclose(new[1], old[1])
assert F.allclose(new[1], old[1])
def test_send_multigraph():
g = DGLGraph(multigraph=True)
......@@ -503,60 +505,60 @@ def test_send_multigraph():
def _message_b(edges):
return {'a': edges.data['a'] * 3}
def _reduce(nodes):
return {'a': nodes.mailbox['a'].max(1)[0]}
return {'a': F.max(nodes.mailbox['a'], 1)}
def answer(*args):
return th.stack(args, 0).max(0)[0]
return F.max(F.stack(args, 0), 0)
# send by eid
old_repr = th.randn(4, 5)
g.ndata['a'] = th.zeros(3, 5)
old_repr = F.randn((4, 5))
g.ndata['a'] = F.zeros((3, 5))
g.edata['a'] = old_repr
g.send([0, 2], message_func=_message_a)
g.recv(1, _reduce)
new_repr = g.ndata['a']
assert U.allclose(new_repr[1], answer(old_repr[0], old_repr[2]))
assert F.allclose(new_repr[1], answer(old_repr[0], old_repr[2]))
g.ndata['a'] = th.zeros(3, 5)
g.ndata['a'] = F.zeros((3, 5))
g.edata['a'] = old_repr
g.send([0, 2, 3], message_func=_message_a)
g.recv(1, _reduce)
new_repr = g.ndata['a']
assert U.allclose(new_repr[1], answer(old_repr[0], old_repr[2], old_repr[3]))
assert F.allclose(new_repr[1], answer(old_repr[0], old_repr[2], old_repr[3]))
# send on multigraph
g.ndata['a'] = th.zeros(3, 5)
g.ndata['a'] = F.zeros((3, 5))
g.edata['a'] = old_repr
g.send(([0, 2], [1, 1]), _message_a)
g.recv(1, _reduce)
new_repr = g.ndata['a']
assert U.allclose(new_repr[1], old_repr.max(0)[0])
assert F.allclose(new_repr[1], F.max(old_repr, 0))
# consecutive send and send_on
g.ndata['a'] = th.zeros(3, 5)
g.ndata['a'] = F.zeros((3, 5))
g.edata['a'] = old_repr
g.send((2, 1), _message_a)
g.send([0, 1], message_func=_message_b)
g.recv(1, _reduce)
new_repr = g.ndata['a']
assert U.allclose(new_repr[1], answer(old_repr[0] * 3, old_repr[1] * 3, old_repr[3]))
assert F.allclose(new_repr[1], answer(old_repr[0] * 3, old_repr[1] * 3, old_repr[3]))
# consecutive send_on
g.ndata['a'] = th.zeros(3, 5)
g.ndata['a'] = F.zeros((3, 5))
g.edata['a'] = old_repr
g.send(0, message_func=_message_a)
g.send(1, message_func=_message_b)
g.recv(1, _reduce)
new_repr = g.ndata['a']
assert U.allclose(new_repr[1], answer(old_repr[0], old_repr[1] * 3))
assert F.allclose(new_repr[1], answer(old_repr[0], old_repr[1] * 3))
# send_and_recv_on
g.ndata['a'] = th.zeros(3, 5)
g.ndata['a'] = F.zeros((3, 5))
g.edata['a'] = old_repr
g.send_and_recv([0, 2, 3], message_func=_message_a, reduce_func=_reduce)
new_repr = g.ndata['a']
assert U.allclose(new_repr[1], answer(old_repr[0], old_repr[2], old_repr[3]))
assert U.allclose(new_repr[[0, 2]], th.zeros(2, 5))
assert F.allclose(new_repr[1], answer(old_repr[0], old_repr[2], old_repr[3]))
assert F.allclose(new_repr[[0, 2]], F.zeros((2, 5)))
def test_dynamic_addition():
N = 3
......@@ -566,28 +568,28 @@ def test_dynamic_addition():
# Test node addition
g.add_nodes(N)
g.ndata.update({'h1': th.randn(N, D),
'h2': th.randn(N, D)})
g.ndata.update({'h1': F.randn((N, D)),
'h2': F.randn((N, D))})
g.add_nodes(3)
assert g.ndata['h1'].shape[0] == g.ndata['h2'].shape[0] == N + 3
# Test edge addition
g.add_edge(0, 1)
g.add_edge(1, 0)
g.edata.update({'h1': th.randn(2, D),
'h2': th.randn(2, D)})
g.edata.update({'h1': F.randn((2, D)),
'h2': F.randn((2, D))})
assert g.edata['h1'].shape[0] == g.edata['h2'].shape[0] == 2
g.add_edges([0, 2], [2, 0])
g.edata['h1'] = th.randn(4, D)
g.edata['h1'] = F.randn((4, D))
assert g.edata['h1'].shape[0] == g.edata['h2'].shape[0] == 4
g.add_edge(1, 2)
g.edges[4].data['h1'] = th.randn(1, D)
g.edges[4].data['h1'] = F.randn((1, D))
assert g.edata['h1'].shape[0] == g.edata['h2'].shape[0] == 5
# test add edge with part of the features
g.add_edge(2, 1, {'h1': th.randn(1, D)})
g.add_edge(2, 1, {'h1': F.randn((1, D))})
assert len(g.edata['h1']) == len(g.edata['h2'])
......@@ -597,9 +599,9 @@ def test_repr():
G.add_edge(0, 1)
repr_string = G.__repr__()
print(repr_string)
G.ndata['x'] = th.zeros((10, 5))
G.ndata['x'] = F.zeros((10, 5))
G.add_edges([0, 1], 2)
G.edata['y'] = th.zeros((3, 4))
G.edata['y'] = F.zeros((3, 4))
repr_string = G.__repr__()
print(repr_string)
......
import dgl
import torch as th
import utils as U
from dgl import DGLGraph
import backend as F
def tree1():
"""Generate a tree
......@@ -17,8 +17,8 @@ def tree1():
g.add_edge(4, 1)
g.add_edge(1, 0)
g.add_edge(2, 0)
g.ndata['h'] = th.Tensor([0, 1, 2, 3, 4])
g.edata['h'] = th.randn(4, 10)
g.ndata['h'] = F.tensor([0, 1, 2, 3, 4])
g.edata['h'] = F.randn((4, 10))
return g
def tree2():
......@@ -36,8 +36,8 @@ def tree2():
g.add_edge(0, 4)
g.add_edge(4, 1)
g.add_edge(3, 1)
g.ndata['h'] = th.Tensor([0, 1, 2, 3, 4])
g.edata['h'] = th.randn(4, 10)
g.ndata['h'] = F.tensor([0, 1, 2, 3, 4])
g.edata['h'] = F.randn((4, 10))
return g
def test_batch_unbatch():
......@@ -52,10 +52,10 @@ def test_batch_unbatch():
assert bg.batch_num_edges == [4, 4]
tt1, tt2 = dgl.unbatch(bg)
assert U.allclose(t1.ndata['h'], tt1.ndata['h'])
assert U.allclose(t1.edata['h'], tt1.edata['h'])
assert U.allclose(t2.ndata['h'], tt2.ndata['h'])
assert U.allclose(t2.edata['h'], tt2.edata['h'])
assert F.allclose(t1.ndata['h'], tt1.ndata['h'])
assert F.allclose(t1.edata['h'], tt1.edata['h'])
assert F.allclose(t2.ndata['h'], tt2.ndata['h'])
assert F.allclose(t2.edata['h'], tt2.edata['h'])
def test_batch_unbatch1():
t1 = tree1()
......@@ -69,12 +69,12 @@ def test_batch_unbatch1():
assert b2.batch_num_edges == [4, 4, 4]
s1, s2, s3 = dgl.unbatch(b2)
assert U.allclose(t2.ndata['h'], s1.ndata['h'])
assert U.allclose(t2.edata['h'], s1.edata['h'])
assert U.allclose(t1.ndata['h'], s2.ndata['h'])
assert U.allclose(t1.edata['h'], s2.edata['h'])
assert U.allclose(t2.ndata['h'], s3.ndata['h'])
assert U.allclose(t2.edata['h'], s3.edata['h'])
assert F.allclose(t2.ndata['h'], s1.ndata['h'])
assert F.allclose(t2.edata['h'], s1.edata['h'])
assert F.allclose(t1.ndata['h'], s2.ndata['h'])
assert F.allclose(t1.edata['h'], s2.edata['h'])
assert F.allclose(t2.ndata['h'], s3.ndata['h'])
assert F.allclose(t2.edata['h'], s3.edata['h'])
def test_batch_unbatch2():
# test setting/getting features after batch
......@@ -85,10 +85,10 @@ def test_batch_unbatch2():
b.add_nodes(3)
b.add_edges(0, [1, 2])
c = dgl.batch([a, b])
c.ndata['h'] = th.ones(7, 1)
c.edata['w'] = th.ones(5, 1)
assert U.allclose(c.ndata['h'], th.ones(7, 1))
assert U.allclose(c.edata['w'], th.ones(5, 1))
c.ndata['h'] = F.ones((7, 1))
c.edata['w'] = F.ones((5, 1))
assert F.allclose(c.ndata['h'], F.ones((7, 1)))
assert F.allclose(c.edata['w'], F.ones((5, 1)))
def test_batch_send_then_recv():
t1 = tree1()
......@@ -96,7 +96,7 @@ def test_batch_send_then_recv():
bg = dgl.batch([t1, t2])
bg.register_message_func(lambda edges: {'m' : edges.src['h']})
bg.register_reduce_func(lambda nodes: {'h' : th.sum(nodes.mailbox['m'], 1)})
bg.register_reduce_func(lambda nodes: {'h' : F.sum(nodes.mailbox['m'], 1)})
u = [3, 4, 2 + 5, 0 + 5]
v = [1, 1, 4 + 5, 4 + 5]
......@@ -113,7 +113,7 @@ def test_batch_send_and_recv():
bg = dgl.batch([t1, t2])
bg.register_message_func(lambda edges: {'m' : edges.src['h']})
bg.register_reduce_func(lambda nodes: {'h' : th.sum(nodes.mailbox['m'], 1)})
bg.register_reduce_func(lambda nodes: {'h' : F.sum(nodes.mailbox['m'], 1)})
u = [3, 4, 2 + 5, 0 + 5]
v = [1, 1, 4 + 5, 4 + 5]
......@@ -129,7 +129,7 @@ def test_batch_propagate():
bg = dgl.batch([t1, t2])
bg.register_message_func(lambda edges: {'m' : edges.src['h']})
bg.register_reduce_func(lambda nodes: {'h' : th.sum(nodes.mailbox['m'], 1)})
bg.register_reduce_func(lambda nodes: {'h' : F.sum(nodes.mailbox['m'], 1)})
# get leaves.
order = []
......@@ -154,17 +154,17 @@ def test_batched_edge_ordering():
g1 = dgl.DGLGraph()
g1.add_nodes(6)
g1.add_edges([4, 4, 2, 2, 0], [5, 3, 3, 1, 1])
e1 = th.randn(5, 10)
e1 = F.randn((5, 10))
g1.edata['h'] = e1
g2 = dgl.DGLGraph()
g2.add_nodes(6)
g2.add_edges([0, 1 ,2 ,5, 4 ,5], [1, 2, 3, 4, 3, 0])
e2 = th.randn(6, 10)
e2 = F.randn((6, 10))
g2.edata['h'] = e2
g = dgl.batch([g1, g2])
r1 = g.edata['h'][g.edge_id(4, 5)]
r2 = g1.edata['h'][g1.edge_id(4, 5)]
assert th.equal(r1, r2)
assert F.array_equal(r1, r2)
def test_batch_no_edge():
g1 = dgl.DGLGraph()
......
import torch as th
import numpy as np
from dgl.graph import DGLGraph
import utils as U
import backend as F
def test_filter():
g = DGLGraph()
g.add_nodes(4)
g.add_edges([0,1,2,3], [1,2,3,0])
n_repr = th.zeros(4, 5)
e_repr = th.zeros(4, 5)
n_repr = F.zeros((4, 5))
e_repr = F.zeros((4, 5))
n_repr[[1, 3]] = 1
e_repr[[1, 3]] = 1
......@@ -17,23 +16,23 @@ def test_filter():
g.edata['a'] = e_repr
def predicate(r):
return r.data['a'].max(1)[0] > 0
return F.max(r.data['a'], 1) > 0
# full node filter
n_idx = g.filter_nodes(predicate)
assert set(n_idx.numpy()) == {1, 3}
assert set(F.zerocopy_to_numpy(n_idx)) == {1, 3}
# partial node filter
n_idx = g.filter_nodes(predicate, [0, 1])
assert set(n_idx.numpy()) == {1}
assert set(F.zerocopy_to_numpy(n_idx)) == {1}
# full edge filter
e_idx = g.filter_edges(predicate)
assert set(e_idx.numpy()) == {1, 3}
assert set(F.zerocopy_to_numpy(e_idx)) == {1, 3}
# partial edge filter
e_idx = g.filter_edges(predicate, [0, 1])
assert set(e_idx.numpy()) == {1}
assert set(F.zerocopy_to_numpy(e_idx)) == {1}
if __name__ == '__main__':
......
import torch as th
from torch.autograd import Variable
import numpy as np
from dgl.frame import Frame, FrameRef
from dgl.utils import Index, toindex
import utils as U
import backend as F
N = 10
D = 5
......@@ -16,9 +14,13 @@ def check_fail(fn):
return True
def create_test_data(grad=False):
c1 = Variable(th.randn(N, D), requires_grad=grad)
c2 = Variable(th.randn(N, D), requires_grad=grad)
c3 = Variable(th.randn(N, D), requires_grad=grad)
c1 = F.randn((N, D))
c2 = F.randn((N, D))
c3 = F.randn((N, D))
if grad:
c1 = F.attach_grad(c1)
c2 = F.attach_grad(c2)
c3 = F.attach_grad(c3)
return {'a1' : c1, 'a2' : c2, 'a3' : c3}
def test_create():
......@@ -44,12 +46,12 @@ def test_column1():
f = Frame(data)
assert f.num_rows == N
assert len(f) == 3
assert U.allclose(f['a1'].data, data['a1'].data)
assert F.allclose(f['a1'].data, data['a1'])
f['a1'] = data['a2']
assert U.allclose(f['a2'].data, data['a2'].data)
assert F.allclose(f['a2'].data, data['a2'])
# add a different length column should fail
def failed_add_col():
f['a4'] = th.zeros([N+1, D])
f['a4'] = F.zeros([N+1, D])
assert check_fail(failed_add_col)
# delete all the columns
del f['a1']
......@@ -64,14 +66,14 @@ def test_column2():
f = FrameRef(data, toindex([3, 4, 5, 6, 7]))
assert f.num_rows == 5
assert len(f) == 3
assert U.allclose(f['a1'], data['a1'].data[3:8])
assert F.allclose(f['a1'], F.narrow_row(data['a1'].data, 3, 8))
# set column should reflect on the referenced data
f['a1'] = th.zeros([5, D])
assert U.allclose(data['a1'].data[3:8], th.zeros([5, D]))
f['a1'] = F.zeros([5, D])
assert F.allclose(F.narrow_row(data['a1'].data, 3, 8), F.zeros([5, D]))
# add new partial column should fail with error initializer
f.set_initializer(lambda shape, dtype : assert_(False))
def failed_add_col():
f['a4'] = th.ones([5, D])
f['a4'] = F.ones([5, D])
assert check_fail(failed_add_col)
def test_append1():
......@@ -84,11 +86,11 @@ def test_append1():
f1.append(f2)
assert f1.num_rows == 2 * N
c1 = f1['a1']
assert c1.data.shape == (2 * N, D)
truth = th.cat([data['a1'], data['a1']])
assert U.allclose(truth, c1.data)
assert tuple(F.shape(c1.data)) == (2 * N, D)
truth = F.cat([data['a1'], data['a1']], 0)
assert F.allclose(truth, c1.data)
# append dict of different length columns should fail
f3 = {'a1' : th.zeros((3, D)), 'a2' : th.zeros((3, D)), 'a3' : th.zeros((2, D))}
f3 = {'a1' : F.zeros((3, D)), 'a2' : F.zeros((3, D)), 'a3' : F.zeros((2, D))}
def failed_append():
f1.append(f3)
assert check_fail(failed_append)
......@@ -111,25 +113,25 @@ def test_append2():
assert not f.is_span_whole_column()
assert f.num_rows == 3 * N
new_idx = list(range(N)) + list(range(2*N, 4*N))
assert th.all(f._index.tousertensor() == th.tensor(new_idx, dtype=th.int64))
assert F.array_equal(f._index.tousertensor(), F.tensor(new_idx, dtype=F.int64))
assert data.num_rows == 4 * N
def test_append3():
# test append on empty frame
f = Frame(num_rows=5)
data = {'h' : th.ones((3, 2))}
data = {'h' : F.ones((3, 2))}
f.append(data)
assert f.num_rows == 8
ans = th.cat([th.zeros((5, 2)), th.ones((3, 2))], dim=0)
assert U.allclose(f['h'].data, ans)
ans = F.cat([F.zeros((5, 2)), F.ones((3, 2))], 0)
assert F.allclose(f['h'].data, ans)
# test append with new column
data = {'h' : 2 * th.ones((3, 2)), 'w' : 2 * th.ones((3, 2))}
data = {'h' : 2 * F.ones((3, 2)), 'w' : 2 * F.ones((3, 2))}
f.append(data)
assert f.num_rows == 11
ans1 = th.cat([ans, 2 * th.ones((3, 2))], 0)
ans2 = th.cat([th.zeros((8, 2)), 2 * th.ones((3, 2))], 0)
assert U.allclose(f['h'].data, ans1)
assert U.allclose(f['w'].data, ans2)
ans1 = F.cat([ans, 2 * F.ones((3, 2))], 0)
ans2 = F.cat([F.zeros((8, 2)), 2 * F.ones((3, 2))], 0)
assert F.allclose(f['h'].data, ans1)
assert F.allclose(f['w'].data, ans2)
def test_row1():
# test row getter/setter
......@@ -138,32 +140,32 @@ def test_row1():
# getter
# test non-duplicate keys
rowid = Index(th.tensor([0, 2]))
rowid = Index(F.tensor([0, 2]))
rows = f[rowid]
for k, v in rows.items():
assert v.shape == (len(rowid), D)
assert U.allclose(v, data[k][rowid])
assert tuple(F.shape(v)) == (len(rowid), D)
assert F.allclose(v, F.gather_row(data[k], rowid.tousertensor()))
# test duplicate keys
rowid = Index(th.tensor([8, 2, 2, 1]))
rowid = Index(F.tensor([8, 2, 2, 1]))
rows = f[rowid]
for k, v in rows.items():
assert v.shape == (len(rowid), D)
assert U.allclose(v, data[k][rowid])
assert tuple(F.shape(v)) == (len(rowid), D)
assert F.allclose(v, F.gather_row(data[k], rowid.tousertensor()))
# setter
rowid = Index(th.tensor([0, 2, 4]))
vals = {'a1' : th.zeros((len(rowid), D)),
'a2' : th.zeros((len(rowid), D)),
'a3' : th.zeros((len(rowid), D)),
rowid = Index(F.tensor([0, 2, 4]))
vals = {'a1' : F.zeros((len(rowid), D)),
'a2' : F.zeros((len(rowid), D)),
'a3' : F.zeros((len(rowid), D)),
}
f[rowid] = vals
for k, v in f[rowid].items():
assert U.allclose(v, th.zeros((len(rowid), D)))
assert F.allclose(v, F.zeros((len(rowid), D)))
# setting rows with new column should raise error with error initializer
f.set_initializer(lambda shape, dtype : assert_(False))
def failed_update_rows():
vals['a4'] = th.ones((len(rowid), D))
vals['a4'] = F.ones((len(rowid), D))
f[rowid] = vals
assert check_fail(failed_update_rows)
......@@ -172,34 +174,41 @@ def test_row2():
data = create_test_data(grad=True)
f = FrameRef(Frame(data))
with F.record_grad():
# getter
c1 = f['a1']
# test non-duplicate keys
rowid = Index(th.tensor([0, 2]))
rowid = Index(F.tensor([0, 2]))
rows = f[rowid]
rows['a1'].backward(th.ones((len(rowid), D)))
assert U.allclose(c1.grad[:,0], th.tensor([1., 0., 1., 0., 0., 0., 0., 0., 0., 0.]))
c1.grad.data.zero_()
y = rows['a1']
F.backward(y, F.ones((len(rowid), D)))
assert F.allclose(F.grad(c1)[:,0], F.tensor([1., 0., 1., 0., 0., 0., 0., 0., 0., 0.]))
f['a1'] = F.attach_grad(f['a1'])
with F.record_grad():
c1 = f['a1']
# test duplicate keys
rowid = Index(th.tensor([8, 2, 2, 1]))
rowid = Index(F.tensor([8, 2, 2, 1]))
rows = f[rowid]
rows['a1'].backward(th.ones((len(rowid), D)))
assert U.allclose(c1.grad[:,0], th.tensor([0., 1., 2., 0., 0., 0., 0., 0., 1., 0.]))
c1.grad.data.zero_()
y = rows['a1']
F.backward(y, F.ones((len(rowid), D)))
assert F.allclose(F.grad(c1)[:,0], F.tensor([0., 1., 2., 0., 0., 0., 0., 0., 1., 0.]))
f['a1'] = F.attach_grad(f['a1'])
with F.record_grad():
# setter
c1 = f['a1']
rowid = Index(th.tensor([0, 2, 4]))
vals = {'a1' : Variable(th.zeros((len(rowid), D)), requires_grad=True),
'a2' : Variable(th.zeros((len(rowid), D)), requires_grad=True),
'a3' : Variable(th.zeros((len(rowid), D)), requires_grad=True),
rowid = Index(F.tensor([0, 2, 4]))
vals = {'a1' : F.attach_grad(F.zeros((len(rowid), D))),
'a2' : F.attach_grad(F.zeros((len(rowid), D))),
'a3' : F.attach_grad(F.zeros((len(rowid), D))),
}
f[rowid] = vals
c11 = f['a1']
c11.backward(th.ones((N, D)))
assert U.allclose(c1.grad[:,0], th.tensor([0., 1., 0., 1., 0., 1., 1., 1., 1., 1.]))
assert U.allclose(vals['a1'].grad, th.ones((len(rowid), D)))
assert vals['a2'].grad is None
F.backward(c11, F.ones((N, D)))
assert F.allclose(F.grad(c1)[:,0], F.tensor([0., 1., 0., 1., 0., 1., 1., 1., 1., 1.]))
assert F.allclose(F.grad(vals['a1']), F.ones((len(rowid), D)))
assert F.is_no_grad(vals['a2'])
def test_row3():
# test row delete
......@@ -208,7 +217,7 @@ def test_row3():
assert f.is_contiguous()
assert f.is_span_whole_column()
assert f.num_rows == N
del f[toindex(th.tensor([2, 3]))]
del f[toindex(F.tensor([2, 3]))]
assert not f.is_contiguous()
assert not f.is_span_whole_column()
# delete is lazy: only reflect on the ref while the
......@@ -220,16 +229,16 @@ def test_row3():
newidx.pop(2)
newidx = toindex(newidx)
for k, v in f.items():
assert U.allclose(v, data[k][newidx])
assert F.allclose(v, data[k][newidx])
def test_row4():
# test updating rows on an empty frame that has a preset num_rows
f = FrameRef(Frame(num_rows=5))
rowid = Index(th.tensor([0, 2, 4]))
f[rowid] = {'h' : th.ones((3, 2))}
ans = th.zeros((5, 2))
ans[th.tensor([0, 2, 4])] = th.ones((3, 2))
assert U.allclose(f['h'], ans)
rowid = Index(F.tensor([0, 2, 4]))
f[rowid] = {'h' : F.ones((3, 2))}
ans = F.zeros((5, 2))
ans[F.tensor([0, 2, 4])] = F.ones((3, 2))
assert F.allclose(f['h'], ans)
def test_sharing():
data = Frame(create_test_data())
......@@ -237,26 +246,26 @@ def test_sharing():
f2 = FrameRef(data, index=toindex([2, 3, 4, 5, 6]))
# test read
for k, v in f1.items():
assert U.allclose(data[k].data[0:4], v)
assert F.allclose(F.narrow_row(data[k].data, 0, 4), v)
for k, v in f2.items():
assert U.allclose(data[k].data[2:7], v)
f2_a1 = f2['a1'].data
assert F.allclose(F.narrow_row(data[k].data, 2, 7), v)
f2_a1 = f2['a1']
# test write
# updates to one ref's own rows should not be seen by the other.
f1[Index(th.tensor([0, 1]))] = {
'a1' : th.zeros([2, D]),
'a2' : th.zeros([2, D]),
'a3' : th.zeros([2, D]),
f1[Index(F.tensor([0, 1]))] = {
'a1' : F.zeros([2, D]),
'a2' : F.zeros([2, D]),
'a3' : F.zeros([2, D]),
}
assert U.allclose(f2['a1'], f2_a1)
assert F.allclose(f2['a1'], f2_a1)
# updates to the shared rows should be seen by the other.
f1[Index(th.tensor([2, 3]))] = {
'a1' : th.ones([2, D]),
'a2' : th.ones([2, D]),
'a3' : th.ones([2, D]),
f1[Index(F.tensor([2, 3]))] = {
'a1' : F.ones([2, D]),
'a2' : F.ones([2, D]),
'a3' : F.ones([2, D]),
}
f2_a1[0:2] = th.ones([2, D])
assert U.allclose(f2['a1'], f2_a1)
F.narrow_row_set(f2_a1, 0, 2, F.ones([2, D]))
assert F.allclose(f2['a1'], f2_a1)
def test_slicing():
data = Frame(create_test_data(grad=True))
......@@ -264,81 +273,81 @@ def test_slicing():
f2 = FrameRef(data, index=toindex(slice(3, 8)))
# test read
for k, v in f1.items():
assert U.allclose(data[k].data[1:5], v)
f2_a1 = f2['a1'].data
assert F.allclose(F.narrow_row(data[k].data, 1, 5), v)
f2_a1 = f2['a1'] # is a tensor
# test write
f1[Index(th.tensor([0, 1]))] = {
'a1': th.zeros([2, D]),
'a2': th.zeros([2, D]),
'a3': th.zeros([2, D]),
f1[Index(F.tensor([0, 1]))] = {
'a1': F.zeros([2, D]),
'a2': F.zeros([2, D]),
'a3': F.zeros([2, D]),
}
assert U.allclose(f2['a1'], f2_a1)
assert F.allclose(f2['a1'], f2_a1)
f1[Index(th.tensor([2, 3]))] = {
'a1': th.ones([2, D]),
'a2': th.ones([2, D]),
'a3': th.ones([2, D]),
f1[Index(F.tensor([2, 3]))] = {
'a1': F.ones([2, D]),
'a2': F.ones([2, D]),
'a3': F.ones([2, D]),
}
f2_a1[toindex(slice(0,2))] = 1
assert U.allclose(f2['a1'], f2_a1)
F.narrow_row_set(f2_a1, 0, 2, 1)
assert F.allclose(f2['a1'], f2_a1)
f1[toindex(slice(2,4))] = {
'a1': th.zeros([2, D]),
'a2': th.zeros([2, D]),
'a3': th.zeros([2, D]),
f1[toindex(slice(2, 4))] = {
'a1': F.zeros([2, D]),
'a2': F.zeros([2, D]),
'a3': F.zeros([2, D]),
}
f2_a1[toindex(slice(0,2))] = 0
assert U.allclose(f2['a1'], f2_a1)
F.narrow_row_set(f2_a1, 0, 2, 0)
assert F.allclose(f2['a1'], f2_a1)
def test_add_rows():
data = Frame()
f1 = FrameRef(data)
f1.add_rows(4)
x = th.randn(1, 4)
f1[Index(th.tensor([0]))] = {'x': x}
ans = th.cat([x, th.zeros(3, 4)])
assert U.allclose(f1['x'], ans)
x = F.randn((1, 4))
f1[Index(F.tensor([0]))] = {'x': x}
ans = F.cat([x, F.zeros((3, 4))], 0)
assert F.allclose(f1['x'], ans)
f1.add_rows(4)
f1[toindex(slice(4,8))] = {'x': th.ones(4, 4), 'y': th.ones(4, 5)}
ans = th.cat([ans, th.ones(4, 4)])
assert U.allclose(f1['x'], ans)
ans = th.cat([th.zeros(4, 5), th.ones(4, 5)])
assert U.allclose(f1['y'], ans)
f1[toindex(slice(4, 8))] = {'x': F.ones((4, 4)), 'y': F.ones((4, 5))}
ans = F.cat([ans, F.ones((4, 4))], 0)
assert F.allclose(f1['x'], ans)
ans = F.cat([F.zeros((4, 5)), F.ones((4, 5))], 0)
assert F.allclose(f1['y'], ans)
def test_inplace():
f = FrameRef(Frame(create_test_data()))
print(f.schemes)
a1addr = f['a1'].data.data_ptr()
a2addr = f['a2'].data.data_ptr()
a3addr = f['a3'].data.data_ptr()
a1addr = id(f['a1'])
a2addr = id(f['a2'])
a3addr = id(f['a3'])
# column updates are always out-of-place
f['a1'] = th.ones((N, D))
newa1addr = f['a1'].data.data_ptr()
f['a1'] = F.ones((N, D))
newa1addr = id(f['a1'])
assert a1addr != newa1addr
a1addr = newa1addr
# full row update that becomes column update
f[toindex(slice(0, N))] = {'a1' : th.ones((N, D))}
assert f['a1'].data.data_ptr() != a1addr
f[toindex(slice(0, N))] = {'a1' : F.ones((N, D))}
assert id(f['a1']) != a1addr
# row update (outplace) w/ slice
f[toindex(slice(1, 4))] = {'a2' : th.ones((3, D))}
newa2addr = f['a2'].data.data_ptr()
f[toindex(slice(1, 4))] = {'a2' : F.ones((3, D))}
newa2addr = id(f['a2'])
assert a2addr != newa2addr
a2addr = newa2addr
# row update (outplace) w/ list
f[toindex([1, 3, 5])] = {'a2' : th.ones((3, D))}
newa2addr = f['a2'].data.data_ptr()
f[toindex([1, 3, 5])] = {'a2' : F.ones((3, D))}
newa2addr = id(f['a2'])
assert a2addr != newa2addr
a2addr = newa2addr
# row update (inplace) w/ slice
f.update_data(toindex(slice(1, 4)), {'a2' : th.ones((3, D))}, True)
newa2addr = f['a2'].data.data_ptr()
f.update_data(toindex(slice(1, 4)), {'a2' : F.ones((3, D))}, True)
newa2addr = id(f['a2'])
assert a2addr == newa2addr
# row update (inplace) w/ list
f.update_data(toindex([1, 3, 5]), {'a2' : th.ones((3, D))}, True)
newa2addr = f['a2'].data.data_ptr()
f.update_data(toindex([1, 3, 5]), {'a2' : F.ones((3, D))}, True)
newa2addr = id(f['a2'])
assert a2addr == newa2addr
if __name__ == '__main__':
......
import torch as th
import dgl
import dgl.function as fn
import utils as U
import backend as F
def generate_graph():
g = dgl.DGLGraph()
g.add_nodes(10) # 10 nodes.
h = th.arange(1, 11, dtype=th.float)
h = F.astype(F.arange(1, 11), F.float32)
g.ndata['h'] = h
# create a graph where 0 is the source and 9 is the sink
for i in range(1, 9):
......@@ -14,13 +13,13 @@ def generate_graph():
g.add_edge(i, 9)
# add a back flow from 9 to 0
g.add_edge(9, 0)
h = th.tensor([1., 2., 1., 3., 1., 4., 1., 5., 1., 6.,\
h = F.tensor([1., 2., 1., 3., 1., 4., 1., 5., 1., 6.,\
1., 7., 1., 8., 1., 9., 10.])
g.edata['h'] = h
return g
def reducer_both(nodes):
return {'h' : th.sum(nodes.mailbox['m'], 1)}
return {'h' : F.sum(nodes.mailbox['m'], 1)}
def test_copy_src():
# copy_src with both fields
......@@ -28,8 +27,8 @@ def test_copy_src():
g.register_message_func(fn.copy_src(src='h', out='m'))
g.register_reduce_func(reducer_both)
g.update_all()
assert U.allclose(g.ndata['h'],
th.tensor([10., 1., 1., 1., 1., 1., 1., 1., 1., 44.]))
assert F.allclose(g.ndata['h'],
F.tensor([10., 1., 1., 1., 1., 1., 1., 1., 1., 44.]))
def test_copy_edge():
# copy_edge with both fields
......@@ -37,8 +36,8 @@ def test_copy_edge():
g.register_message_func(fn.copy_edge(edge='h', out='m'))
g.register_reduce_func(reducer_both)
g.update_all()
assert U.allclose(g.ndata['h'],
th.tensor([10., 1., 1., 1., 1., 1., 1., 1., 1., 44.]))
assert F.allclose(g.ndata['h'],
F.tensor([10., 1., 1., 1., 1., 1., 1., 1., 1., 44.]))
def test_src_mul_edge():
# src_mul_edge with all fields
......@@ -46,8 +45,8 @@ def test_src_mul_edge():
g.register_message_func(fn.src_mul_edge(src='h', edge='h', out='m'))
g.register_reduce_func(reducer_both)
g.update_all()
assert U.allclose(g.ndata['h'],
th.tensor([100., 1., 1., 1., 1., 1., 1., 1., 1., 284.]))
assert F.allclose(g.ndata['h'],
F.tensor([100., 1., 1., 1., 1., 1., 1., 1., 1., 284.]))
if __name__ == '__main__':
test_copy_src()
......
......@@ -3,29 +3,28 @@ import math
import numpy as np
import scipy.sparse as sp
import networkx as nx
import torch as th
import dgl
import utils as U
import backend as F
def test_graph_creation():
g = dgl.DGLGraph()
# test add nodes with data
g.add_nodes(5)
g.add_nodes(5, {'h' : th.ones((5, 2))})
ans = th.cat([th.zeros(5, 2), th.ones(5, 2)], 0)
U.allclose(ans, g.ndata['h'])
g.ndata['w'] = 2 * th.ones((10, 2))
assert U.allclose(2 * th.ones((10, 2)), g.ndata['w'])
g.add_nodes(5, {'h' : F.ones((5, 2))})
ans = F.cat([F.zeros((5, 2)), F.ones((5, 2))], 0)
assert F.allclose(ans, g.ndata['h'])
g.ndata['w'] = 2 * F.ones((10, 2))
assert F.allclose(2 * F.ones((10, 2)), g.ndata['w'])
# test add edges with data
g.add_edges([2, 3], [3, 4])
g.add_edges([0, 1], [1, 2], {'m' : th.ones((2, 2))})
ans = th.cat([th.zeros(2, 2), th.ones(2, 2)], 0)
assert U.allclose(ans, g.edata['m'])
g.add_edges([0, 1], [1, 2], {'m' : F.ones((2, 2))})
ans = F.cat([F.zeros((2, 2)), F.ones((2, 2))], 0)
assert F.allclose(ans, g.edata['m'])
# test clear and add again
g.clear()
g.add_nodes(5)
g.ndata['h'] = 3 * th.ones((5, 2))
assert U.allclose(3 * th.ones((5, 2)), g.ndata['h'])
g.ndata['h'] = 3 * F.ones((5, 2))
assert F.allclose(3 * F.ones((5, 2)), g.ndata['h'])
def test_create_from_elist():
elist = [(2, 1), (1, 0), (2, 0), (3, 0), (0, 2)]
......@@ -74,21 +73,27 @@ def test_incmat():
g.add_edge(0, 3) # 2
g.add_edge(2, 3) # 3
g.add_edge(1, 1) # 4
assert U.allclose(
g.incidence_matrix('in').to_dense(),
th.tensor([[0., 0., 0., 0., 0.],
inc_in = F.sparse_to_numpy(g.incidence_matrix('in'))
inc_out = F.sparse_to_numpy(g.incidence_matrix('out'))
inc_both = F.sparse_to_numpy(g.incidence_matrix('both'))
print(inc_in)
print(inc_out)
print(inc_both)
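# expected layout: 'in' has a 1 at (dst, eid), 'out' a 1 at (src, eid), and
# 'both' has -1 at (src, eid) and +1 at (dst, eid) with self-loop columns
# (edge 4 here) zeroed out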
assert np.allclose(
inc_in,
np.array([[0., 0., 0., 0., 0.],
[1., 0., 0., 0., 1.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 1., 0.]]))
assert U.allclose(
g.incidence_matrix('out').to_dense(),
th.tensor([[1., 1., 1., 0., 0.],
assert np.allclose(
inc_out,
np.array([[1., 1., 1., 0., 0.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0.]]))
assert U.allclose(
g.incidence_matrix('both').to_dense(),
th.tensor([[-1., -1., -1., 0., 0.],
assert np.allclose(
inc_both,
np.array([[-1., -1., -1., 0., 0.],
[1., 0., 0., 0., 0.],
[0., 1., 0., -1., 0.],
[0., 0., 1., 1., 0.]]))
......
......@@ -2,9 +2,7 @@ import dgl
import dgl.ndarray as nd
from dgl.utils import toindex
import numpy as np
import torch as th
from torch.utils import dlpack
import utils as U
import backend as F
def test_dlpack():
# test dlpack conversion.
......@@ -14,30 +12,34 @@ def test_dlpack():
[0., 0., 0., 0.]])
x = nd.array(np.zeros((3, 4), dtype=np.float32))
dl = x.to_dlpack()
y = dlpack.from_dlpack(dl)
y = F.zerocopy_from_dlpack(dl)
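# y shares memory with x through DLPack, so the write below is visible
# through x as well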
y[0] = 1
print(x)
print(y)
assert np.allclose(x.asnumpy(), ans)
def th2nd():
ans = np.array([[1., 1., 1., 1.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
x = th.zeros((3, 4))
dl = dlpack.to_dlpack(x)
x = F.zeros((3, 4))
dl = F.zerocopy_to_dlpack(x)
y = nd.from_dlpack(dl)
x[0] = 1
print(x)
print(y)
assert np.allclose(y.asnumpy(), ans)
def th2nd_incontiguous():
import dgl.backend as F
x = th.LongTensor([[0, 1], [2, 3]])
x = F.astype(F.tensor([[0, 1], [2, 3]]), F.int64)
ans = np.array([0, 2])
y = x[:2, 0]
# Uncomment this line and comment the one below to observe error
#dl = dlpack.to_dlpack(y)
dl = F.zerocopy_to_dlpack(y)
z = nd.from_dlpack(dl)
print(x)
print(z)
assert np.allclose(z.asnumpy(), ans)
nd2th()
......@@ -50,7 +52,7 @@ def test_index():
data = np.ones((10,), dtype=np.int64) * 10
idx = toindex(data)
y1 = idx.tonumpy()
y2 = idx.tousertensor().numpy()
y2 = F.asnumpy(idx.tousertensor())
y3 = idx.todgltensor().asnumpy()
assert np.allclose(ans, y1)
assert np.allclose(ans, y2)
......@@ -60,17 +62,17 @@ def test_index():
data = [10] * 10
idx = toindex(data)
y1 = idx.tonumpy()
y2 = idx.tousertensor().numpy()
y2 = F.asnumpy(idx.tousertensor())
y3 = idx.todgltensor().asnumpy()
assert np.allclose(ans, y1)
assert np.allclose(ans, y2)
assert np.allclose(ans, y3)
# from torch
data = th.ones((10,), dtype=th.int64) * 10
data = F.ones((10,), dtype=F.int64) * 10
idx = toindex(data)
y1 = idx.tonumpy()
y2 = idx.tousertensor().numpy()
y2 = F.asnumpy(idx.tousertensor())
y3 = idx.todgltensor().asnumpy()
assert np.allclose(ans, y1)
assert np.allclose(ans, y2)
......@@ -80,7 +82,7 @@ def test_index():
data = dgl.ndarray.array(np.ones((10,), dtype=np.int64) * 10)
idx = toindex(data)
y1 = idx.tonumpy()
y2 = idx.tousertensor().numpy()
y2 = F.asnumpy(idx.tousertensor())
y3 = idx.todgltensor().asnumpy()
assert np.allclose(ans, y1)
assert np.allclose(ans, y2)
......