Commit e19cd62e authored by Quan (Andy) Gan's avatar Quan (Andy) Gan Committed by Minjie Wang

[Test] Unify tests for different backends (#333)

* test basics

* batched graph & filter, mxnet filter fix

* frame and function; bugfix

* test graph adj and inc matrices

* fixing start = 0 for mxnet

* test index

* inplace update & line graph

* multi send recv

* more tests

* oops

* more tests

* removing old test files; readonly graphs for mxnet still kept

* modifying test scripts

* adding a placeholder for pytorch to reserve directory

* torch 0.4.1 compat fixes

* moving backend out of compute to avoid nose detection

* tests guide

* mx sparse-to-dense/sparse-to-numpy is buggy

* oops

* contribution guide for unit tests

* printing incmat

* printing dlpack

* small push

* typo

* fixing duplicate entries that cause undefined behavior

* move equal comparison to backend
parent 3edcaa1e
...@@ -4,7 +4,7 @@ Contribution is always welcomed. A good starting place is the roadmap issue, whe
you can find our current milestones. All contributions must go through pull requests
and be reviewed by the committors. See our [contribution guide](https://docs.dgl.ai/contribute.html) for more details.
-Once your contribution is accepted and merged, congratulation, you are now an contributor to the DGL project.
+Once your contribution is accepted and merged, congratulations, you are now a contributor to the DGL project.
We will put your name in the list below and also on our [website](https://www.dgl.ai/ack).
Contributors
......
...@@ -118,7 +118,51 @@ You could test the build by running the following command and see the path of yo
python -c 'import dgl; print(dgl.__path__)'
-TBD by Quan about how to run and write unittests.
+Unit tests
``````````
Currently, we use ``nose`` for unit tests. The tests are organized as follows:
* ``backend``: Additional unified tensor interface for supported frameworks.
The functions there are only used in unit tests, not in DGL itself. Note that
the code there is not a unit test by itself. The additional backend can
be imported with
.. code-block:: python
import backend
The additional backend contains the following files:
- ``backend/backend_unittest.py``: stub file for all additional tensor
functions.
- ``backend/${DGLBACKEND}/__init__.py``: implementations of the stubs
for the backend ``${DGLBACKEND}``.
- ``backend/__init__.py``: when imported, it replaces the stub implementations
with the framework-specific code, depending on the selected backend. It
also changes the signature of some existing backend functions to automatically
select dtypes and contexts.
* ``compute``: All framework-agnostic computation-related unit tests go there.
Anything inside should not depend on a specific tensor library. Tensor
functions not provided in the DGL unified tensor interface (i.e. ``dgl.backend``)
should go into the ``backend`` directory.
* ``${DGLBACKEND}`` (e.g. ``pytorch`` and ``mxnet``): All framework-specific
computation-related unit tests go there.
* ``graph_index``: All unit tests for C++ graph structure implementation go
there. The Python API being tested in this directory, if any, should be
as minimal as possible (usually simple wrappers of corresponding C++
functions).
* ``lint``: Pylint-related files.
* ``scripts``: Automated test scripts for CI.
To run unit tests, run
.. code-block:: bash
sh tests/scripts/task_unit_test.sh <your-backend>
where ``<your-backend>`` can be any supported backend (i.e. ``pytorch`` or ``mxnet``).
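For instance, a framework-agnostic test in ``compute`` can be written once
against the additional backend and run under either framework (a minimal
sketch; ``tensor``, ``sum`` and ``allclose`` are part of the unified
interface):

.. code-block:: python

    import backend as F

    def test_sum():
        x = F.tensor([[1., 2.], [3., 4.]])  # default dtype and context applied
        assert F.allclose(F.sum(x, 0), F.tensor([4., 6.]))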
Building documents
------------------
......
...@@ -9,7 +9,7 @@ The principles of this interface:
* Argument type should be easier to understand.
It is recommended the frameworks implement all the interfaces. However, it is
-also OK to skip some. The generated backend module has an ``is_enbaled`` function
+also OK to skip some. The generated backend module has an ``is_enabled`` function
that returns whether the interface is supported by the framework or not.
"""
...@@ -507,6 +507,22 @@ def zeros(shape, dtype, ctx):
    """
    pass
def zeros_like(input):
"""Create a zero tensor with the same shape, dtype and context of the
given tensor.
Parameters
----------
input : Tensor
The input
Returns
-------
Tensor
The result
"""
pass
def ones(shape, dtype, ctx):
    """Create a one tensor.
...@@ -595,6 +611,54 @@ def unsorted_1d_segment_mean(input, seg_id, n_segs, dim):
    """
    pass
def boolean_mask(input, mask):
"""Selects elements in x according to the given mask from the first
dimension.
Parameters
----------
input : Tensor
The input tensor
mask : Boolean Tensor
The mask
Returns
-------
Tensor
The result
"""
pass
def equal(x, y):
"""Compares whether the elements are equal.
Parameters
----------
x, y : Tensor
The two tensors
Returns
-------
Boolean tensor
The result, with the same shape as input.
"""
pass
def logical_not(input):
"""Perform a logical not operation. Equivalent to np.logical_not
Parameters
----------
input : Tensor
The input
Returns
-------
Tensor
The result
"""
pass
###############################################################################
# Tensor functions used *only* on index tensor
# ----------------
......
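The new stubs above (``zeros_like``, ``boolean_mask``, ``equal``,
``logical_not``) are what make framework-agnostic filtering possible. Their
intended semantics, sketched in NumPy terms for illustration only:

import numpy as np

x = np.array([1, 0, 2, 0])
mask = np.logical_not(np.equal(x, np.zeros_like(x)))  # equal + logical_not + zeros_like
print(x[mask])                                        # boolean_mask -> [1 2]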
...@@ -3,6 +3,7 @@ from __future__ import absolute_import
import numpy as np
import mxnet as mx
import mxnet.ndarray as nd
import numbers
def data_type_dict():
    return {'float16' : np.float16,
...@@ -18,6 +19,13 @@ def cpu():
    return mx.cpu()
def tensor(data, dtype=None):
# MXNet always returns a float tensor regardless of type inside data.
# This is a workaround.
if dtype is None:
if isinstance(data[0], numbers.Integral):
dtype = np.int64
else:
dtype = np.float32
    return nd.array(data, dtype=dtype)
def sparse_matrix(data, index, shape, force_format=False):
...@@ -90,7 +98,7 @@ def cat(seq, dim):
    return nd.concat(*seq, dim=dim)
def stack(seq, dim):
-    return nd.stack(*seq, dim=dim)
+    return nd.stack(*seq, axis=dim)
def split(x, sizes_or_sections, dim):
    if isinstance(sizes_or_sections, list) or isinstance(sizes_or_sections, np.ndarray):
...@@ -103,13 +111,17 @@ def split(x, sizes_or_sections, dim):
    return nd.split(x, sizes_or_sections, axis=dim)
def gather_row(data, row_index):
# MXNet workaround for empty row index
if len(row_index) == 0:
return data[0:0]
    if isinstance(row_index, nd.NDArray):
        return nd.take(data, row_index)
    else:
        return data[row_index,]
def narrow_row(data, start, stop):
-    return nd.slice(data, begin=start, end=stop)
+    return data[start:stop]
def scatter_row(data, row_index, value):
    return mx.nd.contrib.index_copy(data, row_index, value)
...@@ -130,6 +142,9 @@ def reshape(input, shape):
def zeros(shape, dtype, ctx):
    return nd.zeros(shape, dtype=dtype, ctx=ctx)
def zeros_like(input):
return nd.zeros_like(input)
def ones(shape, dtype, ctx):
    return nd.ones(shape, dtype=dtype, ctx=ctx)
...@@ -165,6 +180,15 @@ def unsorted_1d_segment_mean(input, seg_id, n_segs, dim):
    y /= w.reshape((-1,) + (1,) * (y.ndim - 1))
    return y
def boolean_mask(input, mask):
return mx.contrib.nd.boolean_mask(input, mask)
def equal(x, y):
return x == y
def logical_not(input):
return nd.logical_not(input)
def unique(input):
    # TODO: fallback to numpy is unfortunate
    tmp = input.asnumpy()
......
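The dtype workaround in ``tensor`` above exists because plain MXNet infers a
float type even for integer inputs (illustrative; requires MXNet installed):

import mxnet.ndarray as nd

print(nd.array([1, 2, 3]).dtype)                 # numpy.float32 by default
print(nd.array([1, 2, 3], dtype='int64').dtype)  # numpy.int64 with the workaround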
...@@ -118,6 +118,9 @@ def reshape(input, shape):
def zeros(shape, dtype, ctx):
    return th.zeros(shape, dtype=dtype, device=ctx)
def zeros_like(input):
return th.zeros_like(input)
def ones(shape, dtype, ctx):
    return th.ones(shape, dtype=dtype, device=ctx)
...@@ -137,6 +140,15 @@ def unsorted_1d_segment_mean(input, seg_id, n_segs, dim):
    y /= w.view((-1,) + (1,) * (y.dim() - 1))
    return y
def boolean_mask(input, mask):
return input[mask]
def equal(x, y):
return x == y
def logical_not(input):
return ~input
def unique(input):
    return th.unique(input)
...@@ -144,7 +156,8 @@ def full_1d(length, fill_value, dtype, ctx):
    return th.full((length,), fill_value, dtype=dtype, device=ctx)
def nonzero_1d(input):
-    return th.nonzero(input).squeeze()
+    x = th.nonzero(input).squeeze()
+    return x if x.dim() == 1 else x.view(-1)
def sort_1d(input):
    return th.sort(input)
......
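The ``nonzero_1d`` change guards the corner case where the mask has exactly
one nonzero entry, in which ``squeeze()`` collapses the result to a 0-dim
tensor (an illustration, using a uint8 mask for older PyTorch compatibility):

import torch as th

mask = th.tensor([0, 1, 0], dtype=th.uint8)
x = th.nonzero(mask).squeeze()
print(x.dim())     # 0 -- a scalar, not a length-1 vector
print(x.view(-1))  # tensor([1]) -- always 1-D after the fix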
...@@ -138,10 +138,12 @@ class Column(object):
        elif idx.slice_data() is not None:
            # for contiguous indices narrow+concat is usually faster than scatter row
            slc = idx.slice_data()
-            part1 = F.narrow_row(self.data, 0, slc.start)
-            part2 = feats
-            part3 = F.narrow_row(self.data, slc.stop, len(self))
-            self.data = F.cat([part1, part2, part3], dim=0)
+            parts = [feats]
+            if slc.start > 0:
+                parts.insert(0, F.narrow_row(self.data, 0, slc.start))
+            if slc.stop < len(self):
+                parts.append(F.narrow_row(self.data, slc.stop, len(self)))
+            self.data = F.cat(parts, dim=0)
        else:
            idx = idx.tousertensor(F.context(self.data))
            self.data = F.scatter_row(self.data, idx, feats)
......
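The boundary checks added above matter because a slice that starts at row 0
or ends at the last row would otherwise produce an empty narrow, which some
backends cannot concatenate. A standalone sketch of the pattern (plain
PyTorch slicing stands in for F.narrow_row/F.cat here):

import torch as th

def slice_update(data, start, stop, feats):
    # replace rows [start, stop) with feats, skipping empty boundary parts
    parts = [feats]
    if start > 0:
        parts.insert(0, data[0:start])
    if stop < len(data):
        parts.append(data[stop:len(data)])
    return th.cat(parts, dim=0)

print(slice_update(th.zeros(4, 2), 0, 2, th.ones(2, 2)))  # no empty front part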
...@@ -1120,12 +1120,12 @@ class DGLGraph(object):
        if node_attrs is not None:
            for nid, attr in nx_graph.nodes(data=True):
                feat_dict = self.get_n_repr(nid)
-                attr.update({key: feat_dict[key].squeeze(0) for key in node_attrs})
+                attr.update({key: F.squeeze(feat_dict[key], 0) for key in node_attrs})
        if edge_attrs is not None:
            for _, _, attr in nx_graph.edges(data=True):
                eid = attr['id']
                feat_dict = self.get_e_repr(eid)
-                attr.update({key: feat_dict[key].squeeze(0) for key in edge_attrs})
+                attr.update({key: F.squeeze(feat_dict[key], 0) for key in edge_attrs})
        return nx_graph
    def from_networkx(self, nx_graph, node_attrs=None, edge_attrs=None):
...@@ -2830,7 +2830,7 @@ class DGLGraph(object):
            return F.nonzero_1d(n_mask)
        else:
            nodes = F.tensor(nodes)
-            return nodes[n_mask]
+            return F.boolean_mask(nodes, n_mask)
    def filter_edges(self, predicate, edges=ALL):
        """Return a tensor of edge IDs that satisfy the given predicate.
...@@ -2903,7 +2903,7 @@ class DGLGraph(object):
            return F.nonzero_1d(e_mask)
        else:
            edges = F.tensor(edges)
-            return edges[e_mask]
+            return F.boolean_mask(edges, e_mask)
    def __repr__(self):
        ret = ('DGLGraph(num_nodes={node}, num_edges={edge},\n'
......
...@@ -611,18 +611,22 @@ class GraphIndex(object):
            dat = F.ones((m,), dtype=F.float32, ctx=ctx)
            inc, shuffle_idx = F.sparse_matrix(dat, ('coo', idx), (n, m))
        elif typestr == 'both':
+            # first remove entries for self loops
+            mask = F.logical_not(F.equal(src, dst))
+            src = F.boolean_mask(src, mask)
+            dst = F.boolean_mask(dst, mask)
+            eid = F.boolean_mask(eid, mask)
+            n_entries = F.shape(src)[0]
            # create index
            row = F.unsqueeze(F.cat([src, dst], dim=0), 0)
            col = F.unsqueeze(F.cat([eid, eid], dim=0), 0)
            idx = F.cat([row, col], dim=0)
-            # create data
-            diagonal = (src == dst)
            # FIXME(minjie): data type
-            x = -F.ones((m,), dtype=F.float32, ctx=ctx)
-            y = F.ones((m,), dtype=F.float32, ctx=ctx)
-            x[diagonal] = 0
-            y[diagonal] = 0
+            x = -F.ones((n_entries,), dtype=F.float32, ctx=ctx)
+            y = F.ones((n_entries,), dtype=F.float32, ctx=ctx)
            dat = F.cat([x, y], dim=0)
-            print(idx)
-            print(dat)
            inc, shuffle_idx = F.sparse_matrix(dat, ('coo', idx), (n, m))
        else:
            raise DGLError('Invalid incidence matrix type: %s' % str(typestr))
......
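Filtering self loops out before building the COO index is what the commit
message calls "fixing duplicate entries that cause undefined behavior": a self
loop contributed the same (node, edge) coordinate twice, once as -1 and once
as +1, and duplicate COO entries have no defined semantics across backends.
A scipy sketch of the fixed construction, for illustration only:

import numpy as np
import scipy.sparse as sp

src = np.array([0, 1, 1])   # edge 2 is the self loop 1 -> 1
dst = np.array([1, 2, 1])
eid = np.arange(3)
keep = src != dst           # drop self-loop edges before building COO
s, d, e = src[keep], dst[keep], eid[keep]
row = np.concatenate([s, d])
col = np.concatenate([e, e])
dat = np.concatenate([-np.ones(len(e)), np.ones(len(e))])
print(sp.coo_matrix((dat, (row, col)), shape=(3, 3)).todense())
# column 2 (the self loop) is all zeros, matching the expectation in test_incmat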
Unit tests
===
The code is organized as follows:
* `backend`: Additional unified tensor interface for supported frameworks.
The functions there are only used in unit tests, not in DGL itself. Note that
the code there is not a unit test by itself.
* `compute`: All framework-agnostic computation-related unit tests go there.
* `${DGLBACKEND}` (e.g. `pytorch` and `mxnet`): All framework-specific
computation-related unit tests go there.
* `graph_index`: All unit tests for C++ graph structure implementation go
there. The Python API being tested in this directory, if any, should be
as minimal as possible (usually simple wrappers of corresponding C++
functions).
* `lint`: Pylint-related files.
* `scripts`: Automated test scripts for CI.
from dgl.backend import *
from . import backend_unittest
import os
import importlib
import sys
import numpy as np
mod_name = os.environ.get('DGLBACKEND', 'pytorch').lower()
mod = importlib.import_module('.%s' % mod_name, __name__)
thismod = sys.modules[__name__]
for api in backend_unittest.__dict__.keys():
if api.startswith('__'):
continue
elif callable(mod.__dict__[api]):
# Tensor APIs used in unit tests MUST be supported across all backends
globals()[api] = mod.__dict__[api]
# Tensor creation with default dtype and context
_zeros = zeros
_ones = ones
_randn = randn
_tensor = tensor
_arange = arange
_full = full
_full_1d = full_1d
_default_context_str = os.getenv('DGLTESTDEV', 'cpu')
_context_dict = {
'cpu': cpu(),
'cuda': cuda(),
}
_default_context = _context_dict[_default_context_str]
def zeros(shape, dtype=float32, ctx=_default_context):
return _zeros(shape, dtype, ctx)
def ones(shape, dtype=float32, ctx=_default_context):
return _ones(shape, dtype, ctx)
def randn(shape):
return copy_to(_randn(shape), _default_context)
def tensor(data, dtype=None):
if dtype is None:
data = np.array(data)
dtype = int64 if np.issubdtype(data.dtype, np.integer) else float32
return copy_to(_tensor(data, dtype), _default_context)
def arange(start, stop):
return copy_to(_arange(start, stop), _default_context)
def full(shape, fill_value, dtype, ctx=_default_context):
return _full(shape, fill_value, dtype, ctx)
def full_1d(length, fill_value, dtype, ctx=_default_context):
return _full_1d(length, fill_value, dtype, ctx)
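With the wrappers above, tests pick the framework via the DGLBACKEND
environment variable and the device via DGLTESTDEV, and can create tensors
without spelling out dtypes (illustrative usage):

import backend as F

a = F.zeros((2, 3))        # float32 on the default test device
b = F.tensor([1, 2, 3])    # integer data is promoted to int64
c = F.tensor([1.0, 2.0])   # float data is promoted to float32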
"""This file defines the unified tensor framework interface required by DGL
unit testing, other than the ones used in the framework itself.
"""
###############################################################################
# Tensor, data type and context interfaces
def cuda():
"""Context object for CUDA."""
pass
###############################################################################
# Tensor functions on feature data
# --------------------------------
# These functions are performance critical, so it's better to have efficient
# implementation in each framework.
def array_equal(a, b):
"""Check whether the two tensors are *exactly* equal."""
pass
def allclose(a, b):
"""Check whether the two tensors are numerically close to each other."""
pass
def randn(shape):
"""Generate a tensor with elements from standard normal distribution."""
pass
def attach_grad(x):
"""Flag the tensor *in-place* to have its gradient computed in backward
pass.
If the flag is already set, reset the gradient buffer as well.
"""
pass
def backward(x, head_gradient=None):
"""Invoke backward computation with an optional head gradient.
Returns nothing."""
pass
def grad(x):
"""Fetches the gradient from the tensor after backward computation."""
pass
def is_no_grad(x):
"""Check whether a tensor has its gradient computed."""
pass
def full(shape, fill_value, dtype, ctx):
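    """Create a tensor of the given shape filled with the given value."""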
pass
def narrow_row_set(x, start, stop, new):
"""Set a slice of the given tensor to a new value."""
pass
def sparse_to_numpy(x):
"""Convert a sparse tensor to a numpy array."""
pass
def clone(x):
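    """Return a copy of the tensor."""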
pass
def reduce_sum(x):
"""Sums all the elements into a single scalar."""
pass
###############################################################################
# Tensor functions used *only* on index tensor
# ----------------
# These operators are light-weighted, so it is acceptable to fallback to
# numpy operators if currently missing in the framework. Ideally in the future,
# DGL should contain all the operations on index, so this set of operators
# should be gradually removed.
###############################################################################
# Other interfaces
# ----------------
# These are not related to tensors. Some of them are temporary workarounds that
# should be included in DGL in the future.
class record_grad(object):
"""Context manager that records the gradients"""
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
class no_grad(object):
"""Context manager that explicitly disables gradient computation"""
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
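The gradient-related stubs compose into one contract that both framework
implementations below satisfy. A sketch of the intended usage, not part of
the stub file itself:

import backend as F

x = F.attach_grad(F.ones((2, 2)))
with F.record_grad():
    y = x * 2
    F.backward(y, F.ones((2, 2)))  # explicit head gradient of ones
print(F.grad(x))                   # all twos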
from __future__ import absolute_import
import numpy as np
import mxnet as mx
import mxnet.ndarray as nd
import mxnet.autograd as autograd
def cuda():
return mx.gpu()
def array_equal(a, b):
return nd.equal(a, b).asnumpy().all()
def allclose(a, b):
return np.allclose(a.asnumpy(), b.asnumpy(), rtol=1e-4, atol=1e-4)
def randn(shape):
return nd.random.randn(*shape)
def attach_grad(x):
x.attach_grad()
return x
def backward(x, head_gradient=None):
x.backward(head_gradient)
def grad(x):
return x.grad
def is_no_grad(x):
return (x != 0).sum() == 0
def full(shape, fill_value, dtype, ctx):
return nd.full(shape, fill_value, dtype=dtype, ctx=ctx)
def narrow_row_set(x, start, stop, new):
x[start:stop] = new
def sparse_to_numpy(x):
return x.asscipy().todense().A
def clone(x):
return x.copy()
def reduce_sum(x):
return x.sum()
record_grad = autograd.record
class no_grad(object):
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
from __future__ import absolute_import
import torch as th
def cuda():
return th.device('cuda')
def array_equal(a, b):
return th.equal(a, b)
def allclose(a, b):
return th.allclose(a.float(), b.float(), rtol=1e-4, atol=1e-4)
def randn(shape):
return th.randn(*shape)
def attach_grad(x):
if x.grad is not None:
x.grad.zero_()
return x
else:
return x.requires_grad_()
def backward(x, head_gradient=None):
x.backward(head_gradient)
def grad(x):
return x.grad
def is_no_grad(x):
return x.grad is None or (x.grad == 0).all()
def full(shape, fill_value, dtype, ctx):
return th.full(shape, fill_value, dtype=dtype, device=ctx)
def narrow_row_set(x, start, stop, new):
x[start:stop] = new
def sparse_to_numpy(x):
return x.to_dense().numpy()
def clone(x):
return x.clone()
def reduce_sum(x):
return x.sum()
class record_grad(object):
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
no_grad = th.no_grad
import dgl
-import torch as th
-import utils as U
+from dgl import DGLGraph
+import backend as F
def tree1():
    """Generate a tree
...@@ -17,8 +17,8 @@ def tree1():
    g.add_edge(4, 1)
    g.add_edge(1, 0)
    g.add_edge(2, 0)
-    g.ndata['h'] = th.Tensor([0, 1, 2, 3, 4])
-    g.edata['h'] = th.randn(4, 10)
+    g.ndata['h'] = F.tensor([0, 1, 2, 3, 4])
+    g.edata['h'] = F.randn((4, 10))
    return g
def tree2():
...@@ -36,8 +36,8 @@ def tree2():
    g.add_edge(0, 4)
    g.add_edge(4, 1)
    g.add_edge(3, 1)
-    g.ndata['h'] = th.Tensor([0, 1, 2, 3, 4])
-    g.edata['h'] = th.randn(4, 10)
+    g.ndata['h'] = F.tensor([0, 1, 2, 3, 4])
+    g.edata['h'] = F.randn((4, 10))
    return g
def test_batch_unbatch():
...@@ -52,10 +52,10 @@ def test_batch_unbatch():
    assert bg.batch_num_edges == [4, 4]
    tt1, tt2 = dgl.unbatch(bg)
-    assert U.allclose(t1.ndata['h'], tt1.ndata['h'])
-    assert U.allclose(t1.edata['h'], tt1.edata['h'])
-    assert U.allclose(t2.ndata['h'], tt2.ndata['h'])
-    assert U.allclose(t2.edata['h'], tt2.edata['h'])
+    assert F.allclose(t1.ndata['h'], tt1.ndata['h'])
+    assert F.allclose(t1.edata['h'], tt1.edata['h'])
+    assert F.allclose(t2.ndata['h'], tt2.ndata['h'])
+    assert F.allclose(t2.edata['h'], tt2.edata['h'])
def test_batch_unbatch1():
    t1 = tree1()
...@@ -69,12 +69,12 @@ def test_batch_unbatch1():
    assert b2.batch_num_edges == [4, 4, 4]
    s1, s2, s3 = dgl.unbatch(b2)
-    assert U.allclose(t2.ndata['h'], s1.ndata['h'])
-    assert U.allclose(t2.edata['h'], s1.edata['h'])
-    assert U.allclose(t1.ndata['h'], s2.ndata['h'])
-    assert U.allclose(t1.edata['h'], s2.edata['h'])
-    assert U.allclose(t2.ndata['h'], s3.ndata['h'])
-    assert U.allclose(t2.edata['h'], s3.edata['h'])
+    assert F.allclose(t2.ndata['h'], s1.ndata['h'])
+    assert F.allclose(t2.edata['h'], s1.edata['h'])
+    assert F.allclose(t1.ndata['h'], s2.ndata['h'])
+    assert F.allclose(t1.edata['h'], s2.edata['h'])
+    assert F.allclose(t2.ndata['h'], s3.ndata['h'])
+    assert F.allclose(t2.edata['h'], s3.edata['h'])
def test_batch_unbatch2():
    # test setting/getting features after batch
...@@ -85,10 +85,10 @@ def test_batch_unbatch2():
    b.add_nodes(3)
    b.add_edges(0, [1, 2])
    c = dgl.batch([a, b])
-    c.ndata['h'] = th.ones(7, 1)
-    c.edata['w'] = th.ones(5, 1)
-    assert U.allclose(c.ndata['h'], th.ones(7, 1))
-    assert U.allclose(c.edata['w'], th.ones(5, 1))
+    c.ndata['h'] = F.ones((7, 1))
+    c.edata['w'] = F.ones((5, 1))
+    assert F.allclose(c.ndata['h'], F.ones((7, 1)))
+    assert F.allclose(c.edata['w'], F.ones((5, 1)))
def test_batch_send_then_recv():
    t1 = tree1()
...@@ -96,7 +96,7 @@ def test_batch_send_then_recv():
    bg = dgl.batch([t1, t2])
    bg.register_message_func(lambda edges: {'m' : edges.src['h']})
-    bg.register_reduce_func(lambda nodes: {'h' : th.sum(nodes.mailbox['m'], 1)})
+    bg.register_reduce_func(lambda nodes: {'h' : F.sum(nodes.mailbox['m'], 1)})
    u = [3, 4, 2 + 5, 0 + 5]
    v = [1, 1, 4 + 5, 4 + 5]
...@@ -113,7 +113,7 @@ def test_batch_send_and_recv():
    bg = dgl.batch([t1, t2])
    bg.register_message_func(lambda edges: {'m' : edges.src['h']})
-    bg.register_reduce_func(lambda nodes: {'h' : th.sum(nodes.mailbox['m'], 1)})
+    bg.register_reduce_func(lambda nodes: {'h' : F.sum(nodes.mailbox['m'], 1)})
    u = [3, 4, 2 + 5, 0 + 5]
    v = [1, 1, 4 + 5, 4 + 5]
...@@ -129,7 +129,7 @@ def test_batch_propagate():
    bg = dgl.batch([t1, t2])
    bg.register_message_func(lambda edges: {'m' : edges.src['h']})
-    bg.register_reduce_func(lambda nodes: {'h' : th.sum(nodes.mailbox['m'], 1)})
+    bg.register_reduce_func(lambda nodes: {'h' : F.sum(nodes.mailbox['m'], 1)})
    # get leaves.
    order = []
...@@ -154,17 +154,17 @@ def test_batched_edge_ordering():
    g1 = dgl.DGLGraph()
    g1.add_nodes(6)
    g1.add_edges([4, 4, 2, 2, 0], [5, 3, 3, 1, 1])
-    e1 = th.randn(5, 10)
+    e1 = F.randn((5, 10))
    g1.edata['h'] = e1
    g2 = dgl.DGLGraph()
    g2.add_nodes(6)
    g2.add_edges([0, 1 ,2 ,5, 4 ,5], [1, 2, 3, 4, 3, 0])
-    e2 = th.randn(6, 10)
+    e2 = F.randn((6, 10))
    g2.edata['h'] = e2
    g = dgl.batch([g1, g2])
    r1 = g.edata['h'][g.edge_id(4, 5)]
    r2 = g1.edata['h'][g1.edge_id(4, 5)]
-    assert th.equal(r1, r2)
+    assert F.array_equal(r1, r2)
def test_batch_no_edge():
    g1 = dgl.DGLGraph()
......
import torch as th
-import numpy as np
from dgl.graph import DGLGraph
-import utils as U
+import backend as F
def test_filter():
    g = DGLGraph()
    g.add_nodes(4)
    g.add_edges([0,1,2,3], [1,2,3,0])
-    n_repr = th.zeros(4, 5)
-    e_repr = th.zeros(4, 5)
+    n_repr = F.zeros((4, 5))
+    e_repr = F.zeros((4, 5))
    n_repr[[1, 3]] = 1
    e_repr[[1, 3]] = 1
...@@ -17,23 +16,23 @@ def test_filter():
    g.edata['a'] = e_repr
    def predicate(r):
-        return r.data['a'].max(1)[0] > 0
+        return F.max(r.data['a'], 1) > 0
    # full node filter
    n_idx = g.filter_nodes(predicate)
-    assert set(n_idx.numpy()) == {1, 3}
+    assert set(F.zerocopy_to_numpy(n_idx)) == {1, 3}
    # partial node filter
    n_idx = g.filter_nodes(predicate, [0, 1])
-    assert set(n_idx.numpy()) == {1}
+    assert set(F.zerocopy_to_numpy(n_idx)) == {1}
    # full edge filter
    e_idx = g.filter_edges(predicate)
-    assert set(e_idx.numpy()) == {1, 3}
+    assert set(F.zerocopy_to_numpy(e_idx)) == {1, 3}
    # partial edge filter
    e_idx = g.filter_edges(predicate, [0, 1])
-    assert set(e_idx.numpy()) == {1}
+    assert set(F.zerocopy_to_numpy(e_idx)) == {1}
if __name__ == '__main__':
......
-import torch as th
-from torch.autograd import Variable
import numpy as np
from dgl.frame import Frame, FrameRef
from dgl.utils import Index, toindex
-import utils as U
+import backend as F
N = 10
D = 5
...@@ -16,9 +14,13 @@ def check_fail(fn):
    return True
def create_test_data(grad=False):
-    c1 = Variable(th.randn(N, D), requires_grad=grad)
-    c2 = Variable(th.randn(N, D), requires_grad=grad)
-    c3 = Variable(th.randn(N, D), requires_grad=grad)
+    c1 = F.randn((N, D))
+    c2 = F.randn((N, D))
+    c3 = F.randn((N, D))
+    if grad:
+        c1 = F.attach_grad(c1)
+        c2 = F.attach_grad(c2)
+        c3 = F.attach_grad(c3)
    return {'a1' : c1, 'a2' : c2, 'a3' : c3}
def test_create():
...@@ -44,12 +46,12 @@ def test_column1():
    f = Frame(data)
    assert f.num_rows == N
    assert len(f) == 3
-    assert U.allclose(f['a1'].data, data['a1'].data)
+    assert F.allclose(f['a1'].data, data['a1'])
    f['a1'] = data['a2']
-    assert U.allclose(f['a2'].data, data['a2'].data)
+    assert F.allclose(f['a2'].data, data['a2'])
    # add a different length column should fail
    def failed_add_col():
-        f['a4'] = th.zeros([N+1, D])
+        f['a4'] = F.zeros([N+1, D])
    assert check_fail(failed_add_col)
    # delete all the columns
    del f['a1']
...@@ -64,14 +66,14 @@ def test_column2():
    f = FrameRef(data, toindex([3, 4, 5, 6, 7]))
    assert f.num_rows == 5
    assert len(f) == 3
-    assert U.allclose(f['a1'], data['a1'].data[3:8])
+    assert F.allclose(f['a1'], F.narrow_row(data['a1'].data, 3, 8))
    # set column should reflect on the referenced data
-    f['a1'] = th.zeros([5, D])
-    assert U.allclose(data['a1'].data[3:8], th.zeros([5, D]))
+    f['a1'] = F.zeros([5, D])
+    assert F.allclose(F.narrow_row(data['a1'].data, 3, 8), F.zeros([5, D]))
    # add new partial column should fail with error initializer
    f.set_initializer(lambda shape, dtype : assert_(False))
    def failed_add_col():
-        f['a4'] = th.ones([5, D])
+        f['a4'] = F.ones([5, D])
    assert check_fail(failed_add_col)
def test_append1():
...@@ -84,11 +86,11 @@ def test_append1():
    f1.append(f2)
    assert f1.num_rows == 2 * N
    c1 = f1['a1']
-    assert c1.data.shape == (2 * N, D)
-    truth = th.cat([data['a1'], data['a1']])
-    assert U.allclose(truth, c1.data)
+    assert tuple(F.shape(c1.data)) == (2 * N, D)
+    truth = F.cat([data['a1'], data['a1']], 0)
+    assert F.allclose(truth, c1.data)
    # append dict of different length columns should fail
-    f3 = {'a1' : th.zeros((3, D)), 'a2' : th.zeros((3, D)), 'a3' : th.zeros((2, D))}
+    f3 = {'a1' : F.zeros((3, D)), 'a2' : F.zeros((3, D)), 'a3' : F.zeros((2, D))}
    def failed_append():
        f1.append(f3)
    assert check_fail(failed_append)
...@@ -111,25 +113,25 @@ def test_append2():
    assert not f.is_span_whole_column()
    assert f.num_rows == 3 * N
    new_idx = list(range(N)) + list(range(2*N, 4*N))
-    assert th.all(f._index.tousertensor() == th.tensor(new_idx, dtype=th.int64))
+    assert F.array_equal(f._index.tousertensor(), F.tensor(new_idx, dtype=F.int64))
    assert data.num_rows == 4 * N
def test_append3():
    # test append on empty frame
    f = Frame(num_rows=5)
-    data = {'h' : th.ones((3, 2))}
+    data = {'h' : F.ones((3, 2))}
    f.append(data)
    assert f.num_rows == 8
-    ans = th.cat([th.zeros((5, 2)), th.ones((3, 2))], dim=0)
-    assert U.allclose(f['h'].data, ans)
+    ans = F.cat([F.zeros((5, 2)), F.ones((3, 2))], 0)
+    assert F.allclose(f['h'].data, ans)
    # test append with new column
-    data = {'h' : 2 * th.ones((3, 2)), 'w' : 2 * th.ones((3, 2))}
+    data = {'h' : 2 * F.ones((3, 2)), 'w' : 2 * F.ones((3, 2))}
    f.append(data)
    assert f.num_rows == 11
-    ans1 = th.cat([ans, 2 * th.ones((3, 2))], 0)
-    ans2 = th.cat([th.zeros((8, 2)), 2 * th.ones((3, 2))], 0)
-    assert U.allclose(f['h'].data, ans1)
-    assert U.allclose(f['w'].data, ans2)
+    ans1 = F.cat([ans, 2 * F.ones((3, 2))], 0)
+    ans2 = F.cat([F.zeros((8, 2)), 2 * F.ones((3, 2))], 0)
+    assert F.allclose(f['h'].data, ans1)
+    assert F.allclose(f['w'].data, ans2)
def test_row1():
    # test row getter/setter
...@@ -138,32 +140,32 @@ def test_row1():
    # getter
    # test non-duplicate keys
-    rowid = Index(th.tensor([0, 2]))
+    rowid = Index(F.tensor([0, 2]))
    rows = f[rowid]
    for k, v in rows.items():
-        assert v.shape == (len(rowid), D)
-        assert U.allclose(v, data[k][rowid])
+        assert tuple(F.shape(v)) == (len(rowid), D)
+        assert F.allclose(v, F.gather_row(data[k], rowid.tousertensor()))
    # test duplicate keys
-    rowid = Index(th.tensor([8, 2, 2, 1]))
+    rowid = Index(F.tensor([8, 2, 2, 1]))
    rows = f[rowid]
    for k, v in rows.items():
-        assert v.shape == (len(rowid), D)
-        assert U.allclose(v, data[k][rowid])
+        assert tuple(F.shape(v)) == (len(rowid), D)
+        assert F.allclose(v, F.gather_row(data[k], rowid.tousertensor()))
    # setter
-    rowid = Index(th.tensor([0, 2, 4]))
-    vals = {'a1' : th.zeros((len(rowid), D)),
-            'a2' : th.zeros((len(rowid), D)),
-            'a3' : th.zeros((len(rowid), D)),
+    rowid = Index(F.tensor([0, 2, 4]))
+    vals = {'a1' : F.zeros((len(rowid), D)),
+            'a2' : F.zeros((len(rowid), D)),
+            'a3' : F.zeros((len(rowid), D)),
            }
    f[rowid] = vals
    for k, v in f[rowid].items():
-        assert U.allclose(v, th.zeros((len(rowid), D)))
+        assert F.allclose(v, F.zeros((len(rowid), D)))
    # setting rows with new column should raise error with error initializer
    f.set_initializer(lambda shape, dtype : assert_(False))
    def failed_update_rows():
-        vals['a4'] = th.ones((len(rowid), D))
+        vals['a4'] = F.ones((len(rowid), D))
        f[rowid] = vals
    assert check_fail(failed_update_rows)
...@@ -172,34 +174,41 @@ def test_row2():
    data = create_test_data(grad=True)
    f = FrameRef(Frame(data))
+    with F.record_grad():
        # getter
        c1 = f['a1']
        # test non-duplicate keys
-        rowid = Index(th.tensor([0, 2]))
+        rowid = Index(F.tensor([0, 2]))
        rows = f[rowid]
-        rows['a1'].backward(th.ones((len(rowid), D)))
-        assert U.allclose(c1.grad[:,0], th.tensor([1., 0., 1., 0., 0., 0., 0., 0., 0., 0.]))
-        c1.grad.data.zero_()
+        y = rows['a1']
+        F.backward(y, F.ones((len(rowid), D)))
+        assert F.allclose(F.grad(c1)[:,0], F.tensor([1., 0., 1., 0., 0., 0., 0., 0., 0., 0.]))
+    f['a1'] = F.attach_grad(f['a1'])
+    with F.record_grad():
+        c1 = f['a1']
        # test duplicate keys
-        rowid = Index(th.tensor([8, 2, 2, 1]))
+        rowid = Index(F.tensor([8, 2, 2, 1]))
        rows = f[rowid]
-        rows['a1'].backward(th.ones((len(rowid), D)))
-        assert U.allclose(c1.grad[:,0], th.tensor([0., 1., 2., 0., 0., 0., 0., 0., 1., 0.]))
-        c1.grad.data.zero_()
+        y = rows['a1']
+        F.backward(y, F.ones((len(rowid), D)))
+        assert F.allclose(F.grad(c1)[:,0], F.tensor([0., 1., 2., 0., 0., 0., 0., 0., 1., 0.]))
+    f['a1'] = F.attach_grad(f['a1'])
+    with F.record_grad():
        # setter
        c1 = f['a1']
-        rowid = Index(th.tensor([0, 2, 4]))
-        vals = {'a1' : Variable(th.zeros((len(rowid), D)), requires_grad=True),
-                'a2' : Variable(th.zeros((len(rowid), D)), requires_grad=True),
-                'a3' : Variable(th.zeros((len(rowid), D)), requires_grad=True),
+        rowid = Index(F.tensor([0, 2, 4]))
+        vals = {'a1' : F.attach_grad(F.zeros((len(rowid), D))),
+                'a2' : F.attach_grad(F.zeros((len(rowid), D))),
+                'a3' : F.attach_grad(F.zeros((len(rowid), D))),
                }
        f[rowid] = vals
        c11 = f['a1']
-        c11.backward(th.ones((N, D)))
-        assert U.allclose(c1.grad[:,0], th.tensor([0., 1., 0., 1., 0., 1., 1., 1., 1., 1.]))
-        assert U.allclose(vals['a1'].grad, th.ones((len(rowid), D)))
-        assert vals['a2'].grad is None
+        F.backward(c11, F.ones((N, D)))
+    assert F.allclose(F.grad(c1)[:,0], F.tensor([0., 1., 0., 1., 0., 1., 1., 1., 1., 1.]))
+    assert F.allclose(F.grad(vals['a1']), F.ones((len(rowid), D)))
+    assert F.is_no_grad(vals['a2'])
def test_row3():
    # test row delete
...@@ -208,7 +217,7 @@ def test_row3():
    assert f.is_contiguous()
    assert f.is_span_whole_column()
    assert f.num_rows == N
-    del f[toindex(th.tensor([2, 3]))]
+    del f[toindex(F.tensor([2, 3]))]
    assert not f.is_contiguous()
    assert not f.is_span_whole_column()
    # delete is lazy: only reflect on the ref while the
...@@ -220,16 +229,16 @@ def test_row3():
    newidx.pop(2)
    newidx = toindex(newidx)
    for k, v in f.items():
-        assert U.allclose(v, data[k][newidx])
+        assert F.allclose(v, data[k][newidx])
def test_row4():
    # test updating row with empty frame but has preset num_rows
    f = FrameRef(Frame(num_rows=5))
-    rowid = Index(th.tensor([0, 2, 4]))
-    f[rowid] = {'h' : th.ones((3, 2))}
-    ans = th.zeros((5, 2))
-    ans[th.tensor([0, 2, 4])] = th.ones((3, 2))
-    assert U.allclose(f['h'], ans)
+    rowid = Index(F.tensor([0, 2, 4]))
+    f[rowid] = {'h' : F.ones((3, 2))}
+    ans = F.zeros((5, 2))
+    ans[F.tensor([0, 2, 4])] = F.ones((3, 2))
+    assert F.allclose(f['h'], ans)
def test_sharing():
    data = Frame(create_test_data())
...@@ -237,26 +246,26 @@ def test_sharing():
    f2 = FrameRef(data, index=toindex([2, 3, 4, 5, 6]))
    # test read
    for k, v in f1.items():
-        assert U.allclose(data[k].data[0:4], v)
+        assert F.allclose(F.narrow_row(data[k].data, 0, 4), v)
    for k, v in f2.items():
-        assert U.allclose(data[k].data[2:7], v)
-    f2_a1 = f2['a1'].data
+        assert F.allclose(F.narrow_row(data[k].data, 2, 7), v)
+    f2_a1 = f2['a1']
    # test write
    # update own ref should not been seen by the other.
-    f1[Index(th.tensor([0, 1]))] = {
-            'a1' : th.zeros([2, D]),
-            'a2' : th.zeros([2, D]),
-            'a3' : th.zeros([2, D]),
+    f1[Index(F.tensor([0, 1]))] = {
+            'a1' : F.zeros([2, D]),
+            'a2' : F.zeros([2, D]),
+            'a3' : F.zeros([2, D]),
            }
-    assert U.allclose(f2['a1'], f2_a1)
+    assert F.allclose(f2['a1'], f2_a1)
    # update shared space should been seen by the other.
-    f1[Index(th.tensor([2, 3]))] = {
-            'a1' : th.ones([2, D]),
-            'a2' : th.ones([2, D]),
-            'a3' : th.ones([2, D]),
+    f1[Index(F.tensor([2, 3]))] = {
+            'a1' : F.ones([2, D]),
+            'a2' : F.ones([2, D]),
+            'a3' : F.ones([2, D]),
            }
-    f2_a1[0:2] = th.ones([2, D])
-    assert U.allclose(f2['a1'], f2_a1)
+    F.narrow_row_set(f2_a1, 0, 2, F.ones([2, D]))
+    assert F.allclose(f2['a1'], f2_a1)
def test_slicing():
    data = Frame(create_test_data(grad=True))
...@@ -264,81 +273,81 @@ def test_slicing():
    f2 = FrameRef(data, index=toindex(slice(3, 8)))
    # test read
    for k, v in f1.items():
-        assert U.allclose(data[k].data[1:5], v)
-    f2_a1 = f2['a1'].data
+        assert F.allclose(F.narrow_row(data[k].data, 1, 5), v)
+    f2_a1 = f2['a1']  # is a tensor
    # test write
-    f1[Index(th.tensor([0, 1]))] = {
-            'a1': th.zeros([2, D]),
-            'a2': th.zeros([2, D]),
-            'a3': th.zeros([2, D]),
+    f1[Index(F.tensor([0, 1]))] = {
+            'a1': F.zeros([2, D]),
+            'a2': F.zeros([2, D]),
+            'a3': F.zeros([2, D]),
            }
-    assert U.allclose(f2['a1'], f2_a1)
-    f1[Index(th.tensor([2, 3]))] = {
-            'a1': th.ones([2, D]),
-            'a2': th.ones([2, D]),
-            'a3': th.ones([2, D]),
+    assert F.allclose(f2['a1'], f2_a1)
+    f1[Index(F.tensor([2, 3]))] = {
+            'a1': F.ones([2, D]),
+            'a2': F.ones([2, D]),
+            'a3': F.ones([2, D]),
            }
-    f2_a1[toindex(slice(0,2))] = 1
-    assert U.allclose(f2['a1'], f2_a1)
+    F.narrow_row_set(f2_a1, 0, 2, 1)
+    assert F.allclose(f2['a1'], f2_a1)
-    f1[toindex(slice(2,4))] = {
-            'a1': th.zeros([2, D]),
-            'a2': th.zeros([2, D]),
-            'a3': th.zeros([2, D]),
+    f1[toindex(slice(2, 4))] = {
+            'a1': F.zeros([2, D]),
+            'a2': F.zeros([2, D]),
+            'a3': F.zeros([2, D]),
            }
-    f2_a1[toindex(slice(0,2))] = 0
-    assert U.allclose(f2['a1'], f2_a1)
+    F.narrow_row_set(f2_a1, 0, 2, 0)
+    assert F.allclose(f2['a1'], f2_a1)
def test_add_rows():
    data = Frame()
    f1 = FrameRef(data)
    f1.add_rows(4)
-    x = th.randn(1, 4)
-    f1[Index(th.tensor([0]))] = {'x': x}
-    ans = th.cat([x, th.zeros(3, 4)])
-    assert U.allclose(f1['x'], ans)
+    x = F.randn((1, 4))
+    f1[Index(F.tensor([0]))] = {'x': x}
+    ans = F.cat([x, F.zeros((3, 4))], 0)
+    assert F.allclose(f1['x'], ans)
    f1.add_rows(4)
-    f1[toindex(slice(4,8))] = {'x': th.ones(4, 4), 'y': th.ones(4, 5)}
-    ans = th.cat([ans, th.ones(4, 4)])
-    assert U.allclose(f1['x'], ans)
-    ans = th.cat([th.zeros(4, 5), th.ones(4, 5)])
-    assert U.allclose(f1['y'], ans)
+    f1[toindex(slice(4, 8))] = {'x': F.ones((4, 4)), 'y': F.ones((4, 5))}
+    ans = F.cat([ans, F.ones((4, 4))], 0)
+    assert F.allclose(f1['x'], ans)
+    ans = F.cat([F.zeros((4, 5)), F.ones((4, 5))], 0)
+    assert F.allclose(f1['y'], ans)
def test_inplace():
    f = FrameRef(Frame(create_test_data()))
    print(f.schemes)
-    a1addr = f['a1'].data.data_ptr()
-    a2addr = f['a2'].data.data_ptr()
-    a3addr = f['a3'].data.data_ptr()
+    a1addr = id(f['a1'])
+    a2addr = id(f['a2'])
+    a3addr = id(f['a3'])
    # column updates are always out-of-place
-    f['a1'] = th.ones((N, D))
-    newa1addr = f['a1'].data.data_ptr()
+    f['a1'] = F.ones((N, D))
+    newa1addr = id(f['a1'])
    assert a1addr != newa1addr
    a1addr = newa1addr
    # full row update that becomes column update
-    f[toindex(slice(0, N))] = {'a1' : th.ones((N, D))}
-    assert f['a1'].data.data_ptr() != a1addr
+    f[toindex(slice(0, N))] = {'a1' : F.ones((N, D))}
+    assert id(f['a1']) != a1addr
    # row update (outplace) w/ slice
-    f[toindex(slice(1, 4))] = {'a2' : th.ones((3, D))}
-    newa2addr = f['a2'].data.data_ptr()
+    f[toindex(slice(1, 4))] = {'a2' : F.ones((3, D))}
+    newa2addr = id(f['a2'])
    assert a2addr != newa2addr
    a2addr = newa2addr
    # row update (outplace) w/ list
-    f[toindex([1, 3, 5])] = {'a2' : th.ones((3, D))}
-    newa2addr = f['a2'].data.data_ptr()
+    f[toindex([1, 3, 5])] = {'a2' : F.ones((3, D))}
+    newa2addr = id(f['a2'])
    assert a2addr != newa2addr
    a2addr = newa2addr
    # row update (inplace) w/ slice
-    f.update_data(toindex(slice(1, 4)), {'a2' : th.ones((3, D))}, True)
-    newa2addr = f['a2'].data.data_ptr()
+    f.update_data(toindex(slice(1, 4)), {'a2' : F.ones((3, D))}, True)
+    newa2addr = id(f['a2'])
    assert a2addr == newa2addr
    # row update (inplace) w/ list
-    f.update_data(toindex([1, 3, 5]), {'a2' : th.ones((3, D))}, True)
-    newa2addr = f['a2'].data.data_ptr()
+    f.update_data(toindex([1, 3, 5]), {'a2' : F.ones((3, D))}, True)
+    newa2addr = id(f['a2'])
    assert a2addr == newa2addr
if __name__ == '__main__':
......
-import torch as th
import dgl
import dgl.function as fn
-import utils as U
+import backend as F
def generate_graph():
    g = dgl.DGLGraph()
    g.add_nodes(10) # 10 nodes.
-    h = th.arange(1, 11, dtype=th.float)
+    h = F.astype(F.arange(1, 11), F.float32)
    g.ndata['h'] = h
    # create a graph where 0 is the source and 9 is the sink
    for i in range(1, 9):
...@@ -14,13 +13,13 @@ def generate_graph():
        g.add_edge(i, 9)
    # add a back flow from 9 to 0
    g.add_edge(9, 0)
-    h = th.tensor([1., 2., 1., 3., 1., 4., 1., 5., 1., 6.,\
+    h = F.tensor([1., 2., 1., 3., 1., 4., 1., 5., 1., 6.,\
                   1., 7., 1., 8., 1., 9., 10.])
    g.edata['h'] = h
    return g
def reducer_both(nodes):
-    return {'h' : th.sum(nodes.mailbox['m'], 1)}
+    return {'h' : F.sum(nodes.mailbox['m'], 1)}
def test_copy_src():
    # copy_src with both fields
...@@ -28,8 +27,8 @@ def test_copy_src():
    g.register_message_func(fn.copy_src(src='h', out='m'))
    g.register_reduce_func(reducer_both)
    g.update_all()
-    assert U.allclose(g.ndata['h'],
-                      th.tensor([10., 1., 1., 1., 1., 1., 1., 1., 1., 44.]))
+    assert F.allclose(g.ndata['h'],
+                      F.tensor([10., 1., 1., 1., 1., 1., 1., 1., 1., 44.]))
def test_copy_edge():
    # copy_edge with both fields
...@@ -37,8 +36,8 @@ def test_copy_edge():
    g.register_message_func(fn.copy_edge(edge='h', out='m'))
    g.register_reduce_func(reducer_both)
    g.update_all()
-    assert U.allclose(g.ndata['h'],
-                      th.tensor([10., 1., 1., 1., 1., 1., 1., 1., 1., 44.]))
+    assert F.allclose(g.ndata['h'],
+                      F.tensor([10., 1., 1., 1., 1., 1., 1., 1., 1., 44.]))
def test_src_mul_edge():
    # src_mul_edge with all fields
...@@ -46,8 +45,8 @@ def test_src_mul_edge():
    g.register_message_func(fn.src_mul_edge(src='h', edge='h', out='m'))
    g.register_reduce_func(reducer_both)
    g.update_all()
-    assert U.allclose(g.ndata['h'],
-                      th.tensor([100., 1., 1., 1., 1., 1., 1., 1., 1., 284.]))
+    assert F.allclose(g.ndata['h'],
+                      F.tensor([100., 1., 1., 1., 1., 1., 1., 1., 1., 284.]))
if __name__ == '__main__':
    test_copy_src()
......
...@@ -3,29 +3,28 @@ import math
import numpy as np
import scipy.sparse as sp
import networkx as nx
-import torch as th
import dgl
-import utils as U
+import backend as F
def test_graph_creation():
    g = dgl.DGLGraph()
    # test add nodes with data
    g.add_nodes(5)
-    g.add_nodes(5, {'h' : th.ones((5, 2))})
-    ans = th.cat([th.zeros(5, 2), th.ones(5, 2)], 0)
-    U.allclose(ans, g.ndata['h'])
-    g.ndata['w'] = 2 * th.ones((10, 2))
-    assert U.allclose(2 * th.ones((10, 2)), g.ndata['w'])
+    g.add_nodes(5, {'h' : F.ones((5, 2))})
+    ans = F.cat([F.zeros((5, 2)), F.ones((5, 2))], 0)
+    assert F.allclose(ans, g.ndata['h'])
+    g.ndata['w'] = 2 * F.ones((10, 2))
+    assert F.allclose(2 * F.ones((10, 2)), g.ndata['w'])
    # test add edges with data
    g.add_edges([2, 3], [3, 4])
-    g.add_edges([0, 1], [1, 2], {'m' : th.ones((2, 2))})
-    ans = th.cat([th.zeros(2, 2), th.ones(2, 2)], 0)
-    assert U.allclose(ans, g.edata['m'])
+    g.add_edges([0, 1], [1, 2], {'m' : F.ones((2, 2))})
+    ans = F.cat([F.zeros((2, 2)), F.ones((2, 2))], 0)
+    assert F.allclose(ans, g.edata['m'])
    # test clear and add again
    g.clear()
    g.add_nodes(5)
-    g.ndata['h'] = 3 * th.ones((5, 2))
-    assert U.allclose(3 * th.ones((5, 2)), g.ndata['h'])
+    g.ndata['h'] = 3 * F.ones((5, 2))
+    assert F.allclose(3 * F.ones((5, 2)), g.ndata['h'])
def test_create_from_elist():
    elist = [(2, 1), (1, 0), (2, 0), (3, 0), (0, 2)]
...@@ -74,21 +73,27 @@ def test_incmat():
    g.add_edge(0, 3) # 2
    g.add_edge(2, 3) # 3
    g.add_edge(1, 1) # 4
-    assert U.allclose(
-        g.incidence_matrix('in').to_dense(),
-        th.tensor([[0., 0., 0., 0., 0.],
+    inc_in = F.sparse_to_numpy(g.incidence_matrix('in'))
+    inc_out = F.sparse_to_numpy(g.incidence_matrix('out'))
+    inc_both = F.sparse_to_numpy(g.incidence_matrix('both'))
+    print(inc_in)
+    print(inc_out)
+    print(inc_both)
+    assert np.allclose(
+        inc_in,
+        np.array([[0., 0., 0., 0., 0.],
                  [1., 0., 0., 0., 1.],
                  [0., 1., 0., 0., 0.],
                  [0., 0., 1., 1., 0.]]))
-    assert U.allclose(
-        g.incidence_matrix('out').to_dense(),
-        th.tensor([[1., 1., 1., 0., 0.],
+    assert np.allclose(
+        inc_out,
+        np.array([[1., 1., 1., 0., 0.],
                  [0., 0., 0., 0., 1.],
                  [0., 0., 0., 1., 0.],
                  [0., 0., 0., 0., 0.]]))
-    assert U.allclose(
-        g.incidence_matrix('both').to_dense(),
-        th.tensor([[-1., -1., -1., 0., 0.],
+    assert np.allclose(
+        inc_both,
+        np.array([[-1., -1., -1., 0., 0.],
                  [1., 0., 0., 0., 0.],
                  [0., 1., 0., -1., 0.],
                  [0., 0., 1., 1., 0.]]))
......
...@@ -2,9 +2,7 @@ import dgl
import dgl.ndarray as nd
from dgl.utils import toindex
import numpy as np
-import torch as th
-from torch.utils import dlpack
-import utils as U
+import backend as F
def test_dlpack():
    # test dlpack conversion.
...@@ -14,30 +12,34 @@ def test_dlpack():
                        [0., 0., 0., 0.]])
        x = nd.array(np.zeros((3, 4), dtype=np.float32))
        dl = x.to_dlpack()
-        y = dlpack.from_dlpack(dl)
+        y = F.zerocopy_from_dlpack(dl)
        y[0] = 1
+        print(x)
+        print(y)
        assert np.allclose(x.asnumpy(), ans)
    def th2nd():
        ans = np.array([[1., 1., 1., 1.],
                        [0., 0., 0., 0.],
                        [0., 0., 0., 0.]])
-        x = th.zeros((3, 4))
-        dl = dlpack.to_dlpack(x)
+        x = F.zeros((3, 4))
+        dl = F.zerocopy_to_dlpack(x)
        y = nd.from_dlpack(dl)
        x[0] = 1
+        print(x)
+        print(y)
        assert np.allclose(y.asnumpy(), ans)
    def th2nd_incontiguous():
-        import dgl.backend as F
-        x = th.LongTensor([[0, 1], [2, 3]])
+        x = F.astype(F.tensor([[0, 1], [2, 3]]), F.int64)
        ans = np.array([0, 2])
        y = x[:2, 0]
        # Uncomment this line and comment the one below to observe error
        #dl = dlpack.to_dlpack(y)
        dl = F.zerocopy_to_dlpack(y)
        z = nd.from_dlpack(dl)
+        print(x)
+        print(z)
        assert np.allclose(z.asnumpy(), ans)
    nd2th()
...@@ -50,7 +52,7 @@ def test_index():
    data = np.ones((10,), dtype=np.int64) * 10
    idx = toindex(data)
    y1 = idx.tonumpy()
-    y2 = idx.tousertensor().numpy()
+    y2 = F.asnumpy(idx.tousertensor())
    y3 = idx.todgltensor().asnumpy()
    assert np.allclose(ans, y1)
    assert np.allclose(ans, y2)
...@@ -60,17 +62,17 @@ def test_index():
    data = [10] * 10
    idx = toindex(data)
    y1 = idx.tonumpy()
-    y2 = idx.tousertensor().numpy()
+    y2 = F.asnumpy(idx.tousertensor())
    y3 = idx.todgltensor().asnumpy()
    assert np.allclose(ans, y1)
    assert np.allclose(ans, y2)
    assert np.allclose(ans, y3)
    # from torch
-    data = th.ones((10,), dtype=th.int64) * 10
+    data = F.ones((10,), dtype=F.int64) * 10
    idx = toindex(data)
    y1 = idx.tonumpy()
-    y2 = idx.tousertensor().numpy()
+    y2 = F.asnumpy(idx.tousertensor())
    y3 = idx.todgltensor().asnumpy()
    assert np.allclose(ans, y1)
    assert np.allclose(ans, y2)
...@@ -80,7 +82,7 @@ def test_index():
    data = dgl.ndarray.array(np.ones((10,), dtype=np.int64) * 10)
    idx = toindex(data)
    y1 = idx.tonumpy()
-    y2 = idx.tousertensor().numpy()
+    y2 = F.asnumpy(idx.tousertensor())
    y3 = idx.todgltensor().asnumpy()
    assert np.allclose(ans, y1)
    assert np.allclose(ans, y2)
......