Unverified commit 44089c8b authored by Minjie Wang, committed by GitHub

[Refactor][Graph] Merge DGLGraph and DGLHeteroGraph (#1862)



* Merge

* [Graph][CUDA] Graph on GPU and many refactoring (#1791)

* change edge_ids behavior and C++ impl

* fix unittests; remove utils.Index in edge_id

* pass mx and th tests

* pass tf test

* add aten::Scatter_

* Add nonzero; impl CSRGetDataAndIndices/CSRSliceMatrix

* CSRGetData and CSRGetDataAndIndices passed tests

* CSRSliceMatrix basic tests

* fix bug in empty slice

* CUDA CSRHasDuplicate

* has_node; has_edge_between

* predecessors, successors

* deprecate send/recv; fix send_and_recv

* deprecate send/recv; fix send_and_recv

* in_edges; out_edges; all_edges; apply_edges

* in deg/out deg

* subgraph/edge_subgraph

* adj

* in_subgraph/out_subgraph

* sample neighbors

* set/get_n/e_repr

* wip: working on refactoring all idtypes

* pass ndata/edata tests on gpu

* fix

* stash

* workaround nonzero issue

* stash

* nx conversion

* test_hetero_basics except update routines

* test_update_routines

* test_hetero_basics for pytorch

* more fixes

* WIP: flatten graph

* wip: flatten

* test_flatten

* test_to_device

* fix bug in to_homo

* fix bug in CSRSliceMatrix

* pass subgraph test

* fix send_and_recv

* fix filter

* test_heterograph

* passed all pytorch tests

* fix mx unittest

* fix pytorch test_nn

* fix all unittests for PyTorch

* passed all mxnet tests

* lint

* fix tf nn test

* pass all tf tests

* lint

* lint

* change deprecation

* try fix compile

* lint

* update METIS

* fix utest

* fix

* fix utests

* try debug

* revert

* small fix

* fix utests

* upd

* upd

* upd

* fix

* upd

* upd

* upd

* upd

* upd

* trigger

* +1s

* [kernel] Use heterograph index instead of unitgraph index (#1813)

* upd

* upd

* upd

* fix

* upd

* upd

* upd

* upd

* upd

* trigger

* +1s

* [Graph] Mutation for Heterograph (#1818)

* mutation add_nodes and add_edges

* Add support for remove_edges, remove_nodes, add_selfloop, remove_selfloop

* Fix
Co-authored-by: Ubuntu <ubuntu@ip-172-31-51-214.ec2.internal>

* upd

* upd

* upd

* fix

* [Transform] Mutable transform (#1833)

* add nodes

* All three

* Fix

* lint

* Add some test case

* Fix

* Fix

* Fix

* Fix

* Fix

* Fix

* fix

* trigger

* Fix

* fix
Co-authored-by: Ubuntu <ubuntu@ip-172-31-51-214.ec2.internal>

* [Graph] Migrate Batch & Readout module to heterograph (#1836)

* dgl.batch

* unbatch

* fix to device

* reduce readout; segment reduce

* change batch_num_nodes|edges to function

* reduce readout/ softmax

* broadcast

* topk

* fix

* fix tf and mx

* fix some ci

* fix batch but unbatch differently

* new check

* upd

* upd

* upd

* idtype behavior; code reorg

* idtype behavior; code reorg

* wip: test_basics

* pass test_basics

* WIP: from nx/ to nx

* missing files

* upd

* pass test_basics:test_nx_conversion

* Fix test

* Fix inplace update

* WIP: fixing tests

* upd

* pass test_transform cpu

* pass gpu test_transform

* pass test_batched_graph

* GPU graph auto cast to int32

* missing file

* stash

* WIP: rgcn-hetero

* Fix two datasets

* upd

* weird

* Fix capsule

* fuck you

* fuck matthias

* Fix dgmg

* fix bug in block degrees; pass rgcn-hetero

* rgcn

* gat and diffpool fix
also fix ppi and tu dataset

* Tree LSTM

* pointcloud

* rrn; wip: sgc

* resolve conflicts

* upd

* sgc and reddit dataset

* upd

* Fix deepwalk, gindt and gcn

* fix datasets and sign

* optimization

* optimization

* upd

* upd

* Fix GIN

* fix bug in add_nodes add_edges; tagcn

* adaptive sampling and gcmc

* upd

* upd

* fix geometric

* fix

* metapath2vec

* fix agnn

* fix pickling problem of block

* fix utests

* miss file

* linegraph

* upd

* upd

* upd

* graphsage

* stgcn_wave

* fix hgt

* on unittests

* Fix transformer

* Fix HAN

* passed pytorch unittests

* lint

* fix

* Fix cluster gcn

* cluster-gcn is ready

* on fixing block related codes

* 2nd order derivative

* Revert "2nd order derivative"

This reverts commit 523bf6c249bee61b51b1ad1babf42aad4167f206.

* passed torch utests again

* fix all mxnet unittests

* delete some useless tests

* pass all tf cpu tests

* disable

* disable distributed unittest

* fix

* fix

* lint

* fix

* fix

* fix script

* fix tutorial

* fix apply edges bug

* fix 2 basics

* fix tutorial
Co-authored-by: yzh119 <expye@outlook.com>
Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-51-214.ec2.internal>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-7-42.us-west-2.compute.internal>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-1-5.us-west-2.compute.internal>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-68-185.ec2.internal>
parent 015acfd2
#include <gtest/gtest.h>
#include <dgl/array.h>
#include "./common.h"
using namespace dgl;
using namespace dgl::runtime;
namespace {
template <typename IDX>
aten::CSRMatrix CSR1(DLContext ctx = CTX) {
// [[0, 1, 1, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
// data: [0, 2, 3, 1, 4]
return aten::CSRMatrix(
4, 5,
aten::VecToIdArray(std::vector<IDX>({0, 2, 3, 5, 5}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({1, 2, 0, 3, 2}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({0, 2, 3, 4, 1}), sizeof(IDX)*8, ctx),
false);
}
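As a reference for how these fixtures encode the commented matrices: the three arrays passed to `aten::CSRMatrix` are the standard `indptr`/`indices`/`data` triplet. A minimal standalone sketch below decodes such a triplet back into a dense count matrix (the helper name `CSRToDense` is made up for illustration and is not part of the DGL API); duplicate entries, as in `CSR2`, show up as counts greater than 1.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Expand a CSR index structure into a dense matrix of entry counts.
// indptr[r]..indptr[r+1] delimits row r's slots in `indices`; duplicate
// column indices within a row accumulate, so CSR2's (0, 2) entry becomes 2.
std::vector<std::vector<int>> CSRToDense(
    int64_t num_rows, int64_t num_cols,
    const std::vector<int64_t>& indptr,
    const std::vector<int64_t>& indices) {
  std::vector<std::vector<int>> dense(num_rows, std::vector<int>(num_cols, 0));
  for (int64_t r = 0; r < num_rows; ++r)
    for (int64_t i = indptr[r]; i < indptr[r + 1]; ++i)
      dense[r][indices[i]] += 1;
  return dense;
}
```

Feeding it CSR1's arrays (`indptr = {0, 2, 3, 5, 5}`, `indices = {1, 2, 0, 3, 2}`) reproduces the 4x5 matrix in the comment above.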
template <typename IDX>
aten::CSRMatrix CSR2(DLContext ctx = CTX) {
// has duplicate entries
// [[0, 1, 2, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
// data: [0, 2, 5, 3, 1, 4]
return aten::CSRMatrix(
4, 5,
aten::VecToIdArray(std::vector<IDX>({0, 3, 4, 6, 6}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({1, 2, 2, 0, 2, 3}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({0, 2, 5, 3, 1, 4}), sizeof(IDX)*8, ctx),
false);
}
template <typename IDX>
aten::COOMatrix COO1(DLContext ctx = CTX) {
// [[0, 1, 1, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
// data: [0, 2, 3, 1, 4]
// row : [0, 2, 0, 1, 2]
// col : [1, 2, 2, 0, 3]
return aten::COOMatrix(
4, 5,
aten::VecToIdArray(std::vector<IDX>({0, 2, 0, 1, 2}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({1, 2, 2, 0, 3}), sizeof(IDX)*8, ctx));
}
template <typename IDX>
aten::COOMatrix COO2(DLContext ctx = CTX) {
// has duplicate entries
// [[0, 1, 2, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
// data: [0, 2, 5, 3, 1, 4]
// row : [0, 2, 0, 1, 2, 0]
// col : [1, 2, 2, 0, 3, 2]
return aten::COOMatrix(
4, 5,
aten::VecToIdArray(std::vector<IDX>({0, 2, 0, 1, 2, 0}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({1, 2, 2, 0, 3, 2}), sizeof(IDX)*8, ctx));
}
template <typename IDX>
aten::CSRMatrix SR_CSR3(DLContext ctx) {
// [[0, 1, 2, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
return aten::CSRMatrix(
4, 5,
aten::VecToIdArray(std::vector<IDX>({0, 3, 4, 6, 6}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({2, 1, 2, 0, 2, 3}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({0, 2, 5, 3, 1, 4}), sizeof(IDX)*8, ctx),
false);
}
template <typename IDX>
aten::CSRMatrix SRC_CSR3(DLContext ctx) {
// [[0, 1, 2, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
return aten::CSRMatrix(
4, 5,
aten::VecToIdArray(std::vector<IDX>({0, 3, 4, 6, 6}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({1, 2, 2, 0, 2, 3}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({2, 0, 5, 3, 1, 4}), sizeof(IDX)*8, ctx),
false);
}
template <typename IDX>
aten::COOMatrix COO3(DLContext ctx) {
// has duplicate entries
// [[0, 1, 2, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
// row : [0, 2, 0, 1, 2, 0]
// col : [2, 2, 1, 0, 3, 2]
return aten::COOMatrix(
4, 5,
aten::VecToIdArray(std::vector<IDX>({0, 2, 0, 1, 2, 0}), sizeof(IDX)*8, ctx),
aten::VecToIdArray(std::vector<IDX>({2, 2, 1, 0, 3, 2}), sizeof(IDX)*8, ctx));
}
} // namespace
template <typename IDX>
void _TestCOOToCSR(DLContext ctx) {
auto coo = COO1<IDX>(ctx);
auto csr = CSR1<IDX>(ctx);
auto tcsr = aten::COOToCSR(coo);
ASSERT_EQ(coo.num_rows, csr.num_rows);
ASSERT_EQ(coo.num_cols, csr.num_cols);
ASSERT_TRUE(ArrayEQ<IDX>(csr.indptr, tcsr.indptr));
coo = COO2<IDX>(ctx);
csr = CSR2<IDX>(ctx);
tcsr = aten::COOToCSR(coo);
ASSERT_EQ(coo.num_rows, csr.num_rows);
ASSERT_EQ(coo.num_cols, csr.num_cols);
ASSERT_TRUE(ArrayEQ<IDX>(csr.indptr, tcsr.indptr));
// Convert from row sorted coo
coo = COO1<IDX>(ctx);
auto rs_coo = aten::COOSort(coo, false);
auto rs_csr = CSR1<IDX>(ctx);
auto rs_tcsr = aten::COOToCSR(rs_coo);
ASSERT_EQ(coo.num_rows, rs_tcsr.num_rows);
ASSERT_EQ(coo.num_cols, rs_tcsr.num_cols);
ASSERT_TRUE(ArrayEQ<IDX>(rs_csr.indptr, rs_tcsr.indptr));
ASSERT_TRUE(ArrayEQ<IDX>(rs_tcsr.indices, rs_coo.col));
ASSERT_TRUE(ArrayEQ<IDX>(rs_tcsr.data, rs_coo.data));
coo = COO3<IDX>(ctx);
rs_coo = aten::COOSort(coo, false);
rs_csr = SR_CSR3<IDX>(ctx);
rs_tcsr = aten::COOToCSR(rs_coo);
ASSERT_EQ(coo.num_rows, rs_tcsr.num_rows);
ASSERT_EQ(coo.num_cols, rs_tcsr.num_cols);
ASSERT_TRUE(ArrayEQ<IDX>(rs_csr.indptr, rs_tcsr.indptr));
ASSERT_TRUE(ArrayEQ<IDX>(rs_tcsr.indices, rs_coo.col));
ASSERT_TRUE(ArrayEQ<IDX>(rs_tcsr.data, rs_coo.data));
// Convert from col sorted coo
coo = COO1<IDX>(ctx);
auto src_coo = aten::COOSort(coo, true);
auto src_csr = CSR1<IDX>(ctx);
auto src_tcsr = aten::COOToCSR(src_coo);
ASSERT_EQ(coo.num_rows, src_tcsr.num_rows);
ASSERT_EQ(coo.num_cols, src_tcsr.num_cols);
ASSERT_TRUE(src_tcsr.sorted);
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.indptr, src_csr.indptr));
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.indices, src_coo.col));
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.data, src_coo.data));
coo = COO3<IDX>(ctx);
src_coo = aten::COOSort(coo, true);
src_csr = SRC_CSR3<IDX>(ctx);
src_tcsr = aten::COOToCSR(src_coo);
ASSERT_EQ(coo.num_rows, src_tcsr.num_rows);
ASSERT_EQ(coo.num_cols, src_tcsr.num_cols);
ASSERT_TRUE(src_tcsr.sorted);
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.indptr, src_csr.indptr));
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.indices, src_coo.col));
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.data, src_coo.data));
}
TEST(SpmatTest, COOToCSR) {
_TestCOOToCSR<int32_t>(CPU);
_TestCOOToCSR<int64_t>(CPU);
#ifdef DGL_USE_CUDA
_TestCOOToCSR<int32_t>(GPU);
#endif
}
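The conversion exercised above can be sketched as a stable counting sort by row: count entries per row to build `indptr`, then scatter each edge's column and original position. This is only an illustrative reference (the names `SimpleCSR` and `COOToCSRSketch` are invented here, not DGL's implementation), but it shows why `tcsr.indptr` must match regardless of within-row entry order.

```cpp
#include <cassert>
#include <cstdint>
#include <numeric>
#include <vector>

struct SimpleCSR {
  std::vector<int64_t> indptr, indices, data;
};

// Stable COO -> CSR: bucket edges by row, keeping their original order within
// each row, and record each edge's original position (its id) in `data`.
SimpleCSR COOToCSRSketch(int64_t num_rows,
                         const std::vector<int64_t>& row,
                         const std::vector<int64_t>& col) {
  const int64_t nnz = static_cast<int64_t>(row.size());
  SimpleCSR csr;
  csr.indptr.assign(num_rows + 1, 0);
  for (int64_t e = 0; e < nnz; ++e) csr.indptr[row[e] + 1]++;
  std::partial_sum(csr.indptr.begin(), csr.indptr.end(), csr.indptr.begin());
  std::vector<int64_t> fill(csr.indptr.begin(), csr.indptr.end() - 1);
  csr.indices.resize(nnz);
  csr.data.resize(nnz);
  for (int64_t e = 0; e < nnz; ++e) {
    const int64_t pos = fill[row[e]]++;
    csr.indices[pos] = col[e];
    csr.data[pos] = e;  // original edge id
  }
  return csr;
}
```

On COO1's arrays this yields `indptr = {0, 2, 3, 5, 5}`, the same row offsets as CSR1; the within-row order of indices/data may differ from the fixture, which is why the tests only compare `indptr` for unsorted inputs.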
template <typename IDX>
void _TestCOOHasDuplicate() {
auto csr = COO1<IDX>();
ASSERT_FALSE(aten::COOHasDuplicate(csr));
csr = COO2<IDX>();
ASSERT_TRUE(aten::COOHasDuplicate(csr));
}
TEST(SpmatTest, TestCOOHasDuplicate) {
_TestCOOHasDuplicate<int32_t>();
_TestCOOHasDuplicate<int64_t>();
}
template <typename IDX>
void _TestCOOSort(DLContext ctx) {
auto coo = COO3<IDX>(ctx);
auto sr_coo = COOSort(coo, false);
ASSERT_EQ(coo.num_rows, sr_coo.num_rows);
ASSERT_EQ(coo.num_cols, sr_coo.num_cols);
ASSERT_TRUE(sr_coo.row_sorted);
auto flags = COOIsSorted(sr_coo);
ASSERT_TRUE(flags.first);
flags = COOIsSorted(coo); // original coo should stay the same
ASSERT_FALSE(flags.first);
ASSERT_FALSE(flags.second);
auto src_coo = COOSort(coo, true);
ASSERT_EQ(coo.num_rows, src_coo.num_rows);
ASSERT_EQ(coo.num_cols, src_coo.num_cols);
ASSERT_TRUE(src_coo.row_sorted);
ASSERT_TRUE(src_coo.col_sorted);
flags = COOIsSorted(src_coo);
ASSERT_TRUE(flags.first);
ASSERT_TRUE(flags.second);
// sort inplace
COOSort_(&coo);
ASSERT_TRUE(coo.row_sorted);
flags = COOIsSorted(coo);
ASSERT_TRUE(flags.first);
COOSort_(&coo, true);
ASSERT_TRUE(coo.row_sorted);
ASSERT_TRUE(coo.col_sorted);
flags = COOIsSorted(coo);
ASSERT_TRUE(flags.first);
ASSERT_TRUE(flags.second);
// COO3
// [[0, 1, 2, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
// data: [0, 1, 2, 3, 4, 5]
// row : [0, 2, 0, 1, 2, 0]
// col : [2, 2, 1, 0, 3, 2]
// Row Sorted
// data: [0, 2, 5, 3, 1, 4]
// row : [0, 0, 0, 1, 2, 2]
// col : [2, 1, 2, 0, 2, 3]
// Row Col Sorted
// data: [2, 0, 5, 3, 1, 4]
// row : [0, 0, 0, 1, 2, 2]
// col : [1, 2, 2, 0, 2, 3]
auto sort_row = aten::VecToIdArray(
std::vector<IDX>({0, 0, 0, 1, 2, 2}), sizeof(IDX)*8, ctx);
auto sort_col = aten::VecToIdArray(
std::vector<IDX>({1, 2, 2, 0, 2, 3}), sizeof(IDX)*8, ctx);
auto sort_col_data = aten::VecToIdArray(
std::vector<IDX>({2, 0, 5, 3, 1, 4}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(sr_coo.row, sort_row));
ASSERT_TRUE(ArrayEQ<IDX>(src_coo.row, sort_row));
ASSERT_TRUE(ArrayEQ<IDX>(src_coo.col, sort_col));
ASSERT_TRUE(ArrayEQ<IDX>(src_coo.data, sort_col_data));
}
TEST(SpmatTest, COOSort) {
_TestCOOSort<int32_t>(CPU);
_TestCOOSort<int64_t>(CPU);
#ifdef DGL_USE_CUDA
_TestCOOSort<int32_t>(GPU);
#endif
}
template <typename IDX>
void _TestCOOReorder() {
auto coo = COO2<IDX>();
auto new_row = aten::VecToIdArray(
std::vector<IDX>({2, 0, 3, 1}), sizeof(IDX)*8, CTX);
auto new_col = aten::VecToIdArray(
std::vector<IDX>({2, 0, 4, 3, 1}), sizeof(IDX)*8, CTX);
auto new_coo = COOReorder(coo, new_row, new_col);
ASSERT_EQ(new_coo.num_rows, coo.num_rows);
ASSERT_EQ(new_coo.num_cols, coo.num_cols);
}
TEST(SpmatTest, TestCOOReorder) {
_TestCOOReorder<int32_t>();
_TestCOOReorder<int64_t>();
}
template <typename IDX>
void _TestCOOGetData(DLContext ctx) {
auto coo = COO2<IDX>(ctx);
// test get all data
auto x = aten::COOGetAllData(coo, 0, 0);
auto tx = aten::VecToIdArray(std::vector<IDX>({}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
x = aten::COOGetAllData(coo, 0, 2);
tx = aten::VecToIdArray(std::vector<IDX>({2, 5}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
// test get data
auto r = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, ctx);
auto c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, ctx);
x = aten::COOGetData(coo, r, c);
tx = aten::VecToIdArray(std::vector<IDX>({-1, 0, 2}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
// test get data on sorted
coo = aten::COOSort(coo);
r = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, ctx);
c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, ctx);
x = aten::COOGetData(coo, r, c);
tx = aten::VecToIdArray(std::vector<IDX>({-1, 0, 2}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
// test get data w/ broadcasting
r = aten::VecToIdArray(std::vector<IDX>({0}), sizeof(IDX)*8, ctx);
c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, ctx);
x = aten::COOGetData(coo, r, c);
tx = aten::VecToIdArray(std::vector<IDX>({-1, 0, 2}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
}
TEST(SpmatTest, COOGetData) {
_TestCOOGetData<int32_t>(CPU);
_TestCOOGetData<int64_t>(CPU);
//#ifdef DGL_USE_CUDA
//_TestCOOGetData<int32_t>(GPU);
//_TestCOOGetData<int64_t>(GPU);
//#endif
}
template <typename IDX>
void _TestCOOGetDataAndIndices() {
auto csr = COO2<IDX>();
auto r = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, CTX);
auto c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, CTX);
auto x = aten::COOGetDataAndIndices(csr, r, c);
auto tr = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, CTX);
auto tc = aten::VecToIdArray(std::vector<IDX>({1, 2, 2}), sizeof(IDX)*8, CTX);
auto td = aten::VecToIdArray(std::vector<IDX>({0, 2, 5}), sizeof(IDX)*8, CTX);
ASSERT_TRUE(ArrayEQ<IDX>(x[0], tr));
ASSERT_TRUE(ArrayEQ<IDX>(x[1], tc));
ASSERT_TRUE(ArrayEQ<IDX>(x[2], td));
}
TEST(SpmatTest, COOGetDataAndIndices) {
_TestCOOGetDataAndIndices<int32_t>();
_TestCOOGetDataAndIndices<int64_t>();
}
@@ -202,44 +202,69 @@ TEST(SpmatTest, TestCSRGetRowData) {
}
template <typename IDX>
void _TestCSRGetData() {
auto csr = CSR2<IDX>();
auto x = aten::CSRGetData(csr, 0, 0);
auto tx = aten::VecToIdArray(std::vector<IDX>({}), sizeof(IDX)*8, CTX);
void _TestCSRGetData(DLContext ctx) {
auto csr = CSR2<IDX>(ctx);
// test get all data
auto x = aten::CSRGetAllData(csr, 0, 0);
auto tx = aten::VecToIdArray(std::vector<IDX>({}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
x = aten::CSRGetAllData(csr, 0, 2);
tx = aten::VecToIdArray(std::vector<IDX>({2, 5}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
// test get data
auto r = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, ctx);
auto c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, ctx);
x = aten::CSRGetData(csr, r, c);
tx = aten::VecToIdArray(std::vector<IDX>({-1, 0, 2}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
x = aten::CSRGetData(csr, 0, 2);
tx = aten::VecToIdArray(std::vector<IDX>({2, 5}), sizeof(IDX)*8, CTX);
// test get data on sorted
csr = aten::CSRSort(csr);
r = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, ctx);
c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, ctx);
x = aten::CSRGetData(csr, r, c);
tx = aten::VecToIdArray(std::vector<IDX>({-1, 0, 2}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
auto r = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, CTX);
auto c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, CTX);
// test get data w/ broadcasting
r = aten::VecToIdArray(std::vector<IDX>({0}), sizeof(IDX)*8, ctx);
c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, ctx);
x = aten::CSRGetData(csr, r, c);
tx = aten::VecToIdArray(std::vector<IDX>({0, 2, 5}), sizeof(IDX)*8, CTX);
tx = aten::VecToIdArray(std::vector<IDX>({-1, 0, 2}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x, tx));
}
TEST(SpmatTest, TestCSRGetData) {
_TestCSRGetData<int32_t>();
_TestCSRGetData<int64_t>();
TEST(SpmatTest, CSRGetData) {
_TestCSRGetData<int32_t>(CPU);
_TestCSRGetData<int64_t>(CPU);
#ifdef DGL_USE_CUDA
_TestCSRGetData<int32_t>(GPU);
#endif
}
template <typename IDX>
void _TestCSRGetDataAndIndices() {
auto csr = CSR2<IDX>();
auto r = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, CTX);
auto c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, CTX);
void _TestCSRGetDataAndIndices(DLContext ctx) {
auto csr = CSR2<IDX>(ctx);
auto r = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, ctx);
auto c = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, ctx);
auto x = aten::CSRGetDataAndIndices(csr, r, c);
auto tr = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, CTX);
auto tc = aten::VecToIdArray(std::vector<IDX>({1, 2, 2}), sizeof(IDX)*8, CTX);
auto td = aten::VecToIdArray(std::vector<IDX>({0, 2, 5}), sizeof(IDX)*8, CTX);
auto tr = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, ctx);
auto tc = aten::VecToIdArray(std::vector<IDX>({1, 2, 2}), sizeof(IDX)*8, ctx);
auto td = aten::VecToIdArray(std::vector<IDX>({0, 2, 5}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x[0], tr));
ASSERT_TRUE(ArrayEQ<IDX>(x[1], tc));
ASSERT_TRUE(ArrayEQ<IDX>(x[2], td));
}
TEST(SpmatTest, TestCSRGetDataAndIndices) {
_TestCSRGetDataAndIndices<int32_t>();
_TestCSRGetDataAndIndices<int64_t>();
TEST(SpmatTest, CSRGetDataAndIndices) {
_TestCSRGetDataAndIndices<int32_t>(CPU);
_TestCSRGetDataAndIndices<int64_t>(CPU);
#ifdef DGL_USE_CUDA
_TestCSRGetDataAndIndices<int32_t>(GPU);
_TestCSRGetDataAndIndices<int64_t>(GPU);
#endif
}
template <typename IDX>
@@ -354,10 +379,12 @@ TEST(SpmatTest, TestCSRSliceRows) {
}
template <typename IDX>
void _TestCSRSliceMatrix() {
auto csr = CSR2<IDX>();
auto r = aten::VecToIdArray(std::vector<IDX>({0, 1, 3}), sizeof(IDX)*8, CTX);
auto c = aten::VecToIdArray(std::vector<IDX>({1, 2, 3}), sizeof(IDX)*8, CTX);
void _TestCSRSliceMatrix(DLContext ctx) {
auto csr = CSR2<IDX>(ctx);
{
// square
auto r = aten::VecToIdArray(std::vector<IDX>({0, 1, 3}), sizeof(IDX)*8, ctx);
auto c = aten::VecToIdArray(std::vector<IDX>({1, 2, 3}), sizeof(IDX)*8, ctx);
auto x = aten::CSRSliceMatrix(csr, r, c);
// [[1, 2, 0],
// [0, 0, 0],
@@ -365,30 +392,72 @@ void _TestCSRSliceMatrix() {
// data: [0, 2, 5]
ASSERT_EQ(x.num_rows, 3);
ASSERT_EQ(x.num_cols, 3);
auto tp = aten::VecToIdArray(std::vector<IDX>({0, 3, 3, 3}), sizeof(IDX)*8, CTX);
auto ti = aten::VecToIdArray(std::vector<IDX>({0, 1, 1}), sizeof(IDX)*8, CTX);
auto td = aten::VecToIdArray(std::vector<IDX>({0, 2, 5}), sizeof(IDX)*8, CTX);
auto tp = aten::VecToIdArray(std::vector<IDX>({0, 3, 3, 3}), sizeof(IDX)*8, ctx);
auto ti = aten::VecToIdArray(std::vector<IDX>({0, 1, 1}), sizeof(IDX)*8, ctx);
auto td = aten::VecToIdArray(std::vector<IDX>({0, 2, 5}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x.indptr, tp));
ASSERT_TRUE(ArrayEQ<IDX>(x.indices, ti));
ASSERT_TRUE(ArrayEQ<IDX>(x.data, td));
}
{
// non-square
auto r = aten::VecToIdArray(std::vector<IDX>({0, 1, 2}), sizeof(IDX)*8, ctx);
auto c = aten::VecToIdArray(std::vector<IDX>({0, 1}), sizeof(IDX)*8, ctx);
auto x = aten::CSRSliceMatrix(csr, r, c);
// [[0, 1],
// [1, 0],
// [0, 0]]
// data: [0, 3]
ASSERT_EQ(x.num_rows, 3);
ASSERT_EQ(x.num_cols, 2);
auto tp = aten::VecToIdArray(std::vector<IDX>({0, 1, 2, 2}), sizeof(IDX)*8, ctx);
auto ti = aten::VecToIdArray(std::vector<IDX>({1, 0}), sizeof(IDX)*8, ctx);
auto td = aten::VecToIdArray(std::vector<IDX>({0, 3}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x.indptr, tp));
ASSERT_TRUE(ArrayEQ<IDX>(x.indices, ti));
ASSERT_TRUE(ArrayEQ<IDX>(x.data, td));
}
{
// empty slice
auto r = aten::VecToIdArray(std::vector<IDX>({2, 3}), sizeof(IDX)*8, ctx);
auto c = aten::VecToIdArray(std::vector<IDX>({0, 1}), sizeof(IDX)*8, ctx);
auto x = aten::CSRSliceMatrix(csr, r, c);
// [[0, 0],
// [0, 0]]
// data: []
ASSERT_EQ(x.num_rows, 2);
ASSERT_EQ(x.num_cols, 2);
auto tp = aten::VecToIdArray(std::vector<IDX>({0, 0, 0}), sizeof(IDX)*8, ctx);
auto ti = aten::VecToIdArray(std::vector<IDX>({}), sizeof(IDX)*8, ctx);
auto td = aten::VecToIdArray(std::vector<IDX>({}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(x.indptr, tp));
ASSERT_TRUE(ArrayEQ<IDX>(x.indices, ti));
ASSERT_TRUE(ArrayEQ<IDX>(x.data, td));
}
}
TEST(SpmatTest, TestCSRSliceMatrix) {
_TestCSRSliceMatrix<int32_t>();
_TestCSRSliceMatrix<int64_t>();
TEST(SpmatTest, CSRSliceMatrix) {
_TestCSRSliceMatrix<int32_t>(CPU);
_TestCSRSliceMatrix<int64_t>(CPU);
#ifdef DGL_USE_CUDA
_TestCSRSliceMatrix<int32_t>(GPU);
#endif
}
template <typename IDX>
void _TestCSRHasDuplicate() {
auto csr = CSR1<IDX>();
void _TestCSRHasDuplicate(DLContext ctx) {
auto csr = CSR1<IDX>(ctx);
ASSERT_FALSE(aten::CSRHasDuplicate(csr));
csr = CSR2<IDX>();
csr = CSR2<IDX>(ctx);
ASSERT_TRUE(aten::CSRHasDuplicate(csr));
}
TEST(SpmatTest, TestCSRHasDuplicate) {
_TestCSRHasDuplicate<int32_t>();
_TestCSRHasDuplicate<int64_t>();
TEST(SpmatTest, CSRHasDuplicate) {
_TestCSRHasDuplicate<int32_t>(CPU);
_TestCSRHasDuplicate<int64_t>(CPU);
#ifdef DGL_USE_CUDA
_TestCSRHasDuplicate<int32_t>(GPU);
#endif
}
template <typename IDX>
@@ -414,160 +483,6 @@ TEST(SpmatTest, CSRSort) {
#endif
}
template <typename IDX>
void _TestCOOToCSR(DLContext ctx) {
auto coo = COO1<IDX>(ctx);
auto csr = CSR1<IDX>(ctx);
auto tcsr = aten::COOToCSR(coo);
ASSERT_EQ(coo.num_rows, csr.num_rows);
ASSERT_EQ(coo.num_cols, csr.num_cols);
ASSERT_TRUE(ArrayEQ<IDX>(csr.indptr, tcsr.indptr));
coo = COO2<IDX>(ctx);
csr = CSR2<IDX>(ctx);
tcsr = aten::COOToCSR(coo);
ASSERT_EQ(coo.num_rows, csr.num_rows);
ASSERT_EQ(coo.num_cols, csr.num_cols);
ASSERT_TRUE(ArrayEQ<IDX>(csr.indptr, tcsr.indptr));
// Convert from row sorted coo
coo = COO1<IDX>(ctx);
auto rs_coo = aten::COOSort(coo, false);
auto rs_csr = CSR1<IDX>(ctx);
auto rs_tcsr = aten::COOToCSR(rs_coo);
ASSERT_EQ(coo.num_rows, rs_tcsr.num_rows);
ASSERT_EQ(coo.num_cols, rs_tcsr.num_cols);
ASSERT_TRUE(ArrayEQ<IDX>(rs_csr.indptr, rs_tcsr.indptr));
ASSERT_TRUE(ArrayEQ<IDX>(rs_tcsr.indices, rs_coo.col));
ASSERT_TRUE(ArrayEQ<IDX>(rs_tcsr.data, rs_coo.data));
coo = COO3<IDX>(ctx);
rs_coo = aten::COOSort(coo, false);
rs_csr = SR_CSR3<IDX>(ctx);
rs_tcsr = aten::COOToCSR(rs_coo);
ASSERT_EQ(coo.num_rows, rs_tcsr.num_rows);
ASSERT_EQ(coo.num_cols, rs_tcsr.num_cols);
ASSERT_TRUE(ArrayEQ<IDX>(rs_csr.indptr, rs_tcsr.indptr));
ASSERT_TRUE(ArrayEQ<IDX>(rs_tcsr.indices, rs_coo.col));
ASSERT_TRUE(ArrayEQ<IDX>(rs_tcsr.data, rs_coo.data));
// Convert from col sorted coo
coo = COO1<IDX>(ctx);
auto src_coo = aten::COOSort(coo, true);
auto src_csr = CSR1<IDX>(ctx);
auto src_tcsr = aten::COOToCSR(src_coo);
ASSERT_EQ(coo.num_rows, src_tcsr.num_rows);
ASSERT_EQ(coo.num_cols, src_tcsr.num_cols);
ASSERT_TRUE(src_tcsr.sorted);
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.indptr, src_csr.indptr));
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.indices, src_coo.col));
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.data, src_coo.data));
coo = COO3<IDX>(ctx);
src_coo = aten::COOSort(coo, true);
src_csr = SRC_CSR3<IDX>(ctx);
src_tcsr = aten::COOToCSR(src_coo);
ASSERT_EQ(coo.num_rows, src_tcsr.num_rows);
ASSERT_EQ(coo.num_cols, src_tcsr.num_cols);
ASSERT_TRUE(src_tcsr.sorted);
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.indptr, src_csr.indptr));
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.indices, src_coo.col));
ASSERT_TRUE(ArrayEQ<IDX>(src_tcsr.data, src_coo.data));
}
TEST(SpmatTest, COOToCSR) {
_TestCOOToCSR<int32_t>(CPU);
_TestCOOToCSR<int64_t>(CPU);
#ifdef DGL_USE_CUDA
_TestCOOToCSR<int32_t>(GPU);
#endif
}
template <typename IDX>
void _TestCOOHasDuplicate() {
auto csr = COO1<IDX>();
ASSERT_FALSE(aten::COOHasDuplicate(csr));
csr = COO2<IDX>();
ASSERT_TRUE(aten::COOHasDuplicate(csr));
}
TEST(SpmatTest, TestCOOHasDuplicate) {
_TestCOOHasDuplicate<int32_t>();
_TestCOOHasDuplicate<int64_t>();
}
template <typename IDX>
void _TestCOOSort(DLContext ctx) {
auto coo = COO3<IDX>(ctx);
auto sr_coo = COOSort(coo, false);
ASSERT_EQ(coo.num_rows, sr_coo.num_rows);
ASSERT_EQ(coo.num_cols, sr_coo.num_cols);
ASSERT_TRUE(sr_coo.row_sorted);
auto flags = COOIsSorted(sr_coo);
ASSERT_TRUE(flags.first);
flags = COOIsSorted(coo); // original coo should stay the same
ASSERT_FALSE(flags.first);
ASSERT_FALSE(flags.second);
auto src_coo = COOSort(coo, true);
ASSERT_EQ(coo.num_rows, src_coo.num_rows);
ASSERT_EQ(coo.num_cols, src_coo.num_cols);
ASSERT_TRUE(src_coo.row_sorted);
ASSERT_TRUE(src_coo.col_sorted);
flags = COOIsSorted(src_coo);
ASSERT_TRUE(flags.first);
ASSERT_TRUE(flags.second);
// sort inplace
COOSort_(&coo);
ASSERT_TRUE(coo.row_sorted);
flags = COOIsSorted(coo);
ASSERT_TRUE(flags.first);
COOSort_(&coo, true);
ASSERT_TRUE(coo.row_sorted);
ASSERT_TRUE(coo.col_sorted);
flags = COOIsSorted(coo);
ASSERT_TRUE(flags.first);
ASSERT_TRUE(flags.second);
// COO3
// [[0, 1, 2, 0, 0],
// [1, 0, 0, 0, 0],
// [0, 0, 1, 1, 0],
// [0, 0, 0, 0, 0]]
// data: [0, 1, 2, 3, 4, 5]
// row : [0, 2, 0, 1, 2, 0]
// col : [2, 2, 1, 0, 3, 2]
// Row Sorted
// data: [0, 2, 5, 3, 1, 4]
// row : [0, 0, 0, 1, 2, 2]
// col : [2, 1, 2, 0, 2, 3]
// Row Col Sorted
// data: [2, 0, 5, 3, 1, 4]
// row : [0, 0, 0, 1, 2, 2]
// col : [1, 2, 2, 0, 2, 3]
auto sort_row = aten::VecToIdArray(
std::vector<IDX>({0, 0, 0, 1, 2, 2}), sizeof(IDX)*8, ctx);
auto sort_col = aten::VecToIdArray(
std::vector<IDX>({1, 2, 2, 0, 2, 3}), sizeof(IDX)*8, ctx);
auto sort_col_data = aten::VecToIdArray(
std::vector<IDX>({2, 0, 5, 3, 1, 4}), sizeof(IDX)*8, ctx);
ASSERT_TRUE(ArrayEQ<IDX>(sr_coo.row, sort_row));
ASSERT_TRUE(ArrayEQ<IDX>(src_coo.row, sort_row));
ASSERT_TRUE(ArrayEQ<IDX>(src_coo.col, sort_col));
ASSERT_TRUE(ArrayEQ<IDX>(src_coo.data, sort_col_data));
}
TEST(SpmatTest, COOSort) {
_TestCOOSort<int32_t>(CPU);
_TestCOOSort<int64_t>(CPU);
#ifdef DGL_USE_CUDA
_TestCOOSort<int32_t>(GPU);
#endif
}
template <typename IDX>
void _TestCSRReorder() {
auto csr = CSR2<IDX>();
@@ -584,20 +499,3 @@ TEST(SpmatTest, TestCSRReorder) {
_TestCSRReorder<int32_t>();
_TestCSRReorder<int64_t>();
}
template <typename IDX>
void _TestCOOReorder() {
auto coo = COO2<IDX>();
auto new_row = aten::VecToIdArray(
std::vector<IDX>({2, 0, 3, 1}), sizeof(IDX)*8, CTX);
auto new_col = aten::VecToIdArray(
std::vector<IDX>({2, 0, 4, 3, 1}), sizeof(IDX)*8, CTX);
auto new_coo = COOReorder(coo, new_row, new_col);
ASSERT_EQ(new_coo.num_rows, coo.num_rows);
ASSERT_EQ(new_coo.num_cols, coo.num_cols);
}
TEST(SpmatTest, TestCOOReorder) {
_TestCOOReorder<int32_t>();
_TestCOOReorder<int64_t>();
}
@@ -5,6 +5,7 @@
*/
#include <gtest/gtest.h>
#include <dgl/array.h>
#include <memory>
#include <vector>
#include <dgl/immutable_graph.h>
#include "./common.h"
@@ -12,7 +13,6 @@
#include "../../src/graph/unit_graph.h"
using namespace dgl;
using namespace dgl::aten;
using namespace dgl::runtime;
template <typename IdType>
@@ -71,50 +71,41 @@ void _TestUnitGraph(DLContext ctx) {
const aten::CSRMatrix &csr = CSR1<IdType>(ctx);
const aten::COOMatrix &coo = COO1<IdType>(ctx);
auto hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSC(2, csr, SparseFormat::kAny));
UnitGraphPtr g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 4);
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSR(2, csr, SparseFormat::kAny));
g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 2);
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCOO(2, coo, SparseFormat::kAny));
g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 1);
auto src = VecToIdArray<int64_t>({1, 2, 5, 3});
auto dst = VecToIdArray<int64_t>({1, 6, 2, 6});
auto mg = std::dynamic_pointer_cast<UnitGraph>(
dgl::UnitGraph::CreateFromCOO(2, 9, 8, src, dst, dgl::SparseFormat::kCOO));
ASSERT_EQ(mg->GetFormatInUse(), 1);
auto hmg = dgl::UnitGraph::CreateFromCOO(1, 8, 8, src, dst, dgl::SparseFormat::kCOO);
auto g = CreateFromCSC(2, csr);
ASSERT_EQ(g->GetCreatedFormats(), 4);
g = CreateFromCSR(2, csr);
ASSERT_EQ(g->GetCreatedFormats(), 2);
g = CreateFromCOO(2, coo);
ASSERT_EQ(g->GetCreatedFormats(), 1);
auto src = aten::VecToIdArray<int64_t>({1, 2, 5, 3});
auto dst = aten::VecToIdArray<int64_t>({1, 6, 2, 6});
auto mg = dgl::UnitGraph::CreateFromCOO(2, 9, 8, src, dst, coo_code);
ASSERT_EQ(mg->GetCreatedFormats(), 1);
auto hmg = dgl::UnitGraph::CreateFromCOO(1, 8, 8, src, dst, coo_code);
auto img = std::dynamic_pointer_cast<ImmutableGraph>(hmg->AsImmutableGraph());
ASSERT_TRUE(img != nullptr);
mg = std::dynamic_pointer_cast<UnitGraph>(
dgl::UnitGraph::CreateFromCOO(2, 9, 8, src, dst, dgl::SparseFormat::kCSR));
ASSERT_EQ(mg->GetFormatInUse(), 2);
hmg = dgl::UnitGraph::CreateFromCOO(1, 8, 8, src, dst, dgl::SparseFormat::kCSR);
mg = dgl::UnitGraph::CreateFromCOO(2, 9, 8, src, dst, csr_code | coo_code);
ASSERT_EQ(mg->GetCreatedFormats(), 1);
hmg = dgl::UnitGraph::CreateFromCOO(1, 8, 8, src, dst, csr_code | coo_code);
img = std::dynamic_pointer_cast<ImmutableGraph>(hmg->AsImmutableGraph());
ASSERT_TRUE(img != nullptr);
mg = std::dynamic_pointer_cast<UnitGraph>(
dgl::UnitGraph::CreateFromCOO(2, 9, 8, src, dst, dgl::SparseFormat::kCSC));
ASSERT_EQ(mg->GetFormatInUse(), 4);
hmg = dgl::UnitGraph::CreateFromCOO(1, 8, 8, src, dst, dgl::SparseFormat::kCSC);
mg = dgl::UnitGraph::CreateFromCOO(2, 9, 8, src, dst, csc_code | coo_code);
ASSERT_EQ(mg->GetCreatedFormats(), 1);
hmg = dgl::UnitGraph::CreateFromCOO(1, 8, 8, src, dst, csc_code | coo_code);
img = std::dynamic_pointer_cast<ImmutableGraph>(hmg->AsImmutableGraph());
ASSERT_TRUE(img != nullptr);
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSC(2, csr, SparseFormat::kAuto));
g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 4);
g = CreateFromCSC(2, csr);
ASSERT_EQ(g->GetCreatedFormats(), 4);
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSR(2, csr, SparseFormat::kAuto));
g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 2);
g = CreateFromCSR(2, csr);
ASSERT_EQ(g->GetCreatedFormats(), 2);
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCOO(2, coo, SparseFormat::kAuto));
g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 1);
g = CreateFromCOO(2, coo);
ASSERT_EQ(g->GetCreatedFormats(), 1);
}
template <typename IdType>
@@ -122,39 +113,36 @@ void _TestUnitGraph_GetInCSR(DLContext ctx) {
const aten::CSRMatrix &csr = CSR1<IdType>(ctx);
const aten::COOMatrix &coo = COO1<IdType>(ctx);
auto hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSC(2, csr, SparseFormat::kAny));
UnitGraphPtr g = hg->relation_graphs()[0];
auto g = CreateFromCSC(2, csr);
auto in_csr_matrix = g->GetCSCMatrix(0);
ASSERT_EQ(in_csr_matrix.num_rows, csr.num_rows);
ASSERT_EQ(in_csr_matrix.num_cols, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 4);
ASSERT_EQ(g->GetCreatedFormats(), 4);
// test out csr
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSR(2, csr, SparseFormat::kAny));
g = hg->relation_graphs()[0];
UnitGraphPtr g_ptr = std::dynamic_pointer_cast<UnitGraph>(g->GetGraphInFormat(SparseFormat::kCSC));
g = CreateFromCSR(2, csr);
auto g_ptr = g->GetGraphInFormat(csc_code);
in_csr_matrix = g_ptr->GetCSCMatrix(0);
ASSERT_EQ(in_csr_matrix.num_cols, csr.num_rows);
ASSERT_EQ(in_csr_matrix.num_rows, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 2);
ASSERT_EQ(g->GetCreatedFormats(), 2);
in_csr_matrix = g->GetCSCMatrix(0);
ASSERT_EQ(in_csr_matrix.num_cols, csr.num_rows);
ASSERT_EQ(in_csr_matrix.num_rows, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 6);
ASSERT_EQ(g->GetCreatedFormats(), 6);
// test out coo
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCOO(2, coo, SparseFormat::kAny));
g = hg->relation_graphs()[0];
g_ptr = std::dynamic_pointer_cast<UnitGraph>(g->GetGraphInFormat(SparseFormat::kCSC));
g = CreateFromCOO(2, coo);
g_ptr = g->GetGraphInFormat(csc_code);
in_csr_matrix = g_ptr->GetCSCMatrix(0);
ASSERT_EQ(in_csr_matrix.num_cols, coo.num_rows);
ASSERT_EQ(in_csr_matrix.num_rows, coo.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 1);
ASSERT_EQ(g->GetCreatedFormats(), 1);
in_csr_matrix = g->GetCSCMatrix(0);
ASSERT_EQ(in_csr_matrix.num_cols, coo.num_rows);
ASSERT_EQ(in_csr_matrix.num_rows, coo.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 5);
ASSERT_EQ(g->GetCreatedFormats(), 5);
}
template <typename IdType>
@@ -162,39 +150,36 @@ void _TestUnitGraph_GetOutCSR(DLContext ctx) {
const aten::CSRMatrix &csr = CSR1<IdType>(ctx);
const aten::COOMatrix &coo = COO1<IdType>(ctx);
auto hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSC(2, csr, SparseFormat::kAny));
UnitGraphPtr g = hg->relation_graphs()[0];
UnitGraphPtr g_ptr = std::dynamic_pointer_cast<UnitGraph>(g->GetGraphInFormat(SparseFormat::kCSR));
auto g = CreateFromCSC(2, csr);
auto g_ptr = g->GetGraphInFormat(csr_code);
auto out_csr_matrix = g_ptr->GetCSRMatrix(0);
ASSERT_EQ(out_csr_matrix.num_cols, csr.num_rows);
ASSERT_EQ(out_csr_matrix.num_rows, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 4);
ASSERT_EQ(g->GetCreatedFormats(), 4);
out_csr_matrix = g->GetCSRMatrix(0);
ASSERT_EQ(out_csr_matrix.num_cols, csr.num_rows);
ASSERT_EQ(out_csr_matrix.num_rows, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 6);
ASSERT_EQ(g->GetCreatedFormats(), 6);
// test out csr
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSR(2, csr, SparseFormat::kAny));
g = hg->relation_graphs()[0];
g = CreateFromCSR(2, csr);
out_csr_matrix = g->GetCSRMatrix(0);
ASSERT_EQ(out_csr_matrix.num_rows, csr.num_rows);
ASSERT_EQ(out_csr_matrix.num_cols, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 2);
ASSERT_EQ(g->GetCreatedFormats(), 2);
// test out coo
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCOO(2, coo, SparseFormat::kAny));
g = hg->relation_graphs()[0];
g_ptr = std::dynamic_pointer_cast<UnitGraph>(g->GetGraphInFormat(SparseFormat::kCSR));
g = CreateFromCOO(2, coo);
g_ptr = g->GetGraphInFormat(csr_code);
out_csr_matrix = g_ptr->GetCSRMatrix(0);
ASSERT_EQ(out_csr_matrix.num_rows, coo.num_rows);
ASSERT_EQ(out_csr_matrix.num_cols, coo.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 1);
ASSERT_EQ(g->GetCreatedFormats(), 1);
out_csr_matrix = g->GetCSRMatrix(0);
ASSERT_EQ(out_csr_matrix.num_rows, coo.num_rows);
ASSERT_EQ(out_csr_matrix.num_cols, coo.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 3);
ASSERT_EQ(g->GetCreatedFormats(), 3);
}
template <typename IdType>
@@ -202,38 +187,35 @@ void _TestUnitGraph_GetCOO(DLContext ctx) {
const aten::CSRMatrix &csr = CSR1<IdType>(ctx);
const aten::COOMatrix &coo = COO1<IdType>(ctx);
auto hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSC(2, csr, SparseFormat::kAny));
UnitGraphPtr g = hg->relation_graphs()[0];
UnitGraphPtr g_ptr = std::dynamic_pointer_cast<UnitGraph>(g->GetGraphInFormat(SparseFormat::kCOO));
auto g = CreateFromCSC(2, csr);
auto g_ptr = g->GetGraphInFormat(coo_code);
auto out_coo_matrix = g_ptr->GetCOOMatrix(0);
ASSERT_EQ(out_coo_matrix.num_cols, csr.num_rows);
ASSERT_EQ(out_coo_matrix.num_rows, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 4);
ASSERT_EQ(g->GetCreatedFormats(), 4);
out_coo_matrix = g->GetCOOMatrix(0);
ASSERT_EQ(out_coo_matrix.num_cols, csr.num_rows);
ASSERT_EQ(out_coo_matrix.num_rows, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 5);
ASSERT_EQ(g->GetCreatedFormats(), 5);
// test out csr
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSR(2, csr, SparseFormat::kAny));
g = hg->relation_graphs()[0];
g_ptr = std::dynamic_pointer_cast<UnitGraph>(g->GetGraphInFormat(SparseFormat::kCOO));
g = CreateFromCSR(2, csr);
g_ptr = g->GetGraphInFormat(coo_code);
out_coo_matrix = g_ptr->GetCOOMatrix(0);
ASSERT_EQ(out_coo_matrix.num_rows, csr.num_rows);
ASSERT_EQ(out_coo_matrix.num_cols, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 2);
ASSERT_EQ(g->GetCreatedFormats(), 2);
out_coo_matrix = g->GetCOOMatrix(0);
ASSERT_EQ(out_coo_matrix.num_rows, csr.num_rows);
ASSERT_EQ(out_coo_matrix.num_cols, csr.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 3);
ASSERT_EQ(g->GetCreatedFormats(), 3);
// test out coo
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCOO(2, coo, SparseFormat::kAny));
g = hg->relation_graphs()[0];
g = CreateFromCOO(2, coo);
out_coo_matrix = g->GetCOOMatrix(0);
ASSERT_EQ(out_coo_matrix.num_rows, coo.num_rows);
ASSERT_EQ(out_coo_matrix.num_cols, coo.num_cols);
ASSERT_EQ(g->GetFormatInUse(), 1);
ASSERT_EQ(g->GetCreatedFormats(), 1);
}
template <typename IdType>
@@ -241,63 +223,61 @@ void _TestUnitGraph_Reserve(DLContext ctx) {
const aten::CSRMatrix &csr = CSR1<IdType>(ctx);
const aten::COOMatrix &coo = COO1<IdType>(ctx);
auto hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSC(2, csr, SparseFormat::kAny));
UnitGraphPtr g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 4);
UnitGraphPtr r_g = g->Reverse();
ASSERT_EQ(r_g->GetFormatInUse(), 2);
auto g = CreateFromCSC(2, csr);
ASSERT_EQ(g->GetCreatedFormats(), 4);
auto r_g =
std::dynamic_pointer_cast<UnitGraph>(g->GetRelationGraph(0))->Reverse();
ASSERT_EQ(r_g->GetCreatedFormats(), 2);
aten::CSRMatrix g_in_csr = g->GetCSCMatrix(0);
aten::CSRMatrix r_g_out_csr = r_g->GetCSRMatrix(0);
ASSERT_TRUE(g_in_csr.indptr->data == r_g_out_csr.indptr->data);
ASSERT_TRUE(g_in_csr.indices->data == r_g_out_csr.indices->data);
aten::CSRMatrix g_out_csr = g->GetCSRMatrix(0);
ASSERT_EQ(g->GetFormatInUse(), 6);
ASSERT_EQ(r_g->GetFormatInUse(), 6);
ASSERT_EQ(g->GetCreatedFormats(), 6);
ASSERT_EQ(r_g->GetCreatedFormats(), 6);
aten::CSRMatrix r_g_in_csr = r_g->GetCSCMatrix(0);
ASSERT_TRUE(g_out_csr.indptr->data == r_g_in_csr.indptr->data);
ASSERT_TRUE(g_out_csr.indices->data == r_g_in_csr.indices->data);
aten::COOMatrix g_coo = g->GetCOOMatrix(0);
ASSERT_EQ(g->GetFormatInUse(), 7);
ASSERT_EQ(r_g->GetFormatInUse(), 6);
ASSERT_EQ(g->GetCreatedFormats(), 7);
ASSERT_EQ(r_g->GetCreatedFormats(), 6);
aten::COOMatrix r_g_coo = r_g->GetCOOMatrix(0);
ASSERT_EQ(r_g->GetFormatInUse(), 7);
ASSERT_EQ(r_g->GetCreatedFormats(), 7);
ASSERT_EQ(g_coo.num_rows, r_g_coo.num_cols);
ASSERT_EQ(g_coo.num_cols, r_g_coo.num_rows);
ASSERT_TRUE(ArrayEQ<IdType>(g_coo.row, r_g_coo.col));
ASSERT_TRUE(ArrayEQ<IdType>(g_coo.col, r_g_coo.row));
// test out csr
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCSR(2, csr, SparseFormat::kAny));
g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 2);
r_g = g->Reverse();
ASSERT_EQ(r_g->GetFormatInUse(), 4);
g = CreateFromCSR(2, csr);
ASSERT_EQ(g->GetCreatedFormats(), 2);
r_g = std::dynamic_pointer_cast<UnitGraph>(g->GetRelationGraph(0))->Reverse();
ASSERT_EQ(r_g->GetCreatedFormats(), 4);
g_out_csr = g->GetCSRMatrix(0);
r_g_in_csr = r_g->GetCSCMatrix(0);
ASSERT_TRUE(g_out_csr.indptr->data == r_g_in_csr.indptr->data);
ASSERT_TRUE(g_out_csr.indices->data == r_g_in_csr.indices->data);
g_in_csr = g->GetCSCMatrix(0);
ASSERT_EQ(g->GetFormatInUse(), 6);
ASSERT_EQ(r_g->GetFormatInUse(), 6);
ASSERT_EQ(g->GetCreatedFormats(), 6);
ASSERT_EQ(r_g->GetCreatedFormats(), 6);
r_g_out_csr = r_g->GetCSRMatrix(0);
ASSERT_TRUE(g_in_csr.indptr->data == r_g_out_csr.indptr->data);
ASSERT_TRUE(g_in_csr.indices->data == r_g_out_csr.indices->data);
g_coo = g->GetCOOMatrix(0);
ASSERT_EQ(g->GetFormatInUse(), 7);
ASSERT_EQ(r_g->GetFormatInUse(), 6);
ASSERT_EQ(g->GetCreatedFormats(), 7);
ASSERT_EQ(r_g->GetCreatedFormats(), 6);
r_g_coo = r_g->GetCOOMatrix(0);
ASSERT_EQ(r_g->GetFormatInUse(), 7);
ASSERT_EQ(r_g->GetCreatedFormats(), 7);
ASSERT_EQ(g_coo.num_rows, r_g_coo.num_cols);
ASSERT_EQ(g_coo.num_cols, r_g_coo.num_rows);
ASSERT_TRUE(ArrayEQ<IdType>(g_coo.row, r_g_coo.col));
ASSERT_TRUE(ArrayEQ<IdType>(g_coo.col, r_g_coo.row));
// test out coo
hg = std::dynamic_pointer_cast<HeteroGraph>(CreateFromCOO(2, coo, SparseFormat::kAny));
g = hg->relation_graphs()[0];
ASSERT_EQ(g->GetFormatInUse(), 1);
r_g = g->Reverse();
ASSERT_EQ(r_g->GetFormatInUse(), 1);
g = CreateFromCOO(2, coo);
ASSERT_EQ(g->GetCreatedFormats(), 1);
r_g = std::dynamic_pointer_cast<UnitGraph>(g->GetRelationGraph(0))->Reverse();
ASSERT_EQ(r_g->GetCreatedFormats(), 1);
g_coo = g->GetCOOMatrix(0);
r_g_coo = r_g->GetCOOMatrix(0);
ASSERT_EQ(g_coo.num_rows, r_g_coo.num_cols);
@@ -305,14 +285,14 @@ void _TestUnitGraph_Reserve(DLContext ctx) {
ASSERT_TRUE(g_coo.row->data == r_g_coo.col->data);
ASSERT_TRUE(g_coo.col->data == r_g_coo.row->data);
g_in_csr = g->GetCSCMatrix(0);
ASSERT_EQ(g->GetFormatInUse(), 5);
ASSERT_EQ(r_g->GetFormatInUse(), 3);
ASSERT_EQ(g->GetCreatedFormats(), 5);
ASSERT_EQ(r_g->GetCreatedFormats(), 3);
r_g_out_csr = r_g->GetCSRMatrix(0);
ASSERT_TRUE(g_in_csr.indptr->data == r_g_out_csr.indptr->data);
ASSERT_TRUE(g_in_csr.indices->data == r_g_out_csr.indices->data);
g_out_csr = g->GetCSRMatrix(0);
ASSERT_EQ(g->GetFormatInUse(), 7);
ASSERT_EQ(r_g->GetFormatInUse(), 7);
ASSERT_EQ(g->GetCreatedFormats(), 7);
ASSERT_EQ(r_g->GetCreatedFormats(), 7);
r_g_in_csr = r_g->GetCSCMatrix(0);
ASSERT_TRUE(g_out_csr.indptr->data == r_g_in_csr.indptr->data);
ASSERT_TRUE(g_out_csr.indices->data == r_g_in_csr.indices->data);
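The pointer-equality asserts in `_TestUnitGraph_Reserve` check that reversing a graph is zero-copy: the reverse graph's in-CSC is the original's out-CSR with the same index buffers, not a transposed copy. A minimal SciPy sketch of the underlying identity (SciPy may copy index arrays internally, so only the value-level equivalence is asserted here; the arrays are illustrative):

```python
import numpy as np
import scipy.sparse as sps

indptr = np.array([0, 1, 3, 3])
indices = np.array([1, 0, 2])
data = np.ones(3)
csr = sps.csr_matrix((data, indices, indptr), shape=(3, 3))
# Reinterpreting the same (data, indices, indptr) triple as CSC yields the
# transpose, i.e. the out-CSR of a graph is the in-CSC of its reverse.
rev_csc = sps.csc_matrix((data, indices, indptr), shape=(3, 3))
assert (rev_csc.toarray() == csr.toarray().T).all()
```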
import numpy as np
import dgl
import dgl.ndarray as nd
import dgl.graph_index as dgl_gidx
import dgl.heterograph_index as dgl_hgidx
from dgl.utils import toindex
import backend as F
"""
Test with a heterograph of three ntypes and three etypes
meta graph:
0 -> 1
1 -> 2
2 -> 1
Num nodes per ntype:
0 : 5
1 : 2
2 : 3
rel graph:
0->1 : [0 -> 0, 1 -> 0, 2 -> 0, 3 -> 0]
1->2 : [0 -> 0, 1 -> 1, 1 -> 2]
2->1 : [0 -> 1, 1 -> 1, 2 -> 1]
"""
def _array_equal(dglidx, l):
return list(dglidx) == list(l)
def rel1_from_coo():
row = toindex([0, 1, 2, 3])
col = toindex([0, 0, 0, 0])
return dgl_hgidx.create_bipartite_from_coo(5, 2, row, col)
def rel2_from_coo():
row = toindex([0, 1, 1])
col = toindex([0, 1, 2])
return dgl_hgidx.create_bipartite_from_coo(2, 3, row, col)
def rel3_from_coo():
row = toindex([0, 1, 2])
col = toindex([1, 1, 1])
return dgl_hgidx.create_bipartite_from_coo(3, 2, row, col)
def rel1_from_csr():
indptr = toindex([0, 1, 2, 3, 4, 4])
indices = toindex([0, 0, 0, 0])
edge_ids = toindex([0, 1, 2, 3])
return dgl_hgidx.create_bipartite_from_csr(5, 2, indptr, indices, edge_ids)
def rel2_from_csr():
indptr = toindex([0, 1, 3])
indices = toindex([0, 1, 2])
edge_ids = toindex([0, 1, 2])
return dgl_hgidx.create_bipartite_from_csr(2, 3, indptr, indices, edge_ids)
def rel3_from_csr():
indptr = toindex([0, 1, 2, 3])
indices = toindex([1, 1, 1])
edge_ids = toindex([0, 1, 2])
return dgl_hgidx.create_bipartite_from_csr(3, 2, indptr, indices, edge_ids)
def gen_from_coo():
mg = dgl_gidx.from_edge_list([(0, 1), (1, 2), (2, 1)], is_multigraph=False, readonly=True)
return dgl_hgidx.create_heterograph(mg, [rel1_from_coo(), rel2_from_coo(), rel3_from_coo()])
def gen_from_csr():
mg = dgl_gidx.from_edge_list([(0, 1), (1, 2), (2, 1)], is_multigraph=False, readonly=True)
return dgl_hgidx.create_heterograph(mg, [rel1_from_csr(), rel2_from_csr(), rel3_from_csr()])
def test_query():
R1 = 0
R2 = 1
R3 = 2
def _test_g(g):
assert g.number_of_ntypes() == 3
assert g.number_of_etypes() == 3
assert g.metagraph.number_of_nodes() == 3
assert g.metagraph.number_of_edges() == 3
assert g.ctx() == nd.cpu(0)
assert g.nbits() == 64
assert not g.is_multigraph()
assert g.is_readonly()
# relation graph 1
assert g.number_of_nodes(R1) == 5
assert g.number_of_edges(R1) == 4
assert g.has_node(0, 0)
assert not g.has_node(0, 10)
assert _array_equal(g.has_nodes(0, toindex([0, 10])), [1, 0])
assert g.has_edge_between(R1, 3, 0)
assert not g.has_edge_between(R1, 4, 0)
assert _array_equal(g.has_edges_between(R1, toindex([3, 4]), toindex([0, 0])), [1, 0])
assert _array_equal(g.predecessors(R1, 0), [0, 1, 2, 3])
assert _array_equal(g.predecessors(R1, 1), [])
assert _array_equal(g.successors(R1, 3), [0])
assert _array_equal(g.successors(R1, 4), [])
assert _array_equal(g.edge_id(R1, 0, 0), [0])
src, dst, eid = g.edge_ids(R1, toindex([0, 2, 1, 3]), toindex([0, 0, 0, 0]))
assert _array_equal(src, [0, 2, 1, 3])
assert _array_equal(dst, [0, 0, 0, 0])
assert _array_equal(eid, [0, 2, 1, 3])
src, dst, eid = g.find_edges(R1, toindex([3, 0]))
assert _array_equal(src, [3, 0])
assert _array_equal(dst, [0, 0])
assert _array_equal(eid, [3, 0])
src, dst, eid = g.in_edges(R1, toindex([0, 1]))
assert _array_equal(src, [0, 1, 2, 3])
assert _array_equal(dst, [0, 0, 0, 0])
assert _array_equal(eid, [0, 1, 2, 3])
src, dst, eid = g.out_edges(R1, toindex([1, 0, 4]))
assert _array_equal(src, [1, 0])
assert _array_equal(dst, [0, 0])
assert _array_equal(eid, [1, 0])
src, dst, eid = g.edges(R1, 'eid')
assert _array_equal(src, [0, 1, 2, 3])
assert _array_equal(dst, [0, 0, 0, 0])
assert _array_equal(eid, [0, 1, 2, 3])
assert g.in_degree(R1, 0) == 4
assert g.in_degree(R1, 1) == 0
assert _array_equal(g.in_degrees(R1, toindex([0, 1])), [4, 0])
assert g.out_degree(R1, 2) == 1
assert g.out_degree(R1, 4) == 0
assert _array_equal(g.out_degrees(R1, toindex([4, 2])), [0, 1])
# adjmat
adj = g.adjacency_matrix(R1, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0.],
[1., 0.],
[1., 0.],
[1., 0.],
[0., 0.]]))
adj = g.adjacency_matrix(R2, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0., 0.],
[0., 1., 1.]]))
adj = g.adjacency_matrix(R3, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[0., 1.],
[0., 1.],
[0., 1.]]))
g = gen_from_coo()
_test_g(g)
g = gen_from_csr()
_test_g(g)
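`edge_ids` in the queries above maps (src, dst) pairs back to edge ids. Conceptually it is a dictionary lookup built from the COO arrays; a sketch of that semantics for relation 0->1 (not DGL's actual implementation, which works on the index structures directly):

```python
# COO edge list of rel graph 0->1: eid i connects src[i] -> dst[i]
edges = list(zip([0, 1, 2, 3], [0, 0, 0, 0]))
eid_of = {pair: i for i, pair in enumerate(edges)}

queries = [(0, 0), (2, 0), (1, 0), (3, 0)]
eids = [eid_of[q] for q in queries]
assert eids == [0, 2, 1, 3]   # same order as the edge_ids assertion above
```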
def test_subgraph():
R1 = 0
R2 = 1
R3 = 2
def _test_g(g):
# node subgraph
induced_nodes = [toindex([0, 1, 4]), toindex([0]), toindex([0, 2])]
sub = g.node_subgraph(induced_nodes)
subg = sub.graph
assert subg.number_of_ntypes() == 3
assert subg.number_of_etypes() == 3
assert subg.number_of_nodes(0) == 3
assert subg.number_of_nodes(1) == 1
assert subg.number_of_nodes(2) == 2
assert subg.number_of_edges(R1) == 2
assert subg.number_of_edges(R2) == 1
assert subg.number_of_edges(R3) == 0
adj = subg.adjacency_matrix(R1, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1.],
[1.],
[0.]]))
adj = subg.adjacency_matrix(R2, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0.]]))
adj = subg.adjacency_matrix(R3, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[0.],
[0.]]))
assert len(sub.induced_nodes) == 3
assert _array_equal(sub.induced_nodes[0], induced_nodes[0])
assert _array_equal(sub.induced_nodes[1], induced_nodes[1])
assert _array_equal(sub.induced_nodes[2], induced_nodes[2])
assert len(sub.induced_edges) == 3
assert _array_equal(sub.induced_edges[0], [0, 1])
assert _array_equal(sub.induced_edges[1], [0])
assert _array_equal(sub.induced_edges[2], [])
# node subgraph with empty type graph
induced_nodes = [toindex([0, 1, 4]), toindex([0]), toindex([])]
sub = g.node_subgraph(induced_nodes)
subg = sub.graph
assert subg.number_of_ntypes() == 3
assert subg.number_of_etypes() == 3
assert subg.number_of_nodes(0) == 3
assert subg.number_of_nodes(1) == 1
assert subg.number_of_nodes(2) == 0
assert subg.number_of_edges(R1) == 2
assert subg.number_of_edges(R2) == 0
assert subg.number_of_edges(R3) == 0
adj = subg.adjacency_matrix(R1, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1.],
[1.],
[0.]]))
adj = subg.adjacency_matrix(R2, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([]))
adj = subg.adjacency_matrix(R3, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([]))
# edge subgraph (preserve_nodes=False)
induced_edges = [toindex([0, 2]), toindex([0]), toindex([0, 1, 2])]
sub = g.edge_subgraph(induced_edges, False)
subg = sub.graph
assert subg.number_of_ntypes() == 3
assert subg.number_of_etypes() == 3
assert subg.number_of_nodes(0) == 2
assert subg.number_of_nodes(1) == 2
assert subg.number_of_nodes(2) == 3
assert subg.number_of_edges(R1) == 2
assert subg.number_of_edges(R2) == 1
assert subg.number_of_edges(R3) == 3
adj = subg.adjacency_matrix(R1, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0.],
[1., 0.]]))
adj = subg.adjacency_matrix(R2, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0., 0.],
[0., 0., 0.]]))
adj = subg.adjacency_matrix(R3, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[0., 1.],
[0., 1.],
[0., 1.]]))
assert len(sub.induced_nodes) == 3
assert _array_equal(sub.induced_nodes[0], [0, 2])
assert _array_equal(sub.induced_nodes[1], [0, 1])
assert _array_equal(sub.induced_nodes[2], [0, 1, 2])
assert len(sub.induced_edges) == 3
assert _array_equal(sub.induced_edges[0], induced_edges[0])
assert _array_equal(sub.induced_edges[1], induced_edges[1])
assert _array_equal(sub.induced_edges[2], induced_edges[2])
# edge subgraph (preserve_nodes=True)
induced_edges = [toindex([0, 2]), toindex([0]), toindex([0, 1, 2])]
sub = g.edge_subgraph(induced_edges, True)
subg = sub.graph
assert subg.number_of_ntypes() == 3
assert subg.number_of_etypes() == 3
assert subg.number_of_nodes(0) == 5
assert subg.number_of_nodes(1) == 2
assert subg.number_of_nodes(2) == 3
assert subg.number_of_edges(R1) == 2
assert subg.number_of_edges(R2) == 1
assert subg.number_of_edges(R3) == 3
adj = subg.adjacency_matrix(R1, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0.],
[0., 0.],
[1., 0.],
[0., 0.],
[0., 0.]]))
adj = subg.adjacency_matrix(R2, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0., 0.],
[0., 0., 0.]]))
adj = subg.adjacency_matrix(R3, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[0., 1.],
[0., 1.],
[0., 1.]]))
assert len(sub.induced_nodes) == 3
assert _array_equal(sub.induced_nodes[0], [0, 1, 2, 3, 4])
assert _array_equal(sub.induced_nodes[1], [0, 1])
assert _array_equal(sub.induced_nodes[2], [0, 1, 2])
assert len(sub.induced_edges) == 3
assert _array_equal(sub.induced_edges[0], induced_edges[0])
assert _array_equal(sub.induced_edges[1], induced_edges[1])
assert _array_equal(sub.induced_edges[2], induced_edges[2])
# edge subgraph with empty induced edges (preserve_nodes=False)
induced_edges = [toindex([0, 2]), toindex([]), toindex([0, 1, 2])]
sub = g.edge_subgraph(induced_edges, False)
subg = sub.graph
assert subg.number_of_ntypes() == 3
assert subg.number_of_etypes() == 3
assert subg.number_of_nodes(0) == 2
assert subg.number_of_nodes(1) == 2
assert subg.number_of_nodes(2) == 3
assert subg.number_of_edges(R1) == 2
assert subg.number_of_edges(R2) == 0
assert subg.number_of_edges(R3) == 3
adj = subg.adjacency_matrix(R1, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0.],
[1., 0.]]))
adj = subg.adjacency_matrix(R2, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[0., 0., 0.],
[0., 0., 0.]]))
adj = subg.adjacency_matrix(R3, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[0., 1.],
[0., 1.],
[0., 1.]]))
assert len(sub.induced_nodes) == 3
assert _array_equal(sub.induced_nodes[0], [0, 2])
assert _array_equal(sub.induced_nodes[1], [0, 1])
assert _array_equal(sub.induced_nodes[2], [0, 1, 2])
assert len(sub.induced_edges) == 3
assert _array_equal(sub.induced_edges[0], induced_edges[0])
assert _array_equal(sub.induced_edges[1], induced_edges[1])
assert _array_equal(sub.induced_edges[2], induced_edges[2])
# edge subgraph with empty induced edges (preserve_nodes=True)
induced_edges = [toindex([0, 2]), toindex([]), toindex([0, 1, 2])]
sub = g.edge_subgraph(induced_edges, True)
subg = sub.graph
assert subg.number_of_ntypes() == 3
assert subg.number_of_etypes() == 3
assert subg.number_of_nodes(0) == 5
assert subg.number_of_nodes(1) == 2
assert subg.number_of_nodes(2) == 3
assert subg.number_of_edges(R1) == 2
assert subg.number_of_edges(R2) == 0
assert subg.number_of_edges(R3) == 3
adj = subg.adjacency_matrix(R1, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[1., 0.],
[0., 0.],
[1., 0.],
[0., 0.],
[0., 0.]]))
adj = subg.adjacency_matrix(R2, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[0., 0., 0.],
[0., 0., 0.]]))
adj = subg.adjacency_matrix(R3, True, F.cpu())[0]
assert np.allclose(F.sparse_to_numpy(adj),
np.array([[0., 1.],
[0., 1.],
[0., 1.]]))
assert len(sub.induced_nodes) == 3
assert _array_equal(sub.induced_nodes[0], [0, 1, 2, 3, 4])
assert _array_equal(sub.induced_nodes[1], [0, 1])
assert _array_equal(sub.induced_nodes[2], [0, 1, 2])
assert len(sub.induced_edges) == 3
assert _array_equal(sub.induced_edges[0], induced_edges[0])
assert _array_equal(sub.induced_edges[1], induced_edges[1])
assert _array_equal(sub.induced_edges[2], induced_edges[2])
g = gen_from_coo()
_test_g(g)
g = gen_from_csr()
_test_g(g)
if __name__ == '__main__':
test_query()
test_subgraph()
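The induced-edge bookkeeping in `test_subgraph` follows one rule per relation: keep exactly the edges whose source and destination both survive into the node subgraph. A NumPy sketch for relation 0->1 with the induced node sets used in the test:

```python
import numpy as np

# rel graph 0->1 edges: src [0, 1, 2, 3] -> dst [0, 0, 0, 0]
src = np.array([0, 1, 2, 3])
dst = np.array([0, 0, 0, 0])
keep_src = np.array([0, 1, 4])   # induced nodes of ntype 0
keep_dst = np.array([0])         # induced nodes of ntype 1

# An edge survives iff both endpoints are kept.
mask = np.isin(src, keep_src) & np.isin(dst, keep_dst)
induced_edges = np.nonzero(mask)[0]
assert induced_edges.tolist() == [0, 1]   # matches sub.induced_edges[0]
```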
@@ -192,7 +192,7 @@ function-naming-style=snake_case
#function-rgx=
# Good variable names which should always be accepted, separated by a comma.
good-names=i,j,k,u,v,e,n,m,w,x,y,g,G,hg,fn,ex,Run,_
good-names=i,j,k,u,v,e,n,m,w,x,y,g,G,hg,fn,ex,Run,_,us,vs,gs,op,ty
# Include a hint for the correct naming format with invalid-name.
include-naming-hint=no
@@ -7,8 +8,8 @@ import dgl
import dgl.nn.mxnet as nn
import dgl.function as fn
import backend as F
from test_utils.graph_cases import get_cases, random_graph, random_bipartite, random_dglgraph, \
random_block
from test_utils.graph_cases import get_cases, random_graph, random_bipartite, random_dglgraph
from test_utils import parametrize_dtype
from mxnet import autograd, gluon, nd
def check_close(a, b):
@@ -19,8 +19,10 @@ def _AXWb(A, X, W, b):
Y = mx.nd.dot(A, X.reshape(X.shape[0], -1)).reshape(X.shape)
return Y + b.data(X.context)
def test_graph_conv():
g = dgl.DGLGraph(nx.path_graph(3))
@parametrize_dtype
def test_graph_conv(idtype):
g = dgl.graph(nx.path_graph(3))
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
adj = g.adjacency_matrix(ctx=ctx)
@@ -76,32 +78,43 @@ def test_graph_conv():
assert "h" in g.ndata
check_close(g.ndata['h'], 2 * F.ones((3, 1)))
@pytest.mark.parametrize('g', get_cases(['path', 'bipartite', 'small', 'block'], exclude=['zero-degree']))
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite'], exclude=['zero-degree', 'dglgraph']))
@pytest.mark.parametrize('norm', ['none', 'both', 'right'])
@pytest.mark.parametrize('weight', [True, False])
@pytest.mark.parametrize('bias', [False])
def test_graph_conv2(g, norm, weight, bias):
def test_graph_conv2(idtype, g, norm, weight, bias):
g = g.astype(idtype).to(F.ctx())
conv = nn.GraphConv(5, 2, norm=norm, weight=weight, bias=bias)
conv.initialize(ctx=F.ctx())
ext_w = F.randn((5, 2)).as_in_context(F.ctx())
nsrc = g.number_of_nodes() if isinstance(g, dgl.DGLGraph) else g.number_of_src_nodes()
ndst = g.number_of_nodes() if isinstance(g, dgl.DGLGraph) else g.number_of_dst_nodes()
nsrc = ndst = g.number_of_nodes()
h = F.randn((nsrc, 5)).as_in_context(F.ctx())
h_dst = F.randn((ndst, 2)).as_in_context(F.ctx())
if weight:
h_out = conv(g, h)
else:
h_out = conv(g, h, ext_w)
assert h_out.shape == (ndst, 2)
if not isinstance(g, dgl.DGLGraph) and len(g.ntypes) == 2:
# bipartite, should also accept pair of tensors
if weight:
h_out2 = conv(g, (h, h_dst))
else:
h_out2 = conv(g, (h, h_dst), ext_w)
assert h_out2.shape == (ndst, 2)
assert F.array_equal(h_out, h_out2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite'], exclude=['zero-degree', 'dglgraph']))
@pytest.mark.parametrize('norm', ['none', 'both', 'right'])
@pytest.mark.parametrize('weight', [True, False])
@pytest.mark.parametrize('bias', [False])
def test_graph_conv2_bi(idtype, g, norm, weight, bias):
g = g.astype(idtype).to(F.ctx())
conv = nn.GraphConv(5, 2, norm=norm, weight=weight, bias=bias)
conv.initialize(ctx=F.ctx())
ext_w = F.randn((5, 2)).as_in_context(F.ctx())
nsrc = g.number_of_src_nodes()
ndst = g.number_of_dst_nodes()
h = F.randn((nsrc, 5)).as_in_context(F.ctx())
h_dst = F.randn((ndst, 2)).as_in_context(F.ctx())
if weight:
h_out = conv(g, (h, h_dst))
else:
h_out = conv(g, (h, h_dst), ext_w)
assert h_out.shape == (ndst, 2)
def _S2AXWb(A, N, X, W, b):
X1 = X * N
@@ -116,7 +129,7 @@ def _S2AXWb(A, N, X, W, b):
return Y + b
def test_tagconv():
g = dgl.DGLGraph(nx.path_graph(3))
g = dgl.DGLGraph(nx.path_graph(3)).to(F.ctx())
ctx = F.ctx()
adj = g.adjacency_matrix(ctx=ctx)
norm = mx.nd.power(g.in_degrees().astype('float32'), -0.5)
@@ -143,76 +156,62 @@ def test_tagconv():
h1 = conv(g, h0)
assert h1.shape[-1] == 2
def test_gat_conv():
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_gat_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3))
gat = nn.GATConv(10, 20, 5) # n_heads = 5
gat.initialize(ctx=ctx)
print(gat)
# test#1: basic
feat = F.randn((20, 10))
feat = F.randn((g.number_of_nodes(), 10))
h = gat(g, feat)
assert h.shape == (20, 5, 20)
assert h.shape == (g.number_of_nodes(), 5, 20)
# test#2: bipartite
g = dgl.bipartite(sp.sparse.random(100, 200, density=0.1))
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_gat_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gat = nn.GATConv((5, 10), 2, 4)
gat.initialize(ctx=ctx)
feat = (F.randn((100, 5)), F.randn((200, 10)))
feat = (F.randn((g.number_of_src_nodes(), 5)), F.randn((g.number_of_dst_nodes(), 10)))
h = gat(g, feat)
assert h.shape == (200, 4, 2)
# test#3: block
g = dgl.graph(sp.sparse.random(100, 100, density=0.001))
seed_nodes = np.unique(g.edges()[1].asnumpy())
block = dgl.to_block(g, seed_nodes)
gat = nn.GATConv(5, 2, 4)
gat.initialize(ctx=ctx)
feat = F.randn((block.number_of_src_nodes(), 5))
h = gat(block, feat)
assert h.shape == (block.number_of_dst_nodes(), 4, 2)
assert h.shape == (g.number_of_dst_nodes(), 4, 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
@pytest.mark.parametrize('aggre_type', ['mean', 'pool', 'gcn'])
def test_sage_conv(aggre_type):
def test_sage_conv(idtype, g, aggre_type):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True)
sage = nn.SAGEConv(5, 10, aggre_type)
feat = F.randn((100, 5))
sage.initialize(ctx=ctx)
h = sage(g, feat)
assert h.shape[-1] == 10
g = dgl.graph(sp.sparse.random(100, 100, density=0.1))
sage = nn.SAGEConv(5, 10, aggre_type)
feat = F.randn((100, 5))
feat = F.randn((g.number_of_nodes(), 5))
sage.initialize(ctx=ctx)
h = sage(g, feat)
assert h.shape[-1] == 10
g = dgl.bipartite(sp.sparse.random(100, 200, density=0.1))
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
@pytest.mark.parametrize('aggre_type', ['mean', 'pool', 'gcn'])
def test_sage_conv_bi(idtype, g, aggre_type):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
dst_dim = 5 if aggre_type != 'gcn' else 10
sage = nn.SAGEConv((10, dst_dim), 2, aggre_type)
feat = (F.randn((100, 10)), F.randn((200, dst_dim)))
feat = (F.randn((g.number_of_src_nodes(), 10)), F.randn((g.number_of_dst_nodes(), dst_dim)))
sage.initialize(ctx=ctx)
h = sage(g, feat)
assert h.shape[-1] == 2
assert h.shape[0] == 200
g = dgl.graph(sp.sparse.random(100, 100, density=0.001))
seed_nodes = np.unique(g.edges()[1].asnumpy())
block = dgl.to_block(g, seed_nodes)
sage = nn.SAGEConv(5, 10, aggre_type)
feat = F.randn((block.number_of_src_nodes(), 5))
sage.initialize(ctx=ctx)
h = sage(block, feat)
assert h.shape[0] == block.number_of_dst_nodes()
assert h.shape[-1] == 10
assert h.shape[0] == g.number_of_dst_nodes()
@parametrize_dtype
@pytest.mark.parametrize('aggre_type', ['mean', 'pool', 'gcn'])
def test_sage_conv_bi2(idtype, aggre_type):
# Test the case for graphs without edges
g = dgl.bipartite([], num_nodes=(5, 3))
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
sage = nn.SAGEConv((3, 3), 2, 'gcn')
feat = (F.randn((5, 3)), F.randn((3, 3)))
sage.initialize(ctx=ctx)
@@ -228,7 +227,7 @@ def test_sage_conv(aggre_type):
assert h.shape[0] == 3
def test_gg_conv():
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3))
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3)).to(F.ctx())
ctx = F.ctx()
gg_conv = nn.GatedGraphConv(10, 20, 3, 4) # n_step = 3, n_etypes = 4
@@ -242,7 +241,7 @@ def test_gg_conv():
assert h1.shape == (20, 20)
def test_cheb_conv():
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3))
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3)).to(F.ctx())
ctx = F.ctx()
cheb = nn.ChebConv(10, 20, 3) # k = 3
@@ -254,33 +253,32 @@ def test_cheb_conv():
h1 = cheb(g, h0)
assert h1.shape == (20, 20)
def test_agnn_conv():
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3))
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_agnn_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
agnn_conv = nn.AGNNConv(0.1, True)
agnn_conv.initialize(ctx=ctx)
print(agnn_conv)
# test#1: basic
feat = F.randn((20, 10))
feat = F.randn((g.number_of_nodes(), 10))
h = agnn_conv(g, feat)
assert h.shape == (20, 10)
assert h.shape == (g.number_of_nodes(), 10)
g = dgl.bipartite(sp.sparse.random(100, 200, density=0.1))
feat = (F.randn((100, 5)), F.randn((200, 5)))
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_agnn_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
agnn_conv = nn.AGNNConv(0.1, True)
agnn_conv.initialize(ctx=ctx)
print(agnn_conv)
feat = (F.randn((g.number_of_src_nodes(), 5)), F.randn((g.number_of_dst_nodes(), 5)))
h = agnn_conv(g, feat)
assert h.shape == (200, 5)
g = dgl.graph(sp.sparse.random(100, 100, density=0.001))
seed_nodes = np.unique(g.edges()[1].asnumpy())
block = dgl.to_block(g, seed_nodes)
feat = F.randn((block.number_of_src_nodes(), 5))
h = agnn_conv(block, feat)
assert h.shape == (block.number_of_dst_nodes(), 5)
assert h.shape == (g.number_of_dst_nodes(), 5)
def test_appnp_conv():
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3))
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3)).to(F.ctx())
ctx = F.ctx()
appnp_conv = nn.APPNPConv(3, 0.1, 0)
@@ -295,7 +293,7 @@ def test_appnp_conv():
def test_dense_cheb_conv():
for k in range(1, 4):
ctx = F.ctx()
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.3), readonly=True)
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.3)).to(F.ctx())
adj = g.adjacency_matrix(ctx=ctx).tostype('default')
cheb = nn.ChebConv(5, 2, k)
dense_cheb = nn.DenseChebConv(5, 2, k)
@@ -314,9 +312,11 @@ def test_dense_cheb_conv():
out_dense_cheb = dense_cheb(adj, feat, 2.0)
assert F.allclose(out_cheb, out_dense_cheb)
@parametrize_dtype
@pytest.mark.parametrize('norm_type', ['both', 'right', 'none'])
@pytest.mark.parametrize('g', [random_graph(100), random_bipartite(100, 200)])
def test_dense_graph_conv(g, norm_type):
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_dense_graph_conv(idtype, g, norm_type):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
adj = g.adjacency_matrix(ctx=ctx).tostype('default')
conv = nn.GraphConv(5, 2, norm=norm_type, bias=True)
@@ -332,8 +332,10 @@ def test_dense_graph_conv(g, norm_type):
out_dense_conv = dense_conv(adj, feat)
assert F.allclose(out_conv, out_dense_conv)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'bipartite', 'block-bipartite']))
def test_dense_sage_conv(idtype, g):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
adj = g.adjacency_matrix(ctx=ctx).tostype('default')
sage = nn.SAGEConv(5, 2, 'gcn')
@@ -356,72 +358,83 @@ def test_dense_sage_conv(g):
out_dense_sage = dense_sage(adj, feat)
assert F.allclose(out_sage, out_dense_sage)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_edge_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
edge_conv = nn.EdgeConv(5, 2)
edge_conv.initialize(ctx=ctx)
print(edge_conv)
# test #1: basic
h0 = F.randn((g.number_of_nodes(), 5))
h1 = edge_conv(g, h0)
assert h1.shape == (g.number_of_nodes(), 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_edge_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
edge_conv = nn.EdgeConv(5, 2)
edge_conv.initialize(ctx=ctx)
print(edge_conv)
# test #1: basic
h0 = F.randn((g.number_of_src_nodes(), 5))
x0 = F.randn((g.number_of_dst_nodes(), 5))
h1 = edge_conv(g, (h0, x0))
assert h1.shape == (g.number_of_dst_nodes(), 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
@pytest.mark.parametrize('aggregator_type', ['mean', 'max', 'sum'])
def test_gin_conv(g, idtype, aggregator_type):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gin_conv = nn.GINConv(lambda x: x, aggregator_type, 0.1)
gin_conv.initialize(ctx=ctx)
print(gin_conv)
# test #1: basic
feat = F.randn((g.number_of_nodes(), 5))
h = gin_conv(g, feat)
assert h.shape == (g.number_of_nodes(), 5)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
@pytest.mark.parametrize('aggregator_type', ['mean', 'max', 'sum'])
def test_gin_conv_bi(g, idtype, aggregator_type):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gin_conv = nn.GINConv(lambda x: x, aggregator_type, 0.1)
gin_conv.initialize(ctx=ctx)
print(gin_conv)
feat = (F.randn((g.number_of_src_nodes(), 5)), F.randn((g.number_of_dst_nodes(), 5)))
h = gin_conv(g, feat)
assert h.shape == (g.number_of_dst_nodes(), 5)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_gmm_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gmm_conv = nn.GMMConv(5, 2, 5, 3, 'max')
gmm_conv.initialize(ctx=ctx)
# test #1: basic
h0 = F.randn((g.number_of_nodes(), 5))
pseudo = F.randn((g.number_of_edges(), 5))
h1 = gmm_conv(g, h0, pseudo)
assert h1.shape == (g.number_of_nodes(), 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_gmm_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gmm_conv = nn.GMMConv((5, 4), 2, 5, 3, 'max')
gmm_conv.initialize(ctx=ctx)
# test #1: basic
@@ -431,21 +444,11 @@ def test_gmm_conv():
h1 = gmm_conv(g, (h0, hd), pseudo)
assert h1.shape == (g.number_of_dst_nodes(), 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_nn_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
nn_conv = nn.NNConv(5, 2, gluon.nn.Embedding(3, 5 * 2), 'max')
nn_conv.initialize(ctx=ctx)
# test #1: basic
@@ -454,16 +457,11 @@ def test_nn_conv():
h1 = nn_conv(g, h0, etypes)
assert h1.shape == (g.number_of_nodes(), 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_nn_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
nn_conv = nn.NNConv((5, 4), 2, gluon.nn.Embedding(3, 5 * 2), 'max')
nn_conv.initialize(ctx=ctx)
# test #1: basic
@@ -473,19 +471,8 @@ def test_nn_conv():
h1 = nn_conv(g, (h0, hd), etypes)
assert h1.shape == (g.number_of_dst_nodes(), 2)
def test_sg_conv():
g = dgl.DGLGraph(nx.erdos_renyi_graph(20, 0.3)).to(F.ctx())
ctx = F.ctx()
sgc = nn.SGConv(5, 2, 2)
@@ -498,7 +485,7 @@ def test_sg_conv():
assert h1.shape == (g.number_of_nodes(), 2)
def test_set2set():
g = dgl.DGLGraph(nx.path_graph(10)).to(F.ctx())
ctx = F.ctx()
s2s = nn.Set2Set(5, 3, 3) # hidden size 5, 3 iters, 3 layers
@@ -517,7 +504,7 @@ def test_set2set():
assert h1.shape[0] == 3 and h1.shape[1] == 10 and h1.ndim == 2
def test_glob_att_pool():
g = dgl.DGLGraph(nx.path_graph(10)).to(F.ctx())
ctx = F.ctx()
gap = nn.GlobalAttentionPooling(gluon.nn.Dense(1), gluon.nn.Dense(10))
@@ -535,7 +522,7 @@ def test_glob_att_pool():
assert h1.shape[0] == 4 and h1.shape[1] == 10 and h1.ndim == 2
def test_simple_pool():
g = dgl.DGLGraph(nx.path_graph(15)).to(F.ctx())
sum_pool = nn.SumPooling()
avg_pool = nn.AvgPooling()
@@ -555,7 +542,7 @@ def test_simple_pool():
assert h1.shape[0] == 1 and h1.shape[1] == 10 * 5 and h1.ndim == 2
# test#2: batched graph
g_ = dgl.DGLGraph(nx.path_graph(5)).to(F.ctx())
bg = dgl.batch([g, g_, g, g_, g])
h0 = F.randn((bg.number_of_nodes(), 5))
h1 = sum_pool(bg, h0)
@@ -586,13 +573,13 @@ def test_simple_pool():
assert h1.shape[0] == 5 and h1.shape[1] == 10 * 5 and h1.ndim == 2
def uniform_attention(g, shape):
a = mx.nd.ones(shape).as_in_context(g.device)
target_shape = (g.number_of_edges(),) + (1,) * (len(shape) - 1)
return a / g.in_degrees(g.edges()[1]).reshape(target_shape).astype('float32')
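`uniform_attention` above encodes the expected output of `edge_softmax` when every edge score is equal: each edge ends up with weight `1 / in_degree(dst)`. A minimal standalone NumPy sketch of that per-destination softmax (a hypothetical helper for illustration, independent of DGL and the test backend):

```python
import numpy as np

def edge_softmax_np(dst, scores):
    """Softmax over edge scores, grouped by each edge's destination node."""
    out = np.empty_like(scores)
    for v in np.unique(dst):
        m = dst == v
        e = np.exp(scores[m] - scores[m].max())  # numerically stable softmax
        out[m] = e / e.sum()
    return out

# Bidirected path graph 0-1-2: edges (0,1), (1,0), (1,2), (2,1)
dst = np.array([1, 0, 2, 1])
w = edge_softmax_np(dst, np.zeros(4))
# Constant scores -> each edge gets 1 / in_degree(dst):
# node 1 has in-degree 2, nodes 0 and 2 have in-degree 1
assert np.allclose(w, [0.5, 1.0, 1.0, 0.5])
```

The grouping by destination is exactly the normalization the tests below compare against.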
def test_edge_softmax():
# Basic
g = dgl.DGLGraph(nx.path_graph(3)).to(F.ctx())
edata = F.ones((g.number_of_edges(), 1))
a = nn.edge_softmax(g, edata)
assert len(g.ndata) == 0
@@ -609,7 +596,7 @@ def test_edge_softmax():
1e-4, 1e-4)
def test_partial_edge_softmax():
g = dgl.DGLGraph().to(F.ctx())
g.add_nodes(30)
# build a complete graph
for i in range(30):
@@ -620,8 +607,7 @@ def test_partial_edge_softmax():
score.attach_grad()
grad = F.randn((300, 1))
import numpy as np
eids = F.tensor(np.random.choice(900, 300, replace=False), g.idtype)
# compute partial edge softmax
with mx.autograd.record():
y_1 = nn.edge_softmax(g, score, eids)
@@ -629,7 +615,7 @@ def test_partial_edge_softmax():
grad_1 = score.grad
# compute edge softmax on edge subgraph
subg = g.edge_subgraph(eids, preserve_nodes=True)
with mx.autograd.record():
y_2 = nn.edge_softmax(subg, score)
y_2.backward(grad)
@@ -641,7 +627,7 @@ def test_partial_edge_softmax():
def test_rgcn():
ctx = F.ctx()
etype = []
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True).to(F.ctx())
# 5 etypes
R = 5
for i in range(g.number_of_edges()):
@@ -705,7 +691,7 @@ def test_sequential():
e_feat += graph.edata['e']
return n_feat, e_feat
g = dgl.DGLGraph().to(F.ctx())
g.add_nodes(3)
g.add_edges([0, 1, 2, 0, 1, 2, 0, 1, 2], [0, 0, 0, 1, 1, 1, 2, 2, 2])
net = nn.Sequential()
@@ -731,9 +717,9 @@ def test_sequential():
n_feat += graph.ndata['h']
return n_feat.reshape(graph.number_of_nodes() // 2, 2, -1).sum(1)
g1 = dgl.DGLGraph(nx.erdos_renyi_graph(32, 0.05)).to(F.ctx())
g2 = dgl.DGLGraph(nx.erdos_renyi_graph(16, 0.2)).to(F.ctx())
g3 = dgl.DGLGraph(nx.erdos_renyi_graph(8, 0.8)).to(F.ctx())
net = nn.Sequential()
net.add(ExampleLayer())
net.add(ExampleLayer())
@@ -749,12 +735,14 @@ def myagg(alist, dsttype):
rst = rst + (i + 1) * alist[i]
return rst
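`myagg` is a custom cross-type aggregator passed to `nn.HeteroGraphConv`: for one destination type it combines the per-relation outputs as the weighted sum `alist[0] + 2*alist[1] + 3*alist[2] + ...`. A standalone NumPy check of that combination rule (the first lines of `myagg` are elided above; this sketch assumes they initialize `rst = alist[0]` and loop from `i = 1`):

```python
import numpy as np

def myagg_np(alist):
    # Weighted sum of per-relation outputs: alist[0] + 2*alist[1] + 3*alist[2] + ...
    rst = alist[0]
    for i in range(1, len(alist)):
        rst = rst + (i + 1) * alist[i]
    return rst

outputs = [np.ones(3), 2 * np.ones(3)]  # two relations feeding one dsttype
combined = myagg_np(outputs)
assert np.allclose(combined, 5 * np.ones(3))  # 1*1 + 2*2 = 5 per entry
```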
@parametrize_dtype
@pytest.mark.parametrize('agg', ['sum', 'max', 'min', 'mean', 'stack', myagg])
def test_hetero_conv(agg, idtype):
g = dgl.heterograph({
('user', 'follows', 'user'): [(0, 1), (0, 2), (2, 1), (1, 3)],
('user', 'plays', 'game'): [(0, 0), (0, 2), (0, 3), (1, 0), (2, 2)],
('store', 'sells', 'game'): [(0, 0), (0, 3), (1, 1), (1, 2)]},
idtype=idtype, device=F.ctx())
conv = nn.HeteroGraphConv({
'follows': nn.GraphConv(2, 3),
'plays': nn.GraphConv(2, 4),
@@ -5,8 +5,8 @@ import dgl.nn.pytorch as nn
import dgl.function as fn
import backend as F
import pytest
from test_utils.graph_cases import get_cases, random_graph, random_bipartite, random_dglgraph
from test_utils import parametrize_dtype
from copy import deepcopy
import numpy as np
@@ -17,8 +17,8 @@ def _AXWb(A, X, W, b):
Y = th.matmul(A, X.view(X.shape[0], -1)).view_as(X)
return Y + b
def test_graph_conv0():
g = dgl.DGLGraph(nx.path_graph(3)).to(F.ctx())
ctx = F.ctx()
adj = g.adjacency_matrix(ctx=ctx)
@@ -70,31 +70,44 @@ def test_graph_conv():
new_weight = conv.weight.data
assert not F.allclose(old_weight, new_weight)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'bipartite'], exclude=['zero-degree', 'dglgraph']))
@pytest.mark.parametrize('norm', ['none', 'both', 'right'])
@pytest.mark.parametrize('weight', [True, False])
@pytest.mark.parametrize('bias', [True, False])
def test_graph_conv(idtype, g, norm, weight, bias):
# Test one tensor input
g = g.astype(idtype).to(F.ctx())
conv = nn.GraphConv(5, 2, norm=norm, weight=weight, bias=bias).to(F.ctx())
ext_w = F.randn((5, 2)).to(F.ctx())
nsrc = g.number_of_src_nodes()
ndst = g.number_of_dst_nodes()
h = F.randn((nsrc, 5)).to(F.ctx())
h_dst = F.randn((ndst, 2)).to(F.ctx())
if weight:
h_out = conv(g, h)
else:
h_out = conv(g, h, weight=ext_w)
assert h_out.shape == (ndst, 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite'], exclude=['zero-degree', 'dglgraph']))
@pytest.mark.parametrize('norm', ['none', 'both', 'right'])
@pytest.mark.parametrize('weight', [True, False])
@pytest.mark.parametrize('bias', [True, False])
def test_graph_conv_bi(idtype, g, norm, weight, bias):
# Test a pair of tensor inputs
g = g.astype(idtype).to(F.ctx())
conv = nn.GraphConv(5, 2, norm=norm, weight=weight, bias=bias).to(F.ctx())
ext_w = F.randn((5, 2)).to(F.ctx())
nsrc = g.number_of_src_nodes()
ndst = g.number_of_dst_nodes()
h = F.randn((nsrc, 5)).to(F.ctx())
h_dst = F.randn((ndst, 2)).to(F.ctx())
if weight:
h_out = conv(g, (h, h_dst))
else:
h_out = conv(g, (h, h_dst), weight=ext_w)
assert h_out.shape == (ndst, 2)
def _S2AXWb(A, N, X, W, b):
X1 = X * N
@@ -110,6 +123,7 @@ def _S2AXWb(A, N, X, W, b):
def test_tagconv():
g = dgl.DGLGraph(nx.path_graph(3))
g = g.to(F.ctx())
ctx = F.ctx()
adj = g.adjacency_matrix(ctx=ctx)
norm = th.pow(g.in_degrees().float(), -0.5)
@@ -145,6 +159,7 @@ def test_tagconv():
def test_set2set():
ctx = F.ctx()
g = dgl.DGLGraph(nx.path_graph(10))
g = g.to(F.ctx())
s2s = nn.Set2Set(5, 3, 3) # hidden size 5, 3 iters, 3 layers
s2s = s2s.to(ctx)
@@ -156,8 +171,8 @@ def test_set2set():
assert h1.shape[0] == 1 and h1.shape[1] == 10 and h1.dim() == 2
# test#2: batched graph
g1 = dgl.DGLGraph(nx.path_graph(11)).to(F.ctx())
g2 = dgl.DGLGraph(nx.path_graph(5)).to(F.ctx())
bg = dgl.batch([g, g1, g2])
h0 = F.randn((bg.number_of_nodes(), 5))
h1 = s2s(bg, h0)
@@ -166,6 +181,7 @@ def test_set2set():
def test_glob_att_pool():
ctx = F.ctx()
g = dgl.DGLGraph(nx.path_graph(10))
g = g.to(F.ctx())
gap = nn.GlobalAttentionPooling(th.nn.Linear(5, 1), th.nn.Linear(5, 10))
gap = gap.to(ctx)
@@ -185,6 +201,7 @@ def test_glob_att_pool():
def test_simple_pool():
ctx = F.ctx()
g = dgl.DGLGraph(nx.path_graph(15))
g = g.to(F.ctx())
sum_pool = nn.SumPooling()
avg_pool = nn.AvgPooling()
@@ -208,7 +225,7 @@ def test_simple_pool():
assert h1.shape[0] == 1 and h1.shape[1] == 10 * 5 and h1.dim() == 2
# test#2: batched graph
g_ = dgl.DGLGraph(nx.path_graph(5)).to(F.ctx())
bg = dgl.batch([g, g_, g, g_, g])
h0 = F.randn((bg.number_of_nodes(), 5))
h1 = sum_pool(bg, h0)
@@ -273,13 +290,15 @@ def test_set_trans():
assert h2.shape[0] == 3 and h2.shape[1] == 200 and h2.dim() == 2
def uniform_attention(g, shape):
a = F.ones(shape)
target_shape = (g.number_of_edges(),) + (1,) * (len(shape) - 1)
return a / g.in_degrees(g.edges(order='eid')[1]).view(target_shape).float()
@parametrize_dtype
def test_edge_softmax(idtype):
# Basic
g = dgl.graph(nx.path_graph(3))
g = g.astype(idtype).to(F.ctx())
edata = F.ones((g.number_of_edges(), 1))
a = nn.edge_softmax(g, edata)
assert len(g.ndata) == 0
@@ -295,6 +314,7 @@ def test_edge_softmax():
# Test both forward and backward with PyTorch built-in softmax.
g = dgl.rand_graph(30, 900)
g = g.astype(idtype).to(F.ctx())
score = F.randn((900, 1))
score.requires_grad_()
@@ -313,6 +333,27 @@ def test_edge_softmax():
assert F.allclose(score.grad, grad_score)
print(score.grad[:10], grad_score[:10])
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite', 'homo'], exclude=['zero-degree', 'dglgraph']))
def test_edge_softmax2(idtype, g):
g = g.astype(idtype).to(F.ctx())
g = g.local_var()
g.srcdata.clear()
g.dstdata.clear()
g.edata.clear()
a1 = F.randn((g.number_of_edges(), 1)).requires_grad_()
a2 = a1.clone().detach().requires_grad_()
g.edata['s'] = a1
g.group_apply_edges('dst', lambda edges: {'ss':F.softmax(edges.data['s'], 1)})
g.edata['ss'].sum().backward()
builtin_sm = nn.edge_softmax(g, a2)
builtin_sm.sum().backward()
#print(a1.grad - a2.grad)
assert len(g.srcdata) == 0
assert len(g.dstdata) == 0
assert len(g.edata) == 2
assert F.allclose(a1.grad, a2.grad, rtol=1e-4, atol=1e-4) # Follow tolerance in unittest backend
"""
# Test 2
def generate_rand_graph(n, m=None, ctor=dgl.DGLGraph):
@@ -339,22 +380,24 @@ def test_edge_softmax():
assert F.allclose(a1.grad, a2.grad, rtol=1e-4, atol=1e-4) # Follow tolerance in unittest backend
"""
@parametrize_dtype
def test_partial_edge_softmax(idtype):
g = dgl.rand_graph(30, 900)
g = g.astype(idtype).to(F.ctx())
score = F.randn((300, 1))
score.requires_grad_()
grad = F.randn((300, 1))
import numpy as np
eids = np.random.choice(900, 300, replace=False)
eids = F.tensor(eids, dtype=g.idtype)
# compute partial edge softmax
y_1 = nn.edge_softmax(g, score, eids)
y_1.backward(grad)
grad_1 = score.grad
score.grad.zero_()
# compute edge softmax on edge subgraph
subg = g.edge_subgraph(eids, preserve_nodes=True)
y_2 = nn.edge_softmax(subg, score)
y_2.backward(grad)
grad_2 = score.grad
@@ -367,6 +410,7 @@ def test_rgcn():
ctx = F.ctx()
etype = []
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True)
g = g.to(F.ctx())
# 5 etypes
R = 5
for i in range(g.number_of_edges()):
@@ -437,69 +481,59 @@ def test_rgcn():
assert list(h_new_low.shape) == [100, O]
assert F.allclose(h_new, h_new_low)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_gat_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gat = nn.GATConv(5, 2, 4)
feat = F.randn((g.number_of_nodes(), 5))
gat = gat.to(ctx)
h = gat(g, feat)
assert h.shape == (g.number_of_nodes(), 4, 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_gat_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gat = nn.GATConv((5, 10), 2, 4)
feat = (F.randn((g.number_of_src_nodes(), 5)), F.randn((g.number_of_dst_nodes(), 10)))
gat = gat.to(ctx)
h = gat(g, feat)
assert h.shape == (g.number_of_dst_nodes(), 4, 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
@pytest.mark.parametrize('aggre_type', ['mean', 'pool', 'gcn', 'lstm'])
def test_sage_conv(idtype, g, aggre_type):
g = g.astype(idtype).to(F.ctx())
sage = nn.SAGEConv(5, 10, aggre_type)
feat = F.randn((g.number_of_nodes(), 5))
sage = sage.to(F.ctx())
h = sage(g, feat)
assert h.shape[-1] == 10
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
@pytest.mark.parametrize('aggre_type', ['mean', 'pool', 'gcn', 'lstm'])
def test_sage_conv_bi(idtype, g, aggre_type):
g = g.astype(idtype).to(F.ctx())
dst_dim = 5 if aggre_type != 'gcn' else 10
sage = nn.SAGEConv((10, dst_dim), 2, aggre_type)
feat = (F.randn((g.number_of_src_nodes(), 10)), F.randn((g.number_of_dst_nodes(), dst_dim)))
sage = sage.to(F.ctx())
h = sage(g, feat)
assert h.shape[-1] == 2
assert h.shape[0] == g.number_of_dst_nodes()
@parametrize_dtype
def test_sage_conv2(idtype):
# TODO: add test for blocks
# Test the case for graphs without edges
g = dgl.bipartite([], num_nodes=(5, 3))
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
sage = nn.SAGEConv((3, 3), 2, 'gcn')
feat = (F.randn((5, 3)), F.randn((3, 3)))
sage = sage.to(ctx)
@@ -517,6 +551,7 @@ def test_sage_conv(aggre_type):
def test_sgc_conv():
ctx = F.ctx()
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True)
g = g.to(F.ctx())
# not cached
sgc = nn.SGConv(5, 10, 3)
feat = F.randn((100, 5))
@@ -536,6 +571,7 @@ def test_sgc_conv():
def test_appnp_conv():
ctx = F.ctx()
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True)
g = g.to(F.ctx())
appnp = nn.APPNPConv(10, 0.1)
feat = F.randn((100, 5))
appnp = appnp.to(ctx)
@@ -543,66 +579,62 @@ def test_appnp_conv():
h = appnp(g, feat)
assert h.shape[-1] == 5
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
@pytest.mark.parametrize('aggregator_type', ['mean', 'max', 'sum'])
def test_gin_conv(g, idtype, aggregator_type):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gin = nn.GINConv(
th.nn.Linear(5, 12),
aggregator_type
)
feat = F.randn((g.number_of_nodes(), 5))
gin = gin.to(ctx)
h = gin(g, feat)
assert h.shape == (g.number_of_nodes(), 12)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
@pytest.mark.parametrize('aggregator_type', ['mean', 'max', 'sum'])
def test_gin_conv_bi(g, idtype, aggregator_type):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gin = nn.GINConv(
th.nn.Linear(5, 12),
aggregator_type
)
feat = (F.randn((g.number_of_src_nodes(), 5)), F.randn((g.number_of_dst_nodes(), 5)))
gin = gin.to(ctx)
h = gin(g, feat)
assert h.shape == (g.number_of_dst_nodes(), 12)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_agnn_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
agnn = nn.AGNNConv(1)
feat = F.randn((g.number_of_nodes(), 5))
agnn = agnn.to(ctx)
h = agnn(g, feat)
assert h.shape == (g.number_of_nodes(), 5)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_agnn_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
agnn = nn.AGNNConv(1)
feat = (F.randn((g.number_of_src_nodes(), 5)), F.randn((g.number_of_dst_nodes(), 5)))
agnn = agnn.to(ctx)
h = agnn(g, feat)
assert h.shape == (g.number_of_dst_nodes(), 5)
def test_gated_graph_conv():
ctx = F.ctx()
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True)
g = g.to(F.ctx())
ggconv = nn.GatedGraphConv(5, 10, 5, 3)
etypes = th.arange(g.number_of_edges()) % 3
feat = F.randn((100, 5))
@@ -613,95 +645,68 @@ def test_gated_graph_conv():
# current we only do shape check
assert h.shape[-1] == 10
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_nn_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
edge_func = th.nn.Linear(4, 5 * 10)
nnconv = nn.NNConv(5, 10, edge_func, 'mean')
feat = F.randn((g.number_of_nodes(), 5))
efeat = F.randn((g.number_of_edges(), 4))
nnconv = nnconv.to(ctx)
h = nnconv(g, feat, efeat)
# currently we only do shape check
assert h.shape[-1] == 10
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_nn_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
edge_func = th.nn.Linear(4, 5 * 10)
nnconv = nn.NNConv((5, 2), 10, edge_func, 'mean')
feat = F.randn((g.number_of_src_nodes(), 5))
feat_dst = F.randn((g.number_of_dst_nodes(), 2))
efeat = F.randn((g.number_of_edges(), 4))
nnconv = nnconv.to(ctx)
h = nnconv(g, (feat, feat_dst), efeat)
# currently we only do shape check
assert h.shape[-1] == 10
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo']))
def test_gmm_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gmmconv = nn.GMMConv(5, 10, 3, 4, 'mean')
feat = F.randn((g.number_of_nodes(), 5))
pseudo = F.randn((g.number_of_edges(), 3))
gmmconv = gmmconv.to(ctx)
h = gmmconv(g, feat, pseudo)
# currently we only do shape check
assert h.shape[-1] == 10
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite', 'block-bipartite']))
def test_gmm_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gmmconv = nn.GMMConv((5, 2), 10, 3, 4, 'mean')
feat = F.randn((g.number_of_src_nodes(), 5))
feat_dst = F.randn((g.number_of_dst_nodes(), 2))
pseudo = F.randn((g.number_of_edges(), 3))
gmmconv = gmmconv.to(ctx)
h = gmmconv(g, (feat, feat_dst), pseudo)
# currently we only do shape check
assert h.shape[-1] == 10
@parametrize_dtype
@pytest.mark.parametrize('norm_type', ['both', 'right', 'none'])
@pytest.mark.parametrize('g', get_cases(['homo', 'bipartite']))
def test_dense_graph_conv(norm_type, g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
# TODO(minjie): enable the following option after #1385
adj = g.adjacency_matrix(ctx=ctx).to_dense()
@@ -716,8 +721,10 @@ def test_dense_graph_conv(norm_type, g):
out_dense_conv = dense_conv(adj, feat)
assert F.allclose(out_conv, out_dense_conv)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'bipartite']))
def test_dense_sage_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
adj = g.adjacency_matrix(ctx=ctx).to_dense()
sage = nn.SAGEConv(5, 2, 'gcn')
@@ -737,26 +744,34 @@ def test_dense_sage_conv(g):
out_dense_sage = dense_sage(adj, feat)
assert F.allclose(out_sage, out_dense_sage), g
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_edge_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
edge_conv = nn.EdgeConv(5, 2).to(ctx)
print(edge_conv)
h0 = F.randn((g.number_of_nodes(), 5))
h1 = edge_conv(g, h0)
assert h1.shape == (g.number_of_nodes(), 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_edge_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
edge_conv = nn.EdgeConv(5, 2).to(ctx)
print(edge_conv)
h0 = F.randn((g.number_of_src_nodes(), 5))
x0 = F.randn((g.number_of_dst_nodes(), 5))
h1 = edge_conv(g, (h0, x0))
assert h1.shape == (g.number_of_dst_nodes(), 2)
def test_dense_cheb_conv():
for k in range(1, 4):
ctx = F.ctx()
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True)
g = g.to(F.ctx())
adj = g.adjacency_matrix(ctx=ctx).to_dense()
cheb = nn.ChebConv(5, 2, k, None)
dense_cheb = nn.DenseChebConv(5, 2, k)
@@ -792,6 +807,7 @@ def test_sequential():
g = dgl.DGLGraph()
g.add_nodes(3)
g.add_edges([0, 1, 2, 0, 1, 2, 0, 1, 2], [0, 0, 0, 1, 1, 1, 2, 2, 2])
g = g.to(F.ctx())
net = nn.Sequential(ExampleLayer(), ExampleLayer(), ExampleLayer())
n_feat = F.randn((3, 4))
e_feat = F.randn((9, 4))
@@ -812,9 +828,9 @@ def test_sequential():
n_feat += graph.ndata['h']
return n_feat.view(graph.number_of_nodes() // 2, 2, -1).sum(1)
g1 = dgl.DGLGraph(nx.erdos_renyi_graph(32, 0.05)).to(F.ctx())
g2 = dgl.DGLGraph(nx.erdos_renyi_graph(16, 0.2)).to(F.ctx())
g3 = dgl.DGLGraph(nx.erdos_renyi_graph(8, 0.8)).to(F.ctx())
net = nn.Sequential(ExampleLayer(), ExampleLayer(), ExampleLayer())
net = net.to(ctx)
n_feat = F.randn((32, 4))
@@ -822,7 +838,7 @@ def test_sequential():
assert n_feat.shape == (4, 4)
def test_atomic_conv():
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True).to(F.ctx())
aconv = nn.AtomicConv(interaction_cutoffs=F.tensor([12.0, 12.0]),
rbf_kernel_means=F.tensor([0.0, 2.0]),
rbf_kernel_scaling=F.tensor([4.0, 4.0]),
@@ -840,7 +856,7 @@ def test_atomic_conv():
assert h.shape[-1] == 4
def test_cf_conv():
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True).to(F.ctx())
cfconv = nn.CFConv(node_in_feats=2,
edge_in_feats=3,
hidden_feats=2,
@@ -862,19 +878,20 @@ def myagg(alist, dsttype):
rst = rst + (i + 1) * alist[i]
return rst
@parametrize_dtype
@pytest.mark.parametrize('agg', ['sum', 'max', 'min', 'mean', 'stack', myagg])
def test_hetero_conv(agg, idtype):
g = dgl.heterograph({
('user', 'follows', 'user'): [(0, 1), (0, 2), (2, 1), (1, 3)],
('user', 'plays', 'game'): [(0, 0), (0, 2), (0, 3), (1, 0), (2, 2)],
('store', 'sells', 'game'): [(0, 0), (0, 3), (1, 1), (1, 2)]},
idtype=idtype, device=F.ctx())
conv = nn.HeteroGraphConv({
'follows': nn.GraphConv(2, 3),
'plays': nn.GraphConv(2, 4),
'sells': nn.GraphConv(3, 4)},
agg)
conv = conv.to(F.ctx())
uf = F.randn((4, 2))
gf = F.randn((4, 4))
sf = F.randn((2, 3))
@@ -912,8 +929,7 @@ def test_hetero_conv(agg):
'plays': nn.SAGEConv((2, 4), 4, 'mean'),
'sells': nn.SAGEConv(3, 4, 'mean')},
agg)
conv = conv.to(F.ctx())
h = conv(g, ({'user': uf}, {'user' : uf, 'game' : gf}))
assert set(h.keys()) == {'user', 'game'}
@@ -954,8 +970,7 @@ def test_hetero_conv(agg):
'plays': mod2,
'sells': mod3},
agg)
conv = conv.to(F.ctx())
mod_args = {'follows' : (1,), 'plays' : (1,)}
mod_kwargs = {'sells' : {'arg2' : 'abc'}}
h = conv(g, {'user' : uf, 'store' : sf}, mod_args=mod_args, mod_kwargs=mod_kwargs)
@@ -37,6 +37,6 @@ python3 -m pytest -v --junitxml=pytest_gindex.xml tests/graph_index || fail "gra
python3 -m pytest -v --junitxml=pytest_backend.xml tests/$DGLBACKEND || fail "backend-specific"
export OMP_NUM_THREADS=1
#if [ $2 != "gpu" ]; then
# python3 -m pytest -v --junitxml=pytest_distributed.xml tests/distributed || fail "distributed"
#fi
@@ -6,8 +6,8 @@ import dgl
import dgl.nn.tensorflow as nn
import dgl.function as fn
import backend as F
from test_utils.graph_cases import get_cases, random_graph, random_bipartite, random_dglgraph
from test_utils import parametrize_dtype
from copy import deepcopy
import numpy as np
@@ -19,7 +19,7 @@ def _AXWb(A, X, W, b):
return Y + b
def test_graph_conv():
g = dgl.DGLGraph(nx.path_graph(3)).to(F.ctx())
ctx = F.ctx()
adj = tf.sparse.to_dense(tf.sparse.reorder(g.adjacency_matrix(ctx=ctx)))
@@ -71,15 +71,17 @@ def test_graph_conv():
# new_weight = conv.weight.data
# assert not F.allclose(old_weight, new_weight)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite'], exclude=['zero-degree', 'dglgraph']))
@pytest.mark.parametrize('norm', ['none', 'both', 'right'])
@pytest.mark.parametrize('weight', [True, False])
@pytest.mark.parametrize('bias', [True, False])
def test_graph_conv2(idtype, g, norm, weight, bias):
g = g.astype(idtype).to(F.ctx())
conv = nn.GraphConv(5, 2, norm=norm, weight=weight, bias=bias)
ext_w = F.randn((5, 2))
nsrc = g.number_of_src_nodes()
ndst = g.number_of_dst_nodes()
h = F.randn((nsrc, 5))
h_dst = F.randn((ndst, 2))
if weight:
@@ -88,18 +90,28 @@ def test_graph_conv2(g, norm, weight, bias):
h_out = conv(g, h, weight=ext_w)
assert h_out.shape == (ndst, 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite'], exclude=['zero-degree', 'dglgraph']))
@pytest.mark.parametrize('norm', ['none', 'both', 'right'])
@pytest.mark.parametrize('weight', [True, False])
@pytest.mark.parametrize('bias', [True, False])
def test_graph_conv2_bi(idtype, g, norm, weight, bias):
g = g.astype(idtype).to(F.ctx())
conv = nn.GraphConv(5, 2, norm=norm, weight=weight, bias=bias)
ext_w = F.randn((5, 2))
nsrc = g.number_of_src_nodes()
ndst = g.number_of_dst_nodes()
h = F.randn((nsrc, 5))
h_dst = F.randn((ndst, 2))
if weight:
h_out = conv(g, (h, h_dst))
else:
h_out = conv(g, (h, h_dst), weight=ext_w)
assert h_out.shape == (ndst, 2)
def test_simple_pool():
ctx = F.ctx()
g = dgl.DGLGraph(nx.path_graph(15)).to(F.ctx())
sum_pool = nn.SumPooling()
avg_pool = nn.AvgPooling()
@@ -119,7 +131,7 @@ def test_simple_pool():
assert h1.shape[0] == 1 and h1.shape[1] == 10 * 5 and h1.ndim == 2
# test#2: batched graph
g_ = dgl.DGLGraph(nx.path_graph(5)).to(F.ctx())
bg = dgl.batch([g, g_, g, g_, g])
h0 = F.randn((bg.number_of_nodes(), 5))
h1 = sum_pool(bg, h0)
@@ -156,7 +168,7 @@ def uniform_attention(g, shape):
def test_edge_softmax():
# Basic
g = dgl.DGLGraph(nx.path_graph(3)).to(F.ctx())
edata = F.ones((g.number_of_edges(), 1))
a = nn.edge_softmax(g, edata)
assert len(g.ndata) == 0
@@ -171,7 +183,7 @@ def test_edge_softmax():
assert F.allclose(a, uniform_attention(g, a.shape))
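The `uniform_attention` check above relies on `edge_softmax` normalizing scores over the incoming edges of each destination node, so all-ones logits give every edge a weight of 1/in-degree. A minimal pure-Python sketch of that semantics (the helper and the edge list are illustrative, not DGL's implementation):

```python
import math
from collections import defaultdict

def edge_softmax(dst, scores):
    """Softmax edge scores grouped by destination node."""
    bucket = defaultdict(list)
    for eid, d in enumerate(dst):
        bucket[d].append(eid)
    out = [0.0] * len(scores)
    for eids in bucket.values():
        m = max(scores[e] for e in eids)               # max-shift for stability
        exps = {e: math.exp(scores[e] - m) for e in eids}
        total = sum(exps.values())
        for e in eids:
            out[e] = exps[e] / total
    return out

# Bidirected path graph 0-1-2: edges 0->1, 1->0, 1->2, 2->1 (dst per edge).
dst = [1, 0, 2, 1]
print(edge_softmax(dst, [1.0, 1.0, 1.0, 1.0]))  # [0.5, 1.0, 1.0, 0.5]
```

Node 1 has in-degree 2, so each of its incoming edges receives 0.5; nodes 0 and 2 have in-degree 1 and keep weight 1.0.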
# Test both forward and backward with Tensorflow built-in softmax.
g = dgl.DGLGraph().to(F.ctx())
g.add_nodes(30)
# build a complete graph
for i in range(30):
@@ -203,7 +215,7 @@ def test_edge_softmax():
arr = (sp.sparse.random(n, n, density=0.1, format='coo') != 0).astype(np.int64)
return dgl.DGLGraph(arr, readonly=True)
g = generate_rand_graph(50).to(F.ctx())
a1 = F.randn((g.number_of_edges(), 1))
a2 = tf.identity(a1)
with tf.GradientTape() as tape:
@@ -224,7 +236,7 @@ def test_edge_softmax():
assert F.allclose(a1_grad, a2_grad, rtol=1e-4, atol=1e-4) # Follow tolerance in unittest backend
def test_partial_edge_softmax():
g = dgl.DGLGraph().to(F.ctx())
g.add_nodes(30)
# build a complete graph
for i in range(30):
@@ -254,7 +266,7 @@ def test_partial_edge_softmax():
assert F.allclose(grad_1, grad_2)
def test_glob_att_pool():
g = dgl.DGLGraph(nx.path_graph(10)).to(F.ctx())
gap = nn.GlobalAttentionPooling(layers.Dense(1), layers.Dense(10))
print(gap)
@@ -273,7 +285,7 @@ def test_glob_att_pool():
def test_rgcn():
etype = []
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True).to(F.ctx())
# 5 etypes
R = 5
for i in range(g.number_of_edges()):
@@ -344,60 +356,55 @@ def test_rgcn():
assert list(h_new_low.shape) == [100, O]
assert F.allclose(h_new, h_new_low)
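`test_rgcn` checks that the low-memory pass (`h_new_low`) matches the regular one; both paths rest on basis regularization, where each relation weight is a coefficient mix of a small set of shared bases, W_r = Σ_b a_rb · V_b. A toy sketch of that composition (shapes and values are made up for illustration):

```python
# Basis decomposition: R relation weights built from B shared bases.
B, I, O = 2, 2, 2
V = [[[1.0, 0.0],           # basis 0: identity
      [0.0, 1.0]],
     [[0.0, 1.0],           # basis 1: swap
      [1.0, 0.0]]]
coeff = [[1.0, 0.0],        # relation 0 uses basis 0 only
         [0.0, 1.0],        # relation 1 uses basis 1 only
         [0.5, 0.5]]        # relation 2 mixes both

def rel_weight(r):
    """W_r = sum_b coeff[r][b] * V[b]."""
    return [[sum(coeff[r][b] * V[b][i][j] for b in range(B))
             for j in range(O)] for i in range(I)]

print(rel_weight(2))  # [[0.5, 0.5], [0.5, 0.5]]
```

Sharing B bases across R relations keeps the parameter count at B·I·O + R·B instead of R·I·O, which is why the low-rank path exists at all.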
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
def test_gat_conv(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gat = nn.GATConv(5, 2, 4)
feat = F.randn((g.number_of_nodes(), 5))
h = gat(g, feat)
assert h.shape == (g.number_of_nodes(), 4, 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
def test_gat_conv_bi(g, idtype):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gat = nn.GATConv((5, 10), 2, 4)
feat = (F.randn((g.number_of_src_nodes(), 5)), F.randn((g.number_of_dst_nodes(), 10)))
h = gat(g, feat)
assert h.shape == (g.number_of_dst_nodes(), 4, 2)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
@pytest.mark.parametrize('aggre_type', ['mean', 'pool', 'gcn'])
def test_sage_conv(idtype, g, aggre_type):
g = g.astype(idtype).to(F.ctx())
sage = nn.SAGEConv(5, 10, aggre_type)
feat = F.randn((g.number_of_nodes(), 5))
h = sage(g, feat)
assert h.shape[-1] == 10
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
@pytest.mark.parametrize('aggre_type', ['mean', 'pool', 'gcn'])
def test_sage_conv_bi(idtype, g, aggre_type):
g = g.astype(idtype).to(F.ctx())
dst_dim = 5 if aggre_type != 'gcn' else 10
sage = nn.SAGEConv((10, dst_dim), 2, aggre_type)
feat = (F.randn((g.number_of_src_nodes(), 10)), F.randn((g.number_of_dst_nodes(), dst_dim)))
h = sage(g, feat)
assert h.shape[-1] == 2
assert h.shape[0] == g.number_of_dst_nodes()
@parametrize_dtype
@pytest.mark.parametrize('aggre_type', ['mean', 'pool', 'gcn'])
def test_sage_conv_bi_empty(idtype, aggre_type):
# Test the case for graphs without edges
    g = dgl.bipartite([], num_nodes=(5, 3))
g = g.astype(idtype).to(F.ctx())
sage = nn.SAGEConv((3, 3), 2, 'gcn')
feat = (F.randn((5, 3)), F.randn((3, 3)))
h = sage(g, feat)
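`test_sage_conv_bi_empty` exercises the edge-free case, where the aggregator must not divide by a zero in-degree. A minimal sketch of a guarded mean aggregation (the helper name and data layout are hypothetical, not DGL's API):

```python
def mean_agg(neighbor_ids, feats, dim):
    """Mean of neighbor features; zero vector when there are no neighbors."""
    if not neighbor_ids:                 # zero in-degree: avoid division by zero
        return [0.0] * dim
    acc = [0.0] * dim
    for n in neighbor_ids:
        for j in range(dim):
            acc[j] += feats[n][j]
    return [x / len(neighbor_ids) for x in acc]

feats = {0: [1.0, 3.0], 1: [3.0, 5.0]}
print(mean_agg([0, 1], feats, 2))  # [2.0, 4.0]
print(mean_agg([], feats, 2))      # [0.0, 0.0]
```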
@@ -412,7 +419,7 @@ def test_sage_conv(aggre_type):
def test_sgc_conv():
ctx = F.ctx()
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True).to(F.ctx())
# not cached
sgc = nn.SGConv(5, 10, 3)
feat = F.randn((100, 5))
@@ -428,44 +435,39 @@ def test_sgc_conv():
assert h_0.shape[-1] == 10
def test_appnp_conv():
g = dgl.DGLGraph(sp.sparse.random(100, 100, density=0.1), readonly=True).to(F.ctx())
appnp = nn.APPNPConv(10, 0.1)
feat = F.randn((100, 5))
h = appnp(g, feat)
assert h.shape[-1] == 5
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['homo', 'block-bipartite']))
@pytest.mark.parametrize('aggregator_type', ['mean', 'max', 'sum'])
def test_gin_conv(g, idtype, aggregator_type):
g = g.astype(idtype).to(F.ctx())
ctx = F.ctx()
gin = nn.GINConv(
tf.keras.layers.Dense(12),
aggregator_type
)
feat = F.randn((g.number_of_nodes(), 5))
h = gin(g, feat)
assert h.shape == (g.number_of_nodes(), 12)
@parametrize_dtype
@pytest.mark.parametrize('g', get_cases(['bipartite']))
@pytest.mark.parametrize('aggregator_type', ['mean', 'max', 'sum'])
def test_gin_conv_bi(g, idtype, aggregator_type):
g = g.astype(idtype).to(F.ctx())
gin = nn.GINConv(
tf.keras.layers.Dense(12),
aggregator_type
)
feat = (F.randn((g.number_of_src_nodes(), 5)), F.randn((g.number_of_dst_nodes(), 5)))
h = gin(g, feat)
assert h.shape == (g.number_of_dst_nodes(), 12)
def myagg(alist, dsttype):
rst = alist[0]
@@ -473,12 +475,14 @@ def myagg(alist, dsttype):
rst = rst + (i + 1) * alist[i]
return rst
@parametrize_dtype
@pytest.mark.parametrize('agg', ['sum', 'max', 'min', 'mean', 'stack', myagg])
def test_hetero_conv(agg, idtype):
g = dgl.heterograph({
('user', 'follows', 'user'): [(0, 1), (0, 2), (2, 1), (1, 3)],
('user', 'plays', 'game'): [(0, 0), (0, 2), (0, 3), (1, 0), (2, 2)],
('store', 'sells', 'game'): [(0, 0), (0, 3), (1, 1), (1, 2)]},
idtype=idtype, device=F.ctx())
conv = nn.HeteroGraphConv({
'follows': nn.GraphConv(2, 3),
'plays': nn.GraphConv(2, 4),
import pytest
import backend as F
if F._default_context_str == 'cpu':
parametrize_dtype = pytest.mark.parametrize("idtype", [F.int32, F.int64])
else:
# only test int32 on GPU because many graph operators are not supported for int64.
parametrize_dtype = pytest.mark.parametrize("idtype", [F.int32])
from .checks import *
from .graph_cases import get_cases
import dgl
import backend as F
__all__ = ['check_graph_equal']
def check_graph_equal(g1, g2, *,
check_idtype=True,
check_feature=True):
    assert g1.device == g2.device
if check_idtype:
assert g1.idtype == g2.idtype
assert g1.ntypes == g2.ntypes
assert g1.etypes == g2.etypes
assert g1.srctypes == g2.srctypes
assert g1.dsttypes == g2.dsttypes
assert g1.canonical_etypes == g2.canonical_etypes
assert g1.batch_size == g2.batch_size
# check if two metagraphs are identical
for edges, features in g1.metagraph.edges(keys=True).items():
assert g2.metagraph.edges(keys=True)[edges] == features
for nty in g1.ntypes:
assert g1.number_of_nodes(nty) == g2.number_of_nodes(nty)
assert F.allclose(g1.batch_num_nodes(nty), g2.batch_num_nodes(nty))
for ety in g1.canonical_etypes:
assert g1.number_of_edges(ety) == g2.number_of_edges(ety)
assert F.allclose(g1.batch_num_edges(ety), g2.batch_num_edges(ety))
src1, dst1, eid1 = g1.edges(etype=ety, form='all')
src2, dst2, eid2 = g2.edges(etype=ety, form='all')
if check_idtype:
assert F.allclose(src1, src2)
assert F.allclose(dst1, dst2)
assert F.allclose(eid1, eid2)
else:
assert F.allclose(src1, F.astype(src2, g1.idtype))
assert F.allclose(dst1, F.astype(dst2, g1.idtype))
assert F.allclose(eid1, F.astype(eid2, g1.idtype))
if check_feature:
for nty in g1.ntypes:
if g1.number_of_nodes(nty) == 0:
continue
for feat_name in g1.nodes[nty].data.keys():
assert F.allclose(g1.nodes[nty].data[feat_name], g2.nodes[nty].data[feat_name])
for ety in g1.canonical_etypes:
if g1.number_of_edges(ety) == 0:
continue
for feat_name in g2.edges[ety].data.keys():
assert F.allclose(g1.edges[ety].data[feat_name], g2.edges[ety].data[feat_name])
@@ -3,7 +3,9 @@ import backend as F
import dgl
import numpy as np
import networkx as nx
import scipy.sparse as ssp
import backend as F
case_registry = defaultdict(list)
@@ -11,36 +13,80 @@ def register_case(labels):
def wrapper(fn):
for lbl in labels:
case_registry[lbl].append(fn)
fn.__labels__ = labels
return fn
return wrapper
def get_cases(labels=None, exclude=[]):
"""Get all graph instances of the given labels."""
cases = set()
if labels is None:
# get all the cases
labels = case_registry.keys()
for lbl in labels:
for case in case_registry[lbl]:
if not any([l in exclude for l in case.__labels__]):
cases.add(case)
return [fn() for fn in cases]
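The refactored `get_cases` changes the exclusion semantics: a case is now skipped when *any* of its registered labels is excluded, not just the label it was looked up under. A self-contained sketch of the registry pattern (the dummy labels and factories are illustrative):

```python
from collections import defaultdict

case_registry = defaultdict(list)

def register_case(labels):
    # Decorator: file the factory under each label and remember all its labels.
    def wrapper(fn):
        for lbl in labels:
            case_registry[lbl].append(fn)
        fn.__labels__ = labels
        return fn
    return wrapper

def get_cases(labels=None, exclude=()):
    cases = set()
    if labels is None:
        labels = case_registry.keys()
    for lbl in labels:
        for case in case_registry[lbl]:
            # Skip a case if *any* of its labels is excluded.
            if not any(l in exclude for l in case.__labels__):
                cases.add(case)
    return [fn() for fn in cases]

@register_case(['homo', 'zero-degree'])
def case_a():
    return 'a'

@register_case(['homo'])
def case_b():
    return 'b'

print(get_cases(['homo'], exclude=['zero-degree']))  # ['b']
```

Under the old code, `case_a` would have survived this query because only the looked-up label `'homo'` was checked against `exclude`.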
@register_case(['dglgraph', 'path'])
def dglgraph_path():
return dgl.DGLGraph(nx.path_graph(5))
@register_case(['bipartite'])
def bipartite1():
return dgl.bipartite([(0, 0), (0, 1), (0, 4), (2, 1), (2, 4), (3, 3)])
@register_case(['bipartite'])
def bipartite_full():
return dgl.bipartite([(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (1, 3)])
@register_case(['homo'])
def graph0():
return dgl.graph(([0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 6, 6, 7, 8, 9],
[4, 5, 1, 2, 4, 7, 9, 8 ,6, 4, 1, 0, 1, 0, 2, 3, 5]))
@register_case(['homo', 'has_feature'])
def graph1():
g = dgl.graph(([0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 6, 6, 7, 8, 9],
[4, 5, 1, 2, 4, 7, 9, 8 ,6, 4, 1, 0, 1, 0, 2, 3, 5]))
g.ndata['h'] = F.copy_to(F.randn((g.number_of_nodes(), 2)), F.cpu())
g.edata['w'] = F.copy_to(F.randn((g.number_of_edges(), 3)), F.cpu())
return g
@register_case(['hetero', 'has_feature'])
def heterograph0():
g = dgl.heterograph({
('user', 'plays', 'game'): ([0, 1, 1, 2], [0, 0, 1, 1]),
('developer', 'develops', 'game'): ([0, 1], [0, 1])})
g.nodes['user'].data['h'] = F.copy_to(F.randn((g.number_of_nodes('user'), 3)), F.cpu())
g.nodes['game'].data['h'] = F.copy_to(F.randn((g.number_of_nodes('game'), 2)), F.cpu())
g.nodes['developer'].data['h'] = F.copy_to(F.randn((g.number_of_nodes('developer'), 3)), F.cpu())
g.edges['plays'].data['h'] = F.copy_to(F.randn((g.number_of_edges('plays'), 1)), F.cpu())
g.edges['develops'].data['h'] = F.copy_to(F.randn((g.number_of_edges('develops'), 5)), F.cpu())
return g
@register_case(['batched', 'homo'])
def batched_graph0():
g1 = dgl.graph(([0, 1, 2], [1, 2, 3]))
g2 = dgl.graph(([1, 1], [2, 0]))
g3 = dgl.graph(([0], [1]))
return dgl.batch([g1, g2, g3])
@register_case(['block', 'bipartite', 'block-bipartite'])
def block_graph0():
g = dgl.graph(([2, 3, 4], [5, 6, 7]), num_nodes=100)
return dgl.to_block(g)
@register_case(['block'])
def block():
g = dgl.graph(([0, 1, 2, 3], [1, 2, 3, 4]))
return dgl.to_block(g, [1, 2, 3, 4])
def block_graph1():
g = dgl.heterograph({
('user', 'plays', 'game') : ([0, 1, 2], [1, 1, 0]),
('user', 'likes', 'game') : ([1, 2, 3], [0, 0, 2]),
('store', 'sells', 'game') : ([0, 1, 1], [0, 1, 2]),
})
return dgl.to_block(g)
def random_dglgraph(size):
return dgl.DGLGraph(nx.erdos_renyi_graph(size, 0.3))
@@ -60,11 +60,6 @@ u = th.tensor([0, 0, 0, 0, 0])
v = th.tensor([1, 2, 3, 4, 5])
star1 = dgl.DGLGraph((u, v))
# Create the same graph from a scipy sparse matrix (using ``scipy.sparse.csr_matrix`` works too).
adj = spp.coo_matrix((np.ones(len(u)), (u.numpy(), v.numpy())))
star3 = dgl.DGLGraph(adj)
@@ -87,7 +82,8 @@ src = th.tensor([8, 9]); dst = th.tensor([0, 0])
g.add_edges(src, dst)
g = dgl.DGLGraph()
g.add_nodes(10)
src = th.tensor(list(range(1, 10)))
g.add_edges(src, 0)
@@ -180,7 +176,7 @@ print(g_multi.edges())
# An edge in multigraph cannot be uniquely identified by using its incident nodes
# :math:`u` and :math:`v`; query their edge IDs use ``edge_id`` interface.
_, _, eid_10 = g_multi.edge_id(1, 0, return_uv=True)
g_multi.edges[eid_10].data['w'] = th.ones(len(eid_10), 2)
print(g_multi.edata['w'])
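As the tutorial notes, a (u, v) pair does not uniquely identify an edge in a multigraph; the edge ID (the edge's position in the parallel src/dst arrays) does. A plain-Python sketch of the lookup (array names are illustrative, not DGL internals):

```python
# Edges stored as parallel source/destination arrays; the edge ID is the position.
src = [0, 1, 1, 2]
dst = [1, 0, 0, 1]   # edges 1 and 2 are parallel copies of 1 -> 0

def edge_ids(u, v):
    """All edge IDs connecting u to v (possibly several in a multigraph)."""
    return [eid for eid, (s, d) in enumerate(zip(src, dst)) if (s, d) == (u, v)]

print(edge_ids(1, 0))  # [1, 2]
print(edge_ids(0, 1))  # [0]
```

Because two edges share (1, 0), features must be indexed by the returned IDs rather than by the endpoint pair, which is exactly what `g_multi.edges[eid_10].data['w']` does above.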