"...hilander/PSS/git@developer.sourcefind.cn:OpenDAS/dgl.git" did not exist on "e9b624fe227d2e01d3aff057b4a49f0cae58da13"
Unverified commit 78e0dae6 authored by Vasimuddin Md, committed by GitHub

[DistGNN, Graph partitioning] Libra partition (#3376)



* added distgnn plus libra codebase

* Dist application codes

* added comments in partition code. changed the interface of partitioning call.

* updated readme

* create libra partitioning branch for the PR

* removed distgnn files for first PR

* updated kernel.cc

* added libra_partition.cc and moved libra code from kernel.cc to libra_partition.cc

* fixed lint error; merged libra2dgl.py and main_Libra.py to libra_partition.py; added graphsage/distgnn folder and partition script.

* removed libra2dgl.py

* fixed the lint error and cleaned the code.

* revisions due to PR comments; added distgnn/tools, which contains partitioning routines

* update 2 PR revision I

* fixed errors; also improved the runtime by 10x.

* fixed minor lint error

* fixed some more lints

* PR revision II: changed the interface of the libra partition function

* rewrite docstring
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
parent 02880e9f
## DistGNN vertex-cut based graph partitioning (using Libra)
### How to run graph partitioning
```python partition_graph.py --dataset <dataset> --num-parts <num_parts> --out-dir <output_location>```
Example: The following command line creates 4 partitions of the pubmed graph
```python partition_graph.py --dataset pubmed --num-parts 4 --out-dir ./```
The output partitions are created in the Libra_result_\<dataset\>/ folder under the specified output directory.
The *upcoming DistGNN* application can directly use these partitions for distributed training.
### How Libra partitioning works
Libra is a vertex-cut based graph partitioning method. It applies a greedy heuristic to uniquely distribute the input graph's edges among the partitions, producing each partition as a list of edges. The script ```libra_partition.py``` first generates the Libra partitions and then converts the Libra output to the DGL/DistGNN input format.
Note: The current Libra implementation is sequential. Extra overhead is incurred by the additional work of converting the partitioned graph to the DGL/DistGNN format.
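For intuition, the sketch below shows the greedy idea in plain Python. This is an illustration only, not DGL's implementation: the actual partitioner is the C++ routine invoked via ```libra_vertex_cut```, and the helper name here is hypothetical.
```python
# Toy greedy vertex-cut, for intuition only (hypothetical helper; the real
# partitioner is the C++ kernel in dgl/src/array/libra_partition.cc).
def greedy_vertex_cut(edges, num_parts):
    holds = [set() for _ in range(num_parts)]  # node copies held by each partition
    load = [0] * num_parts                     # edges assigned to each partition
    assignment = []                            # chosen partition per edge
    for u, v in edges:
        # Cost of placing (u, v) on partition p: current load plus the number
        # of new node copies (vertex splits) the placement would create.
        def cost(p):
            return load[p] + (u not in holds[p]) + (v not in holds[p])
        best = min(range(num_parts), key=cost)
        holds[best].update((u, v))             # endpoints become (split) copies here
        load[best] += 1
        assignment.append(best)
    return assignment

# Example: partition a 4-cycle over 2 partitions.
print(greedy_vertex_cut([(0, 1), (1, 2), (2, 3), (3, 0)], 2))  # e.g. [0, 0, 1, 1]
```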
### Expected partitioning timings
Cora, Pubmed, Citeseer: < 10 sec (<10GB)
Reddit: ~150 sec (~ 25GB)
OGBN-Products: ~200 sec (~30GB)
Proteins: 1800 sec (Format conversion from public data takes time) (~100GB)
OGBN-Papers100M: 2500 sec (~200GB)
### Settings
Tested with:
CentOS 7.6
gcc v8.3.0
PyTorch 1.7.1
Python 3.7.10
r"""
Copyright (c) 2021 Intel Corporation
\file Graph partitioning
\brief Calls Libra - a vertex-cut based graph partitioner for distributed training
\author Vasimuddin Md <vasimuddin.md@intel.com>,
Guixiang Ma <guixiang.ma@intel.com>
Sanchit Misra <sanchit.misra@intel.com>,
Ramanarayan Mohanty <ramanarayan.mohanty@intel.com>,
Sasikanth Avancha <sasikanth.avancha@intel.com>
Nesreen K. Ahmed <nesreen.k.ahmed@intel.com>
"""
import os
import argparse
from load_graph import load_ogb
from dgl.data import load_data
from dgl.distgnn.partition import partition_graph
from dgl.distgnn.tools import load_proteins
from dgl.base import DGLError
if __name__ == "__main__":
argparser = argparse.ArgumentParser()
argparser.add_argument('--dataset', type=str, default='cora')
argparser.add_argument('--num-parts', type=int, default=2)
argparser.add_argument('--out-dir', type=str, default='./')
args = argparser.parse_args()
dataset = args.dataset
num_community = args.num_parts
out_dir = 'Libra_result_' + dataset ## "Libra_result_" prefix is mandatory
resultdir = os.path.join(args.out_dir, out_dir)
print("Input dataset for partitioning: ", dataset)
if args.dataset == 'ogbn-products':
print("Loading ogbn-products")
G, _ = load_ogb('ogbn-products')
elif args.dataset == 'ogbn-papers100M':
print("Loading ogbn-papers100M")
G, _ = load_ogb('ogbn-papers100M')
elif args.dataset == 'proteins':
G = load_proteins('proteins')
elif args.dataset == 'ogbn-arxiv':
print("Loading ogbn-arxiv")
G, _ = load_ogb('ogbn-arxiv')
else:
        try:
            G = load_data(args)[0]
        except Exception:
            raise DGLError("Error: Dataset {} not found!".format(dataset))
print("Done loading the graph.", flush=True)
partition_graph(num_community, G, resultdir)
## DistGNN vertex-cut based graph partitioning (using Libra)
### How to run graph partitioning
```python ../../../../python/dgl/distgnn/partition/main_Libra.py <dataset> <#partitions>```
Example: The following command-line creates 4 partitions of pubmed graph
```python ../../../../python/dgl/distgnn/partition/main_Libra.py pubmed 4```
The output partitions are created in the current directory in the Libra_result_\<dataset\>/ folder.
The *upcoming DistGNN* application can directly use these partitions for distributed training.
### How Libra partitioning works
Libra is a vertex-cut based graph partitioning method. It applies a greedy heuristic to uniquely distribute the input graph's edges among the partitions, producing each partition as a list of edges. After generating the Libra partitions, the script ```main_Libra.py``` converts the Libra output to the DGL/DistGNN input format.
Note: The current Libra implementation is sequential. Extra overhead is incurred by the additional work of converting the partitioned graph to the DGL/DistGNN format.
### Expected partitioning timings
Cora, Pubmed, Citeseer: < 10 sec (<10GB)
Reddit: 1500 sec (~ 25GB)
OGBN-Products: ~2000 sec (~30GB)
Proteins: 18000 sec (Format conversion from public data takes time) (~100GB)
OGBN-Papers100M: 25000 sec (~200GB)
### Settings
Tested with:
CentOS 7.6
gcc v8.3.0
PyTorch 1.7.1
Python 3.7.10
## Distributed training
This is an example of training GraphSAGE in a distributed fashion. Before training, please install the required Python libraries via pip:
......
"""
This package contains DistGNN and Libra-based graph partitioning tools.
"""
from . import partition
from . import tools
"""
This package contains the Libra graph partitioner.
"""
from .libra_partition import partition_graph
r"""Libra partition functions.
Libra partition is a vertex-cut based partitioning algorithm from
`Distributed Power-law Graph Computing:
Theoretical and Empirical Analysis
<https://proceedings.neurips.cc/paper/2014/file/67d16d00201083a2b118dd5128dd6f59-Paper.pdf>`__
by Xie et al.
"""
# Copyright (c) 2021 Intel Corporation
# \file distgnn/partition/libra_partition.py
# \brief Libra - Vertex-cut based graph partitioner for distributed training
# \author Vasimuddin Md <vasimuddin.md@intel.com>,
# Guixiang Ma <guixiang.ma@intel.com>
# Sanchit Misra <sanchit.misra@intel.com>,
# Ramanarayan Mohanty <ramanarayan.mohanty@intel.com>,
# Sasikanth Avancha <sasikanth.avancha@intel.com>
# Nesreen K. Ahmed <nesreen.k.ahmed@intel.com>
# \cite Distributed Power-law Graph Computing: Theoretical and Empirical Analysis
import os
import time
import json
import torch as th
from dgl import DGLGraph
from dgl.sparse import libra_vertex_cut
from dgl.sparse import libra2dgl_build_dict
from dgl.sparse import libra2dgl_set_lr
from dgl.sparse import libra2dgl_build_adjlist
from dgl.data.utils import save_graphs, save_tensors
from dgl.base import DGLError
def libra_partition(num_community, G, resultdir):
"""
Performs vertex-cut based graph partitioning and converts the partitioning
output to DGL input format.
Parameters
----------
    num_community : int
        Number of partitions to create.
    G : DGLGraph
        Input graph to be partitioned.
    resultdir : str
        Output location for storing the partitioned graphs.
Output
------
    1. Creates a folder named XCommunities, where X is the number of partitions
       (e.g., X=2 gives 2Communities). It contains one file communityZ.txt per
       partition Z (Z in 0..X-1); each such file lists the edges assigned to that
       partition. These files constitute the output of the Libra graph partitioner
       (an intermediate result of this function).
    2. The folder also contains partZ subfolders; each stores the DGL/DistGNN
       graph for partition Z. These graph files are used as input to DistGNN.
    3. The folder also contains a JSON file with the partitions' metadata.
"""
num_nodes = G.number_of_nodes() # number of nodes
num_edges = G.number_of_edges() # number of edges
print("Number of nodes in the graph: ", num_nodes)
print("Number of edges in the graph: ", num_edges)
in_d = G.in_degrees()
out_d = G.out_degrees()
node_degree = in_d + out_d
edgenum_unassigned = node_degree.clone()
u_t, v_t = G.edges()
weight_ = th.ones(u_t.shape[0], dtype=th.int64)
community_weights = th.zeros(num_community, dtype=th.int64)
# self_loop = 0
# for p, q in zip(u_t, v_t):
# if p == q:
# self_loop += 1
# print("#self loops in the dataset: ", self_loop)
# del G
## call to C/C++ code
out = th.zeros(u_t.shape[0], dtype=th.int32)
libra_vertex_cut(num_community, node_degree, edgenum_unassigned, community_weights,
u_t, v_t, weight_, out, num_nodes, num_edges, resultdir)
print("Max partition size: ", int(community_weights.max()))
print(" ** Converting libra partitions to dgl graphs **")
fsize = int(community_weights.max()) + 1024 ## max edges in partition
# print("fsize: ", fsize, flush=True)
node_map = th.zeros(num_community, dtype=th.int64)
indices = th.zeros(num_nodes, dtype=th.int64)
lrtensor = th.zeros(num_nodes, dtype=th.int64)
gdt_key = th.zeros(num_nodes, dtype=th.int64)
gdt_value = th.zeros([num_nodes, num_community], dtype=th.int64)
offset = th.zeros(1, dtype=th.int64)
ldt_ar = []
gg_ar = [DGLGraph() for i in range(num_community)]
part_nodes = []
print(">>> ", "num_nodes ", " ", "num_edges")
## Iterator over number of partitions
for i in range(num_community):
g = gg_ar[i]
a_t = th.zeros(fsize, dtype=th.int64)
b_t = th.zeros(fsize, dtype=th.int64)
ldt_key = th.zeros(fsize, dtype=th.int64)
ldt_ar.append(ldt_key)
        ## build the node-to-partition dictionary
## Assign local node ids and mapping to global node ids
ret = libra2dgl_build_dict(a_t, b_t, indices, ldt_key, gdt_key, gdt_value,
node_map, offset, num_community, i, fsize, resultdir)
num_nodes_partition = int(ret[0])
num_edges_partition = int(ret[1])
part_nodes.append(num_nodes_partition)
print(">>> ", num_nodes_partition, " ", num_edges_partition)
g.add_edges(a_t[0:num_edges_partition], b_t[0:num_edges_partition])
########################################################
## fixing lr - 1-level tree for the split-nodes
libra2dgl_set_lr(gdt_key, gdt_value, lrtensor, num_community, num_nodes)
########################################################
    ## derive the dataset name from resultdir (relies on the "Libra_result_" prefix)
    graph_name = resultdir.split("_")[-1].split("/")[0]
part_method = 'Libra'
    num_parts = num_community  ## number of partitions/communities
num_hops = 0
node_map_val = node_map.tolist()
edge_map_val = 0
out_path = resultdir
part_metadata = {'graph_name': graph_name,
'num_nodes': G.number_of_nodes(),
'num_edges': G.number_of_edges(),
'part_method': part_method,
'num_parts': num_parts,
'halo_hops': num_hops,
'node_map': node_map_val,
'edge_map': edge_map_val}
############################################################
for i in range(num_community):
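        ## gg_ar and ldt_ar are consumed from the front (deleted below once each
        ## partition is written to disk, to free memory), so index 0 always
        ## refers to the current partition i.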
g = gg_ar[0]
num_nodes_partition = part_nodes[i]
adj = th.zeros([num_nodes_partition, num_community - 1], dtype=th.int64)
inner_node = th.zeros(num_nodes_partition, dtype=th.int32)
lr_t = th.zeros(num_nodes_partition, dtype=th.int64)
ldt = ldt_ar[0]
try:
feat = G.ndata['feat']
except KeyError:
feat = G.ndata['features']
try:
labels = G.ndata['label']
except KeyError:
labels = G.ndata['labels']
trainm = G.ndata['train_mask'].int()
testm = G.ndata['test_mask'].int()
valm = G.ndata['val_mask'].int()
feat_size = feat.shape[1]
gfeat = th.zeros([num_nodes_partition, feat_size], dtype=feat.dtype)
glabels = th.zeros(num_nodes_partition, dtype=labels.dtype)
gtrainm = th.zeros(num_nodes_partition, dtype=trainm.dtype)
gtestm = th.zeros(num_nodes_partition, dtype=testm.dtype)
gvalm = th.zeros(num_nodes_partition, dtype=valm.dtype)
        ## build the remote-node database per local node
## gather feats, train, test, val, and labels for each partition
libra2dgl_build_adjlist(feat, gfeat, adj, inner_node, ldt, gdt_key,
gdt_value, node_map, lr_t, lrtensor, num_nodes_partition,
num_community, i, feat_size, labels, trainm, testm, valm,
glabels, gtrainm, gtestm, gvalm, feat.shape[0])
g.ndata['adj'] = adj ## database of remote clones
        g.ndata['inner_node'] = inner_node ## 0 if the node is split across partitions, else 1
g.ndata['feat'] = gfeat ## gathered features
g.ndata['lf'] = lr_t ## 1-level tree among split nodes
g.ndata['label'] = glabels
g.ndata['train_mask'] = gtrainm
g.ndata['test_mask'] = gtestm
g.ndata['val_mask'] = gvalm
        # Validation code; run only on small graphs
# for l in range(num_nodes_partition):
# index = int(ldt[l])
# assert glabels[l] == labels[index]
# assert gtrainm[l] == trainm[index]
# assert gtestm[l] == testm[index]
# for j in range(feat_size):
# assert gfeat[l][j] == feat[index][j]
print("Writing partition {} to file".format(i), flush=True)
part = g
part_id = i
part_dir = os.path.join(out_path, "part" + str(part_id))
node_feat_file = os.path.join(part_dir, "node_feat.dgl")
edge_feat_file = os.path.join(part_dir, "edge_feat.dgl")
part_graph_file = os.path.join(part_dir, "graph.dgl")
part_metadata['part-{}'.format(part_id)] = {'node_feats': node_feat_file,
'edge_feats': edge_feat_file,
'part_graph': part_graph_file}
os.makedirs(part_dir, mode=0o775, exist_ok=True)
save_tensors(node_feat_file, part.ndata)
save_graphs(part_graph_file, [part])
del g
del gg_ar[0]
del ldt
del ldt_ar[0]
with open('{}/{}.json'.format(out_path, graph_name), 'w') as outfile:
json.dump(part_metadata, outfile, sort_keys=True, indent=4)
print("Conversion libra2dgl completed !!!")
def partition_graph(num_community, G, resultdir):
"""
Performs vertex-cut based graph partitioning and converts the partitioning
output to DGL input format.
Given a graph, this function will create a folder named ``XCommunities`` where ``X``
stands for the number of communities. It will contain ``X`` files named
``communityZ.txt`` for each partition Z (from 0 to X-1);
each such file contains a list of edges assigned to that partition.
These files constitute the output of Libra graph partitioner.
The folder also contains X subfolders named ``partZ``, each of these folders stores
DGL/DistGNN graphs for partition Z; these graph files are used as input to
DistGNN.
The folder also contains a json file which contains partitions' information.
Currently we require the graph's node data to contain the following columns:
* ``features`` for node features.
* ``label`` for node labels.
* ``train_mask`` as a boolean mask of training node set.
* ``val_mask`` as a boolean mask of validation node set.
* ``test_mask`` as a boolean mask of test node set.
Parameters
----------
num_community : int
Number of partitions to create.
G : DGLGraph
Input graph to be partitioned.
resultdir : str
Output location for storing the partitioned graphs.
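
    Examples
    --------
    A minimal usage sketch, assuming a dataset such as Cora whose node data
    carries the feature, label, and mask fields listed above:

    >>> from dgl.data import CoraGraphDataset
    >>> g = CoraGraphDataset()[0]
    >>> partition_graph(2, g, './output')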
"""
print("num partitions: ", num_community)
print("output location: ", resultdir)
    ## create output directory
    try:
        os.makedirs(resultdir, mode=0o775, exist_ok=True)
    except OSError:
        raise DGLError("Error: Could not create directory: {}".format(resultdir))
tic = time.time()
print("####################################################################")
print("Executing parititons: ", num_community)
ltic = time.time()
    try:
        resultdir = os.path.join(resultdir, str(num_community) + "Communities")
        os.makedirs(resultdir, mode=0o775, exist_ok=True)
    except OSError:
        raise DGLError("Error: Could not create sub-directory: {}".format(resultdir))
## Libra partitioning
libra_partition(num_community, G, resultdir)
ltoc = time.time()
print("Time taken by {} partitions {:0.4f} sec".format(num_community, ltoc - ltic))
print()
toc = time.time()
print("Generated ", num_community, " partitions in {:0.4f} sec".format(toc - tic), flush=True)
print("Partitioning completed successfully !!!")
"""
This package contains extra routines related to the Libra graph partitioner.
"""
from .tools import load_proteins
r"""
Copyright (c) 2021 Intel Corporation
\file distgnn/tools/tools.py
\brief Tools for use in the Libra graph partitioner.
\author Vasimuddin Md <vasimuddin.md@intel.com>
"""
import os
import requests
from scipy.io import mmread
import torch as th
import dgl
from dgl.base import DGLError
from dgl.data.utils import load_graphs, save_graphs, save_tensors
def rep_per_node(prefix, num_community):
"""
    Reports the number of split copies (replication factor) per node of a
    Libra-partitioned graph.
Parameters
----------
    prefix : str
        Partition folder location (contains replicationlist.csv).
    num_community : int
        Number of partitions or communities.
"""
ifile = os.path.join(prefix, 'replicationlist.csv')
fhandle = open(ifile, "r")
r_dt = {}
    fline = fhandle.readline() ## the first line contains a comment
print(fline)
for line in fhandle:
if line[0] == '#':
raise DGLError("[Bug] Read Hash char in rep_per_node func.")
node = line.strip('\n')
if r_dt.get(node, -100) == -100:
r_dt[node] = 1
else:
r_dt[node] += 1
fhandle.close()
## sanity checks
for v in r_dt.values():
if v >= num_community:
raise DGLError("[Bug] Unexpected event in rep_per_node() in tools.py.")
return r_dt
def download_proteins():
"""
Downloads the proteins dataset
"""
print("Downloading dataset...")
print("This might a take while..")
url = "https://portal.nersc.gov/project/m1982/GNN/"
file_name = "subgraph3_iso_vs_iso_30_70length_ALL.m100.propermm.mtx"
url = url + file_name
    try:
        req = requests.get(url)
    except requests.exceptions.RequestException:
        raise DGLError("Error: Failed to download the proteins dataset! Aborting..")
with open("proteins.mtx", "wb") as handle:
handle.write(req.content)
def proteins_mtx2dgl():
"""
    Converts the proteins dataset from mtx to DGL format.
"""
print("Converting mtx2dgl..")
print("This might a take while..")
a_mtx = mmread('proteins.mtx')
coo = a_mtx.tocoo()
u = th.tensor(coo.row, dtype=th.int64)
v = th.tensor(coo.col, dtype=th.int64)
g = dgl.DGLGraph()
g.add_edges(u, v)
n = g.number_of_nodes()
feat_size = 128 ## arbitrary number
feats = th.empty([n, feat_size], dtype=th.float32)
## arbitrary numbers
train_size = 1000000
test_size = 500000
val_size = 5000
nlabels = 256
    train_mask = th.zeros(n, dtype=th.bool)
    test_mask = th.zeros(n, dtype=th.bool)
    val_mask = th.zeros(n, dtype=th.bool)
    ## vectorized equivalents of the original per-element loops
    train_mask[:train_size] = True
    test_mask[train_size: train_size + test_size] = True
    val_mask[train_size + test_size: train_size + test_size + val_size] = True
    label = th.randint(0, nlabels, (n,), dtype=th.int64)  ## random labels
g.ndata['feat'] = feats
g.ndata['train_mask'] = train_mask
g.ndata['test_mask'] = test_mask
g.ndata['val_mask'] = val_mask
g.ndata['label'] = label
return g
def save(g, dataset):
"""
    Saves the input graph in DGL format.
Parameters
----------
    g : DGLGraph
        Graph to be saved.
    dataset : str
        Output folder name.
"""
print("Saving dataset..")
part_dir = os.path.join("./" + dataset)
node_feat_file = os.path.join(part_dir, "node_feat.dgl")
part_graph_file = os.path.join(part_dir, "graph.dgl")
os.makedirs(part_dir, mode=0o775, exist_ok=True)
save_tensors(node_feat_file, g.ndata)
save_graphs(part_graph_file, [g])
print("Graph saved successfully !!")
def load_proteins(dataset):
"""
    Downloads, converts, and loads the proteins graph dataset.
    Parameters
    ----------
    dataset : str
        Output folder name.
"""
part_dir = dataset
    graph_file = os.path.join(part_dir, "graph.dgl")
if not os.path.exists("proteins.mtx"):
download_proteins()
if not os.path.exists(graph_file):
g = proteins_mtx2dgl()
save(g, dataset)
## load
graph = load_graphs(graph_file)[0][0]
return graph
@@ -703,4 +703,104 @@ def _csrmask(A, A_weights, B):
"""
return F.from_dgl_nd(_CAPI_DGLCSRMask(A, F.to_dgl_nd(A_weights), B))
###################################################################################################
## Libra Graph Partition
def libra_vertex_cut(nc, node_degree, edgenum_unassigned,
community_weights, u, v, w, out, N, N_e, dataset):
"""
This function invokes C/C++ code for Libra based graph partitioning.
Parameter details are present in dgl/src/array/libra_partition.cc
"""
_CAPI_DGLLibraVertexCut(nc,
to_dgl_nd_for_write(node_degree),
to_dgl_nd_for_write(edgenum_unassigned),
to_dgl_nd_for_write(community_weights),
to_dgl_nd(u),
to_dgl_nd(v),
to_dgl_nd(w),
to_dgl_nd_for_write(out),
N,
N_e,
dataset)
def libra2dgl_build_dict(a, b, indices, ldt_key, gdt_key, gdt_value, node_map,
offset, nc, c, fsize, dataset):
"""
This function invokes C/C++ code for pre-processing Libra output.
After graph partitioning using Libra, during conversion from Libra output to DGL/DistGNN input,
this function creates dictionaries to assign local node ids to the partitioned nodes
and also to create a database of the split nodes.
Parameter details are present in dgl/src/array/libra_partition.cc
"""
ret = _CAPI_DGLLibra2dglBuildDict(to_dgl_nd_for_write(a),
to_dgl_nd_for_write(b),
to_dgl_nd_for_write(indices),
to_dgl_nd_for_write(ldt_key),
to_dgl_nd_for_write(gdt_key),
to_dgl_nd_for_write(gdt_value),
to_dgl_nd_for_write(node_map),
to_dgl_nd_for_write(offset),
nc,
c,
fsize,
dataset)
return ret
def libra2dgl_build_adjlist(feat, gfeat, adj, inner_node, ldt, gdt_key,
gdt_value, node_map, lr, lrtensor, num_nodes,
nc, c, feat_size, labels, trainm, testm, valm,
glabels, gtrainm, gtestm, gvalm, feat_shape):
"""
This function invokes C/C++ code for pre-processing Libra output.
After graph partitioning using Libra, once the local and global dictionaries are built,
for each node in each partition, this function copies the split node details from the
global dictionary. It also copies features, label, train, test, and validation information
for each node from the input graph to the corresponding partitions.
Parameter details are present in dgl/src/array/libra_partition.cc
"""
_CAPI_DGLLibra2dglBuildAdjlist(to_dgl_nd(feat),
to_dgl_nd_for_write(gfeat),
to_dgl_nd_for_write(adj),
to_dgl_nd_for_write(inner_node),
to_dgl_nd(ldt),
to_dgl_nd(gdt_key),
to_dgl_nd(gdt_value),
to_dgl_nd(node_map),
to_dgl_nd_for_write(lr),
to_dgl_nd(lrtensor),
num_nodes,
nc,
c,
feat_size,
to_dgl_nd(labels),
to_dgl_nd(trainm),
to_dgl_nd(testm),
to_dgl_nd(valm),
to_dgl_nd_for_write(glabels),
to_dgl_nd_for_write(gtrainm),
to_dgl_nd_for_write(gtestm),
to_dgl_nd_for_write(gvalm),
feat_shape)
def libra2dgl_set_lr(gdt_key, gdt_value, lrtensor, nc, Nn):
"""
This function invokes C/C++ code for pre-processing Libra output.
To prepare the graph partitions for DistGNN input, this function sets the leaf
and root (1-level tree) among the split copies (across different partitions)
of a node from input graph.
Parameter details are present in dgl/src/array/libra_partition.cc
"""
_CAPI_DGLLibra2dglSetLR(to_dgl_nd(gdt_key),
to_dgl_nd(gdt_value),
to_dgl_nd_for_write(lrtensor),
nc,
Nn)
_init_api("dgl.sparse")
@@ -604,6 +604,5 @@ DGL_REGISTER_GLOBAL("sparse._CAPI_FG_SDDMMTreeReduction")
});
#endif // USE_TVM
} // namespace aten
} // namespace dgl