1. 25 Aug, 2022 1 commit
    • [Example][Refactor] Refactor RGCN example (#4327) · 40a2f3c7
      Chang Liu authored
      * Refactor full graph entity classification
      
      * Refactor rgcn with sampling
      
      * README update
      
      * Update
      
      * Results update
      
      * Respect default setting of self_loop=false in entity.py
      
      * Update
      
      * Update README
      
      * Update for multi-gpu
      
      * Update
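For orientation, here is a minimal sketch of the full-graph entity-classification pattern that example implements, written against dgl.nn.RelGraphConv. The hidden size, the basis regularization, and the edge-data field names ('etype', 'norm') are illustrative assumptions, not the exact refactored script.

```python
import torch.nn as nn
import torch.nn.functional as F
import dgl.nn as dglnn

class RGCN(nn.Module):
    """Two-layer RGCN for featureless entity classification (sizes are illustrative)."""
    def __init__(self, num_nodes, h_dim, num_classes, num_rels):
        super().__init__()
        # Featureless entities get a learnable embedding as their input representation.
        self.emb = nn.Embedding(num_nodes, h_dim)
        self.conv1 = dglnn.RelGraphConv(h_dim, h_dim, num_rels,
                                        regularizer='basis', num_bases=num_rels)
        self.conv2 = dglnn.RelGraphConv(h_dim, num_classes, num_rels,
                                        regularizer='basis', num_bases=num_rels)

    def forward(self, g):
        # 'etype' and 'norm' are assumed edge fields holding relation ids and edge norms.
        h = F.relu(self.conv1(g, self.emb.weight, g.edata['etype'], norm=g.edata['norm']))
        return self.conv2(g, h, g.edata['etype'], norm=g.edata['norm'])
```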
  2. 18 Aug, 2022 1 commit
  3. 13 Jul, 2022 1 commit
  4. 11 May, 2022 1 commit
  5. 12 Apr, 2022 1 commit
  6. 23 Mar, 2022 1 commit
  7. 28 Feb, 2022 1 commit
  8. 27 Feb, 2022 1 commit
  9. 23 Feb, 2022 1 commit
    • [NN] Rework RelGraphConv and HGTConv (#3742) · 0227ddfb
      Minjie Wang authored
      * WIP: TypedLinear and new RelGraphConv
      
      * wip
      
      * further simplify RGCN
      
      * a bunch of tweaks for performance; add basic CPU support
      
      * update on segmm
      
      * wip: segment.cu
      
      * new backward kernel works
      
      * fix a bunch of bugs in kernel; leave idx_a for future
      
      * add nn test for typed_linear
      
      * rgcn nn test
      
      * bugfix in corner case; update RGCN README
      
      * doc
      
      * fix cpp lint
      
      * fix lint
      
      * fix ut
      
      * wip: hgtconv; presorted flag for rgcn
      
      * hgt code and ut; WIP: some fix on reorder graph
      
      * better typed linear init
      
      * fix ut
      
      * fix lint; add docstring
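The TypedLinear module introduced above applies a different weight matrix depending on a per-row type id. A minimal usage sketch follows; the sizes and the random type ids are made up for illustration, and the presorted fast path referenced in the commits is exposed through an extra flag on the forward call.

```python
import torch
import dgl.nn as dglnn

# One weight matrix per type; each row of x is transformed by the matrix named by x_type.
typed_lin = dglnn.TypedLinear(in_size=16, out_size=8, num_types=3)
x = torch.randn(100, 16)               # 100 input vectors
x_type = torch.randint(0, 3, (100,))   # a type id in [0, 3) for each row
y = typed_lin(x, x_type)               # -> shape (100, 8)
```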
  10. 17 Feb, 2022 1 commit
  11. 30 Jan, 2022 1 commit
    • [Sampling] New sampling pipeline plus asynchronous prefetching (#3665) · 701b4fcc
      Quan (Andy) Gan authored
      * initial update
      
      * more
      
      * more
      
      * multi-gpu example
      
      * cluster gcn, finalize homogeneous
      
      * more explanation
      
      * fix
      
      * bunch of fixes
      
      * fix
      
      * RGAT example and more fixes
      
      * shadow-gnn sampler and some changes in unit test
      
      * fix
      
      * wth
      
      * more fixes
      
      * remove shadow+node/edge dataloader tests for possible ux changes
      
      * lints
      
      * add legacy dataloading import just in case
      
      * fix
      
      * update pylint for f-strings
      
      * fix
      
      * lint
      
      * lint
      
      * lint again
      
      * cherry-picking commit fa9f494
      
      * oops
      
      * fix
      
      * add sample_neighbors in dist_graph
      
      * fix
      
      * lint
      
      * fix
      
      * fix
      
      * fix
      
      * fix tutorial
      
      * fix
      
      * fix
      
      * fix
      
      * fix warning
      
      * remove debug
      
      * add get_foo_storage apis
      
      * lint
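A minimal sketch of the node-classification dataloading pattern this pipeline introduces, with feature and label prefetching declared on the sampler. The graph g, the train_nids tensor, and the 'feat'/'label' field names are assumptions for illustration.

```python
import dgl

# Assumes g is a DGLGraph with node data 'feat' and 'label',
# and train_nids is a tensor of training seed node ids.
sampler = dgl.dataloading.NeighborSampler(
    [10, 10],                          # fanout per GNN layer
    prefetch_node_feats=['feat'],      # fetched asynchronously alongside sampling
    prefetch_labels=['label'])
dataloader = dgl.dataloading.DataLoader(
    g, train_nids, sampler,
    batch_size=1024, shuffle=True, drop_last=False, num_workers=4)

for input_nodes, output_nodes, blocks in dataloader:
    x = blocks[0].srcdata['feat']      # already prefetched onto the sampled blocks
    y = blocks[-1].dstdata['label']
    # ... forward/backward on the mini-batch ...
```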
  12. 15 Jan, 2022 1 commit
  13. 08 Nov, 2021 1 commit
  14. 02 Sep, 2021 1 commit
  15. 23 Aug, 2021 1 commit
  16. 02 Aug, 2021 1 commit
  17. 28 Jul, 2021 1 commit
    • [New Feature] Per edge type sampler for to_homogeneous graphs. (#3131) · ba7e7cf9
      xiang song(charlie.song) authored
      
      
      * fix.
      
      * fix.
      
      * fix.
      
      * fix.
      
      * Fix test
      
      * Deprecate old DistEmbedding impl, use synchronized embedding impl
      
      * Basic implementation of heterogeneous-on-homogeneous sampling
      
      * make pass
      
      * Pass C++ test
      
      * Add python test code
      
      * lint
      
      * lint
      
      * Add MultiLayerEtypeNeighborSampler
      
      * Add unit test for single-machine dataloader
      
      * Add dist dataloader test for edge type sampler
      
      * Fix lint
      
      * fix
      
      * support for per etype sample
      
      * Fix some bugs and enable distributed training with per-edge-type sampling
      
      * fix
      
      * Now distributed training works
      
      * turn off some mxnet
      
      * turn off mxnet for some dist test
      
      * fix
      
      * upd
      
      * upd according to the comments
      
      * Fix
      
      * Fix test and now distributed works.
      
      * upd
      
      * upd
      
      * Fix
      
      * Fix bug
      
      * remove dead code.
      
      * upd
      
      * Fix
      
      * upd
      
      * Fix
      Co-authored-by: Ubuntu <ubuntu@ip-172-31-71-112.ec2.internal>
      Co-authored-by: Ubuntu <ubuntu@ip-172-31-2-66.ec2.internal>
      Co-authored-by: Da Zheng <zhengda1936@gmail.com>
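For context, the closely related built-in way to express a per-edge-type budget is a dict fanout passed to dgl.sampling.sample_neighbors on a heterograph; this is not the new to_homogeneous-specific sampler added in the PR above, and the node/edge type names below are invented for illustration.

```python
import torch
import dgl

# Toy heterograph; type names are made up.
g = dgl.heterograph({
    ('user', 'follows', 'user'): (torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])),
    ('user', 'plays', 'game'):   (torch.tensor([0, 1, 2]), torch.tensor([0, 0, 1])),
})

# A dict fanout samples a different number of in-edges per edge type for each seed node.
sub = dgl.sampling.sample_neighbors(
    g,
    {'user': torch.tensor([0, 1]), 'game': torch.tensor([0])},
    fanout={'follows': 2, 'plays': 1})
```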
  18. 17 Jul, 2021 1 commit
  19. 13 Jul, 2021 1 commit
  20. 02 Jul, 2021 1 commit
  21. 29 Jun, 2021 1 commit
  22. 13 Jun, 2021 2 commits
  23. 11 Jun, 2021 1 commit
    • [Feature] Allow using NCCL for communication in dgl.NodeEmbedding and dgl.SparseOptimizer (#2824) · 17d604b5
      nv-dlasalle authored
      
      
      * Split from NCCL PR
      
      * Fix type in comment
      
      * Expand documentation for sparse_all_to_all_push
      
      * Restore previous behavior in example
      
      * Re-work optimizer to use NCCL based on gradient location
      
      * Allow for running with embedding on CPU but using NCCL for gradient exchange
      
      * Optimize single partition case
      
      * Fix pylint errors
      
      * Add missing include
      
      * fix gradient indexing
      
      * Fix line continuation
      
      * Migrate 'first_step'
      
      * Skip tests without enough GPUs to run NCCL
      
      * Improve empty tensor handling for pytorch 1.5
      
      * Fix indentation
      
      * Allow multiple NCCL communicators to coexist
      
      * Improve handling of empty message
      
      * Update python/dgl/nn/pytorch/sparse_emb.py
      Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
      
      * Update python/dgl/nn/pytorch/sparse_emb.py
      Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
      
      * Keep empty tensors dimensionless
      
      * th.empty -> th.tensor
      
      * Preserve shape for empty non-zero dimension tensors
      
      * Use shared state, when embedding is shared
      
      * Add support for gathering an embedding
      
      * Fix typo
      
      * Fix more typos
      
      * Fix backend call
      
      * Use NodeDataLoader to take advantage of ddp
      
      * Update training script to share memory
      
      * Only squeeze last dimension
      
      * Better handle empty message
      
      * Keep embedding on the target GPU device if dgl_sparse is false in RGCN example
      
      * Fix typo in comment
      
      * Add asserts
      
      * Improve documentation in example
      Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
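A minimal single-process sketch of the dgl.nn.NodeEmbedding plus sparse-optimizer pattern this work targets; the NCCL path applies when gradients live on multiple GPUs, and the sizes, embedding name, and toy loss below are illustrative.

```python
import torch
import dgl

# Learnable node embeddings managed outside the module parameters and updated by
# DGL's sparse optimizer; sizes, the name, and the stand-in loss are illustrative.
emb = dgl.nn.NodeEmbedding(10000, 64, name='node_emb',
                           init_func=lambda t: torch.nn.init.uniform_(t, -1.0, 1.0))
optimizer = dgl.optim.SparseAdam(params=[emb], lr=0.01)

nids = torch.arange(128)
h = emb(nids, torch.device('cpu'))   # gather embedding rows for a mini-batch
loss = h.pow(2).mean()               # stand-in for a real training loss
loss.backward()
optimizer.step()                     # sparse update of only the touched rows
```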
  24. 02 Jun, 2021 1 commit
  25. 14 May, 2021 1 commit
  26. 03 May, 2021 1 commit
  27. 08 Apr, 2021 1 commit
  28. 30 Mar, 2021 1 commit
  29. 22 Mar, 2021 1 commit
  30. 17 Mar, 2021 1 commit
  31. 01 Mar, 2021 1 commit
  32. 28 Feb, 2021 1 commit
  33. 25 Feb, 2021 2 commits
  34. 16 Feb, 2021 1 commit
  35. 09 Feb, 2021 1 commit
    • [Distributed] Distributed METIS partition (#2576) · e4ff4844
      Da Zheng authored
      
      
      * add convert.
      
      * fix.
      
      * add write_mag.
      
      * fix convert_partition.py
      
      * write data.
      
      * use pyarrow to read.
      
      * update write_mag.py
      
      * fix convert_partition.py.
      
      * load node/edge features when necessary.
      
      * reshuffle nodes.
      
      * write mag correctly.
      
      * fix a bug: inner nodes in a partition might be empty.
      
      * fix bugs.
      
      * add verify code.
      
      * insert reverse edges.
      
      * fix a bug.
      
      * add get node/edge data.
      
      * add instructions.
      
      * remove unnecessary argument.
      
      * update distributed preprocessing.
      
      * fix readme.
      
      * fix.
      
      * fix.
      
      * fix.
      
      * fix readme.
      
      * fix doc.
      
      * fix.
      
      * update readme
      
      * update doc.
      
      * update readme.
      Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-132.us-west-1.compute.internal>
      Co-authored-by: Ubuntu <ubuntu@ip-172-31-2-202.us-west-1.compute.internal>
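For orientation, the single-machine METIS partitioning API that the distributed pipeline above extends; the toy graph, graph name, and output path are illustrative assumptions.

```python
import dgl

# Partition a toy graph into 4 parts with METIS and write them to disk.
g = dgl.rand_graph(1000, 5000)
dgl.distributed.partition_graph(g, graph_name='toy_graph', num_parts=4,
                                out_path='partitions/', part_method='metis')
# Each part can later be loaded with
# dgl.distributed.load_partition('partitions/toy_graph.json', part_id).
```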
  36. 08 Feb, 2021 1 commit
  37. 05 Feb, 2021 1 commit
  38. 27 Jan, 2021 1 commit
    • [Feature] Add support for sparse embedding (#2451) · a7e941c3
      xiang song(charlie.song) authored
      
      
      * Add sparse embedding for dgl and update rgcn example
      
      * upd
      
      * Fix
      
      * Revert "Fix"
      
      This reverts commit 4da87cdfb8b8c3506b7fc7376cd2385ba8045c2a.
      
      * Fix
      
      * upd
      
      * upd
      
      * Fix
      
      * Add unit test and update implementation
      
      * fix
      
      * Clean up rgcn example code
      
      * upd
      
      * upd
      
      * update
      
      * Fix
      
      * update score
      
      * sparse for sage
      
      * remove model sparse
      
      * upd
      
      * upd
      
      * remove global norm
      
      * revert delete model_sparse.py
      
      * update according to comments
      
      * Fix doc
      
      * upd
      
      * Fix test
      
      * upd
      
      * lint
      
      * lint
      
      * lint
      
      * upd
      
      * upd
      
      * clean up
      Co-authored-by: default avatarUbuntu <ubuntu@ip-172-31-56-220.ec2.internal>
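The general sparse-gradient embedding pattern in plain PyTorch, shown for context; the commit above adds a DGL-managed variant of this idea for the RGCN example, so the class and sizes below are only an illustration of the underlying mechanism.

```python
import torch
import torch.nn as nn

emb = nn.Embedding(10000, 64, sparse=True)            # sparse=True -> sparse gradients
opt = torch.optim.SparseAdam(emb.parameters(), lr=0.01)

nids = torch.randint(0, 10000, (128,))
loss = emb(nids).pow(2).mean()                        # stand-in training loss
loss.backward()
opt.step()                                            # updates only the touched rows
```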