- 31 Jul, 2023 1 commit
  - Ilia Taraban authored
- 27 Jul, 2023 6 commits
  - Andrei Ivanov authored
    Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com>
  - Andrei Ivanov authored
  - Andrei Ivanov authored
    Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com>
  - Andrei Ivanov authored
    Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com>
  - Andrei Ivanov authored
    Co-authored-by: rudongyu <ru_dongyu@outlook.com>
  - Andrei Ivanov authored
    Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com>
- 14 Jul, 2023 1 commit
  - Andrei Ivanov authored
    Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com>
- 13 Jun, 2023 1 commit
  - Andrei Ivanov authored
    Co-authored-by: Mufei Li <mufeili1996@gmail.com>
- 12 Jun, 2023 1 commit
  - hummingg authored
- 09 Jun, 2023 1 commit
  - Rhett Ying authored
- 08 Jun, 2023 1 commit
  - Rhett Ying authored
- 07 Jun, 2023 2 commits
  - Andrei Ivanov authored
    Co-authored-by: Mufei Li <mufeili1996@gmail.com>
  - Chang Liu authored
- 10 May, 2023 1 commit
  - Rhett Ying authored
- 20 Apr, 2023 1 commit
  - czkkkkkk authored
- 10 Apr, 2023 1 commit
  - Mufei Li authored
    Co-authored-by: Ubuntu <ubuntu@ip-172-31-36-188.ap-northeast-1.compute.internal>
- 29 Mar, 2023 1 commit
  - Hongzhi (Steve), Chen authored
    * pytorch_example
    * fix
    Co-authored-by: Ubuntu <ubuntu@ip-172-31-28-63.ap-northeast-1.compute.internal>
- 22 Mar, 2023 1 commit
  - Mufei Li authored
- 15 Mar, 2023 1 commit
  - Minjie Wang authored
- 19 Feb, 2023 2 commits
  - Hongzhi (Steve), Chen authored
    Co-authored-by: Ubuntu <ubuntu@ip-172-31-28-63.ap-northeast-1.compute.internal>
  - Hongzhi (Steve), Chen authored
    Co-authored-by: Ubuntu <ubuntu@ip-172-31-28-63.ap-northeast-1.compute.internal>
- 01 Feb, 2023 1 commit
  - Hongzhi (Steve), Chen authored
    * remove_mock_sparse_example
    * mock_sparse_test
    * remove_mock_sparse
    Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
- 12 Jan, 2023 1 commit
  - Xin Yao authored
- 06 Jan, 2023 2 commits
  - peizhou001 authored
  - peizhou001 authored
- 05 Jan, 2023 1 commit
  - Chang Liu authored
- 05 Dec, 2022 1 commit
  - Dylan authored
    Correction, as mentioned in #4969. I noticed that there is a normalisation step on line 97, but the normalised values are not used downstream. Even if this step was meant to illustrate normalisation, it does not compute the normalisation described in the GCN paper: the paper considers both in- and out-degrees, whereas the code normalises using the in-degrees only. In the end, the normalised values are assigned to g.ndata["norm"] but never used afterwards. The normalisation step is also unnecessary, since the GraphConv layer used here already takes care of normalisation: https://docs.dgl.ai/en/0.9.x/_modules/dgl/nn/pytorch/conv/graphconv.html#GraphConv It briefly confused me into thinking I had to do the normalisation myself, when it is already handled by GraphConv.
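The distinction described above is easy to check numerically. Below is a minimal NumPy sketch (using a hypothetical toy adjacency matrix, not the code from the example in question) contrasting in-degree-only averaging with the symmetric D^{-1/2} A D^{-1/2} normalisation from the GCN paper, which is what GraphConv applies by default:

```python
import numpy as np

# Hypothetical 4-node undirected graph, adjacency with self-loops added.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# In-degree-only normalisation, D_in^{-1} A:
# each node simply averages its incoming messages.
A_in = A / A.sum(axis=1, keepdims=True)

# Symmetric normalisation from the GCN paper, D^{-1/2} A D^{-1/2}:
# each edge (i, j) is scaled by 1 / sqrt(deg(i) * deg(j)).
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_sym = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# The two matrices differ wherever degrees are non-uniform,
# e.g. edge (0, 1): 1/2 = 0.5 vs 1/sqrt(2*3) ~= 0.408.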
- 01 Dec, 2022 1 commit
  - peizhou001 authored
- 25 Nov, 2022 2 commits
  - Hongzhi (Steve), Chen authored
    * black on explain_main
    * isort
    * add dot
    Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
  - Hongzhi (Steve), Chen authored
    Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
- 17 Nov, 2022 1 commit
  - Rhett Ying authored
    * [Dist][Examples] refactor dist graphsage examples
    * refine train_dist.py
    * update train_dist_unsupervised.py
    * fix debug info
    * update train_dist_transductive
    * update unsupervised_transductive
    * remove distgnn
    * fix join() in standalone mode
    * change batch_labels to long() for ogbn-papers100M
    * free unnecessary mem
    * lint
    * fix lint
    * refine
    * fix lint
    * fix incorrect args
    * refine
- 15 Nov, 2022 1 commit
  - peizhou001 authored
    rename DGLHeteroGraph to DGLGraph
- 09 Nov, 2022 1 commit
  - Chang Liu authored
    Co-authored-by: Mufei Li <mufeili1996@gmail.com>
- 03 Nov, 2022 1 commit
  - Ereboas authored
    * Use black for formatting
    * limit line width to 80 characters.
    * Use a backslash instead of directly concatenating
    * file structure adjustment.
    * file structure adjustment (2)
    * codes for citation2
    * format slight adjustment
    * adjust format in models.py
    * now it runs normally for all datasets.
    * add comments; adjust code order.
    * adjust indenting.
    Co-authored-by: Mufei Li <mufeili1996@gmail.com>
- 27 Oct, 2022 1 commit
  - Ereboas authored
    * Use black for formatting
    * limit line width to 80 characters.
    * Use a backslash instead of directly concatenating
    * file structure adjustment.
    * file structure adjustment (2)
    * codes for citation2
    * format slight adjustment
    * adjust format in models.py
    * now it runs normally for all datasets.
    Co-authored-by: Mufei Li <mufeili1996@gmail.com>
- 26 Oct, 2022 1 commit
  - Mufei Li authored
    * Update from master (#4584)
    * [Example][Refactor] Refactor graphsage multigpu and full-graph example (#4430) * Add refactors for multi-gpu and full-graph example * Fix format * Update * Update * Update
    * [Cleanup] Remove async_transferer (#4505) * Remove async_transferer * remove test * Remove AsyncTransferer Co-authored-by: Xin Yao <xiny@nvidia.com> Co-authored-by: Xin Yao <yaox12@outlook.com>
    * [Cleanup] Remove duplicate entries of CUB submodule (issue #4395) (#4499) * remove third_party/cub * remove from third_party Co-authored-by: Israt Nisa <nisisrat@amazon.com> Co-authored-by: Xin Yao <xiny@nvidia.com>
    * [Bug] Enable turn on/off libxsmm at runtime (#4455) * enable turn on/off libxsmm at runtime by adding a global config and related API Co-authored-by: Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal>
    * [Feature] Unify the cuda stream used in core library (#4480) * Use an internal cuda stream for CopyDataFromTo * small fix white space * Fix to compile * Make stream optional in copydata for compile * fix lint issue * Update cub functions to use internal stream * Lint check * Update CopyTo/CopyFrom/CopyFromTo to use internal stream * Address comments * Fix backward CUDA stream * Avoid overloading CopyFromTo() * Minor comment update * Overload copydatafromto in cuda device api Co-authored-by: xiny <xiny@nvidia.com>
    * [Feature] Added exclude_self and output_batch to knn graph construction (Issues #4323 #4316) (#4389) * Added "exclude_self" and "output_batch" options to knn_graph and segmented_knn_graph * Updated out-of-date comments on remove_edges and remove_self_loop, since they now preserve batch information * Changed defaults on new knn_graph and segmented_knn_graph function parameters, for compatibility; pytorch/test_geometry.py was failing * Added test to ensure dgl.remove_self_loop function correctly updates batch information * Added new knn_graph and segmented_knn_graph parameters to dgl.nn.KNNGraph and dgl.nn.SegmentedKNNGraph * Formatting * Oops, I missed the one in segmented_knn_graph when I fixed the similar thing in knn_graph * Fixed edge case handling when invalid k specified, since it still needs to be handled consistently for tests to pass * Fixed context of batch info, since it must match the context of the input position data for remove_self_loop to succeed * Fixed batch info resulting from knn_graph when output_batch is true, for case of 3D input tensor, representing multiple segments * Added testing of new exclude_self and output_batch parameters on knn_graph and segmented_knn_graph, and their wrappers, KNNGraph and SegmentedKNNGraph, into the test_knn_cuda test * Added doc comments for new parameters * Added correct handling for uncommon case of k or more coincident points when excluding self edges in knn_graph and segmented_knn_graph * Added test cases for more than k coincident points * Updated doc comments for output_batch parameters for clarity * Linter formatting fixes * Extracted out common function for test_knn_cpu and test_knn_cuda, to add the new test cases to test_knn_cpu * Rewording in doc comments * Removed output_batch parameter from knn_graph and segmented_knn_graph, in favour of always setting the batch information, except in knn_graph if x is a 2D tensor Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
    * [CI] only known devs are authorized to trigger CI (#4518) * fix if author is null * add comments
    * [Readability] Auto fix setup.py and update-version.py (#4446) * Auto fix update-version * Auto fix setup.py * Auto fix update-version * Auto fix setup.py
    * [Doc] Change random.py to random_partition.py in guide on distributed partition pipeline (#4438) * Update distributed-preprocessing.rst * Update Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>
    * fix unpinning when tensoradaptor is not available (#4450)
    * [Doc] fix print issue in tutorial (#4459)
    * [Example][Refactor] Refactor RGCN example (#4327) * Refactor full graph entity classification * Refactor rgcn with sampling * README update * Update * Results update * Respect default setting of self_loop=false in entity.py * Update * Update README * Update for multi-gpu * Update
    * [doc] fix invalid link in user guide (#4468)
    * [Example] directional_GSN for ogbg-molpcba (#4405) * version-1 * version-2 * version-3 * update examples/README * Update .gitignore * update performance in README, delete scripts * 1st approving review * 2nd approving review Co-authored-by: Mufei Li <mufeili1996@gmail.com>
    * Clarify the message name, which is 'm'. (#4462) Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
    * [Refactor] Auto fix view.py. (#4461) Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
    * [Example] SEAL for OGBL (#4291) * update index * update * fix readme typo * add seal sampler * modify set ops * prefetch * efficiency test * update * optimize * fix ScatterAdd dtype issue * update sampler style * update Co-authored-by: Quan Gan <coin2028@hotmail.com>
    * [CI] use https instead of http (#4488)
    * [BugFix] fix crash due to incorrect dtype in dgl.to_block() (#4487) * fix test failure in TF
    * [Feature] Make TensorAdapter Stream Aware (#4472) * Allocate tensors in DGL's current stream * make tensoradaptor stream-aware * replace TAempty with cpu allocator * fix typo * try fix cpu allocation * clean header * redirect AllocDataSpace as well * resolve comments
    * [Build][Doc] Specify the sphinx version (#4465) Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
    * reformat * reformat * Auto fix update-version * Auto fix setup.py * reformat * reformat Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com> Co-authored-by: Mufei Li <mufeili1996@gmail.com> Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> Co-authored-by: Xin Yao <xiny@nvidia.com> Co-authored-by: Chang Liu <chang.liu@utexas.edu> Co-authored-by: Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com> Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com> Co-authored-by: rudongyu <ru_dongyu@outlook.com> Co-authored-by: Quan Gan <coin2028@hotmail.com>
    * Move mock version of dgl_sparse library to DGL main repo (#4524) * init * Add api doc for sparse library * support op btwn matrices with different sparsity * Fixed docstring * addresses comments * lint check * change keyword format to fmt Co-authored-by: Israt Nisa <nisisrat@amazon.com>
    * [DistPart] expose timeout config for process group (#4532) * refine code * Update tools/distpartitioning/data_proc_pipeline.py Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
    * [Feature] Import PyTorch's CUDA stream management (#4503) * add set_stream * add .record_stream for NDArray and HeteroGraph * refactor dgl stream Python APIs * test record_stream * add unit test for record stream * use pytorch's stream * fix lint * fix cpu build * address comments * add record stream tests for dgl.graph * record frames and update dataloader * add docstring * update frame * add backend check for record_stream * remove CUDAThreadEntry::stream * record stream for newly created formats * fix bug * fix cpp test * fix None c_void_p to c_handle
    * [examples] reduce memory consumption (#4558) * refine help message * refine
    * [Feature][REVIEW] Enable DGL cugraph nightly CI (#4525) * Added cugraph nightly scripts * Removed nvcr.io//nvidia/pytorch:22.04-py3 reference Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
    * Revert "[Feature][REVIEW] Enable DGL cugraph nightly CI (#4525)" (#4563) This reverts commit ec171c64.
    * [Misc] Add flake8 lint workflow. (#4566) * Add pyproject.toml for autopep8. * Add flake8 annotation in workflow. * remove * add * clean up Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * [Misc] Try use official pylint workflow. (#4568) * polish update_version * update pylint workflow. * add * revert. Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * [CI] refine stage logic (#4565) * refine * refine
    * remove (#4570) Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * Add Pylint workflow for flake8. (#4571) * remove * Add pylint. Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * [Misc] Update the python version in Pylint workflow for flake8. (#4572) * remove * Add pylint. * Change the python version for pylint. Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * Update pylint. (#4574) Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * [Misc] Use another workflow. (#4575) * Update pylint. * Use another workflow. Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * Update pylint. (#4576) Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * Update pylint.yml * Update pylint.yml * Delete pylint.yml
    * [Misc] Add pyproject.toml for autopep8 & black. (#4543) * Add pyproject.toml for autopep8. Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
    * [Feature] Bump DLPack to v0.7 and decouple DLPack from the core library (#4454) * rename `DLContext` to `DGLContext` * rename `kDLGPU` to `kDLCUDA` * replace DLTensor with DGLArray * fix linting * Unify DGLType and DLDataType to DGLDataType * Fix FFI * rename DLDeviceType to DGLDeviceType * decouple dlpack from the core library * fix bug * fix lint * fix merge * fix build * address comments * rename dl_converter to dlpack_convert * remove redundant comments Co-authored-by: Chang Liu <chang.liu@utexas.edu> Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com> Co-authored-by: Xin Yao <xiny@nvidia.com> Co-authored-by: Xin Yao <yaox12@outlook.com> Co-authored-by: Israt Nisa <neesha295@gmail.com> Co-authored-by: Israt Nisa <nisisrat@amazon.com> Co-authored-by: peizhou001 <110809584+peizhou001@users.noreply.github.com> Co-authored-by: Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal> Co-authored-by: ndickson-nvidia <99772994+ndickson-nvidia@users.noreply.github.com> Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com> Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com> Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com> Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> Co-authored-by: Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com> Co-authored-by: rudongyu <ru_dongyu@outlook.com> Co-authored-by: Quan Gan <coin2028@hotmail.com> Co-authored-by: Vibhu Jawa <vibhujawa@gmail.com>
    * [Deprecation] Dataset Attributes (#4546) * Update * CI * CI * Update Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>
    * [Example] Bug Fix (#4665) * Update * CI * CI * Update * Update Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>
    * Update
    * Update (#4724) Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> Co-authored-by: Chang Liu <chang.liu@utexas.edu> Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com> Co-authored-by: Xin Yao <xiny@nvidia.com> Co-authored-by: Xin Yao <yaox12@outlook.com> Co-authored-by: Israt Nisa <neesha295@gmail.com> Co-authored-by: Israt Nisa <nisisrat@amazon.com> Co-authored-by: peizhou001 <110809584+peizhou001@users.noreply.github.com> Co-authored-by: Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal> Co-authored-by: ndickson-nvidia <99772994+ndickson-nvidia@users.noreply.github.com> Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com> Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com> Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com> Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> Co-authored-by: Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com> Co-authored-by: rudongyu <ru_dongyu@outlook.com> Co-authored-by: Quan Gan <coin2028@hotmail.com> Co-authored-by: Vibhu Jawa <vibhujawa@gmail.com>
- 22 Oct, 2022 1 commit
  - Chendi.Xue authored
    * Add device support in hetero_rgcn Signed-off-by: Xue, Chendi <chendi.xue@intel.com>
    * use num_workers instead of hard code and enable cpu_affinity for pytorch > 1.12 Signed-off-by: Xue, Chendi <chendi.xue@intel.com>
    * Remove hard-coded num_workers and use dgl version to check if cpu_affinity is supported Signed-off-by: Xue, Chendi <chendi.xue@intel.com>
    * Remove specified dl_cores and computer_cores; add error print if num_workers is mis-set Signed-off-by: Xue, Chendi <chendi.xue@intel.com>
    * expected num_workers should be less than num_physical_cores Signed-off-by: Chendi Xue <chendi.xue@intel.com>
    * Update examples/pytorch/ogb/ogbn-mag/hetero_rgcn.py Co-authored-by: Chang Liu <chang.liu@utexas.edu>
    * Remove dgl version and num_workers print Signed-off-by: Xue, Chendi <chendi.xue@intel.com>
    * add comment to explain is_support_affinity
    * Fix typo Signed-off-by: Xue, Chendi <chendi.xue@intel.com>
    Signed-off-by: Xue, Chendi <chendi.xue@intel.com>
    Signed-off-by: Chendi Xue <chendi.xue@intel.com>
    Co-authored-by: Chang Liu <chang.liu@utexas.edu>
- 18 Oct, 2022 1 commit
  - Chang Liu authored
    * RGAT refactor
    * File rename
    * Address comments
    Co-authored-by: Mufei Li <mufeili1996@gmail.com>
- 13 Oct, 2022 1 commit
  - Mufei Li authored
    * Update from master (#4584) (same squashed merge message as the 26 Oct, 2022 commit above)