"tests/python/common/test_heterograph.py" did not exist on "9eaace9216e10790c76e7675741daefa92ae1b59"
- 30 Nov, 2023 2 commits
Rhett Ying authored
Rhett Ying authored
- 22 Aug, 2023 1 commit
Andrei Ivanov authored
- 31 Jul, 2023 1 commit
Ilia Taraban authored
- 09 Jun, 2023 1 commit
Rhett Ying authored
- 29 Mar, 2023 1 commit
Hongzhi (Steve), Chen authored
* Other * revert --------- Co-authored-by: Ubuntu <ubuntu@ip-172-31-28-63.ap-northeast-1.compute.internal>
- 02 Mar, 2023 1 commit
czkkkkkk authored
- 29 Dec, 2022 1 commit
Xin Yao authored
- 26 Dec, 2022 1 commit
Rhett Ying authored
* [Doc] update doc page for distributed partition pipeline * update
- 10 Nov, 2022 1 commit
peizhou001 authored
* Update from master (#4584) * [Example][Refactor] Refactor graphsage multigpu and full-graph example (#4430) * Add refactors for multi-gpu and full-graph example * Fix format * Update * Update * Update * [Cleanup] Remove async_transferer (#4505) * Remove async_transferer * remove test * Remove AsyncTransferer Co-authored-by:
Xin Yao <xiny@nvidia.com> Co-authored-by:
Xin Yao <yaox12@outlook.com> * [Cleanup] Remove duplicate entries of CUB submodule (issue# 4395) (#4499) * remove third_part/cub * remove from third_party Co-authored-by:
Israt Nisa <nisisrat@amazon.com> Co-authored-by:
Xin Yao <xiny@nvidia.com> * [Bug] Enable turn on/off libxsmm at runtime (#4455) * enable turn on/off libxsmm at runtime by adding a global config and related API Co-authored-by:
Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal> * [Feature] Unify the cuda stream used in core library (#4480) * Use an internal cuda stream for CopyDataFromTo * small fix white space * Fix to compile * Make stream optional in copydata for compile * fix lint issue * Update cub functions to use internal stream * Lint check * Update CopyTo/CopyFrom/CopyFromTo to use internal stream * Address comments * Fix backward CUDA stream * Avoid overloading CopyFromTo() * Minor comment update * Overload copydatafromto in cuda device api Co-authored-by:
xiny <xiny@nvidia.com> * [Feature] Added exclude_self and output_batch to knn graph construction (Issues #4323 #4316) (#4389) * * Added "exclude_self" and "output_batch" options to knn_graph and segmented_knn_graph * Updated out-of-date comments on remove_edges and remove_self_loop, since they now preserve batch information * * Changed defaults on new knn_graph and segmented_knn_graph function parameters, for compatibility; pytorch/test_geometry.py was failing * * Added test to ensure dgl.remove_self_loop function correctly updates batch information * * Added new knn_graph and segmented_knn_graph parameters to dgl.nn.KNNGraph and dgl.nn.SegmentedKNNGraph * * Formatting * * Oops, I missed the one in segmented_knn_graph when I fixed the similar thing in knn_graph * * Fixed edge case handling when invalid k specified, since it still needs to be handled consistently for tests to pass * Fixed context of batch info, since it must match the context of the input position data for remove_self_loop to succeed * * Fixed batch info resulting from knn_graph when output_batch is true, for case of 3D input tensor, representing multiple segments * * Added testing of new exclude_self and output_batch parameters on knn_graph and segmented_knn_graph, and their wrappers, KNNGraph and SegmentedKNNGraph, into the test_knn_cuda test * * Added doc comments for new parameters * * Added correct handling for uncommon case of k or more coincident points when excluding self edges in knn_graph and segmented_knn_graph * Added test cases for more than k coincident points * * Updated doc comments for output_batch parameters for clarity * * Linter formatting fixes * * Extracted out common function for test_knn_cpu and test_knn_cuda, to add the new test cases to test_knn_cpu * * Rewording in doc comments * * Removed output_batch parameter from knn_graph and segmented_knn_graph, in favour of always setting the batch information, except in knn_graph if x is a 2D tensor Co-authored-by:
Minjie Wang <wmjlyjemaine@gmail.com> * [CI] only known devs are authorized to trigger CI (#4518) * [CI] only known devs are authorized to trigger CI * fix if author is null * add comments * [Readability] Auto fix setup.py and update-version.py (#4446) * Auto fix update-version * Auto fix setup.py * Auto fix update-version * Auto fix setup.py * [Doc] Change random.py to random_partition.py in guide on distributed partition pipeline (#4438) * Update distributed-preprocessing.rst * Update Co-authored-by:
Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> * fix unpinning when tensoradaptor is not available (#4450) * [Doc] fix print issue in tutorial (#4459) * [Example][Refactor] Refactor RGCN example (#4327) * Refactor full graph entity classification * Refactor rgcn with sampling * README update * Update * Results update * Respect default setting of self_loop=false in entity.py * Update * Update README * Update for multi-gpu * Update * [doc] fix invalid link in user guide (#4468) * [Example] directional_GSN for ogbg-molpcba (#4405) * version-1 * version-2 * version-3 * update examples/README * Update .gitignore * update performance in README, delete scripts * 1st approving review * 2nd approving review Co-authored-by:
Mufei Li <mufeili1996@gmail.com> * Clarify the message name, which is 'm'. (#4462) Co-authored-by:
Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by:
Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com> * [Refactor] Auto fix view.py. (#4461) Co-authored-by:
Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by:
Minjie Wang <wmjlyjemaine@gmail.com> * [Example] SEAL for OGBL (#4291) * [Example] SEAL for OGBL * update index * update * fix readme typo * add seal sampler * modify set ops * prefetch * efficiency test * update * optimize * fix ScatterAdd dtype issue * update sampler style * update Co-authored-by:
Quan Gan <coin2028@hotmail.com> * [CI] use https instead of http (#4488) * [BugFix] fix crash due to incorrect dtype in dgl.to_block() (#4487) * [BugFix] fix crash due to incorrect dtype in dgl.to_block() * fix test failure in TF * [Feature] Make TensorAdapter Stream Aware (#4472) * Allocate tensors in DGL's current stream * make tensoradaptor stream-aware * replace TAemtpy with cpu allocator * fix typo * try fix cpu allocation * clean header * redirect AllocDataSpace as well * resolve comments * [Build][Doc] Specify the sphinx version (#4465) Co-authored-by:
Minjie Wang <wmjlyjemaine@gmail.com> * reformat * reformat * Auto fix update-version * Auto fix setup.py * reformat * reformat Co-authored-by:
Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by:
Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com> Co-authored-by:
Mufei Li <mufeili1996@gmail.com> Co-authored-by:
Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> Co-authored-by:
Xin Yao <xiny@nvidia.com> Co-authored-by:
Chang Liu <chang.liu@utexas.edu> Co-authored-by:
Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com> Co-authored-by:
Minjie Wang <wmjlyjemaine@gmail.com> Co-authored-by:
rudongyu <ru_dongyu@outlook.com> Co-authored-by:
Quan Gan <coin2028@hotmail.com> * Move mock version of dgl_sparse library to DGL main repo (#4524) * init * Add api doc for sparse library * support op btwn matrices with differnt sparsity * Fixed docstring * addresses comments * lint check * change keyword format to fmt Co-authored-by:
Israt Nisa <nisisrat@amazon.com> * [DistPart] expose timeout config for process group (#4532) * [DistPart] expose timeout config for process group * refine code * Update tools/distpartitioning/data_proc_pipeline.py Co-authored-by:
Minjie Wang <wmjlyjemaine@gmail.com> Co-authored-by:
Minjie Wang <wmjlyjemaine@gmail.com> * [Feature] Import PyTorch's CUDA stream management (#4503) * add set_stream * add .record_stream for NDArray and HeteroGraph * refactor dgl stream Python APIs * test record_stream * add unit test for record stream * use pytorch's stream * fix lint * fix cpu build * address comments * address comments * add record stream tests for dgl.graph * record frames and update dataloder * add docstring * update frame * add backend check for record_stream * remove CUDAThreadEntry::stream * record stream for newly created formats * fix bug * fix cpp test * fix None c_void_p to c_handle * [examples]educe memory consumption (#4558) * [examples]educe memory consumption * reffine help message * refine * [Feature][REVIEW] Enable DGL cugaph nightly CI (#4525) * Added cugraph nightly scripts * Removed nvcr.io//nvidia/pytorch:22.04-py3 reference Co-authored-by:
Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com> * Revert "[Feature][REVIEW] Enable DGL cugaph nightly CI (#4525)" (#4563) This reverts commit ec171c64 . * [Misc] Add flake8 lint workflow. (#4566) * Add pyproject.toml for autopep8. * Add pyproject.toml for autopep8. * Add flake8 annotation in workflow. * remove * add * clean up Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * [Misc] Try use official pylint workflow. (#4568) * polish update_version * update pylint workflow. * add * revert. Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * [CI] refine stage logic (#4565) * [CI] refine stage logic * refine * refine * remove (#4570) Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * Add Pylint workflow for flake8. (#4571) * remove * Add pylint. Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * [Misc] Update the python version in Pylint workflow for flake8. (#4572) * remove * Add pylint. * Change the python version for pylint. Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * Update pylint. (#4574) Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * [Misc] Use another workflow. (#4575) * Update pylint. * Use another workflow. Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * Update pylint. (#4576) Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * Update pylint.yml * Update pylint.yml * Delete pylint.yml * [Misc]Add pyproject.toml for autopep8 & black. (#4543) * Add pyproject.toml for autopep8. * Add pyproject.toml for autopep8. Co-authored-by:
Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> * [Feature] Bump DLPack to v0.7 and decouple DLPack from the core library (#4454) * rename `DLContext` to `DGLContext` * rename `kDLGPU` to `kDLCUDA` * replace DLTensor with DGLArray * fix linting * Unify DGLType and DLDataType to DGLDataType * Fix FFI * rename DLDeviceType to DGLDeviceType * decouple dlpack from the core library * fix bug * fix lint * fix merge * fix build * address comments * rename dl_converter to dlpack_convert * remove redundant comments Co-authored-by:
Chang Liu <chang.liu@utexas.edu> Co-authored-by:
nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com> Co-authored-by:
Xin Yao <xiny@nvidia.com> Co-authored-by:
Xin Yao <yaox12@outlook.com> Co-authored-by:
Israt Nisa <neesha295@gmail.com> Co-authored-by:
Israt Nisa <nisisrat@amazon.com> Co-authored-by:
peizhou001 <110809584+peizhou001@users.noreply.github.com> Co-authored-by:
Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal> Co-authored-by:
ndickson-nvidia <99772994+ndickson-nvidia@users.noreply.github.com> Co-authored-by:
Minjie Wang <wmjlyjemaine@gmail.com> Co-authored-by:
Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com> Co-authored-by:
Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com> Co-authored-by:
Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by:
Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> Co-authored-by:
Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com> Co-authored-by:
rudongyu <ru_dongyu@outlook.com> Co-authored-by:
Quan Gan <coin2028@hotmail.com> Co-authored-by:
Vibhu Jawa <vibhujawa@gmail.com> * [Deprecation] Dataset Attributes (#4546) * Update * CI * CI * Update Co-authored-by:
Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> * [Example] Bug Fix (#4665) * Update * CI * CI * Update * Update Co-authored-by:
Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> * Update * Update (#4724) Co-authored-by:
Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> * change DGLHeteroGraph to DGLGraph in DOC * revert c change Co-authored-by:
Mufei Li <mufeili1996@gmail.com> Co-authored-by:
Chang Liu <chang.liu@utexas.edu> Co-authored-by:
nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com> Co-authored-by:
Xin Yao <xiny@nvidia.com> Co-authored-by:
Xin Yao <yaox12@outlook.com> Co-authored-by:
Israt Nisa <neesha295@gmail.com> Co-authored-by:
Israt Nisa <nisisrat@amazon.com> Co-authored-by:
Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal> Co-authored-by:
ndickson-nvidia <99772994+ndickson-nvidia@users.noreply.github.com> Co-authored-by:
Minjie Wang <wmjlyjemaine@gmail.com> Co-authored-by:
Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com> Co-authored-by:
Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com> Co-authored-by:
Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal> Co-authored-by:
Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal> Co-authored-by:
Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com> Co-authored-by:
rudongyu <ru_dongyu@outlook.com> Co-authored-by:
Quan Gan <coin2028@hotmail.com> Co-authored-by:
Vibhu Jawa <vibhujawa@gmail.com> Co-authored-by:
Ubuntu <ubuntu@ip-172-31-16-19.ap-northeast-1.compute.internal>
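Among the changes folded into the squashed commit above is the `exclude_self` option for k-NN graph construction (#4389). Below is a minimal hedged sketch of how that option might be used; the point cloud is illustrative.

```python
import dgl
import torch

# Eight points in a 3-D feature space; values are arbitrary illustration data.
x = torch.randn(8, 3)

# Build a 3-nearest-neighbour graph. With exclude_self=True (the option added
# by the commit above), a point's zero-distance match with itself is not
# turned into a self-loop edge.
g = dgl.knn_graph(x, k=3, exclude_self=True)
print(g.num_nodes(), g.num_edges())
```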
- 06 Nov, 2022 1 commit
Xin Yao authored
* add bf16 specializations * remove SWITCH_BITS * enable amp for bf16 * remove SWITCH_BITS for cpu kernels * enable bf16 based on CUDART * fix compiling for sm<80 * fix cpu build * enable unit tests * update doc * disable test for CUDA < 11.0 * address comments * address comments
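As a rough sketch of what the bf16 support above enables, a DGL module can now run under PyTorch autocast with `torch.bfloat16`; the toy graph, feature sizes, and module choice here are illustrative, and a bfloat16-capable GPU is assumed.

```python
import dgl
import dgl.nn as dglnn
import torch

# Toy graph and features; shapes are arbitrary for the sketch.
g = dgl.graph(([0, 1, 2], [1, 2, 0])).to("cuda")
feat = torch.randn(3, 16, device="cuda")
conv = dglnn.GraphConv(16, 8).to("cuda")

# Run the convolution under bfloat16 autocast, the AMP path enabled above.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    out = conv(g, feat)
print(out.dtype)  # torch.bfloat16 when autocast covers the kernels
```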
- 27 Oct, 2022 1 commit
Triston authored
* Update distributed-preprocessing.rst * Another typo for the 'writes' edge type
- 08 Oct, 2022 1 commit
Rhett Ying authored
* [doc] add doc for saving original node/edge IDs in dist part pipeline * move added docs into advanced topic * fix typo * refine
- 25 Aug, 2022 1 commit
Rhett Ying authored
- 22 Aug, 2022 1 commit
Mufei Li authored
* Update distributed-preprocessing.rst * Update Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>
- 18 Aug, 2022 1 commit
Mufei Li authored
* Update * rollback for partition_algo/random.py Co-authored-by: Ubuntu <ubuntu@ip-172-31-20-21.us-west-2.compute.internal>
- 17 Aug, 2022 1 commit
Minjie Wang authored
* dist index chapter * preproc chapter * rst * tools page * partition chapter * rst * hetero chapter * 7.1 step1 * add parmetis back * changed based on feedback * address comments
- 29 Jul, 2022 1 commit
Xin Yao authored
* add weighted sampling without replacement (A-Chao) * improve Algorithm A-Chao with block-wise prefix sum * correctly fill out_idxs * implement weighted sampling with replacement * small fix * merge host-side code of weighted/uniform sampling * enable unit tests for cuda weighted sampling * move thrust/cub wrapper to the cmake file * update docs accordingly * fix linting * fix linting * fix unit test * Bump external CUB/Thrust versions * Fix code style and update description of algorithm design * [Feature] GPU support weighted graph neighbor sampling commit by pengqirong(OPPO) * merge pengqirong's implementation * revert the change to cub and thrust * fix linting * use DeviceSegmentedSort for better performance * add more comments * add necessary notes * add necessary notes * resolve some comments * define THRUST_CUB_WRAPPED_NAMESPACE * fix doc Co-authored-by:彭齐荣 <657017034@qq.com>
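A hedged sketch of the feature described above, neighbour sampling weighted by a per-edge probability on a GPU graph; the toy graph and the `'w'` field name are made up for illustration.

```python
import dgl
import torch

# Toy graph with an (unnormalized) per-edge sampling weight.
g = dgl.graph(([0, 0, 0, 1, 1], [1, 2, 3, 2, 3]))
g.edata["w"] = torch.tensor([0.1, 0.5, 0.4, 0.9, 0.1])

# Move the graph and seeds to the GPU so the CUDA path added here is taken.
g = g.to("cuda")
seeds = torch.tensor([1, 2], device="cuda")

# Sample up to 2 in-neighbours per seed, weighted by 'w', without replacement.
sub = dgl.sampling.sample_neighbors(g, seeds, 2, prob="w", replace=False)
print(sub.num_edges())
```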
- 26 Jul, 2022 1 commit
Dewvin authored
* [Feature] Add CUDA Weighted Randomwalk Sampling * [Feature] Add CUDA Weighted Randomwalk Sampling * [Feature] Add CUDA Weighted Randomwalk Sampling * [Feature] Add CUDA Weighted Randomwalk Sampling * fix empty prob array && enable non-uniform for restart && enable unit tests * update doc and guide for randomwalk and pinsage * update comments Co-authored-by:
zhenliangqiu <ubuntu@ip-172-31-24-245.ap-southeast-1.compute.internal> Co-authored-by:
xiny <xiny@nvidia.com>
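In the same vein, a small sketch of a non-uniform random walk on a GPU graph as enabled by this commit; the edge-weight field `'w'` and the walk length are illustrative.

```python
import dgl
import torch

# Small directed cycle whose edge weights steer the walk.
g = dgl.graph(([0, 1, 2, 3], [1, 2, 3, 0]))
g.edata["w"] = torch.tensor([1.0, 0.5, 2.0, 0.1])
g = g.to("cuda")

# Length-3 weighted walks from nodes 0 and 2; `prob` names the edge field
# used for the non-uniform transition probabilities.
traces, _ = dgl.sampling.random_walk(
    g, torch.tensor([0, 2], device="cuda"), length=3, prob="w"
)
print(traces)  # one row of visited node IDs per starting node
```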
- 21 Jul, 2022 1 commit
Mufei Li authored
Co-authored-by: Ubuntu <ubuntu@ip-172-31-53-142.us-west-2.compute.internal>
- 02 Jun, 2022 2 commits
Xin Zhang authored
Co-authored-by: Mufei Li <mufeili1996@gmail.com>
Mufei Li authored
* Update * CI * Update * Update * Fix * Fix
- 26 Mar, 2022 1 commit
Minjie Wang authored
* wip: dataloading doc * update dataloading package doc and many others * lint
- 25 Mar, 2022 2 commits
István Ketykó authored
fix #3812 Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com> Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
Quan (Andy) Gan authored
* fix distributed multi-GPU example device * try Join * update version requirement in README * use model.join * fix docs Co-authored-by:Jinjing Zhou <VoVAllen@users.noreply.github.com>
- 28 Feb, 2022 1 commit
Xin Yao authored
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
- 27 Feb, 2022 1 commit
Quan (Andy) Gan authored
* huuuuge update * remove * lint * lint * fix * what happened to nccl * update multi-gpu unsupervised graphsage example * replace most of the dgl.mp.process with torch.mp.spawn * update if condition for use_uva case * update user guide * address comments * incorporating suggestions from @jermainewang * oops * fix tutorial to pass CI * oops * fix again Co-authored-by:Xin Yao <xiny@nvidia.com>
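For context on the `use_uva` path touched by this update, a hedged sketch of a minibatch loop with UVA-based sampling; the random graph, seed IDs, fanouts, and batch size are placeholders rather than the actual GraphSAGE example settings.

```python
import dgl
import torch

# Placeholder graph and training seeds; the real example loads a dataset.
g = dgl.rand_graph(1000, 5000)
train_nids = torch.arange(100)

sampler = dgl.dataloading.NeighborSampler([10, 10])
# use_uva=True keeps the graph in pinned host memory and lets sampling run
# from the GPU, the mode the example update above exercises.
dataloader = dgl.dataloading.DataLoader(
    g, train_nids, sampler,
    device="cuda", use_uva=True,
    batch_size=32, shuffle=True, drop_last=False,
)
for input_nodes, output_nodes, blocks in dataloader:
    pass  # the model's forward/backward on `blocks` would go here
```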
- 24 Feb, 2022 1 commit
Mufei Li authored
* Update * Update * Update * Fix * Update * Update * Update * Fix
- 23 Feb, 2022 1 commit
Rhett Ying authored
* [Fix] be able to parse ids if numeric and non-numeric values are used together * add required package info and cache note into docstring * duplicate node id is not allowed
- 17 Feb, 2022 4 commits
- 09 Feb, 2022 2 commits
Rhett Ying authored
* enable to launch multiple client groups sequentially * launch simultaneously is enabled * refine docstring * revert unnecessary change * [DOC] add doc for long live server * refine * refine doc * refine doc
Xin Yao authored
* implement pin_memory/unpin_memory/is_pinned for dgl.graph * update python docstring * update c++ docstring * add test * fix the broken UnifiedTensor * XPU_SWITCH for kDLCPUPinned * a rough version ready for testing * eliminate extra context parameter for pin/unpin * update train_sampling * fix linting * fix typo * multi-gpu uva sampling case * disable new format materialization for pinned graphs * update python doc for pin_memory_ * fix unit test * UVA sampling for link prediction * dispatch most csr ops * update graphsage example to combine uva sampling and UnifiedTensor * update graphsage example to combine uva sampling and UnifiedTensor * update graphsage example to combine uva sampling and UnifiedTensor * update doc * update examples * change unitgraph and heterograph's PinMemory to in-place * update examples for multi-gpu uva sampling * update doc * fix linting * fix cpu build * fix is_pinned for DistGraph * fix is_pinned for DistGraph * update graphsage unsupervised example * update doc for gpu sampling * update some check for sampling device switching * fix linting * adapt for new dataloader * fix linting * fix * fix some name issue * adjust device check * add unit test for uva sampling & fix some zero_copy bug * fix linting * update num_threads in graphsage examples Co-authored-by:
Quan (Andy) Gan <coin2028@hotmail.com> Co-authored-by:
Jinjing Zhou <VoVAllen@users.noreply.github.com>
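A small sketch of the in-place pinning API this commit introduces; the toy graph is illustrative, and a CUDA-capable build is assumed.

```python
import dgl
import torch

g = dgl.graph(([0, 1, 2], [1, 2, 0]))

# Pin the CPU graph structure into page-locked memory in place so CUDA
# kernels can read it directly (UVA), then query and later release the pin.
g.pin_memory_()
print(g.is_pinned())  # True

# With the graph pinned, sampling can be driven from GPU seed nodes.
sub = dgl.sampling.sample_neighbors(g, torch.tensor([1], device="cuda"), 1)

g.unpin_memory_()
```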
- 30 Jan, 2022 1 commit
Quan (Andy) Gan authored
* initial update * more * more * multi-gpu example * cluster gcn, finalize homogeneous * more explanation * fix * bunch of fixes * fix * RGAT example and more fixes * shadow-gnn sampler and some changes in unit test * fix * wth * more fixes * remove shadow+node/edge dataloader tests for possible ux changes * lints * add legacy dataloading import just in case * fix * update pylint for f-strings * fix * lint * lint * lint again * cherry-picking commit fa9f494 * oops * fix * add sample_neighbors in dist_graph * fix * lint * fix * fix * fix * fix tutorial * fix * fix * fix * fix warning * remove debug * add get_foo_storage apis * lint
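One piece of this dataloading overhaul is the ShaDow-GNN style sampler; a hedged sketch of how it plugs into the refactored `dgl.dataloading` package (graph, fanouts, and batch size are illustrative).

```python
import dgl
import torch

g = dgl.rand_graph(500, 3000)
train_nids = torch.arange(64)

# ShaDow-style sampling: each minibatch yields one subgraph induced around
# the seed nodes rather than a stack of message-flow-graph blocks.
sampler = dgl.dataloading.ShaDowKHopSampler([5, 5])
dataloader = dgl.dataloading.DataLoader(
    g, train_nids, sampler, batch_size=16, shuffle=True
)
for input_nodes, output_nodes, subgraph in dataloader:
    pass  # run the model on `subgraph`, read predictions at `output_nodes`
```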
- 26 Jan, 2022 1 commit
Jinjing Zhou authored
Fix #3626
- 25 Jan, 2022 1 commit
Jeremy Goh authored
* Fix ref to message-passing guide * Fix pygments and spacing * Update build documentation steps in README.md * Use links * Adjust parameters in SAGEConv docstring in same order as init * Fix spelling error * Change doc link
- 18 Nov, 2021 1 commit
Mufei Li authored
* Update * Update * CI
- 14 Nov, 2021 1 commit
Yang Su authored
* Update graph-heterogeneous.rst `tensor([0, 1, 2, 0, 1, 2])` should be output instead of code * Update message-api.rst `updata_all_example()` should be `update_all_example()` * Update message-efficient.rst `cat_feat` need to concatenate with `dim=1` for the # edge features to match # edges * Update nn-construction.rst all `max_pool` in the aggregator type of `SAGEConv` should be `pool` instead * Update graph-heterogeneous.rst `tensor([0, 1, 2, 0, 1, 2])` should be output instead of code * Update message-api.rst `updata_all_example()` should be `update_all_example()` * Update message-efficient.rst `cat_feat` need to concatenate with `dim=1` for the # edge features to match # edges * Update nn-construction.rst all `max_pool` in the aggregator type of `SAGEConv` should be `pool` instead * Update nn-forward.rst all `max_pool` in the aggregator type of `SAGEConv` should be `pool` instead * Update nn-forward.rst all `max_pool` in the aggregator type of `SAGEConv` should be `pool` instead Co-authored-by:zhjwy9343 <6593865@qq.com>
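For reference on the corrected aggregator name mentioned above, a tiny hedged example; feature sizes and the toy graph are arbitrary.

```python
import dgl
import dgl.nn as dglnn
import torch

g = dgl.graph(([0, 1, 2], [1, 2, 0]))
feat = torch.randn(3, 10)

# 'pool' (not 'max_pool') is the valid SAGEConv aggregator type the doc fix
# points out.
conv = dglnn.SAGEConv(in_feats=10, out_feats=4, aggregator_type="pool")
out = conv(g, feat)
print(out.shape)  # torch.Size([3, 4])
```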