- 18 May, 2022 1 commit
-
-
Rhett Ying authored
* [Dist][BugFix] enable sampling on bipartite
* add comments for tests
-
- 17 May, 2022 4 commits
-
-
Xin Zhang authored
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
-
Mufei Li authored
-
paoxiaode authored
* Change the curand_init parameter
* Change the curand_init parameter
* commit
* commit
* change the curandState and launch dim of CSRRowwiseSample kernel
* commit
* keep _CSRRowWiseSampleReplaceKernel in sync
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
-
ndickson-nvidia authored
* Added half_(), float_(), and double_() functions to DGLHeteroGraph, HeteroNodeDataView, and HeteroEdgeDataView, for converting floating-point tensor data to float16, float32, or float64 precision
* Extracted out private functions for floating-point type conversion, to reduce code duplication
* Added test for floating-point data conversion functions, half_(), float_(), and double_()
* Moved half_(), float_(), and double_() functions from HeteroNodeDataView and HeteroEdgeDataView to Frame class
* Updated test_float_cast() to use dgl.heterograph instead of dgl.graph
* Added to CONTRIBUTORS.md
* Changed data type conversion to be deferred until the data is accessed, to avoid redundant conversions of data that isn't used
* Addressed issues flagged by linter
* Worked around a bug in the old version of mxnet that's currently used for DGL testing
* Only defer Column data type conversion if there is a pending device transfer or index sampling to be done. This is expected to be the desired behaviour based on discussions of a few use cases, as described in the comments.
* Moved floating-point feature data conversion functions to dgl.transforms.functional
* Changed them from in-place behaviour to shallow copy (out-of-place) behaviour
* Fixed linter issues
* Removed lines that unintentionally added to_half, to_float, and to_double to DGLHeteroGraph
* Moved _init_api line to the end of the file again
* Removed one of the two leading underscores from Frame.__astype_float, making it not fully private
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
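The shallow-copy (out-of-place) behaviour described above can be sketched with a small, hypothetical helper (an illustration, not DGL's actual API): it returns a new feature dict in which only floating-point tensors are cast to the target dtype.

```python
import torch

def cast_float_features(feats, dtype=torch.float16):
    """Return a shallow copy of `feats` with floating-point tensors cast to
    `dtype`; integer tensors (e.g. labels, IDs) are shared unchanged."""
    out = {}
    for name, tensor in feats.items():
        if tensor.is_floating_point():
            out[name] = tensor.to(dtype)  # new tensor; the original is untouched
        else:
            out[name] = tensor
    return out

feats = {"h": torch.randn(4, 8), "label": torch.arange(4)}
half_feats = cast_float_features(feats, torch.float16)
assert half_feats["h"].dtype == torch.float16
assert half_feats["label"].dtype == torch.int64
```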
-
- 16 May, 2022 4 commits
-
-
nv-dlasalle authored
Prevent users from attempting to pin PyTorch non-contiguous tensors or views only encompassing part of a tensor. (#3992)
* Disable pinning non-contiguous memory
* Prevent views from being converted for write
* Fix linting
* Add unit tests
* Improve error message for users
* Switch to pytest function
* exclude mxnet and tensorflow from inplace pinning
* Add skip
* Restrict to pytorch backend
* Use backend to retrieve device
* Fix capitalization in decorator
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
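A minimal sketch of the rule this enforces, using only public PyTorch APIs (an illustration, not DGL's implementation): pinning is refused for non-contiguous tensors and for views that cover only part of their underlying storage.

```python
import torch

def check_pinnable(t):
    """Raise if `t` cannot safely be pinned in place (sketch of the check)."""
    if not t.is_contiguous():
        raise ValueError("Cannot pin a non-contiguous tensor; call .contiguous() first.")
    # A view spanning only part of the storage would force pinning data
    # that other tensors may still own or move.
    if t.numel() != t.storage().size():
        raise ValueError("Cannot pin a view that covers only part of its storage.")

x = torch.randn(8, 8)
check_pinnable(x)           # OK: contiguous and owns its whole storage
try:
    check_pinnable(x[:4])   # partial view of the same storage
except ValueError as err:
    print(err)
```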
-
nv-dlasalle authored
* Explicitly unpin tensoradapter allocated arrays
* Undo unrelated change
* Add unit test
* update unit test
-
Mufei Li authored
* Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Fix * Update * Update * Update
-
Xin Yao authored
* remove unnecessary induced vertices in EdgeSubgraph
* add unit test
-
- 13 May, 2022 2 commits
-
-
Quan (Andy) Gan authored
* fix
* revert
* Update dataloader.py
-
Rhett Ying authored
-
- 12 May, 2022 1 commit
-
-
nv-dlasalle authored
-
- 11 May, 2022 4 commits
-
-
Vikram Sharma authored
Based on the pull request: https://github.com/dmlc/dgl/pull/3983
-
Vikram Sharma authored
With the emergence of new ISAs (such as ARM and RISC-V), keeping USE_AVX ON by default makes the default build instructions fail. Fundamentally, DGL does not require AVX to function; AVX is mainly needed to enable optimizations. The proposal is therefore to turn it off by default, and in the build instructions users with AVX-capable hardware can enable it with `cmake .. -DUSE_AVX=ON`.
Co-authored-by: Zihao Ye <expye@outlook.com>
-
Rhett Ying authored
* [Dist] Enable maximum try times for socket backend via DGL_DIST_MAX_TRY_TIMES
* reset env before/after test
* print log for info when trying to connect
* fix
* print log in python instead of cpp
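A rough sketch of the retry pattern this enables, using only the standard library (the helper itself is illustrative; only the DGL_DIST_MAX_TRY_TIMES variable name comes from the commit):

```python
import os
import socket
import time

def connect_with_retries(host, port, delay=1.0):
    """Try to connect up to DGL_DIST_MAX_TRY_TIMES times before giving up."""
    max_tries = int(os.environ.get("DGL_DIST_MAX_TRY_TIMES", "10"))
    for attempt in range(1, max_tries + 1):
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError as err:
            print(f"connect attempt {attempt}/{max_tries} failed: {err}")
            time.sleep(delay)
    raise ConnectionError(f"could not reach {host}:{port} after {max_tries} tries")
```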
-
Quan (Andy) Gan authored
* rename
* Update node_classification.py
* more fixes...
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
-
- 10 May, 2022 2 commits
-
-
Rhett Ying authored
-
Xinger authored
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
-
- 09 May, 2022 2 commits
-
-
ndickson-nvidia authored
* Fixed race condition bug in distributed/optim/pytorch/sparse_optim.py's SparseAdam::update, corresponding with the bug fixed in the non-distributed version in https://github.com/dmlc/dgl/pull/3013, though using the newer Event-based approach from that corresponding function. The race condition would often result in NaNs, like the previously fixed bug (https://github.com/dmlc/dgl/issues/2760).
* Fixed race condition bug in SparseAdagrad::update corresponding with the one fixed in SparseAdam::update in the previous commit. Same info applies.
* Fixed typo in all copies of a repeatedly-copied comment near the bug fixed 3 commits ago, checking all implementations nearby for a corresponding bug. (All of them appear to have been fixed as of 2 commits ago.)
* Removed trailing whitespace
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
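The Event-based approach mentioned above follows the standard PyTorch stream-synchronization pattern sketched below (an illustration of the technique, not the actual sparse_optim.py code): the producing stream records an event after its write, and the consuming stream waits on that event before reading, so the update can no longer race ahead of the write and produce NaNs.

```python
import torch

if torch.cuda.is_available():
    producer = torch.cuda.Stream()
    consumer = torch.cuda.Stream()
    grads = torch.zeros(1024, device="cuda")

    with torch.cuda.stream(producer):
        grads.add_(1.0)               # asynchronous write on the producer stream
        done = torch.cuda.Event()
        done.record(producer)         # mark the point where the write completes

    with torch.cuda.stream(consumer):
        consumer.wait_event(done)     # wait until the write is visible to this stream
        update = grads * 0.9          # safe read: no race with the producer's write
```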
-
Mufei Li authored
-
- 08 May, 2022 1 commit
-
-
Quan (Andy) Gan authored
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
-
- 07 May, 2022 1 commit
-
-
RecLusIve-F authored
* [Model] P-GNN
* update
* [Example] P-GNN
* Update README.md
* Add NodeFeatureMasking and NormalizeFeatures
* Update
* Update transforms.rst
* Update
* Update
* Update
* Update test_transform.py
* Update
* Update
* Update test_transform.py
* Update module.py
* Update module.py
* Update module.py
Co-authored-by: Mufei Li <mufeili1996@gmail.com>
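A NormalizeFeatures-style transform typically row-normalizes node features; a rough illustration under that assumption (not the DGL implementation):

```python
import torch

def normalize_features(feat, eps=1e-12):
    """Scale each feature row so its entries sum to 1; all-zero rows stay zero."""
    row_sum = feat.abs().sum(dim=1, keepdim=True).clamp(min=eps)
    return feat / row_sum

feat = torch.tensor([[1.0, 3.0], [0.0, 0.0], [2.0, 2.0]])
print(normalize_features(feat))
```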
-
- 06 May, 2022 1 commit
-
-
Mufei Li authored
* Update * Update * Fix * Update * CI * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update * Update
-
- 05 May, 2022 2 commits
-
-
Rhett Ying authored
-
Krzysztof Sadowski authored
-
- 29 Apr, 2022 3 commits
-
-
Hengrui Zhang authored
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
Co-authored-by: Zihao Ye <expye@outlook.com>
-
Rhett Ying authored
-
Rhett Ying authored
* [BugFix] fix job status in master CI
* finalize
-
- 28 Apr, 2022 2 commits
-
-
Rhett Ying authored
* [CI] fix job status
* fetch from master
-
Daniil Sizov authored
* PR3355 + CSR conversion workaround
* Remove debug code
* Fix convention errors
* Remove wrongly added code section during merge
* Update to reflect dataloading changes
* Fix missing changes
* Remove comment
* Fix linter errors
* Fix trailing whitespace
* Add wrapper around worker init function
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
-
- 27 Apr, 2022 3 commits
-
-
Rhett Ying authored
* [Feature] enable socket net_type for rpc
* fix lint
* fix lint
* fix build issue on windows
* fix test failure on windows
* fix test failure
* fix cpp unit test failure
* net_type blocking max_try_times
* fix other comments
* fix lint
* fix comment
* fix lint
* fix cpp
-
Quan (Andy) Gan authored
* fix
* fix
Co-authored-by: Xin Yao <xiny@nvidia.com>
-
Rhett Ying authored
-
- 26 Apr, 2022 1 commit
-
-
ayasar70 authored
* Based on issue #3436. Improving _SegmentCopyKernel's GPU utilization by switching to nonzero-based thread assignment
* fixing lint issues
* Update cub for cuda 11.5 compatibility (#3468)
* fixing type mismatch
* tx guaranteed to be smaller than nnz. Hence removing last check
* minor: updating comment
* adding three unit tests for csr slice method to cover some corner cases
* timing repeatkernel
* clean
* clean
* clean
* updating _SegmentMaskColKernel
* Working on requests: removing sorted array check and adding comments to utility functions
* fixing lint issue
* Optimizing disjoint union kernel
* Trying to resolve compilation issue on CI
* [EMPTY] Relevant commit message here
* applying revision requests on cpu/disjoint_union.cc
* removing unnecessary casts
* remove extra space
Co-authored-by: Abdurrahman Yasar <ayasar@nvidia.com>
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
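The nonzero-based thread assignment mentioned in the first bullet can be sketched conceptually in NumPy (the real change is a CUDA kernel): instead of assigning a group of threads to every row, each logical thread handles one nonzero and recovers its owning row with a binary search over the CSR indptr array, so empty or short rows no longer leave threads idle.

```python
import numpy as np

# CSR row pointers; row 1 is empty and row 2 holds five nonzeros.
indptr = np.array([0, 2, 2, 7, 9])
nnz = indptr[-1]

# One logical "thread" per nonzero: find the owning row by binary search
# on indptr rather than iterating over rows.
nnz_ids = np.arange(nnz)
rows = np.searchsorted(indptr, nnz_ids, side="right") - 1
print(rows)  # [0 0 2 2 2 2 2 3 3]
```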
-
- 25 Apr, 2022 2 commits
-
-
Sharique Shamim authored
Co-authored-by: Mufei Li <mufeili1996@gmail.com>
-
Mufei Li authored
* Update
* Update
* Update
* Update
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
-
- 24 Apr, 2022 1 commit
-
-
Daniil Sizov authored
* Fix benchmark time measurement
* Reduce batch size for bench_rgcn_homogeneous_ns: am data sample size is too small for 1024 batch size
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
-
- 23 Apr, 2022 1 commit
-
-
Serge Panev authored
Signed-off-by: Serge Panev <spanev@nvidia.com>
Co-authored-by: Mufei Li <mufeili1996@gmail.com>
-
- 22 Apr, 2022 2 commits
-
-
Quan (Andy) Gan authored
* fix
* oops
Co-authored-by: Mufei Li <mufeili1996@gmail.com>
-
Jinjing Zhou authored
refactor CI report and log
-