- 06 Sep, 2022 1 commit
Chang Liu authored
* Use an internal CUDA stream for CopyDataFromTo
* Small whitespace fix
* Fix to compile
* Make the stream optional in CopyData so it compiles
* Fix lint issue
* Update cub functions to use the internal stream
* Lint check
* Update CopyTo/CopyFrom/CopyFromTo to use the internal stream
* Address comments
* Fix the backward CUDA stream
* Avoid overloading CopyFromTo()
* Minor comment update
* Overload CopyDataFromTo in the CUDA device API
Co-authored-by: xiny <xiny@nvidia.com>
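A minimal sketch of the pattern this commit describes, assuming a hypothetical device-API class (`CUDADeviceAPI`, `GetCopyStream` are illustrative names, not DGL's real API): copies run on a lazily created internal stream whenever the caller passes none, instead of serializing against unrelated work on the default stream.

```cuda
#include <cuda_runtime.h>

struct CUDADeviceAPI {
  cudaStream_t copy_stream_ = nullptr;  // internal stream, created lazily

  cudaStream_t GetCopyStream() {
    if (copy_stream_ == nullptr) cudaStreamCreate(&copy_stream_);
    return copy_stream_;
  }

  // The stream argument is optional: absent one, fall back to the internal
  // stream rather than stream 0, mirroring "make stream optional" above.
  void CopyDataFromTo(const void* src, void* dst, size_t nbytes,
                      cudaStream_t stream = nullptr) {
    cudaStream_t s = stream ? stream : GetCopyStream();
    cudaMemcpyAsync(dst, src, nbytes, cudaMemcpyDefault, s);
    cudaStreamSynchronize(s);  // keep the call synchronous for callers
  }
};
```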
- 05 Sep, 2022 1 commit
peizhou001 authored
* Enable turning libxsmm on/off at runtime by adding a global config and related API
Co-authored-by: Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal>
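A sketch of the shape of such a runtime switch, assuming a hypothetical `GlobalConfig` singleton (DGL's actual config and API names differ): dispatch code consults a mutable flag instead of a compile-time macro.

```cuda
#include <atomic>

class GlobalConfig {
 public:
  static GlobalConfig& Get() {
    static GlobalConfig inst;  // process-wide singleton
    return inst;
  }
  void EnableLibxsmm(bool enable) { use_libxsmm_.store(enable); }
  bool IsLibxsmmEnabled() const { return use_libxsmm_.load(); }

 private:
  std::atomic<bool> use_libxsmm_{true};  // on by default
};

// Kernel dispatch checks the flag at call time, so users can toggle it
// without rebuilding.
void SpMMDispatch() {
  if (GlobalConfig::Get().IsLibxsmmEnabled()) {
    // ... libxsmm-accelerated path ...
  } else {
    // ... fallback path ...
  }
}
```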
- 12 Aug, 2022 1 commit
Xin Yao authored
* Change CUDA_MAX_NUM_THREADS to 256
* Change the grid configuration
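For context, a sketch of the kind of launch configuration being tuned; the constant and kernel are illustrative. A common motivation for 256-thread blocks is that smaller blocks give the scheduler more flexibility than 1024-thread blocks under register and shared-memory pressure; a grid-stride loop keeps the kernel correct for any grid size.

```cuda
#include <cstdint>
#include <cuda_runtime.h>

constexpr int kCudaMaxNumThreads = 256;  // the new, smaller block size

inline int NumBlocks(int64_t n, int block) {
  return static_cast<int>((n + block - 1) / block);
}

// Grid-stride loop: a capped grid still covers all n elements, so the grid
// configuration can change independently of problem size.
__global__ void FillKernel(float* out, int64_t n, float value) {
  for (int64_t i = (int64_t)blockIdx.x * blockDim.x + threadIdx.x; i < n;
       i += (int64_t)gridDim.x * blockDim.x) {
    out[i] = value;
  }
}

void Fill(float* out, int64_t n, float value, cudaStream_t stream) {
  FillKernel<<<NumBlocks(n, kCudaMaxNumThreads), kCudaMaxNumThreads, 0,
               stream>>>(out, n, value);
}
```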
- 09 Aug, 2022 1 commit
Xin Yao authored
- 01 Aug, 2022 1 commit
Xin Yao authored
* Enable use of the weighted neighbor sampler and biased random walk
* Add unit tests
* Fix for MXNet/TF
* Fix typo
- 29 Jul, 2022 1 commit
Xin Yao authored
* Add weighted sampling without replacement (Algorithm A-Chao)
* Improve Algorithm A-Chao with a block-wise prefix sum
* Correctly fill out_idxs
* Implement weighted sampling with replacement
* Small fix
* Merge the host-side code of weighted/uniform sampling
* Enable unit tests for CUDA weighted sampling
* Move the thrust/cub wrapper to the cmake file
* Update the docs accordingly
* Fix linting
* Fix unit test
* Bump external CUB/Thrust versions
* Fix code style and update the description of the algorithm design
* [Feature] GPU support for weighted graph neighbor sampling, commit by pengqirong (OPPO)
* Merge pengqirong's implementation
* Revert the changes to cub and thrust
* Use DeviceSegmentedSort for better performance
* Add more comments and necessary notes
* Resolve some comments
* Define THRUST_CUB_WRAPPED_NAMESPACE
* Fix doc
Co-authored-by: 彭齐荣 <657017034@qq.com>
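For reference, the core of Algorithm A-Chao named above: a single pass keeps a k-slot reservoir, and neighbor i replaces a uniformly chosen slot with probability k*w_i/W_i, where W_i is the running weight sum. A minimal single-thread device sketch that assumes deg >= k and omits the commit's block-wise prefix sums, CUB segmented primitives, and the full algorithm's handling of overweight items:

```cuda
#include <curand_kernel.h>

// Weighted reservoir sampling (A-Chao) of k out of deg neighbors.
__device__ void AChaoSample(const float* weight, int deg, int k,
                            int* out, curandState* rng) {
  float wsum = 0.f;
  for (int i = 0; i < k; ++i) {      // fill the reservoir with the first k
    out[i] = i;
    wsum += weight[i];
  }
  for (int i = k; i < deg; ++i) {    // stream over the remaining neighbors
    wsum += weight[i];
    float p = k * weight[i] / wsum;  // inclusion probability of neighbor i
    if (curand_uniform(rng) < p) {
      out[curand(rng) % k] = i;      // evict a uniformly chosen slot
    }
  }
}
```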
- 15 Jul, 2022 1 commit
Quan (Andy) Gan authored
- 01 Jul, 2022 2 commits
Rhett Ying authored
Rhett Ying authored
* [Feature] Extend sort_csr/csc_by_tag to edges
* Fix a test failure in TensorFlow
* Refine sorting by edges
* Fix docstring
* Remove unnecessary memory
Co-authored-by: Xin Yao <xiny@nvidia.com>
- 27 Jun, 2022 1 commit
ndickson-nvidia authored
* Added missing `__half` specializations of `DLDataTypeTraits`, `IndexSelect`, `Full`, `Scatter_`, `CSRGetData`, `CSRMM`, `CSRSum`, and `IndexSelectCPUFromGPU`
* Fixed a casting issue in `_LinearSearchKernel` that was preventing it from supporting `__half`
* Added `#if`'d-out specializations of `CSRGEMM`, `CSRGEAM`, and `Xgeam`, which would require functions that cublas does not currently provide
* Added more specific error messages for unimplemented FP16 specializations of `Xgeam`, `CSRGEMM`, and `CSRGEAM`
* Added the missing instantiation of `DLDataTypeTraits<__half>::dtype`
* Fixed a linter error; added a clearer comment explaining why the cast to long long is necessary
* Worked around a compile error in one particular setup, where `__half` can't be constructed on the host side
* Fixed linter formatting errors
* Applied recommended changes to comments and to logging in the FP16 specializations
* Changed the existing `Xgeam` behavior for unsupported data types from LOG(INFO) to LOG(FATAL)
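The shape of the missing-specialization fixes, sketched under assumptions: the trait is modeled on the `DLDataTypeTraits<__half>::dtype` mentioned above, and the real definition lives in DGL's headers. Mapping `__half` onto its DLPack dtype is what lets FP16 arrays flow through the same dispatch machinery as other types.

```cuda
#include <cuda_fp16.h>
#include <dlpack/dlpack.h>  // as vendored in DGL's third_party

template <typename T>
struct DLDataTypeTraits;  // primary template left undefined on purpose

template <>
struct DLDataTypeTraits<__half> {
  static constexpr DLDataType dtype{kDLFloat, 16, 1};  // {code, bits, lanes}
};
```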
- 24 Jun, 2022 1 commit
nv-dlasalle authored
* Add UVA by default to embedding
* More updates; update the optimizer
* Add new UVA functions
* Expose a new pinned-memory function
* Add unit tests
* Update formatting
* Fix unit test
* Handle the auto-UVA case when training is on CPU
* Allow per-embedding decisions on whether to use UVA
* Address sparse_optim.py comments
* Remove unused templates
* Update unit test
* Use DGL-allocated memory for pinning
* Allow automatic unpinning
* Workaround for D2H copy with a different dtype
* Fix linting
* Update the error message
* Update copyright
Co-authored-by: Xin Yao <xiny@nvidia.com>
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
- 23 Jun, 2022 1 commit
Triston authored
* Fix a cub compile error for CUDA 11.5
* Fix signed/unsigned comparisons ("comparison of integer expressions of different signedness") in coo_sort.cu
* Fix signed/unsigned comparisons in cuda_compact_graph.cu
* Remove a never-referenced variable in spmm.cu
* Fix signed/unsigned comparisons in rowwise_pick.h
* Fix signed/unsigned comparisons in choice.cc
* Remove the never-referenced variable col_data in spmat_op_impl_coo.cc
* Remove the never-referenced variable allowed in global_uniform.cc
* Fix signed/unsigned comparisons in graph.cc
* Fix signed/unsigned comparisons in graph_apis.cc
* Fix the unused ctx variable in ndarray_partition.cc for CPU-only builds
* Fix signed/unsigned comparisons in libra_partition.cc
* Fix signed/unsigned comparisons in graph_op.cc
Co-authored-by: Triston Cao <tristonc@nvidia.com>
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
- 14 Jun, 2022 1 commit
nv-dlasalle authored
* Disable non-atomic atomic operations
* Improve the error message
* Make the error message more friendly
- 11 Jun, 2022 1 commit
Xin Yao authored
* Wrap all CUDA runtime API/CUB calls with a macro
* Remove the explicit cudaMalloc usage in favor of AllocWorkspace
* Fix typo
Co-authored-by: Israt Nisa <neesha295@gmail.com>
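A sketch of the wrapping pattern, assuming a `CUDA_CALL`-style macro; DGL's actual macro reports through its own logging, whereas this sketch simply aborts. The point is that every runtime/CUB return code gets checked at the call site instead of being dropped.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define CUDA_CALL(func)                                              \
  do {                                                               \
    cudaError_t err = (func);                                        \
    if (err != cudaSuccess) {                                        \
      fprintf(stderr, "CUDA error %s at %s:%d\n",                    \
              cudaGetErrorString(err), __FILE__, __LINE__);          \
      abort();                                                       \
    }                                                                \
  } while (0)

void Demo() {
  void* buf = nullptr;
  CUDA_CALL(cudaMalloc(&buf, 1024));  // wrapped instead of called bare
  CUDA_CALL(cudaFree(buf));
}
```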
- 07 Jun, 2022 1 commit
ndickson-nvidia authored
* Added a specialization of the cublasGemm function for `__half`, to address https://github.com/dmlc/dgl/issues/3988
* Added a USE_FP16 guard
* Added test cases to test_segment_mm covering the newly added FP16 specialization of cublasGemm
* Replaced the for loop in test_segment_mm with pytest.mark.parametrize, as recommended
Co-authored-by: Xin Yao <xiny@nvidia.com>
- 06 Jun, 2022 3 commits
ndickson-nvidia authored
* Added support for common operations on FP16 (`half` or `__half`) for older GPU architectures
* Fixed an issue with the previous check for FP16 support
* Removed FP16 type checks, since they should no longer be needed
* Fixed AtomicAdd to be atomic for `float` and `double` on old GPU architectures; unfortunately, atomicCAS on unsigned short is unavailable before compute capability 7.0, so `half` has to stay non-atomic on old GPUs
* Fixed the non-atomic version of `AtomicAdd<half>` for older GPUs to return the old value instead of the new one
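The `double` case follows the canonical CUDA emulation for pre-sm_60 parts: an `atomicCAS` loop that, like the native atomics, returns the old value (the return-value bug called out above). A sketch:

```cuda
__device__ double AtomicAddDouble(double* addr, double val) {
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ < 600
  unsigned long long* p = reinterpret_cast<unsigned long long*>(addr);
  unsigned long long old = *p, assumed;
  do {
    assumed = old;
    double next = __longlong_as_double(assumed) + val;
    old = atomicCAS(p, assumed, __double_as_longlong(next));
  } while (assumed != old);         // retry if another thread won the race
  return __longlong_as_double(old); // OLD value, matching native atomicAdd
#else
  return atomicAdd(addr, val);      // native from sm_60 onward
#endif
}
```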
Quan (Andy) Gan authored
Co-authored-by: Xin Yao <xiny@nvidia.com>
Xin Yao authored
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
Co-authored-by: Israt Nisa <neesha295@gmail.com>
- 28 May, 2022 1 commit
Quan (Andy) Gan authored
- 26 May, 2022 1 commit
nv-dlasalle authored
* Enable FP16 for GPU builds in CI
* Limit default GPU archs to Pascal and above
* Disable FP16 dispatching for CUDA architectures below 60
* Fix linting
* Fix typos
- 17 May, 2022 1 commit
paoxiaode authored
* Change the curand_init parameters
* Change the curandState type and the launch dimensions of the CSRRowwiseSample kernel
* Keep _CSRRowWiseSampleReplaceKernel in sync
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
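For context, the per-thread RNG setup these commits tune; the seed/subsequence/offset choices trade statistical independence against the cost of state initialization, and a Philox state is cheap to initialize, which matters when every sampled edge touches the RNG. A sketch (the concrete state type and arguments in the commit may differ):

```cuda
#include <curand_kernel.h>

__global__ void UniformKernel(unsigned long long seed, float* out, int n) {
  int tid = blockIdx.x * blockDim.x + threadIdx.x;
  if (tid >= n) return;
  curandStatePhilox4_32_10_t state;
  // Same seed everywhere; the thread id as subsequence gives each thread
  // an independent stream.
  curand_init(seed, /*subsequence=*/tid, /*offset=*/0, &state);
  out[tid] = curand_uniform(&state);
}
```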
- 16 May, 2022 1 commit
Xin Yao authored
* Remove unnecessary induced vertices in EdgeSubgraph
* Add unit test
- 26 Apr, 2022 1 commit
ayasar70 authored
* Based on issue #3436: improve _SegmentCopyKernel's GPU utilization by switching to nonzero-based thread assignment
* Fix lint issues
* Update cub for CUDA 11.5 compatibility (#3468)
* Fix a type mismatch
* tx is guaranteed to be smaller than nnz, so remove the last check
* Minor: update a comment
* Add three unit tests for the CSR slice method to cover some corner cases
* Time RepeatKernel; cleanup
* Update _SegmentMaskColKernel
* Per review requests: remove the sorted-array check and add comments to utility functions
* Fix lint issue
* Optimize the disjoint-union kernel
* Resolve a compilation issue on CI
* Apply review revisions to cpu/disjoint_union.cc
* Remove unnecessary casts and an extra space
Co-authored-by: Abdurrahman Yasar <ayasar@nvidia.com>
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
- 10 Mar, 2022 1 commit
paoxiaode authored
* Change the curand_init parameters
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
- 28 Feb, 2022 1 commit
Quan (Andy) Gan authored
* Split files
* Fix
- 23 Feb, 2022 2 commits
sanchit-misra authored
Minjie Wang authored
* WIP: TypedLinear and new RelGraphConv
* Further simplify RGCN
* A bunch of performance tweaks; add basic CPU support
* Update segment MM; WIP: segment.cu
* New backward kernel works
* Fix a bunch of kernel bugs; leave idx_a for the future
* Add nn tests for TypedLinear and RGCN
* Fix a corner-case bug; update the RGCN README
* Docs; fix C++ lint
* WIP: HGTConv; presorted flag for RGCN
* HGT code and unit tests; WIP: some fixes on reorder_graph
* Better TypedLinear initialization
* Fix unit tests and lint; add docstrings
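The operation at the heart of TypedLinear, y_i = x_i * W[t_i] (each node's feature times the weight of its type), as a naive one-thread-per-output sketch; the commit's actual kernels use segmented matmuls over presorted node types instead.

```cuda
// x: [n, in_dim], W: [num_types, in_dim, out_dim], types: [n], y: [n, out_dim]
__global__ void TypedLinearKernel(const float* x, const float* W,
                                  const int* types, float* y,
                                  int n, int in_dim, int out_dim) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx >= n * out_dim) return;
  int i = idx / out_dim, o = idx % out_dim;
  const float* Wt = W + (size_t)types[i] * in_dim * out_dim;  // type's weight
  float acc = 0.f;
  for (int k = 0; k < in_dim; ++k)
    acc += x[i * in_dim + k] * Wt[k * out_dim + o];
  y[idx] = acc;
}
```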
- 21 Feb, 2022 1 commit
Quan (Andy) Gan authored
* Assorted dataloader fixes and updates
* Temporarily revert - will fix in another PR
* Skip the MXNet test
* Address comments; fix DDP
* Fix edge dataloader exclusion problems
* Add a use_uvm option
* Add evaluation for Cluster-GCN and DDP
* Move sanity checks to only support DGLGraphs
* PyTorch Lightning compatibility fixes
* Disable a test; add docstrings
* Track down a memory leak and make the loader memory-efficient
* Temporarily disable ForkingPickler
* Refine the exclude interface
* Fix tutorials
* Fix graph duplication in CPU dataloader workers
* Revert "lint" (commit 805484dd553695111b5fb37f2125214a6b7276e9)
* Revert "lint" (commit 0bce411b2b415c2ab770343949404498436dc8b2)
* Revert "fix graph duplication in CPU dataloader workers" (commit 9e3a8cf34c175d3093c773f6bb023b155f2bd27f)
Co-authored-by: xiny <xiny@nvidia.com>
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
- 18 Feb, 2022 1 commit
ayasar70 authored
* Based on issue #3436: improve _SegmentCopyKernel's GPU utilization by switching to nonzero-based thread assignment
* Fix lint issues
* Update cub for CUDA 11.5 compatibility (#3468)
* Fix a type mismatch
* tx is guaranteed to be smaller than nnz, so remove the last check
* Minor: update a comment
* Add three unit tests for the CSR slice method to cover some corner cases
* Time RepeatKernel; cleanup
* Update _SegmentMaskColKernel
* Per review requests: remove the sorted-array check and add comments to utility functions
* Fix lint issue
Co-authored-by: Abdurrahman Yasar <ayasar@nvidia.com>
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
- 15 Feb, 2022 1 commit
Israt Nisa authored
* Working cublasGemm
* Benchmark high-mem/low-mem; err gather_mm output
* CUDA kernel for a bmm-like kernel
* Remove the CPU copy of E_per_Rel
* Benchmark code from Minjie
* Fix cublas results in sorted gather_mm
* Use GPU shared memory in unsorted gather_mm
* Add an optimized version of gather_mm_unsorted
* Init gather_mm_scatter; add cublas transpose support
* Fix h_offset for multiple relations
* Backward unit tests; cublas support to transpose W
* Add a missing header file
* Cleanup, lint, and docstrings
* Add unit tests; change the error type; skip CPU tests
* Move the inner-length loop inside
* Check different dim lengths for B; make w_per_len optional
* Move gather_mm to the PyTorch backend with backward support
* Remove a_/b_trans support; do the transpose op inside the GEMM call
* Remove the output allocation from the API; change W from 2D to 3D
* Add se_gather_mm, a separate API for sorted E
* Fix the gather_mm (unsorted) user interface
* Unsorted gather_mm backward; separate C APIs for sorted/unsorted A
* Typecast to float to support atomicAdd
* Add idx_a, idx_b support to gather_mm_scatter
* DGL docs; add gather_mm to ops
* Remove benchmark files
Co-authored-by: Israt Nisa <nisisrat@amazon.com>
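The sorted case reduces to one GEMM per relation slice. A host-side sketch under assumptions (names and argument handling are simplified relative to the commit; segment lengths are assumed on the host): rows of A are presorted by relation, and row-major data is fed to column-major cuBLAS by computing C^T = W^T * A^T with swapped operands. The unsorted variant in the commit gathers rows by index instead.

```cuda
#include <cstdint>
#include <cublas_v2.h>

// A: [total_rows, in_dim], W: [num_rel, in_dim, out_dim], C: [total_rows, out_dim]
void SegmentMM(cublasHandle_t handle, const float* A, const float* W,
               float* C, const int64_t* seglen_host, int64_t num_rel,
               int in_dim, int out_dim) {
  const float one = 1.f, zero = 0.f;
  int64_t row = 0;
  for (int64_t r = 0; r < num_rel; ++r) {
    int m = static_cast<int>(seglen_host[r]);  // rows owned by relation r
    if (m > 0) {
      cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, out_dim, m, in_dim,
                  &one, W + r * (int64_t)in_dim * out_dim, out_dim,
                  A + row * in_dim, in_dim, &zero,
                  C + row * out_dim, out_dim);
    }
    row += m;
  }
}
```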
- 11 Feb, 2022 1 commit
ranzhejiang authored
* [Feature] Refactor edge softmax
* Delete a file
* Fix backward and the cmake version
* Format functions; fix settings
* Add a CUDA kernel for backward and rename some functions
* Add a benchmark for edge_softmax
* Fix format; remove cuda_backward
* Fix code format and add comments for the op on CPU
* Fix lint
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
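Edge softmax normalizes scores over each destination node's incoming edges: a_e = exp(s_e - m_v) / sum over e' into v of exp(s_e' - m_v), with m_v the per-node max for numerical stability. A minimal forward kernel assuming a CSC layout with scores stored in CSC edge order (the commit's kernels, including the new backward, are more elaborate):

```cuda
#include <math.h>

__global__ void EdgeSoftmaxForward(const int* indptr, const float* score,
                                   float* out, int num_dst) {
  int v = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per dst node
  if (v >= num_dst) return;
  int beg = indptr[v], end = indptr[v + 1];
  float m = -INFINITY;
  for (int p = beg; p < end; ++p) m = fmaxf(m, score[p]);  // stabilizer
  float z = 0.f;
  for (int p = beg; p < end; ++p) z += expf(score[p] - m);
  for (int p = beg; p < end; ++p) out[p] = expf(score[p] - m) / z;
}
```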
- 09 Feb, 2022 1 commit
Xin Yao authored
* Implement pin_memory/unpin_memory/is_pinned for dgl.graph
* Update Python and C++ docstrings; add tests
* Fix the broken UnifiedTensor
* XPU_SWITCH for kDLCPUPinned
* A rough version ready for testing
* Eliminate the extra context parameter for pin/unpin
* Update train_sampling; fix linting and typos
* Multi-GPU UVA sampling case
* Disable new format materialization for pinned graphs
* Update the Python doc for pin_memory_
* Fix unit tests
* UVA sampling for link prediction
* Dispatch most CSR ops
* Update the GraphSAGE examples to combine UVA sampling and UnifiedTensor
* Change UnitGraph's and HeteroGraph's PinMemory to in-place
* Update the examples for multi-GPU UVA sampling
* Fix the CPU build
* Fix is_pinned for DistGraph
* Update the GraphSAGE unsupervised example
* Update the docs for GPU sampling
* Update checks for sampling device switching
* Adapt for the new dataloader; adjust the device check
* Add a unit test for UVA sampling and fix a zero_copy bug
* Update num_threads in the GraphSAGE examples
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
- 21 Jan, 2022 1 commit
Xin Yao authored
* Implement pin_memory/unpin_memory/is_pinned for dgl.graph
* Update Python and C++ docstrings; add a test
* Fix the broken UnifiedTensor
* Eliminate the extra context parameter for pin/unpin
* Fix linting and a typo
* Disable new format materialization for pinned graphs
* Update the Python doc for pin_memory_
* Fix unit tests
* Change UnitGraph's and HeteroGraph's PinMemory to in-place
* Update comments for NDArray's PinMemory_ and PinData
* Update docs
Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
- 17 Jan, 2022 2 commits
Quan (Andy) Gan authored
* Oops
* Test
Quan (Andy) Gan authored
* Fix GPU global negative sampling code
* Update negative_sampling.cu
- 11 Jan, 2022 1 commit
MaoYuan Xian authored
* Pass std::min the argument's type explicitly, to avoid the compilation error
* Update parallel_for.h
* Update negative_sampling.cc
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
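The compilation error and its fix in miniature: `std::min` cannot deduce its template parameter when the two arguments have different types, so the fix supplies the type explicitly.

```cuda
#include <algorithm>
#include <cstdint>

void Chunk(int64_t n, size_t grain) {
  // std::min(n, grain) fails to compile: deduction sees both int64_t and
  // size_t for T. Passing T explicitly (or casting one side) resolves it.
  int64_t step = std::min<int64_t>(n, static_cast<int64_t>(grain));
  (void)step;
}
```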
- 10 Jan, 2022 1 commit
Quan (Andy) Gan authored
- 07 Jan, 2022 1 commit
Quan (Andy) Gan authored
* First commit
* A bunch of fixes
* Add unique
* Lint
* Address comments
* Update negative_sampler.py
* Fix; add description
* Address comments and fix
* Replace unique with replace
* Test pylint
- 16 Dec, 2021 1 commit
Israt Nisa authored
[Feature] Add CUDA support for `min` and `max` reducers in the heterogeneous API for unary message functions (#3566)
* CUDA support for the max/min reducer in the forward pass
* Docstring
* Make UpdateGradMinMax_hetero more concise and reorganize it
* CUDA kernels for the max/min reducer
* Rename variables; lint check
* Change the CUDA 2D thread mapping to 1D
* Remove legacy cuSPARSE code for the min/max reducer
* Restart git CI
* Add namespace std
Co-authored-by: Israt Nisa <nisisrat@amazon.com>
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
- 15 Dec, 2021 1 commit
Vasimuddin Md authored
* Add the DistGNN and Libra codebase
* Distributed application code
* Add comments in the partition code; change the interface of the partitioning call
* Update the README
* Create a Libra-partitioning branch for the PR
* Remove DistGNN files for the first PR
* Update kernel.cc
* Add libra_partition.cc and move Libra code from kernel.cc into it
* Fix lint errors; merge libra2dgl.py and main_Libra.py into libra_partition.py; add the graphsage/distgnn folder and a partition script
* Remove libra2dgl.py
* Fix lint errors and clean up the code
* Revisions per PR comments; add distgnn/tools containing partitioning routines
* PR revision I: fix errors and improve the runtime by 10x
* Fix minor lint errors
* PR revision II: change the interface of the Libra partition function
* Rewrite the docstring
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>