"...git@developer.sourcefind.cn:renzhc/diffusers_dcu.git" did not exist on "fa633ed6dec10bf23f3743088d1740417c9ef3ab"
  1. 21 Feb, 2023 1 commit
  2. 16 Feb, 2023 1 commit
  3. 09 Feb, 2023 1 commit
  4. 06 Jan, 2023 1 commit
  5. 01 Dec, 2022 1 commit
  6. 22 Nov, 2022 1 commit
    • [Feature] (La)yer-Neigh(bor) sampling implementation (#4668) · bf264d00
      Muhammed Fatih BALIN authored
      
      
      * adding LABOR sampling
      
      * add ladies and pladies samplers
      
      * fix compile error after rebase
      
      * add reference for ladies sampler
      
      * Improve ladies implementation.
      
      * weighted labor sampling initial implementation draft
      fix indentation and small bug in ladies script
      
      * importance_sampling currently doesn't work with weights
      
      * fix weighted importance sampling
      
      * move labor example into its own folder
      
      * lint fixes
      
      * Improve documentation
      
      * remove examples from the main PR
      
      * fix linting by not using c++17 features
      
      * fix documentation of labor_sampler.py
      
      * update documentation for labor.py
      
      * reformat the labor.py file with black
      
      * fix linting errors
      
      * replace exception use with if
      
      * fix typo in error comment
      
      * fixing win64 build for ci
      
      * fixing weighted implementation, works now.
      
      * fix bug in the weighted case and importance_sampling==0
      
      * address part of the reviews
      
      * remove unused code paths from cuda
      
      * remove unused code path from cpu side
      
      * remove extra features of labor making use of random seed.
      
      * fix exclude_edges bug
      
      * remove pcg and seed logic from cpu implementation, seed logic should still work for cuda.
      
      * minor style change
      
      * refactor CPU implementation, take out the importance_sampling probability computation into a function.
      
      * improve CUDAWorkspaceAllocator
      
      * refactor importance_sampling part out to a function
      
      * minor optimization
      
      * fix linting issue
      
      * Revert "remove pcg and seed logic from cpu implementation, seed logic should still work for cuda."
      
      This reverts commit c250e07ac6d7e13f57e79e8a2c2f098d777378c2.
      
      * Revert "remove extra features of labor making use of random seed."
      
      This reverts commit 7f99034353080308f4783f27d9a08bea343fb796.
      
      * fix the documentation
      
      * disable NIDs
      
      * improve the documentation in the code
      
      * use the stream argument in pcg32 instead of skipping ahead t times, can discard the use of hashmap now since it is faster this way.
      
      * fix linting issue
      
      * address another round of reviews
      
      * further optimize CPU LABOR sampling implementation
      
      * fix linting error
      
      * update the comment
      
      * reformat
      
      * rename and rephrase comment
      
      * fix formatting according to new linting specs
      
      * fix compile error due to renaming, fix linting.
      
      * lint
      
      * rename DGLHeteroGraph to DGLGraph to match master
      
      * replace other occurrences of DGLHeteroGraph to DGLGraph
      Co-authored-by: Muhammed Fatih BALIN <m.f.balin@gmail.com>
      Co-authored-by: Kaan Sancak <kaansnck@gmail.com>
      Co-authored-by: Quan Gan <coin2028@hotmail.com>
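      Usage sketch (hedged): in current DGL releases this sampler surfaces as dgl.dataloading.LaborSampler; only the fanout-list argument is taken from the docs with confidence, and the surrounding dataloading setup is illustrative.

        # Hedged sketch: LABOR sampling through the dataloading API (assumes a dgl>=1.0-style DataLoader).
        import dgl
        import torch

        g = dgl.rand_graph(1000, 5000)                   # toy homogeneous graph
        sampler = dgl.dataloading.LaborSampler([5, 5])   # LABOR sampling, 2 layers with fanout 5
        loader = dgl.dataloading.DataLoader(
            g, torch.arange(g.num_nodes()), sampler,
            batch_size=64, shuffle=True)
        for input_nodes, output_nodes, blocks in loader:
            pass  # feed `blocks` to a 2-layer GNN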
  7. 15 Nov, 2022 4 commits
  8. 08 Nov, 2022 1 commit
  9. 07 Nov, 2022 3 commits
  10. 06 Nov, 2022 2 commits
  11. 03 Nov, 2022 1 commit
  12. 02 Nov, 2022 1 commit
  13. 29 Oct, 2022 1 commit
    • [Sampling] Enable sampling with edge masks in sample_etype_neighbors (#4749) · 2bca4759
      Quan (Andy) Gan authored
      * sample neighbors with masks
      
      * oops
      
      * refactor again
      
      * remove
      
      * remove debug code
      
      * rename macro
      
      * address comments
      
      * more stuff
      
      * remove
      
      * fix
      
      * try fix unit test
      
      * oops
      
      * fix test
      
      * oops
      
      * change name
      
      * rename a lot of stuff
      
      * oops
      
      * ugh
      
      * misc fixes
      
      * lint
      
      * address a lot of comments
      
      * lint
      
      * lint
      
      * fix
      
      * that was silly
      
      * fix
      
      * fix
      
      * fix
      
      * oops
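      Usage sketch (hedged): through the high-level Python API, masked sampling looks like passing a boolean edge feature as the prob argument of dgl.sampling.sample_neighbors; the function and argument exist, and treating a boolean mask as that feature is the behaviour this PR wires up in the kernels (exact version support is an assumption).

        # Hedged sketch: neighbor sampling restricted by a boolean edge mask.
        import dgl
        import torch

        g = dgl.rand_graph(100, 1000)
        g.edata['mask'] = torch.rand(g.num_edges()) > 0.5    # keep roughly half the edges
        seeds = torch.tensor([0, 1, 2])
        sg = dgl.sampling.sample_neighbors(g, seeds, fanout=5, prob='mask')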
  14. 28 Oct, 2022 1 commit
  15. 13 Oct, 2022 1 commit
  16. 21 Sep, 2022 1 commit
  17. 19 Sep, 2022 1 commit
    • [Feature] Bump DLPack to v0.7 and decouple DLPack from the core library (#4454) · cded5b80
      Xin Yao authored
      * rename `DLContext` to `DGLContext`
      
      * rename `kDLGPU` to `kDLCUDA`
      
      * replace DLTensor with DGLArray
      
      * fix linting
      
      * Unify DGLType and DLDataType to DGLDataType
      
      * Fix FFI
      
      * rename DLDeviceType to DGLDeviceType
      
      * decouple dlpack from the core library
      
      * fix bug
      
      * fix lint
      
      * fix merge
      
      * fix build
      
      * address comments
      
      * rename dl_converter to dlpack_convert
      
      * remove redundant comments
  18. 05 Sep, 2022 1 commit
  19. 01 Jul, 2022 2 commits
  20. 23 Jun, 2022 1 commit
    • [Fix] Fix compiler warnings - part 1 (#4051) · 1ad65879
      Triston authored
      
      
      * Fix a cub compile error for CUDA 11.5
      
      * Fix comparison of integer expressions of different signedness in coo_sort.cu file
      
      * Fix comparison of integer expressions of different signedness in cuda_compact_graph.cu file
      
      * Remove never referenced variable in spmm.cu
      
      * Fix comparison of integer expressions of different signedness in rowwise_pick.h file
      
      * Fix comparison of integer expressions of different signedness in choice.cc file
      
      * Remove never referenced variable col_data in spat_op_impl_coo.cc
      
      * Remove never referenced variable allowed in global_uniform.cc
      
      * Fix comparison of integer expressions of different signedness in graph.cc
      
      * Fix comparison of integer expressions of different signedness in graph_apis.cc
      
      * Fix the un-used ctx variable in ndarray_partition.cc file for cpu only build
      
      * Fix comparison of integer expressions of different signedness in libra_partition.cc
      
      * Fix comparison of integer expressions of different signedness in graph_op.cc
      Co-authored-by: Triston Cao <tristonc@nvidia.com>
      Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
  21. 06 Jun, 2022 1 commit
  22. 28 May, 2022 1 commit
  23. 26 Apr, 2022 1 commit
  24. 23 Feb, 2022 2 commits
    • sanchit-misra · e7ad4c9c
    • [NN] Rework RelGraphConv and HGTConv (#3742) · 0227ddfb
      Minjie Wang authored
      * WIP: TypedLinear and new RelGraphConv
      
      * wip
      
      * further simplify RGCN
      
      * a bunch of tweak for performance; add basic cpu support
      
      * update on segmm
      
      * wip: segment.cu
      
      * new backward kernel works
      
      * fix a bunch of bugs in kernel; leave idx_a for future
      
      * add nn test for typed_linear
      
      * rgcn nn test
      
      * bugfix in corner case; update RGCN README
      
      * doc
      
      * fix cpp lint
      
      * fix lint
      
      * fix ut
      
      * wip: hgtconv; presorted flag for rgcn
      
      * hgt code and ut; WIP: some fix on reorder graph
      
      * better typed linear init
      
      * fix ut
      
      * fix lint; add docstring
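      Usage sketch (hedged): the reworked modules appear as dgl.nn.RelGraphConv and dgl.nn.TypedLinear (with dgl.nn.HGTConv alongside); argument names below follow the current docs and the toy graph is illustrative.

        # Hedged sketch: reworked RelGraphConv and TypedLinear on a toy graph (PyTorch backend).
        import dgl
        import dgl.nn as dglnn
        import torch

        g = dgl.rand_graph(50, 400)
        etypes = torch.randint(0, 3, (g.num_edges(),))   # relation id per edge (3 relations)
        feat = torch.randn(g.num_nodes(), 16)

        conv = dglnn.RelGraphConv(16, 8, num_rels=3)
        h = conv(g, feat, etypes)                        # (50, 8)

        lin = dglnn.TypedLinear(16, 8, num_types=3)
        node_types = torch.randint(0, 3, (g.num_nodes(),))
        y = lin(feat, node_types)                        # per-row weight chosen by type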
  25. 15 Feb, 2022 1 commit
    • [Feature] Gather mm (#3641) · b3d3a2c4
      Israt Nisa authored
      
      
      * init
      
      * init
      
      * working cublasGemm
      
      * benchmark high-mem/low-mem, err gather_mm output
      
      * cuda kernel for bmm like kernel
      
      * removed cpu copy for E_per_Rel
      
      * benchmark code from Minjie
      
      * fixed cublas results in gathermm sorted
      
      * use GPU shared mem in unsorted gather mm
      
      * minor
      
      * Added an optimal version of gather_mm_unsorted
      
      * lint
      
      * init gather_mm_scatter
      
      * cublas transpose added
      
      * fixed h_offset for multiple rel
      
      * backward unittest
      
      * cublas support to transpose W
      
      * adding missed file
      
      * forgot to add header file
      
      * lint
      
      * lint
      
      * cleanup
      
      * lint
      
      * docstring
      
      * lint
      
      * added unittest
      
      * lint
      
      * lint
      
      * unittest
      
      * changed err type
      
      * skip cpu test
      
      * skip CPU code
      
      * move in-len loop inside
      
      * lint
      
      * added check different dim length for B
      
      * w_per_len is optional now
      
      * moved gather_mm to pytorch/backend with backward support
      
      * removed a_/b_trans support
      
      * transpose op inside GEMM call
      
      * removed out alloc from API, changed W 2D to 3D
      
      * Added se_gather_mm, Separate API for sortedE
      
      * Fixed gather_mm (unsorted) user interface
      
      * unsorted gmm backward + separate CAPI for un/sorted A
      
      * typecast to float to support atomicAdd
      
      * lint typecast
      
      * lint
      
      * added gather_mm_scatter
      
      * minor
      
      * const
      
      * design changes
      
      * Added idx_a, idx_b support gmm_scatter
      
      * dgl doc
      
      * lint
      
      * adding gather_mm in ops
      
      * lint
      
      * lint
      
      * minor
      
      * removed benchmark files
      
      * minor
      
      * empty commit
      Co-authored-by: Israt Nisa <nisisrat@amazon.com>
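      Usage sketch (hedged): the op is exposed as dgl.ops.gather_mm; the idx_b keyword follows the current docs and is an assumption for this revision.

        # Hedged sketch: gather_mm multiplies each row of `a` by the weight matrix
        # selected by its relation id.
        import dgl
        import torch

        N, R, D_in, D_out = 1024, 4, 16, 8
        a = torch.randn(N, D_in)                 # one feature row per edge/node
        w = torch.randn(R, D_in, D_out)          # one weight matrix per relation
        rel = torch.randint(0, R, (N,))          # relation id of each row

        out = dgl.ops.gather_mm(a, w, idx_b=rel) # out[i] == a[i] @ w[rel[i]], shape (N, D_out)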
  26. 11 Feb, 2022 1 commit
    • New fused edge_softmax op (#3650) · bc8f8b0b
      ranzhejiang authored
      
      
      * [feature] edge softmax refact.
      
      * delete file
      
      * fix backward and cmake version
      
      * fix backward
      
      * format function
      
      * fix setting
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * refix
      
      * add cuda kernel for backward and rename some function
      
      * add benchmark for edge_softmax
      
      * fix format
      
      * remove cuda_backwrd
      
      * fix code format and add comment for op on CPU
      
      * fix lint
      Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
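      Usage sketch: the fused kernel backs the existing dgl.nn.functional.edge_softmax op, which normalizes per-edge logits over the incoming edges of each destination node.

        # edge_softmax on per-edge logits (e.g. attention scores in GAT-style models).
        import dgl
        import torch
        from dgl.nn.functional import edge_softmax

        g = dgl.rand_graph(10, 40)
        logits = torch.randn(g.num_edges(), 4)   # e.g. 4 attention heads
        attn = edge_softmax(g, logits)           # per destination node and head, scores sum to 1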
  27. 17 Jan, 2022 1 commit
  28. 11 Jan, 2022 1 commit
  29. 07 Jan, 2022 1 commit
    • [Feature] Negative sampling (#3599) · 90f10b31
      Quan (Andy) Gan authored
      * first commit
      
      * a bunch of fixes
      
      * add unique
      
      * lint
      
      * lint
      
      * lint
      
      * address comments
      
      * Update negative_sampler.py
      
      * fix
      
      * description
      
      * address comments and fix
      
      * fix
      
      * replace unique with replace
      
      * test pylint
      
      * Update negative_sampler.py
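      Usage sketch (hedged): in current DGL this surfaces as dgl.sampling.global_uniform_negative_sampling (with a matching negative sampler in dgl.dataloading); treating that as the API introduced by this PR is an assumption.

        # Hedged sketch: draw up to 200 (src, dst) pairs that are not edges of g.
        import dgl
        import torch

        g = dgl.rand_graph(100, 500)
        neg_src, neg_dst = dgl.sampling.global_uniform_negative_sampling(g, 200)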
  30. 16 Dec, 2021 1 commit
  31. 06 Dec, 2021 1 commit
  32. 03 Dec, 2021 1 commit
    • [Feature] Add Min/max reducer in heterogeneous API for unary message functions (#3514) · cb0e1103
      Israt Nisa authored
      
      
      * min/max support for forward CPU heterograph
      
      * Added etype with each argU values
      
      * scatter_add needs fix
      
* added scatter_add_hetero. Grads don't match for max reducer
      
      * storing ntype in argX
      
      * fixing scatter_add_hetero
      
      * hetero matches with torch's scatter add
      
      * works copy_e forward+cpu
      
      * added backward for copy_rhs
      
      * Computes gradient for all node types in one kernel
      
      * bug fix
      
* unittest for max/min on CPU
      
      * renamed scatter_add_hetero to update_grad_minmax_hetero
      
      * lint check and comment out cuda call for max. Code is for CPU only
      
      * lint check
      
      * replace inf with zero
      
      * minor
      
      * lint check
      
* removed LIBXSMM code from hetero code
      
      * fixing backward operator of UpdateGradMinMaxHetero
      
      * removed backward from update_grad_minmax_hetero
      
      * docstring
      
      * improved docstring and coding style
      
      * Added pass by pointer for output
      
      * typos and pass by references
      
      * Support for copy_rhs
      
      * Added header <string>
      
      * fix bug in copy_u_max
      
      * Added comments and dimension check of all etypes
      
      * skip mxnet check
      
      * pass by pointer output arrays
      
      * updated docstring
      Co-authored-by: Israt Nisa <nisisrat@amazon.com>
      Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
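      Usage sketch: what the new reducers enable, expressed with the existing heterograph API (multi_update_all with fn.copy_u / fn.max); the toy graph and feature names are illustrative only.

        # Max reduction with a unary (copy) message function on a heterograph.
        import dgl
        import dgl.function as fn
        import torch

        g = dgl.heterograph({
            ('user', 'follows', 'user'): (torch.tensor([0, 1]), torch.tensor([1, 2])),
            ('user', 'plays', 'game'):   (torch.tensor([0, 2]), torch.tensor([0, 1])),
        })
        g.nodes['user'].data['h'] = torch.randn(3, 4)

        g.multi_update_all(
            {'follows': (fn.copy_u('h', 'm'), fn.max('m', 'h_max')),
             'plays':   (fn.copy_u('h', 'm'), fn.max('m', 'h_max'))},
            'max')    # cross-type reducer
        # Nodes with no incoming edges are filled with zeros (see "replace inf with zero" above).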