1. 07 Nov, 2022 1 commit
  2. 06 Nov, 2022 1 commit
  3. 04 Nov, 2022 1 commit
  4. 19 Sep, 2022 1 commit
    • [Feature] Bump DLPack to v0.7 and decouple DLPack from the core library (#4454) · cded5b80
      Xin Yao authored
      * rename `DLContext` to `DGLContext`
      
      * rename `kDLGPU` to `kDLCUDA`
      
      * replace DLTensor with DGLArray
      
      * fix linting
      
      * Unify DGLType and DLDataType to DGLDataType
      
      * Fix FFI
      
      * rename DLDeviceType to DGLDeviceType
      
      * decouple dlpack from the core library
      
      * fix bug
      
      * fix lint
      
      * fix merge
      
      * fix build
      
      * address comments
      
      * rename dl_converter to dlpack_convert
      
      * remove redundant comments
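      The renames above track DLPack v0.7 (kDLGPU becomes kDLCUDA, DLContext becomes DGLContext), and the decoupling confines DLPack to the conversion layer (dlpack_convert) rather than the core library. As a minimal sketch of the DLPack exchange itself, using only PyTorch's public utilities (PyTorch is DGL's default backend; nothing below is a DGL internal):

      ```python
      import torch
      from torch.utils import dlpack

      # Export a tensor as a DLPack capsule (a PyCapsule around a DLManagedTensor).
      t = torch.arange(6, dtype=torch.float32)
      capsule = dlpack.to_dlpack(t)

      # Import it back. A capsule can be consumed only once, and the result
      # shares memory with the original tensor (zero-copy).
      t2 = dlpack.from_dlpack(capsule)
      t2[0] = 42.0
      assert t[0].item() == 42.0
      ```

      Other frameworks that implement the same protocol (NumPy >= 1.22, CuPy, JAX) interoperate with the backend the same way.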
  5. 15 Sep, 2022 1 commit
    • [Feature] Import PyTorch's CUDA stream management (#4503) · 9a00cf19
      Xin Yao authored
      * add set_stream
      
      * add .record_stream for NDArray and HeteroGraph
      
      * refactor dgl stream Python APIs
      
      * test record_stream
      
      * add unit test for record stream
      
      * use pytorch's stream
      
      * fix lint
      
      * fix cpu build
      
      * address comments
      
      * address comments
      
      * add record stream tests for dgl.graph
      
* record frames and update dataloader
      
      * add docstring
      
      * update frame
      
      * add backend check for record_stream
      
      * remove CUDAThreadEntry::stream
      
      * record stream for newly created formats
      
      * fix bug
      
      * fix cpp test
      
      * fix None c_void_p to c_handle
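      Per the bullets, the final design defers to PyTorch's stream state on the Python side rather than keeping a separate DGL stream. A hedged sketch of the record_stream pattern this PR adds for NDArray and HeteroGraph, assuming a CUDA build of DGL:

      ```python
      import torch
      import dgl

      dev = torch.device('cuda:0')
      g = dgl.graph(([0, 1, 2], [1, 2, 0]), device=dev)

      side = torch.cuda.Stream(device=dev)
      with torch.cuda.stream(side):
          # Work launched on a side stream that reads g's internal arrays.
          sg = dgl.add_self_loop(g)

      # Mirror torch.Tensor.record_stream: mark g's buffers as in use on `side`
      # so the caching allocator does not recycle them before `side` finishes.
      g.record_stream(side)
      torch.cuda.current_stream(dev).wait_stream(side)
      ```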
  6. 06 Sep, 2022 1 commit
    • [Feature] Unify the cuda stream used in core library (#4480) · 1c9d2a03
      Chang Liu authored
      
      * Use an internal cuda stream for CopyDataFromTo
      
      * small fix white space
      
      * Fix to compile
      
      * Make stream optional in copydata for compile
      
      * fix lint issue
      
      * Update cub functions to use internal stream
      
      * Lint check
      
      * Update CopyTo/CopyFrom/CopyFromTo to use internal stream
      
      * Address comments
      
      * Fix backward CUDA stream
      
      * Avoid overloading CopyFromTo()
      
      * Minor comment update
      
      * Overload copydatafromto in cuda device api
Co-authored-by: xiny <xiny@nvidia.com>
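      The change is internal to the C++ core: CopyTo/CopyFrom/CopyFromTo and the cub calls now share one stream instead of racing the default stream. A rough user-level illustration of the ordering this buys, written with ordinary public DGL/PyTorch APIs (the PR changes the behavior underneath, not these APIs):

      ```python
      import torch
      import dgl
      import dgl.function as fn

      dev = torch.device('cuda:0')
      g = dgl.rand_graph(1000, 5000).to(dev)
      g.ndata['h'] = torch.randn(g.num_nodes(), 16, device=dev)

      s = torch.cuda.Stream(device=dev)
      with torch.cuda.stream(s):
          # With a unified stream, the SpMM issued by update_all and the
          # surrounding torch kernels stay ordered on one stream, with no
          # device-wide synchronization needed in between.
          g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))
      torch.cuda.current_stream(dev).wait_stream(s)
      ```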
  7. 29 Jun, 2022 1 commit
  8. 11 Jun, 2022 1 commit
  9. 06 Jun, 2022 1 commit
  10. 12 May, 2022 1 commit
  11. 18 Oct, 2021 1 commit
  12. 15 Oct, 2021 1 commit
  13. 06 Sep, 2021 1 commit
  14. 27 Jun, 2021 1 commit
    • [Build] Make nccl optional (#3056) · 9664cdff
      Jinjing Zhou authored
      * fix
      
* remove nvidia-smi
      
      * fix
      
      * fix docs
      
      * fix
      
      * fix
      
      * 1
      
      * fix
      
      * remove
      
      * skip deprecated kernel
      
      * fix
      
      * Revert "skip deprecated kernel"
      
      This reverts commit c5ceb7f60dbbaf065b81cc3680757fd611d90ad3.
      
      * fix
  15. 23 Jun, 2021 1 commit
  16. 11 Jun, 2021 1 commit
    • [Feature] Allow using NCCL for communication in dgl.NodeEmbedding and dgl.SparseOptimizer (#2824) · 17d604b5
      nv-dlasalle authored
      
      * Split from NCCL PR
      
* Fix typo in comment
      
      * Expand documentation for sparse_all_to_all_push
      
      * Restore previous behavior in example
      
      * Re-work optimizer to use NCCL based on gradient location
      
      * Allow for running with embedding on CPU but using NCCL for gradient exchange
      
      * Optimize single partition case
      
      * Fix pylint errors
      
      * Add missing include
      
      * fix gradient indexing
      
      * Fix line continuation
      
      * Migrate 'first_step'
      
      * Skip tests without enough GPUs to run NCCL
      
      * Improve empty tensor handling for pytorch 1.5
      
      * Fix indentation
      
* Allow multiple NCCL communicators to coexist
      
      * Improve handling of empty message
      
      * Update python/dgl/nn/pytorch/sparse_emb.py
Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
      
      * Update python/dgl/nn/pytorch/sparse_emb.py
Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
      
* Keep empty tensors dimensionless
      
      * th.empty -> th.tensor
      
      * Preserve shape for empty non-zero dimension tensors
      
* Use shared state when embedding is shared
      
      * Add support for gathering an embedding
      
      * Fix typo
      
      * Fix more typos
      
      * Fix backend call
      
      * Use NodeDataLoader to take advantage of ddp
      
      * Update training script to share memory
      
      * Only squeeze last dimension
      
      * Better handle empty message
      
* Keep embedding on the target GPU device if dgl_sparse is false in RGCN example
      
      * Fix typo in comment
      
      * Add asserts
      
      * Improve documentation in example
Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
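      A single-process sketch of the pieces this PR wires together; dgl.nn.NodeEmbedding and dgl.optim.SparseAdam are the public entry points, and per the bullets the NCCL path engages when gradients live on the GPU in a multi-process DDP run (launcher setup omitted):

      ```python
      import torch
      from dgl.nn import NodeEmbedding
      from dgl.optim import SparseAdam

      dev = torch.device('cuda:0')

      # A 10,000 x 64 learnable embedding table stored outside the model.
      emb = NodeEmbedding(num_embeddings=10000, embedding_dim=64, name='node_emb')
      opt = SparseAdam(params=[emb], lr=0.01)

      nids = torch.arange(128, device=dev)
      feat = emb(nids, dev)       # gather only the touched rows onto the GPU
      loss = feat.pow(2).mean()   # stand-in for a real training loss
      loss.backward()
      opt.step()                  # under DDP, sparse gradients are exchanged via NCCL
      ```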
  17. 20 May, 2021 1 commit
    • [Feature][Performance] Implement NCCL wrapper for communicating NodeEmbeddings and sparse gradients. (#2825) · ae8dbe6d
      nv-dlasalle authored
      
      * Split NCCL wrapper from sparse optimizer and sparse embedding
      
      * Add more unit tests for single node nccl
      
      * Fix unit test for tf
      
      * Switch to device histogram
      
* Fix histogram issues
      
      * Finish migration to histogram
      
* Handle cases with zero send/receive data
      
      * Start on partition object
      
      * Get compiling
      
      * Updates
      
      * Add unit tests
      
      * Switch to partition object
      
      * Fix linting issues
      
      * Rename partition file
      
      * Add python doc
      
      * Fix python assert and finish doxygen comments
      
      * Remove stubs for range based partition to satisfy pylint
      
      * Wrap unit test in GPU only
      
      * Wrap explicit cuda call in ifdef
      
      * Merge with partition.py
      
      * update docstrings
      
      * Cleanup partition_op
      
      * Add Workspace object
      
      * Switch to using workspace object
      
      * Move last remainder based function out of nccl_api
      
      * Add error messages
      
      * Update docs with examples
      
* Fix linting errors
Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
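      The partition object built here was merged into partition.py per the bullets above; assuming the current layout, it is importable as dgl.partition.NDArrayPartition, while the NCCL communicator itself stays internal. A minimal sketch of the 'remainder' partitioning that sparse_all_to_all_push routes by (the `owners` line recomputes the policy by hand, purely for illustration):

      ```python
      import torch
      from dgl.partition import NDArrayPartition

      # Split a 10,000-row embedding table across 4 GPUs, round-robin by row id.
      part = NDArrayPartition(10000, 4, mode='remainder')

      # Under the remainder policy, row i is owned by partition (i % 4); a sparse
      # all-to-all push groups rows by owner before the NCCL exchange.
      idx = torch.tensor([0, 1, 4001, 9999])
      owners = idx % 4    # -> tensor([0, 1, 1, 3])
      ```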