1. 25 Feb, 2023 1 commit
    • [DistDGL][Feature_Request]Changes in the metadata.json file for input graph dataset. (#5310) · a14f69c9
      kylasa authored
      * Implemented the following changes.
      
      * Remove NUM_NODES_PER_CHUNK
      * Remove NUM_EDGES_PER_CHUNK
      * Remove the dependency between no. of edge files per edge type and no. of partitions
      * Remove the dependency between no. of edge feature files per edge type and no. of partitions
      * Remove the dependency between no. of edge feature files and no. of edge files per edge type.
      * Remove the dependency between no. of node feature files and no. of partitions
      * Add “node_type_counts”: a list of integers, where each integer is the total number of nodes of the corresponding node type. The index in this list matches the index in the “node_type” list for a given node type.
      * Add “edge_type_counts”: a list of integers, where each integer is the total number of edges of the corresponding edge type. The index in this list matches the index in the “edge_type” list for a given edge type. (A hypothetical metadata sketch follows at the end of this commit message.)
      
      * Applying lintrunner patch.
      
      * Adding missing keys to the metadata in the unit test framework.
      
      * lintrunner patch.
      
      * Resolving CI test failures due to merge conflicts.
      
      * Applying lintrunner patch
      
      * applying lintrunner patch
      
      * Replacing tabspace with spaces - to satisfy lintrunner
      
      * Fixing the CI Test Failure cases.
      
      * Applying lintrunner patch
      
      * lintrunner complaining about a blank line.
      
      * Resolving issues with print statement for NoneType
      
      * Removed the arbitrary-chunks tests, since this functionality is no longer supported.
      
      * Addressing CI review comments.
      
      * addressing CI review comments
      
      * lintrunner patch
      
      * lintrunner patch.
      
      * Addressing CI review comments.
      
      * lintrunner patch.
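      A minimal sketch of a metadata.json that follows the revised layout described above; the graph name, file paths, and counts are hypothetical and only illustrate the new "node_type_counts"/"edge_type_counts" lists and the removal of the per-chunk count keys.

      ```python
      import json

      # Hypothetical metadata for a small graph with two node types and one
      # edge type. NUM_NODES_PER_CHUNK / NUM_EDGES_PER_CHUNK are gone; the
      # totals are now listed per type, index-aligned with node_type/edge_type.
      metadata = {
          "graph_name": "toy_graph",            # hypothetical name
          "node_type": ["user", "item"],
          "node_type_counts": [1000, 500],      # total nodes per node type
          "edge_type": ["user:clicks:item"],
          "edge_type_counts": [8000],           # total edges per edge type
          "edges": {
              "user:clicks:item": {
                  "format": {"name": "csv", "delimiter": " "},
                  # the number of edge files per type no longer has to match
                  # the number of partitions
                  "data": ["edges/clicks-part0.csv", "edges/clicks-part1.csv"],
              }
          },
      }

      with open("metadata.json", "w") as f:
          json.dump(metadata, f, indent=4)
      ```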
  2. 23 Feb, 2023 3 commits
    • New script for customers to validate partitioned graph objects (#5340) · c42fa8a5
      kylasa authored
      * A new script to validate graph partitioning pipeline
      
      * Addressing CI review comments.
      
      * lintrunner patch.
    • [DistDGL][Robustness]Uneven distribution of input graph files for nodes/edges and features. (#5227) · bbc538d9
      kylasa authored
      * Uneven distribution of nodes/edges/features
      
      To handle unevenly sized files for nodes/edges and for node and edge features, we have to synchronize before sending a large number of messages (either one large message or a burst of messages); see the sketch at the end of this commit message.
      
      * Applying lintrunner patch.
      
      * Removing tabspaces for lintrunner.
      
      * lintrunner patch.
      
      * Removed issues introduced by the merge conflicts; a lot of code was duplicated.
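
      A minimal sketch of the synchronization described above, using torch.distributed for illustration; the helper name is hypothetical and, for simplicity, the chunks exchanged between ranks are assumed to have equal sizes.

      ```python
      import torch
      import torch.distributed as dist

      def exchange_in_sync(send_list):
          """Hypothetical helper: align all ranks with a barrier before posting
          a burst of messages, so ranks holding small (or empty) input files do
          not run ahead of ranks that still have large files to read."""
          dist.barrier()  # every rank reaches this point before any message is sent
          # For simplicity, assume each rank sends and receives equally sized chunks.
          recv_list = [torch.empty_like(t) for t in send_list]
          dist.all_to_all(recv_list, send_list)
          return recv_list
      ```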
    • [DistDGL][Mem_Optimizations]get_partition_ids, service provided by the... · 61b6edab
      kylasa authored
      [DistDGL][Mem_Optimizations]get_partition_ids, service provided by the distributed lookup service has high memory footprint (#5226)
      
      * get_partition_ids, service provided by the distributed lookup service has high memory footprint
      
      The 'get_partitionid' function, which retrieves the owner processes of a given list of global node ids, has a high memory footprint: currently on the order of 8x the size of the input list.
      
      For massively large datasets these memory needs are very unrealistic and may result in OOM. In the case of CoreGraph, when retrieving the owners of an edge list of 6 billion edges, the memory needed can be as high as 8*8*8 = 256 GB.
      
      To limit the amount of memory used by this function, we split the message sent to the distributed lookup service so that each message is limited to 200 million global node ids. This reduces the memory footprint of this entire function to no more than 0.2 * 8 * 8 ≈ 13 GB, which is within reasonable limits. (A sketch of this batching appears at the end of this commit message.)
      
      Since we now send multiple small messages instead of one large message to the distributed lookup service, this may consume more wall-clock time than the earlier implementation.
      
      * lintrunner patch.
      
      * using np.ceil() per suggestion.
      
      * converting the output of np.ceil() as ints.
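
      A minimal sketch of the batching described above; the 200-million cap comes from the commit message, while the wrapper and the `lookup_service.lookup` call are hypothetical stand-ins for the real distributed lookup service.

      ```python
      import numpy as np

      MAX_IDS_PER_MSG = 200_000_000  # cap on global node ids per message

      def get_partition_ids_batched(global_nids, lookup_service):
          """Hypothetical wrapper: query the distributed lookup service in
          bounded batches instead of sending one huge message, so peak memory
          stays proportional to the batch size rather than the full input."""
          num_splits = max(int(np.ceil(len(global_nids) / MAX_IDS_PER_MSG)), 1)
          owners = [lookup_service.lookup(chunk)
                    for chunk in np.array_split(global_nids, num_splits)]
          return np.concatenate(owners)
      ```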
  3. 22 Feb, 2023 1 commit
    • [DistDGL] Memory optimization to reduce memory footprint of the Dist Graph... · 5ea04713
      kylasa authored
      [DistDGL] Memory optimization to reduce memory footprint of the Dist Graph partitioning pipeline. (#5130)
      
      * Wrap np.argsort() in a function.
      
      Use a Python wrapper around the np.argsort() function for better use of system memory (a sketch follows at the end of this commit message).
      
      * lintrunner patch.
      
      * lintrunner patch.
      
      * Changes to address code review comments.
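
      A minimal sketch of the wrapper described above; the function name is hypothetical. The benefit of hiding np.argsort behind a small function is presumably that its temporaries become unreachable as soon as the call returns, keeping peak memory lower.

      ```python
      import numpy as np

      def sorted_indices(arr):
          """Hypothetical wrapper around np.argsort: intermediates created
          while computing the sort order go out of scope when this returns."""
          return np.argsort(arr, kind="stable")

      # Usage: reorder edge end-points by their destination node id.
      dst_ids = np.array([5, 1, 3, 1], dtype=np.int64)
      order = sorted_indices(dst_ids)
      dst_ids = dst_ids[order]
      ```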
  4. 19 Feb, 2023 1 commit
  5. 16 Feb, 2023 2 commits
    • [DistDGL][Optimizations]Rehash code to optimize for loop (#5224) · 9ce800d2
      kylasa authored
      * Rehash code to optimize for loop
      
      Reduced the number of instructions in the for loop that exchanges edge features. This reduces the number of times numpy's intersect1d is invoked (saving numpy's runtime and memory overhead); see the sketch at the end of this commit message.
      
      * Applying lintrunner patch to data_shuffle.py
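
      A minimal sketch of the kind of rework described above: the np.intersect1d call is hoisted so it runs once and its indices are reused for every feature, instead of being invoked inside the loop. Names and data layout are hypothetical.

      ```python
      import numpy as np

      def gather_edge_features(local_eids, feature_eids, features):
          """Hypothetical rework: compute the id intersection once and reuse the
          resulting indices for every feature, instead of calling np.intersect1d
          on each loop iteration."""
          _, local_idx, feat_idx = np.intersect1d(
              local_eids, feature_eids, return_indices=True
          )
          gathered = {name: values[feat_idx] for name, values in features.items()}
          return local_idx, gathered
      ```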
    • [DistDGL][Mem_Optimizations]Edge Ownership processes are computed on the fly when required. (#5225) · e25f47de
      kylasa authored
      * Edge Ownership processes are computed on the fly when required.
      
      Earlier we stored the edge-ownership process ids after the dataset was read from disk. For massively large datasets, where each node can handle up to 5 billion edges, storing the owner process ids consumes 5 * 8 = 40 GB. This memory hangs around until the edges are exchanged.
      
      To reduce the memory footprint of the pipeline, we no longer store the ownership process-ids in the 'edge_data' dictionary after reading the dataset from the disk. Instead, we compute them on the fly at the time of exchanging edges.
      
      Another optimization is not to send/receive everything in one single large message. Instead, we now split the total number of edges into chunks, limited to 8 GB per node, and iterate until all the chunks are exchanged.
      
      Once all the edges are exchanged, as a sanity check, we compute the total number of edges in the system and compare it with the original value from before edge shuffling in a final assert statement before returning the result to the caller (see the sketch at the end of this commit message).
      
      * Applying lintrunner patch.
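
      A minimal sketch of the on-the-fly ownership computation, chunked exchange, and final edge-count assertion described above; torch.distributed collectives are used for illustration, all ranks are assumed to agree on the number of chunks, and the helper names are hypothetical.

      ```python
      import torch
      import torch.distributed as dist

      def exchange_chunk(chunk, owners, world_size):
          """Send each edge id in `chunk` to its owner rank (variable-size exchange)."""
          order = torch.argsort(owners)
          chunk = chunk[order]
          send_sizes = torch.bincount(owners, minlength=world_size)
          recv_sizes = torch.empty_like(send_sizes)
          dist.all_to_all_single(recv_sizes, send_sizes)
          recv = chunk.new_empty(int(recv_sizes.sum()))
          dist.all_to_all_single(
              recv, chunk,
              output_split_sizes=recv_sizes.tolist(),
              input_split_sizes=send_sizes.tolist(),
          )
          return recv

      def shuffle_edges(edge_ids, owner_of, num_chunks):
          """Hypothetical helper: ownership is computed per chunk on the fly
          (never stored alongside 'edge_data'), edges are exchanged chunk by
          chunk, and a final assert checks the global edge count is unchanged."""
          world_size = dist.get_world_size()
          before = torch.tensor([edge_ids.numel()])
          dist.all_reduce(before)
          received = []
          # tensor_split yields exactly num_chunks pieces on every rank.
          for part in torch.tensor_split(edge_ids, num_chunks):
              owners = owner_of(part)  # computed on the fly, never stored
              received.append(exchange_chunk(part, owners, world_size))
          shuffled = torch.cat(received)
          after = torch.tensor([shuffled.numel()])
          dist.all_reduce(after)
          assert before.item() == after.item(), "edge count changed during shuffle"
          return shuffled
      ```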
  6. 13 Feb, 2023 1 commit
    • Code changes to fix order sensitivity of the pipeline (#5288) · 432c71ef
      kylasa authored
      
      
      The following changes are made in this PR.
      1. In dataset_utils.py, when reading edges from disk we follow the order defined by the STR_EDGE_TYPE key in the metadata.json file. This order is implicitly used to assign edge ids to edge types, and the same order is now used to read edges from disk (see the sketch below).
      2. The unit test framework now also randomizes the order in which edges are read from disk for the unit tests.
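
      A minimal sketch of point 1 above: the list under the STR_EDGE_TYPE key defines both the implicit etype-id assignment and the order in which edge files are read. The key's string value and the reader callback are hypothetical.

      ```python
      import json

      STR_EDGE_TYPE = "edge_type"  # assumed value of the metadata key

      def read_edges_in_metadata_order(metadata_path, read_edge_files):
          """Hypothetical reader: iterate edge types in the exact order listed in
          metadata.json, so the etype-id assignment and the disk reads agree."""
          with open(metadata_path) as f:
              metadata = json.load(f)
          etype_to_id, edges = {}, {}
          for etype_id, etype in enumerate(metadata[STR_EDGE_TYPE]):
              etype_to_id[etype] = etype_id
              edges[etype] = read_edge_files(metadata, etype)
          return etype_to_id, edges
      ```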
      Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
  7. 10 Feb, 2023 2 commits
  8. 03 Feb, 2023 1 commit
  9. 02 Feb, 2023 1 commit
  10. 05 Jan, 2023 1 commit
  11. 03 Jan, 2023 1 commit
  12. 15 Dec, 2022 1 commit
    • [Dist] enable to chunk node/edge data into arbitrary number of chunks (#4930) · 9731e023
      Rhett Ying authored
      
      
      * [Dist] enable to chunk node/edge data into arbitrary number of chunks
      
      * [Dist] enable to split node/edge data into arbitrary parts
      
      * refine code
      
      * Forcibly convert boolean to uint8 to avoid dist.scatter failure (a sketch of this workaround follows at the end of this commit message)
      
      * convert boolean to int8 before scatter and revert it after scatter
      
      * refine code
      
      * fix test
      
      * refine code
      
      * move test utilities into utils.py
      
      * update comment
      
      * fix empty data
      
      * update
      
      * update
      
      * fix empty data issue
      
      * release unnecessary mem
      
      * release unnecessary mem
      
      * release unnecessary mem
      
      * release unnecessary mem
      
      * release unnecessary mem
      
      * remove unnecessary shuffle data
      
      * separate array_split into standalone utility
      
      * add example
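
      A minimal sketch of the boolean workaround mentioned above: cast boolean tensors to uint8 before dist.scatter and cast back afterwards. The helper name is hypothetical and the process group is assumed to be initialized.

      ```python
      import torch
      import torch.distributed as dist

      def scatter_bool(out_shape, src=0, scatter_list=None):
          """Hypothetical helper: dist.scatter can fail on bool tensors, so the
          source casts every chunk to uint8 before scattering and each rank
          converts its received chunk back to bool afterwards."""
          out = torch.empty(out_shape, dtype=torch.uint8)
          if dist.get_rank() == src:
              dist.scatter(out, [t.to(torch.uint8) for t in scatter_list], src=src)
          else:
              dist.scatter(out, src=src)
          return out.bool()
      ```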
      Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
  13. 14 Dec, 2022 1 commit
  14. 07 Dec, 2022 1 commit
  15. 30 Nov, 2022 1 commit
  16. 28 Nov, 2022 2 commits
  17. 18 Nov, 2022 1 commit
    • [Dist] Flexible pipeline - Initial commit (#4733) · c8ea9fa4
      kylasa authored
      * Flexible pipeline - Initial commit
      
      1. Implementation of the flexible pipeline feature.
      2. With this implementation, the pipeline now supports multiple partitions per process, and it assumes that num_partitions is always a multiple of num_processes (see the sketch at the end of this commit message).
      
      * Update test_dist_part.py
      
      * Code changes to address review comments
      
      * Code refactoring of exchange_features function into two functions for better readability
      
      * Updating test_dist_part to fix merge issues with the master branch
      
      * corrected variable names...
      
      * Fixed code refactoring issues.
      
      * Provide missing function arguments to exchange_feature function
      
      * Providing the missing function argument to fix error.
      
      * Provide missing function argument to 'get_shuffle_nids' function.
      
      * Repositioned a variable within its scope.
      
      * Removed tab space which is causing the indentation problem
      
      * Fix an issue with the CI test framework, which is the root cause of the CI test failures.
      
      1. We now read files specific to the partition id and store this data separately in the local process, identified by local_part_id.
      2. Similarly, we also differentiate the node and edge feature type_ids with the same keys as above.
      3. These two changes help us fetch the appropriate feature data during the feature exchange and send it to the destination process correctly.
      
      * Correct the parametrization for the CI unit test cases.
      
      * Addressing Rui's code review comments.
      
      * Addressing code review comments.
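
      A minimal sketch of the partition-to-process mapping implied above, assuming num_partitions is a multiple of num_processes; the function name is hypothetical.

      ```python
      def local_partition_ids(rank, num_processes, num_partitions):
          """Hypothetical mapping: each process owns a contiguous block of
          partition ids, which requires num_partitions % num_processes == 0."""
          assert num_partitions % num_processes == 0
          parts_per_proc = num_partitions // num_processes
          start = rank * parts_per_proc
          return list(range(start, start + parts_per_proc))

      # e.g. 8 partitions on 4 processes: rank 1 handles partitions [2, 3]
      print(local_partition_ids(1, 4, 8))
      ```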
  18. 17 Nov, 2022 1 commit
  19. 09 Nov, 2022 1 commit
  20. 08 Nov, 2022 1 commit
    • [DIST] Message size to retrieve SHUFFLE_GLOBAL_NIDs is resulting in very large... · 4cd0a685
      kylasa authored
      [DIST] Message size to retrieve SHUFFLE_GLOBAL_NIDs is resulting in very large messages and resulting in killed process (#4790)
      
      * Send the messages to the distributed lookup service in batches (see the sketch at the end of this commit message).
      
      * Update function signature for allgather_sizes function call.
      
      * Removed the unnecessary if statement.
      
      * Removed logging.info message, which is not needed.
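
      A minimal sketch of how ranks could agree on a common number of batches before sending to the distributed lookup service in batches; allgather_sizes is named in the commit, but this standalone version built on torch.distributed.all_gather, and the per-message cap, are assumptions.

      ```python
      import math
      import torch
      import torch.distributed as dist

      MAX_IDS_PER_MSG = 100_000_000  # hypothetical per-message cap

      def agreed_num_batches(local_count):
          """Hypothetical helper: gather every rank's id count so that all ranks
          loop over the same number of batches and collective calls stay matched."""
          world_size = dist.get_world_size()
          counts = [torch.zeros(1, dtype=torch.int64) for _ in range(world_size)]
          dist.all_gather(counts, torch.tensor([local_count], dtype=torch.int64))
          largest = max(int(c.item()) for c in counts)
          return max(math.ceil(largest / MAX_IDS_PER_MSG), 1)
      ```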
  21. 07 Nov, 2022 3 commits
  22. 04 Nov, 2022 2 commits
  23. 31 Oct, 2022 1 commit
  24. 27 Oct, 2022 1 commit
  25. 26 Oct, 2022 1 commit
  26. 19 Oct, 2022 2 commits
  27. 17 Oct, 2022 1 commit
    • [Dist] Reduce peak memory in DistDGL (#4687) · b1309217
      Rhett Ying authored
      * [Dist] Reduce peak memory in DistDGL: avoid validation, release memory once loaded
      
      * remove orig_id from ndata/edata for partition_graph()
      
      * delete orig_id from ndata/edata in dist part pipeline
      
      * reduce dtype size and format before saving graphs (a sketch of these clean-ups follows at the end of this commit message)
      
      * fix lint
      
      * ETYPE requires to be int32/64 for CSRSortByTag
      
      * fix test failure
      
      * refine
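
      A minimal sketch of the two clean-ups mentioned above (dropping orig_id and shrinking integer dtypes before partitions are saved), written against plain dictionaries of tensors rather than the actual pipeline structures; names are hypothetical and the id columns are assumed to be non-negative.

      ```python
      import torch

      def trim_before_save(node_data, edge_data):
          """Hypothetical clean-up: drop the 'orig_id' columns that are only
          needed while partitioning, and downcast int64 columns that fit into
          int32 before the partitions are written to disk."""
          for data in (node_data, edge_data):
              data.pop("orig_id", None)  # release this memory as early as possible
              for name, tensor in data.items():
                  if (tensor.dtype == torch.int64 and tensor.numel() > 0
                          and int(tensor.max()) < torch.iinfo(torch.int32).max):
                      data[name] = tensor.to(torch.int32)
          return node_data, edge_data
      ```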
  28. 12 Oct, 2022 1 commit
  29. 11 Oct, 2022 1 commit
  30. 03 Oct, 2022 2 commits
    • ParMETIS wrapper script to enable ParMETIS to process chunked dataset format (#4605) · eae6ce2a
      kylasa authored
      * Created a ParMETIS wrapper script so that users can run ParMETIS through a single script
      
      * Addressed all the CI comments from PR https://github.com/dmlc/dgl/pull/4529
      
      * Addressing CI comments.
      
      * Isort, and black changes.
      
      * Replaced python with python3
      
      * Replaced single quote with double quotes per suggestion.
      
      * Removed print statement
      
      * Addressing CI comments.
      
      * Addressing CI review comments.
      
      * Addressing CI comments as per chime discussion with Rui
      
      * CI Comments, Black and isort changes
      
      * Align with code refactoring, black, isort and code review comments.
      
      * Addressing CI review comments, and fixing merge issues with the master branch
      
      * Updated with proper unit test skip decorator
    • Edge Feature support for input graph datasets for dist. graph partitioning pipeline (#4623) · 1f471396
      kylasa authored
      * Added support for edge features.
      
      * Added comments and removing unnecessary print statements.
      
      * updated data_shuffle.py to remove compile error.
      
      * Replaced python3 with python to match the CI test framework.
      
      * Removed unrelated files from the pull request.
      
      * Isort changes.
      
      * black changes on this file.
      
      * Addressing CI review comments.
      
      * Addressing CI comments.
      
      * Removed duplicated code and resolved merge conflicts.
      
      * Addressing CI Comments from Rui.
      
      * Addressing CI comments, and fixing merge issues.
      
      * Addressing CI comments, code refactoring, isort and black