- 07 Aug, 2022 1 commit
kylasa authored
* Fix for the node_subgraph function, which seems to generate a segmentation fault for very large partitions.
  1. Removed three intermediate DGL graph objects; the final DGL object is now created directly while maintaining the following constraints:
     a) Nodes are reordered so that local nodes are placed at the beginning of the node list, ahead of non-local nodes.
     b) Edge order is maintained as passed into this function.
     c) src/dst endpoints are mapped to target values based on the reshuffled node order (see the sketch after this entry).
* Code changes addressing CI comments for this PR.
  1. Used Da's suggested map to translate nodes from the old order to the new one. This is much simpler and more memory-efficient.
* Addressing CI comments.
  1. Reduced the amount of documentation to reflect the actual implementation.
  2. Named the mapping object appropriately.
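A minimal illustrative sketch of the reordering and endpoint remapping described above; the function and variable names are hypothetical, not the actual DGL code:

```python
import numpy as np

def reorder_and_remap(node_ids, local_mask, src, dst):
    """Illustrative sketch: place local nodes first, keep edge order,
    and remap src/dst endpoints into the reshuffled order."""
    # Local nodes first, non-local nodes after; edge order is untouched.
    new_order = np.concatenate([node_ids[local_mask], node_ids[~local_mask]])
    # Map each original node id to its position in the new order.
    old_to_new = np.empty(node_ids.max() + 1, dtype=np.int64)
    old_to_new[new_order] = np.arange(len(new_order))
    # Remap edge endpoints; edges stay in their original order.
    return old_to_new[src], old_to_new[dst], new_order
```

A single lookup array like old_to_new avoids building intermediate graph objects, which matches the memory-efficiency motivation above.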
- 23 Jul, 2022 1 commit
kylasa authored
* Code changes to support the updated file format for massively large graphs.
  1. Updated the docstring of the entry-point function `gen_dist_partitions` to describe the newly proposed file format for the input dataset.
  2. Code that depended on the structure of the old metadata JSON object has been updated to read from the newly proposed metadata file.
  3. Fixed errors where the calling function expected return values from the invoked function.
  4. The modified code has been tested on the "mag" dataset with 4-way partitions and the results verified.
* Code changes to address the CI review comments.
  1. Improved docstrings for some functions.
  2. Added a new function in utils.py to compute the id ranges; it is used in multiple places (see the sketch after this entry).
* Added a TODO to flag a redundant data structure. Because of the new file format, one of the dictionaries (node_feature_tids, node_tids) will be redundant; the TODO notes that it will be removed in the next iteration of code changes.
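A sketch of what the id-range helper in utils.py might compute; the function name, signature, and example values are assumptions for illustration:

```python
import numpy as np

def compute_id_ranges(counts):
    """Given per-partition node (or edge) counts, return [start, end)
    global-id ranges for each partition. Illustrative only."""
    ends = np.cumsum(counts)
    starts = ends - np.asarray(counts)
    return list(zip(starts.tolist(), ends.tolist()))

# Example: 3 partitions with 100, 250, and 50 nodes.
# compute_id_ranges([100, 250, 50]) -> [(0, 100), (100, 350), (350, 400)]
```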
- 13 Jul, 2022 1 commit
kylasa authored
* Code changes for the following:
  1. Generating node data at each process.
  2. Reading CSV files using pyarrow (see the sketch after this entry).
  3. Feature-complete code.
* Fixed some typos because of which unit tests were failing:
  1. Corrected the file name used when loading edges from file.
  2. When storing node features after shuffling, use the correct key to store the global-nids of the node features received after transmission.
* Code changes to address CI comments by reviewers:
  1. Removed some redundant code and added text to the docstrings to describe the functionality of some functions.
  2. Function signatures and invocations now match w.r.t. the argument list.
  3. Added a detailed description of the metadata JSON structure so that users understand the type of information present in this file and how it is used throughout the code.
* Addressing code review comments:
  1. Addressed all the CI comments; the changes include simplifying the code related to list concatenation and enhancing the docstrings of the functions changed in the process.
* Updated the docstrings of two functions in response to code review comments: removed "todo" from the docstring of the gen_nodedata function; added a "todo" to the gen_dist_partitions function where node-id to partition-id mappings are read for the first time; removed 'num-node-weights' from the docstring of the get_dataset function and added schema_map to its argument-list documentation.
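For the pyarrow-based CSV reading, a minimal sketch; the file path, column names, and delimiter below are hypothetical, not the actual file layout:

```python
import pyarrow.csv as pacsv

# Read an edges file with pyarrow and convert columns to numpy arrays.
# Space- or tab-delimited files are handled via ParseOptions(delimiter=...).
read_opts = pacsv.ReadOptions(column_names=["src_id", "dst_id"])
parse_opts = pacsv.ParseOptions(delimiter=" ")
table = pacsv.read_csv("part0/edges.csv", read_options=read_opts,
                       parse_options=parse_opts)
src = table.column("src_id").to_numpy()
dst = table.column("dst_id").to_numpy()
```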
- 05 Jul, 2022 2 commits
kylasa authored
* Added code for the multiple-file-support feature and removed the single-file-support code:
  1. Added code to read datasets in the multiple-file format.
  2. Removed code for the single-file format.
* Added files missing in the previous commit. This commit includes dataset_utils.py, which reads the dataset in the multiple-file format, gloo_wrapper function calls to support exchanging dictionaries as objects (see the sketch after this entry), and helper functions in utils.py.
* Update convert_partition.py: updated the "create_metadata_json" function call to include partition_id so that each rank creates only its own metadata object; these are later accumulated on rank-0 to create the graph-level metadata JSON file.
* Addressed the code review comments received during the CI process.
* Code reorganization: addressed CI comments and reorganized the code for easier understanding.
* Removed a commented-out line.
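The gloo_wrapper internals are not shown in this log; assuming it sits on top of torch.distributed's gloo backend, exchanging dictionaries as pickled objects could look like this sketch (the function name is hypothetical):

```python
import torch.distributed as dist

def allgather_dict(local_dict):
    """Gather one dictionary from every rank (gloo backend).
    Illustrative sketch, not the actual gloo_wrapper API."""
    gathered = [None] * dist.get_world_size()
    # all_gather_object pickles arbitrary Python objects under the hood.
    dist.all_gather_object(gathered, local_dict)
    return gathered

# Usage (after dist.init_process_group("gloo", ...)):
# per_rank_meta = allgather_dict({"rank": dist.get_rank(), "num_nodes": 1000})
```

This object-based exchange is how the per-rank metadata objects could be accumulated on rank-0 before writing the graph-level JSON file.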
- 29 Jun, 2022 1 commit
kylasa authored
* Code changes for bug fixes identified while processing the mag_lsc dataset:
  1. Changed calls from torch.Tensor() to torch.from_numpy() to address memory corruption when creating large tensors. The tricky part is that torch.Tensor() works correctly for small tensors.
  2. Changed the dgl.graph() call to include the num_nodes argument so that all nodes in a graph partition are explicitly accounted for (see the sketch after this entry).
* Update convert_partition.py: moved the changes to the "create_metadata_json" function into the multiple-file-format support, where they are more appropriate, since multi-machine testing was done with those code changes.
* Addressing review comments: removed a trailing space at the end of the line, as suggested.
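A sketch of the two fixes, with hypothetical array contents:

```python
import numpy as np
import torch
import dgl

src = np.array([0, 1, 2], dtype=np.int64)
dst = np.array([1, 2, 0], dtype=np.int64)

# Share memory with the numpy array instead of copying it;
# per the commit, torch.Tensor() corrupted memory for very large arrays.
src_t = torch.from_numpy(src)
dst_t = torch.from_numpy(dst)

# Pass num_nodes explicitly so every node in the partition is kept;
# otherwise DGL infers the node count from the largest endpoint id,
# dropping isolated nodes at the end of the id range.
g = dgl.graph((src_t, dst_t), num_nodes=10)
```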
- 23 May, 2022 1 commit
- 19 May, 2022 1 commit
kylasa authored
[Distributed Training Pipeline] Initial implementation of Distributed data processing step in the Distributed Training pipeline (#3926)
* Initial implementation of the distributed data processing step in the Distributed Training pipeline. Implemented the following:
  1) Read the output of parmetis (node-id to partition-id mappings).
  2) Read the original graph files.
  3) Shuffle the node/edge metadata and features.
  4) Output the partition-specific files in DGL format using the convert_partition.py functionality.
  5) Graph metadata is serialized in JSON format on the rank-0 machine.
* Bug fixes identified during verification of the dataset:
  1. When sending out global-id lookups for non-local nodes in msg_alltoall.py, a conditional filter was used to identify the indices in node_data, which was incorrect. Replaced the conditional filter with intersect1d to find the common node ids and the corresponding indices, which are later used to identify the information to communicate (see the sketch after this entry).
  2. When writing the graph-level JSON file in distributed processing, the edge_offset on non-rank-0 machines started from 0 instead of the appropriate offset. Added code so that edges start from the correct offset instead of always 0.
* Restructuring and consolidation of code:
  1) Fixed an issue when running verify_mag_dataset.py. We now read xxx_removed_edges.txt and add these edges to `edge_data`, which ensures that self-loops and duplicate edges are handled appropriately when compared against the original dataset.
  2) Consolidated code into fewer files and changed the code to follow the Python naming convention.
* Code changes addressing code review comments. The following changes are made in this commit:
  1) A naming convention is defined and the code is changed accordingly. The various global_ids are defined, and how to read them is documented.
  2) All the code review comments are addressed.
  3) Files are moved to a new directory under dgl/tools, as suggested.
  4) A README.md file is included; it contains detailed information about the naming convention adopted by the code, a high-level overview of the data-shuffling algorithm, and an example command line for use on a single machine.
* Addressed all the review comments from GitHub.
* Addressed the latest code review comments. One of the major changes is treating the node and edge metadata as dictionary objects and replacing all Python lists with numpy arrays.
* Update README.md: text rendering corrections.
* Addressed code review comments from the latest review.
Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
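A sketch of the intersect1d-based fix from bug fix 1, with hypothetical ids:

```python
import numpy as np

# node_data holds the global node ids owned locally; requested holds the
# global ids asked for by a remote rank. intersect1d with return_indices
# yields both the common ids and their positions in each input array.
node_data = np.array([3, 7, 11, 15, 19])
requested = np.array([7, 15, 42])

common, idx_local, idx_req = np.intersect1d(
    node_data, requested, return_indices=True)
# common    -> [ 7 15]  the ids this rank can answer for
# idx_local -> [1 3]    positions in node_data used to look up the replies
```

Unlike an elementwise conditional filter, this correctly handles ids that appear in only one of the two arrays, which matches the bug described above.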