1. 10 Nov, 2021 1 commit
  2. 09 Nov, 2021 2 commits
  3. 08 Nov, 2021 3 commits
    • [Model] LDA subgraph (#3206) · fe6e01ad
      yifeim authored
      
      
      * add word_ids and simplify
      
      * simplify
      
      * add word_ids to be removed later
      
      * remove word_ids
      
      * seems to work
      
      * tweak
      
      * transpose word_z
      
      * add word_ids example
      
      * check api compatibility
      
      * improve compatibility
      
      * update doc
      
      * tweak verbose
      
      * restore word_z layout; tweak
      
      * tweak
      
      * tweak doc
      
      * word_cT
      
      * use log_weight and some other tweaks
      
      * rewrite README
      
      * update equations
      
      * rewrite for clarity and pass tests
      
      * tweak
      
      * bugfix import
      
      * fix unit test
      
      * fix mult to be the same as old versions
      
      * tweak
      
      * could be a bugfix
      
      * handle the 0/0 = NaN edge case
      
      * add doc_subgraph utility function
      
      * minor cache optimization
      
      * minor cache tweak
      
      * add environment variable to trade cache speed for memory
      
      * update README
      
      * tweak
      
      * add sparse update pass unit test
      
      * simplify sparse update
      
      * improve low-memory efficiency
      
      * tweak
      
      * add sample expectation scores to allow resampling
      
      * simplify
      
      * update comment
      
      * avoid edge cases
      
      * bugfix pred scores
      
      * simplify
      
      * add save function
      Co-authored-by: Yifei Ma <yifeim@amazon.com>
      Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
      Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>
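      As a hedged illustration of the bipartite document-word layout this LDA example operates on (the tensors, counts, and the node_subgraph stand-in for the doc_subgraph utility mentioned above are assumptions, not the example's actual API), a minimal sketch:

      ```python
      import dgl
      import torch

      # Hypothetical bag-of-words data: document i contains word j.
      doc_ids = torch.tensor([0, 0, 1, 2, 2, 2])
      word_ids = torch.tensor([3, 7, 7, 0, 3, 9])

      # Bipartite doc->word graph; 'topic' is just the edge-type name here.
      G = dgl.heterograph(
          {('doc', 'topic', 'word'): (doc_ids, word_ids)},
          num_nodes_dict={'doc': 3, 'word': 10},
      )

      # The log mentions a doc_subgraph utility; a plausible stand-in is
      # dgl.node_subgraph over a batch of documents plus all words.
      batch = {'doc': torch.tensor([0, 2]), 'word': torch.arange(10)}
      sub = dgl.node_subgraph(G, batch)
      print(sub.num_edges('topic'))  # 5 edges touch documents 0 and 2
      ```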
    • [Doc] Fix typo in CUDA.cmake (#3479) · 9c41e97c
      Hongyu Cai authored
      
      Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
    • Remove self-loops and duplicate edges before ParMETIS and restore when converting to DGLGraph (#3472) · 2a757d4a
      Rhett Ying authored
      
      * save self-loops and duplicated edges separately.
      
      * [BugFix] sort graph by dgl.ETYPE
      
      * fix bugs in verify script
      
      * fix verify logic
      
      * refine README
      Co-authored-by: Da Zheng <zhengda1936@gmail.com>
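      A rough sketch of the save-and-restore idea above (plain NumPy with made-up arrays; the real change lives in the ParMETIS preprocessing scripts):

      ```python
      import numpy as np

      # Hypothetical edge list with a self-loop (2, 2) and a duplicate (0, 1).
      src = np.array([0, 1, 2, 0, 3])
      dst = np.array([1, 2, 2, 1, 0])

      # Split out self-loops and keep them aside.
      loop = src == dst
      self_loops = np.stack([src[loop], dst[loop]], axis=1)
      edges = np.stack([src[~loop], dst[~loop]], axis=1)

      # Keep one copy of each remaining edge; remember the dropped duplicates.
      uniq, first = np.unique(edges, axis=0, return_index=True)
      dup_mask = np.ones(len(edges), dtype=bool)
      dup_mask[first] = False
      duplicates = edges[dup_mask]

      # ... partition with ParMETIS using `uniq` only ...

      # Restore everything when converting back to a DGLGraph.
      restored = np.concatenate([uniq, duplicates, self_loops], axis=0)
      ```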
  4. 06 Nov, 2021 1 commit
  5. 05 Nov, 2021 2 commits
  6. 04 Nov, 2021 3 commits
  7. 03 Nov, 2021 3 commits
  8. 29 Oct, 2021 1 commit
  9. 28 Oct, 2021 1 commit
  10. 27 Oct, 2021 1 commit
  11. 26 Oct, 2021 2 commits
  12. 21 Oct, 2021 1 commit
    • [Sampling] Implement dgl.compact_graphs() for the GPU (#3423) · a8c81018
      Xin Yao authored
      * gpu compact graph template
      
      * cuda compact graph draft
      
      * fix typo
      
      * compact graphs
      
      * pass unit test but fail in training
      
      * example using EdgeDataLoader on the GPU
      
      * refactor cuda_compact_graph and cuda_to_block
      
      * update training scripts
      
      * fix linting
      
      * fix linting
      
      * fix exclude_edges for the GPU
      
      * add --data-cpu & fix copyright
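      dgl.compact_graphs removes nodes that appear in no edge, which is the step this PR moves onto the GPU. A minimal usage sketch (the toy graph is made up; the CUDA transfer is guarded):

      ```python
      import dgl
      import torch

      # A 10-node graph in which only nodes {0, 3, 5, 7} touch an edge.
      g = dgl.graph((torch.tensor([0, 3]), torch.tensor([5, 7])), num_nodes=10)
      if torch.cuda.is_available():
          g = g.to('cuda')  # after this PR, compaction runs on the GPU

      cg = dgl.compact_graphs(g)
      print(cg.num_nodes())     # 4: isolated nodes are dropped
      print(cg.ndata[dgl.NID])  # mapping back to the original node IDs
      ```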
  13. 19 Oct, 2021 1 commit
  14. 18 Oct, 2021 4 commits
  15. 15 Oct, 2021 2 commits
  16. 14 Oct, 2021 6 commits
  17. 12 Oct, 2021 2 commits
  18. 11 Oct, 2021 2 commits
  19. 07 Oct, 2021 1 commit
    • [Model] Refine GraphSAINT (#3328) · aef96dfa
      K authored
      
      
      * The start of Jiahang Li's experiments on GraphSAINT.
      
      * a nightly build
      
      * a nightly build
      
      Check the basic pipeline of the code. Next, check the details of the samplers, the GCN layer (forward propagation), and the loss (backward propagation).
      
      * a nightly build
      
      * Implement GraphSAINT with torch.dataloader
      
      There are still some bugs with sampling in the training procedure.
      
      * Test validity
      
      Validity has been verified on the ppi_node experiments; other setups remain untested.
      1. Online sampling in the ppi_node experiments performs perfectly.
      2. Sampling is a bit slow because of the [dgl.subgraphs] operations; the next step is to parallelize this conversion.
      3. Figure out why the offline+online sampling method performs badly, which does not make sense.
      4. Run experiments on the other setups.
      
      * Implement SAINT with torch.dataloader
      
      Use torch.dataloader to speed up SAINT sampling. Except for the very large Amazon dataset, we have run experiments on the other four datasets: ppi, flickr, reddit, and yelp. Preliminary results show that both runtime and metrics reach a reasonable level. The next step is to profile more precisely with line_profiler and to tune num_workers so that sampling on certain datasets runs faster (a sketch of this DataLoader pattern appears after this entry).
      
      * a nightly build
      
      * Update .gitignore
      
      * reorganize codes
      
      Reorganize some code and comments.
      
      * a nightly build
      
      * Update .gitignore
      
      * fix bugs
      
      Fix the bugs that prevented fully offline sampling and the author's version from working.
      
      * reorganize files and codes
      
      Reorganize files and code, then run experiments to compare the performance of offline and online sampling.
      
      * do some experiments and update README
      
      * a nightly build
      
      * a nightly build
      
      * Update README.md
      
      * delete unnecessary files
      
      * Update README.md
      
      * a nightly update
      
      1. handle the directory named 'graphsaintdata'
      2. control moving the graph between GPU and CPU for the large dataset ('amazon')
      3. remove the parameter 'train'
      4. refine the annotations of the sampler
      5. update README.md, including dataset info, dependency info, etc.
      
      * a nightly update
      
      explain the config differences in the TEST part
      remove a sampling-time variant
      make 'online' an argument
      change 'norm' to 'sampler'
      explain the parameters in README.md
      
      * Update README.md
      
      * a nightly build
      
      * make online an argument
      * refine README.md
      * refine the `collate_fn` code in sampler.py: in the training phase only one subgraph is returned, so there is no need to check whether the number of subgraphs is larger than 1
      
      * Update sampler.py
      
      Confirm that the problem on flickr is overfitting.
      
      * a nightly update
      
      Fix the overfitting problem on the `flickr` dataset. We need to restrict the number of subgraphs (and hence the number of iterations) used in each training epoch; otherwise the model may overfit by the time it is validated at the end of each epoch. The limit is computed with a formula specified by the author (the capping pattern is sketched after this entry).
      
      * Set up a new flag `full` specifying whether the number of subgraphs used in the training phase equals that of the pre-sampled subgraphs
      
      * Modify code and annotations related to the new flag
      
      * Add a new parameter called `node_budget` to the base class `SAINTSampler` to compute the specific formula
      
      * set `gpu` as a command line argument
      
      * Update README.md
      
      * Finish the experiments on Flickr, done after adding the new flag `full`
      
      * a nightly update
      
      * use half of the edges in the original graph for sampling
      * test dgl.random.choice with and without replacement on half of the edges
      ~ next: test whether moving the probability computation out of __getitem__ speeds up sampling, and try to implement the author's sampling method
      
      * employ Cython to implement per-edge sampling
      
      * employ Cython to implement per-edge sampling
      * run experiments to measure runtime and performance
      ** runtime decreased to approximately 480s, while performance dropped by about 5 points
      * deprecate the Cython implementation
      
      * Revert "employ cython to implement edge sampling for per edge"
      
      * This reverts commit 4ba4f092
      * Deprecate the Cython implementation
      * Retain the half-edges mechanism
      
      * a nightly update
      
      * delete unnecessary annotations
      Co-authored-by: Mufei Li <mufeili1996@gmail.com>
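      The sketch referenced in the entry above: wrapping a SAINT-style sampler in torch.utils.data.DataLoader parallelizes subgraph sampling across workers, and a per-epoch cap limits how many pre-sampled subgraphs each epoch consumes. All names here are illustrative, and MAX_ITERS_PER_EPOCH is a stand-in for the author's limiting formula, which is not reproduced in the log.

      ```python
      import torch
      from torch.utils.data import DataLoader, Dataset

      class ToySAINTSampler(Dataset):
          """Illustrative stand-in: each item is one sampled node-ID set."""
          def __init__(self, num_subgraphs, num_nodes, node_budget):
              self.num_subgraphs = num_subgraphs
              self.num_nodes = num_nodes
              self.node_budget = node_budget

          def __len__(self):
              return self.num_subgraphs

          def __getitem__(self, idx):
              # A real sampler draws nodes/edges by probability; random here.
              return torch.randint(0, self.num_nodes, (self.node_budget,))

      def take_first(items):
          # In the training phase only one subgraph is returned per batch.
          return items[0]

      loader = DataLoader(ToySAINTSampler(100, 10_000, 512), batch_size=1,
                          num_workers=4, collate_fn=take_first)

      MAX_ITERS_PER_EPOCH = 50  # stand-in for the author's limiting formula
      for it, node_ids in enumerate(loader):
          if it >= MAX_ITERS_PER_EPOCH:
              break  # cap iterations per epoch to curb overfitting
          # induce the subgraph (e.g. dgl.node_subgraph) and train on it
      ```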
  20. 30 Sep, 2021 1 commit