1. 31 Jul, 2020 1 commit
  2. 27 Jul, 2020 1 commit
  3. 22 Jul, 2020 1 commit
  4. 20 Jul, 2020 1 commit
    • [RPC] Rpc exit with explicit invocation (#1825) · 5c92f6c2
      Chao Ma authored
      * exit client
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update test
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      5c92f6c2
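      A minimal sketch of what the explicit exit looks like from a training script, assuming the shutdown call this PR wires up is the dgl.distributed.exit_client() that appears in later DGL releases (that attribution is an assumption, not confirmed by the log):

      import dgl

      # Connect this client process to the servers listed in the IP config file.
      dgl.distributed.initialize('ip_config.txt')

      # ... build a DistGraph and run the training loop here ...

      # Explicitly tell the servers this client is done, instead of relying on
      # implicit teardown at interpreter exit (assumed to be the call added here).
      dgl.distributed.exit_client()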
  5. 16 Jul, 2020 1 commit
    • [Distributed] Distributed launching script (#1772) · ca9d3216
      Chao Ma authored
      
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * fix launch script.
      Co-authored-by: Da Zheng <zhengda1936@gmail.com>
      ca9d3216
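      A launcher of this kind typically reads an IP config file and starts the server and trainer processes on every listed machine over SSH. The sketch below is a rough illustration of that general pattern only; the file format, role flags and commands in it are assumptions, not the interface of the script added in this PR.

      # Rough illustration of a distributed launcher: start one server process per
      # host from the IP config file, then the trainer processes, and wait.
      # Not the actual launch script; commands and file format are assumptions.
      import subprocess

      def launch(ip_config='ip_config.txt',
                 server_cmd='python3 train_dist.py --role server',
                 trainer_cmd='python3 train_dist.py --role trainer'):
          with open(ip_config) as f:
              hosts = [line.split()[0] for line in f if line.strip()]
          procs = [subprocess.Popen(['ssh', host, server_cmd]) for host in hosts]
          procs += [subprocess.Popen(['ssh', host, trainer_cmd]) for host in hosts]
          for proc in procs:   # block until every remote process finishes
              proc.wait()

      if __name__ == '__main__':
          launch()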
  6. 15 Jul, 2020 1 commit
  7. 14 Jul, 2020 1 commit
  8. 02 Jul, 2020 1 commit
    • [Sampling] NodeDataLoader for node classification (#1635) · 168a88e5
      Quan (Andy) Gan authored
      
      
      * neighbor sampler data loader first commit
      
      * more commit
      
      * nodedataloader
      
      * fix
      
      * update RGCN example
      
      * update OGB
      
      * fixes
      
      * fix minibatch RGCN crashing with self loop
      
      * reverting gatconv test code
      
      * fix
      
      * change to new solution that doesn't require tf dataloader
      
      * fix
      
      * lint
      
      * fix
      
      * fixes
      
      * change doc
      
      * fix docstring
      
      * docstring fixes
      
      * return seeds and input nodes from data loader
      
      * fixes
      
      * fix test
      
      * fix windows build problem
      
      * add pytorch wrapper
      
      * fixes
      
      * add pytorch wrapper
      
      * add unit test
      
      * add -1 support to sample_neighbors & fix docstrings
      
      * docstring fix
      
      * lint
      
      * add minibatch rgcn evaluations
      Co-authored-by: xiang song(charlie.song) <classicxsong@gmail.com>
      Co-authored-by: Tong He <hetong007@gmail.com>
      168a88e5
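      A minimal sketch of the sampling pipeline this PR introduces, following the dgl.dataloading API as it stabilized around DGL 0.5; the graph g, the seed-node tensor train_nids and the model are placeholders. Per the last commits, a fanout of -1 asks sample_neighbors (and the neighbor sampler built on it) for all neighbors rather than a random subset.

      import dgl
      import torch as th

      # One fanout per GNN layer; -1 would mean "take every neighbor" for that layer.
      sampler = dgl.dataloading.MultiLayerNeighborSampler([10, 25])

      # Iterates over the seed nodes in minibatches and builds the blocks
      # (message-flow graphs) needed to compute their output representations.
      dataloader = dgl.dataloading.NodeDataLoader(
          g, train_nids, sampler,
          batch_size=1024, shuffle=True, drop_last=False, num_workers=4)

      for input_nodes, output_nodes, blocks in dataloader:
          batch_inputs = g.ndata['feat'][input_nodes]    # features of all sampled nodes
          batch_labels = g.ndata['label'][output_nodes]  # labels of the seed nodes
          batch_pred = model(blocks, batch_inputs)
          loss = th.nn.functional.cross_entropy(batch_pred, batch_labels)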
  9. 01 Jul, 2020 1 commit
  10. 28 Jun, 2020 1 commit
    • [Distributed] Pytorch example of distributed GraphSage. (#1495) · 02d31974
      Da Zheng authored
      
      
      * add train_dist.
      
      * Fix sampling example.
      
      * use distributed sampler.
      
      * fix a bug in DistTensor.
      
      * fix distributed training example.
      
      * add graph partition.
      
      * add command
      
      * disable pytorch parallel.
      
      * shutdown correctly.
      
      * load diff graphs.
      
      * add ip_config.txt.
      
      * record timing for each step.
      
      * use ogb
      
      * add profiler.
      
      * fix a bug.
      
      * add Ips of the cluster.
      
      * fix exit.
      
      * support multiple clients.
      
      * balance node types and edges.
      
      * move code.
      
      * remove run.sh
      
      * Revert "support multiple clients."
      
      * fix.
      
      * update train_sampling.
      
      * fix.
      
      * fix
      
      * remove run.sh
      
      * update readme.
      
      * update readme.
      
      * use pytorch distributed.
      
      * ensure all trainers run the same number of steps.
      
      * Update README.md
      Co-authored-by: Ubuntu <ubuntu@ip-172-31-16-250.us-west-2.compute.internal>
      02d31974
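      A skeleton of what the distributed trainer in this example does, using the dgl.distributed names as they appear around DGL 0.5; the exact signatures shifted during this period, so treat the argument lists as approximate, and the graph name and build_model() are placeholders.

      import dgl
      import torch as th

      # Connect to the graph servers, then to the other trainers over PyTorch distributed.
      dgl.distributed.initialize('ip_config.txt')
      th.distributed.init_process_group(backend='gloo')

      # The partitioned graph lives in the servers; DistGraph is a client-side handle.
      g = dgl.distributed.DistGraph('ogb-product')

      # Each trainer takes its own share of the training nodes.
      train_nids = dgl.distributed.node_split(g.ndata['train_mask'],
                                              g.get_partition_book())

      # build_model() is a placeholder; the example wraps the real GraphSage model
      # in DistributedDataParallel for synchronized gradient updates.
      model = th.nn.parallel.DistributedDataParallel(build_model())
      # ... distributed neighbor sampling and the usual forward/backward loop follow ...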