We test the scalability of the code with the dataset "ogbg-molhiv" on a machine of type <a href="https://aws.amazon.com/blogs/aws/now-available-ec2-instances-g4-with-nvidia-t4-tensor-core-gpus/">Amazon EC2 g4dn.metal</a>, which has **8 Nvidia T4 Tensor Core GPUs**.
|GPU number|Speed up|Batch size|Test accuracy|Average epoch time|
|---|---|---|---|---|
This is an example of implementing [NGNN](https://arxiv.org/abs/2111.11638) for link prediction in DGL.
We use a model-agnostic methodology, namely Network In Graph Neural Network (NGNN), which allows arbitrary GNN models to increase their model capacity.
The script in this folder runs full-batch GCN/GraphSAGE experiments (with and without NGNN) on the datasets ogbl-ddi, ogbl-collab, and ogbl-ppa.
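
The core idea of NGNN is to insert an extra nonlinear feed-forward layer inside a GNN layer, increasing model capacity without adding more message-passing rounds. Below is a minimal sketch of that idea using DGL's `GraphConv`; it is an illustration under these assumptions, not the repository's exact implementation.

```python
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

class NGNNGraphConv(nn.Module):
    """Sketch of an NGNN-style layer: GraphConv followed by an extra nonlinear layer."""
    def __init__(self, in_feats, hidden_feats, out_feats):
        super().__init__()
        self.conv = GraphConv(in_feats, hidden_feats)
        # The "network in" part: an additional feed-forward transformation
        # applied to the aggregated node representations.
        self.ngnn_linear = nn.Linear(hidden_feats, out_feats)

    def forward(self, graph, feat):
        h = self.conv(graph, feat)   # one round of message passing
        h = F.relu(h)
        h = self.ngnn_linear(h)      # extra nonlinear transformation (NGNN)
        return h
```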
## Installation requirements
```
ogb>=1.3.3
torch>=1.11.0
dgl>=0.8
```
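
Assuming a standard pip setup, the requirements above can be installed with a command like the following; GPU builds of `torch` and `dgl` may instead need the platform-specific instructions from their official sites.

```
pip install "ogb>=1.3.3" "torch>=1.11.0" "dgl>=0.8"
```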
## Experiments
We do not fix random seeds, and results are reported over 10 runs for each model. All models are trained on a single V100 GPU with 16GB of memory.
The main command-line options exposed by the training script's argument parser are:

```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--device', type=int, default=0, help='GPU device ID. Use -1 for CPU training.')
# model structure settings
parser.add_argument('--use_sage', action='store_true', help='If not set, use GCN by default.')
parser.add_argument('--ngnn_type', type=str, default='input', choices=['input', 'hidden'], help="Set to 'input' or 'hidden' to apply NGNN to different GNN layers.")
parser.add_argument('--num_layers', type=int, default=3, help='number of GNN layers')
```
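
As a usage sketch, assuming the entry-point script is named `main.py` (the actual filename in this folder may differ), GraphSAGE with NGNN applied to the hidden layers could be launched as:

```
# hypothetical invocation; the script name is an assumption
python main.py --device 0 --use_sage --ngnn_type hidden --num_layers 3
```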