Unverified Commit 266b21e5 authored by Jinjing Zhou, committed by GitHub

[DGL-Go] Change name to dglgo (#3778)



* add

* remove

* fix

* rework the readme and some changes

* add png

* update png

* add recipe get
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
Co-authored-by: Quan (Andy) Gan <coin2028@hotmail.com>
parent d41d07d0
# DGL-Go

DGL-Go is a command line tool for users to get started with training, using and
studying Graph Neural Networks (GNNs). Data scientists can quickly apply GNNs
to their problems, whereas researchers will find it useful to customize their
experiments.

## Installation and get started
DGL-Go requires DGL v0.8+ so please make sure DGL is updated properly.
Install DGL-Go by `pip install dglgo` and type `dgl` in your console:
```
Usage: dgl [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  configure  Generate a configuration file
  export     Export a runnable python script
  recipe     Get example recipes
  train      Launch training
```
![img](./dglgo.png)
Using DGL-Go is as easy as three steps:
1. Use `dgl configure` to pick the task, dataset and model of your interests. It generates
a configuration file for later use. You could also use `dgl recipe get` to retrieve
a configuration file we provided.
1. Use `dgl train` to launch training according to the configuration and see the results.
1. Use `dgl export` to generate a *self-contained, reproducible* Python script for advanced
customization, or try the model on custom data stored in CSV format.
Next, we will walk through all these steps one-by-one.
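Concretely, for the Cora example walked through below, the whole workflow is just three commands:

```
dgl configure nodepred --data cora --model sage --cfg cora_sage.yaml
dgl train --cfg cora_sage.yaml
dgl export --cfg cora_sage.yaml --output script.py
```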
## Training GraphSAGE for node classification on Cora
Let's use one of the most classical setups -- training a GraphSAGE model for node
classification on the Cora citation graph dataset as an
example.
### Step 1: `dgl configure`
First step, use `dgl configure` to generate a YAML configuration file.
```
dgl configure nodepred --data cora --model sage --cfg cora_sage.yaml
```
Note that `nodepred` is the name of a DGL-Go *pipeline*. For now, you can think of
a pipeline as a training task: `nodepred` is for the node prediction task; other
options include `linkpred` for the link prediction task, etc. The command will
generate a configuration file `cora_sage.yaml` which includes:
* Options for the selected dataset (i.e., `cora` here).
* Model hyperparameters (e.g., number of layers, hidden size, etc.).
* Training hyperparameters (e.g., learning rate, loss function, etc.).
Different choices of task, model and dataset may give very different options,
so DGL-Go also adds a comment for what each option does in the file.
At this point you can also change options to explore optimization potentials.
Below shows the configuration file generated by the command above.
```yaml
version: 0.0.1
...
  dropout: 0.5          # Dropout rate.
  aggregator_type: gcn  # Aggregator type to use (``mean``, ``gcn``, ``pool``, ``lstm``).
general_pipeline:
  early_stop:
    patience: 20        # Steps before early stop
    checkpoint_path: checkpoint.pth  # Early stop checkpoint model file path
  num_epochs: 200       # Number of training epochs
  eval_period: 5        # Interval epochs between evaluations
  optimizer:
...
    lr: 0.01
    weight_decay: 0.0005
  loss: CrossEntropyLoss
  save_path: model.pth  # Path to save the model
  num_runs: 1           # Number of experiments to run
```
Apart from `dgl configure`, you could also get one of DGL-Go's built-in configuration files
(called *recipe*) using `dgl recipe`. There are two sub-commands:
```
dgl recipe list
```
will list the available recipes:
```
➜ dgl recipe list
===============================================================================
| Filename | Pipeline | Dataset |
===============================================================================
| linkpred_citation2_sage.yaml | linkpred | ogbl-citation2 |
| linkpred_collab_sage.yaml | linkpred | ogbl-collab |
| nodepred_citeseer_sage.yaml | nodepred | citeseer |
| nodepred_citeseer_gcn.yaml | nodepred | citeseer |
| nodepred-ns_arxiv_gcn.yaml | nodepred-ns | ogbn-arxiv |
| nodepred_cora_gat.yaml | nodepred | cora |
| nodepred_pubmed_sage.yaml | nodepred | pubmed |
| linkpred_cora_sage.yaml | linkpred | cora |
| nodepred_pubmed_gcn.yaml | nodepred | pubmed |
| nodepred_pubmed_gat.yaml | nodepred | pubmed |
| nodepred_cora_gcn.yaml | nodepred | cora |
| nodepred_cora_sage.yaml | nodepred | cora |
| nodepred_citeseer_gat.yaml | nodepred | citeseer |
| nodepred-ns_product_sage.yaml | nodepred-ns | ogbn-products |
===============================================================================
```
Then use
```
dgl recipe get nodepred_cora_sage.yaml
```
to copy the YAML configuration file to your local folder.
### Step 2: `dgl train`

Simply running `dgl train --cfg cora_sage.yaml` will start the training process.
```log
...
Epoch 00190 | Loss 1.5225 | TrainAcc 0.9500 | ValAcc 0.6840
...
Test Accuracy 0.7740
Accuracy across 1 runs: 0.774 ± 0.0
```
That's all! Basically you only need two commands to train a graph neural network.
### Step 3: `dgl export` for more advanced customization

That's not everything yet. You may want to open the hood and invoke deeper
customization. DGL-Go can export a **self-contained, reproducible** Python
script for you to do anything you like.

Try `dgl export --cfg cora_sage.yaml --output script.py`,
and you'll get the script used to train the model. Here's the code snippet:
```python
...
class GraphSAGE(nn.Module):
    def __init__(self,
                 data_info: dict,
                 embed_size: int = -1,
                 hidden_size: int = 16,
                 num_layers: int = 1,
                 activation: str = "relu",
                 dropout: float = 0.5,
                 aggregator_type: str = "gcn"):
        """GraphSAGE model

        Parameters
        ----------
        data_info : dict
            The information about the input dataset.
        embed_size : int
            The dimension of created embedding table. -1 means using original node embedding
        hidden_size : int
            Hidden size.
        num_layers : int
            Number of hidden layers.
        dropout : float
            Dropout rate.
        activation : str
            Activation function name under torch.nn.functional
        aggregator_type : str
            Aggregator type to use (``mean``, ``gcn``, ``pool``, ``lstm``).
        """
        super(GraphSAGE, self).__init__()
        self.data_info = data_info
        self.embed_size = embed_size
        if embed_size > 0:
            self.embed = nn.Embedding(data_info["num_nodes"], embed_size)
            in_size = embed_size
        else:
            in_size = data_info["in_size"]
        self.layers = nn.ModuleList()
        self.dropout = nn.Dropout(dropout)
        self.activation = getattr(nn.functional, activation)
        for i in range(num_layers):
            in_hidden = hidden_size if i > 0 else in_size
            out_hidden = hidden_size if i < num_layers - 1 else data_info["out_size"]
            self.layers.append(dgl.nn.SAGEConv(in_hidden, out_hidden, aggregator_type))

    def forward(self, graph, node_feat, edge_feat=None):
        if self.embed_size > 0:
            dgl_warning(
                "The embedding for node feature is used, and input node_feat is ignored, due to the provided embed_size.",
                norepeat=True)
            h = self.embed.weight
        else:
            h = node_feat
        h = self.dropout(h)
        for l, layer in enumerate(self.layers):
            h = layer(graph, h, edge_feat)
            if l != len(self.layers) - 1:
                h = self.activation(h)
                h = self.dropout(h)
        return h
...
def train(cfg, pipeline_cfg, device, data, model, optimizer, loss_fcn):
    g = data[0]  # Only train on the first graph
    g = dgl.remove_self_loop(g)
...
    train_mask, val_mask, test_mask = g.ndata['train_mask'].bool(
    ), g.ndata['val_mask'].bool(), g.ndata['test_mask'].bool()
    stopper = EarlyStopping(**pipeline_cfg['early_stop'])
    val_acc = 0.
    for epoch in range(pipeline_cfg['num_epochs']):
        model.train()
...
        if epoch != 0 and epoch % pipeline_cfg['eval_period'] == 0:
            val_acc = accuracy(logits[val_mask], label[val_mask])
            if stopper.step(val_acc, model):
                break
        print("Epoch {:05d} | Loss {:.4f} | TrainAcc {:.4f} | ValAcc {:.4f}".
              format(epoch, loss.item(), train_acc, val_acc))
    stopper.load_checkpoint(model)
    model.eval()
    with torch.no_grad():
        logits = model(g, node_feat, edge_feat)
...
def main():
    cfg = {
        'version': '0.0.1',
        'device': 'cuda:0',
        'data': {
            'split_ratio': None},
        'model': {
            'embed_size': -1,
            'hidden_size': 16,
            'num_layers': 2,
            'activation': 'relu',
            'dropout': 0.5,
            'aggregator_type': 'gcn'},
        'general_pipeline': {
            'early_stop': {
                'patience': 100,
                'checkpoint_path': 'checkpoint.pth'},
            'num_epochs': 200,
            'eval_period': 5,
            'optimizer': {
                'lr': 0.01,
                'weight_decay': 0.0005},
            'loss': 'CrossEntropyLoss',
            'save_path': 'model.pth',
            'num_runs': 10}}
    device = cfg['device']
    pipeline_cfg = cfg['general_pipeline']
    # load data
...
        **pipeline_cfg["optimizer"])
    # train
    test_acc = train(cfg, pipeline_cfg, device, data, model, optimizer, loss)
    torch.save(model, pipeline_cfg["save_path"])
    return test_acc
...
```
You can see that everything is collected into one Python script which includes the
entire `GraphSAGE` model definition, data processing and training loop. Simply running
`python script.py` will give you the *exact same* result as you've seen by `dgl train`.
At this point, you can change any part as you wish such as plugging your own GNN module,
changing the loss function and so on.
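For instance, here is a minimal sketch of a custom module you could plug in place of the exported `GraphSAGE` class, assuming you keep the same `data_info` dict and `forward(graph, node_feat, edge_feat)` interface used by the rest of the script (the class name and layer choices here are hypothetical, not part of DGL-Go):

```python
import torch
import torch.nn as nn
import dgl

class MyGCN(nn.Module):
    # Hypothetical drop-in replacement for the exported GraphSAGE class:
    # same constructor contract (a data_info dict) and forward signature.
    def __init__(self, data_info: dict, hidden_size: int = 16):
        super().__init__()
        self.conv1 = dgl.nn.GraphConv(data_info["in_size"], hidden_size)
        self.conv2 = dgl.nn.GraphConv(hidden_size, data_info["out_size"])

    def forward(self, graph, node_feat, edge_feat=None):
        # edge_feat is accepted but unused, matching how the script calls the model
        h = torch.relu(self.conv1(graph, node_feat))
        return self.conv2(graph, h)
```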
## Use DGL-Go on your own dataset

DGL-Go supports training a model on a custom dataset via DGL's `CSVDataset`.

### Step 1: Prepare your CSV and metadata file
Follow the tutorial at [Loading data from CSV
files](https://docs.dgl.ai/en/latest/guide/data-loadcsv.html#guide-data-pipeline-loadcsv)
to prepare your dataset. Generally, the dataset folder should include:
* At least one CSV file for node data.
* At least one CSV file for edge data.
* A metadata file called `meta.yaml`.
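For reference, a minimal `meta.yaml` may look like the following (dataset and file names are placeholders):

```yaml
dataset_name: my_csv_dataset
edge_data:
- file_name: edges.csv
node_data:
- file_name: nodes.csv
```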
### Step 2: `dgl configure` with `--data csv` option

Run

```
dgl configure nodepred --data csv --model sage --cfg csv_sage.yaml
```
You will see that the file includes a section like
the following:
```yaml
...
data:
  name: csv
  split_ratio:   # Ratio to generate split masks, for example set to [0.8, 0.1, 0.1] for 80% train/10% val/10% test. Leave blank to use builtin split in original dataset
  data_path: ./  # meta.yaml, nodes.csv and edges.csv should be in this folder
...
```
Fill in the `data_path` option with the path to your dataset folder.
If your dataset does not have any native split for training, validation and test sets,
you can set the split ratio in the `split_ratio` option, which will
generate a random split for you.
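For example, a filled-in `data` section might look like this (the folder name is illustrative):

```yaml
data:
  name: csv
  split_ratio: [0.8, 0.1, 0.1]  # 80% train / 10% val / 10% test
  data_path: ./my_dataset       # folder containing meta.yaml, nodes.csv and edges.csv
```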
### Step 3: `train` the model / `export` the script

Then you can do the same as the tutorial above: either train the model by
`dgl train --cfg csv_sage.yaml` or use `dgl export --cfg csv_sage.yaml
--output script.py` to get the training script.

## FAQ

**Q: What are the available options for each command?**

A: You can use `--help` for all commands. For example, use `dgl --help` for the general
help message; use `dgl configure --help` for the configuration options; use
`dgl configure nodepred --help` for the configuration options of the node prediction pipeline.

**Q: What exactly is nodepred/linkpred? How many are they?**

A: They are called DGL-Go pipelines. A pipeline represents the training methodology for
a certain task. Therefore, its naming convention is *<task_name>[-<method_name>]*. For example,
`nodepred` trains the selected GNN model for node classification using the full-graph training method,
while `nodepred-ns` trains the model for node classification but using neighbor sampling.
The first release includes three training pipelines (`nodepred`, `nodepred-ns` and `linkpred`),
but you can expect more to come in the future. Use `dgl configure --help` to see
all the available pipelines.

**Q: How to add my model to the official model recipe zoo?**

A: Currently not supported. We will enable this feature soon. Please stay tuned!

**Q: After training a model on some dataset, how can I apply it to another one?**

A: The `save_path` option in the generated configuration file allows you to specify where
to save the model after training. You can then modify the script generated by `dgl export`
to load the model checkpoint and evaluate it on another dataset.
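For instance, a minimal sketch of such a modification, assuming the checkpoint was saved with `torch.save(model, ...)` as in the exported script above, and that the new dataset (Citeseer is only a stand-in here) matches the feature and label sizes the model was trained with:

```python
import torch
import dgl

# Load the full model object saved by the exported script
# (the script calls torch.save(model, pipeline_cfg["save_path"])).
model = torch.load("model.pth")
model.eval()

# Stand-in dataset; it must match the trained model's input/output sizes.
data = dgl.data.CiteseerGraphDataset()
g = data[0]
with torch.no_grad():
    logits = model(g, g.ndata["feat"])
    pred = logits.argmax(dim=1)
    acc = (pred == g.ndata["label"]).float().mean().item()
print("Accuracy: {:.4f}".format(acc))
```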
@@ -4,12 +4,14 @@ from ..model import *
from .config_cli import config_app
from .train_cli import train
from .export_cli import export
from .recipe_cli import recipe_app

no_args_is_help = False
app = typer.Typer(no_args_is_help=True, add_completion=False)
app.add_typer(config_app, name="configure", no_args_is_help=no_args_is_help)
app.add_typer(recipe_app, name="recipe", no_args_is_help=True)
app.command(help="Launch training", no_args_is_help=no_args_is_help)(train)
app.command(help="Export a runnable python script", no_args_is_help=no_args_is_help)(export)

def main():
    app()
...
@@ -6,7 +6,7 @@ import typing
import yaml
from pathlib import Path

config_app = typer.Typer(help="Generate a configuration file")
for key, pipeline in PipelineFactory.registry.items():
    config_app.command(key, help=pipeline.get_description())(pipeline.get_cfg_func())
...
@@ -10,8 +10,8 @@ import isort
import autopep8

def export(
    cfg: str = typer.Option("cfg.yaml", help="config yaml file name"),
    output: str = typer.Option("script.py", help="output python file name")
):
    user_cfg = yaml.safe_load(Path(cfg).open("r"))
    pipeline_name = user_cfg["pipeline_name"]
...
from pathlib import Path
from typing import Optional
import typer
import os
import shutil
import yaml

def list_recipes():
    # Recipes ship with the package under <package_root>/recipes
    file_current_dir = Path(__file__).resolve().parent
    recipe_dir = file_current_dir.parent.parent / "recipes"
    file_list = list(recipe_dir.glob("*.yaml"))
    header = "| {:<30} | {:<18} | {:<20} |".format("Filename", "Pipeline", "Dataset")
    typer.echo("=" * len(header))
    typer.echo(header)
    typer.echo("=" * len(header))
    for file in file_list:
        cfg = yaml.safe_load(Path(file).open("r"))
        typer.echo("| {:<30} | {:<18} | {:<20} |".format(file.name, cfg["pipeline_name"], cfg["data"]["name"]))
    typer.echo("=" * len(header))

def copy_recipes(dir: str = typer.Option("dglgo_example_recipes", help="directory name for recipes")):
    file_current_dir = Path(__file__).resolve().parent
    recipe_dir = file_current_dir.parent.parent / "recipes"
    current_dir = Path(os.getcwd())
    new_dir = current_dir / dir
    new_dir.mkdir(parents=True, exist_ok=True)
    for file in recipe_dir.glob("*.yaml"):
        shutil.copy(file, new_dir)
    print("Example recipes are copied to {}".format(new_dir.absolute()))

def get_recipe(recipe_name: Optional[str] = typer.Argument(None, help="The recipe filename to get, e.g. nodepred_citeseer_gcn.yaml")):
    if recipe_name is None:
        # Without an argument, print usage plus the list of available recipes
        typer.echo("Usage: dgl recipe get [RECIPE_NAME] \n")
        typer.echo("  Copy the recipe to current directory \n")
        typer.echo("  Arguments:")
        typer.echo("  [RECIPE_NAME]  The recipe filename to get, e.g. nodepred_citeseer_gcn.yaml\n")
        typer.echo("Here are all available recipe filenames")
        list_recipes()
    else:
        file_current_dir = Path(__file__).resolve().parent
        recipe_dir = file_current_dir.parent.parent / "recipes"
        current_dir = Path(os.getcwd())
        recipe_path = recipe_dir / recipe_name
        shutil.copy(recipe_path, current_dir)
        print("Recipe {} is copied to {}".format(recipe_path.absolute(), current_dir.absolute()))

recipe_app = typer.Typer(help="Get example recipes")
recipe_app.command(name="list", help="List all available example recipes")(list_recipes)
recipe_app.command(name="copy", help="Copy all available example recipes to current directory")(copy_recipes)
recipe_app.command(name="get", help="Copy the recipe to current directory")(get_recipe)

if __name__ == "__main__":
    recipe_app()
@@ -5,12 +5,11 @@ from enum import Enum
import typing
import yaml
from pathlib import Path
import isort
import autopep8

def train(
    cfg: str = typer.Option("cfg.yaml", help="config yaml file name"),
):
    user_cfg = yaml.safe_load(Path(cfg).open("r"))
    pipeline_name = user_cfg["pipeline_name"]
@@ -18,8 +17,8 @@ def train(
    f_code = autopep8.fix_code(output_file_content, options={'aggressive': 1})
    f_code = isort.code(f_code)
    code = compile(f_code, 'dglgo_tmp.py', 'exec')
    exec(code, {'__name__': '__main__'})

if __name__ == "__main__":
    train_app = typer.Typer()
...
@@ -49,7 +49,7 @@ class GCN(nn.Module):
            in_hidden = hidden_size if i > 0 else in_size
            out_hidden = hidden_size if i < num_layers - 1 else data_info["out_size"]
            self.layers.append(dgl.nn.GraphConv(in_hidden, out_hidden, norm=norm, allow_zero_in_degree=True))
        self.dropout = nn.Dropout(p=dropout)
        self.act = getattr(torch, activation)
...
@@ -12,6 +12,8 @@ class GIN(nn.Module):
                 aggregator_type='sum'):
        """Graph Isomorphism Networks

        Edge feature is ignored in this model.

        Parameters
        ----------
        data_info : dict
...
@@ -55,7 +55,7 @@ class GraphSAGE(nn.Module):
            h = node_feat
        h = self.dropout(h)
        for l, layer in enumerate(self.layers):
            h = layer(graph, h, edge_feat)
            if l != len(self.layers) - 1:
                h = self.activation(h)
                h = self.dropout(h)
@@ -64,7 +64,7 @@ def forward_block(self, blocks, node_feat, edge_feat = None):
    def forward_block(self, blocks, node_feat, edge_feat=None):
        h = node_feat
        for l, (layer, block) in enumerate(zip(self.layers, blocks)):
            h = layer(block, h, edge_feat)
            if l != len(self.layers) - 1:
                h = self.activation(h)
                h = self.dropout(h)
...
@@ -14,6 +14,8 @@ class SGC(nn.Module):
                 bias=True, k=2):
        """ Simplifying Graph Convolutional Networks

        Edge feature is ignored in this model.

        Parameters
        ----------
        data_info : dict
...