"docs/git@developer.sourcefind.cn:OpenDAS/dgl.git" did not exist on "6d2129831d83f06ee3186de45eaa0547ba7d8ccb"
Commit e5aeeeed authored by Hongzhi (Steve), Chen's avatar Hongzhi (Steve), Chen
Browse files

Created using Colaboratory

parent 5dfaf99e
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"collapsed_sections": [
"BjkAK37xopp1"
],
"gpuType": "T4",
"private_outputs": true,
"authorship_tag": "ABX9TyMa/mQpKaVWFeVZfkCXcqlp",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/dmlc/dgl/blob/master/notebooks/graphbolt/walkthrough.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# Graphbolt Quick Walkthrough\n",
"\n",
"The tutorial provides a quick walkthrough of operators provided by the `dgl.graphbolt` package, and illustrates how to create a GNN datapipe with the package. To learn more details about Stochastic Training of GNNs, please read the [materials](https://docs.dgl.ai/tutorials/large/index.html) provided by DGL.\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dmlc/dgl/blob/master/notebooks/graphbolt/walkthrough.ipynb) [![GitHub](https://img.shields.io/badge/-View%20on%20GitHub-181717?logo=github&logoColor=ffffff)](https://github.com/dmlc/dgl/blob/master/notebooks/graphbolt/walkthrough.ipynb)"
],
"metadata": {
"id": "e1qfiZMOJYYv"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fWiaC1WaDE-W"
},
"outputs": [],
"source": [
"# Install required packages.\n",
"import os\n",
"import torch\n",
"os.environ['TORCH'] = torch.__version__\n",
"os.environ['DGLBACKEND'] = \"pytorch\"\n",
"\n",
"!pip install --pre dgl -f https://data.dgl.ai/wheels-test/cu118/repo.html > /dev/null\n",
"\n",
"try:\n",
" import dgl.graphbolt as gb\n",
" installed = True\n",
"except ImportError as error:\n",
" installed = False\n",
" print(error)\n",
"print(\"DGL installed!\" if installed else \"DGL not found!\")"
]
},
{
"cell_type": "markdown",
"source": [
"## Dataset\n",
"\n",
"The dataset has three primary components. *1*. An itemset, which can be iterated over as the training target. *2*. A sampling graph, which is used by the subgraph sampling algorithm to generate a subgraph. *3*. A feature store, which stores node, edge, and graph features.\n",
"\n",
"* The **Itemset** is created from iterable data or tuple of iterable data."
],
"metadata": {
"id": "8O7PfsY4sPoN"
}
},
{
"cell_type": "code",
"source": [
"node_pairs = torch.tensor(\n",
" [[7, 0], [6, 0], [1, 3], [3, 3], [2, 4], [8, 4], [1, 4], [2, 4], [1, 5],\n",
" [9, 6], [0, 6], [8, 6], [7, 7], [7, 7], [4, 7], [6, 8], [5, 8], [9, 9],\n",
" [4, 9], [4, 9], [5, 9], [9, 9], [5, 9], [9, 9], [7, 9]]\n",
")\n",
"item_set = gb.ItemSet(node_pairs, names=\"node_pairs\")\n",
"print(item_set)"
],
"metadata": {
"id": "g73ZAbMQsSgV"
},
"execution_count": null,
"outputs": []
},
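{
"cell_type": "markdown",
"source": [
"Since an **ItemSet** can be iterated over, we can peek at a few items directly. The check below is purely illustrative and not a required step of the pipeline."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Illustrative sanity check: iterate over the ItemSet directly.\n",
"# The ItemSampler introduced later is what consumes the ItemSet in practice.\n",
"for i, item in enumerate(item_set):\n",
"    print(item)\n",
"    if i >= 2:\n",
"        break"
],
"metadata": {},
"execution_count": null,
"outputs": []
},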
{
"cell_type": "markdown",
"source": [
"* The **SamplingGraph** is used by the subgraph sampling algorithm to generate a subgraph. In graphbolt, we provide a canonical solution, the CSCSamplingGraph, which achieves state-of-the-art time and space efficiency on CPU sampling. However, this requires enough CPU memory to host all CSCSamplingGraph objects in memory."
],
"metadata": {
"id": "Lqty9p4cs0OR"
}
},
{
"cell_type": "code",
"source": [
"indptr = torch.tensor([0, 2, 2, 2, 4, 8, 9, 12, 15, 17, 25])\n",
"indices = torch.tensor(\n",
" [7, 6, 1, 3, 2, 8, 1, 2, 1, 9, 0, 8, 7, 7, 4, 6, 5, 9, 4, 4, 5, 9, 5, 9, 7]\n",
")\n",
"num_edges = 25\n",
"eid = torch.arange(num_edges)\n",
"edge_attributes = {gb.ORIGINAL_EDGE_ID: eid}\n",
"graph = gb.from_csc(indptr, indices, None, None, edge_attributes, None)\n",
"print(graph)"
],
"metadata": {
"id": "jDjY149xs3PI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"* The FeatureStore is used to store node, edge, and graph features. In graphbolt, we provide the TorchBasedFeature and related optimizations, such as the GPUCachedFeature, for different use cases."
],
"metadata": {
"id": "mNp2S2_Vs8af"
}
},
{
"cell_type": "code",
"source": [
"num_nodes = 10\n",
"num_edges = 25\n",
"node_feature_data = torch.rand((num_nodes, 2))\n",
"edge_feature_data = torch.rand((num_edges, 3))\n",
"node_feature = gb.TorchBasedFeature(node_feature_data)\n",
"edge_feature = gb.TorchBasedFeature(edge_feature_data)\n",
"features = {\n",
" (\"node\", None, \"feat\") : node_feature,\n",
" (\"edge\", None, \"feat\") : edge_feature,\n",
"}\n",
"feature_store = gb.BasicFeatureStore(features)\n",
"print(feature_store)"
],
"metadata": {
"id": "zIU6KWe1Sm2g"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## DataPipe\n",
"\n",
"The DataPipe in Graphbolt is an extension of the PyTorch DataPipe, but it is specifically designed to address the challenges of training graph neural networks (GNNs). Each stage of the data pipeline loads data from different sources and can be combined with other stages to create more complex data pipelines. The intermediate data will be stored in **MiniBatch** data packs.\n",
"\n",
"* **ItemSampler** iterates over input **Itemset** and create subsets."
],
"metadata": {
"id": "Oh2ockWWoXQ0"
}
},
{
"cell_type": "code",
"source": [
"datapipe = gb.ItemSampler(item_set, batch_size=3, shuffle=False)\n",
"print(next(iter(datapipe)))"
],
"metadata": {
"id": "XtqPDprrogR7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"* **NegativeSampler** generate negative samples and return a mix of positive and negative samples."
],
"metadata": {
"id": "BjkAK37xopp1"
}
},
{
"cell_type": "code",
"source": [
"datapipe = datapipe.sample_uniform_negative(graph, 1)\n",
"print(next(iter(datapipe)))"
],
"metadata": {
"id": "PrFpGoOGopJy"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"* **SubgraphSampler** samples a subgraph from a given set of nodes from a larger graph."
],
"metadata": {
"id": "fYO_oIwkpmb3"
}
},
{
"cell_type": "code",
"source": [
"fanouts = torch.tensor([1])\n",
"datapipe = datapipe.sample_neighbor(graph, [fanouts])\n",
"print(next(iter(datapipe)))"
],
"metadata": {
"id": "4UsY3PL3ppYV"
},
"execution_count": null,
"outputs": []
},
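{
"cell_type": "markdown",
"source": [
"Assuming the list passed to `sample_neighbor` holds one fanout tensor per sampled layer, passing two tensors would sample a two-hop subgraph. The cell below is a small sketch under that assumption; it builds a separate pipeline (`two_hop_datapipe` is just a local name) so the main `datapipe` above stays unchanged."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch (assumption): each entry in the fanouts list corresponds to one\n",
"# layer of neighbor sampling, so two entries sample a two-hop subgraph.\n",
"# Built separately so the main `datapipe` above is not consumed or altered.\n",
"two_hop_datapipe = gb.ItemSampler(item_set, batch_size=3, shuffle=False)\n",
"two_hop_datapipe = two_hop_datapipe.sample_uniform_negative(graph, 1)\n",
"two_hop_datapipe = two_hop_datapipe.sample_neighbor(\n",
"    graph, [torch.tensor([2]), torch.tensor([2])]\n",
")\n",
"print(next(iter(two_hop_datapipe)))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},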
{
"cell_type": "markdown",
"source": [
"* **FeatureFetcher** fetchs features for node/edge in graphbolt."
],
"metadata": {
"id": "0uIydsjUqMA0"
}
},
{
"cell_type": "code",
"source": [
"datapipe = datapipe.fetch_feature(feature_store, node_feature_keys=[\"feat\"], edge_feature_keys=[\"feat\"])\n",
"print(next(iter(datapipe)))"
],
"metadata": {
"id": "YAj8G7YBqO6G"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"After retrieving the required data, Graphbolt provides helper methods to convert it to the output format needed for subsequent GNN training.\n",
"\n",
"* Convert to **DGLMiniBatch** format for training with DGL."
],
"metadata": {
"id": "Gt059n1xrmj-"
}
},
{
"cell_type": "code",
"source": [
"datapipe = datapipe.to_dgl()\n",
"print(next(iter(datapipe)))"
],
"metadata": {
"id": "o8Yoi8BeqSdu"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"* Copy the data to the GPU for training on the GPU."
],
"metadata": {
"id": "hjBSLPRPrsD2"
}
},
{
"cell_type": "code",
"source": [
"datapipe = datapipe.copy_to(device=\"cuda\")\n",
"print(next(iter(datapipe)))"
],
"metadata": {
"id": "RofiZOUMqt_u"
},
"execution_count": null,
"outputs": []
},
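{
"cell_type": "markdown",
"source": [
"The finished datapipe can be iterated over like any other iterable, yielding one mini-batch per step. The loop below is only a sketch that inspects a couple of mini-batches; a real training loop would run a model and optimizer instead."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Illustrative loop over the datapipe: each iteration yields one mini-batch\n",
"# that already contains the sampled subgraphs and fetched features on the GPU.\n",
"# A real training loop would feed the mini-batch to a model here instead.\n",
"for step, minibatch in enumerate(datapipe):\n",
"    print(f\"step {step}: {type(minibatch).__name__}\")\n",
"    if step >= 1:\n",
"        break"
],
"metadata": {},
"execution_count": null,
"outputs": []
},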
{
"cell_type": "markdown",
"source": [
"## Exercise: Node classification\n",
"\n",
"Similarly, the following Dataset is created for node classification, can you implement the data pipeline for the dataset?"
],
"metadata": {
"id": "xm9HnyHRvxXj"
}
},
{
"cell_type": "code",
"source": [
"# Dataset for node classification.\n",
"num_nodes = 10\n",
"nodes = torch.arange(num_nodes)\n",
"labels = torch.tensor([1, 2, 0, 2, 2, 0, 2, 2, 2, 2])\n",
"item_set = gb.ItemSet((nodes, labels), names=(\"seed_nodes\", \"labels\"))\n",
"\n",
"indptr = torch.tensor([0, 2, 2, 2, 4, 8, 9, 12, 15, 17, 25])\n",
"indices = torch.tensor(\n",
" [7, 6, 1, 3, 2, 8, 1, 2, 1, 9, 0, 8, 7, 7, 4, 6, 5, 9, 4, 4, 5, 9, 5, 9, 7]\n",
")\n",
"eid = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,\n",
" 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24])\n",
"edge_attributes = {gb.ORIGINAL_EDGE_ID: eid}\n",
"graph = gb.from_csc(indptr, indices, None, None, edge_attributes, None)\n",
"\n",
"num_nodes = 10\n",
"num_edges = 25\n",
"node_feature_data = torch.rand((num_nodes, 2))\n",
"edge_feature_data = torch.rand((num_edges, 3))\n",
"node_feature = gb.TorchBasedFeature(node_feature_data)\n",
"edge_feature = gb.TorchBasedFeature(edge_feature_data)\n",
"features = {\n",
" (\"node\", None, \"feat\") : node_feature,\n",
" (\"edge\", None, \"feat\") : edge_feature,\n",
"}\n",
"feature_store = gb.BasicFeatureStore(features)\n",
"\n",
"# Datapipe.\n",
"...\n",
"print(next(iter(datapipe)))"
],
"metadata": {
"id": "YV-mk-xAv78v"
},
"execution_count": null,
"outputs": []
}
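,
{
"cell_type": "markdown",
"source": [
"One possible solution sketch (other pipelines are equally valid): reuse the stages from the link-prediction walkthrough above without the negative sampler; the fanout of 2 below is an arbitrary choice for illustration."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# One possible datapipe for node classification (illustrative sketch only).\n",
"datapipe = gb.ItemSampler(item_set, batch_size=3, shuffle=False)\n",
"# Sample one layer of neighbors; the fanout of 2 is an arbitrary choice.\n",
"datapipe = datapipe.sample_neighbor(graph, [torch.tensor([2])])\n",
"# Fetch node and edge features for the sampled subgraph.\n",
"datapipe = datapipe.fetch_feature(\n",
"    feature_store, node_feature_keys=[\"feat\"], edge_feature_keys=[\"feat\"]\n",
")\n",
"# Convert to DGLMiniBatch and copy to the GPU for training.\n",
"datapipe = datapipe.to_dgl()\n",
"datapipe = datapipe.copy_to(device=\"cuda\")\n",
"print(next(iter(datapipe)))"
],
"metadata": {},
"execution_count": null,
"outputs": []
}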
]
}