{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "FnFhPMaAfLtJ" }, "source": [ "# OnDiskDataset for Heterogeneous Graph\n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dmlc/dgl/blob/master/notebooks/stochastic_training/ondisk_dataset_heterograph.ipynb) [![GitHub](https://img.shields.io/badge/-View%20on%20GitHub-181717?logo=github&logoColor=ffffff)](https://github.com/dmlc/dgl/blob/master/notebooks/stochastic_training/ondisk_dataset_heterograph.ipynb)\n", "\n", "This tutorial shows how to create an `OnDiskDataset` for a heterogeneous graph that can be used in the **GraphBolt** framework. The major difference from creating a dataset for a homogeneous graph is that we need to specify node/edge types for edges, feature data and training/validation/test sets.\n", "\n", "By the end of this tutorial, you will be able to\n", "\n", "- organize graph structure data.\n", "- organize feature data.\n", "- organize training/validation/test sets for specific tasks.\n", "\n", "To create an ``OnDiskDataset`` object, you need to organize all the data, including graph structure, feature data and tasks, into a directory. The directory should contain a ``metadata.yaml`` file that describes the metadata of the dataset.\n", "\n", "Now let's generate the various data step by step and finally organize them together to instantiate `OnDiskDataset`." ] }, { "cell_type": "markdown", "metadata": { "id": "Wlb19DtWgtzq" }, "source": [ "## Install DGL package" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UojlT9ZGgyr9" }, "outputs": [], "source": [ "# Install required packages.\n", "import os\n", "import torch\n", "import numpy as np\n", "os.environ['TORCH'] = torch.__version__\n", "os.environ['DGLBACKEND'] = \"pytorch\"\n", "\n", "# Install the CPU version.\n", "device = torch.device(\"cpu\")\n", "!pip install --pre dgl -f https://data.dgl.ai/wheels-test/repo.html\n", "\n", "try:\n", "    import dgl\n", "    import dgl.graphbolt as gb\n", "    installed = True\n", "except ImportError as error:\n", "    installed = False\n", "    print(error)\n", "print(\"DGL installed!\" if installed else \"DGL not found!\")" ] }, { "cell_type": "markdown", "metadata": { "id": "2R7WnSbjsfbr" }, "source": [ "## Data preparation\n", "In order to demonstrate how to organize various data, let's create a base directory first." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SZipbzyltLfO" }, "outputs": [], "source": [ "base_dir = './ondisk_dataset_heterograph'\n", "os.makedirs(base_dir, exist_ok=True)\n", "print(f\"Created base directory: {base_dir}\")" ] }, { "cell_type": "markdown", "metadata": { "id": "qhNtIn_xhlnl" }, "source": [ "### Generate graph structure data\n", "For a heterogeneous graph, we need to save the edges (namely seeds) of each edge type into separate **Numpy** or **CSV** files.\n", "\n", "Note:\n", "- when saving to **Numpy**, the array is required to be of shape `(2, N)`. This format is recommended, as constructing a graph from it is much faster than from a **CSV** file.\n", "- when saving to a **CSV** file, do not save the index or header.\n"
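, "\n", "For intuition, the recommended **Numpy** layout could be written out as in the following minimal sketch (not executed by this tutorial, which saves **CSV** files below; `edges.npy` is a hypothetical file name, and `num_nodes`/`num_edges` are the sizes defined in the next cell):\n", "\n", "```python\n", "import numpy as np\n", "\n", "# Row 0 holds source node IDs, row 1 holds destination node IDs.\n", "edges = np.random.randint(0, num_nodes, size=(2, num_edges))\n", "np.save(\"edges.npy\", edges)\n", "```"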
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HcBt4G5BmSjr" }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "\n", "# For simplicity, we create a heterogeneous graph with\n", "# 2 node types: `user`, `item`\n", "# 2 edge types: `user:like:item`, `user:follow:user`\n", "# And each node/edge type has the same number of nodes/edges.\n", "num_nodes = 1000\n", "num_edges = 10 * num_nodes\n", "\n", "# Edge type: \"user:like:item\"\n", "like_edges_path = os.path.join(base_dir, \"like-edges.csv\")\n", "like_edges = np.random.randint(0, num_nodes, size=(num_edges, 2))\n", "print(f\"Part of [user:like:item] edges: {like_edges[:5, :]}\\n\")\n", "\n", "df = pd.DataFrame(like_edges)\n", "df.to_csv(like_edges_path, index=False, header=False)\n", "print(f\"[user:like:item] edges are saved into {like_edges_path}\\n\")\n", "\n", "# Edge type: \"user:follow:user\"\n", "follow_edges_path = os.path.join(base_dir, \"follow-edges.csv\")\n", "follow_edges = np.random.randint(0, num_nodes, size=(num_edges, 2))\n", "print(f\"Part of [user:follow:user] edges: {follow_edges[:5, :]}\\n\")\n", "\n", "df = pd.DataFrame(follow_edges)\n", "df.to_csv(follow_edges_path, index=False, header=False)\n", "print(f\"[user:follow:user] edges are saved into {follow_edges_path}\\n\")" ] }, { "cell_type": "markdown", "metadata": { "id": "kh-4cPtzpcaH" }, "source": [ "### Generate feature data for graph\n", "For feature data, numpy arrays and torch tensors are supported for now. Let's generate feature data for each node/edge type." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "_PVu1u5brBhF" }, "outputs": [], "source": [ "# Generate node[user] feature in numpy array.\n", "node_user_feat_0_path = os.path.join(base_dir, \"node-user-feat-0.npy\")\n", "node_user_feat_0 = np.random.rand(num_nodes, 5)\n", "print(f\"Part of node[user] feature [feat_0]: {node_user_feat_0[:3, :]}\")\n", "np.save(node_user_feat_0_path, node_user_feat_0)\n", "print(f\"Node[user] feature [feat_0] is saved to {node_user_feat_0_path}\\n\")\n", "\n", "# Generate another node[user] feature in torch tensor\n", "node_user_feat_1_path = os.path.join(base_dir, \"node-user-feat-1.pt\")\n", "node_user_feat_1 = torch.rand(num_nodes, 5)\n", "print(f\"Part of node[user] feature [feat_1]: {node_user_feat_1[:3, :]}\")\n", "torch.save(node_user_feat_1, node_user_feat_1_path)\n", "print(f\"Node[user] feature [feat_1] is saved to {node_user_feat_1_path}\\n\")\n", "\n", "# Generate node[item] feature in numpy array.\n", "node_item_feat_0_path = os.path.join(base_dir, \"node-item-feat-0.npy\")\n", "node_item_feat_0 = np.random.rand(num_nodes, 5)\n", "print(f\"Part of node[item] feature [feat_0]: {node_item_feat_0[:3, :]}\")\n", "np.save(node_item_feat_0_path, node_item_feat_0)\n", "print(f\"Node[item] feature [feat_0] is saved to {node_item_feat_0_path}\\n\")\n", "\n", "# Generate another node[item] feature in torch tensor\n", "node_item_feat_1_path = os.path.join(base_dir, \"node-item-feat-1.pt\")\n", "node_item_feat_1 = torch.rand(num_nodes, 5)\n", "print(f\"Part of node[item] feature [feat_1]: {node_item_feat_1[:3, :]}\")\n", "torch.save(node_item_feat_1, node_item_feat_1_path)\n", "print(f\"Node[item] feature [feat_1] is saved to {node_item_feat_1_path}\\n\")\n", "\n", "# Generate edge[user:like:item] feature in numpy array.\n", "edge_like_feat_0_path = os.path.join(base_dir, \"edge-like-feat-0.npy\")\n", "edge_like_feat_0 = np.random.rand(num_edges, 5)\n", "print(f\"Part of edge[user:like:item] feature [feat_0]: {edge_like_feat_0[:3, :]}\")\n", "np.save(edge_like_feat_0_path, edge_like_feat_0)\n", "print(f\"Edge[user:like:item] feature [feat_0] is saved to {edge_like_feat_0_path}\\n\")\n", "\n", "# Generate another edge[user:like:item] feature in torch tensor\n", "edge_like_feat_1_path = os.path.join(base_dir, \"edge-like-feat-1.pt\")\n", "edge_like_feat_1 = torch.rand(num_edges, 5)\n", "print(f\"Part of edge[user:like:item] feature [feat_1]: {edge_like_feat_1[:3, :]}\")\n", "torch.save(edge_like_feat_1, edge_like_feat_1_path)\n", "print(f\"Edge[user:like:item] feature [feat_1] is saved to {edge_like_feat_1_path}\\n\")\n", "\n", "# Generate edge[user:follow:user] feature in numpy array.\n", "edge_follow_feat_0_path = os.path.join(base_dir, \"edge-follow-feat-0.npy\")\n", "edge_follow_feat_0 = np.random.rand(num_edges, 5)\n", "print(f\"Part of edge[user:follow:user] feature [feat_0]: {edge_follow_feat_0[:3, :]}\")\n", "np.save(edge_follow_feat_0_path, edge_follow_feat_0)\n", "print(f\"Edge[user:follow:user] feature [feat_0] is saved to {edge_follow_feat_0_path}\\n\")\n", "\n", "# Generate another edge[user:follow:user] feature in torch tensor\n", "edge_follow_feat_1_path = os.path.join(base_dir, \"edge-follow-feat-1.pt\")\n", "edge_follow_feat_1 = torch.rand(num_edges, 5)\n", "print(f\"Part of edge[user:follow:user] feature [feat_1]: {edge_follow_feat_1[:3, :]}\")\n", "torch.save(edge_follow_feat_1, edge_follow_feat_1_path)\n", "print(f\"Edge[user:follow:user] feature [feat_1] is saved to {edge_follow_feat_1_path}\\n\")" ] }, { "cell_type": "markdown", "metadata": { "id": "ZyqgOtsIwzh_" }, "source": [ "### Generate tasks\n", "`OnDiskDataset` supports multiple tasks. For each task, we need to prepare its own training/validation/test sets, as such sets usually vary among tasks. In this tutorial, let's create a **Node Classification** task and a **Link Prediction** task." ] }, { "cell_type": "markdown", "metadata": { "id": "hVxHaDIfzCkr" }, "source": [ "#### Node Classification Task\n", "For the node classification task, we need **node IDs** and corresponding **labels** for each training/validation/test set. Like feature data, numpy arrays and torch tensors are supported for these sets."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "S5-fyBbHzTCO" }, "outputs": [], "source": [ "# For illustration, let's generate item sets for each node type.\n", "num_trains = int(num_nodes * 0.6)\n", "num_vals = int(num_nodes * 0.2)\n", "num_tests = num_nodes - num_trains - num_vals\n", "\n", "user_ids = np.arange(num_nodes)\n", "np.random.shuffle(user_ids)\n", "\n", "item_ids = np.arange(num_nodes)\n", "np.random.shuffle(item_ids)\n", "\n", "# Train IDs for user.\n", "nc_train_user_ids_path = os.path.join(base_dir, \"nc-train-user-ids.npy\")\n", "nc_train_user_ids = user_ids[:num_trains]\n", "print(f\"Part of train ids[user] for node classification: {nc_train_user_ids[:3]}\")\n", "np.save(nc_train_user_ids_path, nc_train_user_ids)\n", "print(f\"NC train ids[user] are saved to {nc_train_user_ids_path}\\n\")\n", "\n", "# Train labels for user.\n", "nc_train_user_labels_path = os.path.join(base_dir, \"nc-train-user-labels.pt\")\n", "nc_train_user_labels = torch.randint(0, 10, (num_trains,))\n", "print(f\"Part of train labels[user] for node classification: {nc_train_user_labels[:3]}\")\n", "torch.save(nc_train_user_labels, nc_train_user_labels_path)\n", "print(f\"NC train labels[user] are saved to {nc_train_user_labels_path}\\n\")\n", "\n", "# Train IDs for item.\n", "nc_train_item_ids_path = os.path.join(base_dir, \"nc-train-item-ids.npy\")\n", "nc_train_item_ids = item_ids[:num_trains]\n", "print(f\"Part of train ids[item] for node classification: {nc_train_item_ids[:3]}\")\n", "np.save(nc_train_item_ids_path, nc_train_item_ids)\n", "print(f\"NC train ids[item] are saved to {nc_train_item_ids_path}\\n\")\n", "\n", "# Train labels for item.\n", "nc_train_item_labels_path = os.path.join(base_dir, \"nc-train-item-labels.pt\")\n", "nc_train_item_labels = torch.randint(0, 10, (num_trains,))\n", "print(f\"Part of train labels[item] for node classification: {nc_train_item_labels[:3]}\")\n", "torch.save(nc_train_item_labels, nc_train_item_labels_path)\n", "print(f\"NC train labels[item] are saved to {nc_train_item_labels_path}\\n\")\n", "\n", "# Val IDs for user.\n", "nc_val_user_ids_path = os.path.join(base_dir, \"nc-val-user-ids.npy\")\n", "nc_val_user_ids = user_ids[num_trains:num_trains+num_vals]\n", "print(f\"Part of val ids[user] for node classification: {nc_val_user_ids[:3]}\")\n", "np.save(nc_val_user_ids_path, nc_val_user_ids)\n", "print(f\"NC val ids[user] are saved to {nc_val_user_ids_path}\\n\")\n", "\n", "# Val labels for user.\n", "nc_val_user_labels_path = os.path.join(base_dir, \"nc-val-user-labels.pt\")\n", "nc_val_user_labels = torch.randint(0, 10, (num_vals,))\n", "print(f\"Part of val labels[user] for node classification: {nc_val_user_labels[:3]}\")\n", "torch.save(nc_val_user_labels, nc_val_user_labels_path)\n", "print(f\"NC val labels[user] are saved to {nc_val_user_labels_path}\\n\")\n", "\n", "# Val IDs for item.\n", "nc_val_item_ids_path = os.path.join(base_dir, \"nc-val-item-ids.npy\")\n", "nc_val_item_ids = item_ids[num_trains:num_trains+num_vals]\n", "print(f\"Part of val ids[item] for node classification: {nc_val_item_ids[:3]}\")\n", "np.save(nc_val_item_ids_path, nc_val_item_ids)\n", "print(f\"NC val ids[item] are saved to {nc_val_item_ids_path}\\n\")\n", "\n", "# Val labels for item.\n", "nc_val_item_labels_path = os.path.join(base_dir, \"nc-val-item-labels.pt\")\n", "nc_val_item_labels = torch.randint(0, 10, (num_vals,))\n", "print(f\"Part of val labels[item] for node classification: {nc_val_item_labels[:3]}\")\n", "torch.save(nc_val_item_labels, nc_val_item_labels_path)\n", "print(f\"NC val labels[item] are saved to {nc_val_item_labels_path}\\n\")\n", "\n", "# Test IDs for user.\n", "nc_test_user_ids_path = os.path.join(base_dir, \"nc-test-user-ids.npy\")\n", "nc_test_user_ids = user_ids[-num_tests:]\n", "print(f\"Part of test ids[user] for node classification: {nc_test_user_ids[:3]}\")\n", "np.save(nc_test_user_ids_path, nc_test_user_ids)\n", "print(f\"NC test ids[user] are saved to {nc_test_user_ids_path}\\n\")\n", "\n", "# Test labels for user.\n", "nc_test_user_labels_path = os.path.join(base_dir, \"nc-test-user-labels.pt\")\n", "nc_test_user_labels = torch.randint(0, 10, (num_tests,))\n", "print(f\"Part of test labels[user] for node classification: {nc_test_user_labels[:3]}\")\n", "torch.save(nc_test_user_labels, nc_test_user_labels_path)\n", "print(f\"NC test labels[user] are saved to {nc_test_user_labels_path}\\n\")\n", "\n", "# Test IDs for item.\n", "nc_test_item_ids_path = os.path.join(base_dir, \"nc-test-item-ids.npy\")\n", "nc_test_item_ids = item_ids[-num_tests:]\n", "print(f\"Part of test ids[item] for node classification: {nc_test_item_ids[:3]}\")\n", "np.save(nc_test_item_ids_path, nc_test_item_ids)\n", "print(f\"NC test ids[item] are saved to {nc_test_item_ids_path}\\n\")\n", "\n", "# Test labels for item.\n", "nc_test_item_labels_path = os.path.join(base_dir, \"nc-test-item-labels.pt\")\n", "nc_test_item_labels = torch.randint(0, 10, (num_tests,))\n", "print(f\"Part of test labels[item] for node classification: {nc_test_item_labels[:3]}\")\n", "torch.save(nc_test_item_labels, nc_test_item_labels_path)\n", "print(f\"NC test labels[item] are saved to {nc_test_item_labels_path}\\n\")" ] }, { "cell_type": "markdown", "metadata": { "id": "LhAcDCHQ_KJ0" }, "source": [ "#### Link Prediction Task\n", "For the link prediction task, each training/validation/test set needs **seeds**, and optionally the corresponding **labels** and **indexes**, which represent the positive/negative property and the group of the seeds. Like feature data, numpy arrays and torch tensors are supported for these sets."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "u0jCnXIcAQy4" }, "outputs": [], "source": [ "# For illustration, let's generate item sets for each edge type.\n", "num_trains = int(num_edges * 0.6)\n", "num_vals = int(num_edges * 0.2)\n", "num_tests = num_edges - num_trains - num_vals\n", "\n", "# Train seeds for user:like:item.\n", "lp_train_like_seeds_path = os.path.join(base_dir, \"lp-train-like-seeds.npy\")\n", "lp_train_like_seeds = like_edges[:num_trains, :]\n", "print(f\"Part of train seeds[user:like:item] for link prediction: {lp_train_like_seeds[:3]}\")\n", "np.save(lp_train_like_seeds_path, lp_train_like_seeds)\n", "print(f\"LP train seeds[user:like:item] are saved to {lp_train_like_seeds_path}\\n\")\n", "\n", "# Train seeds for user:follow:user.\n", "lp_train_follow_seeds_path = os.path.join(base_dir, \"lp-train-follow-seeds.npy\")\n", "lp_train_follow_seeds = follow_edges[:num_trains, :]\n", "print(f\"Part of train seeds[user:follow:user] for link prediction: {lp_train_follow_seeds[:3]}\")\n", "np.save(lp_train_follow_seeds_path, lp_train_follow_seeds)\n", "print(f\"LP train seeds[user:follow:user] are saved to {lp_train_follow_seeds_path}\\n\")\n", "\n", "# Val seeds for user:like:item.\n", "lp_val_like_seeds_path = os.path.join(base_dir, \"lp-val-like-seeds.npy\")\n", "lp_val_like_seeds = like_edges[num_trains:num_trains+num_vals, :]\n", "lp_val_like_neg_dsts = np.random.randint(0, num_nodes, (num_vals, 10)).reshape(-1)\n", "lp_val_like_neg_srcs = np.repeat(lp_val_like_seeds[:,0], 10)\n", "lp_val_like_neg_seeds = np.concatenate((lp_val_like_neg_srcs, lp_val_like_neg_dsts)).reshape(2,-1).T\n", "lp_val_like_seeds = np.concatenate((lp_val_like_seeds, lp_val_like_neg_seeds))\n", "print(f\"Part of val seeds[user:like:item] for link prediction: {lp_val_like_seeds[:3]}\")\n", "np.save(lp_val_like_seeds_path, lp_val_like_seeds)\n", "print(f\"LP val seeds[user:like:item] are saved to {lp_val_like_seeds_path}\\n\")\n", "\n", "# Val labels for user:like:item.\n", "lp_val_like_labels_path = os.path.join(base_dir, \"lp-val-like-labels.npy\")\n", "lp_val_like_labels = np.empty(num_vals * (10 + 1))\n", "lp_val_like_labels[:num_vals] = 1\n", "lp_val_like_labels[num_vals:] = 0\n", "print(f\"Part of val labels[user:like:item] for link prediction: {lp_val_like_labels[:3]}\")\n", "np.save(lp_val_like_labels_path, lp_val_like_labels)\n", "print(f\"LP val labels[user:like:item] are saved to {lp_val_like_labels_path}\\n\")\n", "\n", "# Val indexes for user:like:item.\n", "lp_val_like_indexes_path = os.path.join(base_dir, \"lp-val-like-indexes.npy\")\n", "lp_val_like_indexes = np.arange(0, num_vals)\n", "lp_val_like_neg_indexes = np.repeat(lp_val_like_indexes, 10)\n", "lp_val_like_indexes = np.concatenate([lp_val_like_indexes, lp_val_like_neg_indexes])\n", "print(f\"Part of val indexes[user:like:item] for link prediction: {lp_val_like_indexes[:3]}\")\n", "np.save(lp_val_like_indexes_path, lp_val_like_indexes)\n", "print(f\"LP val indexes[user:like:item] are saved to {lp_val_like_indexes_path}\\n\")\n", "\n", "# Val seeds for user:follow:user.\n", "lp_val_follow_seeds_path = os.path.join(base_dir, \"lp-val-follow-seeds.npy\")\n", "lp_val_follow_seeds = follow_edges[num_trains:num_trains+num_vals, :]\n", "lp_val_follow_neg_dsts = np.random.randint(0, num_nodes, (num_vals, 10)).reshape(-1)\n", "lp_val_follow_neg_srcs = np.repeat(lp_val_follow_seeds[:,0], 10)\n", "lp_val_follow_neg_seeds = np.concatenate((lp_val_follow_neg_srcs, lp_val_follow_neg_dsts)).reshape(2,-1).T\n", "lp_val_follow_seeds = np.concatenate((lp_val_follow_seeds, lp_val_follow_neg_seeds))\n", "print(f\"Part of val seeds[user:follow:user] for link prediction: {lp_val_follow_seeds[:3]}\")\n", "np.save(lp_val_follow_seeds_path, lp_val_follow_seeds)\n", "print(f\"LP val seeds[user:follow:user] are saved to {lp_val_follow_seeds_path}\\n\")\n", "\n", "# Val labels for user:follow:user.\n", "lp_val_follow_labels_path = os.path.join(base_dir, \"lp-val-follow-labels.npy\")\n", "lp_val_follow_labels = np.empty(num_vals * (10 + 1))\n", "lp_val_follow_labels[:num_vals] = 1\n", "lp_val_follow_labels[num_vals:] = 0\n", "print(f\"Part of val labels[user:follow:user] for link prediction: {lp_val_follow_labels[:3]}\")\n", "np.save(lp_val_follow_labels_path, lp_val_follow_labels)\n", "print(f\"LP val labels[user:follow:user] are saved to {lp_val_follow_labels_path}\\n\")\n", "\n", "# Val indexes for user:follow:user.\n", "lp_val_follow_indexes_path = os.path.join(base_dir, \"lp-val-follow-indexes.npy\")\n", "lp_val_follow_indexes = np.arange(0, num_vals)\n", "lp_val_follow_neg_indexes = np.repeat(lp_val_follow_indexes, 10)\n", "lp_val_follow_indexes = np.concatenate([lp_val_follow_indexes, lp_val_follow_neg_indexes])\n", "print(f\"Part of val indexes[user:follow:user] for link prediction: {lp_val_follow_indexes[:3]}\")\n", "np.save(lp_val_follow_indexes_path, lp_val_follow_indexes)\n", "print(f\"LP val indexes[user:follow:user] are saved to {lp_val_follow_indexes_path}\\n\")\n", "\n", "# Test seeds for user:like:item.\n", "lp_test_like_seeds_path = os.path.join(base_dir, \"lp-test-like-seeds.npy\")\n", "lp_test_like_seeds = like_edges[-num_tests:, :]\n", "lp_test_like_neg_dsts = np.random.randint(0, num_nodes, (num_tests, 10)).reshape(-1)\n", "lp_test_like_neg_srcs = np.repeat(lp_test_like_seeds[:,0], 10)\n", "lp_test_like_neg_seeds = np.concatenate((lp_test_like_neg_srcs, lp_test_like_neg_dsts)).reshape(2,-1).T\n", "lp_test_like_seeds = np.concatenate((lp_test_like_seeds, lp_test_like_neg_seeds))\n", "print(f\"Part of test seeds[user:like:item] for link prediction: {lp_test_like_seeds[:3]}\")\n", "np.save(lp_test_like_seeds_path, lp_test_like_seeds)\n", "print(f\"LP test seeds[user:like:item] are saved to {lp_test_like_seeds_path}\\n\")\n", "\n", "# Test labels for user:like:item.\n", "lp_test_like_labels_path = os.path.join(base_dir, \"lp-test-like-labels.npy\")\n", "lp_test_like_labels = np.empty(num_tests * (10 + 1))\n", "lp_test_like_labels[:num_tests] = 1\n", "lp_test_like_labels[num_tests:] = 0\n", "print(f\"Part of test labels[user:like:item] for link prediction: {lp_test_like_labels[:3]}\")\n", "np.save(lp_test_like_labels_path, lp_test_like_labels)\n", "print(f\"LP test labels[user:like:item] are saved to {lp_test_like_labels_path}\\n\")\n", "\n", "# Test indexes for user:like:item.\n", "lp_test_like_indexes_path = os.path.join(base_dir, \"lp-test-like-indexes.npy\")\n", "lp_test_like_indexes = np.arange(0, num_tests)\n", "lp_test_like_neg_indexes = np.repeat(lp_test_like_indexes, 10)\n", "lp_test_like_indexes = np.concatenate([lp_test_like_indexes, lp_test_like_neg_indexes])\n", "print(f\"Part of test indexes[user:like:item] for link prediction: {lp_test_like_indexes[:3]}\")\n", "np.save(lp_test_like_indexes_path, lp_test_like_indexes)\n", "print(f\"LP test indexes[user:like:item] are saved to {lp_test_like_indexes_path}\\n\")\n", "\n", "# Test seeds for user:follow:user.\n", "lp_test_follow_seeds_path = os.path.join(base_dir, \"lp-test-follow-seeds.npy\")\n", "lp_test_follow_seeds = 
follow_edges[-num_tests:, :]\n", "lp_test_follow_neg_dsts = np.random.randint(0, num_nodes, (num_tests, 10)).reshape(-1)\n", "lp_test_follow_neg_srcs = np.repeat(lp_test_follow_seeds[:,0], 10)\n", "lp_test_follow_neg_seeds = np.concatenate((lp_test_follow_neg_srcs, lp_test_follow_neg_dsts)).reshape(2,-1).T\n", "lp_test_follow_seeds = np.concatenate((lp_test_follow_seeds, lp_test_follow_neg_seeds))\n", "print(f\"Part of test seeds[user:follow:user] for link prediction: {lp_test_follow_seeds[:3]}\")\n", "np.save(lp_test_follow_seeds_path, lp_test_follow_seeds)\n", "print(f\"LP test seeds[user:follow:user] are saved to {lp_test_follow_seeds_path}\\n\")\n", "\n", "# Test labels for user:follow:user.\n", "lp_test_follow_labels_path = os.path.join(base_dir, \"lp-test-follow-labels.npy\")\n", "lp_test_follow_labels = np.empty(num_tests * (10 + 1))\n", "lp_test_follow_labels[:num_tests] = 1\n", "lp_test_follow_labels[num_tests:] = 0\n", "print(f\"Part of test labels[user:follow:user] for link prediction: {lp_test_follow_labels[:3]}\")\n", "np.save(lp_test_follow_labels_path, lp_test_follow_labels)\n", "print(f\"LP test labels[user:follow:user] are saved to {lp_test_follow_labels_path}\\n\")\n", "\n", "# Test indexes for user:follow:user.\n", "lp_test_follow_indexes_path = os.path.join(base_dir, \"lp-test-follow-indexes.npy\")\n", "lp_test_follow_indexes = np.arange(0, num_tests)\n", "lp_test_follow_neg_indexes = np.repeat(lp_test_follow_indexes, 10)\n", "lp_test_follow_indexes = np.concatenate([lp_test_follow_indexes, lp_test_follow_neg_indexes])\n", "print(f\"Part of test indexes[user:follow:user] for link prediction: {lp_test_follow_indexes[:3]}\")\n", "np.save(lp_test_follow_indexes_path, lp_test_follow_indexes)\n", "print(f\"LP test indexes[user:follow:user] are saved to {lp_test_follow_indexes_path}\\n\")" ] }, { "cell_type": "markdown", "metadata": { "id": "wbk6-wxRK-6S" }, "source": [ "## Organize Data into YAML File\n", "Now we need to create a `metadata.yaml` file which contains the paths and data types of the graph structure, feature data and training/validation/test sets.\n", "\n", "For a heterogeneous graph, we need to specify the node/edge type in the **type** fields. For an edge type, the canonical etype is required, which is a string that concatenates the source node type, etype and destination node type with `:`.\n", "\n", "Notes:\n", "- all paths should be relative to `metadata.yaml`.\n", "- Below fields are optional and not specified in the example below.\n", "  - `in_memory`: indicates whether to load data into memory or `mmap`. Default is `True`.\n", "\n", "Please refer to [YAML specification](https://github.com/dmlc/dgl/blob/master/docs/source/stochastic_training/ondisk-dataset-specification.rst) for more details."
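, "\n", "For example, the canonical etype of the *like* relation used in this tutorial can be built as follows (a plain illustration of the rule above):\n", "\n", "```python\n", "src_type, etype, dst_type = \"user\", \"like\", \"item\"\n", "canonical_etype = \":\".join([src_type, etype, dst_type])\n", "assert canonical_etype == \"user:like:item\"\n", "```"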
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ddGTWW61Lpwp" }, "outputs": [], "source": [ "yaml_content = f\"\"\"\n", "    dataset_name: heterogeneous_graph_nc_lp\n", "    graph:\n", "      nodes:\n", "        - type: user\n", "          num: {num_nodes}\n", "        - type: item\n", "          num: {num_nodes}\n", "      edges:\n", "        - type: \"user:like:item\"\n", "          format: csv\n", "          path: {os.path.basename(like_edges_path)}\n", "        - type: \"user:follow:user\"\n", "          format: csv\n", "          path: {os.path.basename(follow_edges_path)}\n", "    feature_data:\n", "      - domain: node\n", "        type: user\n", "        name: feat_0\n", "        format: numpy\n", "        path: {os.path.basename(node_user_feat_0_path)}\n", "      - domain: node\n", "        type: user\n", "        name: feat_1\n", "        format: torch\n", "        path: {os.path.basename(node_user_feat_1_path)}\n", "      - domain: node\n", "        type: item\n", "        name: feat_0\n", "        format: numpy\n", "        path: {os.path.basename(node_item_feat_0_path)}\n", "      - domain: node\n", "        type: item\n", "        name: feat_1\n", "        format: torch\n", "        path: {os.path.basename(node_item_feat_1_path)}\n", "      - domain: edge\n", "        type: \"user:like:item\"\n", "        name: feat_0\n", "        format: numpy\n", "        path: {os.path.basename(edge_like_feat_0_path)}\n", "      - domain: edge\n", "        type: \"user:like:item\"\n", "        name: feat_1\n", "        format: torch\n", "        path: {os.path.basename(edge_like_feat_1_path)}\n", "      - domain: edge\n", "        type: \"user:follow:user\"\n", "        name: feat_0\n", "        format: numpy\n", "        path: {os.path.basename(edge_follow_feat_0_path)}\n", "      - domain: edge\n", "        type: \"user:follow:user\"\n", "        name: feat_1\n", "        format: torch\n", "        path: {os.path.basename(edge_follow_feat_1_path)}\n", "    tasks:\n", "      - name: node_classification\n", "        num_classes: 10\n", "        train_set:\n", "          - type: user\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(nc_train_user_ids_path)}\n", "              - name: labels\n", "                format: torch\n", "                path: {os.path.basename(nc_train_user_labels_path)}\n", "          - type: item\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(nc_train_item_ids_path)}\n", "              - name: labels\n", "                format: torch\n", "                path: {os.path.basename(nc_train_item_labels_path)}\n", "        validation_set:\n", "          - type: user\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(nc_val_user_ids_path)}\n", "              - name: labels\n", "                format: torch\n", "                path: {os.path.basename(nc_val_user_labels_path)}\n", "          - type: item\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(nc_val_item_ids_path)}\n", "              - name: labels\n", "                format: torch\n", "                path: {os.path.basename(nc_val_item_labels_path)}\n", "        test_set:\n", "          - type: user\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(nc_test_user_ids_path)}\n", "              - name: labels\n", "                format: torch\n", "                path: {os.path.basename(nc_test_user_labels_path)}\n", "          - type: item\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(nc_test_item_ids_path)}\n", "              - name: labels\n", "                format: torch\n", "                path: {os.path.basename(nc_test_item_labels_path)}\n", "      - name: link_prediction\n", "        num_classes: 10\n", "        train_set:\n", "          - type: \"user:like:item\"\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(lp_train_like_seeds_path)}\n", "          - type: \"user:follow:user\"\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(lp_train_follow_seeds_path)}\n", "        validation_set:\n", "          - type: \"user:like:item\"\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(lp_val_like_seeds_path)}\n", "              - name: labels\n", "                format: numpy\n", "                path: {os.path.basename(lp_val_like_labels_path)}\n", "              - name: indexes\n", "                format: numpy\n", "                path: {os.path.basename(lp_val_like_indexes_path)}\n", "          - type: \"user:follow:user\"\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(lp_val_follow_seeds_path)}\n", "              - name: labels\n", "                format: numpy\n", "                path: {os.path.basename(lp_val_follow_labels_path)}\n", "              - name: indexes\n", "                format: numpy\n", "                path: {os.path.basename(lp_val_follow_indexes_path)}\n", "        test_set:\n", "          - type: \"user:like:item\"\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(lp_test_like_seeds_path)}\n", "              - name: labels\n", "                format: numpy\n", "                path: {os.path.basename(lp_test_like_labels_path)}\n", "              - name: indexes\n", "                format: numpy\n", "                path: {os.path.basename(lp_test_like_indexes_path)}\n", "          - type: \"user:follow:user\"\n", "            data:\n", "              - name: seeds\n", "                format: numpy\n", "                path: {os.path.basename(lp_test_follow_seeds_path)}\n", "              - name: labels\n", "                format: numpy\n", "                path: {os.path.basename(lp_test_follow_labels_path)}\n", "              - name: indexes\n", "                format: numpy\n", "                path: {os.path.basename(lp_test_follow_indexes_path)}\n", "\"\"\"\n", "metadata_path = os.path.join(base_dir, \"metadata.yaml\")\n", "with open(metadata_path, \"w\") as f:\n", "    f.write(yaml_content)" ] }, { "cell_type": "markdown", "metadata": { "id": "kEfybHGhOW7O" }, "source": [ "## Instantiate `OnDiskDataset`\n", "Now we're ready to load the dataset via `dgl.graphbolt.OnDiskDataset`. When instantiating, we just pass in the base directory where the `metadata.yaml` file lies.\n", "\n", "During the first instantiation, GraphBolt preprocesses the raw data, such as constructing a `FusedCSCSamplingGraph` from the edges. All data, including the graph, feature data and training/validation/test sets, are put into the `preprocessed` directory after preprocessing. Any subsequent dataset loading will skip the preprocess stage.\n", "\n", "After preprocessing, `load()` needs to be called explicitly in order to load the graph, feature data and tasks." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "W58CZoSzOiyo" }, "outputs": [], "source": [ "dataset = gb.OnDiskDataset(base_dir).load()\n", "graph = dataset.graph\n", "print(f\"Loaded graph: {graph}\\n\")\n", "\n", "feature = dataset.feature\n", "print(f\"Loaded feature store: {feature}\\n\")\n", "\n", "tasks = dataset.tasks\n", "nc_task = tasks[0]\n", "print(f\"Loaded node classification task: {nc_task}\\n\")\n", "lp_task = tasks[1]\n", "print(f\"Loaded link prediction task: {lp_task}\\n\")" ] } ], "metadata": { "colab": { "private_outputs": true, "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 0 }