Unverified Commit 5eff9544 authored by Hongzhi (Steve), Chen, committed by GitHub

Change hat to bar. (#5640)


Co-authored-by: Ubuntu <ubuntu@ip-172-31-28-63.ap-northeast-1.compute.internal>
parent d815a001
@@ -63,16 +63,16 @@
"\n",
"Mathematically, the graph convolutional layer is defined as:\n",
"\n",
"$$f(X^{(l)}, A) = \\sigma(\\hat{D}^{-\\frac{1}{2}}\\hat{A}\\hat{D}^{-\\frac{1}{2}}X^{(l)}W^{(l)})$$\n",
"$$f(X^{(l)}, A) = \\sigma(\\bar{D}^{-\\frac{1}{2}}\\bar{A}\\bar{D}^{-\\frac{1}{2}}X^{(l)}W^{(l)})$$\n",
"\n",
"with $\\hat{A} = A + I$, where $A$ denotes the adjacency matrix and $I$ denotes the identity matrix, $\\hat{D}$ refers to the diagonal node degree matrix of $\\hat{A}$ and $W^{(l)}$ denotes a trainable weight matrix. $\\sigma$ refers to a non-linear activation (e.g. relu).\n",
"with $\\bar{A} = A + I$, where $A$ denotes the adjacency matrix and $I$ denotes the identity matrix, $\\bar{D}$ refers to the diagonal node degree matrix of $\\bar{A}$ and $W^{(l)}$ denotes a trainable weight matrix. $\\sigma$ refers to a non-linear activation (e.g. relu).\n",
"\n",
"The code below shows how to implement it using the `dgl.sparse` package. The core operations are:\n",
"\n",
"* `dgl.sparse.identity` creates the identity matrix $I$.\n",
"* The augmented adjacency matrix $\\hat{A}$ is then computed by adding the identity matrix to the adjacency matrix $A$.\n",
"* `A_hat.sum(0)` aggregates the augmented adjacency matrix $\\hat{A}$ along the first dimension which gives the degree vector of the augmented graph. The diagonal degree matrix $\\hat{D}$ is then created by `dgl.sparse.diag`.\n",
"* Compute $\\hat{D}^{-\\frac{1}{2}}$.\n",
"* The augmented adjacency matrix $\\bar{A}$ is then computed by adding the identity matrix to the adjacency matrix $A$.\n",
"* `A_hat.sum(0)` aggregates the augmented adjacency matrix $\\bar{A}$ along the first dimension which gives the degree vector of the augmented graph. The diagonal degree matrix $\\bar{D}$ is then created by `dgl.sparse.diag`.\n",
"* Compute $\\bar{D}^{-\\frac{1}{2}}$.\n",
"* `D_hat_invsqrt @ A_hat @ D_hat_invsqrt` computes the convolution matrix which is then multiplied by the linearly transformed node features."
],
"metadata": {
......
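For context, here is a minimal sketch of the layer described in the hunk above, using the `dgl.sparse` operations the bullets name (`dgl.sparse.identity`, `dgl.sparse.diag`, `.sum(0)`, and `@`). The `GCNLayer` module, the alias `dglsp`, and the activation/feature sizes are illustrative assumptions, not part of this commit:

```python
import torch
import torch.nn as nn
import dgl.sparse as dglsp

class GCNLayer(nn.Module):
    """One graph convolutional layer: sigma(D^{-1/2} (A + I) D^{-1/2} X W)."""

    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.W = nn.Linear(in_feats, out_feats)

    def forward(self, A, X):
        # Augmented adjacency matrix with self-loops.
        I = dglsp.identity(A.shape)
        A_hat = A + I
        # Diagonal degree matrix of the augmented graph, then its inverse square root.
        D_hat = dglsp.diag(A_hat.sum(0))
        D_hat_invsqrt = D_hat ** -0.5
        # Convolution matrix applied to the linearly transformed node features.
        return torch.relu(D_hat_invsqrt @ A_hat @ D_hat_invsqrt @ self.W(X))
```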
@@ -125,9 +125,9 @@
"source": [
"We use the graph convolution matrix from Graph Convolution Networks as the diffusion matrix in this example. The graph convolution matrix is defined as:\n",
"\n",
"$$\\tilde{A} = \\hat{D}^{-\\frac{1}{2}}\\hat{A}\\hat{D}^{-\\frac{1}{2}}$$\n",
"$$\\tilde{A} = \\bar{D}^{-\\frac{1}{2}}\\bar{A}\\bar{D}^{-\\frac{1}{2}}$$\n",
"\n",
"with $\\hat{A} = A + I$, where $A$ denotes the adjacency matrix and $I$ denotes the identity matrix, $\\hat{D}$ refers to the diagonal node degree matrix of $\\hat{A}$."
"with $\\bar{A} = A + I$, where $A$ denotes the adjacency matrix and $I$ denotes the identity matrix, $\\bar{D}$ refers to the diagonal node degree matrix of $\\bar{A}$."
],
"metadata": {
"id": "wJMT4oHOCCqJ"
......
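A minimal sketch of the graph convolution (diffusion) matrix defined in this hunk, under the same `dgl.sparse` naming assumptions as the layer sketch above:

```python
import dgl.sparse as dglsp

def gcn_norm(A):
    """Compute A_tilde = D_hat^{-1/2} (A + I) D_hat^{-1/2} for a square sparse A."""
    I = dglsp.identity(A.shape)
    A_hat = A + I
    D_hat_invsqrt = dglsp.diag(A_hat.sum(0)) ** -0.5
    return D_hat_invsqrt @ A_hat @ D_hat_invsqrt
```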
@@ -1219,9 +1219,9 @@
"source": [
"*Let's test what you've learned. Feel free to [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dmlc/dgl/blob/master/notebooks/sparse/quickstart.ipynb).*\n",
"\n",
"Given a sparse symmetrical adjacency matrix $A$, calculate its symmetrically normalized adjacency matrix: $$norm = \\hat{D}^{-\\frac{1}{2}}\\hat{A}\\hat{D}^{-\\frac{1}{2}}$$\n",
"Given a sparse symmetrical adjacency matrix $A$, calculate its symmetrically normalized adjacency matrix: $$norm = \\bar{D}^{-\\frac{1}{2}}\\bar{A}\\bar{D}^{-\\frac{1}{2}}$$\n",
"\n",
"Where $\\hat{A} = A + I$, $I$ is the identity matrix, and $\\hat{D}$ is the diagonal node degree matrix of $\\hat{A}$."
"Where $\\bar{A} = A + I$, $I$ is the identity matrix, and $\\bar{D}$ is the diagonal node degree matrix of $\\bar{A}$."
],
"metadata": {
"id": "yDQ4Kmr_08St"
......
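One way to sanity-check a solution to the exercise in the last hunk is to build a small symmetric adjacency matrix, apply the normalization, and confirm the result stays symmetric. The constructor `dgl.sparse.spmatrix` and the toy edge list below are assumptions for illustration only:

```python
import torch
import dgl.sparse as dglsp

# Symmetric 3-node adjacency matrix with edges 0-1 and 1-2 (both directions).
indices = torch.tensor([[0, 1, 1, 2],
                        [1, 0, 2, 1]])
A = dglsp.spmatrix(indices, shape=(3, 3))

A_bar = A + dglsp.identity(A.shape)
D_bar_invsqrt = dglsp.diag(A_bar.sum(0)) ** -0.5
norm = D_bar_invsqrt @ A_bar @ D_bar_invsqrt

# A symmetric input should yield a symmetric normalized matrix.
dense = norm.to_dense()
assert torch.allclose(dense, dense.T)
```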