OpenDAS / pyg_autoscale

Commit 3eb4a596, authored Jun 09, 2021 by rusty1s

improve doc

parent fa8d6229
Showing 1 changed file with 17 additions and 12 deletions:

torch_geometric_autoscale/models/base.py (+17, -12) @ 3eb4a596
```diff
--- a/torch_geometric_autoscale/models/base.py
+++ b/torch_geometric_autoscale/models/base.py
@@ -15,12 +15,14 @@ class ScalableGNN(torch.nn.Module):
     embeddings.
     This class will take care of initializing :obj:`num_layers - 1` historical
     embeddings, and provides a convenient interface to push recent node
-    embeddings to the history and pulling embeddings from the history.
+    embeddings to the history, and to pull previous embeddings from the
+    history.
     In case historical embeddings are stored on the CPU, they will reside
-    inside pinned memory, which allows for an asynchronous memory transfers of
-    histories.
+    inside pinned memory, which allows for asynchronous memory transfers of
+    historical embeddings.
     For this, this class maintains a :class:`AsyncIOPool` object that
-    implements the underlying mechanisms of asynchronous memory transfers.
+    implements the underlying mechanisms of asynchronous memory transfers as
+    described in our paper.

     Args:
         num_nodes (int): The number of nodes in the graph.
```
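The pinned-memory detail in this hunk is what enables the asynchronous transfers: only page-locked host memory can be copied to the GPU without blocking. Below is a minimal sketch of that mechanism, assuming a CUDA device is available; the buffer names and sizes are illustrative, and this is not the package's `AsyncIOPool` implementation itself:

```python
import torch

# Illustrative sizes; this sketch assumes a CUDA device is available.
num_nodes, hidden_channels = 1000, 64

# Histories live in pinned (page-locked) CPU memory: only pinned memory
# can be copied to the GPU asynchronously.
history = torch.empty(num_nodes, hidden_channels, pin_memory=True)

n_id = torch.tensor([0, 1, 5, 6, 7, 3, 4])

# Gather into a pinned staging buffer first; plain `history[n_id]` would
# return a new, *unpinned* tensor and silently fall back to a blocking copy.
staging = torch.empty(n_id.numel(), hidden_channels, pin_memory=True)
torch.index_select(history, 0, n_id, out=staging)

copy_stream = torch.cuda.Stream()
with torch.cuda.stream(copy_stream):
    # Non-blocking host-to-device copy; it can overlap with computation
    # that is running on the default stream.
    h = staging.to('cuda', non_blocking=True)

# ... compute other layers on the default stream here ...

# Block the default stream until the copy is done before using `h`.
torch.cuda.current_stream().wait_stream(copy_stream)
```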
```diff
@@ -91,17 +93,17 @@ class ScalableGNN(torch.nn.Module):
         loader: EvalSubgraphLoader = None,
         **kwargs,
     ) -> Tensor:
-        r"""Extends the call of forward propagation by immediately start
-        pulling historical embeddings for each layer asynchronously.
-        After forward propogation, pushing node embeddings to histories will be
-        synchronized.
+        r"""Enhances the call of forward propagation by immediately start
+        pulling historical embeddings for all layers asynchronously.
+        After forward propogation is completed, the push of node embeddings to
+        the histories will be synchronized.

         For example, given a mini-batch with node indices
         :obj:`n_id = [0, 1, 5, 6, 7, 3, 4]`, where the first 5 nodes
         represent the mini-batched nodes, and nodes :obj:`3` and :obj:`4`
         denote out-of-mini-batched nodes (i.e. the 1-hop neighbors of the
         mini-batch that are not included in the current mini-batch), then
-        other input arguments are given as:
+        other input arguments should be given as:

         .. code-block:: python
```
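To make the pull/push scheme behind this docstring example concrete: within a layer, the mini-batched nodes are computed as usual, the embeddings of the out-of-batch neighbors (nodes 3 and 4) are pulled from that layer's history instead of being recomputed, and the fresh in-batch embeddings are pushed back afterwards. A hedged sketch with a plain tensor standing in for a history and a dummy layer; the names are illustrative, not the package's API:

```python
import torch

# Example from the docstring: the first `batch_size` entries of `n_id` are
# the mini-batched nodes; nodes 3 and 4 are out-of-batch 1-hop neighbors.
n_id = torch.tensor([0, 1, 5, 6, 7, 3, 4])
batch_size = 5

num_nodes, hidden_channels = 10, 64          # illustrative sizes
history = torch.zeros(num_nodes, hidden_channels)  # one layer's history

def layer_step(x: torch.Tensor) -> torch.Tensor:
    return x + 1  # stand-in for a real message-passing layer

# Embeddings of the mini-batched nodes come from the previous layer's
# computation; for the out-of-batch neighbors we *pull* from the history
# instead of recomputing them.
x_batch = torch.randn(batch_size, hidden_channels)
x = torch.cat([x_batch, history[n_id[batch_size:]]], dim=0)

x = layer_step(x)

# *Push* the fresh embeddings of the mini-batched nodes into the history,
# so that later mini-batches can pull them.
history[n_id[:batch_size]] = x[:batch_size].detach()
```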
```diff
@@ -196,8 +198,11 @@ class ScalableGNN(torch.nn.Module):
     @torch.no_grad()
     def mini_inference(self, loader: SubgraphLoader) -> Tensor:
-        r"""An implementation of a layer-wise evaluation of GNNs.
-        For each layer, :meth:`forward_layer` will be called."""
+        r"""An implementation of layer-wise evaluation of GNNs.
+        For each individual layer and mini-batch, :meth:`forward_layer` takes
+        care of computing the next state of node embeddings.
+        Additional state (such as residual connections) can be stored in
+        a `state` directory."""
         # We iterate over the loader in a layer-wise fashsion.
         # In order to re-use some intermediate representations, we maintain a
```
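The layer-wise evaluation described by the new docstring processes the whole graph one layer at a time: every mini-batch passes through layer 1 and the results are stored, then the stored embeddings feed layer 2, and so on, so only one layer of full-graph embeddings is materialized at once. A hedged sketch of that loop, assuming a loader that yields `(n_id, batch_size)` pairs and equal hidden sizes across layers (both simplifications; this is not `mini_inference` itself):

```python
import torch
from typing import Callable, List, Tuple

def layer_wise_inference(
    layers: List[Callable[[torch.Tensor], torch.Tensor]],
    batches: List[Tuple[torch.Tensor, int]],
    x: torch.Tensor,
) -> torch.Tensor:
    # All mini-batches pass through a layer before any of them reaches the
    # next one, so one full-graph embedding matrix suffices per layer.
    for layer in layers:
        out = torch.empty_like(x)  # assumes equal hidden sizes across layers
        for n_id, batch_size in batches:
            h = layer(x[n_id])
            # Keep only the mini-batched nodes' outputs; the trailing
            # entries of `n_id` are neighbors owned by other batches.
            out[n_id[:batch_size]] = h[:batch_size]
        x = out  # this layer's output is the next layer's input
    return x

# Example usage with two dummy layers and two batches covering all nodes:
x = torch.randn(10, 64)
batches = [(torch.tensor([0, 1, 2, 3, 4, 7]), 5),
           (torch.tensor([5, 6, 7, 8, 9, 2]), 5)]
out = layer_wise_inference([lambda h: h + 1, lambda h: h * 2], batches, x)
```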