Unverified Commit 9699b931 authored by peizhou001, committed by GitHub

[API Deprecation] Change DGLHeteroGraph to DGLGraph in doc (#4840)



* Update from master (#4584)

* [Example][Refactor] Refactor graphsage multigpu and full-graph example (#4430)

* Add refactors for multi-gpu and full-graph example

* Fix format

* Update

* Update

* Update

* [Cleanup] Remove async_transferer (#4505)

* Remove async_transferer

* remove test

* Remove AsyncTransferer
Co-authored-by: Xin Yao <xiny@nvidia.com>
Co-authored-by: Xin Yao <yaox12@outlook.com>

* [Cleanup] Remove duplicate entries of CUB submodule (issue #4395) (#4499)

* remove third_party/cub

* remove from third_party
Co-authored-by: Israt Nisa <nisisrat@amazon.com>
Co-authored-by: Xin Yao <xiny@nvidia.com>

* [Bug] Enable turning libxsmm on/off at runtime (#4455)

* enable turning libxsmm on/off at runtime by adding a global config and related API
Co-authored-by: Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal>

* [Feature] Unify the cuda stream used in core library (#4480)

* Use an internal cuda stream for CopyDataFromTo

* small fix white space

* Fix to compile

* Make stream optional in copydata for compile

* fix lint issue

* Update cub functions to use internal stream

* Lint check

* Update CopyTo/CopyFrom/CopyFromTo to use internal stream

* Address comments

* Fix backward CUDA stream

* Avoid overloading CopyFromTo()

* Minor comment update

* Overload copydatafromto in cuda device api
Co-authored-by: xiny <xiny@nvidia.com>

* [Feature] Added exclude_self and output_batch to knn graph construction (Issues #4323 #4316) (#4389)

* * Added "exclude_self" and "output_batch" options to knn_graph and segmented_knn_graph
* Updated out-of-date comments on remove_edges and remove_self_loop, since they now preserve batch information

* * Changed defaults on new knn_graph and segmented_knn_graph function parameters, for compatibility; pytorch/test_geometry.py was failing

* * Added test to ensure dgl.remove_self_loop function correctly updates batch information

* * Added new knn_graph and segmented_knn_graph parameters to dgl.nn.KNNGraph and dgl.nn.SegmentedKNNGraph

* * Formatting

* * Oops, I missed the one in segmented_knn_graph when I fixed the similar thing in knn_graph

* * Fixed edge case handling when invalid k specified, since it still needs to be handled consistently for tests to pass
* Fixed context of batch info, since it must match the context of the input position data for remove_self_loop to succeed

* * Fixed batch info resulting from knn_graph when output_batch is true, for case of 3D input tensor, representing multiple segments

* * Added testing of new exclude_self and output_batch parameters on knn_graph and segmented_knn_graph, and their wrappers, KNNGraph and SegmentedKNNGraph, into the test_knn_cuda test

* * Added doc comments for new parameters

* * Added correct handling for uncommon case of k or more coincident points when excluding self edges in knn_graph and segmented_knn_graph
* Added test cases for more than k coincident points

* * Updated doc comments for output_batch parameters for clarity

* * Linter formatting fixes

* * Extracted out common function for test_knn_cpu and test_knn_cuda, to add the new test cases to test_knn_cpu

* * Rewording in doc comments

* * Removed output_batch parameter from knn_graph and segmented_knn_graph, in favour of always setting the batch information, except in knn_graph if x is a 2D tensor
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>

* [CI] only known devs are authorized to trigger CI (#4518)

* [CI] only known devs are authorized to trigger CI

* fix if author is null

* add comments

* [Readability] Auto fix setup.py and update-version.py (#4446)

* Auto fix update-version

* Auto fix setup.py

* Auto fix update-version

* Auto fix setup.py

* [Doc] Change random.py to random_partition.py in guide on distributed partition pipeline (#4438)

* Update distributed-preprocessing.rst

* Update
Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>

* fix unpinning when tensoradaptor is not available (#4450)

* [Doc] fix print issue in tutorial (#4459)

* [Example][Refactor] Refactor RGCN example (#4327)

* Refactor full graph entity classification

* Refactor rgcn with sampling

* README update

* Update

* Results update

* Respect default setting of self_loop=false in entity.py

* Update

* Update README

* Update for multi-gpu

* Update

* [doc] fix invalid link in user guide (#4468)

* [Example] directional_GSN for ogbg-molpcba (#4405)

* version-1

* version-2

* version-3

* update examples/README

* Update .gitignore

* update performance in README, delete scripts

* 1st approving review

* 2nd approving review
Co-authored-by: Mufei Li <mufeili1996@gmail.com>

* Clarify the message name, which is 'm'. (#4462)
Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>

* [Refactor] Auto fix view.py. (#4461)
Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>

* [Example] SEAL for OGBL (#4291)

* [Example] SEAL for OGBL

* update index

* update

* fix readme typo

* add seal sampler

* modify set ops

* prefetch

* efficiency test

* update

* optimize

* fix ScatterAdd dtype issue

* update sampler style

* update
Co-authored-by: Quan Gan <coin2028@hotmail.com>

* [CI] use https instead of http (#4488)

* [BugFix] fix crash due to incorrect dtype in dgl.to_block() (#4487)

* [BugFix] fix crash due to incorrect dtype in dgl.to_block()

* fix test failure in TF

* [Feature] Make TensorAdapter Stream Aware (#4472)

* Allocate tensors in DGL's current stream

* make tensoradaptor stream-aware

* replace TAempty with CPU allocator

* fix typo

* try fix cpu allocation

* clean header

* redirect AllocDataSpace as well

* resolve comments

* [Build][Doc] Specify the sphinx version (#4465)
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>

* reformat

* reformat

* Auto fix update-version

* Auto fix setup.py

* reformat

* reformat
Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
Co-authored-by: Mufei Li <mufeili1996@gmail.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>
Co-authored-by: Xin Yao <xiny@nvidia.com>
Co-authored-by: Chang Liu <chang.liu@utexas.edu>
Co-authored-by: Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com>
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
Co-authored-by: rudongyu <ru_dongyu@outlook.com>
Co-authored-by: Quan Gan <coin2028@hotmail.com>

* Move mock version of dgl_sparse library to DGL main repo (#4524)

* init

* Add api doc for sparse library

* support ops between matrices with different sparsity

* Fixed docstring

* addresses comments

* lint check

* change keyword format to fmt
Co-authored-by: Israt Nisa <nisisrat@amazon.com>

* [DistPart] expose timeout config for process group (#4532)

* [DistPart] expose timeout config for process group

* refine code

* Update tools/distpartitioning/data_proc_pipeline.py
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>

* [Feature] Import PyTorch's CUDA stream management (#4503)

* add set_stream

* add .record_stream for NDArray and HeteroGraph

* refactor dgl stream Python APIs

* test record_stream

* add unit test for record stream

* use pytorch's stream

* fix lint

* fix cpu build

* address comments

* address comments

* add record stream tests for dgl.graph

* record frames and update dataloder

* add docstring

* update frame

* add backend check for record_stream

* remove CUDAThreadEntry::stream

* record stream for newly created formats

* fix bug

* fix cpp test

* fix None c_void_p to c_handle

* [examples] Reduce memory consumption (#4558)

* [examples] Reduce memory consumption

* refine help message

* refine

* [Feature][REVIEW] Enable DGL cugraph nightly CI (#4525)

* Added cugraph nightly scripts

* Removed nvcr.io/nvidia/pytorch:22.04-py3 reference
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>

* Revert "[Feature][REVIEW] Enable DGL cugraph nightly CI (#4525)" (#4563)

This reverts commit ec171c64.

* [Misc] Add flake8 lint workflow. (#4566)

* Add pyproject.toml for autopep8.

* Add pyproject.toml for autopep8.

* Add flake8 annotation in workflow.

* remove

* add

* clean up
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* [Misc] Try use official pylint workflow. (#4568)

* polish update_version

* update pylint workflow.

* add

* revert.
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* [CI] refine stage logic (#4565)

* [CI] refine stage logic

* refine

* refine

* remove (#4570)
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* Add Pylint workflow for flake8. (#4571)

* remove

* Add pylint.
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* [Misc] Update the python version in Pylint workflow for flake8. (#4572)

* remove

* Add pylint.

* Change the python version for pylint.
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* Update pylint. (#4574)
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* [Misc] Use another workflow. (#4575)

* Update pylint.

* Use another workflow.
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* Update pylint. (#4576)
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* Update pylint.yml

* Update pylint.yml

* Delete pylint.yml

* [Misc] Add pyproject.toml for autopep8 & black. (#4543)

* Add pyproject.toml for autopep8.

* Add pyproject.toml for autopep8.
Co-authored-by: Steve <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>

* [Feature] Bump DLPack to v0.7 and decouple DLPack from the core library (#4454)

* rename `DLContext` to `DGLContext`

* rename `kDLGPU` to `kDLCUDA`

* replace DLTensor with DGLArray

* fix linting

* Unify DGLType and DLDataType to DGLDataType

* Fix FFI

* rename DLDeviceType to DGLDeviceType

* decouple dlpack from the core library

* fix bug

* fix lint

* fix merge

* fix build

* address comments

* rename dl_converter to dlpack_convert

* remove redundant comments
Co-authored-by: Chang Liu <chang.liu@utexas.edu>
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
Co-authored-by: Xin Yao <xiny@nvidia.com>
Co-authored-by: Xin Yao <yaox12@outlook.com>
Co-authored-by: Israt Nisa <neesha295@gmail.com>
Co-authored-by: Israt Nisa <nisisrat@amazon.com>
Co-authored-by: peizhou001 <110809584+peizhou001@users.noreply.github.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal>
Co-authored-by: ndickson-nvidia <99772994+ndickson-nvidia@users.noreply.github.com>
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>
Co-authored-by: Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com>
Co-authored-by: rudongyu <ru_dongyu@outlook.com>
Co-authored-by: Quan Gan <coin2028@hotmail.com>
Co-authored-by: Vibhu Jawa <vibhujawa@gmail.com>

* [Deprecation] Dataset Attributes (#4546)

* Update

* CI

* CI

* Update
Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>

* [Example] Bug Fix (#4665)

* Update

* CI

* CI

* Update

* Update
Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>

* Update

* Update (#4724)
Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>

* change DGLHeteroGraph to DGLGraph in DOC

* revert c change
Co-authored-by: Mufei Li <mufeili1996@gmail.com>
Co-authored-by: Chang Liu <chang.liu@utexas.edu>
Co-authored-by: nv-dlasalle <63612878+nv-dlasalle@users.noreply.github.com>
Co-authored-by: Xin Yao <xiny@nvidia.com>
Co-authored-by: Xin Yao <yaox12@outlook.com>
Co-authored-by: Israt Nisa <neesha295@gmail.com>
Co-authored-by: Israt Nisa <nisisrat@amazon.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-19-194.ap-northeast-1.compute.internal>
Co-authored-by: ndickson-nvidia <99772994+ndickson-nvidia@users.noreply.github.com>
Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>
Co-authored-by: Rhett Ying <85214957+Rhett-Ying@users.noreply.github.com>
Co-authored-by: Hongzhi (Steve), Chen <chenhongzhi.nkcs@gmail.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-34-29.ap-northeast-1.compute.internal>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-9-26.ap-northeast-1.compute.internal>
Co-authored-by: Zhiteng Li <55398076+ZHITENGLI@users.noreply.github.com>
Co-authored-by: rudongyu <ru_dongyu@outlook.com>
Co-authored-by: Quan Gan <coin2028@hotmail.com>
Co-authored-by: Vibhu Jawa <vibhujawa@gmail.com>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-16-19.ap-northeast-1.compute.internal>
parent 9fe216df
@@ -115,7 +115,7 @@ input features.
The input to the latter part is usually the output from the
former part, as well as the subgraph of the original graph induced by the
edges in the minibatch. The subgraph is yielded from the same data
loader. One can call :meth:`dgl.DGLGraph.apply_edges` to compute the
scores on the edges with the edge subgraph.

The following code shows an example of predicting scores on the edges by
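The ``apply_edges`` pattern described in this hunk evaluates a score function over each edge's endpoint features. Here is a minimal, library-free sketch of the dot-product scoring it enables; all names (``dot_score``, ``score_edges``, the toy data) are illustrative, not DGL API:

```python
def dot_score(h_u, h_v):
    """Dot product of two equal-length feature vectors."""
    return sum(a * b for a, b in zip(h_u, h_v))

def score_edges(embeddings, edges):
    """Score every (u, v) edge from its endpoint embeddings, the way
    apply_edges evaluates a function over each edge of the subgraph."""
    return [dot_score(embeddings[u], embeddings[v]) for u, v in edges]

# Toy example: three nodes with 2-d embeddings, two edges.
embeddings = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
edges = [(0, 2), (1, 2)]
print(score_edges(embeddings, edges))  # [1.0, 1.0]
```

In DGL the same computation runs vectorized over the edge subgraph's tensors; the sketch only shows the per-edge logic.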
@@ -212,7 +212,7 @@ classification/regression.
For score prediction, the only implementation difference between the
homogeneous graph and the heterogeneous graph is that we are looping
over the edge types for :meth:`~dgl.DGLGraph.apply_edges`.

.. code:: python
@@ -194,7 +194,7 @@ classification/regression.
For score prediction, the only implementation difference between the
homogeneous graph and the heterogeneous graph is that we are looping
over the edge types for :meth:`dgl.DGLGraph.apply_edges`.

.. code:: python
@@ -72,21 +72,21 @@ MFGs.
- Obtain the features for output nodes from the input features by
  slicing the first few rows. The number of rows can be obtained by
  :meth:`block.number_of_dst_nodes <dgl.DGLGraph.number_of_dst_nodes>`.
- Replace
  :attr:`g.ndata <dgl.DGLGraph.ndata>` with either
  :attr:`block.srcdata <dgl.DGLGraph.srcdata>` for features on input nodes or
  :attr:`block.dstdata <dgl.DGLGraph.dstdata>` for features on output nodes, if
  the original graph has only one node type.
- Replace
  :attr:`g.nodes <dgl.DGLGraph.nodes>` with either
  :attr:`block.srcnodes <dgl.DGLGraph.srcnodes>` for features on input nodes or
  :attr:`block.dstnodes <dgl.DGLGraph.dstnodes>` for features on output nodes,
  if the original graph has multiple node types.
- Replace
  :meth:`g.number_of_nodes <dgl.DGLGraph.number_of_nodes>` with either
  :meth:`block.number_of_src_nodes <dgl.DGLGraph.number_of_src_nodes>` or
  :meth:`block.number_of_dst_nodes <dgl.DGLGraph.number_of_dst_nodes>` for the number of
  input nodes or output nodes respectively.

Heterogeneous graphs
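The slicing rule in the first bullet of the list above can be shown with plain lists. This is a sketch under one stated assumption (which holds for DGL blocks): an MFG orders its output (destination) nodes first among its input (source) nodes, so the output-node features are simply the first rows of the input features. ``dst_features`` and the toy data are illustrative, not DGL code:

```python
def dst_features(src_feats, num_dst_nodes):
    """Output-node features are the first rows of the input features,
    because an MFG lists its destination nodes first among the sources."""
    return src_feats[:num_dst_nodes]

h_src = [[0.1], [0.2], [0.3], [0.4]]  # features of 4 input (source) nodes
num_dst = 2                           # stand-in for block.number_of_dst_nodes()
print(dst_features(h_src, num_dst))   # [[0.1], [0.2]]
```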
@@ -134,7 +134,7 @@ Edge classification on heterogeneous graphs is not very different from
that on homogeneous graphs. If you wish to perform edge classification
on one edge type, you only need to compute the node representation for
all node types, and predict on that edge type with the
:meth:`~dgl.DGLGraph.apply_edges` method.

For example, to make ``DotProductPredictor`` work on one edge type of a
heterogeneous graph, you only need to specify the edge type in
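On a heterogeneous graph the scoring described above is simply repeated per edge type. A library-free sketch of that loop (the dict-of-edge-lists representation and all names are illustrative assumptions, not DGL API):

```python
def dot_score(h_u, h_v):
    """Dot product of two equal-length feature vectors."""
    return sum(a * b for a, b in zip(h_u, h_v))

def score_hetero_edges(embeddings, edges_by_etype):
    """Loop over edge types and score each type's edges separately,
    mirroring the per-edge-type apply_edges calls described above."""
    return {
        etype: [dot_score(embeddings[u], embeddings[v]) for u, v in edges]
        for etype, edges in edges_by_etype.items()
    }

embeddings = {"u0": [1.0, 2.0], "i0": [0.5, 0.5]}
edges_by_etype = {"clicks": [("u0", "i0")], "buys": []}
print(score_hetero_edges(embeddings, edges_by_etype))
# {'clicks': [1.5], 'buys': []}
```

In DGL the loop body would instead be ``graph.apply_edges(score_fn, etype=etype)`` on the edge subgraph.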
@@ -141,20 +141,20 @@ DGL provides :func:`dgl.to_block` to convert any frontier into a block, where the first
    block = dgl.to_block(frontier, output_nodes)

To find the numbers of input nodes and output nodes of a given node type, one can
use the :meth:`dgl.DGLGraph.number_of_src_nodes` and
:meth:`dgl.DGLGraph.number_of_dst_nodes` methods.

.. code:: python

    num_input_nodes, num_output_nodes = block.number_of_src_nodes(), block.number_of_dst_nodes()
    print(num_input_nodes, num_output_nodes)

The input node features of the block can be accessed via :attr:`dgl.DGLGraph.srcdata`
and :attr:`dgl.DGLGraph.srcnodes`, and its output node features via
:attr:`dgl.DGLGraph.dstdata` and :attr:`dgl.DGLGraph.dstnodes`.
The syntax of ``srcdata``/``dstdata`` and ``srcnodes``/``dstnodes`` is the same as
:attr:`dgl.DGLGraph.ndata` and :attr:`dgl.DGLGraph.nodes` on regular graphs.

.. code:: python
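The counts reported above can be reproduced with a small stand-alone sketch: a block's source (input) nodes are the output nodes plus every in-neighbor that appears in the frontier. ``to_block_counts`` and the toy edges are illustrative, not the actual ``dgl.to_block`` implementation:

```python
def to_block_counts(frontier_edges, output_nodes):
    """Sketch of the counts dgl.to_block exposes: the block's source nodes
    are the output nodes plus every in-neighbor found in the frontier."""
    dst = list(dict.fromkeys(output_nodes))  # dedupe, keep order
    src = list(dst)
    for u, v in frontier_edges:
        if v in dst and u not in src:
            src.append(u)
    return len(src), len(dst)

# A frontier with two in-edges of node 8 and one unrelated edge.
num_input_nodes, num_output_nodes = to_block_counts([(1, 8), (2, 8), (8, 3)], [8])
print(num_input_nodes, num_output_nodes)  # 3 1
```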
@@ -94,7 +94,7 @@
        return x

The input to the second part is usually the output of the first part, together with
the subgraph of the original graph induced by the edges in the minibatch. The subgraph
is produced by the same data loader. One can call :meth:`dgl.DGLGraph.apply_edges` to
compute scores on the edges of the edge subgraph.

The following code snippet predicts edge scores by concatenating the features of an
edge's two endpoint nodes and mapping them through a fully connected layer.
@@ -180,7 +180,7 @@ DGL guarantees that the nodes in the edge subgraph are the same as the output nodes of the last block in the generated list of blocks.
        return x

When doing score prediction on homogeneous and heterogeneous graphs, the only
implementation difference is that calls to
:meth:`~dgl.DGLGraph.apply_edges`
need to iterate over the edges of each specific type.

.. code:: python
@@ -167,7 +167,7 @@ DGL provides an example of link prediction on homogeneous graphs:
        return x

For score prediction, the only implementation difference between homogeneous and
heterogeneous graphs is that the latter needs to use
:meth:`dgl.DGLGraph.apply_edges`
to iterate over all the edge types.

.. code:: python
@@ -56,20 +56,20 @@
In general, a GNN module written for full graphs needs the following adjustments to
take blocks as input:

- Slice the first few rows of the input features to obtain the features of the output
  nodes. The number of rows can be obtained via
  :meth:`block.number_of_dst_nodes <dgl.DGLGraph.number_of_dst_nodes>`.
- If the original graph has only one node type, replace
  :attr:`g.ndata <dgl.DGLGraph.ndata>` with
  :attr:`block.srcdata <dgl.DGLGraph.srcdata>` for input node features, and with
  :attr:`block.dstdata <dgl.DGLGraph.dstdata>` for output node features.
- If the original graph has multiple node types, replace
  :attr:`g.nodes <dgl.DGLGraph.nodes>` with
  :attr:`block.srcnodes <dgl.DGLGraph.srcnodes>` for input node features, and with
  :attr:`block.dstnodes <dgl.DGLGraph.dstnodes>` for output node features.
- For the number of input nodes, replace
  :meth:`g.number_of_nodes <dgl.DGLGraph.number_of_nodes>` with
  :meth:`block.number_of_src_nodes <dgl.DGLGraph.number_of_src_nodes>`;
  for the number of output nodes, replace it with
  :meth:`block.number_of_dst_nodes <dgl.DGLGraph.number_of_dst_nodes>`.

Customizing models on heterogeneous graphs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -115,7 +115,7 @@
~~~~~~~~~~~~~~~~~~~~~~~~~

For example, to perform a classification task on edges of one particular type, one
only needs to compute the node representations for all node types, and then compute
the predictions by calling the :meth:`~dgl.DGLGraph.apply_edges` method as before.
The only difference is that the edge type needs to be specified when calling
``apply_edges``.

.. code:: python
@@ -128,15 +128,15 @@ DGL provides the :func:`dgl.to_block` function, which converts an arbitrary frontier into an MFG.
    dst_nodes = torch.LongTensor([8])
    block = dgl.to_block(frontier, dst_nodes)

The :meth:`dgl.DGLGraph.number_of_src_nodes` and
:meth:`dgl.DGLGraph.number_of_dst_nodes` methods can be used to find the numbers of
source and destination nodes of a particular node type.

.. code:: python

    num_src_nodes, num_dst_nodes = block.number_of_src_nodes(), block.number_of_dst_nodes()
    print(num_src_nodes, num_dst_nodes)

The source node features of an MFG can be accessed through members such as
:attr:`dgl.DGLGraph.srcdata` and :attr:`dgl.DGLGraph.srcnodes`, and the destination
node features through :attr:`dgl.DGLGraph.dstdata` and :attr:`dgl.DGLGraph.dstnodes`.
The usage of ``srcdata``/``dstdata`` and ``srcnodes``/``dstnodes`` is identical to
:attr:`dgl.DGLGraph.ndata` and :attr:`dgl.DGLGraph.nodes` on regular graphs.

.. code:: python
@@ -85,7 +85,7 @@
        x = F.relu(self.conv2(blocks[1], x))
        return x

The input to the second part is usually the output of the previous part, together with
the subgraph of the original graph induced by the edges of the minibatch. The subgraph
is returned from the same data loader. :meth:`dgl.DGLGraph.apply_edges` is used to
compute the scores of the edges with the edge subgraph.

The following code shows an example of predicting edge scores by concatenating the
incident node features and feeding them into a dense layer.
@@ -169,7 +169,7 @@ A model computing the node representations of heterogeneous graphs
        x = self.conv2(blocks[1], x)
        return x

The only implementation difference between homogeneous and heterogeneous graphs for
score prediction is that edge types are used when calling
:meth:`~dgl.DGLGraph.apply_edges`.

.. code:: python
@@ -166,7 +166,7 @@ can be used to compute the representations.
        return x

The only implementation difference between homogeneous and heterogeneous graphs for
score prediction is that edge types are used when calling
:meth:`dgl.DGLGraph.apply_edges`.

.. code:: python
@@ -52,10 +52,10 @@ full-graph training on homogeneous or heterogeneous graphs
In general, to make a custom NN module work on MFGs, the following is needed:

- Slice the first few rows to obtain the output node features from the input features.
  The number of rows is obtained with
  :meth:`block.number_of_dst_nodes <dgl.DGLGraph.number_of_dst_nodes>`.
- If the original graph has a single node type, replace
  :attr:`g.ndata <dgl.DGLGraph.ndata>` with
  :attr:`block.srcdata <dgl.DGLGraph.srcdata>` for input node features, or with
  :attr:`block.dstdata <dgl.DGLGraph.dstdata>` for output node features.
- If the original graph has multiple node types, replace
  :attr:`g.nodes <dgl.DGLGraph.nodes>` with
  :attr:`block.srcnodes <dgl.DGLGraph.srcnodes>` for input node features, or with
  :attr:`block.dstnodes <dgl.DGLGraph.dstnodes>` for output node features.
- Replace :meth:`g.number_of_nodes <dgl.DGLGraph.number_of_nodes>` with
  :meth:`block.number_of_src_nodes <dgl.DGLGraph.number_of_src_nodes>` for the number
  of input nodes, or with
  :meth:`block.number_of_dst_nodes <dgl.DGLGraph.number_of_dst_nodes>` for the number
  of output nodes, respectively.

Heterogeneous graphs
~~~~~~~~~~~~~~~~~~~~
@@ -111,7 +111,7 @@
Heterogeneous graphs
~~~~~~~~~~~~~~~~~~~~

Edge classification on heterogeneous graphs is not very different from that on
homogeneous graphs. To perform edge classification on one edge type, compute the node
representations for all node types, and predict on that edge type with the
:meth:`~dgl.DGLGraph.apply_edges` method.

For example, to make ``DotProductPredictor`` work on one edge type of a heterogeneous
graph, you only need to specify the edge type in the ``apply_edges`` method.