OpenDAS / dgl

Commit 6106a99d, authored Feb 17, 2022 by RhettYing

    refine

parent 3b49370d
Showing 4 changed files with 17 additions and 24 deletions
docs/source/api/python/dgl.data.rst    +4  -1
docs/source/guide/data-loadcsv.rst     +7  -10
python/dgl/data/csv_dataset.py         +2  -7
tutorials/blitz/6_load_data.py         +4  -6
docs/source/api/python/dgl.data.rst
...
...
@@ -18,7 +18,10 @@ Base Dataset Class
 .. autoclass:: DGLDataset
     :members: download, save, load, process, has_cache, __getitem__, __len__

-.. autoclass:: DGLCSVDataset
+CSV Dataset Class
+-----------------
+
+.. autoclass:: CSVDataset

 Node Prediction Datasets
 ---------------------------------------
...
...
docs/source/guide/data-loadcsv.rst
...
...
@@ -171,8 +171,8 @@ for edges:
     3,0,False,True,False,"0.9784264442230887, 0.22131880861864428, 0.3161154827254189"
     4,1,True,True,False,"0.23142237259162102, 0.8715767748481147, 0.19117861103555467"

-After loaded, the dataset has one graph. Node/edge features are stored in ```ndata`` and ``edata``
-with the same column names. The example demonstrates how to specify a vector-shaped feature
+After loaded, the dataset has one graph. Node/edge features are stored in ``ndata`` and ``edata``
+with the same column names. The example demonstrates how to specify a vector-shaped feature
 using comma-separated list enclosed by double quotes ``"..."``.

 .. code:: python
...
...
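The vector-feature convention in the hunk above (a comma-separated list wrapped in double quotes) can be checked with a plain-Python sketch using only the stdlib ``csv`` module; the column layout follows the guide's edges CSV, and the shortened values are illustrative:

```python
import csv
import io

# Miniature edges CSV in the guide's format: the quoted "feat" column
# holds a comma-separated vector, so the csv module keeps it as one field.
raw = (
    "src_id,dst_id,feat\n"
    '3,0,"0.9784, 0.2213, 0.3161"\n'
    '4,1,"0.2314, 0.8715, 0.1911"\n'
)

rows = list(csv.DictReader(io.StringIO(raw)))

# Split each quoted string into a float vector, mirroring what any CSV
# loader must do to recover vector-shaped features from such a column.
feats = [[float(x) for x in row["feat"].split(",")] for row in rows]

print(feats)  # [[0.9784, 0.2213, 0.3161], [0.2314, 0.8715, 0.1911]]
```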
@@ -378,18 +378,14 @@ After loaded, the dataset has multiple homographs with features and labels:
               ndata_schemes={'feat': Scheme(shape=(3,), dtype=torch.float64)}
               edata_schemes={'feat': Scheme(shape=(3,), dtype=torch.float64)})
         >>> print(data0)
-        {'feat': tensor([0.7426, 0.5197, 0.8149]), 'label': tensor([0])}
+        {'feat': tensor([0.7426, 0.5197, 0.8149], dtype=torch.float64), 'label': tensor(0)}
         >>> graph1, data1 = dataset[1]
         >>> print(graph1)
         Graph(num_nodes=5, num_edges=10,
               ndata_schemes={'feat': Scheme(shape=(3,), dtype=torch.float64)}
               edata_schemes={'feat': Scheme(shape=(3,), dtype=torch.float64)})
         >>> print(data1)
-        {'feat': tensor([0.5348, 0.2864, 0.1155]), 'label': tensor([0])}
-
-    .. note::
-        When there are multiple graphs, ``CSVDataset`` currently requires
-        them to be homogeneous.
+        {'feat': tensor([0.5348, 0.2864, 0.1155], dtype=torch.float64), 'label': tensor(0)}
Custom Data Parser
...
...
@@ -469,10 +465,11 @@ To parse the string type labels, one can define a ``DataParser`` class as follow
             parsed[header] = dt
         return parsed

-Create a ``CSVDataset`` using the defined ``DataParser``:
+Create a ``CSVDataset`` using the defined ``DataParser``:
+
 .. code:: python

     >>> import dgl
     >>> dataset = dgl.data.CSVDataset('./customized_parser_dataset',
     ...                               ndata_parser=MyDataParser(),
     ...                               edata_parser=MyDataParser())
...
...
@@ -483,7 +480,7 @@ To parse the string type labels, one can define a ``DataParser`` class as follow
 .. note::

-    To specify different ``DataParser``s for different node/edge types, pass a dictionary to
+    To specify different ``DataParser``\ s for different node/edge types, pass a dictionary to
     ``ndata_parser`` and ``edata_parser``, where the key is type name (a single string for
     node type; a string triplet for edge type) and the value is the ``DataParser`` to use.
...
...
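The label-encoding idea behind the ``DataParser`` hunk above can be sketched without dgl at all. The class name, ``__call__`` signature, and column names below are illustrative only, not the actual ``dgl.data.CSVDataset`` parser interface:

```python
import csv
import io

class LabelEncodingParser:
    """Toy parser: map string labels in a 'label' column to integer ids."""

    def __call__(self, columns):
        parsed = {}
        for header, values in columns.items():
            if header == "label":
                # Assign integer ids in order of first appearance.
                mapping = {}
                parsed[header] = [mapping.setdefault(v, len(mapping))
                                  for v in values]
            else:
                parsed[header] = values
        return parsed

# Build column-oriented data from a miniature nodes CSV.
raw = "node_id,label\n0,A\n1,B\n2,A\n"
columns = {"node_id": [], "label": []}
for row in csv.DictReader(io.StringIO(raw)):
    for key in columns:
        columns[key].append(row[key])

parsed = LabelEncodingParser()(columns)
print(parsed["label"])  # [0, 1, 0]
```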
python/dgl/data/csv_dataset.py
...
...
@@ -6,8 +6,7 @@ from ..base import DGLError

 class CSVDataset(DGLDataset):
-    """ This class aims to parse data from CSV files, construct DGLGraph
-    and behaves as a DGLDataset.
+    """Dataset class that loads and parses graph data from CSV files.

     Parameters
     ----------
...
...
@@ -52,11 +51,7 @@ class CSVDataset(DGLDataset):
     Examples
     --------
-    ``meta.yaml`` and CSV files are under ``csv_dir``.
-
-    >>> csv_dataset = dgl.data.DGLCSVDataset(csv_dir)
-
-    See more details in :ref:`guide-data-pipeline-loadcsv`.
+    Please refer to :ref:`guide-data-pipeline-loadcsv`.
     """
     META_YAML_NAME = 'meta.yaml'
...
...
tutorials/blitz/6_load_data.py
...
...
@@ -226,12 +226,10 @@ print(graph, label)
 # Creating Dataset from CSV via :class:`~dgl.data.DGLCSVDataset`
 # ------------------------------------------------------------
 #
-# In the previous examples, dataset is created directly from raw CSV
-# files via :class:`~dgl.data.DGLDataset`. DGL provides utility class
-# :class:`~dgl.data.DGLCSVDataset` to read data from CSV files and
-# construct :class:`~dgl.DGLGraph` more flexibly. Please refer to
-# :ref:`guide-data-pipeline-loadcsv` and see if this utility is more
-# suitable for your case.
+# The previous examples describe how to create a dataset from CSV files
+# step-by-step. DGL also provides a utility class :class:`~dgl.data.CSVDataset`
+# for reading and parsing data from CSV files. See :ref:`guide-data-pipeline-loadcsv`
+# for more details.
 #
...
...
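The "step-by-step" dataset pipeline the tutorial text refers to can be mocked in plain Python. The method names mirror the ``DGLDataset`` members listed in the API docs above (``process``, ``__getitem__``, ``__len__``), but this sketch is a dependency-free illustration, not dgl's implementation:

```python
class MiniCSVDataset:
    """Toy dataset following the DGLDataset lifecycle: process() builds
    the samples once; __getitem__/__len__ expose them for iteration."""

    def __init__(self, rows):
        self._rows = rows
        self._edges = []
        self.process()

    def process(self):
        # Parse each CSV-like "src,dst" row into an edge tuple.
        for line in self._rows:
            src, dst = line.split(",")
            self._edges.append((int(src), int(dst)))

    def __getitem__(self, idx):
        return self._edges[idx]

    def __len__(self):
        return len(self._edges)

ds = MiniCSVDataset(["0,1", "1,2"])
print(len(ds), ds[0])  # 2 (0, 1)
```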