OpenDAS / dgl · commit 94ab9709
Unverified commit 94ab9709, authored Feb 04, 2021 by 张恒瑞, committed by GitHub on Feb 04, 2021.
[Bugfix] fix a typo in consis_loss function (#2616)

parent ff345c2e

Changes: 2 changed files, with 1 addition and 3 deletions:
- examples/pytorch/grand/README.md (+0, -1)
- examples/pytorch/grand/main.py (+1, -2)
examples/pytorch/grand/README.md (view file @ 94ab9709)

@@ -61,7 +61,6 @@ Train a model which follows the original hyperparameters on different datasets.

```shell
# Cora:
python main.py --dataname cora --gpu 0 --lam 1.0 --tem 0.5 --order 8 --sample 4 --input_droprate 0.5 --hidden_droprate 0.5 --dropnode_rate 0.5 --hid_dim 32 --early_stopping 100 --lr 1e-2 --epochs 2000
# Citeseer:
python main.py --dataname citeseer --gpu 0 --lam 0.7 --tem 0.3 --order 2 --sample 2 --input_droprate 0.0 --hidden_droprate 0.2 --dropnode_rate 0.5 --hid_dim 32 --early_stopping 100 --lr 1e-2 --epochs 2000
# Pubmed:
python main.py --dataname pubmed --gpu 0 --lam 1.0 --tem 0.2 --order 5 --sample 4 --input_droprate 0.6 --hidden_droprate 0.8 --dropnode_rate 0.5 --hid_dim 32 --early_stopping 200 --lr 0.2 --epochs 2000 --use_bn
```
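The flags above map one-to-one onto the script's hyperparameters. As a rough sketch of how such a command line is parsed (the flag names are taken from the commands above, but the defaults and help strings here are illustrative assumptions, not copied from the real main.py):

```python
import argparse

def build_parser():
    # Flags mirror the README commands above; defaults are illustrative
    # assumptions, not values read from the actual main.py.
    p = argparse.ArgumentParser(description="GRAND training (sketch)")
    p.add_argument("--dataname", type=str, default="cora")
    p.add_argument("--gpu", type=int, default=-1)
    p.add_argument("--lam", type=float, default=1.0)    # consistency-loss weight
    p.add_argument("--tem", type=float, default=0.5)    # sharpening temperature
    p.add_argument("--order", type=int, default=8)      # propagation order
    p.add_argument("--sample", type=int, default=4)     # random propagations
    p.add_argument("--input_droprate", type=float, default=0.5)
    p.add_argument("--hidden_droprate", type=float, default=0.5)
    p.add_argument("--dropnode_rate", type=float, default=0.5)
    p.add_argument("--hid_dim", type=int, default=32)
    p.add_argument("--early_stopping", type=int, default=100)
    p.add_argument("--lr", type=float, default=1e-2)
    p.add_argument("--epochs", type=int, default=2000)
    p.add_argument("--use_bn", action="store_true")
    return p

# Parse the Pubmed-style command from the README.
args = build_parser().parse_args(
    "--dataname pubmed --gpu 0 --tem 0.2 --lr 0.2 --use_bn".split())
print(args.dataname, args.tem, args.use_bn)
```

Unlisted flags fall back to their defaults, so the README commands spell out every hyperparameter explicitly to match the paper's settings.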
examples/pytorch/grand/main.py (view file @ 94ab9709)

@@ -58,7 +58,7 @@ def consis_loss(logps, temp, lam):

```diff
     sharp_p = (th.pow(avg_p, 1./temp) / th.sum(th.pow(avg_p, 1./temp), dim=1, keepdim=True)).detach()
     sharp_p = sharp_p.unsqueeze(2)
-    loss = th.mean(th.sum(th.pow(ps - sharp_p, 1./temp), dim=1, keepdim=True))
+    loss = th.mean(th.sum(th.pow(ps - sharp_p, 2), dim=1, keepdim=True))
     loss = lam * loss
     return loss
```
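With the typo fixed, the consistency loss is the squared distance between each sampled prediction and the temperature-sharpened average, rather than a distance raised to 1/temp. A pure-Python sketch of the same math for a single node (no torch dependency; the names `sharpen` and `consis_loss_1d` are hypothetical):

```python
def sharpen(p, temp):
    """Temperature-sharpen a probability vector: p_j^(1/temp) / sum_k p_k^(1/temp)."""
    powered = [x ** (1.0 / temp) for x in p]
    total = sum(powered)
    return [x / total for x in powered]

def consis_loss_1d(probs, temp, lam):
    """Consistency loss for one node.

    probs: list of S probability vectors, one per random propagation.
    Mirrors the fixed line: squared distance to the sharpened mean,
    summed over classes, averaged over samples, scaled by lam.
    """
    n_classes = len(probs[0])
    avg_p = [sum(p[j] for p in probs) / len(probs) for j in range(n_classes)]
    sharp_p = sharpen(avg_p, temp)
    per_sample = [sum((p[j] - sharp_p[j]) ** 2 for j in range(n_classes))
                  for p in probs]
    return lam * sum(per_sample) / len(per_sample)

# If every sample already equals the sharpened mean, the loss is zero.
print(consis_loss_1d([[0.5, 0.5], [0.5, 0.5]], temp=1.0, lam=1.0))  # 0.0
```

Lowering `temp` below 1 pushes `sharp_p` toward a one-hot target, which is why the README sets `--tem` per dataset (0.5 for Cora, 0.3 for Citeseer, 0.2 for Pubmed).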
@@ -102,7 +102,6 @@ if __name__ == '__main__':

```python
    test_idx = th.nonzero(test_mask, as_tuple=False).squeeze().to(device)

    # Step 2: Create model =================================================================== #
    model = GRAND(n_features, args.hid_dim, n_classes, args.sample,
                  args.order, args.dropnode_rate, args.input_droprate,
                  args.hidden_droprate, args.use_bn)
```
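The `sample`, `order`, and `dropnode_rate` arguments passed to `GRAND` control its random-propagation augmentation. As a rough illustration based on the GRAND paper's description (not on code in this diff; the function name `drop_node` is hypothetical), DropNode zeroes a whole node's feature row with probability p and rescales survivors by 1/(1-p) so the expectation is unchanged:

```python
import random

def drop_node(features, p, rng=None):
    """DropNode-style sketch: zero each node's feature row with probability p,
    rescale surviving rows by 1/(1-p) to preserve the expected value."""
    rng = rng or random.Random(0)
    out = []
    for row in features:
        if rng.random() < p:
            out.append([0.0] * len(row))
        else:
            out.append([x / (1.0 - p) for x in row])
    return out

feats = [[1.0, 2.0], [3.0, 4.0]]
print(drop_node(feats, p=0.0))  # p=0 leaves features unchanged
```

Each of the `sample` random propagations applies an independent mask like this before `order`-step feature propagation, and `consis_loss` then pulls the resulting predictions toward their sharpened mean.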