Unverified Commit e9dbef6b authored by Mark Daoust, committed by GitHub

Subtract the entropy to encourage exploration.

parent 949b1987
@@ -353,7 +353,7 @@ class Worker(threading.Thread):
         policy_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=memory.actions,
                                                                      logits=logits)
         policy_loss *= tf.stop_gradient(advantage)
-        policy_loss = 0.01 * entropy
+        policy_loss -= 0.01 * entropy
         total_loss = tf.reduce_mean((0.5 * value_loss + policy_loss))
         return total_loss
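The one-character fix above replaces `=` with `-=`: the entropy bonus is meant to be subtracted from the advantage-weighted cross-entropy, not to overwrite it. A minimal NumPy sketch of the corrected loss, for illustration only (the variable names and the 0.01 coefficient follow the diff; the NumPy translation of the TensorFlow ops is an assumption, not the repository's code):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def policy_loss(logits, actions, advantage, entropy_coef=0.01):
    """Advantage-weighted cross-entropy minus an entropy bonus.

    The bug the commit fixes: using `=` discarded the cross-entropy
    term and kept only the entropy; `-=` subtracts the bonus instead.
    """
    probs = softmax(logits)
    # Sparse softmax cross-entropy: -log p(action taken).
    rows = np.arange(len(actions))
    ce = -np.log(probs[rows, actions])
    # Entropy of the policy distribution, per sample; higher entropy
    # lowers the loss, which encourages exploration.
    entropy = -(probs * np.log(probs)).sum(axis=-1)
    # The TF version stops gradients through the advantage; NumPy has
    # no gradients, so this is a plain elementwise multiply.
    loss = ce * advantage
    loss -= entropy_coef * entropy  # subtract, don't assign
    return loss.mean()
```

With uniform logits over `k` actions, both the cross-entropy and the entropy equal `log(k)`, so the loss is `(1 - entropy_coef) * log(k)`, which makes the subtraction easy to sanity-check.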