tianlh / LightGBM-DCU · Commits · c23023bd
Commit c23023bd, authored Oct 19, 2016 by Qiwei Ye
fix some typos
parent 39e47323
Showing 2 changed files with 8 additions and 8 deletions (+8 −8)
src/treelearner/leaf_splits.hpp (+4 −4)
src/treelearner/parallel_tree_learner.h (+4 −4)
src/treelearner/leaf_splits.hpp

@@ -10,7 +10,7 @@
 namespace LightGBM {
 /*!
-* \brief used to find splits candidates for a leaf
+* \brief used to find split candidates for a leaf
 */
 class LeafSplits {
 public:

@@ -26,7 +26,7 @@ public:
 }
 /*!
-* \brief Init splits on current leaf, don't need to travesal all data
+* \brief Init split on current leaf on partial data.
 * \param leaf Index of current leaf
 * \param data_partition current data partition
 * \param sum_gradients

@@ -43,7 +43,7 @@ public:
 }
 /*!
-* \brief Init splits on current leaf, need to travesal all data to sum up
+* \brief Init splits on current leaf, it will travese all data to sum up the results
 * \param gradients
 * \param hessians
 */

@@ -66,7 +66,7 @@ public:
 }
 /*!
-* \brief Init splits on current leaf, need to travesal all data to sum up
+* \brief Init splits on current leaf of partial data.
 * \param leaf Index of current leaf
 * \param data_partition current data partition
 * \param gradients
src/treelearner/parallel_tree_learner.h

@@ -14,8 +14,8 @@ namespace LightGBM {
 /*!
 * \brief Feature parallel learning algorithm.
-* Different machine will find best split on different features, then sync global best split
-* When #data is small or #feature is large, you can use this to have better speed-up
+* Different machine will find best split on different features, then sync global best split
+* It is recommonded used when #data is small or #feature is large
 */
 class FeatureParallelTreeLearner: public SerialTreeLearner {
 public:

@@ -39,8 +39,8 @@ private:
 /*!
 * \brief Data parallel learning algorithm.
-* Workers use local data to construct histograms locally, then sync up global histograms.
-* When #data is large or #feature is small, you can use this to have better speed-up
+* Workers use local data to construct histograms locally, then sync up global histograms.
+* It is recommonded used when #data is large or #feature is small
 */
 class DataParallelTreeLearner: public SerialTreeLearner {
 public: