Commit 102dc5fd authored by Guolin Ke's avatar Guolin Ke
Browse files

Add docs for GPU.

parent 0bb4a825
...@@ -16,6 +16,9 @@ For more details, please refer to [Features](https://github.com/Microsoft/LightG
News
----
04/10/2017 : Support GPU-accelerated tree learning.
02/20/2017 : Update to LightGBM v2.
01/08/2017 : Release [**R-package**](./R-package) beta version, welcome to have a try and provide feedback.
......
...@@ -53,6 +53,10 @@ The parameter format is ```key1=value1 key2=value2 ... ``` . And parameters can
* Number of threads for LightGBM.
* For the best speed, set this to the number of **real CPU cores**, not the number of threads (most CPUs use [hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading) to generate 2 threads per CPU core).
* For parallel learning, do not use all CPU cores, since this causes poor performance for the network.
* ```device```, default=```cpu```, options=```cpu```,```gpu```
* Choose the device for tree learning; you can use GPU to achieve faster learning.
* Note: 1. It is recommended to use a smaller ```max_bin``` (e.g. ```63```) to get a better speed-up. 2. For faster speed, the GPU uses 32-bit floating point to sum up by default, which may affect accuracy for some tasks; you can set ```gpu_use_dp=true``` to enable 64-bit floating point, but it will slow down training. 3. Refer to the [Installation Guide](https://github.com/Microsoft/LightGBM/wiki/Installation-Guide#with-gpu-support) to build with GPU support.
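Putting the notes above together, a minimal sketch of a GPU training configuration in the documented ```key=value``` parameter format might look like this (the exact set of values to use depends on your task; ```63``` is the speed-oriented ```max_bin``` suggested above):

```
# use the GPU tree learner
device = gpu
# smaller max_bin recommended for a better GPU speed-up
max_bin = 63
# set to true only if 32-bit float summation hurts accuracy for your task
gpu_use_dp = false
```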
## Learning control parameters
* ```max_depth```, default=```-1```, type=int
...@@ -235,6 +239,17 @@ Following parameters are used for parallel learning, and only used for base(sock
* File that lists machines for this parallel learning application
* Each line contains one IP and one port for one machine. The format is ```ip port```, separated by a space.
## GPU parameters
* ```gpu_platform_id```, default=```-1```, type=int
* OpenCL platform ID. Usually each GPU vendor exposes one OpenCL platform.
* The default value of -1 means using the system-wide default platform.
* ```gpu_device_id```, default=```-1```, type=int
* OpenCL device ID in the specified platform. Each GPU in the selected platform has a unique device ID.
* The default value of -1 means using the default device in the selected platform.
* ```gpu_use_dp```, default=```false```, type=bool
* Set to true to use double precision math on GPU (default is single precision).
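As a sketch, selecting a specific OpenCL platform and device in the same ```key=value``` parameter format could look like this (the IDs are examples; actual values depend on your system's OpenCL setup):

```
device = gpu
# example: first OpenCL platform, second device on that platform
gpu_platform_id = 0
gpu_device_id = 1
```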
## Others
### Continued training with input score
......
...@@ -42,7 +42,7 @@ public:
  inline VAL_T InnerRawGet(data_size_t idx);
  inline uint32_t Get(data_size_t idx) override {
-   VAL_T ret = RawGet(idx);
+   VAL_T ret = InnerRawGet(idx);
    if (ret >= min_bin_ && ret <= max_bin_) {
      return ret - min_bin_ + bias_;
    } else {
......