Unverified commit 16004228, authored by James Lamb and committed by GitHub

[docs] fix broken links (#6161)

parent 72e8106d
@@ -11,7 +11,7 @@ ignore=
 http.*amd.com/.*
 https.*dl.acm.org/doi/.*
 https.*tandfonline.com/.*
-ignorewarnings=http-robots-denied,https-certificate-error
+ignorewarnings=http-redirected,http-robots-denied,https-certificate-error
 checkextern=1
 [output]
......
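For context on the hunk above: these keys come from LinkChecker's INI-style configuration file (commonly named `.linkcheckerrc`). A minimal sketch of such a file, assuming LinkChecker's documented `[filtering]` section and warning-tag names (treat the exact layout as an assumption, not this repository's full config), might look like:

```ini
; hypothetical minimal .linkcheckerrc sketch
[filtering]
; regular expressions of URLs to skip entirely
ignore=
  http.*amd.com/.*
; warning tags to suppress; http-redirected is the tag added in this commit
ignorewarnings=http-redirected,http-robots-denied,https-certificate-error
; also check links that point outside the site being scanned
checkextern=1
```

Suppressing `http-redirected` keeps the checker from failing on links that now 301-redirect from `http` to `https`, which is consistent with the URL updates in the rest of this commit.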
@@ -25,7 +25,7 @@ We used 5 datasets to conduct our comparison experiments. Details of data are li
 +-----------+-----------------------+------------------------------------------------------------------------+-------------+----------+----------------------------------------------+
 | Yahoo LTR | Learning to rank      | `link <https://webscope.sandbox.yahoo.com/catalog.php?datatype=c>`__   | 473,134     | 700      | set1.train as train, set1.test as test       |
 +-----------+-----------------------+------------------------------------------------------------------------+-------------+----------+----------------------------------------------+
-| MS LTR    | Learning to rank      | `link <http://research.microsoft.com/en-us/projects/mslr/>`__          | 2,270,296   | 137      | {S1,S2,S3} as train set, {S5} as test set    |
+| MS LTR    | Learning to rank      | `link <https://www.microsoft.com/en-us/research/project/mslr/>`__      | 2,270,296   | 137      | {S1,S2,S3} as train set, {S5} as test set    |
 +-----------+-----------------------+------------------------------------------------------------------------+-------------+----------+----------------------------------------------+
 | Expo      | Binary classification | `link <http://stat-computing.org/dataexpo/2009/>`__                    | 11,000,000  | 700      | last 1,000,000 samples were used as test set |
 +-----------+-----------------------+------------------------------------------------------------------------+-------------+----------+----------------------------------------------+
......
@@ -289,7 +289,7 @@ Python-package
 This error should be solved in latest version.
 If you still meet this error, try to remove ``lightgbm.egg-info`` folder in your Python-package and reinstall,
-or check `this thread on stackoverflow <http://stackoverflow.com/questions/18085571/pip-install-error-setup-script-specifies-an-absolute-path>`__.
+or check `this thread on stackoverflow <https://stackoverflow.com/questions/18085571/pip-install-error-setup-script-specifies-an-absolute-path>`__.
 2. Error messages: ``Cannot ... before construct dataset``.
 -----------------------------------------------------------
......
@@ -196,7 +196,7 @@ Huan Zhang, Si Si and Cho-Jui Hsieh. `GPU Acceleration for Large-scale Tree Boos
 .. _link1: https://archive.ics.uci.edu/ml/datasets/HIGGS
-.. _link2: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
+.. _link2: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
 .. _link3: https://www.kaggle.com/c/bosch-production-line-performance/data
......
@@ -960,7 +960,7 @@ gcc
 .. _Boost Binaries: https://sourceforge.net/projects/boost/files/boost-binaries/
-.. _SWIG: http://www.swig.org/download.html
+.. _SWIG: https://www.swig.org/download.html
 .. _this detailed guide: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
......
@@ -384,8 +384,6 @@ From the point forward, you can use any of the following methods to save the Boo
 Kubeflow
 ^^^^^^^^
-`Kubeflow Fairing`_ supports LightGBM distributed training. `These examples`_ show how to get started with LightGBM and Kubeflow Fairing in a hybrid cloud environment.
 Kubeflow users can also use the `Kubeflow XGBoost Operator`_ for machine learning workflows with LightGBM. You can see `this example`_ for more details.
 Kubeflow integrations for LightGBM are not maintained by LightGBM's maintainers.
@@ -528,10 +526,6 @@ See `the mars documentation`_ for usage examples.
 .. _these Dask examples: https://github.com/microsoft/lightgbm/tree/master/examples/python-guide/dask
-.. _Kubeflow Fairing: https://www.kubeflow.org/docs/components/fairing/fairing-overview
-.. _These examples: https://github.com/kubeflow/fairing/tree/master/examples/lightgbm
 .. _Kubeflow XGBoost Operator: https://github.com/kubeflow/xgboost-operator
 .. _this example: https://github.com/kubeflow/xgboost-operator/tree/master/config/samples/lightgbm-dist
......
@@ -25,8 +25,6 @@ You can find more details on the experimentation below:
 - `Laurae's Benchmark Master Data (Interactive) <https://public.tableau.com/views/gbt_benchmarks/Master-Data?:showVizHome=no>`__
-- `Kaggle Paris Meetup #12 Slides <https://drive.google.com/file/d/0B6qJBmoIxFe0ZHNCOXdoRWMxUm8/view>`__
 The image below compares the runtime for training with different compiler options to a baseline using LightGBM compiled with ``-O2 --mtune=core2``. All three options are faster than that baseline. The best performance was achieved with ``-O3 --mtune=native``.
 .. image:: ./_static/images/gcc-comparison-2.png
......