Commit 3e39c96a authored by Yan Ni, committed by QuanluZhang

Add test for documentation build (#1924)

parent bf2b9290
@@ -31,6 +31,7 @@ jobs:
     python3 -m pip install tensorflow==1.13.1 --user
     python3 -m pip install keras==2.1.6 --user
     python3 -m pip install gym onnx --user
+    python3 -m pip install sphinx==1.8.3 sphinx-argparse==0.2.5 sphinx-markdown-tables==0.0.9 sphinx-rtd-theme==0.4.2 sphinxcontrib-websupport==1.1.0 recommonmark==0.5.0 --user
     sudo apt-get install swig -y
     nnictl package install --name=SMAC
     nnictl package install --name=BOHB
@@ -69,6 +70,10 @@ jobs:
     cd test
     python3 cli_test.py
   displayName: 'nnicli test'
+- script: |
+    cd docs/en_US/
+    sphinx-build -M html . _build -W
+  displayName: 'Sphinx Documentation Build check'
 - job: 'basic_test_pr_macOS'
   pool:
...
@@ -342,5 +342,3 @@ You can view example for more information
 - **sparsity:** How much percentage of convolutional filters are to be pruned.
 - **op_types:** Only Conv2d is supported in ActivationMeanRankFilterPruner
-***
\ No newline at end of file
@@ -5,11 +5,9 @@ Quantizer on NNI Compressor
 We provide Naive Quantizer to quantizer weight to default 8 bits, you can use it to test quantize algorithm without any configure.
 ### Usage
-tensorflow
-```python nni.compression.tensorflow.NaiveQuantizer(model_graph).compress()
-```
 pytorch
-```python nni.compression.torch.NaiveQuantizer(model).compress()
+```python
+model = nni.compression.torch.NaiveQuantizer(model).compress()
 ```
 ***
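The diff above fixes the PyTorch usage snippet for the Naive Quantizer, which quantizes model weights to 8 bits by default. As a rough standalone sketch of what symmetric 8-bit weight quantization does (a hypothetical illustrative function, not NNI's actual `NaiveQuantizer` implementation):

```python
import numpy as np

def naive_quantize(w, bits=8):
    # Symmetric per-tensor quantization: map weights onto a signed
    # integer grid of `bits` bits, then dequantize back to float,
    # so the tensor keeps its dtype but only takes 2**bits levels.
    qmax = 2 ** (bits - 1) - 1          # 127 for 8 bits
    scale = np.abs(w).max() / qmax
    if scale == 0:
        return w.copy()                 # all-zero tensor: nothing to do
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = np.array([0.5, -1.0, 0.003, 0.75])
wq = naive_quantize(w)
```

Each dequantized weight differs from the original by at most half a quantization step, which is why a quick test like this is enough to sanity-check a quantization algorithm before wiring it into training.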
...
@@ -186,7 +186,7 @@
 * Run trial jobs on the GPU running non-NNI jobs
 * Kubeflow v1beta2 operator
 * Support Kubeflow TFJob/PyTorchJob v1beta2
-* [General NAS programming interface](AdvancedFeature/GeneralNasInterfaces.md)
+* [General NAS programming interface](https://github.com/microsoft/nni/blob/v0.8/docs/en_US/GeneralNasInterfaces.md)
 * Provide NAS programming interface for users to easily express their neural architecture search space through NNI annotation
 * Provide a new command `nnictl trial codegen` for debugging the NAS code
 * Tutorial of NAS programming interface, example of NAS on MNIST, customized random tuner for NAS
@@ -299,7 +299,7 @@
 * Support [Metis tuner](Tuner/MetisTuner.md) as a new NNI tuner. Metis algorithm has been proofed to be well performed for **online** hyper-parameter tuning.
 * Support [ENAS customized tuner](https://github.com/countif/enas_nni), a tuner contributed by github community user, is an algorithm for neural network search, it could learn neural network architecture via reinforcement learning and serve a better performance than NAS.
 * Support [Curve fitting assessor](Assessor/CurvefittingAssessor.md) for early stop policy using learning curve extrapolation.
-* Advanced Support of [Weight Sharing](AdvancedFeature/AdvancedNas.md): Enable weight sharing for NAS tuners, currently through NFS.
+* Advanced Support of [Weight Sharing](https://github.com/microsoft/nni/blob/v0.5/docs/AdvancedNAS.md): Enable weight sharing for NAS tuners, currently through NFS.
 #### Training Service Enhancement
...
@@ -106,7 +106,7 @@ nnictl create --config exp_paiYarn.yml
 ```
 to start the experiment in paiYarn mode. NNI will create OpenpaiYarn job for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
 You can see jobs created by NNI in the OpenpaiYarn cluster's web portal, like:
-![](../../img/nni_paiYarn_joblist.jpg)
+![](../../img/nni_pai_joblist.jpg)
 Notice: In paiYarn mode, NNIManager will start a rest server and listen on a port which is your NNI WebUI's port plus 1. For example, if your WebUI port is `8080`, the rest server will listen on `8081`, to receive metrics from trial job running in Kubernetes. So you should `enable 8081` TCP port in your firewall rule to allow incoming traffic.
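The port rule in the notice above is simple enough to pin down in code (a tiny illustrative helper, not part of NNI itself):

```python
def nni_rest_port(webui_port: int) -> int:
    # Per the notice in the quoted docs: the NNI manager's REST server
    # listens on the WebUI port plus one, and that port must accept
    # incoming TCP traffic (open it in the firewall).
    return webui_port + 1

print(nni_rest_port(8080))  # 8081
```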
...
@@ -72,7 +72,7 @@ language = None
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
 # This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'Release_v1.0.md']
 # The name of the Pygments (syntax highlighting) style to use.
 pygments_style = None
...
@@ -11,3 +11,4 @@ Examples
 EvolutionSQuAD<./TrialExample/SquadEvolutionExamples>
 GBDT<./TrialExample/GbdtExample>
 RocksDB <./TrialExample/RocksdbExamples>
+KDExample <./TrialExample/KDExample>
@@ -18,7 +18,7 @@ For details, please refer to the following tutorials:
 Overview <Compressor/Overview>
 Level Pruner <Compressor/Pruner>
 AGP Pruner <Compressor/Pruner>
-L1Filter Pruner <Compressor/L1FilterPruner>
+L1Filter Pruner <Compressor/l1filterpruner>
 Slim Pruner <Compressor/SlimPruner>
 Lottery Ticket Pruner <Compressor/LotteryTicketHypothesis>
 FPGM Pruner <Compressor/Pruner>
...
@@ -6,5 +6,6 @@ Introduction to NNI Training Services
 Local<./TrainingService/LocalMode>
 Remote<./TrainingService/RemoteMachineMode>
 OpenPAI<./TrainingService/PaiMode>
+OpenPAI Yarn Mode<./TrainingService/PaiYarnMode>
 Kubeflow<./TrainingService/KubeflowMode>
 FrameworkController<./TrainingService/FrameworkControllerMode>
@@ -503,8 +503,11 @@ class PPOTuner(Tuner):
     """
     Generate parameters, if no trial configration for now, self.credit plus 1 to send the config later
+
+    Parameters
+    ----------
     parameter_id : int
-        Unique identifier for requested hyper-parameters. This will later be used in :meth:`receive_trial_result`.
+        Unique identifier for requested hyper-parameters.
+        This will later be used in :meth:`receive_trial_result`.
     **kwargs
         Not used
@@ -512,6 +515,7 @@ class PPOTuner(Tuner):
     -------
     dict
         One newly generated configuration
+
     """
     if self.first_inf:
         self.trials_result = [None for _ in range(self.inf_batch_size)]