Unverified Commit 7a1f05ae authored by liuzhe-lz, committed by GitHub

Merge pull request #3444 from microsoft/v2.1

V2.1 merge back to master
parents 539a7cd7 a0ae02e6
......@@ -26,7 +26,7 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* ML Platform owners who want to **support AutoML in their platform**.
## **What's NEW!** &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>
* **New release**: [v2.0 is available](https://github.com/microsoft/nni/releases) - _released on Jan-14-2021_
* **New release**: [v2.1 is available](https://github.com/microsoft/nni/releases) - _released on Mar-10-2021_
* **New demo available**: [Youtube entry](https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw) | [Bilibili 入口](https://space.bilibili.com/1649051673) - _last updated on Feb-19-2021_
* **New use case sharing**: [Cost-effective Hyper-parameter Tuning using AdaptDL with NNI](https://medium.com/casl-project/cost-effective-hyper-parameter-tuning-using-adaptdl-with-nni-e55642888761) - _posted on Feb-23-2021_
......@@ -252,7 +252,7 @@ Note:
* Download the examples by cloning the source code.
```bash
git clone -b v2.0 https://github.com/Microsoft/nni.git
git clone -b v2.1 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
......
......@@ -25,7 +25,7 @@ Step 1. Download ``bash-completion``
cd ~
wget https://raw.githubusercontent.com/microsoft/nni/{nni-version}/tools/bash-completion
Here, {nni-version} should be replaced with the version of NNI, e.g., ``master``, ``v2.0``. You can also check the latest ``bash-completion`` script :githublink:`here <tools/bash-completion>`.
Here, {nni-version} should be replaced with the version of NNI, e.g., ``master``, ``v2.1``. You can also check the latest ``bash-completion`` script :githublink:`here <tools/bash-completion>`.
.. cannot find :githublink:`here <tools/bash-completion>`.
......
......@@ -32,7 +32,7 @@ To avoid storage and legality issues, we do not provide any prepared databases.
git clone -b ${NNI_VERSION} https://github.com/microsoft/nni
cd nni/examples/nas/benchmarks
Replace ``${NNI_VERSION}`` with a released version name or branch name, e.g., ``v2.0``.
Replace ``${NNI_VERSION}`` with a released version name or branch name, e.g., ``v2.1``.
#.
Install dependencies via ``pip3 install -r xxx.requirements.txt``. ``xxx`` can be ``nasbench101``\ , ``nasbench201`` or ``nds``.
......
Advanced Tutorial
=================
This document includes two parts. The first part explains the design decision of ``@basic_unit`` and ``serializer``. The second part is the tutorial of how to write a model space with mutators.
``@basic_unit`` and ``serializer``
----------------------------------
.. _serializer:
``@basic_unit`` and ``serialize`` can both be viewed as serializers. They are designed to make the whole model (including its training logic) serializable, so that it can be executed on another process or machine.
**@basic_unit** annotates that a module is a basic unit, i.e., there is no need to understand the details of this module. The effect is that it prevents Retiarii from parsing this module. To understand this, we first briefly explain how Retiarii works: it converts a user-defined model to a graph representation (called graph IR) using `TorchScript <https://pytorch.org/docs/stable/jit.html>`__, where each instantiated module in the model is converted to a subgraph. Mutations are then applied to the graph to generate new graphs, and each new graph is converted back to PyTorch code and executed. ``@basic_unit`` means the module will not be converted to a subgraph; instead, it is converted to a single graph node and treated as a basic unit that is not unfolded any further. When the module is not unfolded, mutations on its initialization parameters become easier.
``@basic_unit`` is usually used in the following cases:
* When users want to tune the initialization parameters of a module using ``ValueChoice``, decorate the module with ``@basic_unit``. For example, in ``self.conv = MyConv(kernel_size=nn.ValueChoice([1, 3, 5]))``, ``MyConv`` should be decorated.
* When a module cannot be successfully parsed to a subgraph, decorate it with ``@basic_unit``. The parse failure could be due to complex control flow: Retiarii currently does not support ad-hoc loops, so if there is an ad-hoc loop in a module's ``forward``, the class should be decorated as a basic unit. For example, the following ``MyModule`` should be decorated.
.. code-block:: python

   @basic_unit
   class MyModule(nn.Module):
       def __init__(self):
           ...

       def forward(self, x):
           for i in range(10):  # <- ad-hoc loop
               ...
* Some inline mutation APIs require the modules they handle to be decorated with ``@basic_unit``. For example, a user-defined module that is provided to ``LayerChoice`` as a candidate op should be decorated, as in the sketch below.
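A minimal sketch of this last case, assuming ``basic_unit`` is importable from ``nni.retiarii`` and ``MyOp`` is a hypothetical candidate op:

.. code-block:: python

   import torch.nn as nn
   import nni.retiarii.nn.pytorch as retiarii_nn
   from nni.retiarii import basic_unit

   @basic_unit
   class MyOp(nn.Module):
       # hypothetical user-defined candidate op, kept opaque to Retiarii
       def __init__(self, channels):
           super().__init__()
           self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

       def forward(self, x):
           return self.conv(x)

   # the decorated module can now be offered to LayerChoice as a candidate
   layer = retiarii_nn.LayerChoice([MyOp(16), nn.Identity()])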
**serialize** is mainly used for serializing model training logic. It enables re-instantiation of the model evaluator in another process or machine; re-instantiation is necessary because, most of the time, the model and evaluator are sent to training services. ``serialize`` is implemented by recording the initialization parameters of the evaluator the user instantiated.
The evaluator-related APIs provided by Retiarii (e.g., ``pl.Classification``, ``pl.DataLoader``) already support serialization, so there is no need to apply ``serialize`` to them. In the following case, users should apply the ``serialize`` API manually.
If the initialization parameters of the evaluator APIs (e.g., ``pl.Classification``, ``pl.DataLoader``) are not primitive types (e.g., ``int``, ``str``), they should be wrapped with ``serialize``. If those parameters' own initialization parameters are in turn not primitive types, ``serialize`` should be applied to them as well. In short, ``serialize`` should be applied recursively when necessary, as sketched below.
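A sketch of this recursive rule, assuming ``serialize(cls, *args, **kwargs)`` records a class together with its initialization arguments (the MNIST dataset and transforms here are illustrative):

.. code-block:: python

   import nni.retiarii.evaluator.pytorch.lightning as pl
   from nni.retiarii import serialize
   from torchvision import transforms
   from torchvision.datasets import MNIST

   # the transform is not a primitive type, so it is wrapped with serialize;
   # its own non-primitive members are wrapped as well -- recursively
   transform = serialize(transforms.Compose, [
       serialize(transforms.ToTensor),
       serialize(transforms.Normalize, (0.1307,), (0.3081,)),
   ])
   train_dataset = serialize(MNIST, root='data/mnist', train=True, transform=transform)
   evaluator = pl.Classification(train_dataloader=pl.DataLoader(train_dataset, batch_size=100))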
Express Mutations with Mutators
-------------------------------
Besides the inline mutations demonstrated `here <./Tutorial.rst>`__, Retiarii provides a more general approach to express a model space: *Mutator*. The inline mutation APIs are themselves implemented with mutators and can be seen as special cases of model mutation.
.. note:: Mutator and inline mutation APIs cannot be used together.
A mutator is a piece of logic that expresses how to mutate a given model. Users are free to write their own mutators. A model space is then expressed with a base model and a list of mutators; a model in the space is sampled by applying the mutators to the base model one after another. An example is shown below.
.. code-block:: python

   applied_mutators = []
   applied_mutators.append(BlockMutator('mutable_0'))
   applied_mutators.append(BlockMutator('mutable_1'))
``BlockMutator`` is defined by users to express how to mutate the base model.
Write a mutator
^^^^^^^^^^^^^^^
A user-defined mutator should inherit the ``Mutator`` class and implement the mutation logic in the member function ``mutate``.
.. code-block:: python

   from typing import List

   from nni.retiarii import Mutator

   class BlockMutator(Mutator):
       def __init__(self, target: str, candidates: List):
           super(BlockMutator, self).__init__()
           self.target = target
           self.candidate_op_list = candidates

       def mutate(self, model):
           nodes = model.get_nodes_by_label(self.target)
           for node in nodes:
               chosen_op = self.choice(self.candidate_op_list)
               node.update_operation(chosen_op.type, chosen_op.params)
The input of ``mutate`` is the graph IR (Intermediate Representation) of the base model (please refer to `here <./ApiReference.rst>`__ for the format and APIs of the IR). Users can mutate the graph using its member functions (e.g., ``get_nodes_by_label``, ``update_operation``). The mutation operations can be combined with the API ``self.choice`` to express a set of possible mutations. In the above example, the node's operation can be changed to any operation from ``candidate_op_list``.
A placeholder, ``nn.Placeholder``, can make mutation easier. If you want to mutate a subgraph or node of your model, define a placeholder in the model to represent it; then use a mutator to replace the placeholder with real modules.
.. code-block:: python

   ph = nn.Placeholder(
       label='mutable_0',
       kernel_size_options=[1, 3, 5],
       n_layer_options=[1, 2, 3, 4],
       exp_ratio=exp_ratio,
       stride=stride
   )
``label`` is used by the mutator to identify this placeholder. The other parameters carry the information required by the mutator; they can be accessed from ``node.operation.parameters`` as a dict and can include anything users want to pass to the user-defined mutator. The complete example code can be found in :githublink:`Mnasnet base model <test/retiarii_test/mnasnet/base_mnasnet.py>`.
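As a hedged sketch of the consuming side, the mutator below reads those parameters back from the node; ``make_block_operation`` is a hypothetical helper that turns the sampled options into an operation type and its parameters:

.. code-block:: python

   from nni.retiarii import Mutator

   class BlockMutator(Mutator):
       def __init__(self, target: str):
           super().__init__()
           self.target = target

       def mutate(self, model):
           for node in model.get_nodes_by_label(self.target):
               params = node.operation.parameters  # the dict put on nn.Placeholder
               kernel_size = self.choice(params['kernel_size_options'])
               n_layer = self.choice(params['n_layer_options'])
               # make_block_operation is hypothetical: it returns (op_type, op_params)
               op_type, op_params = make_block_operation(
                   kernel_size, n_layer, params['exp_ratio'], params['stride'])
               node.update_operation(op_type, op_params)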
Starting an experiment is almost the same as using inline mutation APIs. The only difference is that the applied mutators should be passed to ``RetiariiExperiment``. Below is a simple example.
.. code-block:: python

   exp = RetiariiExperiment(base_model, trainer, applied_mutators, simple_strategy)
   exp_config = RetiariiExeConfig('local')
   exp_config.experiment_name = 'mnasnet_search'
   exp_config.trial_concurrency = 2
   exp_config.max_trial_number = 10
   exp_config.training_service.use_active_gpu = False
   exp.run(exp_config, 8081)
......@@ -39,8 +39,8 @@ Graph Mutation APIs
.. autoclass:: nni.retiarii.Operation
:members:
Trainers
--------
Evaluators
----------
.. autoclass:: nni.retiarii.evaluator.FunctionalEvaluator
:members:
......
One-shot Experiments on Retiarii
================================
Before reading this tutorial, we highly recommend that you first go through the tutorial on how to `define a model space <./Tutorial.rst#define-your-model-space>`__.
Model Search with One-shot Trainer
----------------------------------
With a defined model space, users can explore the space in two ways. One is to use a strategy together with a single-arch evaluator, as demonstrated `here <./Tutorial.rst#explore-the-defined-model-space>`__. The other is to use a one-shot trainer, which consumes much less computational resource than the first approach. This tutorial focuses on the one-shot approach. The principle of the one-shot approach is to combine all the models in a model space into one big model (usually called a super-model or super-graph); search, training and testing are then carried out by training and evaluating this big model.
We list the supported one-shot trainers here:
* DARTS trainer
* ENAS trainer
* ProxylessNAS trainer
* Single-path (random) trainer
See the `API reference <./ApiReference.rst>`__ for detailed usage. Here, we show an example of using the DARTS trainer manually.
.. code-block:: python

   from nni.retiarii.oneshot.pytorch import DartsTrainer

   trainer = DartsTrainer(
       model=model,
       loss=criterion,
       metrics=lambda output, target: accuracy(output, target, topk=(1,)),
       optimizer=optim,
       num_epochs=args.epochs,
       dataset=dataset_train,
       batch_size=args.batch_size,
       log_frequency=args.log_frequency,
       unrolled=args.unrolled
   )
   trainer.fit()
   final_architecture = trainer.export()
**Format of the exported architecture.** TBD.
A one-shot experiment can be visualized with the NAS UI; please refer to `here <../Visualization.rst>`__ for usage guidance. Note that NAS visualization is under intensive development.
Customize a New One-shot Trainer
--------------------------------
One-shot trainers should inherit ``nni.retiarii.oneshot.BaseOneShotTrainer`` and implement the ``fit()`` method (used to conduct the fitting and searching process) and the ``export()`` method (used to return the searched best architecture).
Writing a one-shot trainer is very different from writing a single-arch evaluator. First of all, there are no restrictions on the init method arguments; any Python arguments are acceptable. Secondly, the model fed into a one-shot trainer might contain Retiarii-specific modules such as LayerChoice and InputChoice. Such a model cannot directly forward-propagate, so the trainer needs to decide how to handle those modules.
A typical example is DartsTrainer, where learnable parameters are used to combine multiple choices in LayerChoice. Retiarii provides easy-to-use utility functions for module replacement: ``replace_layer_choice`` and ``replace_input_choice``. A simplified example is as follows:
.. code-block:: python

   import torch
   import torch.nn as nn
   import torch.nn.functional as F

   from nni.retiarii.oneshot import BaseOneShotTrainer
   from nni.retiarii.oneshot.pytorch import replace_layer_choice, replace_input_choice


   class DartsLayerChoice(nn.Module):
       def __init__(self, layer_choice):
           super(DartsLayerChoice, self).__init__()
           self.name = layer_choice.key
           self.op_choices = nn.ModuleDict(layer_choice.named_children())
           self.alpha = nn.Parameter(torch.randn(len(self.op_choices)) * 1e-3)

       def forward(self, *args, **kwargs):
           op_results = torch.stack([op(*args, **kwargs) for op in self.op_choices.values()])
           alpha_shape = [-1] + [1] * (len(op_results.size()) - 1)
           return torch.sum(op_results * F.softmax(self.alpha, -1).view(*alpha_shape), 0)


   class DartsTrainer(BaseOneShotTrainer):
       def __init__(self, model, loss, metrics, optimizer):
           self.model = model
           self.loss = loss
           self.metrics = metrics
           self.num_epochs = 10
           self.nas_modules = []
           replace_layer_choice(self.model, DartsLayerChoice, self.nas_modules)
           ...  # init dataloaders and optimizers

       def fit(self):
           for i in range(self.num_epochs):
               for (trn_X, trn_y), (val_X, val_y) in zip(self.train_loader, self.valid_loader):
                   self.train_architecture(val_X, val_y)
                   self.train_model_weight(trn_X, trn_y)

       @torch.no_grad()
       def export(self):
           result = dict()
           for name, module in self.nas_modules:
               if name not in result:
                   result[name] = select_best_of_module(module)
           return result
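A brief usage sketch of this simplified trainer (``base_model``, ``criterion``, ``accuracy`` and ``optimizer`` are assumed to be defined elsewhere):

.. code-block:: python

   trainer = DartsTrainer(model=base_model, loss=criterion,
                          metrics=accuracy, optimizer=optimizer)
   trainer.fit()                 # alternates architecture and weight updates
   best_arch = trainer.export()  # dict mapping choice names to chosen ops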
The full code of DartsTrainer is available in the Retiarii source code. Please take a look at :githublink:`DartsTrainer <nni/retiarii/oneshot/pytorch/darts.py>`.
Customize A New Evaluator/Trainer
=================================
Customize A New Model Evaluator
===============================
Evaluators/Trainers are necessary to evaluate the performance of newly explored models. In the NAS scenario, this further divides into two use cases:
1. **Single-arch evaluators**: evaluators that are used to train and evaluate one single model.
2. **One-shot trainers**: trainers that handle training and searching simultaneously, from an end-to-end perspective.
Single-arch evaluators
----------------------
A model evaluator is necessary to evaluate the performance of newly explored models. A model evaluator usually covers the training, validating and testing of a single model. We provide two ways for users to write a new model evaluator, each demonstrated below.
With FunctionalEvaluator
^^^^^^^^^^^^^^^^^^^^^^^^
------------------------
The simplest way to customize a new evaluator is with functional APIs, which is very easy when training code is already available. Users only need to write a fit function that wraps everything. This function takes one positional argument (model) and possible keyword arguments. In this way, users get everything under their control, but expose less information to the framework and thus have fewer opportunities for possible optimization. An example is as below:
The simplest way to customize a new evaluator is with functional APIs, which is very easy when training code is already available. Users only need to write a fit function that wraps everything. This function takes one positional argument (``model_cls``) and possible keyword arguments. The keyword arguments (other than ``model_cls``) are fed to ``FunctionalEvaluator`` as its initialization parameters. In this way, users get everything under their control, but expose less information to the framework and thus have fewer opportunities for possible optimization. An example is as below:
.. code-block:: python

   from nni.retiarii.evaluator import FunctionalEvaluator
   from nni.retiarii.experiment.pytorch import RetiariiExperiment

   def fit(model, dataloader):
   def fit(model_cls, dataloader):
       model = model_cls()
       train(model, dataloader)
       acc = test(model, dataloader)
       nni.report_final_result(acc)
......@@ -27,12 +22,14 @@ The simplest way to customize a new evaluator is with functional APIs, which is
   evaluator = FunctionalEvaluator(fit, dataloader=DataLoader(foo, bar))
   experiment = RetiariiExperiment(base_model, evaluator, mutators, strategy)
.. note:: Due to a limitation of our current implementation, the ``fit`` function should be put in another Python file instead of in the main file, as sketched below. This limitation will be fixed in a future release.
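For example, a file layout like the following hypothetical sketch satisfies the note:

.. code-block:: python

   # my_evaluator.py -- defines the fit function in its own module
   def fit(model_cls, dataloader):
       ...

   # main.py -- imports fit instead of defining it inline
   from my_evaluator import fit
   evaluator = FunctionalEvaluator(fit, dataloader=DataLoader(foo, bar))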
With PyTorch-Lightning
^^^^^^^^^^^^^^^^^^^^^^
----------------------
It's recommended to write training code in PyTorch-Lightning style, that is, to write a LightningModule that defines all elements needed for training (e.g., loss function, optimizer) and to define a trainer that takes (optional) dataloaders to execute the training. Before that, please read the `document of PyTorch-lightning <https://pytorch-lightning.readthedocs.io/>` to learn the basic concepts and components provided by PyTorch-lightning.
It's recommended to write training code in PyTorch-Lightning style, that is, to write a LightningModule that defines all elements needed for training (e.g., loss function, optimizer) and to define a trainer that takes (optional) dataloaders to execute the training. Before that, please read the `document of PyTorch-lightning <https://pytorch-lightning.readthedocs.io/>`__ to learn the basic concepts and components provided by PyTorch-lightning.
In pratice, writing a new training module in NNI should inherit ``nni.retiarii.evaluator.pytorch.lightning.LightningModule``, which has a ``set_model`` that will be called after ``__init__`` to save the candidate model (generated by strategy) as ``self.model``. The rest of the process (like ``training_step``) should be the same as writing any other lightning module. Evaluators should also communicate with strategies via two API calls (``nni.report_intermediate_result`` for periodical metrics and ``nni.report_final_result`` for final metrics), added in ``on_validation_epoch_end`` and ``teardown`` respectively.
In practice, a new training module in Retiarii should inherit ``nni.retiarii.evaluator.pytorch.lightning.LightningModule``, which has a ``set_model`` method that will be called after ``__init__`` to save the candidate model (generated by strategy) as ``self.model``. The rest of the process (like ``training_step``) should be the same as writing any other lightning module. Evaluators should also communicate with strategies via two API calls (``nni.report_intermediate_result`` for periodical metrics and ``nni.report_final_result`` for final metrics), added in ``on_validation_epoch_end`` and ``teardown`` respectively.
An example is as follows:
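A minimal sketch of such a module, assuming a simple classification task (the metric plumbing is illustrative, not the exact built-in implementation):

.. code-block:: python

   import nni
   import torch
   import torch.nn.functional as F
   from nni.retiarii.evaluator.pytorch.lightning import LightningModule

   class MyClassification(LightningModule):
       def __init__(self, learning_rate=1e-3):
           super().__init__()
           self.learning_rate = learning_rate
           # the candidate model is injected later via set_model(), not __init__

       def training_step(self, batch, batch_idx):
           x, y = batch
           return F.cross_entropy(self.model(x), y)  # self.model is set by set_model()

       def validation_step(self, batch, batch_idx):
           x, y = batch
           acc = (self.model(x).argmax(dim=-1) == y).float().mean()
           self.log('val_acc', acc)

       def configure_optimizers(self):
           return torch.optim.Adam(self.model.parameters(), lr=self.learning_rate)

       def on_validation_epoch_end(self):
           # periodical metric reported to the strategy
           nni.report_intermediate_result(self.trainer.callback_metrics['val_acc'].item())

       def teardown(self, stage):
           if stage == 'fit':
               # final metric reported when training finishes
               nni.report_final_result(self.trainer.callback_metrics['val_acc'].item())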
......@@ -97,61 +94,3 @@ Then, users need to wrap everything (including LightningModule, trainer and data
train_dataloader=pl.DataLoader(train_dataset, batch_size=100),
val_dataloaders=pl.DataLoader(test_dataset, batch_size=100))
experiment = RetiariiExperiment(base_model, lightning, mutators, strategy)
One-shot trainers
-----------------
One-shot trainers should inherit ``nni.retiarii.oneshot.BaseOneShotTrainer`` and implement the ``fit()`` method (used to conduct the fitting and searching process) and the ``export()`` method (used to return the searched best architecture).
Writing a one-shot trainer is very different from writing classic evaluators. First of all, there are no restrictions on the init method arguments; any Python arguments are acceptable. Secondly, the model fed into one-shot trainers might be a model with Retiarii-specific modules, such as LayerChoice and InputChoice. Such a model cannot directly forward-propagate, so trainers need to decide how to handle those modules.
A typical example is DartsTrainer, where learnable parameters are used to combine multiple choices in LayerChoice. Retiarii provides easy-to-use utility functions for module replacement: ``replace_layer_choice`` and ``replace_input_choice``. A simplified example is as follows:
.. code-block:: python

   from nni.retiarii.oneshot import BaseOneShotTrainer
   from nni.retiarii.oneshot.pytorch import replace_layer_choice, replace_input_choice


   class DartsLayerChoice(nn.Module):
       def __init__(self, layer_choice):
           super(DartsLayerChoice, self).__init__()
           self.name = layer_choice.key
           self.op_choices = nn.ModuleDict(layer_choice.named_children())
           self.alpha = nn.Parameter(torch.randn(len(self.op_choices)) * 1e-3)

       def forward(self, *args, **kwargs):
           op_results = torch.stack([op(*args, **kwargs) for op in self.op_choices.values()])
           alpha_shape = [-1] + [1] * (len(op_results.size()) - 1)
           return torch.sum(op_results * F.softmax(self.alpha, -1).view(*alpha_shape), 0)


   class DartsTrainer(BaseOneShotTrainer):
       def __init__(self, model, loss, metrics, optimizer):
           self.model = model
           self.loss = loss
           self.metrics = metrics
           self.num_epochs = 10
           self.nas_modules = []
           replace_layer_choice(self.model, DartsLayerChoice, self.nas_modules)
           ...  # init dataloaders and optimizers

       def fit(self):
           for i in range(self.num_epochs):
               for (trn_X, trn_y), (val_X, val_y) in zip(self.train_loader, self.valid_loader):
                   self.train_architecture(val_X, val_y)
                   self.train_model_weight(trn_X, trn_y)

       @torch.no_grad()
       def export(self):
           result = dict()
           for name, module in self.nas_modules:
               if name not in result:
                   result[name] = select_best_of_module(module)
           return result
The full code of DartsTrainer is available in the Retiarii source code. Please take a look at :githublink:`nni/retiarii/trainer/pytorch/darts.py`.
......@@ -8,6 +8,8 @@ Retiarii Overview
:maxdepth: 2
Quick Start <Tutorial>
Customize a New Trainer <WriteTrainer>
Write a Model Evaluator <WriteTrainer>
One-shot NAS <OneshotTrainer>
Advanced Tutorial <Advanced>
Customize a New Strategy <WriteStrategy>
Retiarii APIs <ApiReference>
\ No newline at end of file
......@@ -5,6 +5,71 @@
Change Log
==========
Release 2.1 - 3/10/2021
-----------------------
Major updates
^^^^^^^^^^^^^
Neural architecture search
""""""""""""""""""""""""""
* Improve NAS 2.0 (Retiarii) Framework (Improved Experimental)
* Improve the robustness of graph generation and code generation for PyTorch models (#3365)
* Support the inline mutation API ``ValueChoice`` (#3349 #3382)
* Improve the design and implementation of Model Evaluator (#3359 #3404)
* Support Random/Grid/Evolution exploration strategies (i.e., search algorithms) (#3377)
* Refer to `here <https://github.com/microsoft/nni/issues/3301>`__ for Retiarii Roadmap
Training service
""""""""""""""""
* Support shared storage for reuse mode (#3354)
* Support Windows as the local training service in hybrid mode (#3353)
* Remove PAIYarn training service (#3327)
* Add "recently-idle" scheduling algorithm (#3375)
* Deprecate ``preCommand`` and enable ``pythonPath`` for remote training service (#3284 #3410)
* Refactor reuse mode temp folder (#3374)
nnictl & nni.experiment
"""""""""""""""""""""""
* Migrate ``nnicli`` to new Python API ``nni.experiment`` (#3334)
* Refactor the way of specifying tuner in experiment Python API (\ ``nni.experiment``\ ), more aligned with ``nnictl`` (#3419)
WebUI
"""""
* Support showing the assigned training service of each trial in hybrid mode on WebUI (#3261 #3391)
* Support multiple selection for filter status in experiments management page (#3351)
* Improve overview page (#3316 #3317 #3352)
* Support copy trial id in the table (#3378)
Documentation
^^^^^^^^^^^^^
* Improve model compression examples and documentation (#3326 #3371)
* Add Python API examples and documentation (#3396)
* Add SECURITY doc (#3358)
* Add 'What's NEW!' section in README (#3395)
* Update English contributing doc (#3398, thanks external contributor @Yongxuanzhang)
Bug fixes
^^^^^^^^^
* Fix AML outputs path and python process not killed (#3321)
* Fix bug that an experiment launched from Python cannot be resumed by nnictl (#3309)
* Fix import path of network morphism example (#3333)
* Fix bug in the tuple unpack (#3340)
* Fix bug of security for arbitrary code execution (#3311, thanks external contributor @huntr-helper)
* Fix ``NoneType`` error on jupyter notebook (#3337, thanks external contributor @tczhangzhi)
* Fix bugs in Retiarii (#3339 #3341 #3357, thanks external contributor @tczhangzhi)
* Fix bug in AdaptDL mode example (#3381, thanks external contributor @ZeyaWang)
* Fix the spelling mistake of assessor (#3416, thanks external contributor @ByronCHAO)
* Fix bug in ruamel import (#3430, thanks external contributor @rushtehrani)
Release 2.0 - 1/14/2021
-----------------------
......
......@@ -124,4 +124,9 @@ Run the following commands to start the example experiment:
nnictl create --config config_aml.yml
Replace ``${NNI_VERSION}`` with a released version name or branch name, e.g., ``v2.0``.
Replace ``${NNI_VERSION}`` with a released version name or branch name, e.g., ``v2.1``.
Monitor your code in the cloud by using the studio
--------------------------------------------------
To monitor your job's code, visit the studio you created at step 5. Once the job completes, go to the Outputs + logs tab. There you can see a ``70_driver_log.txt`` file; this file contains the standard output from a run and can be useful when you're debugging remote runs in the cloud. Learn more about AML `here <https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-hello-world>`__.
......@@ -51,4 +51,4 @@ hybridConfig:
* trainingServicePlatforms. Required key. This field specifies the platforms used in hybrid mode; the value uses the YAML list format. NNI supports setting ``local``, ``remote``, ``aml``, and ``pai`` in this field.
.. Note:: If a platform is set in ``trainingServicePlatforms``, users should also set the corresponding configuration for that platform. For example, if ``remote`` is set as one of the platforms, ``machineList`` and ``remoteConfig`` should also be set. The local platform in hybrid mode does not support Windows for now.
.. Note:: If a platform is set in ``trainingServicePlatforms``, users should also set the corresponding configuration for that platform. For example, if ``remote`` is set as one of the platforms, ``machineList`` and ``remoteConfig`` should also be set.
\ No newline at end of file
......@@ -120,7 +120,7 @@ Modify ``nni/examples/trials/ga_squad/config_pai.yml``\ , here is the default co
#Your nni_manager ip
nniManagerIp: 10.10.10.10
tuner:
  codeDir: https://github.com/Microsoft/nni/tree/v2.0/examples/tuners/ga_customer_tuner
  codeDir: https://github.com/Microsoft/nni/tree/v2.1/examples/tuners/ga_customer_tuner
  classFileName: customer_tuner.py
  className: CustomerTuner
  classArgs:
......
......@@ -71,4 +71,4 @@ Our documentation is built with :githublink:`sphinx <docs>`.
* If it is an image link which needs to be formatted with embedded HTML grammar, please use a global URL like ``https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png``, which can be automatically generated by dragging the picture onto the `GitHub Issue <https://github.com/Microsoft/nni/issues/new>`__ box.
* If it cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our GitHub repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/v2.0/`` (:githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>` for example).
* If it cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our GitHub repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/v2.1/`` (:githublink:`mnist.py <examples/trials/mnist-pytorch/mnist.py>` for example).
......@@ -24,7 +24,7 @@ Install NNI through source code
.. code-block:: bash
git clone -b v2.0 https://github.com/Microsoft/nni.git
git clone -b v2.1 https://github.com/Microsoft/nni.git
cd nni
python3 -m pip install --upgrade pip setuptools
python3 setup.py develop
......@@ -37,7 +37,7 @@ If you want to perform a persist install instead, we recommend to build your own
.. code-block:: bash
git clone -b v2.0 https://github.com/Microsoft/nni.git
git clone -b v2.1 https://github.com/Microsoft/nni.git
cd nni
export NNI_RELEASE=2.1
python3 -m pip install --upgrade pip setuptools wheel
......@@ -59,7 +59,7 @@ Verify installation
.. code-block:: bash
git clone -b v2.0 https://github.com/Microsoft/nni.git
git clone -b v2.1 https://github.com/Microsoft/nni.git
*
Run the MNIST example.
......
......@@ -40,7 +40,7 @@ If you want to contribute to NNI, refer to `setup development environment <Setup
.. code-block:: bat
git clone -b v2.0 https://github.com/Microsoft/nni.git
git clone -b v2.1 https://github.com/Microsoft/nni.git
cd nni
python setup.py develop
......@@ -52,7 +52,7 @@ Verify installation
.. code-block:: bat
git clone -b v2.0 https://github.com/Microsoft/nni.git
git clone -b v2.1 https://github.com/Microsoft/nni.git
*
Run the MNIST example.
......
......@@ -27,7 +27,7 @@ author = 'Microsoft'
# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = 'v2.0'
release = 'v2.1'
# -- General configuration ---------------------------------------------------
......
......@@ -21,7 +21,7 @@ For details, please refer to the following tutorials:
Write A Search Space <NAS/WriteSearchSpace>
Classic NAS <NAS/ClassicNas>
One-shot NAS <NAS/one_shot_nas>
Retiarii NAS (experimental) <NAS/retiarii/retiarii_index>
Retiarii NAS (Alpha) <NAS/retiarii/retiarii_index>
Customize a NAS Algorithm <NAS/Advanced>
NAS Visualization <NAS/Visualization>
Search Space Zoo <NAS/SearchSpaceZoo>
......
......@@ -47,6 +47,8 @@ class ConfigBase:
        They will be converted to snake_case automatically.
        If a field is missing and doesn't have a default value, it will be set to `dataclasses.MISSING`.
        """
        if 'basepath' in kwargs:
            _base_path = kwargs.pop('basepath')
        kwargs = {util.case_insensitive(key): value for key, value in kwargs.items()}
        if _base_path is None:
            _base_path = Path()
......@@ -68,17 +68,24 @@ class ExperimentConfig(ConfigBase):
    training_service: Union[TrainingServiceConfig, List[TrainingServiceConfig]]

    def __init__(self, training_service_platform: Optional[Union[str, List[str]]] = None, **kwargs):
        base_path = kwargs.pop('_base_path', None)
        kwargs = util.case_insensitive(kwargs)
        if training_service_platform is not None:
            assert 'trainingservice' not in kwargs
            kwargs['trainingservice'] = util.training_service_config_factory(platform = training_service_platform)
            kwargs['trainingservice'] = util.training_service_config_factory(
                platform=training_service_platform,
                base_path=base_path
            )
        elif isinstance(kwargs.get('trainingservice'), (dict, list)):
            # dict means a single training service
            # list means hybrid training service
            kwargs['trainingservice'] = util.training_service_config_factory(config = kwargs['trainingservice'])
            kwargs['trainingservice'] = util.training_service_config_factory(
                config=kwargs['trainingservice'],
                base_path=base_path
            )
        else:
            raise RuntimeError('Unsupported Training service configuration!')
        super().__init__(**kwargs)
        super().__init__(_base_path=base_path, **kwargs)
        for algo_type in ['tuner', 'assessor', 'advisor']:
            if isinstance(kwargs.get(algo_type), dict):
                setattr(self, algo_type, _AlgorithmConfig(**kwargs.pop(algo_type)))