@@ -59,7 +59,7 @@ NNI's folder structure is shown below:
Function annotation of TrainingService
--------------------------------------
.. code-block:: typescript
abstract class TrainingService {
public abstract listTrialJobs(): Promise<TrialJobDetail[]>;
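// other abstract methods, such as submitTrialJob and the ClusterMetadata getter/setter, are described below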
...
...
@@ -82,7 +82,7 @@ The parent class of TrainingService has a few abstract functions, users need to
ClusterMetadata is the data related to platform details. For example, the ClusterMetadata defined for the remote machine training service is:
.. code-block:: typescript
export class RemoteMachineMeta {
public readonly ip : string;
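// remaining connection fields (e.g. port, username and the SSH credential) are omitted here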
...
...
@@ -117,7 +117,7 @@ This function will return the metadata value according to the values, it could b
SubmitTrialJob is a function to submit new trial jobs; users should generate a job instance of the TrialJobDetail type. TrialJobDetail is defined as follows:
@@ -239,7 +239,7 @@ To run the tutorial, follow the steps below:
2. **Search**: Based on the architecture of the simplified PFLD, the multi-stage search space and the hyper-parameters for searching should first be configured to construct the supernet. For example,
.. code-block:: python
from lib.builder import search_space
from lib.ops import PRIMITIVES
...
...
@@ -249,13 +249,13 @@ To run the tutorial, follow the steps below:
# configuration of hyper-parameters
# search_space defines the multi-stage search space
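# a hypothetical illustration of how the imported objects are typically used;
# the names below are assumptions rather than the tutorial's exact settings
stages = list(search_space)       # multi-stage search space defined in lib.builder
candidate_ops = list(PRIMITIVES)  # candidate operations searched at each stage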
@@ -1506,7 +1506,7 @@ NNICTL new features and updates
Before v0.3, NNI only supported running a single experiment at a time. After this release, users are able to run multiple experiments simultaneously. Each experiment requires a unique port; the first experiment uses the default port, as in previous versions. You can specify a unique port for the remaining experiments.
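As a hedged sketch (the config paths are placeholders), extra experiments can be given an explicit ``--port``; the same ``nnictl create ... --port`` invocation also works directly from the command line:

.. code-block:: python

   # launch two NNI experiments side by side; the second one gets an explicit port
   import subprocess

   subprocess.Popen(["nnictl", "create", "--config", "exp1/config.yml"])                    # default port
   subprocess.Popen(["nnictl", "create", "--config", "exp2/config.yml", "--port", "8081"])  # unique port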
Note: for fine-tuning a pruned model, run :githublink:`basic_pruners_torch.py <examples/model_compress/pruning/basic_pruners_torch.py>` first to get the mask file, then pass the mask path as an argument to the script.
@@ -6,40 +6,42 @@ NNI can easily run on Google Colab platform. However, Colab doesn't expose its p
How to Open NNI's Web UI on Google Colab
----------------------------------------
#. Install the required packages and software.
.. code-block:: bash
! pip install nni # install nni
! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip # download ngrok and unzip it
! unzip ngrok-stable-linux-amd64.zip
! mkdir -p nni_repo
! git clone https://github.com/microsoft/nni.git nni_repo/nni # clone NNI's official repo to get examples
#. Register an ngrok account `here <https://ngrok.com/>`__, then connect to your account using your authtoken.
.. code-block:: bash
! ./ngrok authtoken YOUR_AUTH_TOKEN
#. Start an NNI example on a port greater than 1024, then start ngrok with the same port. If you want to use a GPU, make sure ``gpuNum >= 1`` in ``config.yml``. Use ``get_ipython()`` to start ngrok, since the cell will hang if you use ``! ngrok http 5000 &``. For example:
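A minimal sketch of this step, assuming the MNIST example config from the cloned repo (the config path and port are placeholders you can change):

.. code-block:: python

   # start an NNI experiment on port 5000, then tunnel the same port through ngrok;
   # system_raw() runs both commands in the background so the cell does not block
   get_ipython().system_raw('nnictl create --config nni_repo/nni/examples/trials/mnist-pytorch/config.yml --port 5000 &')
   get_ipython().system_raw('./ngrok http 5000 &')

ngrok's local API at ``http://localhost:4040/api/tunnels`` then reports the public URL under which NNI's web UI is reachable.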
As we can see, this function is actually a compiler that converts the internal model DAG configuration ``graph`` (which will be introduced in the ``Model configuration format`` section) to a TensorFlow computation graph.
...
...
@@ -162,6 +162,7 @@ performs topological sorting on the internal graph representation, and the code
.. code-block:: python
for _, topo_i in enumerate(topology):
...
performs the actual conversion that maps each layer to a part of the TensorFlow computation graph.
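To make the idea concrete, here is an illustrative sketch (not the file's actual code) of such a pass, mirroring the ``for _, topo_i in enumerate(topology)`` loop above: each layer is visited in topological order and mapped to a TensorFlow op, reusing the tensors of already-converted inputs. The layer-config keys (``type``, ``inputs``, ``units``) are assumptions made for the example.

.. code-block:: python

   import tensorflow as tf

   def build_tf_graph(topology, layers, input_tensors):
       """topology: layer ids in topological order; layers: id -> layer config dict."""
       tensors = dict(input_tensors)        # tensors built so far, keyed by layer id
       for _, topo_i in enumerate(topology):
           layer = layers[topo_i]
           inputs = [tensors[i] for i in layer["inputs"]]
           if layer["type"] == "dense":     # hypothetical layer types
               out = tf.keras.layers.Dense(layer["units"])(inputs[0])
           elif layer["type"] == "concat":
               out = tf.keras.layers.Concatenate()(inputs)
           else:
               raise ValueError("unknown layer type: %s" % layer["type"])
           tensors[topo_i] = out            # expose the new tensor to later layers
       return tensors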