Unverified commit 070df4a0, authored by liuzhe-lz, committed by GitHub

Merge pull request #4291 from microsoft/v2.5

merge v2.5 back to master
parents 821706b8 6a082fe9
<!-- <style>
table, tr, td{
border: none;
}
div{
width: 300px;
height: 200px;
border: 1px solid grey;
background: #ccc;
box-sizing: border-box;
}
img{
width: 260px;
height: 160px;
margin: 20px;
}
</style> -->
<table>
<tr>
<td>
<div>
<img style="
width: 300px;
"
src="../../img/emoicons/NoBug.png"/>
</div>
</td>
<td>
<div>
<img style="
width: 300px;
" src="../../img/emoicons/Holiday.png"/>
</div>
</td>
<td>
<div>
<img style="
width: 300px;
height: 180px;
" src="../../img/emoicons/Error.png"/>
</div>
</td>
</tr>
<tr>
<td align="center">No bug</td>
<td align="center">Holiday</td>
<td align="center">Error</td>
</tr>
<tr>
<td>
<div>
<img style="
width: 300px;
height: 210px;
" src="../../img/emoicons/Working.png"/>
</div>
</td>
<td>
<div>
<img style="
width: 300px;
" src="../../img/emoicons/Sign.png"/>
</div>
</td>
<td>
<div>
<img style="
width: 300px;
" src="../../img/emoicons/Crying.png"/>
</div>
</td>
</tr>
<tr>
<td align="center">Working</td>
<td align="center">Sign</td>
<td align="center">Crying</td>
</tr>
<tr>
<td>
<div>
<img style="
width: 300px;
height: 190px;
" src="../../img/emoicons/Cut.png"/>
</div>
</td>
<td>
<div>
<img style="
width: 300px;
" src="../../img/emoicons/Weaving.png"/>
</div>
</td>
<td>
<div>
<img style="
width: 300px;
" src="../../img/emoicons/Comfort.png"/>
</div>
</td>
</tr>
<tr>
<td align="center">Cut</td>
<td align="center">Weaving</td>
<td align="center">Comfort</td>
</tr>
<tr>
<td>
<div>
<img style="
width: 300px;
" src="../../img/emoicons/Sweat.png"/>
</div>
</td>
<td></td>
<td></td>
</tr>
<tr>
<td align="center">Sweat</td>
<td align="center"></td>
<td align="center"></td>
</tr>
</table>
@@ -27,7 +27,7 @@ author = 'Microsoft'
 # The short X.Y version
 version = ''
 # The full version, including alpha/beta/rc tags
-release = 'v2.4'
+release = 'v2.5'
 # -- General configuration ---------------------------------------------------
...
@@ -26,6 +26,7 @@ For details, please refer to the following tutorials:
 Overview <Compression/Overview>
 Quick Start <Compression/QuickStart>
 Pruning <Compression/pruning>
+Pruning V2 <Compression/v2_pruning>
 Quantization <Compression/quantization>
 Utilities <Compression/CompressionUtils>
 Advanced Usage <Compression/advanced>
...
@@ -114,17 +114,17 @@ ExperimentConfig
 - Description
 * - experimentName
-- ``Optional[str]``
+- ``str``, optional
 - Mnemonic name of the experiment, which will be shown in WebUI and nnictl.
 * - searchSpaceFile
-- ``Optional[str]``
+- ``str``, optional
 - `Path`_ to the JSON file containing the search space.
 Search space format is determined by tuner. The common format for built-in tuners is documented `here <../Tutorial/SearchSpaceSpec.rst>`__.
 Mutually exclusive to ``searchSpace``.
 * - searchSpace
-- ``Optional[JSON]``
+- ``JSON``, optional
 - Search space object.
 The format is determined by tuner. Common format for built-in tuners is documented `here <../Tutorial/SearchSpaceSpec.rst>`__.
 Note that ``None`` means "no such field" so empty search space should be written as ``{}``.
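To make the ``searchSpace`` / ``searchSpaceFile`` fields concrete, here is a minimal example in the common built-in tuner format; the parameter names are illustrative, not from this PR.

```python
import json

# Minimal search space in the common built-in tuner format
# (see SearchSpaceSpec); parameter names are made up for illustration.
search_space = {
    "lr": {"_type": "loguniform", "_value": [1e-4, 1e-1]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64]},
}

# ``searchSpace`` takes the object inline; ``searchSpaceFile`` points to
# the same content serialized as JSON. An empty space must be {}, not None.
serialized = json.dumps(search_space)
print(serialized)
```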
@@ -137,9 +137,8 @@ ExperimentConfig
 Note that using ``python3`` on Linux and macOS, and using ``python`` on Windows.
 * - trialCodeDirectory
-- ``str``
+- ``str``, optional
-- `Path`_ to the directory containing trial source files.
+- Default: ``"."``. `Path`_ to the directory containing trial source files.
-default: ``"."``.
 All files in this directory will be sent to the training machine, unless in the ``.nniignore`` file.
 (See :ref:`nniignore <nniignore>` for details.)
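A sketch of the ``.nniignore`` behavior described above: files matching an ignore pattern are skipped when ``trialCodeDirectory`` is uploaded. The helper below is hypothetical and uses simple glob patterns; NNI's actual matcher follows ``.gitignore``-style rules.

```python
import fnmatch

def filter_uploaded_files(files, ignore_patterns):
    """Keep only files that match none of the ignore patterns (sketch)."""
    return [
        f for f in files
        if not any(fnmatch.fnmatch(f, pattern) for pattern in ignore_patterns)
    ]

files = ["trial.py", "model.py", "data/train.bin", "logs/run.log"]
kept = filter_uploaded_files(files, ["data/*", "logs/*"])
print(kept)  # ['trial.py', 'model.py']
```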
@@ -149,8 +148,8 @@ ExperimentConfig
 The real concurrency also depends on hardware resources and may be less than this value.
 * - trialGpuNumber
-- ``Optional[int]``
+- ``int`` or ``None``, optional
-- This field might have slightly different meanings for various training services,
+- Default: None. This field might have slightly different meanings for various training services,
 especially when set to ``0`` or ``None``.
 See `training service's document <../training_services.rst>`__ for details.
@@ -159,75 +158,72 @@ ExperimentConfig
 but they can still use all GPU resources if they want.
 * - maxExperimentDuration
-- ``Optional[str]``
+- ``str``, optional
-- Limit the duration of this experiment if specified.
+- Limit the duration of this experiment if specified. The duration is unlimited if not set.
-format: ``number + s|m|h|d``
+Format: ``number + s|m|h|d``.
-examples: ``"10m"``, ``"0.5h"``
+Examples: ``"10m"``, ``"0.5h"``.
 When time runs out, the experiment will stop creating trials but continue to serve WebUI.
 * - maxTrialNumber
-- ``Optional[int]``
+- ``int``, optional
-- Limit the number of trials to create if specified.
+- Limit the number of trials to create if specified. The trial number is unlimited if not set.
 When the budget runs out, the experiment will stop creating trials but continue to serve WebUI.
 * - maxTrialDuration
-- ``Optional[str]``
+- ``str``, optional
-- Limit the duration of trial job if specified.
+- Limit the duration of trial job if specified. The duration is unlimited if not set.
-format: ``number + s|m|h|d``
+Format: ``number + s|m|h|d``.
-examples: ``"10m"``, ``"0.5h"``
+Examples: ``"10m"``, ``"0.5h"``.
 When time runs out, the current trial job will stop.
 * - nniManagerIp
-- ``Optional[str]``
+- ``str``, optional
-- IP of the current machine, used by training machines to access NNI manager. Not used in local mode.
+- Default: default connection chosen by system. IP of the current machine, used by training machines to access NNI manager. Not used in local mode.
-If not specified, IPv4 address of ``eth0`` will be used.
 Except for the local mode, it is highly recommended to set this field manually.
 * - useAnnotation
-- ``bool``
+- ``bool``, optional
-- Enable `annotation <../Tutorial/AnnotationSpec.rst>`__.
+- Default: ``False``. Enable `annotation <../Tutorial/AnnotationSpec.rst>`__.
-default: ``False``.
 When using annotation, ``searchSpace`` and ``searchSpaceFile`` should not be specified manually.
 * - debug
-- ``bool``
+- ``bool``, optional
-- Enable debug mode.
+- Default: ``False``. Enable debug mode.
-default: ``False``
 When enabled, logging will be more verbose and some internal validation will be loosened.
 * - logLevel
-- ``Optional[str]``
+- ``str``, optional
-- Set log level of the whole system.
+- Default: ``info`` or ``debug``, depending on ``debug`` option. Set log level of the whole system.
 values: ``"trace"``, ``"debug"``, ``"info"``, ``"warning"``, ``"error"``, ``"fatal"``
-Defaults to "info" or "debug", depending on ``debug`` option. When debug mode is enabled, Loglevel is set to "debug", otherwise, Loglevel is set to "info".
+When debug mode is enabled, Loglevel is set to "debug", otherwise, Loglevel is set to "info".
 Most modules of NNI will be affected by this value, including NNI manager, tuner, training service, etc.
 The exception is trial, whose logging level is directly managed by trial code.
 For Python modules, "trace" acts as logging level 0 and "fatal" acts as ``logging.CRITICAL``.
 * - experimentWorkingDirectory
-- ``Optional[str]``
+- ``str``, optional
-- Specify the :ref:`directory <path>` to place log, checkpoint, metadata, and other run-time stuff.
+- Default: ``~/nni-experiments``.
-By default uses ``~/nni-experiments``.
+Specify the :ref:`directory <path>` to place log, checkpoint, metadata, and other run-time stuff.
 NNI will create a subdirectory named by experiment ID, so it is safe to use the same directory for multiple experiments.
 * - tunerGpuIndices
-- ``Optional[list[int] | str | int]``
+- ``list[int]`` or ``str`` or ``int``, optional
 - Limit the GPUs visible to tuner, assessor, and advisor.
 This will be the ``CUDA_VISIBLE_DEVICES`` environment variable of tuner process.
 Because tuner, assessor, and advisor run in the same process, this option will affect them all.
 * - tuner
-- ``Optional[AlgorithmConfig]``
+- ``AlgorithmConfig``, optional
 - Specify the tuner.
 The built-in tuners can be found `here <../builtin_tuner.rst>`__ and you can follow `this tutorial <../Tuner/CustomizeTuner.rst>`__ to customize a new tuner.
 * - assessor
-- ``Optional[AlgorithmConfig]``
+- ``AlgorithmConfig``, optional
 - Specify the assessor.
 The built-in assessors can be found `here <../builtin_assessor.rst>`__ and you can follow `this tutorial <../Assessor/CustomizeAssessor.rst>`__ to customize a new assessor.
 * - advisor
-- ``Optional[AlgorithmConfig]``
+- ``AlgorithmConfig``, optional
 - Specify the advisor.
 NNI provides two built-in advisors: `BOHB <../Tuner/BohbAdvisor.rst>`__ and `Hyperband <../Tuner/HyperbandAdvisor.rst>`__, and you can follow `this tutorial <../Tuner/CustomizeAdvisor.rst>`__ to customize a new advisor.
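The ``number + s|m|h|d`` duration strings accepted by ``maxExperimentDuration`` and ``maxTrialDuration`` above can be parsed into seconds as sketched below. NNI has its own parser; this hypothetical helper only illustrates the format.

```python
# Seconds per supported unit suffix: s|m|h|d.
_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_duration(value: str) -> float:
    """Convert a duration like '10m' or '0.5h' to seconds (sketch)."""
    number, unit = value[:-1], value[-1]
    if unit not in _UNIT_SECONDS:
        raise ValueError(f"duration must end with one of {sorted(_UNIT_SECONDS)}")
    return float(number) * _UNIT_SECONDS[unit]

print(parse_duration("10m"))   # 600.0
print(parse_duration("0.5h"))  # 1800.0
```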
@@ -236,7 +232,7 @@ ExperimentConfig
 - Specify the `training service <../TrainingService/Overview.rst>`__.
 * - sharedStorage
-- ``Optional[SharedStorageConfig]``
+- ``SharedStorageConfig``, optional
 - Configure the shared storage, detailed usage can be found `here <../Tutorial/HowToUseSharedStorage.rst>`__.

AlgorithmConfig
@@ -259,23 +255,23 @@ For customized algorithms, there are two ways to describe them:
 - Description
 * - name
-- ``Optional[str]``
+- ``str`` or ``None``, optional
-- Name of the built-in or registered algorithm.
+- Default: None. Name of the built-in or registered algorithm.
 ``str`` for the built-in and registered algorithm, ``None`` for other customized algorithms.
 * - className
-- ``Optional[str]``
+- ``str`` or ``None``, optional
-- Qualified class name of not registered customized algorithm.
+- Default: None. Qualified class name of not registered customized algorithm.
 ``None`` for the built-in and registered algorithm, ``str`` for other customized algorithms.
 example: ``"my_tuner.MyTuner"``
 * - codeDirectory
-- ``Optional[str]``
+- ``str`` or ``None``, optional
-- `Path`_ to the directory containing the customized algorithm class.
+- Default: None. Path_ to the directory containing the customized algorithm class.
 ``None`` for the built-in and registered algorithm, ``str`` for other customized algorithms.
 * - classArgs
-- ``Optional[dict[str, Any]]``
+- ``dict[str, Any]``, optional
 - Keyword arguments passed to algorithm class' constructor.
 See algorithm's document for supported value.
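The ``name`` / ``className`` convention in the table above is mutually exclusive: built-in or registered algorithms set ``name``, customized ones set ``className`` (plus ``codeDirectory``). The validator below is a hypothetical illustration, not NNI code.

```python
def validate_algorithm_config(config: dict) -> None:
    """Require exactly one of 'name' and 'className' (illustrative check)."""
    has_name = config.get("name") is not None
    has_class_name = config.get("className") is not None
    if has_name == has_class_name:
        raise ValueError("set exactly one of 'name' and 'className'")

# A built-in tuner uses ``name``; classArgs are passed to its constructor.
builtin_tuner = {"name": "TPE", "classArgs": {"optimize_mode": "maximize"}}
# A customized tuner uses ``className`` and ``codeDirectory``.
custom_tuner = {"className": "my_tuner.MyTuner", "codeDirectory": "."}

validate_algorithm_config(builtin_tuner)
validate_algorithm_config(custom_tuner)
```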
@@ -311,8 +307,8 @@ Detailed usage can be found `here <../TrainingService/LocalMode.rst>`__.
 -
 * - useActiveGpu
-- ``Optional[bool]``
+- ``bool``, optional
-- Specify whether NNI should submit trials to GPUs occupied by other tasks.
+- Default: ``False``. Specify whether NNI should submit trials to GPUs occupied by other tasks.
 Must be set when ``trialGpuNumber`` greater than zero.
 Following processes can make GPU "active":
@@ -325,12 +321,11 @@ Detailed usage can be found `here <../TrainingService/LocalMode.rst>`__.
 When you create multiple NNI experiments and ``useActiveGpu`` is set to ``True``, they will submit multiple trials to the same GPU(s) simultaneously.
 * - maxTrialNumberPerGpu
-- ``int``
+- ``int``, optional
-- Specify how many trials can share one GPU.
+- Default: ``1``. Specify how many trials can share one GPU.
-default: ``1``
 * - gpuIndices
-- ``Optional[list[int] | str | int]``
+- ``list[int]`` or ``str`` or ``int``, optional
 - Limit the GPUs visible to trial processes.
 If ``trialGpuNumber`` is less than the length of this value, only a subset will be visible to each trial.
 This will be used as ``CUDA_VISIBLE_DEVICES`` environment variable.
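As the table notes, ``gpuIndices`` accepts a list of ints, a comma-separated string, or a single int, and the value ends up as the trial's ``CUDA_VISIBLE_DEVICES``. The normalization below is a sketch of that behavior, not NNI's implementation.

```python
def to_cuda_visible_devices(gpu_indices) -> str:
    """Normalize list[int] | str | int into a CUDA_VISIBLE_DEVICES string."""
    if isinstance(gpu_indices, int):
        return str(gpu_indices)
    if isinstance(gpu_indices, str):
        return gpu_indices          # already comma-separated
    return ",".join(str(i) for i in gpu_indices)

print(to_cuda_visible_devices([0, 1]))  # 0,1
print(to_cuda_visible_devices("0,1"))   # 0,1
print(to_cuda_visible_devices(2))       # 2
```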
@@ -357,8 +352,8 @@ Detailed usage can be found `here <../TrainingService/RemoteMachineMode.rst>`__.
 - List of training machines.
 * - reuseMode
-- ``bool``
+- ``bool``, optional
-- Enable `reuse mode <../TrainingService/Overview.rst#training-service-under-reuse-mode>`__.
+- Default: ``True``. Enable `reuse mode <../TrainingService/Overview.rst#training-service-under-reuse-mode>`__.

RemoteMachineConfig
"""""""""""""""""""
@@ -376,31 +371,29 @@ RemoteMachineConfig
 - IP or hostname (domain name) of the machine.
 * - port
-- ``int``
+- ``int``, optional
-- SSH service port.
+- Default: ``22``. SSH service port.
-default: ``22``
 * - user
 - ``str``
 - Login user name.
 * - password
-- ``Optional[str]``
+- ``str``, optional
 - If not specified, ``sshKeyFile`` will be used instead.
 * - sshKeyFile
-- ``Optional[str]``
+- ``str``, optional
 - `Path`_ to ``sshKeyFile`` (identity file).
 Only used when ``password`` is not specified.
 * - sshPassphrase
-- ``Optional[str]``
+- ``str``, optional
 - Passphrase of SSH identity file.
 * - useActiveGpu
-- ``bool``
+- ``bool``, optional
-- Specify whether NNI should submit trials to GPUs occupied by other tasks.
+- Default: ``False``. Specify whether NNI should submit trials to GPUs occupied by other tasks.
-default: ``False``
 Must be set when ``trialGpuNumber`` greater than zero.
 Following processes can make GPU "active":
@@ -413,18 +406,17 @@ RemoteMachineConfig
 When you create multiple NNI experiments and ``useActiveGpu`` is set to ``True``, they will submit multiple trials to the same GPU(s) simultaneously.
 * - maxTrialNumberPerGpu
-- ``int``
+- ``int``, optional
-- Specify how many trials can share one GPU.
+- Default: ``1``. Specify how many trials can share one GPU.
-default: ``1``
 * - gpuIndices
-- ``Optional[list[int] | str | int]``
+- ``list[int]`` or ``str`` or ``int``, optional
 - Limit the GPUs visible to trial processes.
 If ``trialGpuNumber`` is less than the length of this value, only a subset will be visible to each trial.
 This will be used as ``CUDA_VISIBLE_DEVICES`` environment variable.
 * - pythonPath
-- ``Optional[str]``
+- ``str``, optional
 - Specify a Python environment.
 This path will be inserted at the front of PATH. Here are some examples:
@@ -434,7 +426,7 @@ RemoteMachineConfig
 If you are working on Anaconda, there is some difference. On Windows, you also have to add ``../script`` and ``../Library/bin`` separated by ``;``. Examples are as below:
 - (linux anaconda) pythonPath: ``/home/yourname/anaconda3/envs/myenv/bin/``
-- (windows anaconda) pythonPath: ``C:/Users/yourname/.conda/envs/myenv;C:/Users/yourname/.conda/envs/myenv/Scripts;C:/Users/yourname/.conda/envs/myenv/Library/bin``
+- (windows anaconda) pythonPath: ``C:/Users/yourname/.conda/envs/myenv``; ``C:/Users/yourname/.conda/envs/myenv/Scripts``; ``C:/Users/yourname/.conda/envs/myenv/Library/bin``
 This is useful if preparing steps vary for different machines.
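What ``pythonPath`` does, per the text above, is prepend the configured value to PATH on the remote machine. The helper and the example PATH values below are illustrative only; separators follow the platform convention (``:`` on Linux, ``;`` on Windows).

```python
def prepend_to_path(python_path: str, current_path: str, windows: bool = False) -> str:
    """Sketch: put the configured pythonPath at the front of PATH."""
    separator = ";" if windows else ":"
    return f"{python_path}{separator}{current_path}"

# Linux Anaconda environment: one bin directory is enough.
linux_path = prepend_to_path("/home/yourname/anaconda3/envs/myenv/bin", "/usr/bin:/bin")

# Windows Anaconda: env root, Scripts and Library/bin, joined by ';'.
win_python = ";".join([
    "C:/Users/yourname/.conda/envs/myenv",
    "C:/Users/yourname/.conda/envs/myenv/Scripts",
    "C:/Users/yourname/.conda/envs/myenv/Library/bin",
])
windows_path = prepend_to_path(win_python, "C:/Windows/System32", windows=True)
print(linux_path)
```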
@@ -485,9 +477,8 @@ Detailed usage can be found `here <../TrainingService/PaiMode.rst>`__.
 - Specify the storage name used in OpenPAI.
 * - dockerImage
-- ``str``
+- ``str``, optional
-- Name and tag of docker image to run the trials.
+- Default: ``"msranni/nni:latest"``. Name and tag of docker image to run the trials.
-default: ``"msranni/nni:latest"``.
 * - localStorageMountPoint
 - ``str``
@@ -499,16 +490,15 @@ Detailed usage can be found `here <../TrainingService/PaiMode.rst>`__.
 This must be an absolute path.
 * - reuseMode
-- ``bool``
+- ``bool``, optional
-- Enable `reuse mode <../TrainingService/Overview.rst#training-service-under-reuse-mode>`__.
+- Default: ``True``. Enable `reuse mode <../TrainingService/Overview.rst#training-service-under-reuse-mode>`__.
-default: ``False``.
 * - openpaiConfig
-- ``Optional[JSON]``
+- ``JSON``, optional
 - Embedded OpenPAI config file.
 * - openpaiConfigFile
-- ``Optional[str]``
+- ``str``, optional
 - `Path`_ to OpenPAI config file.
 An example can be found `here <https://github.com/microsoft/pai/blob/master/docs/manual/cluster-user/examples/hello-world-job.yaml>`__.
@@ -530,9 +520,8 @@ Detailed usage can be found `here <../TrainingService/AMLMode.rst>`__.
 -
 * - dockerImage
-- ``str``
+- ``str``, optional
-- Name and tag of docker image to run the trials.
+- Default: ``"msranni/nni:latest"``. Name and tag of docker image to run the trials.
-default: ``"msranni/nni:latest"``
 * - subscriptionId
 - ``str``
@@ -568,17 +557,16 @@ Detailed usage can be found `here <../TrainingService/DlcMode.rst>`__.
 -
 * - type
-- ``str``
+- ``str``, optional
-- Job spec type.
+- Default: ``"Worker"``. Job spec type.
-default: ``"worker"``.
 * - image
 - ``str``
 - Name and tag of docker image to run the trials.
 * - jobType
-- ``str``
+- ``str``, optional
-- PAI-DLC training job type, ``"TFJob"`` or ``"PyTorchJob"``.
+- Default: ``"TFJob"``. PAI-DLC training job type, ``"TFJob"`` or ``"PyTorchJob"``.
 * - podCount
 - ``str``
@@ -698,7 +686,7 @@ azureBlobConfig
 - Azure storage account name.
 * - storageAccountKey
-- ``Optional[str]``
+- ``str``
 - Azure storage account key.
 * - containerName
...
This diff is collapsed.
@@ -26,3 +26,5 @@ gym
 tianshou
 https://download.pytorch.org/whl/cpu/torch-1.7.1%2Bcpu-cp37-cp37m-linux_x86_64.whl
 https://download.pytorch.org/whl/cpu/torchvision-0.8.2%2Bcpu-cp37-cp37m-linux_x86_64.whl
+pytorch-lightning
+onnx
+import sys
 from tqdm import tqdm
 import torch
@@ -5,7 +6,8 @@ from torchvision import datasets, transforms
 from nni.algorithms.compression.v2.pytorch.pruning import AGPPruner
-from examples.model_compress.models.cifar10.vgg import VGG
+sys.path.append('../../models')
+from cifar10.vgg import VGG
 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
...
+import sys
 from tqdm import tqdm
 import torch
@@ -7,7 +8,8 @@ from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner
 from nni.algorithms.compression.v2.pytorch.pruning.tools import AGPTaskGenerator
 from nni.algorithms.compression.v2.pytorch.pruning.basic_scheduler import PruningScheduler
-from examples.model_compress.models.cifar10.vgg import VGG
+sys.path.append('../../models')
+from cifar10.vgg import VGG
 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
@@ -90,6 +92,8 @@ if __name__ == '__main__':
 # or the result with the highest score (given by evaluator) will be the best result.
 # scheduler = PruningScheduler(pruner, task_generator, finetuner=finetuner, speed_up=True, dummy_input=dummy_input, evaluator=evaluator)
-scheduler = PruningScheduler(pruner, task_generator, finetuner=finetuner, speed_up=True, dummy_input=dummy_input, evaluator=None)
+scheduler = PruningScheduler(pruner, task_generator, finetuner=finetuner, speed_up=True, dummy_input=dummy_input, evaluator=None, reset_weight=False)
 scheduler.compress()
+_, model, masks, _, _ = scheduler.get_best_result()
+import sys
 from tqdm import tqdm
 import torch
@@ -6,7 +7,8 @@ from torchvision import datasets, transforms
 from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner
 from nni.compression.pytorch.speedup import ModelSpeedup
-from examples.model_compress.models.cifar10.vgg import VGG
+sys.path.append('../../models')
+from cifar10.vgg import VGG
 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
@@ -72,7 +74,7 @@ if __name__ == '__main__':
 evaluator(model)
 pruner._unwrap_model()
-ModelSpeedup(model, dummy_input=torch.rand(10, 3, 32, 32).to(device), masks_file='simple_masks.pth').speedup_model()
+ModelSpeedup(model, dummy_input=torch.rand(10, 3, 32, 32).to(device), masks_file=masks).speedup_model()
 print('\nThe accuracy after speed up:')
 evaluator(model)
...