Unverified commit fb0c2734 authored by Yuge Zhang, committed by GitHub

i18n toolchain based on sphinx-intl (#4759)

parent f5b89bb6
......@@ -15,6 +15,7 @@ sphinx >= 4.5
sphinx-argparse-nni >= 0.4.0
sphinx-copybutton
sphinx-gallery
sphinx-intl
sphinx-tabs
sphinxcontrib-bibtex
git+https://github.com/bashtage/sphinx-material@6e0ef82#egg=sphinx_material
......@@ -8,3 +8,6 @@ _build/
# auto-generated reference table
_modules/
# Compiled (machine-object) translation files
*.mo
......@@ -11,6 +11,11 @@ BUILDDIR = build
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
# Build message catalogs for translation
i18n:
@$(SPHINXBUILD) -M getpartialtext "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
sphinx-intl update -p "$(BUILDDIR)/getpartialtext" -d "$(SOURCEDIR)/locales" -l zh
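For illustration, the catalog update that ``sphinx-intl update`` performs can be sketched in plain Python: newly extracted message ids are merged into the existing per-language catalog, keeping translations that still apply and adding empty entries for new messages. This is a hypothetical simplification (``merge_catalog`` is not part of this commit, and the real tool keeps obsolete messages commented with ``#~`` rather than dropping them):

```python
def merge_catalog(existing, extracted_msgids):
    """Return a merged catalog dict mapping msgid -> msgstr.

    Hypothetical sketch of sphinx-intl's update step: translations for
    unchanged msgids survive; new msgids start untranslated (empty msgstr).
    """
    merged = {}
    for msgid in extracted_msgids:
        merged[msgid] = existing.get(msgid, "")
    return merged

# Existing zh catalog with one translation that is still valid and one
# whose source text no longer exists.
old = {"Overview": "概述", "Removed section": "已删除"}
# msgids freshly extracted by the getpartialtext builder.
new_ids = ["Overview", "Quickstart"]

catalog = merge_catalog(old, new_ids)
# "Overview" keeps its translation, "Quickstart" is new and untranslated.
```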
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
......
"""
Basically the same as the
`sphinx gettext builder <https://www.sphinx-doc.org/en/master/_modules/sphinx/builders/gettext.html>`_,
but only extracts messages from documents in a whitelist.
"""
import re
from docutils import nodes
from sphinx.application import Sphinx
from sphinx.builders.gettext import MessageCatalogBuilder
class PartialMessageCatalogBuilder(MessageCatalogBuilder):
name = 'getpartialtext'
def init(self):
super().init()
self.whitelist_docs = [re.compile(x) for x in self.config.gettext_documents]
def write_doc(self, docname: str, doctree: nodes.document) -> None:
for doc_re in self.whitelist_docs:
if doc_re.match(docname):
return super().write_doc(docname, doctree)
def setup(app: Sphinx):
app.add_builder(PartialMessageCatalogBuilder)
app.add_config_value('gettext_documents', [], 'gettext')
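The gating in ``write_doc`` above can be reproduced outside Sphinx. This standalone sketch (with a shortened, illustrative whitelist) shows the same logic: a docname is extracted only if it matches at least one configured pattern:

```python
import re

# Illustrative subset of a gettext_documents whitelist.
gettext_documents = [r'^index$', r'^(nas|hpo|compression)/overview$']
whitelist = [re.compile(p) for p in gettext_documents]

def should_extract(docname):
    # Mirrors PartialMessageCatalogBuilder.write_doc: process the document
    # only when some whitelist pattern matches its docname.
    return any(p.match(docname) for p in whitelist)

print(should_extract('nas/overview'))   # True
print(should_extract('nas/benchmark'))  # False
```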
......@@ -61,6 +61,7 @@ extensions = [
# Custom extensions in extension/ folder.
'tutorial_links', # this has to be after sphinx-gallery
'getpartialtext',
'inplace_translation',
'cardlinkitem',
'codesnippetcard',
......@@ -186,6 +187,18 @@ master_doc = 'index'
# Usually you set "language" from the command line for these cases.
language = None
# Translation related settings
locale_dirs = ['locales']
# Documents that require translation: https://github.com/microsoft/nni/issues/4298
gettext_documents = [
r'^index$',
r'^quickstart$',
r'^installation$',
r'^(nas|hpo|compression)/overview$',
r'^tutorials/(hello_nas|pruning_quick_start_mnist|hpo_quickstart_pytorch/main)$',
]
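Because the whitelist uses ``re.match``, the explicit ``$`` anchors matter: without them, ``^index`` would also match a hypothetical docname like ``index_old``. A quick check of the patterns against sample docnames (the docnames here are only illustrative):

```python
import re

# The whitelist from conf.py above.
gettext_documents = [
    r'^index$',
    r'^quickstart$',
    r'^installation$',
    r'^(nas|hpo|compression)/overview$',
    r'^tutorials/(hello_nas|pruning_quick_start_mnist|hpo_quickstart_pytorch/main)$',
]
patterns = [re.compile(p) for p in gettext_documents]

def is_translated(docname):
    return any(p.match(docname) for p in patterns)

for doc in ['index', 'hpo/overview', 'tutorials/hello_nas',
            'nas/exploration_strategy', 'index_old']:
    print(doc, is_translated(doc))
```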
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
......
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/compression/overview.rst:2
msgid "Overview of NNI Model Compression"
msgstr ""
#: ../../source/compression/overview.rst:4
msgid ""
"Deep neural networks (DNNs) have achieved great success in many tasks "
"like computer vision, natural language processing, and speech "
"processing. However, typical neural networks are both computationally "
"expensive and energy-intensive, and can be difficult to deploy on "
"devices with low computation resources or strict latency requirements. "
"Therefore, a natural thought is to perform model compression to reduce "
"model size and accelerate model training/inference without significantly "
"losing performance. Model compression techniques can be divided into two "
"categories: pruning and quantization. Pruning methods explore the "
"redundancy in the model weights and try to remove/prune the redundant "
"and uncritical weights. Quantization refers to compressing models by "
"reducing the number of bits required to represent weights or "
"activations. We further elaborate on the two methods, pruning and "
"quantization, in the following chapters. Besides, the figure below "
"visualizes the difference between these two methods."
msgstr ""
#: ../../source/compression/overview.rst:19
msgid ""
"NNI provides an easy-to-use toolkit to help users design and use model "
"pruning and quantization algorithms. For users to compress their models, "
"they only need to add several lines to their code. Several popular model "
"compression algorithms are built into NNI. On the other hand, users can "
"easily customize new compression algorithms using NNI’s interface."
msgstr ""
#: ../../source/compression/overview.rst:24
msgid "There are several core features supported by NNI model compression:"
msgstr ""
#: ../../source/compression/overview.rst:26
msgid "Support many popular pruning and quantization algorithms."
msgstr ""
#: ../../source/compression/overview.rst:27
msgid ""
"Automate model pruning and quantization process with state-of-the-art "
"strategies and NNI's auto tuning power."
msgstr ""
#: ../../source/compression/overview.rst:28
msgid ""
"Speed up a compressed model so that it has lower inference latency and a "
"smaller size."
msgstr ""
#: ../../source/compression/overview.rst:29
msgid ""
"Provide friendly and easy-to-use compression utilities for users to dive "
"into the compression process and results."
msgstr ""
#: ../../source/compression/overview.rst:30
msgid "Concise interface for users to customize their own compression algorithms."
msgstr ""
#: ../../source/compression/overview.rst:34
msgid "Compression Pipeline"
msgstr ""
#: ../../source/compression/overview.rst:42
msgid ""
"The overall compression pipeline in NNI is shown above. For compressing a"
" pretrained model, pruning and quantization can be used alone or in "
"combination. If users want to apply both, a sequential mode is "
"recommended as common practice."
msgstr ""
#: ../../source/compression/overview.rst:46
msgid ""
"Note that NNI pruners and quantizers are not meant to physically compact "
"the model; they simulate the compression effect. In contrast, the NNI "
"speedup tool can truly compress the model by changing the network "
"architecture and therefore reduce latency. To obtain a truly compact "
"model, users should conduct :doc:`pruning speedup "
"<../tutorials/pruning_speedup>` or :doc:`quantization speedup "
"<../tutorials/quantization_speedup>`. The interface and APIs are unified "
"for both PyTorch and TensorFlow. Currently only the PyTorch version is "
"supported; the TensorFlow version will be supported in the future."
msgstr ""
#: ../../source/compression/overview.rst:52
msgid "Model Speedup"
msgstr ""
#: ../../source/compression/overview.rst:54
msgid ""
"The final goal of model compression is to reduce inference latency and "
"model size. However, existing model compression algorithms mainly use "
"simulation to check the performance (e.g., accuracy) of compressed model."
" For example, using masks for pruning algorithms, and storing quantized "
"values still in float32 for quantization algorithms. Given the output "
"masks and quantization bits produced by those algorithms, NNI can really "
"speed up the model."
msgstr ""
#: ../../source/compression/overview.rst:59
msgid "The following figure shows how NNI prunes and speeds up your models."
msgstr ""
#: ../../source/compression/overview.rst:67
msgid ""
"The detailed tutorial of Speedup Model with Mask can be found :doc:`here "
"<../tutorials/pruning_speedup>`. The detailed tutorial of Speedup Model "
"with Calibration Config can be found :doc:`here "
"<../tutorials/quantization_speedup>`."
msgstr ""
#: ../../source/compression/overview.rst:72
msgid ""
"NNI's model pruning framework has been upgraded to a more powerful "
"version (named pruning v2 before nni v2.6). The old version (`named "
"pruning before nni v2.6 "
"<https://nni.readthedocs.io/en/v2.6/Compression/pruning.html>`_) will be "
"out of maintenance. If for some reason you have to use the old pruning, "
"v2.6 is the last NNI version to support it."
msgstr ""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/hpo/overview.rst:2
msgid "Hyperparameter Optimization Overview"
msgstr ""
#: ../../source/hpo/overview.rst:4
msgid ""
"Auto hyperparameter optimization (HPO), or auto tuning, is one of the key"
" features of NNI."
msgstr ""
#: ../../source/hpo/overview.rst:7
msgid "Introduction to HPO"
msgstr ""
#: ../../source/hpo/overview.rst:9
msgid ""
"In machine learning, a hyperparameter is a parameter whose value is used "
"to control the learning process, and HPO is the problem of choosing a set of "
"optimal hyperparameters for a learning algorithm. (`From "
"<https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)>`__ "
"`Wikipedia "
"<https://en.wikipedia.org/wiki/Hyperparameter_optimization>`__)"
msgstr ""
#: ../../source/hpo/overview.rst:14
msgid "The following code snippet demonstrates a naive HPO process:"
msgstr ""
#: ../../source/hpo/overview.rst:34
msgid ""
"You may have noticed that the example will train 4×10×3=120 models in "
"total. Since this consumes a lot of computing resources, you may want to:"
msgstr ""
#: ../../source/hpo/overview.rst:37
msgid ""
":ref:`Find the best hyperparameter set in fewer iterations. <hpo-"
"overview-tuners>`"
msgstr ""
#: ../../source/hpo/overview.rst:38
msgid ":ref:`Train the models on distributed platforms. <hpo-overview-platforms>`"
msgstr ""
#: ../../source/hpo/overview.rst:39
msgid ""
":ref:`Have a portal to monitor and control the process. <hpo-overview-"
"portal>`"
msgstr ""
#: ../../source/hpo/overview.rst:41
msgid "NNI will do them for you."
msgstr ""
#: ../../source/hpo/overview.rst:44
msgid "Key Features of NNI HPO"
msgstr ""
#: ../../source/hpo/overview.rst:49
msgid "Tuning Algorithms"
msgstr ""
#: ../../source/hpo/overview.rst:51
msgid ""
"NNI provides *tuners* to speed up the process of finding the best "
"hyperparameter set."
msgstr ""
#: ../../source/hpo/overview.rst:53
msgid ""
"A tuner, or a tuning algorithm, decides the order in which hyperparameter"
" sets are evaluated. Based on the results of historical hyperparameter "
"sets, an efficient tuner can predict where the best hyperparameters are "
"likely to be located, and find them in far fewer attempts."
msgstr ""
#: ../../source/hpo/overview.rst:57
msgid ""
"The naive example above evaluates all possible hyperparameter sets in "
"constant order, ignoring the historical results. This is the brute-force "
"tuning algorithm called *grid search*."
msgstr ""
#: ../../source/hpo/overview.rst:60
msgid ""
"NNI has out-of-the-box support for a variety of popular tuners. It "
"includes naive algorithms like random search and grid search, Bayesian-"
"based algorithms like TPE and SMAC, RL-based algorithms like PPO, and "
"many more."
msgstr ""
#: ../../source/hpo/overview.rst:64
msgid "Main article: :doc:`tuners`"
msgstr ""
#: ../../source/hpo/overview.rst:69
msgid "Training Platforms"
msgstr ""
#: ../../source/hpo/overview.rst:71
msgid ""
"If you are not interested in distributed platforms, you can simply run "
"NNI HPO on your current computer, just like any ordinary Python library."
msgstr ""
#: ../../source/hpo/overview.rst:74
msgid ""
"And when you want to leverage more computing resources, NNI provides "
"built-in integration for training platforms from simple on-premise "
"servers to scalable commercial clouds."
msgstr ""
#: ../../source/hpo/overview.rst:77
msgid ""
"With NNI you can write one piece of model code, and concurrently evaluate"
" hyperparameter sets on local machine, SSH servers, Kubernetes-based "
"clusters, AzureML service, and much more."
msgstr ""
#: ../../source/hpo/overview.rst:80
msgid "Main article: :doc:`/experiment/training_service/overview`"
msgstr ""
#: ../../source/hpo/overview.rst:85
msgid "Web Portal"
msgstr ""
#: ../../source/hpo/overview.rst:87
msgid ""
"NNI provides a web portal to monitor training progress, to visualize "
"hyperparameter performance, to manually customize hyperparameters, and to"
" manage multiple HPO experiments."
msgstr ""
#: ../../source/hpo/overview.rst:90
msgid "Main article: :doc:`/experiment/web_portal/web_portal`"
msgstr ""
#: ../../source/hpo/overview.rst:96
msgid "Tutorials"
msgstr ""
#: ../../source/hpo/overview.rst:98
msgid ""
"To start using NNI HPO, choose the quickstart tutorial of your favorite "
"framework:"
msgstr ""
#: ../../source/hpo/overview.rst:100
msgid ":doc:`PyTorch tutorial </tutorials/hpo_quickstart_pytorch/main>`"
msgstr ""
#: ../../source/hpo/overview.rst:101
msgid ":doc:`TensorFlow tutorial </tutorials/hpo_quickstart_tensorflow/main>`"
msgstr ""
#: ../../source/hpo/overview.rst:104
msgid "Extra Features"
msgstr ""
#: ../../source/hpo/overview.rst:106
msgid ""
"After you are familiar with basic usage, you can explore more HPO "
"features:"
msgstr ""
#: ../../source/hpo/overview.rst:108
msgid ""
":doc:`Use command line tool to create and manage experiments (nnictl) "
"</reference/nnictl>`"
msgstr ""
#: ../../source/hpo/overview.rst:109
msgid ":doc:`Early stop non-optimal models (assessor) <assessors>`"
msgstr ""
#: ../../source/hpo/overview.rst:110
msgid ":doc:`TensorBoard integration </experiment/web_portal/tensorboard>`"
msgstr ""
#: ../../source/hpo/overview.rst:111
msgid ":doc:`Implement your own algorithm <custom_algorithm>`"
msgstr ""
#: ../../source/hpo/overview.rst:112
msgid ":doc:`Benchmark tuners <hpo_benchmark>`"
msgstr ""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-12 17:35+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/index.rst:4 ../../source/index.rst:52
msgid "Get Started"
msgstr ""
#: ../../source/index.rst:12
msgid "Hyperparameter Optimization"
msgstr ""
#: ../../source/index.rst:12
msgid "Model Compression"
msgstr ""
#: ../../source/index.rst:12
msgid "User Guide"
msgstr ""
#: ../../source/index.rst:23
msgid "Python API"
msgstr ""
#: ../../source/index.rst:23
msgid "References"
msgstr ""
#: ../../source/index.rst:32
msgid "Misc"
msgstr ""
#: ../../source/index.rst:2
msgid "NNI Documentation"
msgstr ""
#: ../../source/index.rst:44
msgid ""
"**NNI (Neural Network Intelligence)** is a lightweight but powerful "
"toolkit to help users **automate**:"
msgstr ""
#: ../../source/index.rst:46
msgid ":doc:`Hyperparameter Optimization </hpo/overview>`"
msgstr ""
#: ../../source/index.rst:47
msgid ":doc:`Neural Architecture Search </nas/overview>`"
msgstr ""
#: ../../source/index.rst:48
msgid ":doc:`Model Compression </compression/overview>`"
msgstr ""
#: ../../source/index.rst:49
msgid ":doc:`Feature Engineering </feature_engineering/overview>`"
msgstr ""
#: ../../source/index.rst:54
msgid "To install the current release:"
msgstr ""
#: ../../source/index.rst:60
msgid ""
"See the :doc:`installation guide </installation>` if you need additional "
"help on installation."
msgstr ""
#: ../../source/index.rst:63
msgid "Try your first NNI experiment"
msgstr ""
#: ../../source/index.rst:65
msgid "To run your first NNI experiment:"
msgstr ""
#: ../../source/index.rst:71
msgid ""
"you need to have `PyTorch <https://pytorch.org/>`_ (as well as "
"`torchvision <https://pytorch.org/vision/stable/index.html>`_) installed "
"to run this experiment."
msgstr ""
#: ../../source/index.rst:73
msgid ""
"To start your journey now, please follow the :doc:`absolute quickstart of"
" NNI <quickstart>`!"
msgstr ""
#: ../../source/index.rst:76
msgid "Why choose NNI?"
msgstr ""
#: ../../source/index.rst:79
msgid "NNI makes AutoML techniques plug-and-play"
msgstr ""
#: ../../source/index.rst:223
msgid "NNI eases the effort to scale and manage AutoML experiments"
msgstr ""
#: ../../source/index.rst:231
msgid ""
"An AutoML experiment requires many trials to explore feasible and "
"potentially good-performing models. **Training service** aims to make the"
" tuning process easily scalable on distributed platforms. It provides a"
" unified user experience for diverse computation resources (e.g., local "
"machine, remote servers, AKS). Currently, NNI supports **more than 9** "
"kinds of training services."
msgstr ""
#: ../../source/index.rst:242
msgid ""
"The web portal visualizes the tuning process, exposing the ability to "
"inspect, monitor and control the experiment."
msgstr ""
#: ../../source/index.rst:253
msgid ""
"Tuning a DNN model often requires more than one experiment. Users might"
" try different tuning algorithms, fine-tune their search space, or switch"
" to another training service. **Experiment management** provides the "
"power to aggregate and compare tuning results from multiple experiments, "
"so that the tuning workflow becomes clean and organized."
msgstr ""
#: ../../source/index.rst:259
msgid "Get Support and Contribute Back"
msgstr ""
#: ../../source/index.rst:261
msgid ""
"NNI is maintained on the `NNI GitHub repository "
"<https://github.com/microsoft/nni>`_. We collect feedback and new "
"proposals/ideas on GitHub. You can:"
msgstr ""
#: ../../source/index.rst:263
msgid ""
"Open a `GitHub issue <https://github.com/microsoft/nni/issues>`_ for bugs"
" and feature requests."
msgstr ""
#: ../../source/index.rst:264
msgid ""
"Open a `pull request <https://github.com/microsoft/nni/pulls>`_ to "
"contribute code (make sure to read the :doc:`contribution guide "
"</contribution>` before doing this)."
msgstr ""
#: ../../source/index.rst:265
msgid ""
"Participate in `NNI Discussion "
"<https://github.com/microsoft/nni/discussions>`_ for general questions "
"and new ideas."
msgstr ""
#: ../../source/index.rst:266
msgid "Join the following IM groups."
msgstr ""
#: ../../source/index.rst:272
msgid "Gitter"
msgstr ""
#: ../../source/index.rst:273
msgid "WeChat"
msgstr ""
#: ../../source/index.rst:280
msgid "Citing NNI"
msgstr ""
#: ../../source/index.rst:282
msgid ""
"If you use NNI in a scientific publication, please consider citing NNI in"
" your references."
msgstr ""
#: ../../source/index.rst:284
msgid ""
"Microsoft. Neural Network Intelligence (version |release|). "
"https://github.com/microsoft/nni"
msgstr ""
#: ../../source/index.rst:286
msgid ""
"Bibtex entry (please replace the version with the particular version you "
"are using): ::"
msgstr ""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/installation.rst:2
msgid "Install NNI"
msgstr ""
#: ../../source/installation.rst:4
msgid ""
"NNI requires Python >= 3.7. It is tested and supported on Ubuntu >= "
"18.04, Windows 10 >= 21H2, and macOS >= 11."
msgstr ""
#: ../../source/installation.rst:8
msgid "There are 3 ways to install NNI:"
msgstr ""
#: ../../source/installation.rst:10
msgid ":ref:`Using pip <installation-pip>`"
msgstr ""
#: ../../source/installation.rst:11
msgid ":ref:`Build source code <installation-source>`"
msgstr ""
#: ../../source/installation.rst:12
msgid ":ref:`Using Docker <installation-docker>`"
msgstr ""
#: ../../source/installation.rst:17
msgid "Using pip"
msgstr ""
#: ../../source/installation.rst:19
msgid ""
"NNI provides official packages for x86-64 CPUs. They can be installed "
"with pip:"
msgstr ""
#: ../../source/installation.rst:25
msgid "Or to upgrade to the latest version:"
msgstr ""
#: ../../source/installation.rst:31
msgid "You can check the installation with:"
msgstr ""
#: ../../source/installation.rst:37
msgid ""
"On Linux systems without Conda, you may encounter a ``bash: nnictl: "
"command not found`` error. In this case you need to add the pip script "
"directory to ``PATH``:"
msgstr ""
#: ../../source/installation.rst:48
msgid "Installing from Source Code"
msgstr ""
#: ../../source/installation.rst:50
msgid "NNI hosts source code on `GitHub <https://github.com/microsoft/nni>`__."
msgstr ""
#: ../../source/installation.rst:52
msgid ""
"NNI has experimental support for ARM64 CPUs, including Apple M1, but it "
"requires installing from source code."
msgstr ""
#: ../../source/installation.rst:55
msgid "See :doc:`/notes/build_from_source`."
msgstr ""
#: ../../source/installation.rst:60
msgid "Using Docker"
msgstr ""
#: ../../source/installation.rst:62
msgid ""
"NNI provides an official Docker image on `Docker Hub "
"<https://hub.docker.com/r/msranni/nni>`__."
msgstr ""
#: ../../source/installation.rst:69
msgid "Installing Extra Dependencies"
msgstr ""
#: ../../source/installation.rst:71
msgid ""
"Some built-in algorithms of NNI require extra packages. Use ``nni"
"[<algorithm-name>]`` to install their dependencies."
msgstr ""
#: ../../source/installation.rst:74
msgid ""
"For example, to install the dependencies of the :class:`DNGO "
"tuner <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`:"
msgstr ""
#: ../../source/installation.rst:80
msgid ""
"This command will not reinstall NNI itself, even if it was installed in "
"development mode."
msgstr ""
#: ../../source/installation.rst:82
msgid "Alternatively, you may install all extra dependencies at once:"
msgstr ""
#: ../../source/installation.rst:88
msgid ""
"**NOTE**: SMAC tuner depends on swig3, which requires a manual downgrade "
"on Ubuntu:"
msgstr ""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:17+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/nas/overview.rst:2
msgid "Overview"
msgstr ""
#: ../../source/nas/overview.rst:4
msgid ""
"NNI's latest NAS support is based on the Retiarii framework. Users who "
"are still on the `early version using NNI NAS v1.0 "
"<https://nni.readthedocs.io/en/v2.2/nas.html>`__ should migrate their "
"work to Retiarii as soon as possible. We plan to remove the legacy NAS "
"framework in the next few releases."
msgstr ""
#: ../../source/nas/overview.rst:6
msgid ""
"PyTorch is the **only supported framework on Retiarii**. Inquiries about "
"NAS support on TensorFlow are tracked in `this discussion "
"<https://github.com/microsoft/nni/discussions/4605>`__. If you intend to "
"run NAS with DL frameworks other than PyTorch and TensorFlow, please "
"`open new issues <https://github.com/microsoft/nni/issues>`__ to let us "
"know."
msgstr ""
#: ../../source/nas/overview.rst:9
msgid "Basics"
msgstr ""
#: ../../source/nas/overview.rst:11
msgid ""
"Automatic neural architecture search is playing an increasingly important"
" role in finding better models. Recent research has proven the "
"feasibility of automatic NAS and has led to models that beat many "
"manually designed and tuned models. Representative works include `NASNet "
"<https://arxiv.org/abs/1707.07012>`__, `ENAS "
"<https://arxiv.org/abs/1802.03268>`__, `DARTS "
"<https://arxiv.org/abs/1806.09055>`__, `Network Morphism "
"<https://arxiv.org/abs/1806.10282>`__, and `Evolution "
"<https://arxiv.org/abs/1703.01041>`__. In addition, new innovations "
"continue to emerge."
msgstr ""
#: ../../source/nas/overview.rst:13
msgid ""
"At a high level, solving any particular task with neural "
"architecture search typically requires: search space design, search "
"strategy selection, and performance evaluation. The three components work"
" together with the following loop (from the famous `NAS survey "
"<https://arxiv.org/abs/1808.05377>`__):"
msgstr ""
#: ../../source/nas/overview.rst:19
msgid "In this figure:"
msgstr ""
#: ../../source/nas/overview.rst:21
msgid ""
"*Model search space* means a set of models from which the best model is "
" explored/searched. Sometimes we use *search space* or *model space* for "
"short."
msgstr ""
#: ../../source/nas/overview.rst:22
msgid ""
"*Exploration strategy* is the algorithm that is used to explore a model "
"search space. Sometimes we also call it *search strategy*."
msgstr ""
#: ../../source/nas/overview.rst:23
msgid ""
"*Model evaluator* is responsible for training a model and evaluating its "
"performance."
msgstr ""
#: ../../source/nas/overview.rst:25
msgid ""
"The process is similar to :doc:`Hyperparameter Optimization "
"</hpo/index>`, except that the target is the best architecture rather "
"than the best hyperparameters. Concretely, an exploration strategy selects an "
"architecture from a predefined search space. The architecture is passed "
"to a performance evaluation to get a score, which represents how well "
"this architecture performs on a particular task. This process is repeated"
" until the search process is able to find the best architecture."
msgstr ""
#: ../../source/nas/overview.rst:28
msgid "Key Features"
msgstr ""
#: ../../source/nas/overview.rst:30
msgid ""
"The current NAS framework in NNI is powered by the research of `Retiarii:"
" A Deep Learning Exploratory-Training Framework "
"<https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__, where "
"we highlight the following features:"
msgstr ""
#: ../../source/nas/overview.rst:32
msgid ":doc:`Simple APIs to construct search space easily <construct_space>`"
msgstr ""
#: ../../source/nas/overview.rst:33
msgid ":doc:`SOTA NAS algorithms to explore search space <exploration_strategy>`"
msgstr ""
#: ../../source/nas/overview.rst:34
msgid ""
":doc:`Experiment backend support to scale up experiments on large-scale "
"AI platforms </experiment/overview>`"
msgstr ""
#: ../../source/nas/overview.rst:37
msgid "Why NAS with NNI"
msgstr ""
#: ../../source/nas/overview.rst:39
msgid ""
"We list three perspectives where NAS can be particularly challenging "
"without NNI. NNI provides solutions to reduce users' engineering effort "
"when they want to try NAS techniques in their own scenario."
msgstr ""
#: ../../source/nas/overview.rst:42
msgid "Search Space Design"
msgstr ""
#: ../../source/nas/overview.rst:44
msgid ""
"The search space defines which architectures can be represented in "
"principle. Incorporating prior knowledge about typical properties of "
"architectures well-suited for a task can reduce the size of the search "
"space and simplify the search. However, this also introduces a human "
"bias, which may prevent finding novel architectural building blocks that "
"go beyond the current human knowledge. Search space design can be very "
"challenging for beginners, who might not possess the experience to "
"balance the richness and simplicity."
msgstr ""
#: ../../source/nas/overview.rst:46
msgid ""
"In NNI, we provide a wide range of APIs to build the search space. There "
"are :doc:`high-level APIs <construct_space>`, which enable incorporating "
"human knowledge about what makes a good architecture or search space. "
"There are also :doc:`low-level APIs <mutator>`, which provide a list of "
"primitives to construct a network operator by operator."
msgstr ""
#: ../../source/nas/overview.rst:49
msgid "Exploration strategy"
msgstr ""
#: ../../source/nas/overview.rst:51
msgid ""
"The exploration strategy details how to explore the search space (which "
"is often exponentially large). It encompasses the classical exploration-"
"exploitation trade-off since, on the one hand, it is desirable to find "
"well-performing architectures quickly, while on the other hand, premature"
" convergence to a region of suboptimal architectures should be avoided. "
"The \"best\" exploration strategy for a particular scenario is usually "
"found via trial-and-error. As many state-of-the-art strategies are "
"implemented with their own code-base, it becomes very troublesome to "
"switch from one to another."
msgstr ""
#: ../../source/nas/overview.rst:53
msgid ""
"In NNI, we have also provided :doc:`a list of strategies "
" <exploration_strategy>`. Some of them are powerful yet time-consuming, "
"while others might be suboptimal but really efficient. Given that all "
"strategies are implemented with a unified interface, users can always "
"find one that matches their need."
msgstr ""
#: ../../source/nas/overview.rst:56
msgid "Performance estimation"
msgstr ""
#: ../../source/nas/overview.rst:58
msgid ""
"The objective of NAS is typically to find architectures that achieve high"
" predictive performance on unseen data. Performance estimation refers to "
"the process of estimating this performance. The problem with performance "
"estimation is mostly its scalability, i.e., how can I run and manage "
"multiple trials simultaneously."
msgstr ""
#: ../../source/nas/overview.rst:60
msgid ""
"In NNI, this process is standardized with the :doc:`evaluator "
"<evaluator>`, which is responsible for estimating a model's performance. "
"The choices of evaluators range from the simplest option, e.g., "
"performing a standard training and validation of the architecture on "
"data, to complex configurations and implementations. Evaluators are run "
"in *trials*, and trials can be spawned onto distributed platforms with "
"our powerful :doc:`training service "
"</experiment/training_service/overview>`."
msgstr ""
#: ../../source/nas/overview.rst:63
msgid "Tutorials"
msgstr ""
#: ../../source/nas/overview.rst:65
msgid ""
"To start using NNI NAS framework, we recommend at least going through the"
" following tutorials:"
msgstr ""
#: ../../source/nas/overview.rst:67
msgid ":doc:`Quickstart </tutorials/hello_nas>`"
msgstr ""
#: ../../source/nas/overview.rst:68
msgid ":doc:`construct_space`"
msgstr ""
#: ../../source/nas/overview.rst:69
msgid ":doc:`exploration_strategy`"
msgstr ""
#: ../../source/nas/overview.rst:70
msgid ":doc:`evaluator`"
msgstr ""
#: ../../source/nas/overview.rst:73
msgid "Resources"
msgstr ""
#: ../../source/nas/overview.rst:75
msgid ""
"The following articles will help with a better understanding of the "
"current state of the art in NAS:"
msgstr ""
#: ../../source/nas/overview.rst:77
msgid ""
"`Neural Architecture Search: A Survey "
"<https://arxiv.org/abs/1808.05377>`__"
msgstr ""
#: ../../source/nas/overview.rst:78
msgid ""
"`A Comprehensive Survey of Neural Architecture Search: Challenges and "
"Solutions <https://arxiv.org/abs/2006.02903>`__"
msgstr ""
#~ msgid "Basics"
#~ msgstr ""
#~ msgid "Basic Concepts"
#~ msgstr ""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/quickstart.rst:2
msgid "Quickstart"
msgstr ""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-12 17:35+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../templates/globaltoc.html:6
msgid "Overview"
msgstr ""
......@@ -305,16 +305,31 @@ To contribute a new tutorial, here are the steps to follow:
* `How to add images to notebooks <https://sphinx-gallery.github.io/stable/configuration.html#adding-images-to-notebooks>`_.
* `How to reference a tutorial in documentation <https://sphinx-gallery.github.io/stable/advanced.html#cross-referencing>`_.
Translation (i18n)
^^^^^^^^^^^^^^^^^^
We only maintain `a partial set of documents <https://github.com/microsoft/nni/issues/4298>`_ with translation. Currently, translation is provided in Simplified Chinese only.
* If you want to update the translation of an existing document, please update the messages in ``docs/source/locales``.
* If you have updated a translated English document, we require the corresponding translated documents to be updated (at least the update should be triggered). Please follow these steps:
  1. Run ``make i18n`` under the ``docs`` folder.
  2. Verify that there are new messages in ``docs/source/locales``.
  3. Translate the messages.
* If you intend to translate a new document:
  1. Update ``docs/source/conf.py`` so that ``gettext_documents`` includes your document (probably by adding a new regular expression).
  2. Follow the steps above.
To build the translated documentation (for example, the Chinese documentation), run:
.. code-block:: bash
   make -e SPHINXOPTS="-D language='zh'" html
If you encounter problems with translation builds, try removing the previous build via ``rm -r docs/build/``.
.. _code-of-conduct:
......
......@@ -38,8 +38,11 @@ stages:
displayName: Sphinx sanity check (Chinese)
- script: |
set -e
cd docs
python tools/chineselink.py check
rm -rf build
make i18n
git diff --exit-code source/locales
displayName: Translation up-to-date
- script: |
......