# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-20 05:50+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"

#: ../../source/nas/overview.rst:2
msgid "Overview"
msgstr ""

#: ../../source/nas/overview.rst:4
msgid ""
"NNI's latest NAS supports are all based on Retiarii Framework, users who "
"are still on `early version using NNI NAS v1.0 "
"<https://nni.readthedocs.io/en/v2.2/nas.html>`__ shall migrate your work "
"to Retiarii as soon as possible. We plan to remove the legacy NAS "
"framework in the next few releases."
msgstr ""

#: ../../source/nas/overview.rst:6
msgid ""
"PyTorch is the **only supported framework on Retiarii**. Inquiries of NAS"
" support on Tensorflow is in `this discussion "
"<https://github.com/microsoft/nni/discussions/4605>`__. If you intend to "
"run NAS with DL frameworks other than PyTorch and Tensorflow, please "
"`open new issues <https://github.com/microsoft/nni/issues>`__ to let us "
"know."
msgstr ""

#: ../../source/nas/overview.rst:9
msgid "Basics"
msgstr ""

#: ../../source/nas/overview.rst:11
msgid ""
"Automatic neural architecture search is playing an increasingly important"
" role in finding better models. Recent research has proven the "
"feasibility of automatic NAS and has led to models that beat many "
"manually designed and tuned models. Representative works include `NASNet "
"<https://arxiv.org/abs/1707.07012>`__, `ENAS "
"<https://arxiv.org/abs/1802.03268>`__, `DARTS "
"<https://arxiv.org/abs/1806.09055>`__, `Network Morphism "
"<https://arxiv.org/abs/1806.10282>`__, and `Evolution "
"<https://arxiv.org/abs/1703.01041>`__. In addition, new innovations "
"continue to emerge."
msgstr ""

#: ../../source/nas/overview.rst:13
msgid ""
"High-level speaking, aiming to solve any particular task with neural "
"architecture search typically requires: search space design, search "
"strategy selection, and performance evaluation. The three components work"
" together with the following loop (from the famous `NAS survey "
"<https://arxiv.org/abs/1808.05377>`__):"
msgstr ""

#: ../../source/nas/overview.rst:19
msgid "In this figure:"
msgstr ""

#: ../../source/nas/overview.rst:21
msgid ""
"*Model search space*  means a set of models from which the best model is "
"explored/searched. Sometimes we use *search space* or *model space* in "
"short."
msgstr ""

#: ../../source/nas/overview.rst:22
msgid ""
"*Exploration strategy* is the algorithm that is used to explore a model "
"search space. Sometimes we also call it *search strategy*."
msgstr ""

#: ../../source/nas/overview.rst:23
msgid ""
"*Model evaluator* is responsible for training a model and evaluating its "
"performance."
msgstr ""

#: ../../source/nas/overview.rst:25
msgid ""
"The process is similar to :doc:`Hyperparameter Optimization "
"</hpo/overview>`, except that the target is the best architecture rather "
"than hyperparameter. Concretely, an exploration strategy selects an "
"architecture from a predefined search space. The architecture is passed "
"to a performance evaluation to get a score, which represents how well "
"this architecture performs on a particular task. This process is repeated"
" until the search process is able to find the best architecture."
msgstr ""

#: ../../source/nas/overview.rst:28
msgid "Key Features"
msgstr ""

#: ../../source/nas/overview.rst:30
msgid ""
"The current NAS framework in NNI is powered by the research of `Retiarii:"
" A Deep Learning Exploratory-Training Framework "
"<https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__, where "
"we highlight the following features:"
msgstr ""

#: ../../source/nas/overview.rst:32
msgid ":doc:`Simple APIs to construct search space easily <construct_space>`"
msgstr ""

#: ../../source/nas/overview.rst:33
msgid ":doc:`SOTA NAS algorithms to explore search space <exploration_strategy>`"
msgstr ""

#: ../../source/nas/overview.rst:34
msgid ""
":doc:`Experiment backend support to scale up experiments on large-scale "
"AI platforms </experiment/overview>`"
msgstr ""

#: ../../source/nas/overview.rst:37
msgid "Why NAS with NNI"
msgstr ""

#: ../../source/nas/overview.rst:39
msgid ""
"We list out the three perspectives where NAS can be particularly "
"challegning without NNI. NNI provides solutions to relieve users' "
"engineering effort when they want to try NAS techniques in their own "
"scenario."
msgstr ""

#: ../../source/nas/overview.rst:42
msgid "Search Space Design"
msgstr ""

#: ../../source/nas/overview.rst:44
msgid ""
"The search space defines which architectures can be represented in "
"principle. Incorporating prior knowledge about typical properties of "
"architectures well-suited for a task can reduce the size of the search "
"space and simplify the search. However, this also introduces a human "
"bias, which may prevent finding novel architectural building blocks that "
"go beyond the current human knowledge. Search space design can be very "
"challenging for beginners, who might not possess the experience to "
"balance the richness and simplicity."
msgstr ""

#: ../../source/nas/overview.rst:46
msgid ""
"In NNI, we provide a wide range of APIs to build the search space. There "
"are :doc:`high-level APIs <construct_space>`, that enables the "
"possibility to incorporate human knowledge about what makes a good "
"architecture or search space. There are also :doc:`low-level APIs "
"<mutator>`, that is a list of primitives to construct a network from "
"operation to operation."
msgstr ""

#: ../../source/nas/overview.rst:49
msgid "Exploration strategy"
msgstr ""

#: ../../source/nas/overview.rst:51
msgid ""
"The exploration strategy details how to explore the search space (which "
"is often exponentially large). It encompasses the classical exploration-"
"exploitation trade-off since, on the one hand, it is desirable to find "
"well-performing architectures quickly, while on the other hand, premature"
" convergence to a region of suboptimal architectures should be avoided. "
"The \"best\" exploration strategy for a particular scenario is usually "
"found via trial-and-error. As many state-of-the-art strategies are "
"implemented with their own code-base, it becomes very troublesome to "
"switch from one to another."
msgstr ""

#: ../../source/nas/overview.rst:53
msgid ""
"In NNI, we have also provided :doc:`a list of strategies "
"<exploration_strategy>`. Some of them are powerful yet time consuming, "
"while others might be suboptimal but really efficient. Given that all "
"strategies are implemented with a unified interface, users can always "
"find one that matches their need."
msgstr ""

#: ../../source/nas/overview.rst:56
msgid "Performance estimation"
msgstr ""

#: ../../source/nas/overview.rst:58
msgid ""
"The objective of NAS is typically to find architectures that achieve high"
" predictive performance on unseen data. Performance estimation refers to "
"the process of estimating this performance. The problem with performance "
"estimation is mostly its scalability, i.e., how can I run and manage "
"multiple trials simultaneously."
msgstr ""

#: ../../source/nas/overview.rst:60
msgid ""
"In NNI, we standardize this process is implemented with :doc:`evaluator "
"<evaluator>`, which is responsible of estimating a model's performance. "
"NNI has quite a few built-in supports of evaluators, ranging from the "
"simplest option, e.g., to perform a standard training and validation of "
"the architecture on data, to complex configurations and implementations. "
"Evaluators are run in *trials*, where trials can be spawn onto "
"distributed platforms with our powerful :doc:`training service "
"</experiment/training_service/overview>`."
msgstr ""

#: ../../source/nas/overview.rst:63
msgid "Tutorials"
msgstr ""

#: ../../source/nas/overview.rst:65
msgid ""
"To start using NNI NAS framework, we recommend at least going through the"
" following tutorials:"
msgstr ""

#: ../../source/nas/overview.rst:67
msgid ":doc:`Quickstart </tutorials/hello_nas>`"
msgstr ""

#: ../../source/nas/overview.rst:68
msgid ":doc:`construct_space`"
msgstr ""

#: ../../source/nas/overview.rst:69
msgid ":doc:`exploration_strategy`"
msgstr ""

#: ../../source/nas/overview.rst:70
msgid ":doc:`evaluator`"
msgstr ""

#: ../../source/nas/overview.rst:73
msgid "Resources"
msgstr ""

#: ../../source/nas/overview.rst:75
msgid ""
"The following articles will help with a better understanding of the "
"current arts of NAS:"
msgstr ""

#: ../../source/nas/overview.rst:77
msgid ""
"`Neural Architecture Search: A Survey "
"<https://arxiv.org/abs/1808.05377>`__"
msgstr ""

#: ../../source/nas/overview.rst:78
msgid ""
"`A Comprehensive Survey of Neural Architecture Search: Challenges and "
"Solutions <https://arxiv.org/abs/2006.02903>`__"
msgstr ""

#~ msgid "Basics"
#~ msgstr ""

#~ msgid "Basic Concepts"
#~ msgstr ""

#~ msgid ""
#~ "The process is similar to "
#~ ":doc:`Hyperparameter Optimization </hpo/index>`, "
#~ "except that the target is the best"
#~ " architecture rather than hyperparameter. "
#~ "Concretely, an exploration strategy selects"
#~ " an architecture from a predefined "
#~ "search space. The architecture is passed"
#~ " to a performance evaluation to get"
#~ " a score, which represents how well"
#~ " this architecture performs on a "
#~ "particular task. This process is "
#~ "repeated until the search process is "
#~ "able to find the best architecture."
#~ msgstr ""

#~ msgid ""
#~ "In NNI, we provide a wide range"
#~ " of APIs to build the search "
#~ "space. There are :doc:`high-level APIs"
#~ " <construct_space>`, that enables incorporating"
#~ " human knowledge about what makes a"
#~ " good architecture or search space. "
#~ "There are also :doc:`low-level APIs "
#~ "<mutator>`, that is a list of "
#~ "primitives to construct a network from"
#~ " operator to operator."
#~ msgstr ""

#~ msgid ""
#~ "In NNI, we standardize this process "
#~ "is implemented with :doc:`evaluator "
#~ "<evaluator>`, which is responsible of "
#~ "estimating a model's performance. The "
#~ "choices of evaluators also range from"
#~ " the simplest option, e.g., to "
#~ "perform a standard training and "
#~ "validation of the architecture on data,"
#~ " to complex configurations and "
#~ "implementations. Evaluators are run in "
#~ "*trials*, where trials can be spawn "
#~ "onto distributed platforms with our "
#~ "powerful :doc:`training service "
#~ "</experiment/training_service/overview>`."
#~ msgstr ""