tutorials.po 25.4 KB
Newer Older
1
2
3
4
5
6
7
8
9
10
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
11
"POT-Creation-Date: 2022-04-20 05:50+0000\n"
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"

#: ../../source/tutorials/hello_nas.rst:13
msgid ""
"Click :ref:`here <sphx_glr_download_tutorials_hello_nas.py>` to download "
"the full example code"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:22
msgid "Hello, NAS!"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:24
msgid ""
"This is the 101 tutorial of Neural Architecture Search (NAS) on NNI. In "
"this tutorial, we will search for a neural architecture on MNIST dataset "
"with the help of NAS framework of NNI, i.e., *Retiarii*. We use multi-"
"trial NAS as an example to show how to construct and explore a model "
"space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:28
msgid ""
"There are mainly three crucial components for a neural architecture "
"search task, namely,"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:30
msgid "Model search space that defines a set of models to explore."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:31
msgid "A proper strategy as the method to explore this model space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:32
msgid ""
"A model evaluator that reports the performance of every model in the "
"space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:34
msgid ""
"Currently, PyTorch is the only supported framework by Retiarii, and we "
"have only tested **PyTorch 1.7 to 1.10**. This tutorial assumes PyTorch "
"context but it should also apply to other frameworks, which is in our "
"future plan."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:38
msgid "Define your Model Space"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:40
msgid ""
"Model space is defined by users to express a set of models that users "
"want to explore, which contains potentially good-performing models. In "
"this framework, a model space is defined with two parts: a base model and"
" possible mutations on the base model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:46
msgid "Define Base Model"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:48
msgid ""
"Defining a base model is almost the same as defining a PyTorch (or "
"TensorFlow) model. Usually, you only need to replace the code ``import "
"torch.nn as nn`` with ``import nni.retiarii.nn.pytorch as nn`` to use our"
" wrapped PyTorch modules."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:52
msgid "Below is a very simple example of defining a base model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:93
msgid ""
"Always keep in mind that you should use ``import nni.retiarii.nn.pytorch "
"as nn`` and :meth:`nni.retiarii.model_wrapper`. Many mistakes are a "
"result of forgetting one of those. Also, please use ``torch.nn`` for "
"submodules of ``nn.init``, e.g., ``torch.nn.init`` instead of "
"``nn.init``."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:98
msgid "Define Model Mutations"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:100
msgid ""
"A base model is only one concrete model not a model space. We provide "
":doc:`API and Primitives </nas/construct_space>` for users to express how"
" the base model can be mutated. That is, to build a model space which "
"includes many models."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:103
msgid "Based on the above base model, we can define a model space as below."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:134
msgid "This results in the following code:"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:189
#: ../../source/tutorials/hello_nas.rst:259
126
127
128
129
#: ../../source/tutorials/hello_nas.rst:471
#: ../../source/tutorials/hello_nas.rst:564
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:244
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:281
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
#: ../../source/tutorials/pruning_quick_start_mnist.rst:65
#: ../../source/tutorials/pruning_quick_start_mnist.rst:107
#: ../../source/tutorials/pruning_quick_start_mnist.rst:172
#: ../../source/tutorials/pruning_quick_start_mnist.rst:218
#: ../../source/tutorials/pruning_quick_start_mnist.rst:255
#: ../../source/tutorials/pruning_quick_start_mnist.rst:283
msgid "Out:"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:210
msgid ""
"This example uses two mutation APIs, :class:`nn.LayerChoice "
"<nni.retiarii.nn.pytorch.LayerChoice>` and :class:`nn.InputChoice "
"<nni.retiarii.nn.pytorch.ValueChoice>`. :class:`nn.LayerChoice "
"<nni.retiarii.nn.pytorch.LayerChoice>` takes a list of candidate modules "
"(two in this example), one will be chosen for each sampled model. It can "
"be used like normal PyTorch module. :class:`nn.InputChoice "
"<nni.retiarii.nn.pytorch.ValueChoice>` takes a list of candidate values, "
"one will be chosen to take effect for each sampled model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:219
msgid ""
"More detailed API description and usage can be found :doc:`here "
"</nas/construct_space>`."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:223
msgid ""
"We are actively enriching the mutation APIs, to facilitate easy "
"construction of model space. If the currently supported mutation APIs "
"cannot express your model space, please refer to :doc:`this doc "
"</nas/mutator>` for customizing mutators."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:228
msgid "Explore the Defined Model Space"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:230
msgid ""
"There are basically two exploration approaches: (1) search by evaluating "
"each sampled model independently, which is the search approach in :ref"
":`multi-trial NAS <multi-trial-nas>` and (2) one-shot weight-sharing "
"based search, which is used in one-shot NAS. We demonstrate the first "
"approach in this tutorial. Users can refer to :ref:`here <one-shot-nas>` "
"for the second approach."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:235
msgid ""
"First, users need to pick a proper exploration strategy to explore the "
"defined model space. Second, users need to pick or customize a model "
"evaluator to evaluate the performance of each explored model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:239
msgid "Pick an exploration strategy"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:241
msgid ""
"Retiarii supports many :doc:`exploration strategies "
"</nas/exploration_strategy>`."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:243
msgid "Simply choosing (i.e., instantiate) an exploration strategy as below."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:273
msgid "Pick or customize a model evaluator"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:275
msgid ""
"In the exploration process, the exploration strategy repeatedly generates"
" new models. A model evaluator is for training and validating each "
"generated model to obtain the model's performance. The performance is "
"sent to the exploration strategy for the strategy to generate better "
"models."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:279
msgid ""
"Retiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, "
"but to start with, it is recommended to use :class:`FunctionalEvaluator "
"<nni.retiarii.evaluator.FunctionalEvaluator>`, that is, to wrap your own "
"training and evaluation code with one single function. This function "
"should receive one single model class and uses "
":func:`nni.report_final_result` to report the final score of this model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:284
msgid ""
"An example here creates a simple evaluator that runs on MNIST dataset, "
"trains for 2 epochs, and reports its validation accuracy."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:367
msgid "Create the evaluator"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:386
msgid ""
"The ``train_epoch`` and ``test_epoch`` here can be any customized "
"function, where users can write their own training recipe."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:389
msgid ""
"It is recommended that the ``evaluate_model`` here accepts no additional "
"arguments other than ``model_cls``. However, in the :doc:`advanced "
"tutorial </nas/evaluator>`, we will show how to use additional arguments "
"in case you actually need those. In future, we will support mutation on "
"the arguments of evaluators, which is commonly called \"Hyper-parmeter "
"tuning\"."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:394
msgid "Launch an Experiment"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:396
msgid ""
"After all the above are prepared, it is time to start an experiment to do"
" the model search. An example is shown below."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:417
msgid ""
"The following configurations are useful to control how many trials to run"
" at most / at the same time."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:436
msgid ""
"Remember to set the following config if you want to GPU. "
"``use_active_gpu`` should be set true if you wish to use an occupied GPU "
"(possibly running a GUI)."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:456
msgid ""
"Launch the experiment. The experiment should take several minutes to "
"finish on a workstation with 2 GPUs."
msgstr ""

278
#: ../../source/tutorials/hello_nas.rst:495
279
280
281
282
283
284
msgid ""
"Users can also run Retiarii Experiment with :doc:`different training "
"services </experiment/training_service/overview>` besides ``local`` "
"training service."
msgstr ""

285
#: ../../source/tutorials/hello_nas.rst:499
286
287
288
msgid "Visualize the Experiment"
msgstr ""

289
#: ../../source/tutorials/hello_nas.rst:501
290
291
292
293
294
295
296
297
msgid ""
"Users can visualize their experiment in the same way as visualizing a "
"normal hyper-parameter tuning experiment. For example, open "
"``localhost:8081`` in your browser, 8081 is the port that you set in "
"``exp.run``. Please refer to :doc:`here "
"</experiment/web_portal/web_portal>` for details."
msgstr ""

298
#: ../../source/tutorials/hello_nas.rst:505
299
300
301
302
303
304
305
306
msgid ""
"We support visualizing models with 3rd-party visualization engines (like "
"`Netron <https://netron.app/>`__). This can be used by clicking "
"``Visualization`` in detail panel for each trial. Note that current "
"visualization is based on `onnx <https://onnx.ai/>`__ , thus "
"visualization is not feasible if the model cannot be exported into onnx."
msgstr ""

307
#: ../../source/tutorials/hello_nas.rst:510
308
309
310
311
312
313
msgid ""
"Built-in evaluators (e.g., Classification) will automatically export the "
"model into a file. For your own evaluator, you need to save your file "
"into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work. For instance,"
msgstr ""

314
#: ../../source/tutorials/hello_nas.rst:541
315
316
317
msgid "Relaunch the experiment, and a button is shown on Web portal."
msgstr ""

318
#: ../../source/tutorials/hello_nas.rst:546
319
320
321
msgid "Export Top Models"
msgstr ""

322
#: ../../source/tutorials/hello_nas.rst:548
323
324
325
326
327
msgid ""
"Users can export top models after the exploration is done using "
"``export_top_models``."
msgstr ""

328
329
330
331
332
333
334
335
336
337
#: ../../source/tutorials/hello_nas.rst:575
msgid ""
"The output is ``json`` object which records the mutation actions of the "
"top model. If users want to output source code of the top model, they can"
" use :ref:`graph-based execution engine <graph-based-execution-engine>` "
"for the experiment, by simply adding the following two lines."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:597
msgid "**Total running time of the script:** ( 2 minutes  4.499 seconds)"
338
339
msgstr ""

340
#: ../../source/tutorials/hello_nas.rst:612
341
342
343
msgid ":download:`Download Python source code: hello_nas.py <hello_nas.py>`"
msgstr ""

344
#: ../../source/tutorials/hello_nas.rst:618
345
346
347
msgid ":download:`Download Jupyter notebook: hello_nas.ipynb <hello_nas.ipynb>`"
msgstr ""

348
349
#: ../../source/tutorials/hello_nas.rst:625
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:335
350
351
352
353
#: ../../source/tutorials/pruning_quick_start_mnist.rst:357
msgid "`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_"
msgstr ""

354
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:13
355
356
357
358
359
360
msgid ""
"Click :ref:`here "
"<sphx_glr_download_tutorials_hpo_quickstart_pytorch_main.py>` to download"
" the full example code"
msgstr ""

361
362
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:22
msgid "HPO Quickstart with PyTorch"
363
364
msgstr ""

365
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:23
366
367
368
369
370
msgid ""
"This tutorial optimizes the model in `official PyTorch quickstart`_ with "
"auto-tuning."
msgstr ""

371
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:25
372
373
374
msgid "The tutorial consists of 4 steps:"
msgstr ""

375
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:27
376
377
378
msgid "Modify the model for auto-tuning."
msgstr ""

379
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:28
380
381
382
msgid "Define hyperparameters' search space."
msgstr ""

383
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:29
384
385
386
msgid "Configure the experiment."
msgstr ""

387
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:30
388
389
390
msgid "Run the experiment."
msgstr ""

391
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:37
392
393
394
msgid "Step 1: Prepare the model"
msgstr ""

395
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:38
396
397
398
msgid "In first step, we need to prepare the model to be tuned."
msgstr ""

399
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:40
400
401
402
403
404
405
msgid ""
"The model should be put in a separate script. It will be evaluated many "
"times concurrently, and possibly will be trained on distributed "
"platforms."
msgstr ""

406
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:44
407
408
409
msgid "In this tutorial, the model is defined in :doc:`model.py <model>`."
msgstr ""

410
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:46
411
412
413
msgid "In short, it is a PyTorch model with 3 additional API calls:"
msgstr ""

414
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:48
415
416
417
418
419
msgid ""
"Use :func:`nni.get_next_parameter` to fetch the hyperparameters to be "
"evalutated."
msgstr ""

420
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:49
421
422
423
424
425
msgid ""
"Use :func:`nni.report_intermediate_result` to report per-epoch accuracy "
"metrics."
msgstr ""

426
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:50
427
428
429
msgid "Use :func:`nni.report_final_result` to report final accuracy."
msgstr ""

430
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:52
431
432
433
msgid "Please understand the model code before continue to next step."
msgstr ""

434
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:57
435
436
437
msgid "Step 2: Define search space"
msgstr ""

438
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:58
439
440
441
442
443
msgid ""
"In model code, we have prepared 3 hyperparameters to be tuned: "
"*features*, *lr*, and *momentum*."
msgstr ""

444
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:61
445
446
447
448
449
msgid ""
"Here we need to define their *search space* so the tuning algorithm can "
"sample them in desired range."
msgstr ""

450
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:63
451
452
453
msgid "Assuming we have following prior knowledge for these hyperparameters:"
msgstr ""

454
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:65
455
456
457
msgid "*features* should be one of 128, 256, 512, 1024."
msgstr ""

458
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:66
459
460
461
462
463
msgid ""
"*lr* should be a float between 0.0001 and 0.1, and it follows exponential"
" distribution."
msgstr ""

464
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:67
465
466
467
msgid "*momentum* should be a float between 0 and 1."
msgstr ""

468
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:69
469
470
471
472
473
474
475
msgid ""
"In NNI, the space of *features* is called ``choice``; the space of *lr* "
"is called ``loguniform``; and the space of *momentum* is called "
"``uniform``. You may have noticed, these names are derived from "
"``numpy.random``."
msgstr ""

476
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:74
477
478
479
480
481
msgid ""
"For full specification of search space, check :doc:`the reference "
"</hpo/search_space>`."
msgstr ""

482
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:76
483
484
485
msgid "Now we can define the search space as follow:"
msgstr ""

486
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:99
487
488
489
msgid "Step 3: Configure the experiment"
msgstr ""

490
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:100
491
492
493
494
495
496
msgid ""
"NNI uses an *experiment* to manage the HPO process. The *experiment "
"config* defines how to train the models and how to explore the search "
"space."
msgstr ""

497
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:103
498
499
500
501
502
503
msgid ""
"In this tutorial we use a *local* mode experiment, which means models "
"will be trained on local machine, without using any special training "
"platform."
msgstr ""

504
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:122
505
506
507
msgid "Now we start to configure the experiment."
msgstr ""

508
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:125
509
510
511
msgid "Configure trial code"
msgstr ""

512
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:126
513
514
515
516
517
msgid ""
"In NNI evaluation of each hyperparameter set is called a *trial*. So the "
"model script is called *trial code*."
msgstr ""

518
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:144
519
520
521
522
523
524
525
526
msgid ""
"When ``trial_code_directory`` is a relative path, it relates to current "
"working directory. To run ``main.py`` in a different path, you can set "
"trial code directory to ``Path(__file__).parent``. (`__file__ "
"<https://docs.python.org/3.10/reference/datamodel.html#index-43>`__ is "
"only available in standard Python, not in Jupyter Notebook.)"
msgstr ""

527
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:151
528
529
530
531
532
msgid ""
"If you are using Linux system without Conda, you may need to change "
"``\"python model.py\"`` to ``\"python3 model.py\"``."
msgstr ""

533
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:157
534
535
536
msgid "Configure search space"
msgstr ""

537
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:175
538
539
540
msgid "Configure tuning algorithm"
msgstr ""

541
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:176
542
543
544
msgid "Here we use :doc:`TPE tuner </hpo/tuners>`."
msgstr ""

545
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:195
546
547
548
msgid "Configure how many trials to run"
msgstr ""

549
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:196
550
551
552
553
554
msgid ""
"Here we evaluate 10 sets of hyperparameters in total, and concurrently "
"evaluate 2 sets at a time."
msgstr ""

555
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:213
556
557
558
msgid "You may also set ``max_experiment_duration = '1h'`` to limit running time."
msgstr ""

559
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:215
560
561
562
563
564
msgid ""
"If neither ``max_trial_number`` nor ``max_experiment_duration`` are set, "
"the experiment will run forever until you press Ctrl-C."
msgstr ""

565
566
567
568
569
570
571
572
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:220
msgid ""
"``max_trial_number`` is set to 10 here for a fast example. In real world "
"it should be set to a larger number. With default config TPE tuner "
"requires 20 trials to warm up."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:227
573
574
575
msgid "Step 4: Run the experiment"
msgstr ""

576
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:228
577
578
579
580
581
msgid ""
"Now the experiment is ready. Choose a port and launch it. (Here we use "
"port 8080.)"
msgstr ""

582
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:230
583
584
585
586
587
msgid ""
"You can use the web portal to view experiment status: "
"http://localhost:8080."
msgstr ""

588
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:260
589
590
591
msgid "After the experiment is done"
msgstr ""

592
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:261
593
594
595
msgid "Everything is done and it is safe to exit now. The following are optional."
msgstr ""

596
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:263
597
598
599
600
601
602
msgid ""
"If you are using standard Python instead of Jupyter Notebook, you can add"
" ``input()`` or ``signal.pause()`` to prevent Python from exiting, "
"allowing you to view the web portal after the experiment is done."
msgstr ""

603
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:293
604
605
606
607
608
msgid ""
":meth:`nni.experiment.Experiment.stop` is automatically invoked when "
"Python exits, so it can be omitted in your code."
msgstr ""

609
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:296
610
611
612
613
614
msgid ""
"After the experiment is stopped, you can run "
":meth:`nni.experiment.Experiment.view` to restart web portal."
msgstr ""

615
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:300
616
617
618
619
620
msgid ""
"This example uses :doc:`Python API </reference/experiment>` to create "
"experiment."
msgstr ""

621
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:302
622
623
msgid ""
"You can also create and manage experiments with :doc:`command line tool "
624
"<../hpo_nnictl/nnictl>`."
625
626
msgstr ""

627
628
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:307
msgid "**Total running time of the script:** ( 1 minutes  24.367 seconds)"
629
630
msgstr ""

631
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:322
632
633
634
msgid ":download:`Download Python source code: main.py <main.py>`"
msgstr ""

635
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:328
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
msgid ":download:`Download Jupyter notebook: main.ipynb <main.ipynb>`"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:13
msgid ""
"Click :ref:`here "
"<sphx_glr_download_tutorials_pruning_quick_start_mnist.py>` to download "
"the full example code"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:22
msgid "Pruning Quickstart"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:24
msgid ""
"Model pruning is a technique to reduce the model size and computation by "
"reducing model weight size or intermediate state size. There are three "
"common practices for pruning a DNN model:"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:27
msgid "Pre-training a model -> Pruning the model -> Fine-tuning the pruned model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:28
msgid ""
"Pruning a model during training (i.e., pruning aware training) -> Fine-"
"tuning the pruned model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:29
msgid "Pruning a model -> Training the pruned model from scratch"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:31
msgid ""
"NNI supports all of the above pruning practices by working on the key "
"pruning stage. Following this tutorial for a quick look at how to use NNI"
" to prune a model in a common practice."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:37
msgid "Preparation"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:39
msgid ""
"In this tutorial, we use a simple model and pre-trained on MNIST dataset."
" If you are familiar with defining a model and training in pytorch, you "
"can skip directly to `Pruning Model`_."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:121
msgid "Pruning Model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:123
msgid ""
"Using L1NormPruner to prune the model and generate the masks. Usually, a "
"pruner requires original model and ``config_list`` as its inputs. "
"Detailed about how to write ``config_list`` please refer "
":doc:`compression config specification "
"<../compression/compression_config_list>`."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:127
msgid ""
"The following `config_list` means all layers whose type is `Linear` or "
"`Conv2d` will be pruned, except the layer named `fc3`, because `fc3` is "
"`exclude`. The final sparsity ratio for each layer is 50%. The layer "
"named `fc3` will not be pruned."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:153
msgid "Pruners usually require `model` and `config_list` as input arguments."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:232
msgid ""
"Speedup the original model with masks, note that `ModelSpeedup` requires "
"an unwrapped model. The model becomes smaller after speedup, and reaches "
"a higher sparsity ratio because `ModelSpeedup` will propagate the masks "
"across layers."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:269
msgid "the model will become real smaller after speedup"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:307
msgid "Fine-tuning Compacted Model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:308
msgid ""
"Note that if the model has been sped up, you need to re-initialize a new "
"optimizer for fine-tuning. Because speedup will replace the masked big "
"layers with dense small ones."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:329
msgid "**Total running time of the script:** ( 0 minutes  58.337 seconds)"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:344
msgid ""
":download:`Download Python source code: pruning_quick_start_mnist.py "
"<pruning_quick_start_mnist.py>`"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:350
msgid ""
":download:`Download Jupyter notebook: pruning_quick_start_mnist.ipynb "
"<pruning_quick_start_mnist.ipynb>`"
msgstr ""

753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
#~ msgid "**Total running time of the script:** ( 2 minutes  15.810 seconds)"
#~ msgstr ""

#~ msgid "NNI HPO Quickstart with PyTorch"
#~ msgstr ""

#~ msgid ""
#~ "There is also a :doc:`TensorFlow "
#~ "version<../hpo_quickstart_tensorflow/main>` if you "
#~ "prefer it."
#~ msgstr ""

#~ msgid ""
#~ "You can also create and manage "
#~ "experiments with :doc:`command line tool "
#~ "</reference/nnictl>`."
#~ msgstr ""

#~ msgid "**Total running time of the script:** ( 1 minutes  24.393 seconds)"
#~ msgstr ""