# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-10-18 19:27+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"

#: ../../source/tutorials/hello_nas.rst:13
msgid ""
"Click :ref:`here <sphx_glr_download_tutorials_hello_nas.py>` to download "
"the full example code"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:22
msgid "Hello, NAS!"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:24
msgid ""
"This is the 101 tutorial of Neural Architecture Search (NAS) on NNI. In "
"this tutorial, we will search for a neural architecture on the MNIST "
"dataset with the help of NNI's NAS framework, *Retiarii*. We use multi-"
"trial NAS as an example to show how to construct and explore a model "
"space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:28
msgid ""
"There are mainly three crucial components in a neural architecture search"
" task, namely:"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:30
msgid "Model search space that defines a set of models to explore."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:31
msgid "A proper strategy as the method to explore this model space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:32
msgid ""
"A model evaluator that reports the performance of every model in the "
"space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:34
msgid ""
"Currently, PyTorch is the only framework supported by Retiarii, and we "
"have only tested **PyTorch 1.7 to 1.10**. This tutorial assumes a PyTorch"
" context, but it should also apply to other frameworks, whose support is "
"part of our future plan."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:38
msgid "Define your Model Space"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:40
msgid ""
"A model space is defined by users to express the set of models they want "
"to explore, which contains potentially good-performing models. In this "
"framework, a model space is defined in two parts: a base model and "
"possible mutations on the base model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:46
msgid "Define Base Model"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:48
msgid ""
"Defining a base model is almost the same as defining a PyTorch (or "
"TensorFlow) model. Usually, you only need to replace the code ``import "
"torch.nn as nn`` with ``import nni.retiarii.nn.pytorch as nn`` to use our"
" wrapped PyTorch modules."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:52
msgid "Below is a very simple example of defining a base model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:93
msgid ""
"Always keep in mind that you should use ``import nni.retiarii.nn.pytorch "
"as nn`` and :meth:`nni.retiarii.model_wrapper`. Many mistakes are a "
"result of forgetting one of those. Also, please use ``torch.nn`` for "
"submodules of ``nn.init``, e.g., ``torch.nn.init`` instead of "
"``nn.init``."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:98
msgid "Define Model Mutations"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:100
msgid ""
"A base model is only one concrete model, not a model space. We provide "
":doc:`API and Primitives </nas/construct_space>` for users to express how"
" the base model can be mutated, that is, to build a model space which "
"includes many models."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:103
msgid "Based on the above base model, we can define a model space as below."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:134
msgid "This results in the following code:"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:189
#: ../../source/tutorials/hello_nas.rst:259
#: ../../source/tutorials/hello_nas.rst:471
#: ../../source/tutorials/hello_nas.rst:564
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:244
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:281
msgid "Out:"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:210
msgid ""
"This example uses two mutation APIs, :class:`nn.LayerChoice "
"<nni.retiarii.nn.pytorch.LayerChoice>` and :class:`nn.ValueChoice "
"<nni.retiarii.nn.pytorch.ValueChoice>`. :class:`nn.LayerChoice "
"<nni.retiarii.nn.pytorch.LayerChoice>` takes a list of candidate modules "
"(two in this example), one of which will be chosen for each sampled "
"model. It can be used like a normal PyTorch module. "
":class:`nn.ValueChoice <nni.retiarii.nn.pytorch.ValueChoice>` takes a "
"list of candidate values, one of which will take effect for each sampled "
"model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:219
msgid ""
"More detailed API description and usage can be found :doc:`here "
"</nas/construct_space>`."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:223
msgid ""
"We are actively enriching the mutation APIs to facilitate easy "
"construction of model spaces. If the currently supported mutation APIs "
"cannot express your model space, please refer to :doc:`this doc "
"</nas/mutator>` for customizing mutators."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:228
msgid "Explore the Defined Model Space"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:230
msgid ""
"There are basically two exploration approaches: (1) search by evaluating "
"each sampled model independently, which is the search approach in :ref"
":`multi-trial NAS <multi-trial-nas>` and (2) one-shot weight-sharing "
"based search, which is used in one-shot NAS. We demonstrate the first "
"approach in this tutorial. Users can refer to :ref:`here <one-shot-nas>` "
"for the second approach."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:235
msgid ""
"First, users need to pick a proper exploration strategy to explore the "
"defined model space. Second, users need to pick or customize a model "
"evaluator to evaluate the performance of each explored model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:239
msgid "Pick an exploration strategy"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:241
msgid ""
"Retiarii supports many :doc:`exploration strategies "
"</nas/exploration_strategy>`."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:243
msgid "Simply choose (i.e., instantiate) an exploration strategy as below."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:273
msgid "Pick or customize a model evaluator"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:275
msgid ""
"In the exploration process, the exploration strategy repeatedly generates"
" new models. A model evaluator is for training and validating each "
"generated model to obtain the model's performance. The performance is "
"sent to the exploration strategy for the strategy to generate better "
"models."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:279
msgid ""
"Retiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, "
"but to start with, it is recommended to use :class:`FunctionalEvaluator "
"<nni.retiarii.evaluator.FunctionalEvaluator>`, that is, to wrap your own "
"training and evaluation code with one single function. This function "
"should receive one single model class and use "
":func:`nni.report_final_result` to report the final score of this model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:284
msgid ""
"An example here creates a simple evaluator that runs on the MNIST "
"dataset, trains for 2 epochs, and reports its validation accuracy."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:367
msgid "Create the evaluator"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:386
msgid ""
"The ``train_epoch`` and ``test_epoch`` here can be any customized "
"function, where users can write their own training recipe."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:389
msgid ""
"It is recommended that the ``evaluate_model`` here accepts no additional "
"arguments other than ``model_cls``. However, in the :doc:`advanced "
"tutorial </nas/evaluator>`, we will show how to use additional arguments "
"in case you actually need those. In the future, we will support mutation "
"on the arguments of evaluators, which is commonly called \"Hyper-"
"parameter tuning\"."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:394
msgid "Launch an Experiment"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:396
msgid ""
"After all the above are prepared, it is time to start an experiment to do"
" the model search. An example is shown below."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:417
msgid ""
"The following configurations are useful to control how many trials to run"
" at most / at the same time."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:436
msgid ""
"Remember to set the following config if you want to use GPU. "
"``use_active_gpu`` should be set true if you wish to use an occupied GPU "
"(possibly running a GUI)."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:456
msgid ""
"Launch the experiment. The experiment should take several minutes to "
"finish on a workstation with 2 GPUs."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:495
msgid ""
"Users can also run Retiarii Experiment with :doc:`different training "
"services </experiment/training_service/overview>` besides ``local`` "
"training service."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:499
msgid "Visualize the Experiment"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:501
msgid ""
"Users can visualize their experiment in the same way as visualizing a "
"normal hyper-parameter tuning experiment. For example, open "
"``localhost:8081`` in your browser, where 8081 is the port that you set "
"in ``exp.run``. Please refer to :doc:`here "
"</experiment/web_portal/web_portal>` for details."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:505
msgid ""
"We support visualizing models with 3rd-party visualization engines (like "
"`Netron <https://netron.app/>`__). This can be used by clicking "
"``Visualization`` in the detail panel for each trial. Note that the "
"current visualization is based on `onnx <https://onnx.ai/>`__, thus "
"visualization is not feasible if the model cannot be exported into onnx."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:510
msgid ""
"Built-in evaluators (e.g., Classification) will automatically export the "
"model into a file. For your own evaluator, you need to save the model "
"into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work. For instance,"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:541
msgid "Relaunch the experiment, and a button is shown on the Web portal."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:546
msgid "Export Top Models"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:548
msgid ""
"Users can export top models after the exploration is done using "
"``export_top_models``."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:575
msgid ""
"The output is a ``json`` object which records the mutation actions of the"
" top model. If users want to output the source code of the top model, "
"they can use the :ref:`graph-based execution engine <graph-based-"
"execution-engine>` for the experiment, by simply adding the following two"
" lines."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:597
msgid "**Total running time of the script:** ( 2 minutes  4.499 seconds)"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:612
msgid ":download:`Download Python source code: hello_nas.py <hello_nas.py>`"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:618
msgid ":download:`Download Jupyter notebook: hello_nas.ipynb <hello_nas.ipynb>`"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:625
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:335
#: ../../source/tutorials/pruning_quick_start_mnist.rst:343
msgid "`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:13
msgid ""
"Click :ref:`here "
"<sphx_glr_download_tutorials_hpo_quickstart_pytorch_main.py>` to download"
" the full example code"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:22
msgid "HPO Quickstart with PyTorch"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:23
msgid ""
"This tutorial optimizes the model in `official PyTorch quickstart`_ with "
"auto-tuning."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:25
msgid "The tutorial consists of 4 steps:"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:27
msgid "Modify the model for auto-tuning."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:28
msgid "Define hyperparameters' search space."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:29
msgid "Configure the experiment."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:30
msgid "Run the experiment."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:37
msgid "Step 1: Prepare the model"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:38
msgid "In the first step, we need to prepare the model to be tuned."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:40
msgid ""
"The model should be put in a separate script. It will be evaluated many "
"times concurrently, and possibly will be trained on distributed "
"platforms."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:44
msgid "In this tutorial, the model is defined in :doc:`model.py <model>`."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:46
msgid "In short, it is a PyTorch model with 3 additional API calls:"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:48
msgid ""
"Use :func:`nni.get_next_parameter` to fetch the hyperparameters to be "
"evaluated."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:49
msgid ""
"Use :func:`nni.report_intermediate_result` to report per-epoch accuracy "
"metrics."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:50
msgid "Use :func:`nni.report_final_result` to report final accuracy."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:52
msgid "Please understand the model code before continuing to the next step."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:57
msgid "Step 2: Define search space"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:58
msgid ""
"In the model code, we have prepared 3 hyperparameters to be tuned: "
"*features*, *lr*, and *momentum*."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:61
msgid ""
"Here we need to define their *search space* so the tuning algorithm can "
"sample them in the desired range."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:63
msgid "Assume we have the following prior knowledge for these hyperparameters:"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:65
msgid "*features* should be one of 128, 256, 512, 1024."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:66
msgid ""
"*lr* should be a float between 0.0001 and 0.1, and it follows an "
"exponential distribution."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:67
msgid "*momentum* should be a float between 0 and 1."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:69
msgid ""
"In NNI, the space of *features* is called ``choice``; the space of *lr* "
"is called ``loguniform``; and the space of *momentum* is called "
"``uniform``. You may have noticed that these names are derived from "
"``numpy.random``."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:74
msgid ""
"For the full specification of the search space, check :doc:`the "
"reference </hpo/search_space>`."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:76
msgid "Now we can define the search space as follows:"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:99
msgid "Step 3: Configure the experiment"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:100
msgid ""
"NNI uses an *experiment* to manage the HPO process. The *experiment "
"config* defines how to train the models and how to explore the search "
"space."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:103
msgid ""
"In this tutorial we use a *local* mode experiment, which means models "
"will be trained on the local machine, without using any special training "
"platform."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:122
msgid "Now we start to configure the experiment."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:125
msgid "Configure trial code"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:126
msgid ""
"In NNI, the evaluation of each hyperparameter set is called a *trial*. So"
" the model script is called the *trial code*."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:144
msgid ""
"When ``trial_code_directory`` is a relative path, it is relative to the "
"current working directory. To run ``main.py`` in a different path, you "
"can set the trial code directory to ``Path(__file__).parent``. "
"(`__file__ "
"<https://docs.python.org/3.10/reference/datamodel.html#index-43>`__ is "
"only available in standard Python, not in Jupyter Notebook.)"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:151
msgid ""
"If you are using a Linux system without Conda, you may need to change "
"``\"python model.py\"`` to ``\"python3 model.py\"``."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:157
msgid "Configure search space"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:175
msgid "Configure tuning algorithm"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:176
msgid "Here we use :doc:`TPE tuner </hpo/tuners>`."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:195
msgid "Configure how many trials to run"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:196
msgid ""
"Here we evaluate 10 sets of hyperparameters in total, and concurrently "
"evaluate 2 sets at a time."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:213
msgid "You may also set ``max_experiment_duration = '1h'`` to limit running time."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:215
msgid ""
"If neither ``max_trial_number`` nor ``max_experiment_duration`` is set, "
"the experiment will run forever until you press Ctrl-C."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:220
msgid ""
"``max_trial_number`` is set to 10 here for a fast example. In the real "
"world it should be set to a larger number. With the default config, the "
"TPE tuner requires 20 trials to warm up."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:227
msgid "Step 4: Run the experiment"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:228
msgid ""
"Now the experiment is ready. Choose a port and launch it. (Here we use "
"port 8080.)"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:230
msgid ""
"You can use the web portal to view experiment status: "
"http://localhost:8080."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:260
msgid "After the experiment is done"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:261
msgid "Everything is done and it is safe to exit now. The following are optional."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:263
msgid ""
"If you are using standard Python instead of Jupyter Notebook, you can add"
" ``input()`` or ``signal.pause()`` to prevent Python from exiting, "
"allowing you to view the web portal after the experiment is done."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:293
msgid ""
":meth:`nni.experiment.Experiment.stop` is automatically invoked when "
"Python exits, so it can be omitted in your code."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:296
msgid ""
"After the experiment is stopped, you can run "
":meth:`nni.experiment.Experiment.view` to restart the web portal."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:300
msgid ""
"This example uses the :doc:`Python API </reference/experiment>` to create"
" the experiment."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:302
msgid ""
"You can also create and manage experiments with :doc:`command line tool "
"<../hpo_nnictl/nnictl>`."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:307
msgid "**Total running time of the script:** ( 1 minutes  24.367 seconds)"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:322
msgid ":download:`Download Python source code: main.py <main.py>`"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:328
msgid ":download:`Download Jupyter notebook: main.ipynb <main.ipynb>`"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:13
msgid ""
"Click :ref:`here "
"<sphx_glr_download_tutorials_pruning_quick_start_mnist.py>` to download "
"the full example code"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:22
msgid "Pruning Quickstart"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:24
msgid "Here is a three-minute video to get you started with model pruning."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:29
msgid ""
"Model pruning is a technique to reduce the model size and computation by "
"reducing model weight size or intermediate state size. There are three "
"common practices for pruning a DNN model:"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:32
msgid "Pre-training a model -> Pruning the model -> Fine-tuning the pruned model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:33
msgid ""
"Pruning a model during training (i.e., pruning aware training) -> Fine-"
"tuning the pruned model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:34
msgid "Pruning a model -> Training the pruned model from scratch"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:36
msgid ""
"NNI supports all of the above pruning practices by working on the key "
"pruning stage. Follow this tutorial for a quick look at how to use NNI to"
" prune a model in a common practice."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:42
msgid "Preparation"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:44
msgid ""
"In this tutorial, we use a simple model pre-trained on the MNIST dataset."
" If you are familiar with defining a model and training it in PyTorch, "
"you can skip directly to `Pruning Model`_."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:122
msgid "Pruning Model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:124
msgid ""
"We use L1NormPruner to prune the model and generate the masks. Usually, a"
" pruner requires the original model and a ``config_list`` as its inputs. "
"For details about how to write a ``config_list``, please refer to the "
":doc:`compression config specification "
"<../compression/compression_config_list>`."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:128
msgid ""
"The following `config_list` means all layers whose type is `Linear` or "
"`Conv2d` will be pruned, except the layer named `fc3`, because `fc3` is "
"marked `exclude`. The final sparsity ratio of each pruned layer is 50%."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:154
msgid "Pruners usually require `model` and `config_list` as input arguments."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:229
msgid ""
"Speed up the original model with the masks; note that `ModelSpeedup` "
"requires an unwrapped model. The model becomes smaller after speedup, and"
" reaches a higher sparsity ratio because `ModelSpeedup` propagates the "
"masks across layers."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:262
msgid "The model becomes noticeably smaller after speedup."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:298
msgid "Fine-tuning Compacted Model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:299
msgid ""
"Note that if the model has been sped up, you need to re-initialize a new "
"optimizer for fine-tuning, because speedup replaces the masked big layers"
" with dense small ones."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:320
msgid "**Total running time of the script:** ( 1 minutes  0.810 seconds)"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:332
msgid ""
":download:`Download Python source code: pruning_quick_start_mnist.py "
"<pruning_quick_start_mnist.py>`"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:336
msgid ""
":download:`Download Jupyter notebook: pruning_quick_start_mnist.ipynb "
"<pruning_quick_start_mnist.ipynb>`"
msgstr ""

#~ msgid "**Total running time of the script:** ( 2 minutes  15.810 seconds)"
#~ msgstr ""

#~ msgid "NNI HPO Quickstart with PyTorch"
#~ msgstr ""

#~ msgid ""
#~ "There is also a :doc:`TensorFlow "
#~ "version<../hpo_quickstart_tensorflow/main>` if you "
#~ "prefer it."
#~ msgstr ""

#~ msgid ""
#~ "You can also create and manage "
#~ "experiments with :doc:`command line tool "
#~ "</reference/nnictl>`."
#~ msgstr ""

#~ msgid "**Total running time of the script:** ( 1 minutes  24.393 seconds)"
#~ msgstr ""

#~ msgid "**Total running time of the script:** ( 0 minutes  58.337 seconds)"
#~ msgstr ""

#~ msgid "**Total running time of the script:** ( 1 minutes  30.730 seconds)"
#~ msgstr ""