# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"

#: ../../source/tutorials/hello_nas.rst:13
msgid "Click :ref:`here ` to download the full example code"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:22
msgid "Hello, NAS!"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:24
msgid ""
"This is the 101 tutorial of Neural Architecture Search (NAS) on NNI. In "
"this tutorial, we will search for a neural architecture on the MNIST "
"dataset with the help of the NAS framework of NNI, i.e., *Retiarii*. We "
"use multi-trial NAS as an example to show how to construct and explore a "
"model space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:28
msgid ""
"There are three crucial components for a neural architecture search task, "
"namely,"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:30
msgid "A model search space that defines a set of models to explore."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:31
msgid "A proper strategy as the method to explore this model space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:32
msgid ""
"A model evaluator that reports the performance of every model in the "
"space."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:34
msgid ""
"Currently, PyTorch is the only framework supported by Retiarii, and we "
"have only tested it with **PyTorch 1.7 to 1.10**. This tutorial assumes a "
"PyTorch context, but it should also apply to other frameworks, which is in "
"our future plan."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:38
msgid "Define your Model Space"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:40
msgid ""
"A model space is defined by users to express the set of models they want "
"to explore, which contains potentially good-performing models. In this "
"framework, a model space is defined in two parts: a base model and "
"possible mutations on the base model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:46
msgid "Define Base Model"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:48
msgid ""
"Defining a base model is almost the same as defining a PyTorch (or "
"TensorFlow) model. Usually, you only need to replace the code ``import "
"torch.nn as nn`` with ``import nni.retiarii.nn.pytorch as nn`` to use our "
"wrapped PyTorch modules."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:52
msgid "Below is a very simple example of defining a base model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:93
msgid ""
"Always keep in mind that you should use ``import nni.retiarii.nn.pytorch "
"as nn`` and :meth:`nni.retiarii.model_wrapper`. Many mistakes are a result "
"of forgetting one of those. Also, please use ``torch.nn`` for submodules "
"such as ``init``, e.g., use ``torch.nn.init`` instead of ``nn.init``."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:98
msgid "Define Model Mutations"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:100
msgid ""
"A base model is only one concrete model, not a model space. We provide "
":doc:`API and Primitives ` for users to express how the base model can "
"be mutated, that is, to build a model space which includes many models."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:103
msgid "Based on the above base model, we can define a model space as below."
msgstr ""
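
# A minimal, illustrative sketch of such a model space (not the tutorial's
# exact code; it assumes ``nni`` and ``torch`` are installed and that the
# input is MNIST-shaped):
#
#     import torch
#     import torch.nn.functional as F
#     import nni.retiarii.nn.pytorch as nn
#     from nni.retiarii import model_wrapper
#
#     @model_wrapper
#     class ModelSpace(nn.Module):
#         def __init__(self):
#             super().__init__()
#             self.conv1 = nn.Conv2d(1, 32, 3, 1)
#             # LayerChoice: one of the candidate modules is chosen per sampled model
#             self.conv2 = nn.LayerChoice([
#                 nn.Conv2d(32, 64, 3, 1),
#                 nn.Conv2d(32, 64, 5, 1, padding=1),
#             ])
#             self.dropout = nn.Dropout(0.5)
#             # InputChoice: one of the candidate inputs is chosen per sampled model
#             self.dropout_choice = nn.InputChoice(n_candidates=2)
#             self.fc = nn.Linear(9216, 10)
#
#         def forward(self, x):
#             x = F.relu(self.conv1(x))
#             x = F.max_pool2d(F.relu(self.conv2(x)), 2)
#             x = torch.flatten(x, 1)
#             # keep the dropout branch or bypass it, depending on the sampled choice
#             x = self.dropout_choice([self.dropout(x), x])
#             return F.log_softmax(self.fc(x), dim=1)
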
#: ../../source/tutorials/hello_nas.rst:134
msgid "This results in the following code:"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:189
#: ../../source/tutorials/hello_nas.rst:259
#: ../../source/tutorials/hello_nas.rst:551
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:247
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:284
#: ../../source/tutorials/pruning_quick_start_mnist.rst:65
#: ../../source/tutorials/pruning_quick_start_mnist.rst:107
#: ../../source/tutorials/pruning_quick_start_mnist.rst:172
#: ../../source/tutorials/pruning_quick_start_mnist.rst:218
#: ../../source/tutorials/pruning_quick_start_mnist.rst:255
#: ../../source/tutorials/pruning_quick_start_mnist.rst:283
msgid "Out:"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:210
msgid ""
"This example uses two mutation APIs, :class:`nn.LayerChoice ` and "
":class:`nn.InputChoice `. :class:`nn.LayerChoice ` takes a list of "
"candidate modules (two in this example), one of which will be chosen for "
"each sampled model. It can be used like a normal PyTorch module. "
":class:`nn.InputChoice ` takes a list of candidate values, one of which "
"will be chosen to take effect for each sampled model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:219
msgid ""
"A more detailed API description and usage can be found :doc:`here `."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:223
msgid ""
"We are actively enriching the mutation APIs to facilitate easy "
"construction of model spaces. If the currently supported mutation APIs "
"cannot express your model space, please refer to :doc:`this doc ` for "
"customizing mutators."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:228
msgid "Explore the Defined Model Space"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:230
msgid ""
"There are basically two exploration approaches: (1) search by evaluating "
"each sampled model independently, which is the search approach in "
":ref:`multi-trial NAS `, and (2) one-shot weight-sharing-based search, "
"which is used in one-shot NAS. We demonstrate the first approach in this "
"tutorial. Users can refer to :ref:`here ` for the second approach."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:235
msgid ""
"First, users need to pick a proper exploration strategy to explore the "
"defined model space. Second, users need to pick or customize a model "
"evaluator to evaluate the performance of each explored model."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:239
msgid "Pick an exploration strategy"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:241
msgid "Retiarii supports many :doc:`exploration strategies `."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:243
msgid "Simply choose (i.e., instantiate) an exploration strategy as below."
msgstr ""
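
# An illustrative instantiation (a sketch assuming the random-search
# strategy; other strategies from ``nni.retiarii.strategy`` can be plugged
# in the same way):
#
#     import nni.retiarii.strategy as strategy
#
#     # dedup=True avoids sampling the same architecture twice
#     search_strategy = strategy.Random(dedup=True)
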
msgstr "" #: ../../source/tutorials/hello_nas.rst:279 msgid "" "Retiarii has provided :doc:`built-in model evaluators `, " "but to start with, it is recommended to use :class:`FunctionalEvaluator " "`, that is, to wrap your own " "training and evaluation code with one single function. This function " "should receive one single model class and uses " ":func:`nni.report_final_result` to report the final score of this model." msgstr "" #: ../../source/tutorials/hello_nas.rst:284 msgid "" "An example here creates a simple evaluator that runs on MNIST dataset, " "trains for 2 epochs, and reports its validation accuracy." msgstr "" #: ../../source/tutorials/hello_nas.rst:367 msgid "Create the evaluator" msgstr "" #: ../../source/tutorials/hello_nas.rst:386 msgid "" "The ``train_epoch`` and ``test_epoch`` here can be any customized " "function, where users can write their own training recipe." msgstr "" #: ../../source/tutorials/hello_nas.rst:389 msgid "" "It is recommended that the ``evaluate_model`` here accepts no additional " "arguments other than ``model_cls``. However, in the :doc:`advanced " "tutorial `, we will show how to use additional arguments " "in case you actually need those. In future, we will support mutation on " "the arguments of evaluators, which is commonly called \"Hyper-parmeter " "tuning\"." msgstr "" #: ../../source/tutorials/hello_nas.rst:394 msgid "Launch an Experiment" msgstr "" #: ../../source/tutorials/hello_nas.rst:396 msgid "" "After all the above are prepared, it is time to start an experiment to do" " the model search. An example is shown below." msgstr "" #: ../../source/tutorials/hello_nas.rst:417 msgid "" "The following configurations are useful to control how many trials to run" " at most / at the same time." msgstr "" #: ../../source/tutorials/hello_nas.rst:436 msgid "" "Remember to set the following config if you want to GPU. " "``use_active_gpu`` should be set true if you wish to use an occupied GPU " "(possibly running a GUI)." msgstr "" #: ../../source/tutorials/hello_nas.rst:456 msgid "" "Launch the experiment. The experiment should take several minutes to " "finish on a workstation with 2 GPUs." msgstr "" #: ../../source/tutorials/hello_nas.rst:474 msgid "" "Users can also run Retiarii Experiment with :doc:`different training " "services ` besides ``local`` " "training service." msgstr "" #: ../../source/tutorials/hello_nas.rst:478 msgid "Visualize the Experiment" msgstr "" #: ../../source/tutorials/hello_nas.rst:480 msgid "" "Users can visualize their experiment in the same way as visualizing a " "normal hyper-parameter tuning experiment. For example, open " "``localhost:8081`` in your browser, 8081 is the port that you set in " "``exp.run``. Please refer to :doc:`here " "` for details." msgstr "" #: ../../source/tutorials/hello_nas.rst:484 msgid "" "We support visualizing models with 3rd-party visualization engines (like " "`Netron `__). This can be used by clicking " "``Visualization`` in detail panel for each trial. Note that current " "visualization is based on `onnx `__ , thus " "visualization is not feasible if the model cannot be exported into onnx." msgstr "" #: ../../source/tutorials/hello_nas.rst:489 msgid "" "Built-in evaluators (e.g., Classification) will automatically export the " "model into a file. For your own evaluator, you need to save your file " "into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work. 
#: ../../source/tutorials/hello_nas.rst:520
msgid "Relaunch the experiment, and a button is shown on the web portal."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:525
msgid "Export Top Models"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:527
msgid ""
"Users can export top models after the exploration is done, using "
"``export_top_models``."
msgstr ""

#: ../../source/tutorials/hello_nas.rst:563
msgid "**Total running time of the script:** ( 2 minutes 15.810 seconds)"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:578
msgid ":download:`Download Python source code: hello_nas.py `"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:584
msgid ":download:`Download Jupyter notebook: hello_nas.ipynb `"
msgstr ""

#: ../../source/tutorials/hello_nas.rst:591
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:338
#: ../../source/tutorials/pruning_quick_start_mnist.rst:357
msgid "`Gallery generated by Sphinx-Gallery `_"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:14
msgid "Click :ref:`here ` to download the full example code"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:23
msgid "NNI HPO Quickstart with PyTorch"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:24
msgid ""
"This tutorial optimizes the model in the `official PyTorch quickstart`_ "
"with auto-tuning."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:26
msgid ""
"There is also a :doc:`TensorFlow version<../hpo_quickstart_tensorflow/main>` "
"if you prefer it."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:28
msgid "The tutorial consists of 4 steps:"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:30
msgid "Modify the model for auto-tuning."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:31
msgid "Define hyperparameters' search space."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:32
msgid "Configure the experiment."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:33
msgid "Run the experiment."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:40
msgid "Step 1: Prepare the model"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:41
msgid "In the first step, we need to prepare the model to be tuned."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:43
msgid ""
"The model should be put in a separate script. It will be evaluated many "
"times concurrently, and possibly will be trained on distributed platforms."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:47
msgid "In this tutorial, the model is defined in :doc:`model.py `."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:49
msgid "In short, it is a PyTorch model with 3 additional API calls:"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:51
msgid ""
"Use :func:`nni.get_next_parameter` to fetch the hyperparameters to be "
"evaluated."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:52
msgid ""
"Use :func:`nni.report_intermediate_result` to report per-epoch accuracy "
"metrics."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:53
msgid "Use :func:`nni.report_final_result` to report final accuracy."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:55
msgid "Please understand the model code before continuing to the next step."
msgstr ""
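
# A rough sketch of how these three calls typically sit in a trial script
# (illustrative only; the default values and the training loop are
# placeholders, not the tutorial's model.py):
#
#     import nni
#
#     # defaults, overridden by the tuner's choices for this trial
#     params = {'features': 512, 'lr': 0.001, 'momentum': 0.5}
#     params.update(nni.get_next_parameter())
#
#     # ... build the model and optimizer from ``params``, then train ...
#     for epoch in range(10):
#         accuracy = 0.0  # placeholder: compute validation accuracy here
#         nni.report_intermediate_result(accuracy)  # per-epoch metric
#     nni.report_final_result(accuracy)             # final metric of this trial
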
msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:60 msgid "Step 2: Define search space" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:61 msgid "" "In model code, we have prepared 3 hyperparameters to be tuned: " "*features*, *lr*, and *momentum*." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:64 msgid "" "Here we need to define their *search space* so the tuning algorithm can " "sample them in desired range." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:66 msgid "Assuming we have following prior knowledge for these hyperparameters:" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:68 msgid "*features* should be one of 128, 256, 512, 1024." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:69 msgid "" "*lr* should be a float between 0.0001 and 0.1, and it follows exponential" " distribution." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:70 msgid "*momentum* should be a float between 0 and 1." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:72 msgid "" "In NNI, the space of *features* is called ``choice``; the space of *lr* " "is called ``loguniform``; and the space of *momentum* is called " "``uniform``. You may have noticed, these names are derived from " "``numpy.random``." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:77 msgid "" "For full specification of search space, check :doc:`the reference " "`." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:79 msgid "Now we can define the search space as follow:" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:102 msgid "Step 3: Configure the experiment" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:103 msgid "" "NNI uses an *experiment* to manage the HPO process. The *experiment " "config* defines how to train the models and how to explore the search " "space." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:106 msgid "" "In this tutorial we use a *local* mode experiment, which means models " "will be trained on local machine, without using any special training " "platform." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:125 msgid "Now we start to configure the experiment." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:128 msgid "Configure trial code" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:129 msgid "" "In NNI evaluation of each hyperparameter set is called a *trial*. So the " "model script is called *trial code*." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:147 msgid "" "When ``trial_code_directory`` is a relative path, it relates to current " "working directory. To run ``main.py`` in a different path, you can set " "trial code directory to ``Path(__file__).parent``. (`__file__ " "`__ is " "only available in standard Python, not in Jupyter Notebook.)" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:154 msgid "" "If you are using Linux system without Conda, you may need to change " "``\"python model.py\"`` to ``\"python3 model.py\"``." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:160 msgid "Configure search space" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:178 msgid "Configure tuning algorithm" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:179 msgid "Here we use :doc:`TPE tuner `." 
msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:198 msgid "Configure how many trials to run" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:199 msgid "" "Here we evaluate 10 sets of hyperparameters in total, and concurrently " "evaluate 2 sets at a time." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:218 msgid "" "``max_trial_number`` is set to 10 here for a fast example. In real world " "it should be set to a larger number. With default config TPE tuner " "requires 20 trials to warm up." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:222 msgid "You may also set ``max_experiment_duration = '1h'`` to limit running time." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:224 msgid "" "If neither ``max_trial_number`` nor ``max_experiment_duration`` are set, " "the experiment will run forever until you press Ctrl-C." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:230 msgid "Step 4: Run the experiment" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:231 msgid "" "Now the experiment is ready. Choose a port and launch it. (Here we use " "port 8080.)" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:233 msgid "" "You can use the web portal to view experiment status: " "http://localhost:8080." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:263 msgid "After the experiment is done" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:264 msgid "Everything is done and it is safe to exit now. The following are optional." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:266 msgid "" "If you are using standard Python instead of Jupyter Notebook, you can add" " ``input()`` or ``signal.pause()`` to prevent Python from exiting, " "allowing you to view the web portal after the experiment is done." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:296 msgid "" ":meth:`nni.experiment.Experiment.stop` is automatically invoked when " "Python exits, so it can be omitted in your code." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:299 msgid "" "After the experiment is stopped, you can run " ":meth:`nni.experiment.Experiment.view` to restart web portal." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:303 msgid "" "This example uses :doc:`Python API ` to create " "experiment." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:305 msgid "" "You can also create and manage experiments with :doc:`command line tool " "`." msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:310 msgid "**Total running time of the script:** ( 1 minutes 24.393 seconds)" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:325 msgid ":download:`Download Python source code: main.py `" msgstr "" #: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:331 msgid ":download:`Download Jupyter notebook: main.ipynb `" msgstr "" #: ../../source/tutorials/pruning_quick_start_mnist.rst:13 msgid "" "Click :ref:`here " "` to download " "the full example code" msgstr "" #: ../../source/tutorials/pruning_quick_start_mnist.rst:22 msgid "Pruning Quickstart" msgstr "" #: ../../source/tutorials/pruning_quick_start_mnist.rst:24 msgid "" "Model pruning is a technique to reduce the model size and computation by " "reducing model weight size or intermediate state size. 
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:263
msgid "After the experiment is done"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:264
msgid "Everything is done and it is safe to exit now. The following are optional."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:266
msgid ""
"If you are using standard Python instead of Jupyter Notebook, you can add "
"``input()`` or ``signal.pause()`` to prevent Python from exiting, allowing "
"you to view the web portal after the experiment is done."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:296
msgid ""
":meth:`nni.experiment.Experiment.stop` is automatically invoked when "
"Python exits, so it can be omitted in your code."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:299
msgid ""
"After the experiment is stopped, you can run "
":meth:`nni.experiment.Experiment.view` to restart the web portal."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:303
msgid ""
"This example uses the :doc:`Python API ` to create the experiment."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:305
msgid ""
"You can also create and manage experiments with the :doc:`command line "
"tool `."
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:310
msgid "**Total running time of the script:** ( 1 minutes 24.393 seconds)"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:325
msgid ":download:`Download Python source code: main.py `"
msgstr ""

#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:331
msgid ":download:`Download Jupyter notebook: main.ipynb `"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:13
msgid "Click :ref:`here ` to download the full example code"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:22
msgid "Pruning Quickstart"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:24
msgid ""
"Model pruning is a technique to reduce the model size and computation by "
"reducing model weight size or intermediate state size. There are three "
"common practices for pruning a DNN model:"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:27
msgid "Pre-training a model -> Pruning the model -> Fine-tuning the pruned model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:28
msgid ""
"Pruning a model during training (i.e., pruning-aware training) -> Fine-"
"tuning the pruned model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:29
msgid "Pruning a model -> Training the pruned model from scratch"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:31
msgid ""
"NNI supports all of the above pruning practices by working on the key "
"pruning stage. Follow this tutorial for a quick look at how to use NNI to "
"prune a model in a common practice."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:37
msgid "Preparation"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:39
msgid ""
"In this tutorial, we use a simple model pre-trained on the MNIST dataset. "
"If you are familiar with defining a model and training it in PyTorch, you "
"can skip directly to `Pruning Model`_."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:121
msgid "Pruning Model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:123
msgid ""
"We use L1NormPruner to prune the model and generate the masks. Usually, a "
"pruner requires the original model and a ``config_list`` as its inputs. "
"For details about how to write a ``config_list``, please refer to the "
":doc:`compression config specification "
"<../compression/compression_config_list>`."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:127
msgid ""
"The following `config_list` means all layers whose type is `Linear` or "
"`Conv2d` will be pruned, except the layer named `fc3`, which is marked "
"`exclude` and therefore will not be pruned. The final sparsity ratio for "
"each pruned layer is 50%."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:153
msgid "Pruners usually require `model` and `config_list` as input arguments."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:232
msgid ""
"Speed up the original model with the masks. Note that `ModelSpeedup` "
"requires an unwrapped model. The model becomes smaller after speedup, and "
"reaches a higher sparsity ratio because `ModelSpeedup` will propagate the "
"masks across layers."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:269
msgid "The model actually becomes smaller after speedup."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:307
msgid "Fine-tuning Compacted Model"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:308
msgid ""
"Note that if the model has been sped up, you need to re-initialize a new "
"optimizer for fine-tuning, because speedup replaces the masked big layers "
"with dense small ones."
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:329
msgid "**Total running time of the script:** ( 0 minutes 58.337 seconds)"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:344
msgid ""
":download:`Download Python source code: pruning_quick_start_mnist.py "
"`"
msgstr ""

#: ../../source/tutorials/pruning_quick_start_mnist.rst:350
msgid ""
":download:`Download Jupyter notebook: pruning_quick_start_mnist.ipynb "
"`"
msgstr ""
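
# A condensed sketch of the pruning workflow described above (illustrative;
# API names follow NNI 2.x compression, and ``model`` stands for the
# pre-trained MNIST network from the Preparation step):
#
#     import torch
#     from nni.compression.pytorch.pruning import L1NormPruner
#     from nni.compression.pytorch.speedup import ModelSpeedup
#
#     config_list = [{
#         'sparsity_per_layer': 0.5,          # prune 50% of each targeted layer
#         'op_types': ['Linear', 'Conv2d'],
#     }, {
#         'exclude': True,
#         'op_names': ['fc3'],                # keep fc3 untouched
#     }]
#
#     pruner = L1NormPruner(model, config_list)
#     _, masks = pruner.compress()            # simulated pruning: weights are masked
#
#     pruner._unwrap_model()                  # ModelSpeedup needs the unwrapped model
#     ModelSpeedup(model, torch.rand(3, 1, 28, 28), masks).speedup_model()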