Unverified Commit 56914d4f authored by jpool-nv, committed by GitHub

Update ASP README to highlight default recipe

The recipe was presented after some non-standard API calls, so this change moves the suggested usage up, gives it its own section, and reinforces the suggested usage in the non-standard section.
parent 8cf5ae61
# Introduction to ASP
This serves as a quick-start for ASP (Automatic SParsity), a tool that enables sparse training and inference for PyTorch models by adding 2 lines of Python.
## Importing ASP
```
from apex.contrib.sparsity import ASP
```
Apart from the import statement, it is sufficient to add just the following line:
```
ASP.prune_trained_model(model, optimizer)
```
In the context of a typical PyTorch training loop, it might look like this:
```
ASP.prune_trained_model(model, optimizer)
...
for epoch in range(epochs):
    ...
torch.save(...)
```
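Filling in the elided lines above with a toy setup, a complete sketch might look like the following. The model, data, dimensions, and hyper-parameters here are illustrative assumptions, not part of the README (ASP itself only contributes the import and the `prune_trained_model` call), and a CUDA device is assumed since Sparse Tensor Cores live on the GPU:
```
import torch
from apex.contrib.sparsity import ASP

# Hypothetical dense model and training setup; in practice this would be a
# fully trained network. Dimensions are kept multiples of 16 so ASP's default
# layer whitelist can prune them.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 64),
).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_function = torch.nn.MSELoss()

# The one extra line: compute 2:4 masks and hook the optimizer so that
# masked weights stay zero during fine-tuning.
ASP.prune_trained_model(model, optimizer)

x = torch.randn(128, 64, device="cuda")  # stand-in for a real DataLoader
y = torch.randn(128, 64, device="cuda")
epochs = 10
for epoch in range(epochs):
    optimizer.zero_grad()
    y_pred = model(x)
    loss = loss_function(y_pred, y)
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "sparse_checkpoint.pt")
```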
The `prune_trained_model` step calculates the sparse mask and applies it to the weights. This is done once, i.e., sparse locations in the weights matrix remain fixed after this step.
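To make the fixed 2:4 pattern concrete: after pruning, every group of four consecutive weights along a row holds at most two nonzeros, and those positions no longer change. A small self-contained check, where the helper name is mine and not part of ASP, might look like:
```
import torch

def obeys_2to4(weight: torch.Tensor) -> bool:
    """Return True if every group of 4 consecutive values along the last
    dimension contains at least 2 zeros (the 2:4 sparse pattern)."""
    groups = weight.detach().reshape(-1, 4)
    return bool(((groups == 0).sum(dim=1) >= 2).all())

# A row pruned 2:4 keeps the two largest-magnitude values per group of four.
w = torch.tensor([[0.9, -0.1, 0.0, 0.7, 0.2, 0.0, -0.8, 0.3]])
mask = torch.tensor([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0]])
print(obeys_2to4(w * mask))  # True
```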
## Generate a Sparse Network
The following approach serves as a guiding example on how to generate a pruned model that can use Sparse Tensor Cores in the NVIDIA Ampere Architecture. This approach generates a model for deployment, i.e. inference mode.
```
(1) Given a fully trained (dense) network, prune parameter values in a 2:4 sparse pattern.
(2) Fine-tune the pruned model with the same optimization method and hyper-parameters (learning rate, schedule, number of epochs, etc.) as those used to obtain the trained model.
(3) (If required) Quantize the model.
```
```
...
for epoch in range(epochs): # train the pruned model for the same number of epochs
    ...
torch.save(...) # saves the pruned checkpoint with sparsity masks
```
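Before exporting for deployment, it can be worth confirming that fine-tuning preserved the sparsity. The diagnostic below is a hypothetical helper of mine, not an ASP API; 2:4-pruned layers should report roughly 50% zeros:
```
import torch

def sparsity_report(model: torch.nn.Module) -> None:
    """Print the fraction of zero-valued weights in each prunable layer."""
    for name, module in model.named_modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            zeros = (module.weight.detach() == 0).float().mean().item()
            print(f"{name or type(module).__name__}: {zeros:.0%} zeros")

# Demo on a fake 50%-sparse layer; with ASP, call this on the fine-tuned model.
layer = torch.nn.Linear(64, 64)
with torch.no_grad():
    layer.weight[:, ::2] = 0.0
sparsity_report(layer)
```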
## Non-Standard Usage
If your goal is to easily prepare a network for accelerated inference, please follow the recipe above. However, ASP can also be used to perform experiments with advanced techniques like training with sparsity from initialization. For example, in order to recompute the sparse mask between training steps, use the following method:
```
ASP.compute_sparse_masks()
```
A more thorough example can be found in `./test/toy_problem.py`.
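As one concrete (and deliberately non-standard) use, the sketch below trains with sparsity from random initialization and recomputes the masks after every epoch. The toy setup mirrors the quick-start example above and is an illustrative assumption, not an ASP recommendation:
```
import torch
from apex.contrib.sparsity import ASP

# Untrained model: masks are first computed from random weights, then refreshed.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 64),
).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_function = torch.nn.MSELoss()

ASP.prune_trained_model(model, optimizer)  # here "trained" is only nominal

x = torch.randn(128, 64, device="cuda")
y = torch.randn(128, 64, device="cuda")
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_function(model(x), y)
    loss.backward()
    optimizer.step()
    ASP.compute_sparse_masks()  # let the surviving 2-of-4 locations migrate

torch.save(model.state_dict(), "sparse_from_init.pt")
```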