# Fast Discounted Cumulative Sums in PyTorch
[PyPI](https://pypi.org/project/torch-discounted-cumsum/) · [Downloads](https://pepy.tech/project/torch-discounted-cumsum) · [Code License: BSD 3-Clause](LICENSE_code) · [Docs License: CC BY 4.0](LICENSE_doc)

This repository implements an efficient parallel algorithm for computing discounted cumulative sums, together with a
Python package providing differentiable bindings to PyTorch. The discounted `cumsum` operation is frequently
encountered in data science domains concerned with time series, including Reinforcement Learning (RL).
The traditional sequential algorithm computes the output elements one by one in a loop. For an input of size
`N`, it requires `O(N)` operations and takes `O(N)` time steps to complete.
The proposed parallel algorithm requires a total of `O(N log N)` operations, but takes only `O(log N)` time steps, a
trade-off that pays off in many applications involving large inputs.
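Concretely, the right-directed variant computes `y[i] = x[i] + gamma * x[i+1] + gamma^2 * x[i+2] + ...`. A minimal
sequential sketch in plain Python (for illustration only; the package's CPU path is implemented in C++):
```python
def discounted_cumsum_right_naive(x, gamma):
    # Sequential O(N) reference: traverse right-to-left using the
    # recurrence y[i] = x[i] + gamma * y[i + 1].
    y = list(x)
    for i in range(len(y) - 2, -1, -1):
        y[i] += gamma * y[i + 1]
    return y
```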
Features of the parallel algorithm:
- Speed logarithmic in the input size
- Better numerical precision than the sequential algorithm (tree-like accumulation incurs less rounding error)
Features of the package:
- CPU: sequential algorithm in C++
- GPU: parallel algorithm in CUDA
- Gradient computation with respect to the input
- Both left and right directions of summation supported
- PyTorch bindings
## Usage
#### Installation
```shell script
pip install torch-discounted-cumsum
```
#### API
- `discounted_cumsum_right`: Computes discounted cumulative sums to the right of each position (a standard setting in RL)
- `discounted_cumsum_left`: Computes discounted cumulative sums to the left of each position
#### Example
```python
import torch
from torch_discounted_cumsum import discounted_cumsum_right

N = 8
gamma = 0.99
x = torch.ones(1, N).cuda()  # 2-D input; sums run along dim 1, a CUDA tensor selects the parallel kernel
y = discounted_cumsum_right(x, gamma)
print(y)
```
Output:
```
tensor([[7.7255, 6.7935, 5.8520, 4.9010, 3.9404, 2.9701, 1.9900, 1.0000]],
device='cuda:0')
```
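#### Gradients
Since the bindings are differentiable, gradients with respect to the input flow through the operation as with any
other PyTorch function; a minimal sketch:
```python
import torch
from torch_discounted_cumsum import discounted_cumsum_right

# Create the leaf tensor directly on the GPU so that x.grad is populated.
x = torch.ones(1, 8, device='cuda', requires_grad=True)
y = discounted_cumsum_right(x, 0.99)
y.sum().backward()
print(x.grad)
```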
#### Up to `K` elements
```python
import torch
from torch_discounted_cumsum import discounted_cumsum_right

N = 8
K = 2
gamma = 0.99
x = torch.ones(1, N).cuda()
y_N = discounted_cumsum_right(x, gamma)
# Truncate each sum to its first K terms: y_K[i] = y_N[i] - gamma^K * y_N[i+K],
# where y_N[i+K] is taken as zero past the end of the sequence.
y_K = y_N - (gamma ** K) * torch.cat((y_N[:, K:], torch.zeros(1, K).cuda()), dim=1)
print(y_K)
```
Output:
```
tensor([[1.9900, 1.9900, 1.9900, 1.9900, 1.9900, 1.9900, 1.9900, 1.0000]],
device='cuda:0')
```
## Parallel Algorithm
For the sake of simplicity, the algorithm is explained for `N=16`.
The processing is performed in place in the input vector in `log2 N` stages. Each stage updates `N / 2` positions in
parallel (that is, in a single time step, assuming unrestricted parallelism). A stage is characterized by the size of
the group of sequential elements being updated, computed as `2 ^ (stage - 1)`; the group stride is always twice the
group size. The elements updated during a stage are highlighted with the respective stage color in the figure below.
Input elements are denoted by their position ids in hex, and elements tagged with two symbols indicate the range over
which the discounted partial sum has been computed upon stage completion.
Each element update adds, in place, a discounted copy of the element that immediately follows the last updated element
of the group. The discount factor is gamma raised to the power of the distance between the updated and the discounted
elements. In the figure below, this operation is denoted by tilted arrows with a Greek gamma tag. After the last stage
completes, the output is written in place of the input.
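The staged update can be sketched in plain Python as follows (an illustration assuming the input length is a power of
two; in the CUDA kernel, the two inner loops run in parallel across the `N / 2` updated positions):
```python
def discounted_cumsum_right_staged(x, gamma):
    x = list(x)
    n = len(x)  # assumed to be a power of two in this sketch
    group = 1   # group size doubles each stage: 1, 2, 4, ...
    while group < n:
        for start in range(0, n, 2 * group):
            src = start + group  # element following the last updated element of the group
            for i in range(start, start + group):
                # Discount by gamma raised to the distance between the
                # updated and the discounted elements.
                x[i] += gamma ** (src - i) * x[src]
        group *= 2
    return x

print(discounted_cumsum_right_staged([1.0] * 8, 0.99))  # same values as the package output above
```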