You can also find recipes [here](https://github.com/k2-fsa/icefall/tree/master/egs/librispeech/ASR/pruned_transducer_stateless) that use `rnnt_loss_pruned` to train a model.
### For rnnt_loss
The unpruned `rnnt_loss` computes the same quantity as torchaudio's `rnnt_loss`; it produces the same output as torchaudio for the same input.
```python
import torch
import fast_rnnt

# Example sizes: batch, symbol-sequence length, number of frames, vocabulary size.
B, S, T, C = 2, 10, 50, 20

logits = torch.randn((B, S, T, C), dtype=torch.float32)
symbols = torch.randint(0, C, (B, S))
termination_symbol = 0

# Per-utterance lengths; here every utterance uses the full S symbols and T frames.
target_lengths = torch.full((B,), S, dtype=torch.int64)
num_frames = torch.full((B,), T, dtype=torch.int64)

boundary = torch.zeros((B, 4), dtype=torch.int64)
boundary[:, 2] = target_lengths
boundary[:, 3] = num_frames

loss = fast_rnnt.rnnt_loss(
    logits=logits,
    symbols=symbols,
    termination_symbol=termination_symbol,
    boundary=boundary,
    reduction="sum",
)
```
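In a real batch the utterances usually have different lengths, so `boundary` is built from per-utterance counts rather than the padded maxima. The sketch below shows this with plain PyTorch; the length values are illustrative, and the exact meaning of the first two columns follows the snippet above (start offsets, left at zero here).

```python
import torch

# Hypothetical per-utterance counts for a batch of 3 (values are illustrative):
# number of symbols in each transcript and number of acoustic frames.
target_lengths = torch.tensor([7, 10, 4], dtype=torch.int64)
num_frames = torch.tensor([50, 48, 30], dtype=torch.int64)
B = target_lengths.shape[0]

# boundary has shape (B, 4); the first two columns are start offsets
# (zero when every utterance starts at the beginning), and columns
# 2 and 3 carry each utterance's symbol count and frame count.
boundary = torch.zeros((B, 4), dtype=torch.int64)
boundary[:, 2] = target_lengths
boundary[:, 3] = num_frames

print(boundary.tolist())
```

The same `boundary` tensor can then be passed to `rnnt_loss` as in the example above, so padded positions beyond each utterance's length do not contribute to the loss.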
## Benchmarking
The [repo](https://github.com/csukuangfj/transducer-loss-benchmarking) compares the speed and memory usage of several transducer losses. The summary in the following table is taken from there; see the repository for more details.
|Name |Average step time (us) | Peak memory usage (MB)|