Commit 182fe8de authored by pkufool

Minor fixes

parent 663cd235
This project implements a method for faster and more memory-efficient RNN-T loss computation, called `pruned rnnt`.
Note: There is also a fast RNN-T loss implementation in the [k2](https://github.com/k2-fsa/k2) project, which shares the same code as this repository. We make `fast_rnnt` a stand-alone project in case someone wants only this RNN-T loss.
## How does the pruned-rnnt work?
@@ -90,6 +90,8 @@ and describe your problem there.
This is a simple case of the RNN-T loss, where the joiner network is just
addition.
Note: `termination_symbol` plays the role of the blank symbol in other RNN-T loss implementations; we call it `termination_symbol` because it terminates the symbols of the current frame.
```python
am = torch.randn((B, T, C), dtype=torch.float32)
lm = torch.randn((B, S + 1, C), dtype=torch.float32)
...
```
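The snippet above is truncated in this diff view. As context, a minimal sketch of how the full simple-loss example might continue is shown below; the concrete sizes, the `symbols`/`boundary` construction, and the keyword arguments to `fast_rnnt.rnnt_loss_simple` are assumptions based on the k2/fast_rnnt API, not a verbatim copy of the README.

```python
import torch
import fast_rnnt

# Illustrative sizes (assumed): batch, max symbols, frames, vocab size.
B, S, T, C = 2, 10, 50, 30

am = torch.randn((B, T, C), dtype=torch.float32)      # acoustic (encoder) output
lm = torch.randn((B, S + 1, C), dtype=torch.float32)  # label (prediction) output
symbols = torch.randint(1, C, (B, S), dtype=torch.int64)
termination_symbol = 0  # plays the role of blank in other RNN-T implementations

# Assumed boundary layout per row: [begin_symbol, begin_frame, end_symbol, end_frame].
boundary = torch.zeros((B, 4), dtype=torch.int64)
boundary[:, 2] = S
boundary[:, 3] = T

# Simple case: the joiner is just am + lm, with no extra joiner network.
loss = fast_rnnt.rnnt_loss_simple(
    lm=lm,
    am=am,
    symbols=symbols,
    termination_symbol=termination_symbol,
    boundary=boundary,
    reduction="sum",
)
```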