Unverified Commit 33a655fd authored by Caroline Chen, committed by GitHub

Update RNNT Loss docs and add example (#1835)

parent 274ada80
@@ -1764,17 +1764,17 @@ def rnnt_loss(
         dependencies.
     Args:
-        logits (Tensor): Tensor of dimension (batch, max seq length, max target length + 1, class)
+        logits (Tensor): Tensor of dimension `(batch, max seq length, max target length + 1, class)`
             containing output from joiner
-        targets (Tensor): Tensor of dimension (batch, max target length) containing targets with zero padded
-        logit_lengths (Tensor): Tensor of dimension (batch) containing lengths of each sequence from encoder
-        target_lengths (Tensor): Tensor of dimension (batch) containing lengths of targets for each sequence
+        targets (Tensor): Tensor of dimension `(batch, max target length)` containing targets with zero padded
+        logit_lengths (Tensor): Tensor of dimension `(batch)` containing lengths of each sequence from encoder
+        target_lengths (Tensor): Tensor of dimension `(batch)` containing lengths of targets for each sequence
         blank (int, optional): blank label (Default: ``-1``)
         clamp (float, optional): clamp for gradients (Default: ``-1``)
         reduction (string, optional): Specifies the reduction to apply to the output:
             ``'none'`` | ``'mean'`` | ``'sum'``. (Default: ``'mean'``)
     Returns:
-        Tensor: Loss with the reduction option applied. If ``reduction`` is ``'none'``, then size (batch),
+        Tensor: Loss with the reduction option applied. If ``reduction`` is ``'none'``, then size `(batch)`,
         otherwise scalar.
     """
     if reduction not in ['none', 'mean', 'sum']:
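For quick reference, here is a minimal usage sketch of the functional form documented in the hunk above. It assumes the function is exposed as `torchaudio.functional.rnnt_loss` with the parameters listed in the docstring; the tensor values and shapes are hypothetical.

```python
import torch
import torchaudio.functional as F

# Hypothetical shapes following the docstring:
# logits is (batch, max seq length, max target length + 1, class).
batch, seq_len, target_len, num_classes = 1, 2, 2, 5
logits = torch.rand(batch, seq_len, target_len + 1, num_classes, requires_grad=True)
targets = torch.randint(1, num_classes, (batch, target_len), dtype=torch.int32)
logit_lengths = torch.tensor([seq_len], dtype=torch.int32)
target_lengths = torch.tensor([target_len], dtype=torch.int32)

# blank=0 mirrors the class example added in this commit; the documented default is -1.
loss = F.rnnt_loss(logits, targets, logit_lengths, target_lengths, blank=0)
loss.backward()
```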
@@ -1450,6 +1450,23 @@ class RNNTLoss(torch.nn.Module):
         clamp (float, optional): clamp for gradients (Default: ``-1``)
         reduction (string, optional): Specifies the reduction to apply to the output:
             ``'none'`` | ``'mean'`` | ``'sum'``. (Default: ``'mean'``)
+
+    Example
+        >>> # Hypothetical values
+        >>> logits = torch.tensor([[[[0.1, 0.6, 0.1, 0.1, 0.1],
+        >>>                          [0.1, 0.1, 0.6, 0.1, 0.1],
+        >>>                          [0.1, 0.1, 0.2, 0.8, 0.1]],
+        >>>                         [[0.1, 0.6, 0.1, 0.1, 0.1],
+        >>>                          [0.1, 0.1, 0.2, 0.1, 0.1],
+        >>>                          [0.7, 0.1, 0.2, 0.1, 0.1]]]],
+        >>>                        dtype=torch.float32,
+        >>>                        requires_grad=True)
+        >>> targets = torch.tensor([[1, 2]], dtype=torch.int)
+        >>> logit_lengths = torch.tensor([2], dtype=torch.int)
+        >>> target_lengths = torch.tensor([2], dtype=torch.int)
+        >>> transform = transforms.RNNTLoss(blank=0)
+        >>> loss = transform(logits, targets, logit_lengths, target_lengths)
+        >>> loss.backward()
     """
     def __init__(
@@ -1472,11 +1489,11 @@ class RNNTLoss(torch.nn.Module):
     ):
         """
         Args:
-            logits (Tensor): Tensor of dimension (batch, max seq length, max target length + 1, class)
+            logits (Tensor): Tensor of dimension `(batch, max seq length, max target length + 1, class)`
                 containing output from joiner
-            targets (Tensor): Tensor of dimension (batch, max target length) containing targets with zero padded
-            logit_lengths (Tensor): Tensor of dimension (batch) containing lengths of each sequence from encoder
-            target_lengths (Tensor): Tensor of dimension (batch) containing lengths of targets for each sequence
+            targets (Tensor): Tensor of dimension `(batch, max target length)` containing targets with zero padded
+            logit_lengths (Tensor): Tensor of dimension `(batch)` containing lengths of each sequence from encoder
+            target_lengths (Tensor): Tensor of dimension `(batch)` containing lengths of targets for each sequence
         Returns:
             Tensor: Loss with the reduction option applied. If ``reduction`` is ``'none'``, then size (batch),
             otherwise scalar.
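The `reduction` argument documented above determines whether a per-utterance loss or a single scalar comes back. A small sketch of the difference, again with hypothetical random inputs, assuming the module lives at `torchaudio.transforms.RNNTLoss` as in the example added by this commit:

```python
import torch
from torchaudio.transforms import RNNTLoss

# Hypothetical batch of two utterances.
batch, seq_len, target_len, num_classes = 2, 3, 2, 5
logits = torch.rand(batch, seq_len, target_len + 1, num_classes, requires_grad=True)
targets = torch.randint(1, num_classes, (batch, target_len), dtype=torch.int32)
logit_lengths = torch.full((batch,), seq_len, dtype=torch.int32)
target_lengths = torch.full((batch,), target_len, dtype=torch.int32)

# reduction='none' keeps one loss per batch element, i.e. shape (batch,).
per_item = RNNTLoss(blank=0, reduction='none')(logits, targets, logit_lengths, target_lengths)
# reduction='mean' (the default) collapses those values to a scalar.
scalar = RNNTLoss(blank=0)(logits, targets, logit_lengths, target_lengths)
```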