Commit 8844fbb7 authored by moto, committed by Facebook GitHub Bot

Add notes about prototype features in tutorials (#2288)

Summary: Pull Request resolved: https://github.com/pytorch/audio/pull/2288

Reviewed By: hwangjeff

Differential Revision: D35099492

Pulled By: mthrok

fbshipit-source-id: 955c5e617469009ae2600d2764d601d794ee916f
parent 7444f568
@@ -11,6 +11,21 @@ using CTC loss.
 """
+######################################################################
+#
+# .. note::
+#
+#    This tutorial requires torchaudio prototype features.
+#
+#    torchaudio prototype features are available on nightly builds.
+#    Please refer to https://pytorch.org/get-started/locally/
+#    for instructions.
+#
+#    The interfaces of prototype features are unstable and subject to
+#    change. Please refer to `the nightly build documentation
+#    <https://pytorch.org/audio/main/>`__ for the up-to-date
+#    API references.
+#
 ######################################################################
 # Overview
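(Editorial aside, not part of the diff.) The note added above requires a nightly build of torchaudio, since prototype features only ship there. A minimal sketch for checking that the installed build is a nightly one, assuming nightly versions carry a ".dev" suffix in their version string:

import torchaudio

# Nightly (dev) builds carry a ".dev" suffix in the version string,
# e.g. "0.12.0.dev20220325" (illustrative value, not taken from the commit).
print(torchaudio.__version__)
assert "dev" in torchaudio.__version__, "prototype features require a nightly build of torchaudio"

For the actual install command, follow https://pytorch.org/get-started/locally/ and select the Preview (Nightly) build, as the note itself instructs.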
@@ -10,28 +10,22 @@ to perform online speech recognition.
 """

 ######################################################################
-# 1. Overview
-# -----------
 #
-# Performing online speech recognition is composed of the following steps
+# .. note::
 #
-# 1. Build the inference pipeline
-#    Emformer RNN-T is composed of three components: feature extractor,
-#    decoder and token processor.
-# 2. Format the waveform into chunks of expected sizes.
-# 3. Pass data through the pipeline.
-
-######################################################################
-# 2. Preparation
-# --------------
+#    This tutorial requires torchaudio with prototype features and
+#    FFmpeg libraries (>=4.1).
 #
-
-######################################################################
+#    torchaudio prototype features are available on nightly builds.
+#    Please refer to https://pytorch.org/get-started/locally/
+#    for instructions.
 #
-# .. note::
+#    The interfaces of prototype features are unstable and subject to
+#    change. Please refer to `the nightly build documentation
+#    <https://pytorch.org/audio/main/>`__ for the up-to-date
+#    API references.
 #
-#    The streaming API requires FFmpeg libraries (>=4.1).
-#
+#    There are multiple ways to install FFmpeg libraries.
 #    If you are using Anaconda Python distribution,
 #    ``conda install -c anaconda ffmpeg`` will install
 #    the required libraries.
@@ -44,6 +38,23 @@ to perform online speech recognition.
 #    !add-apt-repository -y ppa:savoury1/ffmpeg4
 #    !apt-get -qq install -y ffmpeg

+######################################################################
+# 1. Overview
+# -----------
+#
+# Performing online speech recognition is composed of the following steps
+#
+# 1. Build the inference pipeline
+#    Emformer RNN-T is composed of three components: feature extractor,
+#    decoder and token processor.
+# 2. Format the waveform into chunks of expected sizes.
+# 3. Pass data through the pipeline.
+
+######################################################################
+# 2. Preparation
+# --------------
+#
+
 import IPython
 import torch
 import torchaudio
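(Editorial aside, not part of the diff.) The Overview steps being moved above describe the streaming inference loop of this tutorial. Below is a hedged sketch of those three steps, assuming the prototype EMFORMER_RNNT_BASE_LIBRISPEECH bundle and the accessor and ``infer`` signatures documented in the nightly builds the note points to; names and signatures may differ in other versions, and the real tutorial uses a context-caching helper instead of the simple padding shown here.

import torch
import torchaudio

# Assumption: the prototype Emformer RNN-T bundle and its accessors, as
# documented in the nightly builds around this change.
bundle = torchaudio.prototype.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH

# Step 1: build the inference pipeline from its three components.
feature_extractor = bundle.get_streaming_feature_extractor()
decoder = bundle.get_decoder()
token_processor = bundle.get_token_processor()

# Step 2: chunk sizes (in samples) expected by the streaming feature extractor.
segment_length = bundle.segment_length * bundle.hop_length
context_length = bundle.right_context_length * bundle.hop_length

waveform, sample_rate = torchaudio.load("speech.wav")  # hypothetical input file
assert sample_rate == bundle.sample_rate
waveform = waveform[0]  # use the first channel

state, hypothesis = None, None
with torch.no_grad():
    for start in range(0, waveform.numel(), segment_length):
        # Step 2 (cont.): cut a segment plus right context, zero-padding the last one.
        segment = waveform[start : start + segment_length + context_length]
        segment = torch.nn.functional.pad(
            segment, (0, segment_length + context_length - segment.numel())
        )
        # Step 3: pass the segment through the pipeline, carrying decoder state
        # and hypotheses across chunks.
        features, length = feature_extractor(segment)
        hypos, state = decoder.infer(features, length, 10, state=state, hypothesis=hypothesis)
        hypothesis = hypos
        print(token_processor(hypos[0][0]))  # running transcript after each chunk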
@@ -8,6 +8,35 @@ libavfilter provides.
 """

+######################################################################
+#
+# .. note::
+#
+#    This tutorial requires torchaudio with prototype features and
+#    FFmpeg libraries (>=4.1).
+#
+#    The torchaudio prototype features are available on nightly builds.
+#    Please refer to https://pytorch.org/get-started/locally/
+#    for instructions.
+#
+#    The interfaces of prototype features are unstable and subject to
+#    change. Please refer to `the nightly build documentation
+#    <https://pytorch.org/audio/main/>`__ for the up-to-date
+#    API references.
+#
+#    There are multiple ways to install FFmpeg libraries.
+#    If you are using Anaconda Python distribution,
+#    ``conda install -c anaconda ffmpeg`` will install
+#    the required libraries.
+#
+#    When running this tutorial in Google Colab, the following
+#    command should do.
+#
+#    .. code::
+#
+#       !add-apt-repository -y ppa:savoury1/ffmpeg4
+#       !apt-get -qq install -y ffmpeg
+
 ######################################################################
 # 1. Overview
 # -----------
@@ -44,24 +73,6 @@ libavfilter provides.
 # --------------
 #

-######################################################################
-#
-# .. note::
-#
-#    The streaming API requires FFmpeg libraries (>=4.1).
-#
-#    If you are using Anaconda Python distribution,
-#    ``conda install -c anaconda ffmpeg`` will install
-#    the required libraries.
-#
-#    When running this tutorial in Google Colab, the following
-#    command should do.
-#
-#    .. code::
-#
-#       !add-apt-repository -y ppa:savoury1/ffmpeg4
-#       !apt-get -qq install -y ffmpeg
-
 import IPython
 import matplotlib.pyplot as plt
 import torch
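(Editorial aside, not part of the diff.) This last tutorial covers the prototype media streaming API built on FFmpeg/libavfilter. A minimal sketch of chunked audio decoding, written against the interface that later stabilized as torchaudio.io.StreamReader; in the nightly builds this commit targets, the class lived under a prototype namespace, so take the exact import from the nightly API reference linked in the note.

from torchaudio.io import StreamReader  # assumed import; see the nightly docs for the prototype name

# Open a media source (local file, URL, or device); FFmpeg handles decoding.
reader = StreamReader("example.wav")  # hypothetical input file

# Request audio in fixed-size chunks, resampled to 16 kHz on the way out.
reader.add_basic_audio_stream(frames_per_chunk=16000, sample_rate=16000)

for (chunk,) in reader.stream():
    print(chunk.shape)  # each chunk is a Tensor of shape (frames, channels)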