torchaudio: an audio library for PyTorch
========================================

[![Build Status](https://circleci.com/gh/pytorch/audio.svg?style=svg)](https://app.circleci.com/pipelines/github/pytorch/audio)
[![Coverage](https://codecov.io/gh/pytorch/audio/branch/master/graph/badge.svg)](https://codecov.io/gh/pytorch/audio)
[![Documentation](https://img.shields.io/badge/dynamic/json.svg?label=docs&url=https%3A%2F%2Fpypi.org%2Fpypi%2Ftorchaudio%2Fjson&query=%24.info.version&colorB=brightgreen&prefix=v)](https://pytorch.org/audio/)

The aim of torchaudio is to apply [PyTorch](https://github.com/pytorch/pytorch) to
the audio domain. By supporting PyTorch, torchaudio follows the same philosophy
of providing strong GPU acceleration, having a focus on trainable features through
the autograd system, and having consistent style (tensor names and dimension names).
Therefore, it is primarily a machine learning library and not a general signal
processing library. The benefits of PyTorch can be seen in torchaudio through
having all the computations routed through PyTorch operations, which makes it
easy to use and feel like a natural extension.

- [Support audio I/O (Load files, Save files)](http://pytorch.org/audio/)
  - Load the following formats into a torch Tensor using SoX
    - mp3, wav, aac, ogg, flac, avr, cdda, cvs/vms,
    - aiff, au, amr, mp2, mp4, ac3, avi, wmv,
    - mpeg, ircam and any other format supported by libsox.
    - [Kaldi (ark/scp)](http://pytorch.org/audio/kaldi_io.html)
- [Dataloaders for common audio datasets (VCTK, YesNo)](http://pytorch.org/audio/datasets.html)
- Common audio transforms
    - [Spectrogram, AmplitudeToDB, MelScale, MelSpectrogram, MFCC, MuLawEncoding, MuLawDecoding, Resample](http://pytorch.org/audio/transforms.html)
- Compliance interfaces: Run code using PyTorch that aligns with other libraries
    - [Kaldi: spectrogram, fbank, mfcc, resample_waveform](https://pytorch.org/audio/compliance.kaldi.html)
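
For example, here is a minimal sketch of computing Kaldi-compatible filterbank features through the compliance interface (the file name `foo.wav` and the parameter values are illustrative only, not requirements):

```python
import torchaudio

# Load an audio file; the returned waveform has dimension (channel, time)
waveform, sample_rate = torchaudio.load('foo.wav')

# Kaldi-compatible filterbank features of dimension (frame, num_mel_bins)
fbank = torchaudio.compliance.kaldi.fbank(
    waveform,
    num_mel_bins=40,
    sample_frequency=sample_rate,
)
```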

Dependencies
------------
* pytorch (nightly version needed for development)
* libsox v14.3.2 or above (only required when building from source)
* [optional] vesis84/kaldi-io-for-python commit cb46cb1f44318a5d04d4941cf39084c5b021241e or above

Installation
------------

### Binary Distributions

To install the latest version using anaconda, run:

```
conda install -c pytorch torchaudio
```

To install the latest pip wheels, run:

```
pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html
```

(If you do not have torch already installed, this will default to installing
torch from PyPI. If you need a different torch configuration, preinstall torch
before running this command.)

### Nightly build

Note that the nightly build of torchaudio is built on top of PyTorch's nightly build. Therefore, you need to install the latest PyTorch nightly when you use the nightly build of torchaudio.

**pip**

```
pip install numpy
pip install --pre torchaudio -f https://download.pytorch.org/whl/nightly/torch_nightly.html
```

**conda**

```
conda install -y -c pytorch-nightly torchaudio
```

### From Source

If your system configuration is not among the supported configurations
above, you can build torchaudio from source.

This will require libsox v14.3.2 or above.

<Details><Summary>Click here for the examples on how to install SoX</Summary>

OSX (Homebrew):
```bash
brew install sox
```

Linux (Ubuntu):
```bash
sudo apt-get install sox libsox-dev libsox-fmt-all
```

Anaconda:
```bash
conda install -c conda-forge sox
```

</Details>

```bash
# Linux
python setup.py install

# OSX
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
```

Alternatively, the build process can build SoX (and codecs such as libmad, lame and flac) statically, and torchaudio can link against them, by setting the environment variable `BUILD_SOX=1`.
The build process will fetch and build SoX, lame, libmad, and flac before building the extension.

```bash
# Linux
BUILD_SOX=1 python setup.py install

# OSX
BUILD_SOX=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
```

This is known to work on Linux distributions such as Ubuntu and CentOS 7, as well as on macOS.
If you try this on a new system and find a solution that makes it work, feel free to share it by opening an issue.

#### Troubleshooting

<Details><Summary>checking build system type... ./config.guess: unable to guess system type</Summary>

Since the configuration files for the codecs are old, they cannot correctly detect newer environments, such as Jetson (AArch64). You need to replace the `config.guess` file in `./third_party/tmp/lame-3.99.5/config.guess` and/or `./third_party/tmp/libmad-0.15.1b/config.guess` with [the latest one](https://github.com/gcc-mirror/gcc/blob/master/config.guess).

See also: [#658](https://github.com/pytorch/audio/issues/658)

</Details>

<Details><Summary>Undefined reference to `tgetnum' when using `BUILD_SOX`</Summary>

If, while building from within an Anaconda environment, you come across errors similar to the following:

```
../bin/ld: console.c:(.text+0xc1): undefined reference to `tgetnum'
```

Install `ncurses` from `conda-forge` before running `python setup.py install`:

```
# Install ncurses from conda-forge
conda install -c conda-forge ncurses
```

</Details>

Quick Usage
-----------

```python
import torchaudio

waveform, sample_rate = torchaudio.load('foo.wav')  # load tensor from file
torchaudio.save('foo_save.wav', waveform, sample_rate)  # save tensor to file
```
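
The loaded waveform can be passed straight into the transforms listed above. A minimal sketch (the parameter values are illustrative only):

```python
import torchaudio

waveform, sample_rate = torchaudio.load('foo.wav')  # (channel, time)

# Mel spectrogram of dimension (channel, n_mels, time)
mel_specgram = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate)(waveform)

# Resample to 8 kHz; the result is still (channel, time)
resampled = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=8000)(waveform)
```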

Backend Dispatch
----------------

By default on OSX and Linux, torchaudio uses SoX as a backend to load and save files.
The backend can be changed to [SoundFile](https://pysoundfile.readthedocs.io/en/latest/)
using the following. See [SoundFile](https://pysoundfile.readthedocs.io/en/latest/)
for installation instructions.

```python
import torchaudio
torchaudio.set_audio_backend("soundfile")  # switch backend

waveform, sample_rate = torchaudio.load('foo.wav')  # load tensor from file, as usual
torchaudio.save('foo_save.wav', waveform, sample_rate)  # save tensor to file, as usual
```

Unlike SoX, SoundFile does not currently support mp3.

API Reference
-------------

The API reference is located here: http://pytorch.org/audio/

Conventions
-----------

Since torchaudio is a machine learning library built on top of PyTorch, it is
standardized around the following naming conventions. Tensors are assumed to
have channel as the first dimension and time as the last dimension (when
applicable). This makes it consistent with PyTorch's dimensions.
For size names, the prefix `n_` is used (e.g. "a tensor of size (`n_freq`, `n_mel`)"),
whereas dimension names do not have this prefix (e.g. "a tensor of
dimension (channel, time)").

* `waveform`: a tensor of audio samples with dimensions (channel, time)
* `sample_rate`: the rate of the audio (samples per second)
* `specgram`: a spectrogram tensor with dimensions (channel, freq, time)
* `mel_specgram`: a mel spectrogram with dimensions (channel, mel, time)
* `hop_length`: the number of samples between the starts of consecutive frames
* `n_fft`: the number of Fourier bins
* `n_mel`, `n_mfcc`: the number of mel and MFCC bins
* `n_freq`: the number of bins in a linear spectrogram
* `min_freq`: the lowest frequency of the lowest band in a spectrogram
* `max_freq`: the highest frequency of the highest band in a spectrogram
* `win_length`: the length of the STFT window
* `window_fn`: for functions that create windows, e.g. `torch.hann_window`

Transforms expect and return the following dimensions.

* `Spectrogram`: (channel, time) -> (channel, freq, time)
* `AmplitudeToDB`: (channel, freq, time) -> (channel, freq, time)
* `MelScale`: (channel, freq, time) -> (channel, mel, time)
* `MelSpectrogram`: (channel, time) -> (channel, mel, time)
* `MFCC`: (channel, time) -> (channel, mfcc, time)
* `MuLawEncoding`: (channel, time) -> (channel, time)
* `MuLawDecoding`: (channel, time) -> (channel, time)
* `Resample`: (channel, time) -> (channel, time)
* `Fade`: (channel, time) -> (channel, time)
* `Vol`: (channel, time) -> (channel, time)
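
A minimal shape check illustrating these conventions (a sketch, assuming a fake two-channel waveform and mostly default parameters):

```python
import torch
import torchaudio

# A fake stereo waveform of dimension (channel, time)
waveform = torch.randn(2, 16000)

# Spectrogram: (channel, time) -> (channel, freq, time), with n_freq = n_fft // 2 + 1
specgram = torchaudio.transforms.Spectrogram(n_fft=400)(waveform)
assert specgram.shape == (2, 201, specgram.shape[-1])

# MelScale: (channel, freq, time) -> (channel, mel, time)
mel_specgram = torchaudio.transforms.MelScale(n_mels=128, sample_rate=16000, n_stft=201)(specgram)
assert mel_specgram.shape == (2, 128, specgram.shape[-1])
```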

Complex numbers are supported via tensors of dimension (..., 2), and torchaudio provides `complex_norm` and `angle` to convert such a tensor into its magnitude and phase. Here, and in the documentation, we use an ellipsis "..." as a placeholder for the rest of the dimensions of a tensor, e.g. optional batching and channel dimensions.
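
As a minimal sketch of this layout (using a randomly generated tensor in place of a real complex spectrogram):

```python
import torch
import torchaudio

# A fake complex spectrogram of dimension (channel, freq, time, 2);
# the last dimension holds the real and imaginary parts
complex_specgram = torch.randn(1, 201, 100, 2)

magnitude = torchaudio.functional.complex_norm(complex_specgram, power=1.0)  # (1, 201, 100)
phase = torchaudio.functional.angle(complex_specgram)                        # (1, 201, 100)
```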

Contributing Guidelines
-----------------------

Please let us know if you encounter a bug by filing an [issue](https://github.com/pytorch/audio/issues).

We appreciate all contributions. If you are planning to contribute back
bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions or extensions to the
core, please first open an issue and discuss the feature with us. Sending a PR
without discussion might end up resulting in a rejected PR, because we might be
taking the core in a different direction than you might be aware of.

Disclaimer on Datasets
----------------------

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!