torchaudio: an audio library for PyTorch
========================================

[![Build Status](https://travis-ci.org/pytorch/audio.svg?branch=master)](https://travis-ci.org/pytorch/audio)

The aim of torchaudio is to apply [PyTorch](https://github.com/pytorch/pytorch) to
the audio domain. By supporting PyTorch, torchaudio follows the same philosophy
of providing strong GPU acceleration, having a focus on trainable features through
the autograd system, and having consistent style (tensor names and dimension names).
Therefore, it is primarily a machine learning library and not a general signal
processing library. The benefits of PyTorch can be seen in torchaudio through
having all of its computations expressed as PyTorch operations, which makes it
easy to use and feel like a natural extension.

- [Support audio I/O (Load files, Save files)](http://pytorch.org/audio/)
  - Load the following formats into a torch Tensor using sox
    - mp3, wav, aac, ogg, flac, avr, cdda, cvs/vms,
    - aiff, au, amr, mp2, mp4, ac3, avi, wmv,
    - mpeg, ircam and any other format supported by libsox.
    - [Kaldi (ark/scp)](http://pytorch.org/audio/kaldi_io.html)
- [Dataloaders for common audio datasets (VCTK, YesNo)](http://pytorch.org/audio/datasets.html)
- Common audio transforms
    - [Spectrogram, SpectrogramToDB, MelScale, MelSpectrogram, MFCC, MuLawEncoding, MuLawDecoding, Resample](http://pytorch.org/audio/transforms.html)
- Compliance interfaces: Run code using PyTorch that aligns with other libraries (see the sketch after this list)
    - [Kaldi: fbank, spectrogram, resample_waveform](https://pytorch.org/audio/compliance.kaldi.html)
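
For example, a minimal sketch of the features above, assuming a local `foo.wav` file (a placeholder path) and parameter names from recent torchaudio releases:

```python
import torchaudio

# Load a file into a (channel, time) tensor via the sox backend.
waveform, sample_rate = torchaudio.load('foo.wav')  # placeholder path

# One of the transforms listed above; MelSpectrogram's defaults assume a
# 16 kHz signal, so the sample rate is passed explicitly here.
mel_specgram = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate)(waveform)
print(mel_specgram.shape)  # (channel, n_mel, time)

# Kaldi-compliant filterbank features computed with PyTorch operations.
fbank = torchaudio.compliance.kaldi.fbank(
    waveform, num_mel_bins=40, sample_frequency=sample_rate)
```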

Dependencies
------------
* pytorch (nightly version needed for development)
* libsox v14.3.2 or above
* [optional] vesis84/kaldi-io-for-python commit cb46cb1f44318a5d04d4941cf39084c5b021241e or above

Quick install on OSX (Homebrew):
```bash
brew install sox
```
Linux (Ubuntu):
```bash
sudo apt-get install sox libsox-dev libsox-fmt-all
```
Anaconda:
```bash
conda install -c conda-forge sox
```

Installation
------------

```bash
# Linux
python setup.py install

# OSX
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
```
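
After installing, a quick sanity check (a minimal sketch; run it outside the source tree so the installed package is the one imported):

```bash
# If this import fails, the sox headers/libraries were likely missing at build time.
python -c "import torchaudio"
```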

Quick Usage
-----------

```python
import torchaudio
sound, sample_rate = torchaudio.load('foo.mp3')
torchaudio.save('foo_save.mp3', sound, sample_rate) # saves tensor to file
```
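
The dataset loaders listed earlier plug into the standard PyTorch data pipeline. A minimal sketch, assuming the small YESNO dataset and a hypothetical `./data` directory; the exact fields returned per example may vary between torchaudio releases:

```python
import torch
import torchaudio

# Download the yes/no speech dataset into ./data (hypothetical path).
dataset = torchaudio.datasets.YESNO('./data', download=True)

# Waveforms differ in length, so batch_size=1 sidesteps padding/collation.
loader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True)

for example in loader:
    # Inspect one example to see its structure (waveform tensor, labels, ...).
    print([type(field) for field in example])
    break
```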

API Reference
-------------

The API reference is located here: http://pytorch.org/audio/

Conventions
-----------

As a machine learning library built on top of PyTorch, torchaudio is standardized
around the following naming conventions. Tensors are assumed to have channel as
the first dimension and time as the last
dimension (when applicable). This makes it consistent with PyTorch's dimensions.
For size names, the prefix `n_` is used (e.g. "a tensor of size (`n_freq`, `n_mel`)"),
whereas dimension names do not have this prefix (e.g. "a tensor of
dimension (channel, time)").

* `waveform`: a tensor of audio samples with dimensions (channel, time)
* `sample_rate`: the sampling rate of the audio (samples per second)
* `specgram`: a spectrogram tensor with dimensions (channel, freq, time)
* `mel_specgram`: a mel spectrogram with dimensions (channel, mel, time)
* `hop_length`: the number of samples between the starts of consecutive frames
* `n_fft`: the size of the FFT (the resulting spectrogram has `n_fft // 2 + 1` frequency bins)
* `n_mel`, `n_mfcc`: the number of mel and MFCC bins
* `n_freq`: the number of bins in a linear spectrogram
* `min_freq`: the lowest frequency of the lowest band in a spectrogram
* `max_freq`: the highest frequency of the highest band in a spectrogram
* `win_length`: the length of the STFT window
* `window_fn`: a function that creates a window tensor, e.g. `torch.hann_window`

Transforms expect the following dimensions.

* `Spectrogram`: (channel, time) -> (channel, freq, time)
* `AmplitudeToDB`: (channel, freq, time) -> (channel, freq, time)
* `MelScale`: (channel, freq, time) -> (channel, mel, time)
* `MelSpectrogram`: (channel, time) -> (channel, mel, time)
* `MFCC`: (channel, time) -> (channel, mfcc, time)
* `MuLawEncoding`: (channel, time) -> (channel, time)
* `MuLawDecoding`: (channel, time) -> (channel, time)
* `Resample`: (channel, time) -> (channel, time)
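
A minimal sketch checking the dimensions above with a synthetic waveform (the values and parameters are arbitrary placeholders, and parameter names follow recent torchaudio releases):

```python
import torch
import torchaudio

sample_rate = 16000
waveform = torch.randn(1, sample_rate)  # one second of fake mono audio, (channel, time)

specgram = torchaudio.transforms.Spectrogram(n_fft=400, hop_length=200)(waveform)
print(specgram.shape)      # (channel, n_freq, time) with n_freq == n_fft // 2 + 1 == 201

mel_specgram = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)(waveform)
print(mel_specgram.shape)  # (channel, n_mel, time)
```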

Contributing Guidelines
-----------------------

Please let us know if you encounter a bug by filing an [issue](https://github.com/pytorch/audio/issues).

We appreciate all contributions. If you are planning to contribute back
bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions or extensions to the
core, please first open an issue and discuss the feature with us. Sending a PR
without discussion might end up resulting in a rejected PR, because we might be
taking the core in a different direction than you might be aware of.