"...git@developer.sourcefind.cn:OpenDAS/lmdeploy.git" did not exist on "d29b70ae850252ee85aaf1f509f9cc2b9115c4dc"
Commit ae3070cc authored by jamarshon, committed by cpuhrsch

README updates (#180)

parent 33bc3581
@@ -3,6 +3,15 @@ torchaudio: an audio library for PyTorch
[![Build Status](https://travis-ci.org/pytorch/audio.svg?branch=master)](https://travis-ci.org/pytorch/audio)
The aim of torchaudio is to apply [PyTorch](https://github.com/pytorch/pytorch) to
the audio domain. By supporting PyTorch, torchaudio will follow the same philosophy
of providing strong GPU acceleration, having a focus on trainable features through
the autograd system, and having consistent style (tensor names and dimension names).
Therefore, it will be primarily a machine learning library and not a general signal
processing library. The benefits of PyTorch can be seen in torchaudio through
having all the computations go through PyTorch operations, which makes it easy
to use and feel like a natural extension.
- [Support audio I/O (Load files, Save files)](http://pytorch.org/audio/) (see the example sketch below)
  - Load the following formats into a torch Tensor
    - mp3, wav, aac, ogg, flac, avr, cdda, cvs/vms,
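As a quick illustration of the I/O above, here is a minimal sketch. `foo.wav` is a placeholder filename, the loaded tensor is assumed to follow the (channel, time) convention described under Conventions, and the exact keyword arguments of `torchaudio.load`/`torchaudio.save` have varied across releases.

```python
import torchaudio

# Load an audio file into a torch.Tensor together with its sample rate.
# "foo.wav" is a placeholder path; the tensor is assumed to have shape (channel, time).
waveform, sample_rate = torchaudio.load("foo.wav")
print(waveform.size(), sample_rate)

# Save the (possibly modified) tensor back to disk at the same sample rate.
torchaudio.save("foo_copy.wav", waveform, sample_rate)
```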
@@ -63,28 +72,47 @@ API Reference is located here: http://pytorch.org/audio/
Conventions
-----------
With torchaudio being a machine learning library built on top of PyTorch,
torchaudio is standardized around the following naming conventions. In particular,
tensors are assumed to have channel as the first dimension and time as the last
dimension (when applicable). This makes it consistent with PyTorch's dimensions.
For size names, the prefix `n_` is used (e.g. "a tensor of size (`n_freq`, `n_mel`)"),
whereas dimension names do not have this prefix (e.g. "a tensor of
dimension (channel, time)").

* `waveform`: a tensor of audio samples with dimensions (channel, time)
* `sample_rate`: the rate of the audio (samples per second)
* `specgram`: a tensor of spectrogram with dimensions (channel, freq, time)
* `mel_specgram`: a mel spectrogram with dimensions (channel, mel, time)
* `hop_length`: the number of samples between the starts of consecutive frames
* `n_fft`: the number of Fourier bins
* `n_mel`, `n_mfcc`: the number of mel and MFCC bins
* `n_freq`: the number of bins in a linear spectrogram
* `min_freq`: the lowest frequency of the lowest band in a spectrogram
* `max_freq`: the highest frequency of the highest band in a spectrogram
* `win_length`: the length of the STFT window
* `window_fn`: for functions that create windows, e.g. `torch.hann_window`

Transforms expect the following dimensions; in particular, the input of all transforms and functions assumes channel first (see the sketch after this list).

* `Spectrogram`: (channel, time) -> (channel, freq, time)
* `AmplitudeToDB`: (channel, freq, time) -> (channel, freq, time)
* `MelScale`: (channel, time) -> (channel, mel, time)
* `MelSpectrogram`: (channel, time) -> (channel, mel, time)
* `MFCC`: (channel, time) -> (channel, mfcc, time)
* `MuLawEncode`: (channel, time) -> (channel, time)
* `MuLawDecode`: (channel, time) -> (channel, time)
* `Resample`: (channel, time) -> (channel, time)
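The following is an illustrative sketch of these dimension conventions. The parameter values are arbitrary, and the transform argument names follow the current `torchaudio.transforms` API, which may differ from older releases.

```python
import torch
import torchaudio

sample_rate = 16000
# A fake one-second mono signal laid out as (channel, time), as described above.
waveform = torch.randn(1, sample_rate)

# Spectrogram: (channel, time) -> (channel, freq, time)
specgram = torchaudio.transforms.Spectrogram(n_fft=400, hop_length=160)(waveform)

# MelSpectrogram: (channel, time) -> (channel, mel, time)
mel_specgram = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=400, hop_length=160, n_mels=64
)(waveform)

print(specgram.size())      # torch.Size([1, 201, 101]) with these settings
print(mel_specgram.size())  # torch.Size([1, 64, 101]) with these settings
```

Because the transforms are composed of PyTorch operations, the outputs stay on the same device as the input and gradients can flow through them, in line with the autograd focus described above.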
Contributing Guidelines
-----------------------
Please let us know if you encounter a bug by filing an [issue](https://github.com/pytorch/audio/issues).

We appreciate all contributions. If you are planning to contribute back
bug fixes, please do so without any further discussion.
If you plan to contribute new features, utility functions or extensions to the
core, please first open an issue and discuss the feature with us. Sending a PR
without discussion might result in a rejected PR, because we might be
taking the core in a different direction than you are aware of.