- 12 Apr, 2022 1 commit
-
-
Nicolas Patry authored
* Change the `chunk_iter` function to handle the subtle case where the last chunk gets ignored because all of its data is already covered by the `left_strided` data; we need to remove the right striding on the previous item instead.
* Remove commented line.
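A minimal sketch of the `chunk_iter` behaviour described above, not the exact implementation from `automatic_speech_recognition.py`; the argument names and chunk sizes are illustrative assumptions:

```python
def chunk_iter(waveform, chunk_len, stride_left, stride_right):
    """Yield overlapping chunks plus (chunk_len, left, right) stride info.

    If the next chunk would contain nothing but already-seen (left-stride)
    samples, stop here and drop the right stride on the current chunk so no
    samples are lost during decoding.
    """
    inputs_len = len(waveform)
    step = chunk_len - stride_left - stride_right
    for chunk_start in range(0, inputs_len, step):
        chunk = waveform[chunk_start : chunk_start + chunk_len]
        left = 0 if chunk_start == 0 else stride_left
        is_last = chunk_start + chunk_len >= inputs_len
        right = 0 if is_last else stride_right
        yield {"raw": chunk, "stride": (len(chunk), left, right)}
        if is_last:
            break
```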
-
- 02 Mar, 2022 1 commit
-
-
Nicolas Patry authored
* Adding timestamps for CTC with LM in ASR pipeline.
* Remove print.
* Nit change.
-
- 28 Feb, 2022 1 commit
-
-
Nicolas Patry authored
* Fixing the timestamps with chunking.
* The changes modified (and fixed) the striding tests.
* Adding a tokenizer test.
* Update src/transformers/pipelines/automatic_speech_recognition.py
* Defense -> comment.
* Update src/transformers/models/wav2vec2/tokenization_wav2vec2.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 25 Feb, 2022 1 commit
-
-
Nicolas Patry authored
* Adding the option to `return_timestamps` on pure CTC ASR models.
* Remove `math.prod`, which was only introduced in Python 3.8.
* ints are not floats.
* Reworking the PR to support "char" vs "word" output.
* Fixup!
* Update src/transformers/pipelines/automatic_speech_recognition.py (review suggestions applied across several commits)
* Quality.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
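A hedged usage sketch of the `return_timestamps` option added above; the model name, audio file, and exact output keys are illustrative assumptions:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

out = asr("sample.flac", return_timestamps="word")  # or "char" for per-character stamps
print(out["text"])
for chunk in out.get("chunks", []):
    # Each chunk is expected to carry its text and a (start, end) pair in seconds.
    print(chunk["text"], chunk["timestamp"])
```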
-
- 23 Feb, 2022 1 commit
-
-
Lysandre Debut authored
* Per-folder tests reorganization

Co-authored-by: sgugger <sylvain.gugger@gmail.com>
Co-authored-by: Stas Bekman <stas@stason.org>
-
- 15 Feb, 2022 2 commits
-
-
Nicolas Patry authored
Co-authored-by: Giuseppe Attanasio <giuseppeattanasio6@gmail.com>
-
Javier de la Rosa authored
Fix ASR pipelines from local directories with wav2vec models that have language models attached (#15590)

* Fix loading pipelines with wav2vec models with LM when in local paths
* Adding tests
* Fix test
* Adding tests
* Flake8 fixes
* Removing conflict files :(
* Adding task type to test
* Remove unnecessary test and imports
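A hedged sketch of the use case this fixes: building the ASR pipeline from a local directory that contains a wav2vec2 checkpoint together with its language-model files. The path and audio file are placeholders:

```python
from transformers import pipeline

# Directory holding the wav2vec2 checkpoint plus its pyctcdecode LM files.
asr = pipeline("automatic-speech-recognition", model="./local/wav2vec2-with-lm")
print(asr("sample.wav")["text"])
```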
-
- 07 Feb, 2022 1 commit
-
-
Patrick von Platen authored
-
- 02 Feb, 2022 1 commit
-
-
Nicolas Patry authored
* Adding support for `microphone` streaming within pipeline.
  - Uses `ffmpeg` to get microphone data.
  - Makes sure alignment is made to `size_of_sample`.
  - Works by sending `{"raw": ..data.., "stride": (n, left, right), "partial": bool}` directly to the pipeline, enabling it to stream partial results while still running inference.
  - Lets the `partial` information flow through the pipeline so the caller can get it back and choose whether to display the text.
  - The striding reconstitution is bound to have errors since CTC does not keep previous state. Currently most of the errors are that we don't know whether there is a space between two chunks. Since we have some left-striding info, we could use that during decoding to decide what to do with those spaces, and maybe even with extra letters (if the stride is long enough, it is bound to cover at least a few symbols).
  - Fixing tests. Protecting with `require_torch`. `raw_ctc` support for a nicer demo. Post-rebase fixes.
  - Revamp to split `raw_mic_data` from its live chunking; requires a refactor to make everything a bit cleaner. Automatic resampling. Small fixes.
* Post-rebase fix (need to let super handle more logic, reorder args).
* Update docstrings.
* Docstring format.
* Remove print.
* Prevent flow of `input_values`.
* Fixing `stride` too.
* Fixing the PR by removing `raw_ctc`.
* Better docstrings.
* Fixing init.
* Update src/transformers/pipelines/audio_utils.py
* Update tests/test_pipelines_automatic_speech_recognition.py
* Quality.

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
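A hedged usage sketch of the microphone streaming described above, using the ffmpeg-based helper from `src/transformers/pipelines/audio_utils.py`. The model name, chunk sizes, and output handling are illustrative assumptions, not the canonical API:

```python
from transformers import pipeline
from transformers.pipelines.audio_utils import ffmpeg_microphone_live

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
sampling_rate = asr.feature_extractor.sampling_rate

# Stream roughly 5-second windows from the microphone, re-running inference
# every second so provisional ("partial") transcriptions can be displayed.
mic = ffmpeg_microphone_live(
    sampling_rate=sampling_rate,
    chunk_length_s=5.0,
    stream_chunk_s=1.0,
)
for item in asr(mic):
    # item["text"] holds the (possibly provisional) transcription for the window.
    print(item["text"])
```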
-
- 19 Jan, 2022 1 commit
-
-
Nicolas Patry authored
* [WIP] Make smart chunking (long files) work on ASR `ctc_with_lm`.
* Slow test with functionality.
* Fixing regular test.
* Fix for batch size 1.
* Handling batch outside `rescale_Stride`.
  - Renamed to `rescale_stride`.
* Disable equality in the test.
* Remove print.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
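A rough, hedged sketch of what a `rescale_stride`-style helper does conceptually: strides are expressed in raw audio samples, but after the model the sequence lives in logits frames, so each `(n, left, right)` tuple has to be scaled by the ratio between the two lengths before the overlap is trimmed. The signature and rounding below are assumptions, not the internal implementation:

```python
def rescale_stride(strides, ratio):
    """Scale per-example (n, left, right) sample strides into logits frames.

    ratio is roughly logits_sequence_length / input_number_of_samples.
    """
    return [
        (int(round(n * ratio)), int(round(left * ratio)), int(round(right * ratio)))
        for (n, left, right) in strides
    ]
```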
-
- 18 Jan, 2022 2 commits
-
-
Patrick von Platen authored
* [ASR pipeline] correct with lm pipeline * improve error
-
Nicolas Patry authored
* `is_ctc` needs to be updated to `self.type == "ctc"`.
* Adding a fast test for this functionality.
-
- 12 Jan, 2022 1 commit
-
-
Nicolas Patry authored
* Pipeline ASR with LM.
* Revamped into `self.decoder`.
* Fixing.
* 2nd fix.
* Update src/transformers/pipelines/__init__.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
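A hedged sketch of the with-LM usage this enables: when the checkpoint ships a pyctcdecode language model, the pipeline loads it as `self.decoder` and decodes with it. The model name and the exact `type` string are assumptions based on the surrounding commits:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-base-100h-with-lm",
)
print(asr.type)  # expected to report "ctc_with_lm" when a decoder was attached
print(asr("sample.flac")["text"])
```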
-
- 04 Jan, 2022 1 commit
-
-
Nicolas Patry authored
* Hotfix: `chunk_length_s` instead of `_ms`.
* Fix `pad_token`, which should be the last/previous token for proper CTC decoding.
* Fixing ChunkPipeline unwrapping.
* Adding a PackIterator-specific test.
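A hedged usage sketch of the renamed argument: chunk lengths are expressed in seconds (`chunk_length_s`), not milliseconds. The model name, file, and stride values are illustrative:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
# 10-second chunks with a 4-second left stride and a 2-second right stride.
out = asr("long_audio.flac", chunk_length_s=10, stride_length_s=(4, 2))
print(out["text"])
```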
-
- 03 Jan, 2022 1 commit
-
-
Anton Lozhkov authored
* Naive ASR chunking
* Fixing batching for ASR.

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
-
- 30 Dec, 2021 1 commit
-
-
Patrick von Platen authored
* [Generate] correct encoder_outputs are passed without attention_mask * Apply suggestions from code review * up
-
- 27 Dec, 2021 1 commit
-
-
Nicolas Patry authored
* Pipeline chunks.
* Batching for chunking pipelines?
* Batching for `question-answering` and `zero-shot-cls`.
* Fixing for FNet.
* Making ASR a chunk pipeline.
* Chunking ASR API.
* Doc style.
* Fixing ASR test.
* Fixing QA error (`p_mask`: padding is 1, not 0).
* Enable both VAD and simple chunking.
* Max length for VAD.
* Remove inference mode, crashing on S2T.
* Revert ChunkPipeline for the ASR pipeline. Too many knobs for simple integration within the pipeline; better to stick to external convenience functions instead, which gives more control, a simpler pipeline, and makes it easier to replace with other things later.
* Drop necessity for PT for these.
* Enabling generators.
* Add mic + cleanup.
* Typo.
* Typo 2.
* Remove ASR work, it does not belong in this PR anymore.
* Update src/transformers/pipelines/pt_utils.py
* Update src/transformers/pipelines/zero_shot_classification.py
* Adding many comments.
* Doc quality.
* `hidden_states` handling.
* Adding doc.
* Bad rebase.
* Autofixing docs.
* Fixing CRITICAL bug in the new zero-shot-classification pipeline.

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
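A hedged sketch of the ChunkPipeline idea introduced here: `preprocess` becomes a generator yielding several model inputs per example, `_forward` runs on each chunk, and `postprocess` receives all chunk outputs to merge. The base class lives in `src/transformers/pipelines/base.py`; the toy subclass below, its hook bodies, and the `is_last` bookkeeping are illustrative assumptions only:

```python
from transformers.pipelines.base import ChunkPipeline

class ToyChunkPipeline(ChunkPipeline):
    def _sanitize_parameters(self, window=None, **kwargs):
        preprocess_kwargs = {"window": window} if window is not None else {}
        return preprocess_kwargs, {}, {}

    def preprocess(self, text, window=128):
        # One input example fans out into several tokenized chunks.
        for start in range(0, len(text), window):
            chunk = text[start : start + window]
            model_inputs = self.tokenizer(chunk, return_tensors="pt")
            yield {"is_last": start + window >= len(text), **model_inputs}

    def _forward(self, model_inputs):
        is_last = model_inputs.pop("is_last")
        outputs = self.model(**model_inputs)
        return {"is_last": is_last, "logits": outputs.logits}

    def postprocess(self, all_outputs):
        # all_outputs is the list of per-chunk results for a single example.
        return {"num_chunks": len(all_outputs)}
```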
-
- 16 Dec, 2021 1 commit
-
-
Lysandre Debut authored
-
- 17 Nov, 2021 1 commit
-
-
Patrick von Platen authored
* add new wav2vec2 translation * correct * up * add tests * correct end copy * correct more * up * correct unispeech sat * finish * finalize * finish * up
-
- 29 Oct, 2021 1 commit
-
-
Nicolas Patry authored
* Tentative enabling of `batch_size` for pipelines.
* Add systematic test for pipeline batching.
* Enabling `batch_size` on almost all pipelines:
  - Not `zero-shot` (it's already passing stuff as batched, so trickier).
  - Not `QA` (preprocess uses squad features; we need to switch to real tensors at this boundary).
* Adding `min_length_for_response` for conversational.
* Making CTC and speech mappings available regardless of framework.
* Attempt at fixing automatic tests (ffmpeg not enabled for fast tests).
* Removing ffmpeg dependency in tests.
* Small fixes.
* Slight cleanup.
* Adding docs and addressing comments.
* Quality.
* Update docs/source/main_classes/pipelines.rst
* Update src/transformers/pipelines/question_answering.py
* Update src/transformers/pipelines/zero_shot_classification.py
* Improving docs.
* N -> observed_batch_size softmax trick.
* Follow `padding_side`.
* Supporting image pipeline batching (and padding).
* Rename `unbatch` -> `loader_batch`.
* Forgot `unbatch_size`.
* Custom padding for offset mappings.
* Attempt to remove librosa.
* Adding require_audio.
* torchaudio.
* Back to using datasets librosa.
* Adding help to set a `pad_token` on the tokenizer.
* Update src/transformers/pipelines/base.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
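A hedged usage sketch of the pipeline batching enabled above; the model name and texts are placeholders, and the actual throughput benefit depends on hardware and padding:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")
texts = ["I love this.", "This is terrible.", "Absolutely fine."] * 8

# With batch_size set, the pipeline groups examples per forward pass instead
# of running them one by one.
for result in clf(texts, batch_size=8):
    print(result["label"], round(result["score"], 3))
```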
-
- 14 Oct, 2021 1 commit
-
-
Patrick von Platen authored
-
- 21 Sep, 2021 2 commits
-
-
Anton Lozhkov authored
* Test np padding
* Pass feature extraction tests
* Update type hints
* Fix flaky integration tests
* Try a more stable waveform
* Add to_numpy jax support
* int32 attention masks
* Refactor normalization tests
-
Patrick von Platen authored
* upload * correct * correct * correct * finish * up * up * up again
-
- 01 Sep, 2021 1 commit
-
-
Patrick von Platen authored
* fix_torch_device_generate_test
* remove @
* up
* correct some bugs
* correct model
* finish speech2text extension
* up
* up
* up
* up
* Update utils/custom_init_isort.py
* up
* up
* update with tokenizer
* correct old tok
* correct old tok
* fix bug
* up
* up
* add more tests
* up
* fix docs
* up
* fix some more tests
* add better config
* correct some more things
* fix tests
* improve docs
* Apply suggestions from code review
* Apply suggestions from code review
* final fixes
* finalize
* Apply suggestions from code review
* apply suggestions from Lysandre and Sylvain
* apply Nico's suggestions
* upload everything
* finish

Co-authored-by: Patrick von Platen <patrick@huggingface.co>
Co-authored-by: your_github_username <your_github_email>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
- 07 Jul, 2021 1 commit
-
-
Nicolas Patry authored
* Adding support for `pipeline("automatic-speech-recognition")`.
  - Ugly `"config"` choice for AutoModel. It would be great to have something like `AutoModelFor` that would implement the same logic (load the config, check the architectures, and load the first one).
* Remove `model_id`, it was not needed in the end.
* Rebased!
* Remove old code.
* Rename `nlp`.
-
- 30 Apr, 2021 1 commit
-
-
Nicolas Patry authored
* Adding `AutomaticSpeechRecognitionPipeline`.
  - Because we added everything needed to enable this pipeline, we probably should add it to `transformers`.
  - This PR tries to limit the scope and focuses only on the pipeline part (what should go in, and what comes out).
  - The tests are very specific to S2T and Wav2Vec2 to make sure both architectures are supported by the pipeline. We don't use the mixin for tests right now, because that requires more work in the `pipeline` function (will be done in a follow-up PR).
  - Unsure about the "helper" function `ffmpeg_read`. It makes a lot of sense from a user perspective and it does not add any additional hard dependency (users can always use their own loading mechanism). Meanwhile, it feels slightly clunky to have so much optional preprocessing.
  - The pipeline is not built to support streaming audio right now.
  Future work:
  - Add `automatic-speech-recognition` as a `task`, and add the FeatureExtractor.from_pretrained within the `pipeline` function.
  - Add small models within tests.
  - Add the mixin to tests.
  - Make the logic between ForCTC vs ForConditionalGeneration better.
* Update tests/test_pipelines_automatic_speech_recognition.py
* Adding docs + main import + type checking + LICENSE.
* Doc style!
* Fixing TYPE_HINT.
* Specifying waveform shape in the docs.
* Adding asserts + specifying in the documentation the shape of the input np.ndarray.
* Update src/transformers/pipelines/automatic_speech_recognition.py
* Adding require to tests + move the `feature_extractor` doc.

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
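A hedged sketch of what an `ffmpeg_read`-style helper does: pipe the encoded file bytes through ffmpeg to obtain mono float32 PCM at the model's sampling rate, then view the result as a numpy waveform the feature extractor can consume. The exact flags and error handling of the real helper may differ:

```python
import subprocess
import numpy as np

def ffmpeg_read(bpayload: bytes, sampling_rate: int) -> np.ndarray:
    cmd = [
        "ffmpeg",
        "-i", "pipe:0",               # read the encoded audio from stdin
        "-ac", "1",                   # downmix to mono
        "-f", "f32le",                # raw little-endian float32 PCM
        "-ar", str(sampling_rate),    # resample to the model's rate
        "-hide_banner", "-loglevel", "quiet",
        "pipe:1",                     # write decoded samples to stdout
    ]
    proc = subprocess.run(cmd, input=bpayload, capture_output=True, check=True)
    return np.frombuffer(proc.stdout, dtype=np.float32)
```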
-