- 02 Sep, 2024 1 commit
mayp777 authored
- 12 Oct, 2022 1 commit
Zhaoheng Ni authored
Summary: Following PR https://github.com/pytorch/audio/issues/2716.

- For preprocessing
  - The HuBERT features take a lot of memory, which may not fit on some machines. Add an option to train the k-means model on a subset of the features.
- For pre-training
  - Normalize the loss by the total number of masked frames across all GPUs (a minimal sketch follows this commit message).
  - Use mixed-precision training; fp16 is not well supported in pytorch_lightning.
  - Log the accuracies of masked and unmasked frames during training.
  - Clip the gradients to norm `10.0`.
- For ASR fine-tuning
  - Normalize the loss by the total number of batches across all GPUs, as in the Conformer recipe of TorchAudio.
  - Use mixed-precision training.
  - Append "|" to the end of each transcription to capture silence/word termination, as in the fairseq recipe.
- Update the WER results on the LibriSpeech dev and test sets:

|                   | WER% (Viterbi)| WER% (KenLM) |
|:-----------------:|--------------:|--------------:|
| dev-clean         |          10.9 |          4.2 |
| dev-other         |          17.5 |          9.4 |
| test-clean        |          10.9 |          4.4 |
| test-other        |          17.8 |          9.5 |

Pull Request resolved: https://github.com/pytorch/audio/pull/2744
Reviewed By: carolineechen
Differential Revision: D40282322
Pulled By: nateanl
fbshipit-source-id: 4723584c912e70e8970149fe09de005385eaab90
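A minimal sketch of two of the changes above, assuming hypothetical names (`normalize_masked_loss`, `loss_sum`, `num_masked`) rather than the recipe's actual code: normalizing the summed loss by the global masked-frame count under DDP, and enabling mixed precision plus gradient clipping through pytorch_lightning's `Trainer` flags.

```python
import torch
import torch.distributed as dist
from pytorch_lightning import Trainer

def normalize_masked_loss(loss_sum: torch.Tensor, num_masked: torch.Tensor) -> torch.Tensor:
    """Divide this rank's summed loss by the masked-frame count over all GPUs."""
    if dist.is_available() and dist.is_initialized():
        # Sum the masked-frame counts over every rank so all GPUs divide
        # by the same global denominator.
        dist.all_reduce(num_masked, op=dist.ReduceOp.SUM)
        # DDP averages gradients across ranks; scaling by the world size
        # restores sum semantics so gradients match single-GPU training.
        loss_sum = loss_sum * dist.get_world_size()
    return loss_sum / num_masked.clamp(min=1)

# Mixed precision and gradient clipping are plain Trainer arguments.
trainer = Trainer(precision=16, gradient_clip_val=10.0)
```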
- 28 Jul, 2022 1 commit
Zhaoheng Ni authored
Summary:
- The optimizer in the fine-tuning recipe should also be `AdamW`; see https://github.com/pytorch/audio/pull/2412. (A one-line sketch follows this commit message.)
- Fix the import of `DistributedBatchSampler` in the HuBERT dataset.
- Fix `dataset_path` in the fine-tuning module.

Pull Request resolved: https://github.com/pytorch/audio/pull/2588
Reviewed By: carolineechen
Differential Revision: D38243423
Pulled By: nateanl
fbshipit-source-id: badc88ce9eddfd71270201a65ae89433fae2733f
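The optimizer switch itself is a one-line change; the sketch below uses a placeholder model and illustrative hyperparameters, not the recipe's values.

```python
import torch

model = torch.nn.Linear(768, 29)  # placeholder; the recipe fine-tunes a HuBERT model

# AdamW decouples weight decay from the gradient update, whereas plain Adam
# folds it into the gradient as an L2 penalty. Hyperparameters are illustrative.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=1e-2)
```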
- 23 Jun, 2022 1 commit
Summary:
Meta: **If you take no action, this diff will be automatically accepted on 2022-06-23.** (To remove yourself from auto-accept diffs and just let them all land, add yourself to [this Butterfly rule](https://www.internalfb.com/butterfly/rule/904302247110220).)

Produced by `tools/arcanist/lint/codemods/black-fbsource`. #nocancel

Rules run:
- CodemodTransformerSimpleShell

Config Oncall: [lint](https://our.intern.facebook.com/intern/oncall3/?shortname=lint)
CodemodConfig: [CodemodConfigFBSourceBlackLinter](https://www.internalfb.com/code/www/flib/intern/codemod_service/config/fbsource_arc_f/CodemodConfigFBSourceBlackLinter.php)
ConfigType: php
Sandcastle URL: https://www.internalfb.com/intern/sandcastle/job/13510799586951394/

This diff was automatically created with CodemodService. To learn more about CodemodService, check out the [CodemodService wiki](https://fburl.com/CodemodService).

## Questions / Comments / Feedback?

**[Click here to give feedback about this diff](https://www.internalfb.com/codemod_service/feedback?sandcastle_job_id=13510799586951394).**
- Returning this diff to the author or abandoning it will only cause it to be regenerated in the future.
- Do **NOT** post about this specific diff in the CodemodService Feedback group.

drop-conflicts

Reviewed By: adamjernst
Differential Revision: D37375235
fbshipit-source-id: 3d7eb39e5c0539a78d1412f37562dec90b0fc759
- 07 Jun, 2022 1 commit
Zhaoheng Ni authored
Summary: This PR adds the CTC fine-tuning recipe for the HuBERT Base model. The files include:
- the lightning module
- the training script
- the README and the result table
- the evaluation scripts

(A minimal sketch of the CTC objective follows this commit message.)

Pull Request resolved: https://github.com/pytorch/audio/pull/2352
Reviewed By: hwangjeff
Differential Revision: D36915712
Pulled By: nateanl
fbshipit-source-id: 0249635ad5e81a8aa2d228c1d5fe84d78b62a15b
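As a rough illustration of what CTC fine-tuning involves, here is a self-contained sketch, not the recipe's code: it assumes torchaudio's `hubert_base` factory with an `aux_num_out` CTC head, a hypothetical 29-symbol character vocabulary (blank, letters, apostrophe, and the `|` word boundary), and a dummy batch.

```python
import torch
import torchaudio

# Assumed 29-symbol vocabulary: index 0 is the CTC blank; the rest are
# characters including the "|" word-boundary symbol. The recipe's exact
# vocabulary and model setup may differ.
model = torchaudio.models.hubert_base(aux_num_out=29)
ctc_loss = torch.nn.CTCLoss(blank=0, reduction="sum", zero_infinity=True)

waveforms = torch.randn(2, 16000)        # dummy batch: two 1-second clips
logits, _ = model(waveforms)             # (batch, frames, 29)
log_probs = torch.log_softmax(logits, dim=-1).transpose(0, 1)  # CTC wants (frames, batch, vocab)

targets = torch.randint(1, 29, (2, 10))  # dummy character indices (0 reserved for blank)
input_lengths = torch.full((2,), logits.shape[1], dtype=torch.long)
target_lengths = torch.full((2,), 10, dtype=torch.long)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
```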