Commit 774ed4a0 (unverified)
Fix Code block (#14983)

Authored Jan 04, 2022 by flozi00; committed by GitHub on Jan 04, 2022
Parent: f2ab2183
Showing 1 changed file with 2 additions and 0 deletions
examples/pytorch/speech-pretraining/README.md (+2, -0)
@@ -88,6 +88,7 @@ The results of this run can be seen [here](https://wandb.ai/patrickvonplaten/wav
 To pre-train `"base-sized"` Wav2Vec2 model, *e.g.* [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base)
 on [librispeech_asr](https://huggingface.co/datasets/librispeech_asr), the following command can be run:
 
+```bash
 accelerate launch run_wav2vec2_pretraining_no_trainer.py \
 	--dataset_name=librispeech_asr \
 	--dataset_config_names clean clean other \
@@ -109,6 +110,7 @@ accelerate launch run_wav2vec2_pretraining_no_trainer.py \
 	--adam_beta2="0.98" \
 	--adam_epsilon="1e-06" \
 	--gradient_checkpointing \
+```
 
 The experiment was run on 8 GPU V100 (16 GB RAM each) for 4 days.
 In case you have more than 8 GPUs available for a higher effective `batch_size`,
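Taken together, the two added lines wrap the README's `accelerate launch` command in Markdown code fences (an opening fence tagged `bash` plus a closing fence), so the command renders as a code block instead of running into the surrounding prose. As a side note, one cheap way to catch this class of Markdown bug is to look for shell continuation lines that sit outside any fence; the `awk` sketch below is illustrative only and not part of this commit:

```bash
# Hypothetical helper, not part of this commit: flag shell continuation
# lines (trailing backslash) that sit outside any Markdown code fence --
# the symptom this commit fixes in the speech-pretraining README.
awk '
  /^```/             { in_fence = !in_fence; next }  # toggle on each fence line
  !in_fence && /\\$/ { printf "line %d unfenced shell? %s\n", NR, $0 }
' examples/pytorch/speech-pretraining/README.md
```

Run against the pre-fix README this should report the command's flag lines; after this commit they fall inside the new fence and nothing is printed.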