T5 pipeline parallel changes (#1279)
* Free the output tensor on each pipeline stage for a smaller memory footprint, see: https://github.com/NVIDIA/Megatron-LM/commit/057b086c689b164864455430c223ab52fd86bbcb
* ref: https://github.com/NVIDIA/Megatron-LM/commit/945ece943149b63511e9d0ec3df8effe7f3c13ff
* ref: https://github.com/NVIDIA/Megatron-LM/commit/9a8b89acd8f6ba096860170d0e30ddc0bc2bacd4
* Remove the position embedding group in destroy
* Pass deallocate_pipeline_outputs to backward_step
* Fix typo
* Add missing deallocate_pipeline_outputs
* Fix typo: grad_ouptut -> grad_output
* Update tests
* Remove addressed TODO
* Test with a data parallel size of 2 if there are 8 or more GPUs
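The core memory optimization referenced above (from the linked Megatron-LM commits) is to pseudo-deallocate each pipeline stage's output tensor after it has been sent downstream: the tensor's `.data` storage is replaced with a one-element placeholder while its autograd graph node stays alive for the backward pass. A minimal sketch of the idea, with hypothetical names (the actual helper in this PR is `deallocate_pipeline_outputs`; the function body below is an illustration, not the PR's exact code):

```python
import torch

def deallocate_output_tensor(out: torch.Tensor) -> None:
    """Pseudo-deallocate the output tensor's storage.

    The activation data is no longer needed on this stage once it has
    been communicated to the next stage, but the grad_fn node must be
    kept so backward can still traverse the graph. Replacing `.data`
    with a 1-element tensor frees the bulk of the memory.
    """
    assert isinstance(out, torch.Tensor)
    assert out._base is None, "freeing a view would not release storage"
    out.data = torch.empty((1,), device=out.device, dtype=out.dtype)

# Stand-in for one pipeline stage's forward pass.
x = torch.randn(4, 8, requires_grad=True)
out = (x * 2).sum(dim=1)

# After "sending" `out` to the next stage, free its storage.
deallocate_output_tensor(out)
print(out.data.numel())  # storage shrunk to a single element
```

Note that after this pseudo-deallocation, `torch.autograd.backward` cannot be called on the tensor directly (its shape no longer matches its gradient), which is why Megatron-LM pairs it with a custom backward helper that drives the autograd engine on the kept `grad_fn`.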