    [transformer] Port Sequence Parallelism (takeover of #1396) (#1400) · 3ff1a10f
    Masaki Kozuki authored
    * it looks possible to remove this file
    
    * add communication collectives
    
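    These collectives are the heart of the port: activations are sharded along the sequence dimension outside the tensor-parallel blocks and gathered back inside them. A minimal sketch, with assumed names (`gather_along_sequence_dim` and `reduce_scatter_along_sequence_dim` are illustrative, not the apex API) and the default process group standing in for the tensor-parallel group:
    
    ```python
    import torch
    import torch.distributed as dist
    
    def gather_along_sequence_dim(shard: torch.Tensor) -> torch.Tensor:
        # [s/p, b, h] shard per rank -> full [s, b, h] on every rank
        world = dist.get_world_size()
        parts = [torch.empty_like(shard) for _ in range(world)]
        dist.all_gather(parts, shard.contiguous())
        return torch.cat(parts, dim=0)
    
    def reduce_scatter_along_sequence_dim(full: torch.Tensor) -> torch.Tensor:
        # [s, b, h] of partial sums per rank -> summed [s/p, b, h] shard
        world = dist.get_world_size()
        parts = [p.contiguous() for p in full.chunk(world, dim=0)]
        out = torch.empty_like(parts[0])
        dist.reduce_scatter(out, parts)
        return out
    ```
    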
    * update Column|RowParallelLinear
    
    * update checkpoint function
    
    * update function name
    
    * parity between public and private collectives
    
    * row parallel linear
    
    * column parallel linear
    
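    Roughly, this is where the collectives above get used: `ColumnParallelLinear` gathers the sequence-sharded input before its GEMM, and `RowParallelLinear` swaps its usual all-reduce for a reduce-scatter. A hedged sketch reusing the helper names from the first block (bias and backward-side communication omitted):
    
    ```python
    import torch
    
    def column_parallel_forward(x_shard, weight):
        # x_shard: [s/p, b, h]; weight: [out/p, h] (output columns partitioned)
        full_input = gather_along_sequence_dim(x_shard)     # [s, b, h]
        return torch.matmul(full_input, weight.t())         # [s, b, out/p]
    
    def row_parallel_forward(x_parallel, weight):
        # x_parallel: [s, b, in/p]; weight: [out, in/p] (input rows partitioned)
        partial = torch.matmul(x_parallel, weight.t())      # [s, b, out] partials
        return reduce_scatter_along_sequence_dim(partial)   # [s/p, b, out]
    ```
    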
    * sequence parallel: p2p comm (plus a typo fix)
    
    * sequence parallel: pipeline parallel
    
    * fix typo
    
    * add layernorm with sequence_parallel_enabled attr
    
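    The LayerNorm weights are replicated across tensor-parallel ranks while their inputs are sequence shards, so the new attribute lets later code find them. A minimal sketch under a hypothetical class name (per the next item, the flag is a member variable; each parameter is also tagged):
    
    ```python
    import torch
    
    class LayerNormWithSequenceParallel(torch.nn.LayerNorm):
        # Hypothetical class; sketches the `sequence_parallel_enabled` attribute.
        def __init__(self, hidden_size, sequence_parallel_enabled=False, **kwargs):
            super().__init__(hidden_size, **kwargs)
            self.sequence_parallel_enabled = sequence_parallel_enabled
            for param in self.parameters():
                # tag each parameter so gradient-sync code can find it later
                setattr(param, "sequence_parallel_enabled", sequence_parallel_enabled)
    ```
    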
    * class variable -> member variable
    
    * fix col parallel test with sequence parallel
    
    * Initial test of `forward_backward_pipelining_without_interleaving` with `model_type=ModelType.encoder_and_decoder`
    
    * add placeholder cases that only pretend to test sequence_parallel
    
    * Apply 2 suggestion(s) to 1 file(s)
    
    * update sequence_parallel_enabled docstring
    
    * update docstring: order of tensor dimensions, sequence_parallel_enabled behavior
    
    * Divide sequence_length when sequence parallelism is enabled
    
    The tensor shape must be updated when sequence parallelism is enabled (see the worked example below).
    
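    Concretely, the pipeline schedules size their p2p buffers from a tensor shape, and each tensor-parallel rank only holds `sequence_length // tensor_model_parallel_size` rows once sequence parallelism is on. A small worked example with assumed values:
    
    ```python
    # Assumed example values, purely to show the bookkeeping.
    sequence_length, micro_batch_size, hidden_size = 2048, 4, 1024
    tensor_model_parallel_size = 8
    sequence_parallel_enabled = True
    
    tensor_shape = (sequence_length, micro_batch_size, hidden_size)
    if sequence_parallel_enabled:
        # each tensor-parallel rank holds only its shard of the sequence
        tensor_shape = (
            sequence_length // tensor_model_parallel_size,
            micro_batch_size,
            hidden_size,
        )
    print(tensor_shape)  # (256, 4, 1024)
    ```
    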
    * cherry-pick https://github.com/NVIDIA/Megatron-LM/commit/8474e6e54fcb9dfa37aea039352f9fb485fb6f61
    
    * type annotation
    
    * Fix matmul call in RowParallelLinear
    
    Pin `sequence_parallel_enabled` to `False` in the matmul call (sketched below), matching
    https://github.com/NVIDIA/Megatron-LM/blob/d898a8991d1a08d29074f87819d1bf41517e35f5/megatron/mpu/layers.py#L511-L514
    
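    The point of the fix, sketched under assumed names: the shared matmul helper all-gathers its input along the sequence dimension when the flag is `True`, which is only correct for `ColumnParallelLinear`. In `RowParallelLinear` the input is already full-length along the sequence dim (it is split along the feature dim instead), and the layer's sequence-parallel communication is the reduce-scatter on the output, so the flag stays `False`:
    
    ```python
    import torch
    
    def row_parallel_matmul(input_parallel: torch.Tensor,
                            weight: torch.Tensor) -> torch.Tensor:
        # input_parallel is [s, b, in/p]: already full along the sequence dim,
        # so the helper's sequence-parallel gather must stay off here; the
        # sequence-parallel communication for this layer is the reduce-scatter
        # applied to the partial output afterwards (see the first sketch).
        partial = torch.matmul(input_parallel, weight.t())  # [s, b, out] partials
        return partial
    ```
    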
    * update rowparallellinear test
    
    * fix undefined `loss_weight` in test_layers
    
    * address @eqy's review comment
    
    * mixed fused layer norm
    
    * fix typo
    
    * misc
    
    * test_layers cleanup
    
    * Skip Bert/GPT script
    
    These two models haven't been updated for sequence parallelism yet, e.g. the change of dimension order from (batch, sequence, feature) to (sequence, batch, feature) (illustrated below) and the global argument variables.
    
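    For reference, the layout change Bert/GPT would need, shown on a dummy activation (shapes are assumed examples):
    
    ```python
    import torch
    
    batch, seq, hidden = 4, 2048, 1024
    x_bsh = torch.randn(batch, seq, hidden)     # old layout: (batch, sequence, feature)
    x_sbh = x_bsh.transpose(0, 1).contiguous()  # new layout: (sequence, batch, feature)
    assert x_sbh.shape == (seq, batch, hidden)
    ```
    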
    * debug part 1/N: comment out `x.retain_grad`
    
    * debug part 2/N: [ColumnParallelLinear] comment out overriding of sequence_parallel_enabled
    
    * debug 3/N: add pipeline test with parallel mlp
    
    * Fix handling `self.input_tensor` and argument
    
    * tp2pp4 ModelType.encoder_or_decoder is failing, likely my fault: the backward pass complains that the output and grad_output shapes don't match
    
    * revert debug 1/N
    
    * defer tensor model parallel size > 1
    
    * split tensor in sequence dim
    
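    A hedged sketch of the split, the inverse of the gather above: each rank keeps its contiguous slice of dim 0 (in apex this would run over the tensor-parallel group; the default group stands in here):
    
    ```python
    import torch
    import torch.distributed as dist
    
    def split_along_sequence_dim(x: torch.Tensor) -> torch.Tensor:
        # keep only this rank's contiguous slice of the sequence dimension
        world = dist.get_world_size()
        rank = dist.get_rank()
        assert x.size(0) % world == 0, "sequence length must be divisible"
        return x.chunk(world, dim=0)[rank].contiguous()
    ```
    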
    * cosmetic
    
    * cosmetic: remove stale comment
    
    * enable TP>1 for encoder_and_decoder as well
    
    * set requires_grad=True always...
    
    * Set `scatter_gather_tensors_in_pipeline` to :obj:`False`
    
    so that NeMo Megatron's GPT works with sequence parallelism enabled.
    
    * brush up comment on `requires_grad()`
    
    According to @ptrblck, PyTorch DistributedDataParallel can hang when some
    tensor (or parameter) doesn't require grad. As far as I understand, this
    forced `requires_grad` is a different matter.
    
    * misc changes of scatter_gather_tensors_in_pipeline comment
    
    * guard for torch_ucc
    
    * cosmetic changes related to tests
    
    * update command line arguments
    
    * update TransformerLanguageModel
    
    * rename
    
    * move gpt to gpt.py
    
    * update bert
    
    * add all_gather for params in sequence parallel region
    
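    Since each rank only sees its own sequence shard, the replicated parameters tagged earlier (e.g. the LayerNorm weights) end up with per-shard gradients. A hedged sketch of one way to keep them consistent, synchronizing gradients across the tensor-parallel group (the helper name is hypothetical, and the actual collective in the commit may differ):
    
    ```python
    import torch
    import torch.distributed as dist
    
    def sync_sequence_parallel_params(model: torch.nn.Module, tp_group=None):
        # sum gradients of parameters living in the sequence-parallel region
        # across the tensor-parallel group before the optimizer step
        for param in model.parameters():
            if getattr(param, "sequence_parallel_enabled", False) \
                    and param.grad is not None:
                dist.all_reduce(param.grad, group=tp_group)
    ```
    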
    * misc: restore diffs that were lost during rebasing
    
    * updates for non-sequence-parallel execution
    
    * gpt with sequence parallel
    
    * Apply 2 suggestion(s) to 2 file(s)
    
    * update tensor&pipeline parallel size
    
    * why is `sequence_parallel_enabled` not supplied!? Did I mess up when rebasing?
    
    * cosmetic fix
    
    * the correct key is `sequence_parallel_enabled`