[PyTorch] forward attention_type in MultiHeadAttention (#621)
[PyTorch] Fix forwarding of attention_type in MultiheadAttention

Signed-off-by: Markus Schnoes <markus.schnoes@gmx.de>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
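The bug class this PR addresses is a wrapper module accepting a constructor argument but not passing it on to its inner attention module, so the caller's setting is silently ignored. A minimal sketch of the fixed pattern, using stand-in classes (the real `MultiheadAttention` and `DotProductAttention` live in `transformer_engine.pytorch`; the simplified signatures here are assumptions for illustration):

```python
import torch
import torch.nn as nn


class DotProductAttention(nn.Module):
    """Stand-in for the core attention module (the real one is in
    transformer_engine.pytorch); simplified for illustration."""

    def __init__(self, attention_type: str = "self"):
        super().__init__()
        self.attention_type = attention_type

    def forward(self, q, k, v):
        # Plain scaled dot-product attention.
        scores = torch.matmul(q, k.transpose(-2, -1)) / q.shape[-1] ** 0.5
        return torch.matmul(torch.softmax(scores, dim=-1), v)


class MultiheadAttention(nn.Module):
    """Stand-in wrapper module. Before a fix like this PR's, the wrapper
    would construct DotProductAttention() without attention_type,
    dropping the caller's setting; after the fix it is forwarded."""

    def __init__(self, attention_type: str = "self"):
        super().__init__()
        # The fix: forward attention_type to the inner module.
        self.core_attention = DotProductAttention(attention_type=attention_type)

    def forward(self, q, k, v):
        return self.core_attention(q, k, v)
```

With this pattern, constructing `MultiheadAttention(attention_type="cross")` leaves the inner module's `attention_type` set to `"cross"` rather than the default.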