Unverified commit 16c82574 authored by Kirthi Shankar Sivamani, committed by GitHub

Remove leftover implementations for optional userbuffers support (#932)



* Remove optional UB build leftovers
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

* rm unused import
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci



---------
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
parent f458fcf4
@@ -368,8 +368,6 @@ size_t get_cublasLt_version();
size_t get_cudnn_version();
void placeholder();
/***************************************************************************************************
* Support THD format for Context Parallel
**************************************************************************************************/
@@ -5,12 +5,7 @@
************************************************************************/
#include "extensions.h"
#ifdef NVTE_WITH_USERBUFFERS
#include "comm_gemm_overlap.h"
#endif // NVTE_WITH_USERBUFFERS
size_t get_cublasLt_version() { return cublasLtGetVersion(); }
size_t get_cudnn_version() { return cudnnGetVersion(); }
void placeholder() {} // TODO(ksivamani) clean this up
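The `#ifdef NVTE_WITH_USERBUFFERS` block removed above is the standard compile-time optional-dependency guard: the userbuffers header is only pulled in when the build flag enables it. The analogous runtime pattern in Python is an import guard with a module-level availability flag. A minimal sketch of that pattern (the module name `comm_gemm_overlap` is used here purely for illustration, matching the removed header name; it is not a real installable package):

```python
# Optional-dependency guard: the Python analog of a compile-time #ifdef.
# The module name is illustrative only (taken from the removed header);
# it is not a real package, so the import is expected to fail here.
try:
    import comm_gemm_overlap  # hypothetical optional userbuffers backend
    UB_AVAILABLE = True
except ImportError:
    comm_gemm_overlap = None
    UB_AVAILABLE = False
```

Code that needs the backend can then check `UB_AVAILABLE` once at call sites instead of repeating the try/except, which is what the build flag accomplished at compile time in the C++ code.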
@@ -10,7 +10,6 @@ from typing import Callable, List, Optional, Tuple, Union
import torch
import transformer_engine_torch as tex
from transformer_engine.pytorch.module import LayerNormMLP, LayerNorm, RMSNorm
from transformer_engine.pytorch.attention import (
InferenceParams,
@@ -270,9 +269,6 @@ class TransformerLayer(torch.nn.Module):
) -> None:
super().__init__()
if ub_tp_comm_overlap:
assert tex.userbuf_comm_available(), "Userbuffer communication backend not available."
self.self_attn_mask_type = self_attn_mask_type
self.window_size = window_size
self.window_size = check_set_window_size(self_attn_mask_type, self.window_size)
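The removed guard in this last hunk followed a common fail-fast pattern: if the caller requests an optional backend (`ub_tp_comm_overlap`) that the build does not provide, raise immediately in `__init__` rather than failing later mid-training. A minimal self-contained sketch of that pattern (the `Layer` class and `backend_available` flag are hypothetical stand-ins, not TransformerEngine API; the real code probed availability via `tex.userbuf_comm_available()`):

```python
class Layer:
    """Toy layer that optionally enables a comm/GEMM overlap backend.

    `backend_available` is a hypothetical stand-in for a capability
    probe such as the removed tex.userbuf_comm_available() call.
    """

    def __init__(self, ub_tp_comm_overlap: bool = False,
                 backend_available: bool = False) -> None:
        # Fail fast at construction time if the requested optional
        # feature is not supported by this build.
        if ub_tp_comm_overlap and not backend_available:
            raise RuntimeError(
                "Userbuffer communication backend not available.")
        self.ub_tp_comm_overlap = ub_tp_comm_overlap
```

Usage: `Layer()` and `Layer(ub_tp_comm_overlap=True, backend_available=True)` construct normally, while requesting overlap without the backend raises `RuntimeError`. Once the feature became mandatory in the build, the check was dead weight, which is why this commit drops it.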