Unverified Commit 6c715f81 authored by Dhruv Srikanth, committed by GitHub

[Bug Fix] Update torch import reference in bnb quantization (#1902)

# What does this PR do?

Fixes the `Import Error` that occurs from a mismatch in usage between
`torch.nn.Module` and `nn.Module`.
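
For context, a minimal sketch (not the actual `accelerate` source) of why the one-line change matters: the surrounding module presumably imports `torch` but not `from torch import nn`, so the bare `nn.Module` reference fails when the module is loaded, while the fully qualified `torch.nn.Module` resolves correctly. The `Params4bit` wrapper from the real class is not reproduced here; plain attributes stand in for it.

```python
import torch
# Only `torch` is imported here, not `from torch import nn`.

# Before the fix: `nn` is not defined in this namespace, so evaluating the
# class statement raises a NameError (surfacing as an error at import time).
#
# class Linear4bit(nn.Module):      # NameError: name 'nn' is not defined
#     ...

# After the fix: reference `torch.nn.Module` directly, matching the
# neighboring Linear8bitLt class shown in the diff.
class Linear4bit(torch.nn.Module):
    def __init__(self, weight, bias, quant_type):
        super().__init__()
        # Placeholder attributes for illustration; the real class wraps the
        # weights in bitsandbytes' Params4bit (not reproduced here).
        self.weight = weight
        self.bias = bias
        self.quant_type = quant_type
```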
parent a69ef52c
@@ -70,7 +70,7 @@ class Linear8bitLt(torch.nn.Module):
         return out


-class Linear4bit(nn.Module):
+class Linear4bit(torch.nn.Module):
     def __init__(self, weight, bias, quant_type):
         super().__init__()
         self.weight = Params4bit(
...