OpenDAS / bitsandbytes · Commit 24609b66

Reduce diff

Authored Feb 25, 2023 by Max Ryabinin
Parent: d15822a5
1 changed file, 1 addition, 1 deletion: bitsandbytes/nn/modules.py (+1 −1)
```diff
@@ -212,7 +212,7 @@ class Int8Params(torch.nn.Parameter):
 class Linear8bitLt(nn.Linear):
     def __init__(self, input_features, output_features, bias=True, has_fp16_weights=True,
-                 memory_efficient_backward=False, threshold=0.0, index=None):
+                 memory_efficient_backward=False, threshold=0.0, index=None):
         super().__init__(input_features, output_features, bias)
         assert not memory_efficient_backward, "memory_efficient_backward is no longer required and the argument is deprecated in 0.37.0 and will be removed in 0.39.0"
         self.state = bnb.MatmulLtState()
```

The removed and added lines carry identical visible content, so the change appears to be whitespace-only; the original indentation is not preserved in this capture.
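For context, the `assert` in this hunk implements a hard deprecation: `memory_efficient_backward` is still accepted in the signature for API compatibility, but any non-default value fails loudly. A minimal standalone sketch of that pattern, in plain Python with no `bitsandbytes` or CUDA dependency (the `DummyLinear8bitLt` class below is hypothetical, for illustration only):

```python
class DummyLinear8bitLt:
    """Hypothetical stand-in illustrating the deprecation pattern
    from Linear8bitLt.__init__ (not the real bitsandbytes class)."""

    def __init__(self, input_features, output_features, bias=True, has_fp16_weights=True,
                 memory_efficient_backward=False, threshold=0.0, index=None):
        # Hard deprecation: the argument stays in the signature so old call
        # sites keep working, but a non-default value raises instead of
        # being silently ignored.
        assert not memory_efficient_backward, (
            "memory_efficient_backward is no longer required and the argument "
            "is deprecated in 0.37.0 and will be removed in 0.39.0"
        )
        self.input_features = input_features
        self.output_features = output_features
        self.has_fp16_weights = has_fp16_weights
        self.threshold = threshold
        self.index = index
```

Callers who never pass the flag are unaffected; callers who relied on `memory_efficient_backward=True` get an immediate, descriptive `AssertionError` rather than a silent behavior change.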