OpenDAS / bitsandbytes · Commits · 7923c4a0

Commit 7923c4a0, authored Oct 07, 2021 by Tim Dettmers

    Changed from testpypi to pypi. Release 0.0.24

Parent: 74399248

Showing 6 changed files with 15 additions and 27 deletions (+15, -27)
CHANGELOG.md                +1   -0
Makefile                    +1   -0
bitsandbytes/optim/lamb.py  +0   -1
deploy.sh                   +0  -13
deploy_from_slurm.sh        +8   -8
setup.py                    +5   -5
CHANGELOG.md (View file @ 7923c4a0)

@@ -21,3 +21,4 @@ Features:
 v0.0.24:
 - Fixed a bug where a float/half conversion led to a compilation error for CUDA 11.1 on Turning GPUs.
+- removed Apex dependency for bnb LAMB
Makefile (View file @ 7923c4a0)

@@ -52,6 +52,7 @@ $(BUILD_DIR):
 $(ROOT_DIR)/dependencies/cub:
 	git clone https://github.com/NVlabs/cub $(ROOT_DIR)/dependencies/cub
+	cd dependencies/cub; git checkout 1.11.0
 
 clean:
 	rm cuda_build/* ./bitsandbytes/libbitsandbytes.so
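The added Makefile line pins the cub dependency to a fixed release instead of tracking its default branch. A minimal sketch of the same clone-then-checkout pattern, using a throwaway local repository as a stand-in for https://github.com/NVlabs/cub so it runs offline:

```shell
# Pin a cloned dependency to a tagged release (local stand-in repo;
# the real Makefile rule clones https://github.com/NVlabs/cub).
set -e
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial commit"
git -C "$tmp/upstream" tag 1.11.0
git clone -q "$tmp/upstream" "$tmp/dependencies/cub"
git -C "$tmp/dependencies/cub" checkout -q 1.11.0  # pin, as the Makefile does
pinned=$(git -C "$tmp/dependencies/cub" describe --tags)
echo "$pinned"
```

Pinning to the 1.11.0 tag makes the build reproducible: a later push to cub's default branch can no longer break `make` for this release.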
bitsandbytes/optim/lamb.py (View file @ 7923c4a0)

@@ -2,7 +2,6 @@
 #
 # This source code is licensed under the MIT license found in the
 # LICENSE file in the root directory of this source tree.
-import apex
 from bitsandbytes.optim.optimizer import Optimizer2State
 
 class LAMB(Optimizer2State):
deploy.sh (deleted, 100644 → 0; View file @ 74399248)

-#!/bin/bash
-rm -rf dist build
-make clean
-CUDA_HOME=/usr/local/cuda-10.2 make
-CUDA_VERSION=102 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
-
-rm -rf dist build
-make clean
-CUDA_HOME=/usr/local/cuda-11.1 make
-CUDA_VERSION=111 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
deploy_from_slurm.sh (View file @ 7923c4a0)

@@ -10,7 +10,7 @@ module load gcc/7.3.0
 CUDA_HOME=/public/apps/cuda/9.2
 make
 CUDA_VERSION=92 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
+python -m twine upload dist/* --verbose
 module unload cuda
@@ -21,7 +21,7 @@ module load cuda/10.0
 CUDA_HOME=/public/apps/cuda/10.0
 make cuda10x
 CUDA_VERSION=100 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
+python -m twine upload dist/* --verbose
 module unload cuda
 module unload gcc
 module load gcc/8.4
@@ -33,7 +33,7 @@ module load cuda/10.1
 CUDA_HOME=/public/apps/cuda/10.1
 make cuda10x
 CUDA_VERSION=101 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
+python -m twine upload dist/* --verbose
 module unload cuda
 rm -rf dist build
@@ -43,7 +43,7 @@ module load cuda/10.2
 CUDA_HOME=/public/apps/cuda/10.2/
 make cuda10x
 CUDA_VERSION=102 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
+python -m twine upload dist/* --verbose
 module unload cuda
@@ -54,7 +54,7 @@ module load cuda/11.0
 CUDA_HOME=/public/apps/cuda/11.0
 make cuda110
 CUDA_VERSION=110 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
+python -m twine upload dist/* --verbose
 module unload cuda
 rm -rf dist build
@@ -64,7 +64,7 @@ module load cuda/11.1
 CUDA_HOME=/public/apps/cuda/11.1
 make cuda11x
 CUDA_VERSION=111 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
+python -m twine upload dist/* --verbose
 module unload cuda
 rm -rf dist build
@@ -74,7 +74,7 @@ module load cuda/11.2
 CUDA_HOME=/public/apps/cuda/11.2
 make cuda11x
 CUDA_VERSION=112 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
+python -m twine upload dist/* --verbose
 module unload cuda
 rm -rf dist build
@@ -82,5 +82,5 @@ make clean
 make cleaneggs
 CUDA_HOME=/private/home/timdettmers/git/autoswap/local/cuda-11.3 make cuda11x
 CUDA_VERSION=113 python -m build
-python -m twine upload --repository testpypi dist/* --verbose
+python -m twine upload dist/* --verbose
 module unload cuda
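Every hunk in deploy_from_slurm.sh makes the same one-word change: dropping `--repository testpypi`. With no `--repository` flag, twine's default target is the real PyPI index, so these unchanged-looking commands now publish the release proper rather than staging it on TestPyPI. A dry-run sketch of the before/after commands (echoed rather than executed, since an actual upload needs built wheels and credentials):

```shell
# Before this commit: wheels went to the TestPyPI staging index.
before="python -m twine upload --repository testpypi dist/* --verbose"
# After: without --repository, twine defaults to the real PyPI.
after="python -m twine upload dist/* --verbose"
echo "$before"
echo "$after"
```

This matches the commit message "Changed from testpypi to pypi. Release 0.0.24": the deploy pipeline graduates from test uploads to production releases.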
setup.py (View file @ 7923c4a0)

@@ -13,19 +13,19 @@ def read(fname):
 setup(
     name=f"bitsandbytes-cuda{os.environ['CUDA_VERSION']}",
-    version="0.0.23",
+    version="0.0.24",
     author="Tim Dettmers",
-    author_email="tim.dettmers@gmail.com",
-    description=("Numpy-like library for GPUs."),
+    author_email="dettmers@cs.washington.edu",
+    description=("8-bit optimizers and quantization routines."),
     license="MIT",
-    keywords="gpu",
+    keywords="gpu optimizers optimization 8-bit quantization compression",
     url="http://packages.python.org/bitsandbytes",
     packages=find_packages(),
     package_data={'': ['libbitsandbytes.so']},
     long_description=read('README.md'),
     long_description_content_type='text/markdown',
     classifiers=[
-        "Development Status :: 1 - Planning",
+        "Development Status :: 4 - Beta",
         'Topic :: Scientific/Engineering :: Artificial Intelligence'
     ],
 )
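setup.py builds the distribution name from the CUDA_VERSION environment variable that the deploy scripts export before each `python -m build`, so every CUDA toolkit version publishes under its own package name. A sketch of that naming scheme for one of the versions the scripts build:

```shell
# The deploy scripts export CUDA_VERSION before `python -m build`;
# setup.py embeds it in the distribution name via an f-string,
# yielding per-toolkit packages such as bitsandbytes-cuda111.
CUDA_VERSION=111
name="bitsandbytes-cuda${CUDA_VERSION}"
echo "$name"
```

Users then `pip install` the package matching their installed CUDA toolkit, sidestepping the need for a single wheel that bundles every CUDA runtime.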