ColossalAI · Commit 6a3086a5 (unverified)

Authored Jan 30, 2024 by digger yu; committed via GitHub on Jan 30, 2024. Parent: febed232.

fix typo under extensions/ (#5330)

Showing 5 changed files with 12 additions and 12 deletions (+12 −12):
- extensions/README.md (+8 −8)
- extensions/cuda_extension.py (+1 −1)
- extensions/flash_attention/flash_attention_dao_cuda.py (+1 −1)
- extensions/flash_attention/flash_attention_xformers_cuda.py (+1 −1)
- extensions/triton_extension.py (+1 −1)
extensions/README.md (view file @ 6a3086a5)

````diff
@@ -3,12 +3,12 @@
 ## 📌 Table of Contents

 - [🔌 Extensions](#-extensions)
 - [📌 Table of Contents](#-table-of-contents)
 - [📚 Introduction](#-introduction)
 - [🪅 Design](#-design)
 - [🛠 API Usage](#-api-usage)
 - [🏗 Write a customized extension](#-write-a-customized-extension)
 - [✏️ Acknowledgement](#️-acknowledgement)

 ## 📚 Introduction

@@ -46,12 +46,12 @@ kernel = CPUAdamLoader().load()
 - Case 2: Load a specific kernel

-This case applies if you are familar with the extensions available.
+This case applies if you are familiar with the extensions available.

 ```python
 from colossalai.kernel.kernel_loader import CPUAdamLoader

-# load the kernel by giving the kernal name
+# load the kernel by giving the kernel name
 kernel = CPUAdamLoader().load(ext_name="cpu_adam_arm")
 ```
````
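Taken together, the two README hunks document the kernel-loader API this commit touches. As a minimal sketch, assuming a ColossalAI installation where `colossalai.kernel.kernel_loader` exposes `CPUAdamLoader` exactly as shown in the diff, the two cases read:

```python
from colossalai.kernel.kernel_loader import CPUAdamLoader

# Case 1 (from the hunk header): let the loader pick a suitable kernel automatically.
kernel = CPUAdamLoader().load()

# Case 2 (from the hunk body): load a specific kernel by its extension name.
kernel = CPUAdamLoader().load(ext_name="cpu_adam_arm")
```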
extensions/cuda_extension.py (view file @ 6a3086a5)

```diff
@@ -20,7 +20,7 @@ class _CudaExtension(_CppExtension):
     """

     def is_hardware_available(self) -> bool:
-        # cuda extension can only be built if cuda is availabe
+        # cuda extension can only be built if cuda is available
         try:
             import torch
```
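The hunk cuts off right after `import torch`. A plausible completion of the check, assuming the method simply defers to PyTorch; the `torch.cuda.is_available()` call and the `except` branch are assumptions, not part of the diff:

```python
def is_hardware_available(self) -> bool:
    # cuda extension can only be built if cuda is available
    try:
        import torch

        return torch.cuda.is_available()  # assumption: defer to PyTorch's CUDA probe
    except ImportError:
        return False  # assumption: without torch there is no CUDA runtime to probe
```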
extensions/flash_attention/flash_attention_dao_cuda.py (view file @ 6a3086a5)

```diff
@@ -6,7 +6,7 @@ class FlashAttentionDaoCudaExtension(_Extension):
         super().__init__(name="flash_attention_dao_cuda", support_aot=False, support_jit=False, priority=10)

     def is_hardware_available(self) -> bool:
-        # cuda extension can only be built if cuda is availabe
+        # cuda extension can only be built if cuda is available
         try:
             import torch
```
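This constructor is the only one in the commit that passes `priority=10`. One hypothetical way a loader could use such a priority to choose among competing kernels; `pick_extension` is illustrative and not ColossalAI API:

```python
def pick_extension(extensions):
    """Hypothetical selector: prefer the highest-priority usable extension."""
    usable = [ext for ext in extensions if ext.is_hardware_available()]
    if not usable:
        raise RuntimeError("no compatible extension found")
    # flash_attention_dao_cuda (priority=10) would beat a default-priority kernel here
    return max(usable, key=lambda ext: ext.priority)
```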
extensions/flash_attention/flash_attention_xformers_cuda.py (view file @ 6a3086a5)

```diff
@@ -6,7 +6,7 @@ class FlashAttentionXformersCudaExtension(_Extension):
         super().__init__(name="flash_attention_xformers_cuda", support_aot=False, support_jit=False)

     def is_hardware_available(self) -> bool:
-        # cuda extension can only be built if cuda is availabe
+        # cuda extension can only be built if cuda is available
         try:
             import torch
```
extensions/triton_extension.py (view file @ 6a3086a5)

```diff
@@ -8,7 +8,7 @@ class _TritonExtension(_Extension):
         super().__init__(name, support_aot=False, support_jit=True, priority=priority)

     def is_hardware_compatible(self) -> bool:
-        # cuda extension can only be built if cuda is availabe
+        # cuda extension can only be built if cuda is available
         try:
             import torch
```
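Across these hunks the constructors differ only in their capability flags: both flash-attention extensions pass `support_aot=False, support_jit=False`, while `_TritonExtension` passes `support_jit=True`, since Triton kernels are compiled just in time. A hypothetical base class storing these flags might look like the sketch below; it is inferred from the `super().__init__` calls above, not the actual `_Extension` implementation:

```python
class _Extension:
    """Hypothetical reconstruction inferred from the super().__init__ calls in these diffs."""

    def __init__(self, name: str, support_aot: bool, support_jit: bool, priority: int = 1):
        self.name = name                # e.g. "flash_attention_dao_cuda"
        self.support_aot = support_aot  # kernel can be pre-built ahead of time
        self.support_jit = support_jit  # kernel is compiled just in time (True for _TritonExtension)
        self.priority = priority        # higher values win when several kernels qualify; the default is an assumption
```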