jerrrrry / infinicore / Commits

Commit 58771213 (unverified), authored Mar 06, 2026 by pengcheng888, committed by GitHub on Mar 06, 2026

Merge pull request #1058 from InfiniTensor/issue/1033d

issue/1033 - update doc
Parents: f3f4bf16, c1ac5a61

Showing 1 changed file with 9 additions and 4 deletions: README.md (+9 -4)
README.md (view file @ 58771213)
````diff
...
@@ -159,14 +159,19 @@ python scripts/install.py [XMAKE_CONFIG_FLAGS]
 ```shell
-(1) Pull the cutlass and flash attn library sources into the third_party directory (--recursive is not needed)
-(2) Set the CUTLASS_ROOT environment variable to the cutlass path from (1)
+# Pull the cutlass and flash attn library sources into the third_party directory (--recursive is not needed)
+# Set the CUTLASS_ROOT environment variable to the cutlass path (optional in some environments)
 export CUTLASS_ROOT=<path-to>/InfiniCore/third_party/cutlass
-(3) During the xmake configuration step, additionally enable the --aten switch and set the --flash-attn library location, e.g.:
+# During the xmake configuration step, additionally enable the --aten switch and set the --flash-attn library location, e.g. (in some environments the default cuda path can be used):
 xmake f --nv-gpu=y --ccl=y --cuda=$CUDA_HOME --aten=y --flash-attn=<path-to>/InfiniCore/third_party/flash-attention -cv
-(4) The flash attention library is compiled and installed together with infinicore_cpp_api
+# Set additional environment variables
+export CPLUS_INCLUDE_PATH=$CUDA_HOME/include:$CPLUS_INCLUDE_PATH
+# The flash attention library is compiled and installed together with infinicore_cpp_api
 ```
 2. Build and install
...
````
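Assembled in order, the updated README steps amount to the sequence below. This is a sketch, not the repository's own script: the upstream clone URLs and the `/usr/local/cuda` fallback are assumptions not stated in the diff, and the `<path-to>` placeholders from the README are left to the reader.

```shell
# Sketch of the flash-attention build preparation described in the updated README.
# Assumptions: upstream repo URLs and the default CUDA location are guesses.
set -eu

INFINICORE_ROOT="${INFINICORE_ROOT:-$PWD/InfiniCore}"

# (1) Pull cutlass and flash-attention sources into third_party (no --recursive needed).
# Uncomment to actually clone; URLs are assumed upstream repositories:
# git clone https://github.com/NVIDIA/cutlass.git "$INFINICORE_ROOT/third_party/cutlass"
# git clone https://github.com/Dao-AILab/flash-attention.git "$INFINICORE_ROOT/third_party/flash-attention"

# (2) Point CUTLASS_ROOT at the cutlass checkout (optional in some environments).
export CUTLASS_ROOT="$INFINICORE_ROOT/third_party/cutlass"

# (3) Make CUDA headers visible to the C++ compiler; the ${...:+...} form
# avoids a trailing colon when CPLUS_INCLUDE_PATH was previously unset.
export CPLUS_INCLUDE_PATH="${CUDA_HOME:-/usr/local/cuda}/include${CPLUS_INCLUDE_PATH:+:$CPLUS_INCLUDE_PATH}"

echo "$CUTLASS_ROOT"
echo "$CPLUS_INCLUDE_PATH"
```

After this preparation, the xmake configuration from the diff (`xmake f --nv-gpu=y --ccl=y --cuda=$CUDA_HOME --aten=y --flash-attn=... -cv`) picks the libraries up, and flash attention is built along with infinicore_cpp_api.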