OpenDAS / ColossalAI · Commits

Commit 73f4dc57 (unverified)
Authored Jan 29, 2024 by Frank Lee; committed via GitHub on Jan 29, 2024

[workflow] updated CI image (#5318)

Parent: 7cfed5f0
Showing 7 changed files, with 8 additions and 8 deletions:

.github/workflows/build_on_schedule.yml (+1 −1)
.github/workflows/example_check_on_dispatch.yml (+1 −1)
.github/workflows/example_check_on_pr.yml (+1 −1)
.github/workflows/example_check_on_schedule.yml (+1 −1)
.github/workflows/run_chatgpt_examples.yml (+1 −1)
.github/workflows/run_chatgpt_unit_tests.yml (+1 −1)
.github/workflows/run_colossalqa_unit_tests.yml (+2 −2)
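Across all seven workflows the edit is the same: the `image:` entry under each job's `container:` block is bumped to `hpcaitech/pytorch-cuda:2.1.0-12.1.0` (the tag presumably encodes PyTorch 2.1.0 and CUDA 12.1.0, replacing the older 2.0.0-11.7.0 and 1.12.0-11.3.0 images). A representative block after this commit, taken from build_on_schedule.yml — the `options:`, `volumes:`, and `timeout-minutes:` values differ per workflow:

```yaml
# GitHub Actions job-level container config (jobs.<job_id>.container)
container:
  # New CI image introduced by this commit
  image: hpcaitech/pytorch-cuda:2.1.0-12.1.0
  # GPU access plus bind mounts for shared memory and test data
  options: --gpus all --rm -v /dev/shm -v /data/scratch/llama-tiny:/data/scratch/llama-tiny
```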
.github/workflows/build_on_schedule.yml

@@ -12,7 +12,7 @@ jobs:
     if: github.repository == 'hpcaitech/ColossalAI'
     runs-on: [self-hosted, gpu]
     container:
-      image: hpcaitech/pytorch-cuda:2.0.0-11.7.0
+      image: hpcaitech/pytorch-cuda:2.1.0-12.1.0
       options: --gpus all --rm -v /dev/shm -v /data/scratch/llama-tiny:/data/scratch/llama-tiny
     timeout-minutes: 90
     steps:
.github/workflows/example_check_on_dispatch.yml

@@ -45,7 +45,7 @@ jobs:
       fail-fast: false
       matrix: ${{fromJson(needs.manual_check_matrix_preparation.outputs.matrix)}}
     container:
-      image: hpcaitech/pytorch-cuda:2.0.0-11.7.0
+      image: hpcaitech/pytorch-cuda:2.1.0-12.1.0
       options: --gpus all --rm -v /data/scratch/examples-data:/data/
     timeout-minutes: 15
     steps:
.github/workflows/example_check_on_pr.yml

@@ -77,7 +77,7 @@ jobs:
       fail-fast: false
       matrix: ${{fromJson(needs.detect-changed-example.outputs.matrix)}}
     container:
-      image: hpcaitech/pytorch-cuda:2.0.0-11.7.0
+      image: hpcaitech/pytorch-cuda:2.1.0-12.1.0
       options: --gpus all --rm -v /data/scratch/examples-data:/data/
     timeout-minutes: 20
     concurrency:
.github/workflows/example_check_on_schedule.yml

@@ -34,7 +34,7 @@ jobs:
       fail-fast: false
       matrix: ${{fromJson(needs.matrix_preparation.outputs.matrix)}}
     container:
-      image: hpcaitech/pytorch-cuda:2.0.0-11.7.0
+      image: hpcaitech/pytorch-cuda:2.1.0-12.1.0
     timeout-minutes: 10
     steps:
       - name: 📚 Checkout
.github/workflows/run_chatgpt_examples.yml

@@ -18,7 +18,7 @@ jobs:
       github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
     runs-on: [self-hosted, gpu]
     container:
-      image: hpcaitech/pytorch-cuda:1.12.0-11.3.0
+      image: hpcaitech/pytorch-cuda:2.1.0-12.1.0
       options: --gpus all --rm -v /data/scratch/github_actions/chat:/data/scratch/github_actions/chat --shm-size=10.24gb
     timeout-minutes: 30
     defaults:
.github/workflows/run_chatgpt_unit_tests.yml

@@ -20,7 +20,7 @@ jobs:
       github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
     runs-on: [self-hosted, gpu]
     container:
-      image: hpcaitech/pytorch-cuda:1.12.0-11.3.0
+      image: hpcaitech/pytorch-cuda:2.1.0-12.1.0
       options: --gpus all --rm -v /data/scratch/chatgpt:/data/scratch/chatgpt
     timeout-minutes: 30
     defaults:
.github/workflows/run_colossalqa_unit_tests.yml

@@ -19,7 +19,7 @@ jobs:
       github.event.pull_request.base.repo.full_name == 'hpcaitech/ColossalAI'
     runs-on: [self-hosted, gpu]
     container:
-      image: hpcaitech/pytorch-cuda:1.12.0-11.3.0
+      image: hpcaitech/pytorch-cuda:2.1.0-12.1.0
     volumes:
       - /data/scratch/test_data_colossalqa:/data/scratch/test_data_colossalqa
       - /data/scratch/llama-tiny:/data/scratch/llama-tiny