OpenDAS / Megatron-LM / Commits / 6fc1b02f

Commit 6fc1b02f
authored Dec 05, 2021 by zihanl

update commands

Parent: a9738f63

Showing 5 changed files with 26 additions and 26 deletions (+26, -26):
tasks/knwl_dialo/scripts/eval_generation.sh      +2  -2
tasks/knwl_dialo/scripts/finetune_knwl_gen.sh    +6  -6
tasks/knwl_dialo/scripts/finetune_resp_gen.sh    +6  -6
tasks/knwl_dialo/scripts/prompt_knwl_gen.sh      +6  -6
tasks/knwl_dialo/scripts/prompt_resp_gen.sh      +6  -6
tasks/knwl_dialo/scripts/eval_generation.sh (view file @ 6fc1b02f)

@@ -10,8 +10,8 @@ DISTRIBUTED_ARGS="--nproc_per_node $WORLD_SIZE \
                    --master_addr localhost \
                    --master_port 6000"

-OUTPUT_PATH=<SPECIFIC_PATH_FOR_THE_OUTPUT_GENERATION>
+OUTPUT_PATH=<PATH_OF_THE_OUTPUT_GENERATION>
-GROUND_TRUTH_PATH=<SPECIFIC_PATH_FOR_THE_GROUND_TRUTH>
+GROUND_TRUTH_PATH=<PATH_OF_THE_GROUND_TRUTH>

 python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
        --num-layers 24 \
 ...
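For reference, the two renamed placeholders in eval_generation.sh might be filled in as follows before running the script; the concrete paths are hypothetical examples and are not part of this commit:

    # Hypothetical values (assumptions, not from the commit):
    OUTPUT_PATH=/data/knwl_dialo/output/knowledge_generations.txt         # generated text to be scored
    GROUND_TRUTH_PATH=/data/knwl_dialo/data/test_knowledge_reference.txt  # reference text to score against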
tasks/knwl_dialo/scripts/finetune_knwl_gen.sh (view file @ 6fc1b02f)

@@ -12,12 +12,12 @@ DISTRIBUTED_ARGS="--nproc_per_node $WORLD_SIZE \
                    --master_addr localhost \
                    --master_port 6000"

-CHECKPOINT_PATH=<Specify path for the language model>
+CHECKPOINT_PATH=<PATH_OF_THE_LANGUAGE_MODEL>
-OUTPUT_MODEL_PATH=<Specify path for the saved model>
+OUTPUT_MODEL_PATH=<PATH_OF_THE_SAVED_MODEL>
-VOCAB_PATH=<Specify path for the vocab file>
+VOCAB_PATH=<PATH_OF_THE_VOCAB_FILE>
-MERGE_PATH=<Specify path for the merge file>
+MERGE_PATH=<PATH_OF_THE_MERGE_FILE>
-TRAIN_PATH=<Specify path for the training dataset>
+TRAIN_PATH=<PATH_OF_THE_TRAINING_DATASET>
-TEST_PATH=<Specify path for the test dataset>
+TEST_PATH=<PATH_OF_THE_TEST_DATASET>

 python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
        --num-layers 24 \
 ...
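As a sketch, the six placeholders in finetune_knwl_gen.sh could be filled in like this; every path below is a hypothetical example, and the vocab/merge files are assumed to be the usual GPT-2 BPE files used by Megatron-LM scripts:

    CHECKPOINT_PATH=/checkpoints/gpt2-345m              # hypothetical pretrained LM checkpoint
    OUTPUT_MODEL_PATH=/checkpoints/knwl_gen_finetuned   # hypothetical directory for the fine-tuned model
    VOCAB_PATH=/data/gpt2-vocab.json                    # hypothetical GPT-2 vocab file
    MERGE_PATH=/data/gpt2-merges.txt                    # hypothetical GPT-2 merge file
    TRAIN_PATH=/data/knwl_dialo/train.txt               # hypothetical training dataset
    TEST_PATH=/data/knwl_dialo/test.txt                 # hypothetical test dataset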
tasks/knwl_dialo/scripts/finetune_resp_gen.sh (view file @ 6fc1b02f)

@@ -12,12 +12,12 @@ DISTRIBUTED_ARGS="--nproc_per_node $WORLD_SIZE \
                    --master_addr localhost \
                    --master_port 6000"

-CHECKPOINT_PATH=<Specify path for the language model>
+CHECKPOINT_PATH=<PATH_OF_THE_LANGUAGE_MODEL>
-OUTPUT_MODEL_PATH=<Specify path for the saved model>
+OUTPUT_MODEL_PATH=<PATH_OF_THE_SAVED_MODEL>
-VOCAB_PATH=<Specify path for the vocab file>
+VOCAB_PATH=<PATH_OF_THE_VOCAB_FILE>
-MERGE_PATH=<Specify path for the merge file>
+MERGE_PATH=<PATH_OF_THE_MERGE_FILE>
-TRAIN_PATH=<Specify path for the training dataset>
+TRAIN_PATH=<PATH_OF_THE_TRAINING_DATASET>
-TEST_PATH=<Specify path for the test dataset>
+TEST_PATH=<PATH_OF_THE_TEST_DATASET>

 python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
        --num-layers 24 \
 ...
tasks/knwl_dialo/scripts/prompt_knwl_gen.sh (view file @ 6fc1b02f)

@@ -12,12 +12,12 @@ DISTRIBUTED_ARGS="--nproc_per_node $WORLD_SIZE \
                    --master_addr localhost \
                    --master_port 6000"

-CHECKPOINT_PATH=<Specify path for the language model>
+CHECKPOINT_PATH=<PATH_OF_THE_LANGUAGE_MODEL>
-INPUT_PATH=<Specific path for the input test dataset>
+INPUT_PATH=<PATH_OF_THE_INPUT_TEST_DATA_FILE>
-VOCAB_PATH=<Specify path for the vocab file>
+VOCAB_PATH=<PATH_OF_THE_VOCAB_FILE>
-MERGE_PATH=<Specify path for the merge file>
+MERGE_PATH=<PATH_OF_THE_MERGE_FILE>
-OUTPUT_PATH=<Speicifc path for the output>
+OUTPUT_PATH=<PATH_OF_THE_OUTPUT_GENERATION_FILE>
-PROMPT_PATH=<Specific path for the prompts>
+PROMPT_PATH=<PATH_OF_THE_KNOWLEDGE_GENERATION_PROMPTS>

 python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
        --num-layers 24 \
 ...
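One non-interactive way to substitute the placeholders in prompt_knwl_gen.sh is a sed pass before launching it; only the placeholder names come from this commit, and the replacement paths are hypothetical:

    # Replace the placeholders with concrete (hypothetical) paths, then run the script.
    sed -i \
        -e 's|<PATH_OF_THE_LANGUAGE_MODEL>|/checkpoints/gpt2-345m|' \
        -e 's|<PATH_OF_THE_INPUT_TEST_DATA_FILE>|/data/knwl_dialo/test_input.txt|' \
        -e 's|<PATH_OF_THE_VOCAB_FILE>|/data/gpt2-vocab.json|' \
        -e 's|<PATH_OF_THE_MERGE_FILE>|/data/gpt2-merges.txt|' \
        -e 's|<PATH_OF_THE_OUTPUT_GENERATION_FILE>|/data/knwl_dialo/output/knwl_generations.txt|' \
        -e 's|<PATH_OF_THE_KNOWLEDGE_GENERATION_PROMPTS>|/data/knwl_dialo/prompts/knwl_prompts.txt|' \
        tasks/knwl_dialo/scripts/prompt_knwl_gen.sh
    bash tasks/knwl_dialo/scripts/prompt_knwl_gen.sh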
tasks/knwl_dialo/scripts/prompt_resp_gen.sh (view file @ 6fc1b02f)

@@ -13,12 +13,12 @@ DISTRIBUTED_ARGS="--nproc_per_node $WORLD_SIZE \
                    --master_addr localhost \
                    --master_port 6000"

-CHECKPOINT_PATH=<Specify path for the language model>
+CHECKPOINT_PATH=<PATH_OF_THE_LANGUAGE_MODEL>
-INPUT_PATH=<Specific path for the input test dataset>
+INPUT_PATH=<PATH_OF_THE_INPUT_TEST_DATA_FILE>
-VOCAB_PATH=<Specify path for the vocab file>
+VOCAB_PATH=<PATH_OF_THE_VOCAB_FILE>
-MERGE_PATH=<Specify path for the merge file>
+MERGE_PATH=<PATH_OF_THE_MERGE_FILE>
-OUTPUT_PATH=<Speicifc path for the output>
+OUTPUT_PATH=<PATH_OF_THE_OUTPUT_GENERATION_FILE>
-PROMPT_PATH=<Specific path for the prompts>
+PROMPT_PATH=<PATH_OF_THE_RESPONSE_GENERATION_PROMPTS>

 python -m torch.distributed.launch $DISTRIBUTED_ARGS ./tasks/main.py \
        --num-layers 24 \
 ...
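Taken together, the touched scripts suggest prompting a language model first for knowledge generation, then for response generation, and finally evaluating the outputs. A minimal sketch of that ordering, assuming each script's placeholders have been filled in as above (the ordering itself is an assumption; this commit only renames placeholders):

    bash tasks/knwl_dialo/scripts/prompt_knwl_gen.sh   # generate knowledge from prompts
    bash tasks/knwl_dialo/scripts/prompt_resp_gen.sh   # generate responses from prompts
    bash tasks/knwl_dialo/scripts/eval_generation.sh   # score generations against the ground truth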