Commit 92e5632e authored by Jiaxin Shan

Change device parameter to cuda:0 to avoid runtime error

parent 83dbfbf6
@@ -37,7 +37,7 @@ python main.py \
--model hf-causal \
--model_args pretrained=EleutherAI/gpt-j-6B \
--tasks lambada_openai,hellaswag \
-    --device 0
+    --device cuda:0
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints:
@@ -47,7 +47,7 @@ python main.py \
--model hf-causal \
--model_args pretrained=EleutherAI/pythia-160m,revision=step100000 \
--tasks lambada_openai,hellaswag \
-    --device 0
+    --device cuda:0
```
To evaluate models that are called via `AutoSeq2SeqLM`, you instead use `hf-seq2seq`.
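For example, a seq2seq evaluation follows the same pattern as the commands above (a minimal sketch; the `google/flan-t5-small` checkpoint is only an illustrative choice, not one prescribed by this repository):
```
python main.py \
--model hf-seq2seq \
--model_args pretrained=google/flan-t5-small \
--tasks lambada_openai,hellaswag \
--device cuda:0
```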
@@ -111,7 +111,7 @@ python main.py \
--model gpt2 \
--tasks sciq \
--decontamination_ngrams_path path/containing/training/set/ngrams \
-    --device 0
+    --device cuda:0
```
## Cite as