Commit 0856828f authored by haileyschoelkopf

remove references to main.py

parent 2e13caa6
@@ -2,11 +2,11 @@
 ## Usage
-Simply add a "--decontamination_ngrams_path" when running main.py. The provided directory should contain
+Simply add a "--decontamination_ngrams_path" when running \__main\__.py. The provided directory should contain
 the ngram files and info.json produced in "Pile Ngram Generation" further down.
 ```bash
-python main.py \
+python -m lm_eval \
     --model gpt2 \
     --device 0 \
     --tasks sciq \
...
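The command in the hunk above is truncated before the flag it describes; putting the pieces together, a full invocation might look like the following sketch (the ngram directory path is a placeholder, not a path from the source):

```shell
# Sketch: run the harness with decontamination enabled.
# "./pile_ngrams" is a hypothetical directory containing the ngram
# files and info.json described above.
python -m lm_eval \
    --model gpt2 \
    --device 0 \
    --tasks sciq \
    --decontamination_ngrams_path ./pile_ngrams
```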
@@ -4,7 +4,7 @@ This document details the interface exposed by `lm-eval` and provides details on
 ## Command-line Interface
-A majority of users run the library by cloning it from Github and running the `main.py` script.
+A majority of users run the library by cloning it from Github, installing the package as editable, and running it via `python -m lm_eval`.
 Equivalently, running the library can be done via the `lm-eval` entrypoint at the command line.
...
@@ -70,9 +70,9 @@ smth smth tokenizer-agnostic
 Congrats on implementing your model! Now it's time to test it out.
-To make your model usable via the command line interface to `lm-eval` using `main.py`, you'll need to tell `lm-eval` what your model's name is.
-This is done via a *decorator*, `lm_eval.api.registry.register_model`. Using `register_model()`, one can both tell the package what the model's name(s) to be used are when invoking it with `python main.py --model <name>` and alert `lm-eval` to the model's existence.
+To make your model usable via the command line interface to `lm-eval` using `python -m lm_eval`, you'll need to tell `lm-eval` what your model's name is.
+This is done via a *decorator*, `lm_eval.api.registry.register_model`. Using `register_model()`, one can both tell the package the model name(s) to use when invoking it with `python -m lm_eval --model <name>` and alert `lm-eval` to the model's existence.
 ```python
 from lm_eval.api.registry import register_model
...
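The registry pattern described in the hunk above can be illustrated with a minimal, self-contained sketch of a name-to-class registry. This is not the actual `lm_eval` implementation, just a simplified stand-in showing how a decorator can map CLI model names to classes:

```python
# Minimal sketch of a decorator-based model registry, illustrating the
# pattern behind lm_eval.api.registry.register_model.
# NOT the real lm_eval code -- a simplified stand-in for illustration.

MODEL_REGISTRY = {}

def register_model(*names):
    """Register a model class under one or more CLI-facing names."""
    def decorator(cls):
        for name in names:
            MODEL_REGISTRY[name] = cls
        return cls
    return decorator

# Hypothetical model class registered under two names:
@register_model("my-model", "my-model-alias")
class MyCustomLM:
    pass

# A CLI like `python -m lm_eval --model my-model` can then resolve the
# user-supplied name to the class:
print(MODEL_REGISTRY["my-model"].__name__)  # MyCustomLM
```

Registering multiple names for one class, as shown, is why the docs speak of the model's "name(s)".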
@@ -258,7 +258,7 @@ You can do this via adding the Python snippet
 from lm_eval.tasks import include_task_folder
 include_task_folder("/path/to/yaml/parent/folder")
 ```
-to the top of any Python file that is run or imported when performing evaluation, such as `main.py`.
+to the top of any Python file that is run or imported when performing evaluation, such as `\_\_main\_\_.py`.
 Passing `--tasks /path/to/yaml/file` is also accepted.
...
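At its core, `include_task_folder` walks a directory tree for task YAML files. A minimal self-contained sketch of that idea (not the actual `lm_eval` implementation, which also registers each discovered task):

```python
# Sketch of the directory-scanning idea behind include_task_folder.
# NOT the real lm_eval code: this stand-in only returns discovered
# YAML paths instead of registering tasks.
from pathlib import Path

def include_task_folder(folder):
    """Collect all *.yaml task files under a directory tree."""
    return sorted(Path(folder).rglob("*.yaml"))

# Example against a throwaway directory:
import tempfile
with tempfile.TemporaryDirectory() as d:
    Path(d, "mytask.yaml").write_text("task: mytask\n")
    found = include_task_folder(d)
    print([p.name for p in found])  # ['mytask.yaml']
```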