"git@developer.sourcefind.cn:yangql/composable_kernel-1.git" did not exist on "70456328851dea8e7c6d908193e2d32c2d7c509c"
Unverified Commit 2a741477 authored by Fengzhe Zhou, committed by GitHub

update links and checkers (#890)

parent 4c1533e5
@@ -14,4 +14,8 @@ jobs:
       - name: linkchecker
         run: |
           pip install linkchecker
-          linkchecker https://opencompass.readthedocs.io/ --no-robots -t 30
+          linkchecker https://opencompass.readthedocs.io/ --no-robots -t 30 --no-warnings \
+            --ignore-url https://opencompass\.readthedocs\.io/.*/static/images/opencompass_logo\.svg \
+            --ignore-url https://opencompass\.readthedocs\.io/.*/_static/images/icon-menu-dots\.svg \
+            --ignore-url https://opencompass\.readthedocs\.io/policy \
+            --ignore-url https://opencompass\.readthedocs\.io/(en|zh_CN)/[0-9a-f]{40}/.*
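For context, the updated check can be reproduced locally before pushing. The sketch below simply mirrors the workflow step above (same `linkchecker` flags and ignore patterns); the shell quoting around the regexes is the only addition, so the patterns reach `linkchecker` verbatim.

```shell
# Run the documentation link check locally, mirroring the CI step above.
pip install linkchecker

# Each --ignore-url takes a regex for URLs that should be skipped rather than
# reported as broken; quoting keeps the shell from touching the pattern characters.
linkchecker https://opencompass.readthedocs.io/ --no-robots -t 30 --no-warnings \
  --ignore-url 'https://opencompass\.readthedocs\.io/.*/static/images/opencompass_logo\.svg' \
  --ignore-url 'https://opencompass\.readthedocs\.io/.*/_static/images/icon-menu-dots\.svg' \
  --ignore-url 'https://opencompass\.readthedocs\.io/policy' \
  --ignore-url 'https://opencompass\.readthedocs\.io/(en|zh_CN)/[0-9a-f]{40}/.*'
```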
@@ -117,7 +117,7 @@ Model inference and code evaluation services located in different machines which
 ### Collect Inference Results(Only for Humanevalx)
 In OpenCompass's tools folder, there is a script called `collect_code_preds.py` provided to process and collect the inference results after providing the task launch configuration file during startup along with specifying the working directory used corresponding to the task.
-It is the same with `-r` option in `run.py`. More details can be referred through the [documentation](https://opencompass.readthedocs.io/en/latest/get_started.html#launch-evaluation).
+It is the same with `-r` option in `run.py`. More details can be referred through the [documentation](https://opencompass.readthedocs.io/en/latest/get_started/quick_start.html#launching-evaluation).
 ```shell
 python tools/collect_code_preds.py [config] [-r latest]
...
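As a usage illustration (not part of the diff): the script takes the same config that was passed to `run.py` and reuses that task's work directory via `-r`. The config filename below is hypothetical.

```shell
# Hypothetical config path, shown only to illustrate the calling convention;
# -r latest reuses the most recent work directory produced by run.py.
python tools/collect_code_preds.py configs/eval_humanevalx.py -r latest
```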
@@ -6,7 +6,7 @@ We now support evaluation of models accelerated by the [LMDeploy](https://github
 ### Install OpenCompass
-Please follow the [instructions](https://opencompass.readthedocs.io/en/latest/get_started.html) to install the OpenCompass and prepare the evaluation datasets.
+Please follow the [instructions](https://opencompass.readthedocs.io/en/latest/get_started/installation.html) to install the OpenCompass and prepare the evaluation datasets.
 ### Install LMDeploy
...
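For readers following the link, here is a minimal install sketch assuming the source-install path described in the installation guide; the clone URL and editable install are assumptions, so defer to the guide if they differ.

```shell
# Assumed source-install steps; see the linked installation guide for the
# authoritative instructions and dataset preparation.
git clone https://github.com/open-compass/opencompass.git
cd opencompass
pip install -e .
```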
@@ -115,7 +115,7 @@ humanevalx_datasets = [
 ### Collect Inference Results (Humanevalx only)
-OpenCompass provides the `collect_code_preds.py` script under `tools` to post-process and collect inference results. We only need to supply the configuration file used to launch the task and specify the work directory of that task to reuse; the option is the same as `-r` in `run.py`. See the [documentation](https://opencompass.readthedocs.io/zh_CN/latest/get_started.html#id7) for details.
+OpenCompass provides the `collect_code_preds.py` script under `tools` to post-process and collect inference results. We only need to supply the configuration file used to launch the task and specify the work directory of that task to reuse; the option is the same as `-r` in `run.py`. See the [documentation](https://opencompass.readthedocs.io/zh-cn/latest/get_started/quick_start.html#id4) for details.
 ```shell
 python tools/collect_code_preds.py [config] [-r latest]
...
@@ -6,7 +6,7 @@
 ### Install OpenCompass
-Please follow the OpenCompass [installation guide](https://opencompass.readthedocs.io/en/latest/get_started.html) to install the library and prepare the datasets.
+Please follow the OpenCompass [installation guide](https://opencompass.readthedocs.io/en/latest/get_started/installation.html) to install the library and prepare the datasets.
 ### Install LMDeploy
...