Commit bffed0fe authored by dengjb
FROM python:3.8
WORKDIR /packages
RUN wget https://registry.npmmirror.com/-/binary/node/latest-v16.x/node-v16.13.1-linux-x64.tar.gz \
&& tar -xvf node-v16.13.1-linux-x64.tar.gz \
&& mv node-v16.13.1-linux-x64 /usr/local/nodejs \
&& ln -s /usr/local/nodejs/bin/node /usr/local/bin \
&& ln -s /usr/local/nodejs/bin/npm /usr/local/bin \
&& node -v
RUN apt-get update \
&& apt-get install -y texlive-full \
&& pdflatex -v
RUN git clone https://github.com/ImageMagick/ImageMagick.git ImageMagick-7.1.1 \
&& cd ImageMagick-7.1.1 \
&& ./configure \
&& make \
&& make install \
&& ldconfig /usr/local/lib \
&& convert --version
WORKDIR /code
COPY . /code
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
<div align="center">
[English](./README.md) | [简体中文]
<h1>Image Over Text: Transforming Formula Recognition Evaluation with Character Detection Matching</h1>
[[ 论文 ]](https://arxiv.org/pdf/2409.03643) [[ 网站 ]](https://github.com/opendatalab/UniMERNet/tree/main/cdm)
[[在线Demo 🤗(Hugging Face)]](https://huggingface.co/spaces/opendatalab/CDM-Demo)
</div>
# 概述
公式识别因其复杂的结构和多样的符号表示而面临重大挑战。尽管公式识别模型不断进步,但现有评估指标如 BLEU 和编辑距离仍存在显著局限性。这些指标忽视了同一公式的多种表示形式,并对训练数据的分布高度敏感,导致评估不公。为此,我们提出了字符检测匹配(CDM)指标,通过设计基于图像而非 LaTeX 的评分方法来确保评估的客观性。具体而言,CDM 将模型预测的 LaTeX 和真实 LaTeX 公式渲染为图像格式,然后使用视觉特征提取和定位技术进行精确的字符级匹配,结合空间位置信息。相比于仅依赖文本字符匹配的 BLEU 和编辑距离,CDM 提供了更准确和公平的评估。
CDM与BLEU、EditDistance等指标对比示意图:
<div align="center">
<img src="assets/demo/cdm_demo.png" alt="Demo" width="95%">
</div>
> 从上述对比图中可以看出:
- Case1: 模型预测正确,理论上ExpRate/BLEU/EditDist应该为1/1/0,实际上为0/0.449/0.571,完全无法反映识别准确性;
- Case2 vs Case1: 预测错误的模型(Case2)的BLEU/EditDist指标却远优于识别正确的模型(Case1);
- Case3: 模型预测错误较多,BLEU指标却高达0.907,不符合直觉。
CDM的算法流程图如下:
<div align="center">
<img src="assets/demo/cdm_framework_new.png" alt="Overview" width="95%">
</div>
可以看到CDM基于渲染图像的字符匹配方式,结果更加直观,且不受公式表达多样性影响。
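字符匹配完成后,最终得分就是目标检测中常见的召回率/精确率/F1。下面是对这最后一步的最小示意(其中的数字为虚构,`cdm_score` 为本示意自拟的函数名,并非代码库中的接口):

```python
def cdm_score(n_matched, n_gt, n_pred):
    """由匹配成功的渲染字符数计算 recall / precision / F1。"""
    recall = round(n_matched / n_gt, 3)
    precision = round(n_matched / n_pred, 3)
    f1 = round(2 * n_matched / (n_gt + n_pred), 3)
    return recall, precision, f1

# 假设 GT 渲染出 10 个字符、预测渲染出 12 个,其中 9 个匹配成功:
print(cdm_score(9, 10, 12))  # → (0.9, 0.75, 0.818)
```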
# 更新记录
- 2025/09/28
- 解决了一些中文公式的渲染bug.
- 2025/06/09
- 修复了字符框匹配过程中的一个bug.
- 优化了匹配过程的一些参数.
- 2025/06/05
- 支持中文公式评测.
- 优化了处理速度.
# 使用方法
## 在线Demo体验
请点击HuggingFace Demo链接: [(Hugging Face)🤗](https://huggingface.co/spaces/opendatalab/CDM-Demo)
## 本地安装CDM
CDM需要对公式进行渲染,依赖较多的系统组件,推荐在Linux系统上安装配置。
## 准备环境
需要的依赖包括:Node.js、ImageMagick、pdflatex,请按照下面的指令进行安装:
### 步骤.1 安装 nodejs
```
wget https://registry.npmmirror.com/-/binary/node/latest-v16.x/node-v16.13.1-linux-x64.tar.gz
tar -xvf node-v16.13.1-linux-x64.tar.gz
mv node-v16.13.1-linux-x64 /usr/local/nodejs
ln -s /usr/local/nodejs/bin/node /usr/local/bin
ln -s /usr/local/nodejs/bin/npm /usr/local/bin
node -v
```
### 步骤.2 安装 ImageMagick
`apt-get`命令安装的ImageMagick版本是6.x,我们需要安装7.x的,所以从源码编译安装:
```
git clone https://github.com/ImageMagick/ImageMagick.git ImageMagick-7.1.1
cd ImageMagick-7.1.1
./configure
make
sudo make install
sudo ldconfig /usr/local/lib
convert --version
```
### 步骤.3 安装 pdflatex
```
sudo apt-get update
sudo apt-get install texlive-full
```
### 步骤.4 安装 python 依赖
```
pip install -r requirements.txt
```
## 通过docker部署
如果安装上述的环境有问题,也可以通过docker来安装,步骤如下:
- build docker image
```
docker build -f DockerFile -t cdm:latest .
```
构建镜像的过程可能比较长,如果最后终端出现`Successfully tagged cdm:latest`,说明镜像构建成功。
- start a container
```
docker run -it cdm bash
```
此时启动了容器并进入bash环境,可以进行CDM的评测。如果在启动的时候希望建立映射,可以加入参数:`-v xxx:xxx`,这样退出容器后,评测的结果还保存在宿主机。
## 使用CDM
如果安装过程顺利,现在可以使用CDM对公式识别的结果进行评测了。
### 1. 批量评测
- 准备输入的json文件
在UniMERNet上评测,可以用下面的脚本获取json文件:
```
python convert2cdm_format.py -i {UniMERNet predictions} -o {save path}
```
或者,也可以参考下面的格式自行准备json文件:
```
[
    {
        "img_id": "case_1",  # 非必须的key
        "gt": "y = 2z + 3x",
        "pred": "y = 2x + 3z"
    },
    {
        "img_id": "case_2",
        "gt": "y = x^2 + 1",
        "pred": "y = x^2 + 1"
    },
    ...
]
```
`注意:在json文件中,反斜杠 "\" 等特殊字符需要转义,例如 "\begin" 在json文件中就需要写成 "\\begin"。`
- 评测:
```
python evaluation.py -i {path_to_your_input_json}
```
### 2. 启动 gradio demo
```
python app.py
```
<div align="center">
English | [简体中文](./README-CN.md)
<h1>Image Over Text: Transforming Formula Recognition Evaluation with Character Detection Matching</h1>
[[ Paper ]](https://arxiv.org/pdf/2409.03643) [[ Website ]](https://github.com/opendatalab/UniMERNet/tree/main/cdm)
[[Demo 🤗(Hugging Face)]](https://huggingface.co/spaces/opendatalab/CDM-Demo)
</div>
# Overview
Formula recognition presents significant challenges due to the complicated structures and varied notations of mathematical expressions. Despite continuous advancements in formula recognition models, the evaluation metrics these models rely on, such as BLEU and Edit Distance, still exhibit notable limitations: they overlook the fact that the same formula has diverse valid representations, and they are highly sensitive to the distribution of the training data, which makes formula recognition evaluation unfair. To this end, we propose the Character Detection Matching (CDM) metric, which ensures evaluation objectivity by scoring at the image level rather than the LaTeX level. Specifically, CDM renders both the model-predicted LaTeX and the ground-truth LaTeX formulas into images, then employs visual feature extraction and localization techniques for precise character-level matching that incorporates spatial position information. Such a spatially aware, character-matching method offers a more accurate and equitable evaluation than the previous BLEU and Edit Distance metrics, which rely solely on text-based character matching.
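The mismatch is easy to reproduce with a plain character-level edit distance. The two LaTeX strings below (a toy example of our own) render to the same fraction, yet text-level comparison treats them as largely different; the `edit_distance` helper is a standard Levenshtein implementation written for this illustration, not part of CDM:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

# Two LaTeX strings that render to the identical fraction a/b:
gt = r"\frac{a}{b}"
pred = r"{a \over b}"
dist = edit_distance(gt, pred)
print(dist, round(dist / max(len(gt), len(pred)), 3))  # large despite equal rendering
```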
Comparison between CDM and BLEU, Edit Distance metrics:
<div align="center">
<img src="assets/demo/cdm_demo.png" alt="Demo" width="95%">
</div>
The algorithm flow of CDM is as follows:
<div align="center">
<img src="assets/demo/cdm_framework_new.png" alt="Overview" width="95%">
</div>
CDM's character matching method based on rendered images provides more intuitive results and is not affected by the diversity of formula representations.
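The spatial position information above is enforced in the released code by RANSAC-fitting an affine transform between matched character-box centers (via `skimage.measure.ransac`). A self-contained NumPy sketch of that idea, with made-up toy points standing in for real box centers:

```python
import numpy as np

def ransac_affine_inliers(src, dst, threshold=25.0, trials=50, seed=0):
    """Flag point pairs consistent with a single affine transform: repeatedly
    fit an affine map to 3 random pairs and keep the hypothesis with the most
    pairs whose residual stays below the threshold."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous rows [x, y, 1]
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(src), size=3, replace=False)
        params, *_ = np.linalg.lstsq(A[idx], dst[idx], rcond=None)
        inliers = np.linalg.norm(A @ params - dst, axis=1) < threshold
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Six box centers shifted by (+5, +5), plus one grossly misplaced match:
src = [[0, 0], [100, 0], [0, 100], [100, 100], [20, 70], [70, 80], [50, 50]]
dst = [[5, 5], [105, 5], [5, 105], [105, 105], [25, 75], [75, 85], [400, 400]]
print(ransac_affine_inliers(src, dst).tolist())  # last pair flagged False
```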
# Changelog
- 2025/09/28
- Fixed some Chinese formula rendering bugs.
- 2025/06/09
- Fixed a bug in the bbox matching process.
- Tuned some matching parameters.
- 2025/06/05
- Supported Chinese formula evaluation.
- Improved processing speed.
# Usage
## Try Online Demo
Try CDM on our online demo: [(Hugging Face)🤗](https://huggingface.co/spaces/opendatalab/CDM-Demo)
## Install CDM Locally
Given CDM's complex environment dependencies, we recommend trying it on Linux systems.
Node.js, ImageMagick, and pdflatex are required to render PDF files and convert them to images; installation guides follow.
### step.1 install nodejs
```
wget https://registry.npmmirror.com/-/binary/node/latest-v16.x/node-v16.13.1-linux-x64.tar.gz
tar -xvf node-v16.13.1-linux-x64.tar.gz
mv node-v16.13.1-linux-x64 /usr/local/nodejs
ln -s /usr/local/nodejs/bin/node /usr/local/bin
ln -s /usr/local/nodejs/bin/npm /usr/local/bin
node -v
```
### step.2 install ImageMagick
The ImageMagick version installed by `apt-get` is usually 6.x, but we need 7.x, so we install it from source:
```
git clone https://github.com/ImageMagick/ImageMagick.git ImageMagick-7.1.1
cd ImageMagick-7.1.1
./configure
make
sudo make install
sudo ldconfig /usr/local/lib
convert --version
```
### step.3 install pdflatex
```
sudo apt-get update
sudo apt-get install texlive-full
```
### step.4 install python requirements
```
pip install -r requirements.txt
```
## Install via Docker
If setting up the environment above fails, you can also install CDM with Docker:
- build docker image
```
docker build -f DockerFile -t cdm:latest .
```
The process of building the image may take some time. If the terminal finally shows `Successfully tagged cdm:latest`, it indicates that the image has been built successfully.
- start a container
```
docker run -it cdm bash
```
At this point the container is running and you are inside its bash environment, ready to run CDM evaluations. If you want to map a directory when starting the container, add the parameter `-v xxx:xxx` so that the evaluation results remain on the host machine after you exit.
## Use CDM Locally
If the installation went well, you can now use CDM to evaluate your formula recognition results.
### 1. batch evaluation
- prepare input json
To evaluate UniMERNet results, use this conversion script to generate the json file:
```
python convert2cdm_format.py -i {UniMERNet predictions} -o {save path}
```
Otherwise, prepare a json file following this format:
```
[
    {
        "img_id": "case_1",  # optional key
        "gt": "y = 2z + 3x",
        "pred": "y = 2x + 3z"
    },
    {
        "img_id": "case_2",
        "gt": "y = x^2 + 1",
        "pred": "y = x^2 + 1"
    },
    ...
]
```
`Note that in json files, special characters such as "\" must be escaped; for example, "\begin" should be written as "\\begin".`
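As an illustration of the note above, writing the input file with Python's `json.dump` applies that escaping automatically (`cdm_input.json` and the sample formula are made up for this sketch):

```python
import json

# json.dump doubles the backslashes on disk: the LaTeX string
# \begin{matrix} ... \end{matrix} is stored as \\begin{matrix} ... \\end{matrix}.
samples = [
    {"img_id": "case_1",
     "gt": "\\begin{matrix} a & b \\end{matrix}",
     "pred": "\\begin{matrix} a & b \\end{matrix}"},
]
with open("cdm_input.json", "w") as f:
    json.dump(samples, f, indent=2)

# Round trip: loading restores the single-backslash LaTeX string.
with open("cdm_input.json") as f:
    restored = json.load(f)
assert restored[0]["gt"].startswith("\\begin")
```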
- evaluate:
```
python evaluation.py -i {path_to_your_input_json}
```
### 2. launch a gradio demo
```
python app.py
```
import sys
import os
import re
import json
import time
import shutil
import numpy as np
import gradio as gr
from datetime import datetime
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
from PIL import Image, ImageDraw
from skimage.measure import ransac
import matplotlib.pyplot as plt
from modules.latex2bbox_color import latex2bbox_color
from modules.tokenize_latex.tokenize_latex import tokenize_latex
from modules.visual_matcher import HungarianMatcher, SimpleAffineTransform
DATA_ROOT = "output"
def gen_color_list(num=10, gap=15):
    num += 1
    single_num = 255 // gap + 1
    max_num = single_num ** 3
    num = min(num, max_num)
    color_list = []
    for idx in range(num):
        R = idx // single_num**2
        GB = idx % single_num**2
        G = GB // single_num
        B = GB % single_num
        color_list.append((R*gap, G*gap, B*gap))
    return color_list[1:]
def process_latex(groundtruths, predictions, user_id="test"):
    data_root = DATA_ROOT
    temp_dir = os.path.join(data_root, "temp_dir")
    data_root = os.path.join(data_root, user_id)
    output_dir_info = {}
    for subset, latex_list in zip(['gt', 'pred'], [groundtruths, predictions]):
        sub_temp_dir = os.path.join(temp_dir, f"{user_id}_{subset}")
        os.makedirs(sub_temp_dir, exist_ok=True)
        output_path = os.path.join(data_root, subset)
        output_dir_info[output_path] = []
        os.makedirs(os.path.join(output_path, 'bbox'), exist_ok=True)
        os.makedirs(os.path.join(output_path, 'vis'), exist_ok=True)
        total_color_list = gen_color_list(num=5800)
        # Render each formula and extract per-character color-coded boxes.
        for idx, latex in enumerate(latex_list):
            basename = f"sample_{idx}"
            input_arg = latex, basename, output_path, sub_temp_dir, total_color_list
            latex2bbox_color(input_arg)
    for subset in ['gt', 'pred']:
        shutil.rmtree(os.path.join(temp_dir, f"{user_id}_{subset}"))
def update_inliers(ori_inliers, sub_inliers):
    inliers = np.copy(ori_inliers)
    sub_idx = -1
    for idx in range(len(ori_inliers)):
        if ori_inliers[idx] == False:
            sub_idx += 1
            if sub_inliers[sub_idx] == True:
                inliers[idx] = True
    return inliers
def reshape_inliers(ori_inliers, sub_inliers):
    inliers = np.copy(ori_inliers)
    sub_idx = -1
    for idx in range(len(ori_inliers)):
        if ori_inliers[idx] == False:
            sub_idx += 1
            if sub_inliers[sub_idx] == True:
                inliers[idx] = True
            else:
                inliers[idx] = False
    return inliers
def evaluation(user_id="test"):
    data_root = DATA_ROOT
    data_root = os.path.join(data_root, user_id)
    gt_box_dir = os.path.join(data_root, "gt")
    pred_box_dir = os.path.join(data_root, "pred")
    match_vis_dir = os.path.join(data_root, "vis_match")
    os.makedirs(match_vis_dir, exist_ok=True)
    # RANSAC parameters for the affine-consistency filtering step.
    max_iter = 3
    min_samples = 3
    residual_threshold = 25
    max_trials = 50
    metrics_per_img = {}
    gt_basename_list = [item.split(".")[0] for item in os.listdir(os.path.join(gt_box_dir, 'bbox'))]
    for basename in gt_basename_list:
        gt_valid, pred_valid = True, True
        # Load gt character boxes; skip the sample if none could be rendered.
        if not os.path.exists(os.path.join(gt_box_dir, 'bbox', basename+".jsonl")):
            gt_valid = False
        else:
            with open(os.path.join(gt_box_dir, 'bbox', basename+".jsonl"), 'r') as f:
                box_gt = []
                for line in f:
                    info = json.loads(line)
                    if info['bbox']:
                        box_gt.append(info)
            if not box_gt:
                gt_valid = False
        if not gt_valid:
            continue
        # Load pred character boxes; an empty prediction scores zero.
        if not os.path.exists(os.path.join(pred_box_dir, 'bbox', basename+".jsonl")):
            pred_valid = False
        else:
            with open(os.path.join(pred_box_dir, 'bbox', basename+".jsonl"), 'r') as f:
                box_pred = []
                for line in f:
                    info = json.loads(line)
                    if info['bbox']:
                        box_pred.append(info)
            if not box_pred:
                pred_valid = False
        if not pred_valid:
            metrics_per_img[basename] = {
                "recall": 0,
                "precision": 0,
                "F1_score": 0,
            }
            continue
        gt_img_path = os.path.join(gt_box_dir, 'vis', basename+"_base.png")
        pred_img_path = os.path.join(pred_box_dir, 'vis', basename+"_base.png")
        img_gt = Image.open(gt_img_path)
        img_pred = Image.open(pred_img_path)
        # Hungarian matching between gt and pred character boxes.
        matcher = HungarianMatcher()
        matched_idxes = matcher(box_gt, box_pred, img_gt.size, img_pred.size)
        src = []
        dst = []
        for (idx1, idx2) in matched_idxes:
            x1min, y1min, x1max, y1max = box_gt[idx1]['bbox']
            x2min, y2min, x2max, y2max = box_pred[idx2]['bbox']
            x1_c, y1_c = float((x1min+x1max)/2), float((y1min+y1max)/2)
            x2_c, y2_c = float((x2min+x2max)/2), float((y2min+y2max)/2)
            src.append([y1_c, x1_c])
            dst.append([y2_c, x2_c])
        src = np.array(src)
        dst = np.array(dst)
        # Keep only matches consistent with an affine transform, estimated
        # iteratively with RANSAC on the matched box centers.
        if src.shape[0] <= min_samples:
            inliers = np.array([True for _ in matched_idxes])
        else:
            inliers = np.array([False for _ in matched_idxes])
            for i in range(max_iter):
                if src[inliers==False].shape[0] <= min_samples:
                    break
                model, inliers_1 = ransac((src[inliers==False], dst[inliers==False]), SimpleAffineTransform, min_samples=min_samples, residual_threshold=residual_threshold, max_trials=max_trials)
                if inliers_1 is not None and inliers_1.any():
                    inliers = update_inliers(inliers, inliers_1)
                else:
                    break
                if len(inliers[inliers==True]) >= len(matched_idxes):
                    break
        # Discard matches whose token cost marks a character mismatch.
        for idx, (a, b) in enumerate(matched_idxes):
            if inliers[idx] == True and matcher.cost['token'][a, b] == 1:
                inliers[idx] = False
        final_match_num = len(inliers[inliers==True])
        recall = round(final_match_num/(len(box_gt)), 3)
        precision = round(final_match_num/(len(box_pred)), 3)
        F1_score = round(2*final_match_num/(len(box_gt)+len(box_pred)), 3)
        metrics_per_img[basename] = {
            "recall": recall,
            "precision": precision,
            "F1_score": F1_score,
        }
        # Visualization: stack gt (top) and pred (bottom) with a separator,
        # drawing matched boxes in green and unmatched ones in red.
        if True:
            gap = 5
            W1, H1 = img_gt.size
            W2, H2 = img_pred.size
            H = H1 + H2 + gap
            W = max(W1, W2)
            vis_img = Image.new('RGB', (W, H), (255, 255, 255))
            vis_img.paste(img_gt, (0, 0))
            vis_img.paste(Image.new('RGB', (W, gap), (0, 150, 200)), (0, H1))
            vis_img.paste(img_pred, (0, H1+gap))
            match_img = vis_img.copy()
            match_draw = ImageDraw.Draw(match_img)
            gt_matched_idx = {a: flag for (a, b), flag in zip(matched_idxes, inliers)}
            pred_matched_idx = {b: flag for (a, b), flag in zip(matched_idxes, inliers)}
            for idx, box in enumerate(box_gt):
                if idx in gt_matched_idx and gt_matched_idx[idx] == True:
                    color = "green"
                else:
                    color = "red"
                x_min, y_min, x_max, y_max = box['bbox']
                match_draw.rectangle([x_min-1, y_min-1, x_max+1, y_max+1], fill=None, outline=color, width=2)
            for idx, box in enumerate(box_pred):
                if idx in pred_matched_idx and pred_matched_idx[idx] == True:
                    color = "green"
                else:
                    color = "red"
                x_min, y_min, x_max, y_max = box['bbox']
                match_draw.rectangle([x_min-1, y_min-1+H1+gap, x_max+1, y_max+1+H1+gap], fill=None, outline=color, width=2)
            vis_img.save(os.path.join(match_vis_dir, basename+"_base.png"))
            if W < 500:
                padding = (500 - W)//2 + 1
                reshape_match_img = Image.new('RGB', (500, H), (255, 255, 255))
                reshape_match_img.paste(match_img, (padding, 0))
                reshape_match_img.paste(Image.new('RGB', (500, gap), (0, 150, 200)), (0, H1))
                reshape_match_img.save(os.path.join(match_vis_dir, basename+".png"))
            else:
                match_img.save(os.path.join(match_vis_dir, basename+".png"))
    acc_list = [val['F1_score'] for _, val in metrics_per_img.items()]
    metrics_res = {
        "mean_score": round(np.mean(acc_list), 3),
        "details": metrics_per_img
    }
    metric_res_path = os.path.join(data_root, "metrics_res.json")
    with open(metric_res_path, "w") as f:
        f.write(json.dumps(metrics_res, indent=2))
    return metrics_res, metric_res_path, match_vis_dir
def calculate_metric_single(groundtruth, prediction):
    user_id = datetime.now().strftime('%Y%m%d-%H%M%S')
    process_latex([groundtruth], [prediction], user_id)
    metrics_res, metric_res_path, match_vis_dir = evaluation(user_id)
    basename = "sample_0"
    image_path = os.path.join(match_vis_dir, basename+".png")
    sample = metrics_res["details"][basename]
    score = sample['F1_score']
    recall = sample['recall']
    precision = sample['precision']
    return score, recall, precision, gr.Image(image_path)
def calculate_metric_batch(json_input):
    user_id = datetime.now().strftime('%Y%m%d-%H%M%S')
    with open(json_input.name, "r") as f:
        input_data = json.load(f)
    groundtruths = []
    predictions = []
    for item in input_data:
        groundtruths.append(item['gt'])
        predictions.append(item['pred'])
    process_latex(groundtruths, predictions, user_id)
    metrics_res, metric_res_path, match_vis_dir = evaluation(user_id)
    return metric_res_path
def gradio_reset_single():
    return gr.update(value=None, placeholder='type gt latex code here'), \
        gr.update(value=None, placeholder='type pred latex code here'), \
        gr.update(value=None), gr.update(value=None), gr.update(value=None), gr.update(value=None)

def gradio_reset_batch():
    return gr.update(value=None), gr.update(value=None)
def select_example1():
    gt = "y = 2x + 3z"
    pred = "y = 2z + 3x"
    return gr.update(value=gt, placeholder='type gt latex code here'), gr.update(value=pred, placeholder='type pred latex code here')

def select_example2():
    gt = "r = \\frac { \\alpha } { \\beta } \\vert \\sin \\beta \\left( \\sigma _ { 1 } \\pm \\sigma _ { 2 } \\right) \\vert"
    pred = "r={\\frac{\\alpha}{\\beta}}|\\sin\\beta\\left(\\sigma_{2}+\\sigma_{1}\\right)|"
    return gr.update(value=gt, placeholder='type gt latex code here'), gr.update(value=pred, placeholder='type pred latex code here')

def select_example3():
    gt = "\\begin{array} { r l r } & { } & { \\mathbf { J } _ { L } = \\left( \\begin{array} { c c } { 0 } & { 0 } \\\\ { v _ { n } } & { 0 } \\end{array} \\right) , ~ \\mathbf { J } _ { R } = \\left( \\begin{array} { c c } { u _ { n - 1 } } & { 0 } \\\\ { 0 } & { 0 } \\end{array} \\right) , ~ } \\\\ & { } & {\\mathbf { K } = \\left( \\begin{array} { c c } { V _ { n - 1 } } & { u _ { n } } \\\\ { v _ { n - 1 } } & { V _ { n } } \\end{array} \\right) , } \\end{array}"
    pred = "\\mathbf{J}_{U}={\\left(\\begin{array}{l l}{0}&{0}\\\\ {v_{n}}&{0}\\end{array}\\right)}\\,,\\ \\mathbf{J}_{R}={\\left(\\begin{array}{l l}{u_{n-1}}&{0}\\\\ {0}&{0}\\end{array}\\right)}\\,,\\mathbf{K}={\\left(\\begin{array}{l l}{V_{n-1}}&{u_{n}}\\\\ {v_{n-1}}&{V_{n}}\\end{array}\\right)}\\,,"
    return gr.update(value=gt, placeholder='type gt latex code here'), gr.update(value=pred, placeholder='type pred latex code here')
if __name__ == "__main__":
    title = """<h1 align="center">Character Detection Matching (CDM)</h1>"""
    with gr.Blocks() as demo:
        gr.Markdown(title)
        gr.Button(value="Quick Try: type latex code of gt and pred, get metrics and visualization.", interactive=False, variant="primary")
        with gr.Row():
            with gr.Column():
                gt_input = gr.Textbox(label='gt', placeholder='type gt latex code here', interactive=True)
                pred_input = gr.Textbox(label='pred', placeholder='type pred latex code here', interactive=True)
                with gr.Row():
                    clear_single = gr.Button("Clear")
                    submit_single = gr.Button(value="Submit", interactive=True, variant="primary")
                with gr.Accordion("Examples:"):
                    with gr.Row():
                        example1 = gr.Button("Example A (short)")
                        example2 = gr.Button("Example B (medium)")
                        example3 = gr.Button("Example C (long)")
            with gr.Column():
                with gr.Row():
                    score_output = gr.Number(label="F1 Score", interactive=False)
                    recall_output = gr.Number(label="Recall", interactive=False)
                    precision_output = gr.Number(label="Precision", interactive=False)
                gr.Button(value="Visualization (a green bbox means correctly matched, a red bbox means missed or wrong.)", interactive=False)
                vis_output = gr.Image(label=" ", interactive=False)
        example1.click(select_example1, inputs=None, outputs=[gt_input, pred_input])
        example2.click(select_example2, inputs=None, outputs=[gt_input, pred_input])
        example3.click(select_example3, inputs=None, outputs=[gt_input, pred_input])
        clear_single.click(gradio_reset_single, inputs=None, outputs=[gt_input, pred_input, score_output, recall_output, precision_output, vis_output])
        submit_single.click(calculate_metric_single, inputs=[gt_input, pred_input], outputs=[score_output, recall_output, precision_output, vis_output])
        gr.Button(value="Batch Run: upload a json file for batch processing; this may take some time depending on the amount and length of your LaTeX.", interactive=False, variant="primary")
        with gr.Row():
            with gr.Column():
                json_input = gr.File(label="Input Json", file_types=[".json"])
                json_example = gr.File(label="Input Example", value="assets/example/input_example.json")
                with gr.Row():
                    clear_batch = gr.Button("Clear")
                    submit_batch = gr.Button(value="Submit", interactive=True, variant="primary")
            metric_output = gr.File(label="Output Metrics")
        clear_batch.click(gradio_reset_batch, inputs=None, outputs=[json_input, metric_output])
        submit_batch.click(calculate_metric_batch, inputs=[json_input], outputs=[metric_output])
    demo.launch(share=True, server_name="0.0.0.0", server_port=10005, debug=True)