res=f"주어진 맥락을 천천히 읽고, 질문에 대한 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n맥락: {ctx}\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
else:
res=f"주어진 질문을 천천히 읽고, 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
res=f"주어진 맥락을 천천히 읽고, 질문에 대한 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n맥락: {ctx}\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
else:
res=f"주어진 질문을 천천히 읽고, 적절한 정답을 A, B, C, D 중에 골라 알파벳 하나로 답하시오.\n\n질문: {q}\n보기:\nA:{opt[0]}, B: {opt[1]}, C: {opt[2]}, D: {opt[3]}\n정답:"
Title: `CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation`
Abstract: https://arxiv.org/abs/2102.04664
CodeXGLUE provides benchmark datasets for multiple code understanding and generation tasks, including generating docstrings in natural language from code snippets (code2text).
### Citation
```
@inproceedings{DBLP:conf/nips/LuGRHSBCDJTLZSZ21,
author = {Shuai Lu and
Daya Guo and
Shuo Ren and
Junjie Huang and
Alexey Svyatkovskiy and
Ambrosio Blanco and
Colin B. Clement and
Dawn Drain and
Daxin Jiang and
Duyu Tang and
Ge Li and
Lidong Zhou and
Linjun Shou and
Long Zhou and
Michele Tufano and
Ming Gong and
Ming Zhou and
Nan Duan and
Neel Sundaresan and
Shao Kun Deng and
Shengyu Fu and
Shujie Liu},
editor = {Joaquin Vanschoren and
Sai{-}Kit Yeung},
title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding
and Generation},
booktitle = {Proceedings of the Neural Information Processing Systems Track on
Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December
*`code2text_go`: Generate docstring in natural language from Go code snippets.
*`code2text_java`: Generate docstring in natural language from Java code snippets.
*`code2text_javascript`: Generate docstring in natural language from JavaScript code snippets.
*`code2text_php`: Generate docstring in natural language from PHP code snippets.
*`code2text_python`: Generate docstring in natural language from Python code snippets.
*`code2text_ruby`: Generate docstring in natural language from Ruby code snippets.
### Checklist
For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
Title: Evaluating and Mitigating Discrimination in Language Model Decisions
Abstract: https://arxiv.org/abs/2312.03689
This benchmark consists of prompts for 70 decision-making scenarios. Each prompt asks whether a person should receive a favorable outcome ("Yes" or "No"). Persons vary by age, gender, and race, yielding 135 prompts per scenario. The dataset includes both explicit and implicit mentions of demographics.
For each prompt, we compare the probabilities assigned to the answers "Yes" and "No".
Let $p(\text{yes})$ and $p(\text{no})$ denote the model probabilities. We compute
title={Evaluating and Mitigating Discrimination in Language Model Decisions},
author={Alex Tamkin and Amanda Askell and Liane Lovitt and Esin Durmus and Nicholas Joseph and Shauna Kravec and Karina Nguyen and Jared Kaplan and Deep Ganguli},