Unverified commit 233806dc authored by pariskang, committed by GitHub

Update README-EN.md

<p align="center"> <img src="https://raw.githubusercontent.com/pariskang/CMLM-ZhongJing/main/logo.png" alt="logo" title="logo" width="50%"> </p>
<p align="center"><b>Fig 1. The CMLM-ZhongJing logo, generated with Bing's image-generation tool guided by human creative prompts.</b></p>
# ZhongJing-2-1_8b Train Details & Inference Capability Statement
Our model, a meticulously fine-tuned version of Qwen1.5-1.8B-Chat, has been optimized for high-speed inference on a Tesla T4 graphics processing unit (GPU). This enhancement was achieved through extensive training on our exclusive medical datasets, ensuring the model's proficiency in understanding and generating responses relevant to the medical field, particularly in the domain of Traditional Chinese Medicine (TCM). The model weights are available for access at [https://huggingface.co/CMLL/ZhongJing-2-1_8b](https://huggingface.co/CMLL/ZhongJing-2-1_8b), facilitating its integration and application in relevant projects and research.
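Since the weights are published on the Hugging Face Hub, they can be loaded with the standard `transformers` chat workflow. The sketch below is illustrative, not the authors' exact inference script: the system prompt and generation parameters are assumptions, and only the model ID comes from the README.

```python
def build_chat_prompt(question: str) -> list[dict]:
    """Wrap a user question in the Qwen1.5-style chat message format."""
    return [
        {"role": "system", "content": "You are a helpful Traditional Chinese Medicine assistant."},
        {"role": "user", "content": question},
    ]

def generate_reply(question: str, model_id: str = "CMLL/ZhongJing-2-1_8b") -> str:
    # Imported here so the prompt helper above stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Render the messages into the model's own chat template, then generate.
    text = tokenizer.apply_chat_template(
        build_chat_prompt(question), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

On a Tesla T4, a 1.8B-parameter model of this kind typically fits in GPU memory in half precision; `device_map="auto"` lets `transformers` place it accordingly.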
## 1. Instruction Data Construction
Many projects, such as Alpaca and Belle, rely on the self-instruct approach, which effectively harnesses the knowledge of large language models to generate diverse and creative instructions. However, this approach can introduce noise into the instruction data, degrading model accuracy in fields with a low tolerance for error, such as medical and legal scenarios. How to invoke the OpenAI API efficiently without sacrificing the professional rigor of the instruction data has therefore become an important research direction for instruction data construction and annotation. Here, we briefly describe our preliminary experimental exploration.
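One way to reduce such noise is to ground every API call in a vetted source passage rather than letting the model free-associate. The sketch below illustrates that idea only; the prompt wording, the model name, and the injectable `client` are assumptions for illustration, not the authors' actual pipeline.

```python
import json

# Grounding template: the model may only restate what the reference passage says.
PROMPT_TEMPLATE = (
    "You are a Traditional Chinese Medicine expert. Based strictly on the "
    "reference passage below, write one instruction-response pair as JSON "
    "with keys 'instruction' and 'output'. Do not add facts that are not "
    "in the passage.\n\nReference passage:\n{passage}"
)

def build_annotation_prompt(passage: str) -> str:
    """Render one grounded annotation request for a single source passage."""
    return PROMPT_TEMPLATE.format(passage=passage)

def annotate(passages, client, model="gpt-3.5-turbo"):
    """Generate one instruction pair per passage via an OpenAI-style client.

    `client` is any object exposing chat.completions.create (e.g. openai.OpenAI());
    it is injected as a parameter so the pipeline can be exercised offline.
    """
    pairs = []
    for passage in passages:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": build_annotation_prompt(passage)}],
        )
        pairs.append(json.loads(resp.choices[0].message.content))
    return pairs
```

Keeping the passage in the prompt and forbidding additions trades diversity for fidelity, which is the relevant trade-off in low-error-tolerance domains.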