Our model, a meticulously fine-tuned version of Qwen1.5-1.8B-Chat, has been optimized for high-speed inference on a Tesla T4 graphics processing unit (GPU). This enhancement was achieved through extensive training on our exclusive medical datasets, ensuring the model's proficiency in understanding and generating responses relevant to the medical field, particularly in the domain of Traditional Chinese Medicine (TCM). The model weights are available for access at [https://huggingface.co/CMLL/ZhongJing-2-1_8b](https://huggingface.co/CMLL/ZhongJing-2-1_8b), facilitating its integration and application in relevant projects and research.
## 1. Instruction Data Construction
While many works, such as Alpaca and Belle, build on the self-instruct approach, which effectively harnesses the knowledge of large language models to generate diverse and creative instructions, that approach can introduce noise into the instruction data. Such noise degrades model accuracy in fields where professional knowledge leaves little tolerance for error, such as medical and legal scenarios. How to invoke the OpenAI API efficiently without sacrificing the professional quality of the instruction data has therefore become an important question for instruction-data construction and annotation. Below, we briefly describe our preliminary experimental exploration.
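To make the trade-off concrete, below is a minimal sketch of one way to keep API-generated instruction data professional: each prompt is grounded in a passage from a vetted medical source, and every generated record passes a structural validation step before entering the training set. The template wording, seed text, field names, and filtering heuristics here are illustrative assumptions, not the exact pipeline used for this model.

```python
import json
from typing import Optional

# Prompt grounded in a curated reference passage, so the API elaborates on
# vetted content rather than free-associating (which is where noise creeps in).
PROMPT_TEMPLATE = (
    "You are a Traditional Chinese Medicine expert. Based on the reference "
    "text below, write one precise question and its answer.\n"
    "Reference: {reference}\n"
    'Respond as JSON with keys "instruction" and "output".'
)


def build_prompt(reference: str) -> str:
    """Fill the template with a passage from a vetted medical source."""
    return PROMPT_TEMPLATE.format(reference=reference)


def validate_record(raw: str) -> Optional[dict]:
    """Parse a model response and keep it only if it is well-formed.

    Rejects records that are not valid JSON, lack the required keys,
    or are too short to be a substantive medical answer.
    """
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not all(k in record for k in ("instruction", "output")):
        return None
    if len(record["output"]) < 20:  # crude length heuristic against noise
        return None
    return record


# In practice `raw` would be the content of an OpenAI chat-completion call on
# build_prompt(...); a fixed string stands in here to show the validation step.
raw = (
    '{"instruction": "What pattern does Gui Zhi Tang classically treat?", '
    '"output": "Gui Zhi Tang is classically indicated for exterior deficiency '
    'patterns with spontaneous sweating and aversion to wind."}'
)
record = validate_record(raw)
```

Records that fail validation are simply dropped rather than repaired, which keeps the pipeline fast while ensuring only structurally sound pairs reach fine-tuning.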