In preliminary comparisons with large language models such as Wenxin Yiyan and Spark, we found that our model generalizes well on a diversified therapeutic-decomposition instruction dataset constructed from 300 Traditional Chinese Medicine prescriptions. This preliminary result suggests that, much like humans, large language models are better able to learn metaphorical knowledge and reasoning logic when the same content is presented in diverse textual forms.
This finding is significant: it suggests that combining a multi-task therapeutic-decomposition strategy with a domain-specific, million-scale instruction dataset effectively strengthens the model's reasoning over prescription data and its diagnostic-thinking logic. It also points to the potential of large language models in fields where professional knowledge has a low tolerance for error, such as medical and legal scenarios.
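To make the idea of multi-task therapeutic decomposition concrete, the snippet below sketches how a single prescription entry might be expanded into several task-specific instruction records. This is a minimal, hypothetical illustration: the field names, task wording, and example content are placeholders, not the project's actual dataset schema.

```python
import json

# Hypothetical prescription entry; field names and values are placeholders,
# not taken from the project's actual dataset.
prescription = {
    "formula": "Example Decoction",
    "herbs": ["Herb A", "Herb B", "Herb C"],
    "syndrome": "Example syndrome pattern",
    "indications": "Example chief complaint and symptoms",
}

# One source prescription is expanded into several task-specific
# instruction/input/output records (therapeutic decomposition, syndrome
# differentiation, prescription explanation), so the same knowledge is
# seen in diverse textual forms during fine-tuning.
records = [
    {
        "instruction": "Decompose the following prescription into its therapeutic roles.",
        "input": f"Formula: {prescription['formula']}; Herbs: {', '.join(prescription['herbs'])}",
        "output": "Herb A addresses ...; Herb B assists ...; Herb C harmonizes ...",
    },
    {
        "instruction": "Given the patient's presentation, identify the syndrome pattern.",
        "input": prescription["indications"],
        "output": prescription["syndrome"],
    },
    {
        "instruction": "Explain why this formula fits the identified syndrome.",
        "input": f"Syndrome: {prescription['syndrome']}; Formula: {prescription['formula']}",
        "output": "The formula fits the syndrome because ...",
    },
]

# Write the records in the JSON-lines layout commonly used for instruction tuning.
with open("tcm_instructions.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```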
## To Do List
- [ ] Adopt a multi-task therapeutic-decomposition strategy and fine-tune the model on a domain-specific, million-scale instruction dataset built from multidisciplinary data covering internal medicine, gynecology, pediatrics, and orthopedics (a minimal fine-tuning sketch follows this list).
- [ ] Continue to iterate and update. Subsequent releases will include Li Shizhen, Wang Shuhe, Huangfu Mi, Sun Simiao, Ge Hong, and Qihuang versions of the large language model for Traditional Chinese Medicine.
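The fine-tuning step in the first to-do item could be realized along the lines of the sketch below. This is not the project's actual training pipeline: the base model path, prompt template, LoRA target modules, and hyperparameters are all assumptions, shown only to indicate how the instruction records from the earlier sketch could drive supervised fine-tuning with Hugging Face `transformers` and `peft`.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# "path/to/base-model" is a placeholder; the project's actual base model
# and hyperparameters are not specified here.
BASE_MODEL = "path/to/base-model"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Wrap the base model with LoRA adapters so domain fine-tuning only trains
# a small number of additional parameters.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Load the JSON-lines instruction records (see the earlier sketch) and
# flatten each record into a single prompt/response training string.
dataset = load_dataset("json", data_files="tcm_instructions.jsonl", split="train")

def to_features(example):
    text = (
        f"Instruction: {example['instruction']}\n"
        f"Input: {example['input']}\n"
        f"Response: {example['output']}"
    )
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tcm-lora",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tcm-lora")
```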