Unverified commit b5128936, authored by BlueRum, committed by GitHub

Polish readme link (#3306)

parent a0b37492
@@ -280,7 +280,7 @@ For more details, see [`inference/`](https://github.com/hpcaitech/ColossalAI/tre
 </details>
-You can find more examples in this [repo](https://github.com/XueFuzhao/InstructionWild/blob/main/compare.md).
+You can find more examples in this [repo](https://github.com/XueFuzhao/InstructionWild/blob/main/comparison.md).
 ### Limitation for LLaMA-finetuned models
 - Both Alpaca and ColossalChat are based on LLaMA. It is hard to compensate for the missing knowledge in the pre-training stage.
...