# Pretraining LLaMA: best practices for building LLaMA-like base models

<p id="ColossalChat-Speed" align="center">
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/examples/images/LLaMA_pretraining.png" width=600/>
</p>

- 65-billion-parameter large model pretraining accelerated by 38%
[[code]](https://github.com/hpcaitech/ColossalAI/tree/example/llama/examples/language/llama)
[[blog]](https://www.hpc-ai.tech/blog/large-model-pretraining)

> Because the main branch is under active development, this example is temporarily maintained on an [independent branch](https://github.com/hpcaitech/ColossalAI/tree/example/llama/examples/language/llama) to keep the code stable.