This guide walks through importing a PyTorch, Safetensors or GGUF model from a HuggingFace repo to Ollama. It optionally covers pushing the model to [ollama.ai](https://ollama.ai/library).
## Supported models
...
Ollama supports a set of model architectures, with support for more coming soon:
- GPT-NeoX
- BigCode
To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
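As a quick illustration, the `architectures` entry can be read programmatically once you have the `config.json` on disk. This is a minimal sketch using a hypothetical inline config excerpt, not a real repo's file:

```python
import json

# Hypothetical excerpt of a HuggingFace config.json for illustration;
# the "architectures" entry names the model class Ollama checks for.
config = json.loads("""
{
  "architectures": ["LlamaForCausalLM"],
  "model_type": "llama"
}
""")

# Print the architecture name, e.g. LlamaForCausalLM
print(config["architectures"][0])
```

In a real repo you would open the downloaded `config.json` with `json.load` instead of parsing an inline string.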