Unverified Commit 91442835 authored by Jeffrey Morgan, committed by GitHub

Update import.md

parent 9afea9e3
# Import a model
This guide walks through importing a PyTorch, Safetensors or GGUF model.
## Supported models
Ollama supports a set of model architectures, with support for more coming soon:
- Llama & Mistral
- Falcon & RW
- GPT-NeoX
- BigCode
To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
This guide walks through importing a GGUF, PyTorch or Safetensors model.
## Importing (GGUF)
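The body of this section is collapsed in the diff. As a rough sketch of the workflow it covers, assuming a GGUF weights file already downloaded locally (the file name below is hypothetical), a Modelfile points at the weights:

```
# Modelfile: reference a local GGUF weights file (hypothetical path)
FROM ./vicuna-33b.Q4_0.gguf
```

The model is then created and run with the Ollama CLI; the prompt here is the one shown in the diff context below:

```shell
ollama create example -f Modelfile
ollama run example "What is your favourite condiment?"
```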
@@ -48,6 +37,17 @@ ollama run example "What is your favourite condiment?"
## Importing (PyTorch & Safetensors)
### Supported models
Ollama supports a set of model architectures, with support for more coming soon:
- Llama & Mistral
- Falcon & RW
- GPT-NeoX
- BigCode
To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
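For example, the relevant entry in `config.json` for a Llama-based model looks like this (a short excerpt; surrounding fields omitted):

```json
{
  "architectures": [
    "LlamaForCausalLM"
  ]
}
```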
### Step 1: Clone the HuggingFace repository (optional)
If the model is currently hosted in a HuggingFace repository, first clone that repository to download the raw model.
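A minimal sketch of that step, using a placeholder repository URL (substitute the repo for the model you are importing); Git LFS is needed to pull the large weight files:

```shell
git lfs install
git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 model
cd model
```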
......