@@ -43,7 +43,6 @@ Ollama supports a set of model architectures, with support for more coming soon:
- Llama & Mistral
- Falcon & RW
- GPT-NeoX
- BigCode
To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
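The `config.json` check described above can be sketched as follows (the JSON payload here is an illustrative example of what a HuggingFace model repo contains, not a file fetched from any specific repo):

```python
import json

# Illustrative excerpt of a config.json from a HuggingFace model repo;
# real files contain many more fields.
sample = '{"architectures": ["LlamaForCausalLM"], "model_type": "llama"}'

config = json.loads(sample)
# The "architectures" entry names the model class, e.g. LlamaForCausalLM,
# which tells you whether Ollama supports the model and which convert
# script to use.
print(config.get("architectures", []))
```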
...
@@ -184,9 +183,6 @@ python convert.py <path to model directory>
# FalconForCausalLM
python convert-falcon-hf-to-gguf.py <path to model directory>
# GPTNeoXForCausalLM
python convert-gptneox-hf-to-gguf.py <path to model directory>
# GPTBigCodeForCausalLM
python convert-starcoder-hf-to-gguf.py <path to model directory>
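The architecture-to-script pairing shown in the comments above can be collected into a small lookup table. This is only a sketch of the mapping as listed in this diff; the script names belong to llama.cpp and may change between versions:

```python
# Map a HuggingFace "architectures" entry to the convert script named
# in the commands above (mapping taken directly from the comments).
CONVERT_SCRIPTS = {
    "LlamaForCausalLM": "convert.py",
    "FalconForCausalLM": "convert-falcon-hf-to-gguf.py",
    "GPTNeoXForCausalLM": "convert-gptneox-hf-to-gguf.py",
    "GPTBigCodeForCausalLM": "convert-starcoder-hf-to-gguf.py",
}

# Look up the right script for a model whose config.json lists FalconForCausalLM.
print(CONVERT_SCRIPTS["FalconForCausalLM"])  # convert-falcon-hf-to-gguf.py
```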