Unverified commit f1ef3f99 authored by Matt Williams, committed by GitHub

remove mention of gpt-neox in import (#1381)


Signed-off-by: Matt Williams <m@technovangelist.com>
parent bf704423
@@ -43,7 +43,6 @@ Ollama supports a set of model architectures, with support for more coming soon:
 - Llama & Mistral
 - Falcon & RW
-- GPT-NeoX
 - BigCode

 To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).

@@ -184,9 +183,6 @@ python convert.py <path to model directory>
 # FalconForCausalLM
 python convert-falcon-hf-to-gguf.py <path to model directory>
-# GPTNeoXForCausalLM
-python convert-gptneox-hf-to-gguf.py <path to model directory>
 # GPTBigCodeForCausalLM
 python convert-starcoder-hf-to-gguf.py <path to model directory>
 ```
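The `config.json` check described in the retained docs text can be scripted. A minimal sketch, assuming a local checkout of the model repo (`model_dir` is a hypothetical path, not something from the docs):

```python
import json
import os


def get_architectures(model_dir: str) -> list[str]:
    """Read a HuggingFace model's config.json and return its
    `architectures` entry (e.g. ["LlamaForCausalLM"])."""
    config_path = os.path.join(model_dir, "config.json")
    with open(config_path) as f:
        config = json.load(f)
    # Models without an explicit entry yield an empty list.
    return config.get("architectures", [])
```

The returned strings (e.g. `LlamaForCausalLM`, `FalconForCausalLM`, `GPTBigCodeForCausalLM`) map directly onto the convert scripts listed in the diff above.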