Commit 27a7ce60 authored by Jeffrey Morgan

correct spelling for Core ML

parent e1388938
@@ -7,7 +7,7 @@ _Note: this project is a work in progress. The features below are still in devel
 **Features**
 - Run models locally on macOS (Windows, Linux and other platforms coming soon)
-- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp, core ml and other loaders coming soon)
+- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp, Core ML and other loaders coming soon)
 - Import models from local files
 - Find and download models on Hugging Face and other sources (coming soon)
 - Support for running and switching between multiple models at a time (coming soon)
@@ -42,7 +42,7 @@ Hello, how may I help you?
 ```python
 import ollama
-ollama.generate("./llama-7b-ggml.bin", "hi")
+ollama.generate("orca-mini-3b", "hi")
 ```
 ### `ollama.generate(model, message)`
...
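The second hunk updates a call to `ollama.generate(model, message)`, which takes a model name and a prompt string. As a minimal sketch of that two-argument call shape, assuming the real `ollama` package is unavailable, a hypothetical stub stands in for the client here; it is illustrative only, not the library's actual behavior:

```python
# Hypothetical stand-in for the `ollama` module, mirroring the
# generate(model, message) signature shown in the README diff.
class _OllamaStub:
    def generate(self, model, message):
        # A real client would load the named model and stream a response;
        # the stub just echoes the call so the shape is visible.
        return f"[{model}] response to: {message}"

ollama = _OllamaStub()
print(ollama.generate("orca-mini-3b", "hi"))
```

Note how the diff swaps a local file path (`./llama-7b-ggml.bin`) for a model name (`orca-mini-3b`) as the first argument, matching the README's "find and download models" direction.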