Commit 5d99629c authored by Jeffrey Morgan

small `README.md` tweaks

parent ad4ffdf7
# Ollama

- Run models easily
- Download, manage and import models

## Install
@@ -23,7 +23,7 @@ ollama.generate(model_name, "hi")
### `ollama.load`

Load a model for generation

```python
ollama.load("model name")
```
@@ -39,7 +39,7 @@ ollama.generate(model, "hi")
### `ollama.models`

List available local models

```
models = ollama.models()
```
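The README doesn't show what `ollama.models()` returns, so as a sketch only, here is a hypothetical helper for narrowing down the result, assuming (not confirmed above) that it returns a plain list of model-name strings:

```python
# Hypothetical helper, not part of the ollama API shown above.
# Assumes ollama.models() returns a list of name strings.
def find_models(models, query):
    """Return the model names containing `query`, case-insensitively."""
    return [name for name in models if query.lower() in name.lower()]

# Stand-in list used here instead of a live ollama.models() call:
available = ["llama-7b-ggml", "alpaca-7b", "vicuna-13b"]
print(find_models(available, "llama"))  # → ['llama-7b-ggml']
```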
@@ -53,7 +53,7 @@ Serve the ollama http server
### `ollama.pull`

Download a model

```python
ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
```
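The argument to `ollama.pull` reads like a registry path. Purely as an illustration (the real parsing rules aren't documented here, and the host/owner/name split is a guess from the example string), splitting such a reference apart could look like:

```python
# Illustrative only: parse a model reference such as the one passed to
# ollama.pull above. The three-part host/owner/name format is an assumption,
# not a documented contract.
def parse_model_ref(ref):
    host, owner, name = ref.split("/", 2)
    return {"host": host, "owner": owner, "name": name}

print(parse_model_ref("huggingface.co/thebloke/llama-7b-ggml"))
```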
@@ -61,7 +61,7 @@ ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
### `ollama.import`

Import a model from a file

```python
ollama.import("./path/to/model")
```
@@ -77,6 +77,9 @@ ollama.search("llama-7b")
## Future CLI

In the future, there will be an easy CLI for testing out models

```
ollama run huggingface.co/thebloke/llama-7b-ggml
> Downloading [================> ] 66.67% (2/3) 30.2MB/s
```
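A download line like the one above could be produced by a small formatting helper. This is a sketch of the display format only, not Ollama's actual implementation; the bar width and layout are guesses from the sample output:

```python
# Sketch of a progress line like the Future CLI sample above.
# Bar width and exact layout are assumptions, not Ollama's real code.
def progress_line(done, total, width=24):
    """Render e.g. '> Downloading [====>   ] 66.67% (2/3)'."""
    filled = int(width * done / total)
    if filled >= width:
        bar = "=" * width
    else:
        bar = "=" * filled + ">" + " " * (width - filled - 1)
    return f"> Downloading [{bar}] {done / total * 100:.2f}% ({done}/{total})"

print(progress_line(2, 3))
```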