"...text-generation-inference.git" did not exist on "2e68ac01c0d331ff587f023c74308080c5860b4d"
Commit 7e6fd7b4 authored by Jeffrey Morgan

small `README.md` tweaks

parent 9497afb8
@@ -18,23 +18,23 @@ ollama.generate("./llama-7b-ggml.bin", "hi")
 ## Reference
-### `ollama.load`
-Load a model for generation
+### `ollama.generate(model, message)`
+Generate a completion
 ```python
-ollama.load("model name")
+ollama.generate("./llama-7b-ggml.bin", "hi")
 ```
-### `ollama.generate("message")`
-Generate a completion
+### `ollama.load(model)`
+Load a model for generation
 ```python
-ollama.generate(model, "hi")
+ollama.load("model name")
 ```
-### `ollama.models`
+### `ollama.models()`
 List available local models
@@ -42,13 +42,13 @@ List available local models
 models = ollama.models()
 ```
-### `ollama.serve`
+### `ollama.serve()`
 Serve the ollama http server
-## Cooing Soon
+## Cooming Soon
-### `ollama.pull`
+### `ollama.pull("model")`
 Download a model
@@ -56,7 +56,7 @@ Download a model
 ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
 ```
-### `ollama.import`
+### `ollama.import("file")`
 Import a model from a file
@@ -64,7 +64,7 @@ Import a model from a file
 ollama.import("./path/to/model")
 ```
-### `ollama.search`
+### `ollama.search("query")`
 Search for compatible models that Ollama can run
@@ -74,7 +74,7 @@ ollama.search("llama-7b")
 ## Future CLI
-In the future, there will be an easy CLI for testing out models
+In the future, there will be an easy CLI for running models
 ```
 ollama run huggingface.co/thebloke/llama-7b-ggml
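
For readers skimming the updated reference, here is a minimal usage sketch that strings the documented calls together. It is an illustration only: it assumes the `ollama` Python package exposes `load`, `generate`, and `models` exactly as named in the README excerpts above, and the model path is taken from the README's own example.

```python
# Sketch of the documented Python API; assumes the `ollama` package
# provides these functions exactly as named in the README above.
import ollama

ollama.load("./llama-7b-ggml.bin")            # load a model for generation
ollama.generate("./llama-7b-ggml.bin", "hi")  # generate a completion
print(ollama.models())                        # list available local models
```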