Unverified Commit 5a85070c authored by Matt Williams, committed by GitHub

Update readmes, requirements, packagejsons, etc for all examples (#1452)



Most of the examples needed updates to their READMEs to show how to run them. Some of the requirements.txt files had extra content that wasn't needed, or were missing altogether. Apparently some folks like to run `npm start` to run TypeScript, so a start script was added to all the TypeScript examples, which hadn't been done before.

Basically just a lot of cleanup.
Signed-off-by: Matt Williams <m@technovangelist.com>
parent 291700c9
node_modules
bun.lockb
.vscode
# OSX
.DS_STORE
# Models
models/
......
# LangChain Web Summarization
This example summarizes the website, [https://ollama.ai/blog/run-llama2-uncensored-locally](https://ollama.ai/blog/run-llama2-uncensored-locally)
## Running the Example
1. Ensure you have the `llama2` model installed:
```bash
ollama pull llama2
```
2. Install the Python Requirements.
```bash
pip install -r requirements.txt
```
3. Run the example:
```bash
python main.py
```
langchain==0.0.259
bs4==0.0.1
\ No newline at end of file
@@ -2,20 +2,23 @@
This example is a basic "hello world" of using LangChain with Ollama.
## Running the Example
1. Ensure you have the `llama2` model installed:
```bash
ollama pull llama2
```
2. Install the Python Requirements.
```bash
pip install -r requirements.txt
```
3. Run the example:
```bash
python main.py
```
\ No newline at end of file
from langchain.llms import Ollama

input = input("What is your question?")
llm = Ollama(model="llama2")
res = llm.predict(input)
print(res)
@@ -2,20 +2,22 @@
This example is a basic "hello world" of using LangChain with Ollama in Node.js and TypeScript.
## Running the Example
1. Install the prerequisites:
```bash
npm install
```
2. Ensure the `mistral` model is available:
```bash
ollama pull mistral
```
3. Run the example:
```bash
npm start
```
import { Ollama } from 'langchain/llms/ollama';
import * as readline from "readline";

async function main() {
  const ollama = new Ollama({
    model: 'mistral'
    // other parameters can be found at https://js.langchain.com/docs/api/llms_ollama/classes/Ollama
  });

  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  rl.question("What is your question: \n", async (user_input) => {
    const stream = await ollama.stream(user_input);

    for await (const chunk of stream) {
      process.stdout.write(chunk);
    }
    rl.close();
  })
}

main();
\ No newline at end of file
{
  "name": "langchain-typescript-simple",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
......
{
  "scripts": {
    "start": "tsx main.ts"
  },
  "devDependencies": {
    "tsx": "^4.6.2",
    "typescript": "^5.3.3"
  },
  "dependencies": {
    "langchain": "^0.0.165",
    "readline": "^1.3.0"
  }
}
# Example Modelfile - Tweetwriter
This simple example shows what you can do without any code, simply relying on a Modelfile. The file has two instructions:
1. FROM - The FROM instruction defines the parent model to use for this one. If you choose a model from the library, you can enter just the model name. For all other models, you need to specify the namespace as well. You could also use a local file; just include the relative path to the converted, quantized model weights file. To learn more about creating that file, see the `import.md` file in the docs folder of this repository.
2. SYSTEM - This defines the system prompt for the model and overrides the system prompt from the parent model. A sketch of putting both instructions together follows this list.
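As a rough illustration of how those two instructions fit together, here is a minimal Python sketch that writes a hypothetical Modelfile and creates a model from it with the `ollama` CLI. The system prompt text below is invented for illustration and is not the Modelfile shipped with this example.

```python
import subprocess
from pathlib import Path

# Hypothetical Modelfile: FROM names the parent model, SYSTEM overrides its system prompt.
modelfile = """FROM llama2
SYSTEM You are a social media expert. Given a topic, reply with a single short tweet about it.
"""

Path("Modelfile").write_text(modelfile)

# Equivalent to running `ollama create tweetwriter -f Modelfile` by hand.
subprocess.run(["ollama", "create", "tweetwriter", "-f", "Modelfile"], check=True)
```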
## Running the Example
1. Create the model:
```bash
ollama create tweetwriter
```
2. Run the model with `ollama run tweetwriter`, then enter a topic to generate a tweet about.
3. Show the Modelfile in the REPL.
```bash
/show modelfile
```
Notice that the FROM and SYSTEM match what was in the file. But there is also a TEMPLATE and PARAMETER. These are inherited from the parent model.
\ No newline at end of file
# DockerIt
DockerIt is a tool to help you build and run your application in a Docker container. It consists of a model that defines the system prompt and model weights to use, along with a Python script that then builds the container and runs the image automatically.
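The actual `dockerit.py` script isn't reproduced here, but the general flow it describes might be sketched roughly like this; the prompt handling and output cleanup below are simplified assumptions rather than the real script:

```python
import subprocess
import requests

def generate_dockerfile(description):
    # Ask the model (non-streaming) for Dockerfile content for the described app.
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "mattw/dockerit", "prompt": description, "stream": False})
    r.raise_for_status()
    return r.json()["response"]

description = "simple postgres server with admin password set to 123"
with open("Dockerfile", "w") as f:
    f.write(generate_dockerfile(description))

# Build and run the image with the Docker CLI.
image = input("Enter the name of the image: ")
subprocess.run(["docker", "build", "-t", image, "."], check=True)
subprocess.run(["docker", "run", "-d", image], check=True)
```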
## Running the Example
1. Ensure you have the `mattw/dockerit` model installed:
```bash
ollama pull mattw/dockerit
```
2. Make sure Docker is running on your machine.
3. Install the Python Requirements.
```bash
pip install -r requirements.txt
```
4. Run the example:
```bash
python dockerit.py "simple postgres server with admin password set to 123"
```
5. Enter the name you would like to use for your container image.
## Caveats
This is a simple example. It's assuming the Dockerfile content generated is going to work. In many cases, even with simple web servers, it fails when trying to copy files that don't exist. It's simply an example of what you could possibly do.
@@ -4,6 +4,32 @@
There are two python scripts in this example. `randomaddresses.py` generates random addresses from different countries. `predefinedschema.py` sets a template for the model to fill in.
## Running the Example
1. Ensure you have the `llama2` model installed:
```bash
ollama pull llama2
```
2. Install the Python Requirements.
```bash
pip install -r requirements.txt
```
3. Run the Random Addresses example:
```bash
python randomaddresses.py
```
4. Run the Predefined Schema example:
```bash
python predefinedschema.py
```
## Review the Code
Both programs are basically the same, with a different prompt for each, demonstrating two different ideas. The key part of getting JSON out of a model is to state in the prompt or system prompt that it should respond using JSON, and specifying the `format` as `json` in the data body.
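For reference, a stripped-down sketch of that idea (not the exact code from either script) looks like this, with the JSON instruction in the prompt and `"format": "json"` in the request body:

```python
import json
import requests

prompt = ("Generate one random address from the United Kingdom. "
          "Respond using JSON only.")

data = {
    "model": "llama2",
    "prompt": prompt,
    "format": "json",   # ask Ollama to constrain the output to valid JSON
    "stream": False,
}

r = requests.post("http://localhost:11434/api/generate", json=data)
r.raise_for_status()

# The response field holds the model's JSON text; parse it like any other JSON string.
address = json.loads(r.json()["response"])
print(address)
```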
......
@@ -16,12 +16,12 @@ def find_errors_in_log_file():
    with open(log_file_path, 'r') as log_file:
        log_lines = log_file.readlines()

    error_logs = []
    for i, line in enumerate(log_lines):
        if "error" in line.lower():
            start_index = max(0, i - prelines)
            end_index = min(len(log_lines), i + postlines + 1)
            error_logs.extend(log_lines[start_index:end_index])

    return error_logs
@@ -32,7 +32,6 @@ data = {
    "model": "mattw/loganalyzer"
}

response = requests.post("http://localhost:11434/api/generate", json=data, stream=True)
for line in response.iter_lines():
    if line:
......
@@ -2,12 +2,34 @@
![loganalyzer 2023-11-10 08_53_29](https://github.com/jmorganca/ollama/assets/633681/ad30f1fc-321f-4953-8914-e30e24db9921)
This example shows one possible way to create a log file analyzer. It uses the model **mattw/loganalyzer** which is based on **codebooga**, a 34b parameter model.
To use it, run:
`python loganalysis.py <logfile>`
You can try this with the `logtest.logfile` file included in this directory.
## Running the Example
1. Ensure you have the `mattw/loganalyzer` model installed:
```bash
ollama pull mattw/loganalyzer
```
2. Install the Python Requirements.
```bash
pip install -r requirements.txt
```
3. Run the example:
```bash
python loganalysis.py logtest.logfile
```
## Review the code
The first part of this example is a Modelfile that takes `codebooga` and applies a new System Prompt:
@@ -45,4 +67,4 @@ for line in response.iter_lines():
There is a lot more that can be done here. This is a simple way to detect errors, looking for the word "error". Perhaps it would be interesting to find anomalous activity in the logs. It could be interesting to create embeddings for each line and compare them, looking for similar lines. Or look into applying Levenshtein Distance algorithms to find similar lines to help identify the anomalous lines.
Try different models and different prompts to analyze the data. You could consider adding retrieval augmented generation (RAG) to this to help understand newer log formats.
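As one hedged sketch of the Levenshtein idea mentioned above (this is not part of the example's code), you could flag log lines that have no close neighbor anywhere else in the file:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anomalous_lines(lines, threshold=20):
    # A line with no similar neighbor anywhere else in the log is a candidate anomaly.
    # The threshold is an arbitrary starting point; tune it for your log format.
    for i, line in enumerate(lines):
        others = lines[:i] + lines[i + 1:]
        if others and min(levenshtein(line, o) for o in others) > threshold:
            yield line

with open("logtest.logfile") as f:
    for line in anomalous_lines([l.strip() for l in f]):
        print(line)
```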
@@ -14,9 +14,22 @@ This example goes through a series of steps:
This example lets you pick from a few different topic areas, then summarize the most recent x articles for that topic. It then creates chunks of sentences from each article and then generates embeddings for each of those chunks.
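A rough sketch of that chunk-and-embed step is shown below; the sentence splitting, chunk size, and the use of `mistral-openorca` for embeddings are assumptions here, simplified from what `summ.py` actually does.

```python
import re
import requests

def chunk_sentences(text, sentences_per_chunk=4):
    # Naive sentence split on end punctuation, then group sentences into fixed-size chunks.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [" ".join(sentences[i:i + sentences_per_chunk])
            for i in range(0, len(sentences), sentences_per_chunk)]

def embed(chunk, model="mistral-openorca"):
    # Ollama's embeddings endpoint returns a vector for the given text.
    r = requests.post("http://localhost:11434/api/embeddings",
                      json={"model": model, "prompt": chunk})
    r.raise_for_status()
    return r.json()["embedding"]

article = "Some article text. It has several sentences. Each chunk of sentences gets its own embedding."
embeddings = [(chunk, embed(chunk)) for chunk in chunk_sentences(article)]
print(f"{len(embeddings)} chunks embedded")
```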
## Running the Example
1. Ensure you have the `mistral-openorca` model installed:
```bash
ollama pull mistral-openorca
```
2. Install the Python Requirements.
```bash
pip install -r requirements.txt
```
3. Run the example:
```bash
python summ.py
```
@@ -24,7 +24,6 @@ def chat(messages):
            # the response streams one token at a time, print that as we receive it
            print(content, end="", flush=True)

        if body.get("done", False):
            message["content"] = output
            return message
@@ -32,9 +31,11 @@ def chat(messages):
def main():
    messages = []

    while True:
        user_input = input("Enter a prompt: ")
        if not user_input:
            exit()
        print()
        messages.append({"role": "user", "content": user_input})
        message = chat(messages)
......
# Simple Chat Example
The **chat** endpoint is one of two ways to generate text from an LLM with Ollama, and was introduced in version 0.1.14. At a high level, you provide the endpoint an array of objects with a role and content specified. Then with each output and prompt, you add more of those role/content objects, which builds up the history.
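As a minimal sketch of that request/response shape (separate from the `client.py` in this example, and with streaming turned off to keep it short):

```python
import requests

messages = [
    {"role": "user", "content": "Why is the sky blue?"},
]

r = requests.post("http://localhost:11434/api/chat",
                  json={"model": "llama2", "messages": messages, "stream": False})
r.raise_for_status()

# The reply is another role/content object; append it so the history builds up.
reply = r.json()["message"]
messages.append(reply)
print(reply["content"])
```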
## Running the Example
1. Ensure you have the `llama2` model installed:
```bash
ollama pull llama2
```
2. Install the Python Requirements.
```bash
pip install -r requirements.txt
```
3. Run the example:
```bash
python client.py
```
## Review the Code
......
# Simple Generate Example
This is a simple example using the **Generate** endpoint.
## Running the Example
1. Ensure you have the `stablelm-zephyr` model installed:
```bash
ollama pull stablelm-zephyr
```
2. Install the Python Requirements.
```bash
pip install -r requirements.txt
```
3. Run the example:
```bash
python client.py
```
## Review the Code
The **main** function simply asks for input, then passes that to the generate function. The output from generate is then passed back to generate on the next run.
The **generate** function uses `requests.post` to call `/api/generate`, passing the model, prompt, and context. The `generate` endpoint returns a stream of JSON blobs that are then iterated through, looking for the response values. That is then printed out. The final JSON object includes the full context of the conversation so far, and that is the return value from the function.
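A condensed sketch of that loop (not the full `client.py`; error handling is omitted) could look like:

```python
import json
import requests

model = "stablelm-zephyr"

def generate(prompt, context):
    # Stream JSON blobs from /api/generate, printing each partial response as it arrives.
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": model, "prompt": prompt, "context": context},
                      stream=True)
    r.raise_for_status()
    for line in r.iter_lines():
        if not line:
            continue
        body = json.loads(line)
        print(body.get("response", ""), end="", flush=True)
        if body.get("done"):
            # The final blob carries the full conversation context for the next turn.
            return body.get("context", [])

context = []
while True:
    user_input = input("Enter a prompt: ")
    if not user_input:
        break
    context = generate(user_input, context)
    print()
```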
@@ -2,7 +2,7 @@ import json
import requests

# NOTE: ollama must be running for this to work, start the ollama app or run `ollama serve`
model = 'stablelm-zephyr' # TODO: update this for whatever model you wish to use

def generate(prompt, context):
    r = requests.post('http://localhost:11434/api/generate',
@@ -30,6 +30,8 @@ def main():
    context = [] # the context stores a conversation history, you can use this to make the model more context aware
    while True:
        user_input = input("Enter a prompt: ")
        if not user_input:
            exit()
        print()
        context = generate(user_input, context)
        print()
......