In this tutorial, we'll use `poetry` to manage our project dependencies. If you don't have `poetry` installed, you can install it by following the instructions [here](https://python-poetry.org/docs/).
## Creating A Project
You'll start by creating a new Python project. You can name it whatever you like; for this tutorial, we'll call it `awel-tutorial`.
We suggest making a project directory in your home directory, but you can put it wherever you like.
Open a terminal and run the following commands to make a project directory and an AWEL tutorial directory:
For Linux, macOS, or PowerShell, enter this:
```bash
mkdir -p ~/projects
cd ~/projects
```
Then, run the following commands to create a new project and change to the new directory:
```bash
poetry new awel-tutorial
cd awel-tutorial
```
The tree of the project should look like this:
```plaintext
awel-tutorial
├── README.md
├── awel_tutorial
│   └── __init__.py
├── pyproject.toml
└── tests
    └── __init__.py
```
## Adding DB-GPT Dependency
Now, add the `dbgpt` dependency to the project:
```bash
poetry add "dbgpt>=0.5.1"
```
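To verify the dependency is available, you can run a quick check (assuming the package exposes `__version__`; the exact version string depends on what `poetry` resolved):

```python
# Quick check that the dbgpt package is importable.
import dbgpt

print(dbgpt.__version__)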
## First Hello World
Next, you'll create a simple DAG that serves "Hello, world!" over an HTTP endpoint.
Create a new file called `first_hello_world.py` in the `awel_tutorial` directory and add the DAG code to it.
Then run the following command to execute the code:
```bash
poetry run python awel_tutorial/first_hello_world.py
```
The main output should look like this:
```plaintext
2024-03-03 16:26:57 | INFO | dbgpt.core.awel.trigger.http_trigger | Mount http trigger success, path: /api/v1/awel/trigger/awel_tutorial/hello_world
2024-03-03 16:26:57 | INFO | dbgpt.core.awel.trigger.trigger_manager | Include router <fastapi.routing.APIRouter object at 0x10ed64e50> to prefix path /api/v1/awel/trigger
INFO: Started server process [69774]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:5555 (Press CTRL+C to quit)
```
In AWEL, all HTTP endpoints are prefixed with `/api/v1/awel/trigger` by default.
Now, open a new terminal and run the following command to send a request to the server:
```bash
curl -X GET http://127.0.0.1:5555/api/v1/awel/trigger/awel_tutorial/hello_world
```
The output should look like this:
```plaintext
"Hello, world!"
```
Congratulations! You have created your first HTTP trigger.
Then you can stop the server by pressing `Ctrl+C` in the terminal.
## How It Works
In the code above, we created an `HttpTrigger` operator and a `MapOperator` operator.
`HttpTrigger` defines the endpoint of the HTTP request; the request method is "GET" by default.
The `setup_dev_environment` function starts the server and registers the DAGs. If a DAG contains `HttpTrigger` operators, the function blocks the main thread and listens on port 5555 by default.
When the server receives a request, it calls the `MapOperator` operator to process the request and return the result.
In `HttpTrigger`, you can configure the endpoint, method, request body, response body, response status code, and more.
The next section introduces `HttpTrigger` in more detail.
The traditional knowledge preparation process of Native RAG turns documents into a searchable database: reading unstructured documents -> chunking the knowledge -> embedding the chunks -> importing them into a vector database.
# Applicable Scenarios
+ Supports simple intelligent question-and-answer scenarios, recalling context information through semantic similarity.
+ Users can tailor and extend the existing embedding processing workflow according to their own business scenarios.
- `Knowledge Loader Operator`: a knowledge loading factory that loads the specified document type and finds the corresponding document processor to parse the document content.
- `Document Chunk Manager Operator`: slices the loaded document content according to the specified chunking parameters.
- `Vector Storage Processing Operator`: connects to different vector databases for vector storage, and to different embedding models and services for vector extraction.
+ Register the workflow as an HTTP POST request:
```bash
curl --location --request POST 'http://localhost:5670/api/v1/awel/trigger/rag/knowledge/embedding/process' \
--header 'Content-Type: application/json' \
--data-raw '{}'
```
The response looks like this (truncated):
```json
[
    {
        "content": "\"What is AWEL?\": Agentic Workflow Expression Language(AWEL) is a set of intelligent agent workflow expression language specially designed for large model application\ndevelopment. It provides great functionality and flexibility. Through the AWEL API, you can focus on the development of business logic for LLMs applications\nwithout paying attention to cumbersome model and environment details. \nAWEL adopts a layered API design. AWEL's layered API design architecture is shown in the figure below. \n<p align=\"left\">\n<img src={'/img/awel.png'} width=\"480px\"/>\n</p>",
        ...
```
At present, the DB-GPT knowledge base provides knowledge processing capabilities such as `document uploading` -> `parsing` -> `chunking` -> `Embedding` -> `Knowledge Graph triple extraction` -> `vector database storage` -> `graph database storage`, but it cannot extract complex information from documents, such as performing vector extraction and Knowledge Graph extraction on document chunks at the same time. The hybrid knowledge processing template defines a complex knowledge processing workflow that supports document vector extraction, keyword extraction, and Knowledge Graph extraction simultaneously.
# Applicable Scenarios
+ It is not limited to a traditional, single knowledge processing flow (only embedding processing or only knowledge graph extraction); the workflow performs Embedding and Knowledge Graph extraction at the same time, serving as the data store for hybrid knowledge recall and retrieval.
+ Users can tailor and extend the existing knowledge processing workflow based on their own business scenarios.
- `Knowledge Loader Operator`: a knowledge loading factory that loads the specified document type and finds the corresponding document processor to parse the document content.
- `Document Chunk Slicing Operator`: slices the loaded document content according to the specified chunking parameters.
- `Knowledge Processing Branch Operator`: connects different knowledge processing flows, including knowledge graph processing, vector processing, and keyword processing.
- `Vector Storage Processing Operator`: connects to different vector databases for vector storage, and to different embedding models and services for vector extraction.
- `Knowledge Graph Processing Operator`: connects different knowledge graph processing operators, including the native knowledge graph operator and the community-summary knowledge graph operator. You can also specify different graph databases for storage; currently, TuGraph is supported.
- `Result Aggregation Operator`: summarizes the results of vector extraction and Knowledge Graph extraction.
+ Register the workflow as an HTTP POST request:
```bash
curl --location --request POST 'http://localhost:5670/api/v1/awel/trigger/rag/knowledge/hybrid/process' \
--header 'Content-Type: application/json' \
--data-raw '{}'
```
Unlike traditional Native RAG, which uses vectors as the data carrier, GraphRAG requires triple extraction (entity -> relationship -> entity) to build a knowledge graph, so the entire knowledge processing can also be regarded as the process of building a knowledge graph.
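As a toy, self-contained illustration of what triples look like as a data structure (this is not DB-GPT code; the entities and relationships are made up for the example):

```python
# Toy illustration: a knowledge graph represented as a list of
# (entity, relationship, entity) triples, indexed by head entity.
triples = [
    ("AWEL", "is_a", "workflow expression language"),
    ("DB-GPT", "provides", "AWEL"),
]

graph = {}
for head, rel, tail in triples:
    graph.setdefault(head, []).append((rel, tail))

print(graph["DB-GPT"])  # [('provides', 'AWEL')]
```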
- `Knowledge Loader Operator`: a knowledge loading factory that loads the specified document type and finds the corresponding document processor to parse the document content.
- `Document Chunk Slicing Operator`: slices the loaded document content according to the specified chunking parameters.
- `Knowledge Graph Processing Operator`: connects different knowledge graph processing operators, including the native knowledge graph operator and the community-summary knowledge graph operator. You can also specify different graph databases for storage; currently, TuGraph is supported.
+ Register the workflow as an HTTP POST request:
```bash
curl --location --request POST 'http://localhost:5670/api/v1/awel/trigger/rag/knowledge/kg/process' \
--header 'Content-Type: application/json' \
--data-raw '{}'
```
The response looks like this (truncated):
```json
[
    {
        "content": "\"What is AWEL?\": Agentic Workflow Expression Language(AWEL) is a set of intelligent agent workflow expression language specially designed for large model application\ndevelopment. It provides great functionality and flexibility. Through the AWEL API, you can focus on the development of business logic for LLMs applications\nwithout paying attention to cumbersome model and environment details. \nAWEL adopts a layered API design. AWEL's layered API design architecture is shown in the figure below. \n<p align=\"left\">\n<img src={'/img/awel.png'} width=\"480px\"/>\n</p>",
        ...
```
Finally, call the RAG task with a question and print the answer:
```python
print(asyncio.run(result_task.call("What is the AWEL?")))
```
The output will be:
```bash
AWEL stands for Agentic Workflow Expression Language, which is a set of intelligent agent workflow expression language designed for large model application development. It simplifies the process by providing functionality and flexibility through its layered API design architecture, including the operator layer, AgentFrame layer, and DSL layer. Its goal is to allow developers to focus on business logic for LLMs applications without having to deal with intricate model and environment details.
```
Congratulations! You have created a RAG program with AWEL.
### Full Code
Let's look at the full code of `first_rag_with_awel.py`:
```python
import asyncio
import shutil
from dbgpt.core.awel import DAG, MapOperator, InputOperator, JoinOperator, InputSource
from dbgpt.core.operators import PromptBuilderOperator, RequestBuilderOperator
from dbgpt_ext.rag import ChunkParameters
from dbgpt.rag.knowledge import KnowledgeType
from dbgpt.rag.operators import (
    EmbeddingAssemblerOperator,
    KnowledgeOperator,
    EmbeddingRetrieverOperator,
)
from dbgpt.rag.embedding import DefaultEmbeddingFactory
from dbgpt.storage.vector_store.chroma_store import ChromaStore, ChromaVectorConfig
from dbgpt.model.operators import LLMOperator
from dbgpt.model.proxy import OpenAILLMClient
# Here we use the openai embedding model, if you want to use other models, you can
# replace it according to the previous example.
embeddings = DefaultEmbeddingFactory.openai()
# Here we use the openai LLM model, if you want to use other models, you can replace
# it according to the previous example.
llm_client = OpenAILLMClient()
# Delete the old vector store directory (/tmp/awel_rag_test_vector_store)
shutil.rmtree("/tmp/awel_rag_test_vector_store", ignore_errors=True)
chunks = asyncio.run(retriever_task.call("Query the name and age of users younger than 18 years old"))
print("Retrieved schema:\n", chunks)
```
## Chat With Database
### Prepare LLM
We use an LLM to generate SQL queries. Here we use OpenAI's model; you can replace it
with other models according to [Prepare LLM](./first_rag_with_awel.md#prepare-llm).
```python
from dbgpt.model.proxy import OpenAILLMClient
llm_client = OpenAILLMClient()
```
### Prepare Some Decisions
Sometimes we want the LLM to make decisions for us. Here we provide the set of decisions it can choose from: the available chart types.
```python
antv_charts = [
{"response_line_chart": "used to display comparative trend analysis data"},
{
"response_pie_chart": "suitable for scenarios such as proportion and distribution statistics"
},
{
"response_table": "suitable for display with many display columns or non-numeric columns"
},
# {"response_data_text":" the default display method, suitable for single-line or simple content display"},
{
"response_scatter_plot": "Suitable for exploring relationships between variables, detecting outliers, etc."
},
{
"response_bubble_chart": "Suitable for relationships between multiple variables, highlighting outliers or special situations, etc."
},
{
"response_donut_chart": "Suitable for hierarchical structure representation, category proportion display and highlighting key categories, etc."
},
{
"response_area_chart": "Suitable for visualization of time series data, comparison of multiple groups of data, analysis of data change trends, etc."
},
{
"response_heatmap": "Suitable for visual analysis of time series data, large-scale data sets, distribution of classified data, etc."
},
]
display_type = "\n".join(
f"{key}:{value}" for dict_item in antv_charts for key, value in dict_item.items()
)
```
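As a quick self-contained illustration of the join pattern above (using a two-item subset of the chart list), the resulting `display_type` string contains one `name:description` pair per line:

```python
# Two-item subset of the chart list, joined the same way as above.
antv_charts = [
    {"response_line_chart": "used to display comparative trend analysis data"},
    {"response_pie_chart": "suitable for scenarios such as proportion and distribution statistics"},
]
display_type = "\n".join(
    f"{key}:{value}" for dict_item in antv_charts for key, value in dict_item.items()
)
print(display_type)
```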
### Generate SQL
Now, let's pass the user query and database schema to LLM to generate SQL.
```python
import asyncio
import json
from dbgpt.core import (
    ChatPromptTemplate,
    HumanPromptTemplate,
    SystemPromptTemplate,
    SQLOutputParser,
)
from dbgpt.core.awel import DAG, InputOperator, InputSource, MapOperator, JoinOperator
from dbgpt.core.operators import PromptBuilderOperator, RequestBuilderOperator
from dbgpt.rag.operators import DBSchemaRetrieverOperator
from dbgpt.model.operators import LLMOperator
system_prompt = """You are a database expert. Please answer the user's question based on the database selected by the user and some of the available table structure definitions of the database.
Database name:
{db_name}
Table structure definition:
{table_info}
Constraint:
1.Please understand the user's intention based on the user's question, and use the given table structure definition to create a grammatically correct {dialect} sql. If sql is not required, answer the user's question directly.
2.Always limit the query to a maximum of {top_k} results unless the user specifies in the question the specific number of rows of data he wishes to obtain.
3.You can only use the tables provided in the table structure information to generate sql. If you cannot generate sql based on the provided table structure, please say: "The table structure information provided is not enough to generate sql queries." It is prohibited to fabricate information at will.
4.Please be careful not to mistake the relationship between tables and columns when generating SQL.
5.Please check the correctness of the SQL and ensure that the query performance is optimized under correct conditions.
6.Please choose the best one from the display methods given below for data rendering, and put the type name into the name parameter value that returns the required format. If you cannot find the most suitable one, use 'Table' as the display method.
the available data display methods are as follows: {display_type}
User Question:
{user_input}
Please think step by step and respond according to the following JSON format:
{response}
Ensure the response is correct json and can be parsed by Python json.loads.
"""
```

The model responds in the required JSON format, for example:

```json
{
    "thoughts": "The user wants to retrieve the name and age of users who are younger than 18 years old from the 'user_management' database.",
    "sql": "SELECT name, age FROM user WHERE age < 18",
    "display_type": "response_table"
}
```

Result:

```plaintext
{'thoughts': "The user wants to retrieve the name and age of users who are younger than 18 years old from the 'user_management' database.", 'sql': 'SELECT name, age FROM user WHERE age < 18', 'display_type': 'response_table'}
```
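Since the prompt instructs the model to return JSON parsable by `json.loads`, the downstream parsing step boils down to the following (a simplified illustration with a hypothetical raw model response):

```python
import json

# A hypothetical raw model response following the required format.
raw = (
    '{"thoughts": "Retrieve name and age of users younger than 18.", '
    '"sql": "SELECT name, age FROM user WHERE age < 18", '
    '"display_type": "response_table"}'
)
parsed = json.loads(raw)
print(parsed["sql"])  # SELECT name, age FROM user WHERE age < 18
```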
### Execute SQL
Let's add an operator that executes the previously generated SQL.
```python
from dbgpt.datasource.operators import DatasourceOperator
```
AWEL (Agentic Workflow Expression Language) makes it easy to build complex LLM apps, and it provides great functionality and flexibility.
## Basic example using AWEL: HTTP request + output rewrite
The basic usage of AWEL is to build an HTTP request and rewrite some output value. To see how this works, let's look at an example.
### DAG Planning
First, let's look at an introductory example of basic AWEL orchestration. The core function of the example is the handling of input and output for an HTTP request. Thus, the entire orchestration consists of only two steps:
- HTTP Request
- Processing HTTP Response Result
In DB-GPT, some basic dependent operators have already been encapsulated and can be referenced directly.
Define an HTTP request body that accepts two parameters: name and age.
```python
class TriggerReqBody(BaseModel):
    name: str = Field(..., description="User name")
    age: int = Field(18, description="User age")
```
Define a request handler operator called `RequestHandleOperator`, which extends the basic `MapOperator`. Its actions are straightforward: parse the request body and extract the name and age fields, then concatenate them into a sentence. For example:
```python
    return f"Hello, {input_value.name}, your age is {input_value.age}"
```
### DAG Pipeline
After writing the above operators, they can be assembled into a DAG orchestration. This DAG has a total of two nodes: the first node is an `HttpTrigger`, which primarily processes HTTP requests (this operator is built into DB-GPT), and the second node is the newly defined `RequestHandleOperator` that processes the request body. The DAG code below can be used to link the two nodes together.
Before performing access verification, the project needs to be started first: `python dbgpt/app/dbgpt_server.py`
```bash
% curl -X GET http://127.0.0.1:5670/api/v1/awel/trigger/examples/hello\?name\=zhangsan
"Hello, zhangsan, your age is 18"
```
Of course, to make testing more convenient, we also provide a test environment that works without starting dbgpt_server. Add the following code below `simple_dag_example`, then run the `simple_dag_example.py` script directly to test it without starting the project.
```python
if __name__ == "__main__":
    if dag.leaf_nodes[0].dev_mode:
        # Development mode, you can run the dag locally for debugging.
        from dbgpt.core.awel import setup_dev_environment

        setup_dev_environment([dag], port=5555)
    else:
        # Production mode, DB-GPT will automatically load and execute the current file after startup.
        pass
```
```bash
curl -X GET http://127.0.0.1:5555/api/v1/awel/trigger/examples/hello\?name\=zhangsan
"Hello, zhangsan, your age is 18"
```
AWEL (Agentic Workflow Expression Language) is an intelligent agent workflow expression language specifically designed for the development of LLM applications. In the design of DB-GPT, Agents are considered first-class citizens. RAGs, Datasources (DS), SMMF (Service-oriented Multi-model Management Framework), and Plugins are all resources that agents depend on.
We also observe that the auto-orchestration capabilities of multi-agents are still greatly limited by model capabilities, and some scenarios require determinism: pipeline-style tasks, for instance, do not need the auto-orchestration capabilities of large models. Therefore, in DB-GPT, integrating AWEL with agents can support both production-level pipelines and the auto-orchestration of agent systems that address open-ended problems.
Through the orchestration capabilities of AWEL, it is possible to develop large language model applications with a minimal amount of code.
# Released V0.5.0 | Develop native data applications through workflows and agents
## Release Notes for Version 0.5.0
After a period of intensive development, version 0.5.0 has taken over two months to come to fruition. This marks the first stable release that will be maintained over an extended period within the DB-GPT project. Concurrently, the long-term vision for DB-GPT has been officially set: it aims to be an AI native data application development framework utilizing Agentic Workflow Expression Language (AWEL) and agents.
In essence, this framework facilitates the creation of data-centric applications through an intelligent agent-based expression language.
<p align="left">
<img src={'/img/app/app_list.png'} width="720px"/>
</p>
## Introduction to Version Update
In its early releases, the DB-GPT project offered six default use cases.
These scenarios were designed to satisfy basic and simple requirements. However, for large-scale production deployment, particularly when dealing with complex business scenarios, it becomes necessary to develop custom scenarios tailored to specific business conditions, which presents significant challenges in terms of flexibility and development complexity.
To further enhance the usability and flexibility of the business framework, we have built upon our existing features, including the multi-model management (SMMF), knowledge base, Agents, data sources, plugins, and Prompts. We have abstracted the capabilities of intelligent agent orchestration (AWEL) and application construction. Additionally, to facilitate application management and distribution, we have introduced the [dbgpts](https://github.com/eosphoros-ai/dbgpts) subproject, which specifically manages the construction of native intelligent data applications, AWEL common operators, AWEL generic workflow templates, and Agents on top of DB-GPT.
This version update will not affect the usage of the previously established six scenarios. However, with subsequent iterations, these default scenarios will gradually be rewritten as Data Apps. We also plan to incorporate them into the `dbgpts` project as default applications, making them readily available for installation and use.
Now, let's walk through the main updates in this release.
### Glossary of Terms
1. **Data App**: an intelligent data application built on DB-GPT.
2. **AWEL**: Agentic Workflow Expression Language, an intelligent workflow expression language.
3. **AWEL Flow**: a workflow orchestrated with AWEL.
4. **SMMF**: Service-oriented Multi-model Management Framework.
5. **Datasource**: a data source, such as MySQL, PostgreSQL, StarRocks, or ClickHouse.
## AWEL workflow and application
As shown in the following figure, in the left-side navigation pane, there is an AWEL workflow menu. After you open it, you can orchestrate the workflow.
As shown in the figure, the `dbgpt` command supports multiple operations, including model-related operations, knowledge base operations, and trace logs. Here we focus on app operations.
With the `dbgpt app list-remote` command, we can see that there are three AWEL workflows available in the current repository. Here we install the `awel-flow-web-info-search` workflow by running `dbgpt app install awel-flow-web-info-search`.
After the installation succeeds, restart the DB-GPT service (dynamic hot loading is on the way) and refresh the page; the corresponding workflow then appears on the AWEL workflow page.
In addition to installing the default AWEL flows using the official commands, you'll often need to build your own in practical scenarios. As illustrated below, by clicking on `New AWEL Flow`, you will be brought to the editing page as shown.
During the editing process, each task's downstream nodes and operators support auto-completion. By clicking the plus sign (➕) located at the bottom right of each operator, you can bring up a list of potential downstream operators that can be connected to the current one. This feature enhances the user experience by providing suggestions and making it easier to construct complex workflows without needing to remember the exact names or types of operators that are available for use.
We introduced the construction and installation of AWEL workflow. Next, we will introduce how to create a data application based on a large model.
### Search Chat App
The core capability of the search dialog application is to search for relevant knowledge through search engines (such as Baidu and Google) and then summarize and answer. The effect is as follows:
Creating the preceding application is very simple. On the application creation panel, click `create` and enter the required parameters to complete the creation. Note two parameters in particular: the working mode and the flow. The working mode used here is `awel_layout`, and the selected AWEL workflow is `awel-flow-web-info-search`, which was installed earlier.
<p align="left">
<img src={'/img/app/app_awel.png'} width="720px"/>
</p>
### Data analysis assistant
Using multi-agents, we built a data analysis assistant application. The results are as follows.
- Release of dbgpt core sdk (#1092): Now includes AWEL operator orchestration capabilities.
To install, you can use the command: `pip install dbgpt`
- Support for Jina Embeddings (#1105): The update integrates with Jina AI, which provides a way to create and manage embeddings for various data types, enhancing search and similarity tasks within the applications.
- New example of schema-linking using AWEL (#1081): There's a new example available demonstrating how to use AWEL for schema-linking, which can be valuable for tasks that require mapping between different data schemas.
- Unified card UI style, including knowledge base cards, model management cards, etc.: This update brings a more consistent look and feel across different UI components that display information in a card format.
## Bug Fixes
- MySQL databases no longer support automatic table creation and field auto-updates (#1133): This change may require developers to manually handle database schema changes, improving control over database migrations.
- Fixed the issue with default dialogues carrying history message records (#1117): This addresses potential privacy or performance issues by ensuring that history records are handled properly.
- Fixed the issue in examples/awel where model_name was fetched from model_config improperly (#1112): This improves the reliability of AWEL examples by ensuring that the model configuration is fetched and used correctly.
- Fixed DAGs sharing data issue (#1102): This fix might relate to data isolation in Directed Acyclic Graphs (DAGs) to ensure that workflows do not inadvertently share or overwrite data.
- Fixed issue with examples/awel default loading model text2vec-large-chinese (#1095): This fix ensures that the large Chinese text-to-vector model loads as expected in the given examples.
These changes reflect ongoing improvements to the dbgpt project, enhancing its capabilities, fixing known issues, and refining user experience. Users should refer to the official documentation or release notes for detailed instructions and information on these updates.
## Upgrade to V0.5.0
If your current version is V0.4.6 or V0.4.7, you need to upgrade to V0.5.0.