Table of Contents
- I. Creating an Agent with LlamaIndex
  - 1. Testing the model
  - 2. Defining a custom LLM interface class
  - 3. Building an Agent with ReActAgent & FunctionTool
- II. A Database-Chat Agent
  - 1. The SQLite database
    - 1.1 Creating and connecting to a database
    - 1.2 Creating tables; inserting, querying, updating, and deleting data
    - 1.3 Closing the connection
    - Building the database
  - 2. ollama
  - 3. Configuring the chat & embedding models
- III. Plugging RAG into the Agent
I. Creating an Agent with LlamaIndex
To implement an Agent with LlamaIndex, two imports are needed:
- FunctionTool: wraps a plain tool function in a tool object
  - The tool functions are what actually carry out the Agent's tasks. ⚠️ The LLM decides which function to call based on its docstring, so the docstring must clearly describe what the function does and what it returns.
- ReActAgent: a framework that builds a dynamic LLM agent by combining Reasoning and Acting
  - Initial reasoning: the agent first reasons about the task, gathers relevant information, and decides on its next step
  - Acting: based on that reasoning, the agent takes an action, e.g. querying an API, retrieving data, or executing a command
  - Observing: the agent observes the result of the action and collects any new information
  - Refined reasoning: with the new information, the agent reasons again, updating its understanding, plan, or hypotheses
  - Repeating: the agent alternates between reasoning and acting until it reaches a satisfactory conclusion or completes the task
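The reason-act-observe cycle above can be sketched as a toy loop in plain Python. This is purely illustrative: the `add` tool, the hard-coded `decide` policy (which stands in for the LLM's reasoning), and the stopping rule are all assumptions for the sketch, not LlamaIndex internals.

```python
# a toy ReAct loop: reason -> act -> observe -> repeat
def add(a, b):
    """Add two numbers."""
    return a + b

TOOLS = {"add": add}

def decide(task, observations):
    """Stand-in for the LLM's reasoning step: pick an action or finish."""
    if not observations:                      # initial reasoning: no result yet, call a tool
        return ("act", "add", (2, 3))
    return ("finish", f"answer: {observations[-1]}")  # refined reasoning: result in hand

def react(task, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = decide(task, observations)     # reasoning
        if step[0] == "finish":
            return step[1]
        _, tool_name, args = step
        result = TOOLS[tool_name](*args)      # acting
        observations.append(result)           # observing
    return "gave up"

print(react("what is 2 + 3?"))  # answer: 5
```

A real ReActAgent replaces `decide` with an LLM prompt that emits Thought/Action/Action Input text, as the run log later in this section shows.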
1. Testing the model
- We deliberately use a model with weak math ability
# https://bailian.console.aliyun.com/#/model-market/detail/chatglm3-6b?tabKey=sdk
import os
from dashscope import Generation

messages = [
    {'role': "system", 'content': 'You are a helpful assistant.'},
    {'role': "user", 'content': 'Which is larger, 9.11 or 9.8?'},
]
gen = Generation()
response = gen.call(
    api_key=os.getenv("API_KEY"),
    model='chatglm3-6b',
    messages=messages,
    result_format='message',
)
print(response.output.choices[0].message.content)
9.11 is larger than 9.8.
2. Defining a custom LLM interface class
# https://www.datawhale.cn/learn/content/86/3058
import os
from typing import Any, Generator

from dashscope import Generation
from llama_index.core.bridge.pydantic import Field
from llama_index.core.llms import CustomLLM, LLMMetadata, CompletionResponse
from llama_index.core.llms.callbacks import llm_completion_callback

class MyLLM(CustomLLM):
    api_key: str = Field(default=os.getenv("API_KEY"))
    base_url: str = Field(default=os.getenv("BASE_URL"))
    client: Generation = Field(default=Generation(), exclude=True)
    model_name: str

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            model_name=self.model_name,
            context_window=32768,  # set according to the actual model
            num_output=512,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        messages = [
            {'role': "user", 'content': prompt},  # adjust to the API's message format
        ]
        response = self.client.call(
            api_key=self.api_key,
            model=self.model_name,
            messages=messages,
            result_format='message',
        )
        return CompletionResponse(text=response.output.choices[0].message.content)

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> Generator[CompletionResponse, None, None]:
        response = self.client.call(
            api_key=self.api_key,
            model=self.model_name,
            messages=[{'role': "user", 'content': prompt}],
            stream=True,
        )
        current_text = ""
        for chunk in response:
            content = chunk.output.choices[0].delta.get('content', '')
            current_text += content
            yield CompletionResponse(text=current_text, delta=content)

# instantiate, reading the upper-case environment variable names
llm = MyLLM(
    api_key=os.getenv("API_KEY"),
    base_url=os.getenv("BASE_URL"),
    model_name='chatglm3-6b',
)
3. Building an Agent with ReActAgent & FunctionTool
from llama_index.core.tools import FunctionTool
from llama_index.core.agent import ReActAgent

def compare_number(a: float, b: float) -> str:
    """Compare two numbers and report which is larger."""
    if a > b:
        return f"{a} is greater than {b}"
    elif a < b:
        return f"{a} is less than {b}"
    else:
        return f"{a} is equal to {b}"

tool = FunctionTool.from_defaults(fn=compare_number)
agent = ReActAgent.from_tools([tool], llm=llm, verbose=True)
response = agent.chat("Which is larger, 9.11 or 9.8? Use the tool to compute.")
print(response)
> Running step 8c56594a-4edd-4d63-a196-99198df94e12. Step input: Which is larger, 9.11 or 9.8? Use the tool to compute.
Observation: Error: Could not parse output. Please follow the thought-action-input format. Try again.
> Running step 22bbb997-4b52-4230-8a4d-d8eda252b7d1. Step input: None
Thought: The user is asking to compare the numbers 9.11 and 9.8, and they would like to know which one is greater. I can use the compare_number function to achieve this.
Action: compare_number
Action Input: {'a': 9.11, 'b': 9.8}
Observation: 9.11 is less than 9.8
> Running step c6ce4186-3ea7-48c8-8f76-7d219118afc4. Step input: None
Thought: According to the comparison result, 9.11 is less than 9.8.
Answer: 9.11 < 9.8
9.11 < 9.8
II. A Database-Chat Agent
1. The SQLite database
1.1 Creating and connecting to a database
import sqlite3

# connect to (or create) the database
conn = sqlite3.connect('mydatabase.db')
# create a cursor object
cursor = conn.cursor()
1.2 Creating tables; inserting, querying, updating, and deleting data
- Create
# create
create_table_sql = """
CREATE TABLE IF NOT EXISTS employees (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    department TEXT,
    salary REAL
);
"""
cursor.execute(create_table_sql)
# commit the transaction
conn.commit()
- Insert
insert_sql = "INSERT INTO employees (name, department, salary) VALUES (?, ?, ?)"
# insert a single row
data = ("Alice", "Engineering", 75000.0)
cursor.execute(insert_sql, data)
conn.commit()
# insert many rows
employees = [
    ("Bob", "Marketing", 68000.0),
    ("Charlie", "Sales", 72000.0),
]
cursor.executemany(insert_sql, employees)
conn.commit()
- Query
# conditional query (filter by department)
cursor.execute("SELECT name, salary FROM employees WHERE department=?", ("Engineering",))
engineering_employees = cursor.fetchall()
print("\nEngineering department:")
for emp in engineering_employees:
    print(f"{emp[0]} - ${emp[1]:.2f}")
- Update
update_sql = "UPDATE employees SET salary = ? WHERE name = ?"
cursor.execute(update_sql, (80000.0, 'Alice'))
conn.commit()
- Delete
delete_sql = "DELETE FROM employees WHERE name = ?"
cursor.execute(delete_sql, ("Bob",))
conn.commit()
1.3 Closing the connection
# close the cursor and connection (release resources)
cursor.close()
conn.close()
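The connect/commit/close cycle above can also be written with a context manager: `sqlite3.Connection` works as one, committing automatically when the block exits cleanly and rolling back on an exception. A minimal sketch against an in-memory database (the `:memory:` path keeps the example self-contained):

```python
import sqlite3

# an in-memory database keeps the example self-contained
conn = sqlite3.connect(":memory:")
try:
    with conn:  # commits on success, rolls back on an exception
        conn.execute(
            "CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT NOT NULL, salary REAL)"
        )
        conn.execute(
            "INSERT INTO employees (name, salary) VALUES (?, ?)", ("Alice", 75000.0)
        )
    rows = conn.execute("SELECT name, salary FROM employees").fetchall()
    print(rows)  # [('Alice', 75000.0)]
finally:
    conn.close()  # note: the with-block does NOT close the connection
```

Note that the `with` block only manages the transaction; `close()` must still be called (or wrapped in `try/finally`) to release the connection.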
Building the database
How to build the database in Python:
import sqlite3

sqlite_path = "llmdb.db"
# 1. connect to the database and create a cursor object
conn = sqlite3.connect(sqlite_path)
cursor = conn.cursor()

create_sql = """
CREATE TABLE `section_stats` (
    `部门` varchar(100) DEFAULT NULL,
    `人数` int(11) DEFAULT NULL
);
"""
insert_sql = """
INSERT INTO section_stats (部门, 人数)
VALUES (?, ?)
"""
data = [['专利部', 22], ['商务部', 25]]
# 2. create the table
cursor.execute(create_sql)
conn.commit()
# 3. insert the data
cursor.executemany(insert_sql, data)
conn.commit()
# 4. close the connection
cursor.close()
conn.close()
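With the table in place, a plain Python function can expose it to the agent as a tool, the same way `compare_number` was wrapped earlier. A hedged sketch: the function name `get_section_staff_count` and the in-memory setup are illustrative, not from the original.

```python
import sqlite3

# rebuild the demo table in memory so the sketch is self-contained
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE section_stats (部门 varchar(100), 人数 int)")
conn.executemany(
    "INSERT INTO section_stats (部门, 人数) VALUES (?, ?)",
    [("专利部", 22), ("商务部", 25)],
)
conn.commit()

def get_section_staff_count(section: str) -> str:
    """Return the staff count of the given section.
    The docstring matters: the LLM picks tools based on it."""
    row = conn.execute(
        "SELECT 人数 FROM section_stats WHERE 部门 = ?", (section,)
    ).fetchone()
    if row is None:
        return f"No section named {section} was found."
    return f"{section} has {row[0]} people."

print(get_section_staff_count("专利部"))  # 专利部 has 22 people.
```

It could then be handed to the agent with `FunctionTool.from_defaults(fn=get_section_staff_count)`, just like the comparison tool in section I.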
2. ollama
Installing ollama
- Download and install from the official site: [https://ollama.com](https://ollama.com/)
- Install a model, e.g. run `ollama run qwen2.5:7b` (a "success" message means the install worked)
- A `>>>` prompt then appears, which is the chat window; type `/bye` to exit the interactive session
- Open 127.0.0.1:11434 in a browser; if "ollama is running" appears, the port is serving normally
- Environment configuration
  - Setting `OLLAMA_MODELS` & `OLLAMA_HOST`
    1. Create a storage path, e.g. `mkdir -p ~/programs/ollama/models`
    2. Edit the environment-variable configuration
       `vim ~/.bash_profile # or ~/.zshrc`
       `export OLLAMA_MODELS=~/programs/ollama/models`
       `export OLLAMA_HOST=0.0.0.0:11434`
  - On a Mac, make sure the network and firewall allow access: System Preferences -> Network (Security & Privacy -> Firewall)
  - Apply the configuration
    `source ~/.bash_profile # or ~/.zshrc`
3. Configuring the chat & embedding models
!pip install llama-index-llms-dashscope
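The original stops at the install step. A configuration sketch of what might follow, assuming the DashScope LLM integration plus a local ollama embedding model; the model ids (`qwen-turbo`, `nomic-embed-text`) and the use of the `llama-index-embeddings-ollama` package are assumptions, not from the original:

```python
# configuration sketch, not verified against a live environment;
# requires llama-index-llms-dashscope and llama-index-embeddings-ollama
from llama_index.core import Settings
from llama_index.llms.dashscope import DashScope
from llama_index.embeddings.ollama import OllamaEmbedding

# chat model served by DashScope (reads DASHSCOPE_API_KEY from the environment)
Settings.llm = DashScope(model_name="qwen-turbo")
# embedding model served by the local ollama instance started above
Settings.embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://127.0.0.1:11434",
)
```

Once set on `Settings`, these models become the defaults for any index or agent built afterwards.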
III. Plugging RAG into the Agent
https://github.com/deepseek-ai/DeepSeek-R1/blob/main/README.md