AI Agentic Design Patterns with AutoGen (Part 1): Sequential Chats and Agent Reflection


Table of Contents

    • 1. Multi-Agent Conversation: Stand-up Comedy
      • 1.1 Basic Agent Capabilities
      • 1.2 A Multi-Agent Conversation Example: Stand-up Comedy
        • 1.2.1 Creating the Agents
        • 1.2.2 Starting the Conversation
        • 1.2.3 Inspecting the Conversation and Customizing the Summary
        • 1.2.4 Setting a Termination Condition
    • 2. Sequential Chats: Customer Onboarding
      • 2.1 Creating the Agents
      • 2.2 Building the Sequential Chat Flow
      • 2.3 Inspecting Chat Results and Cost
    • 3. Agent Reflection: Blog Writing
      • 3.1 Creating the Writer Agent
      • 3.2 Creating a Critic Agent for Reflection and Refinement
      • 3.3 Nested Chats with Reviewer Agents for Complex Reflection
        • 3.3.1 Creating the Reviewer Agents
        • 3.3.2 Configuring the Review Chats
        • 3.3.3 Registering the Nested Chats and Starting the Chat

  • Course homepage: "AI Agentic Design Patterns with AutoGen"; Chinese-subtitled video on bilibili
  • microsoft/autogen; AutoGen documentation

1. Multi-Agent Conversation: Stand-up Comedy

1.1 Basic Agent Capabilities

  In AutoGen, an agent is an entity that can act on behalf of human intent: it can send messages, receive messages, perform actions, generate replies, and interact with other agents. AutoGen provides a built-in agent class called ConversableAgent, which unifies different types of agents under the same programming abstraction.

  ConversableAgent comes with many built-in capabilities, such as generating replies with a configured large language model, executing code or functions, keeping a human in the loop, and checking whether to stop responding. You can turn each of these components on or off and customize them to your application's needs; with these capabilities you can create agents that play different roles through the same interface.


  Let's first demonstrate the agent's basic capabilities. The code below can be run as-is in the course environment of "AI Agentic Design Patterns with AutoGen".

  1. Import the OpenAI API key and configure the LLM
    Use the get_openai_api_key function to load the OpenAI API key from the environment, and define an LLM configuration. In this course we use GPT-3.5 Turbo as the model.
from utils import get_openai_api_key
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gpt-3.5-turbo"}
  2. Define an AutoGen agent
    We use the ConversableAgent class to define an agent named chatbot and pass it the LLM configuration defined above, so that it can generate replies with a large language model.
from autogen import ConversableAgent

agent = ConversableAgent(
    name="chatbot",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

  human_input_mode is set to "NEVER", which means the agent will never ask for human input and will rely solely on the large language model to generate replies. You can also set it to "ALWAYS", in which case the agent always asks for human input before generating a reply. This is the agent's basic setup; you can additionally configure code execution, function execution, and other settings, as the sketch below illustrates.
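
  As a rough illustration (not part of the course notebook), a human-in-the-loop agent with local code execution could be configured along these lines, reusing the ConversableAgent import and llm_config from above; the "work_dir"/"use_docker" options follow the pyautogen 0.2 style and may differ in other versions.

human_in_the_loop = ConversableAgent(
    name="human_in_the_loop",
    llm_config=llm_config,
    human_input_mode="ALWAYS",   # always ask the human before sending a reply
    code_execution_config={      # allow the agent to execute code blocks locally
        "work_dir": "coding",    # directory where generated code is run
        "use_docker": False,     # set to True to sandbox execution in Docker
    },
)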

  3. Generate a reply with generate_reply
    Call the agent's generate_reply method with a list of messages, for example "Tell me a joke." with the role set to user. When run, the agent replies with a joke.
reply = agent.generate_reply(
    messages=[{"content": "Tell me a joke.", "role": "user"}]
)
print(reply)
Sure! Here's a joke for you: Why did the math book look sad? Because it had too many problems.

  If you call generate_reply again with the content changed to "Repeat the joke.", the agent will not repeat the joke: every call to generate_reply produces a fresh reply and does not remember previous ones. To give the agent earlier context, you have to pass it in yourself, as sketched after the output below.

reply = agent.generate_reply(
    messages=[{"content": "Repeat the joke.", "role": "user"}]
)
print(reply)
Of course! Please provide me with the joke you would like me to repeat.
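
  A minimal sketch (not from the course) of carrying context manually: include the earlier exchange in the messages list so the model can see it. The joke text below is simply the reply obtained above.

# generate_reply is stateless, so pass the prior turns in the message list yourself.
history = [
    {"content": "Tell me a joke.", "role": "user"},
    {"content": "Why did the math book look sad? Because it had too many problems.",
     "role": "assistant"},
    {"content": "Repeat the joke.", "role": "user"},
]
reply = agent.generate_reply(messages=history)
print(reply)  # with the earlier turns visible, the agent can repeat the joke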

1.2 A Multi-Agent Conversation Example: Stand-up Comedy

  Next, let's build a multi-agent conversation example: two ConversableAgents that simulate a conversation between two stand-up comedians.

1.2.1 Creating the Agents

Create two agents, named Cathy and Joe, and give each a system message so that the agent knows its name and that it is a stand-up comedian. Joe's system message adds one extra instruction: "Start the next joke from the punchline of the previous joke."

cathy = ConversableAgent(
    name="cathy",
    system_message=
    "Your name is Cathy and you are a stand-up comedian.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

joe = ConversableAgent(
    name="joe",
    system_message=
    "Your name is Joe and you are a stand-up comedian. "
    "Start the next joke from the punchline of the previous joke.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
1.2.2 Starting the Conversation

  Call the agent's initiate_chat method to start the conversation. For example, have Joe start the chat by setting the recipient to Cathy and providing the initial message and the maximum number of turns.

chat_result = joe.initiate_chat(
    recipient=cathy, 
    message="I'm Joe. Cathy, let's keep the jokes rolling.",
    max_turns=2,
)
joe (to cathy):

I'm Joe. Cathy, let's keep the jokes rolling.

--------------------------------------------------------------------------------
cathy (to joe):

Sure thing, Joe! So, did you know that I tried to write a joke about the wind, but it just ended up being too drafty? It was full of holes!

--------------------------------------------------------------------------------
joe (to cathy):

Well, Cathy, that joke might have been breezy, but I like to think of it as a breath of fresh air!

--------------------------------------------------------------------------------
cathy (to joe):

I'm glad you found it refreshing, Joe! But let me ask you this - why did the math book look sad? Because it had too many problems!

--------------------------------------------------------------------------------
1.2.3 Inspecting the Conversation and Customizing the Summary

  We can also inspect the conversation history and the tokens consumed, for example by printing the chat history with the pprint library and checking token usage and total cost.

import pprint

pprint.pprint(chat_result.chat_history)
[{'content': "I'm Joe. Cathy, let's keep the jokes rolling.",
  'role': 'assistant'},
 {'content': 'Sure thing, Joe! So, did you know that I tried to write a joke '
             'about the wind, but it just ended up being too drafty? It was '
             'full of holes!',
  'role': 'user'},
 {'content': 'Well, Cathy, that joke might have been breezy, but I like to '
             'think of it as a breath of fresh air!',
  'role': 'assistant'},
 {'content': "I'm glad you found it refreshing, Joe! But let me ask you this - "
             'why did the math book look sad? Because it had too many '
             'problems!',
  'role': 'user'}]
pprint.pprint(chat_result.cost)
{'usage_excluding_cached_inference': {'gpt-3.5-turbo-0125': {'completion_tokens': 95,
                                                             'cost': 0.000259,
                                                             'prompt_tokens': 233,
                                                             'total_tokens': 328},
                                      'total_cost': 0.000259},

  You can also view a summary of this conversation. By default, the last message is used as the summary.

pprint.pprint(chat_result.summary)
("I'm glad you found it refreshing, Joe! But let me ask you this - why did the "
 'math book look sad? Because it had too many problems!')

  You can also choose a different summarization method to get a more accurate summary. For example, set summary_method to reflection_with_llm (LLM-based summarization) and provide a concrete summary_prompt; the model will then summarize the conversation according to that prompt after the chat ends.

chat_result = joe.initiate_chat(
    cathy, 
    message="I'm Joe. Cathy, let's keep the jokes rolling.", 
    max_turns=2, 
    summary_method="reflection_with_llm",
    summary_prompt="Summarize the conversation",
)

pprint.pprint(chat_result.summary)
('The conversation is light-hearted and involves Joe and Cathy exchanging '
 'jokes about wind and math problems.')
1.2.4 Setting a Termination Condition

  If you cannot predict in which turn the conversation will end, you can instead set a termination condition. For example, pass a boolean function to is_termination_msg that checks whether an incoming message contains the termination phrase "I gotta go" and ends the chat if it does. We also update cathy's and joe's system messages to tell them to say "I gotta go" when they want to end the conversation. Whichever agent then detects "I gotta go" in a message it receives will automatically end the chat.

cathy = ConversableAgent(
    name="cathy",
    system_message=
    "Your name is Cathy and you are a stand-up comedian. "
    "When you're ready to end the conversation, say 'I gotta go'.",
    llm_config=llm_config,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "I gotta go" in msg["content"],
)

joe = ConversableAgent(
    name="joe",
    system_message=
    "Your name is Joe and you are a stand-up comedian. "
    "When you're ready to end the conversation, say 'I gotta go'.",
    llm_config=llm_config,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "I gotta go" in msg["content"] or "Goodbye" in msg["content"],
)
chat_result = joe.initiate_chat(
    recipient=cathy,
    message="I'm Joe. Cathy, let's keep the jokes rolling."
)
joe (to cathy):

I'm Joe. Cathy, let's keep the jokes rolling.

--------------------------------------------------------------------------------
cathy (to joe):

Hey there, Joe! Glad to hear you're ready for some laughs. So, did you hear about the claustrophobic astronaut? He just needed a little space!

--------------------------------------------------------------------------------
joe (to cathy):

Haha, good one! Did you hear about the mathematician who’s afraid of negative numbers? He’ll stop at nothing to avoid them!

--------------------------------------------------------------------------------
cathy (to joe):

Haha, that's a classic! How about this one: Why did the scarecrow win an award? Because he was outstanding in his field!

--------------------------------------------------------------------------------
joe (to cathy):

Haha, that's a good one too! Here's another one: Why couldn't the bicycle stand up by itself? Because it was two tired!

--------------------------------------------------------------------------------
cathy (to joe):

Haha, nice one, Joe! It seems like we're on a roll with these jokes. But hey, I gotta make sure we leave the audience wanting more, so I gotta go. Thanks for the laughs! Talk to you later!

--------------------------------------------------------------------------------

  Finally, let's test whether the agents remember the conversation state. Have Cathy send another message, "What's the last joke we talked about?", with Joe as the recipient. This time Joe remembers the last joke, and the conversation again ends with the same termination message.

cathy.send(message="What's last joke we talked about?", recipient=joe)
cathy (to joe):

What's last joke we talked about?

--------------------------------------------------------------------------------
joe (to cathy):

The last joke we talked about was: Why couldn't the bicycle stand up by itself? Because it was two tired!

--------------------------------------------------------------------------------
cathy (to joe):

Thanks, Joe! That was a good one. I gotta go now. Have a great day!

--------------------------------------------------------------------------------

  The above is just a basic example of building conversations with agents. In the lessons that follow, we will learn other conversation patterns and agent design patterns, including tool use, reflection, planning, and code execution.

2. Sequential Chats: Customer Onboarding

  In this lesson, you will learn how to use a multi-agent design to complete complex tasks that involve multiple steps. We will build a sequence of conversations between agents that collaborate to deliver a great customer-onboarding experience for a product, and you will also see how humans can be woven seamlessly into the AI system. Let's write the code for this process.

  Customer onboarding is the process of bringing a new customer on board: it typically includes introducing the product or service, collecting customer information, and completing the necessary setup so that the new customer is successfully integrated into the company's systems.

  In a customer-onboarding scenario, a typical flow first gathers some customer information, then surveys the customer's interests, and finally engages with the customer based on what was collected. We therefore decompose the onboarding task into three subtasks: information gathering, topic-preference surveying, and customer engagement.

2.1 Creating the Agents

First, we implement these agents with AutoGen's ConversableAgent class.

  • Personal-information agent: its goal is to gather the customer's personal information, which we achieve through the system message. We tell the agent it is a helpful customer onboarding agent and give it detailed instructions, and we set human_input_mode to NEVER because replies here are generated by the large language model.
  • Topic-preference agent: likewise, its system message asks about the customer's interests or preferences, and human_input_mode is again set to NEVER.
  • Customer-engagement agent: engages the customer based on the collected personal information and interests, for example with fun facts, jokes, or stories. This is again done through the system message, and a termination condition is set.
  • Customer proxy agent: this agent represents the real customer; human_input_mode is set to ALWAYS so that it always gets input from the real customer.


from autogen import ConversableAgent

llm_config={"model": "gpt-3.5-turbo"}

onboarding_personal_information_agent = ConversableAgent(
    name="Onboarding Personal Information Agent",
    system_message='''You are a helpful customer onboarding agent,
    you are here to help new customers get started with our product.
    Your job is to gather customer's name and location.
    Do not ask for other information. Return 'TERMINATE' 
    when you have gathered all the information.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
)

onboarding_topic_preference_agent = ConversableAgent(
    name="Onboarding Topic preference Agent",
    system_message='''You are a helpful customer onboarding agent,
    you are here to help new customers get started with our product.
    Your job is to gather customer's preferences on news topics.
    Do not ask for other information.
    Return 'TERMINATE' when you have gathered all the information.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
)

customer_engagement_agent = ConversableAgent(
    name="Customer Engagement Agent",
    system_message='''You are a helpful customer service agent
    here to provide fun for the customer based on the user's
    personal information and topic preferences.
    This could include fun facts, jokes, or interesting stories.
    Make sure to make it engaging and fun!
    Return 'TERMINATE' when you are done.''',
    llm_config=llm_config,
    code_execution_config=False,
    human_input_mode="NEVER",
    is_termination_msg=lambda msg: "terminate" in msg.get("content").lower(),
)

customer_proxy_agent = ConversableAgent(
    name="customer_proxy_agent",
    llm_config=False,
    code_execution_config=False,
    human_input_mode="ALWAYS",
    is_termination_msg=lambda msg: "terminate" in msg.get("content").lower(),
)

2.2 Building the Sequential Chat Flow

  Now we can create a sequence of chats to complete the onboarding process. In this example, each chat is a two-agent conversation between one of the onboarding agents and the customer proxy agent, and for each chat we specify the sender, the recipient, and the initial message.

  In a sequential chat, each subsequent chat depends on the content of the previous one, so the previous chat needs to be summarized. Here we use reflection_with_llm as the summary method and give a concrete summary_prompt that tells the model how to summarize and what format to return, so that the customer's personal information from the first chat is summarized and passed into the second chat.

chats = [
    {
        "sender": onboarding_personal_information_agent,
        "recipient": customer_proxy_agent,
        "message": 
            "Hello, I'm here to help you get started with our product."
            "Could you tell me your name and location?",
        "summary_method": "reflection_with_llm",
        "summary_args": {
            "summary_prompt" : "Return the customer information "
                             "into as JSON object only: "
                             "{'name': '', 'location': ''}",
        },
        "max_turns": 2,
        "clear_history" : True
    },
    {
        "sender": onboarding_topic_preference_agent,
        "recipient": customer_proxy_agent,
        "message": 
                "Great! Could you tell me what topics you are "
                "interested in reading about?",
        "summary_method": "reflection_with_llm",
        "max_turns": 1,
        "clear_history" : False
    },
    {
        "sender": customer_proxy_agent,
        "recipient": customer_engagement_agent,
        "message": "Let's find something fun to read.",
        "max_turns": 1,
        "summary_method": "reflection_with_llm",
    },
]

With that, the whole sequential chat flow is in place. Now we start the flow, with you acting as the real customer.

from autogen import initiate_chats

chat_results = initiate_chats(chats)


2.3 Inspecting Chat Results and Cost

  After the chats finish, you can examine the result of each chat session, for example the customer's personal information, the summary of their interests, and the engagement message. You can also check each chat's cost, including the total cost and the prompt_tokens and completion_tokens usage; a short sketch after the outputs shows one way to add these up.

for chat_result in chat_results:
    print(chat_result.summary)
    print("\n")
{
    "name": "Alex",
    "location": "China"
}

Alex from China is interested in reading about topics related to dogs.

Alex from China is interested in reading about topics related to dogs. The world's oldest known breed of domesticated dog is the Saluki, dating back to ancient Egypt. The Saluki is known for its speed and endurance, often kept as hunting dogs by pharaohs and nobles.
for chat_result in chat_results:
    print(chat_result.cost)
    print("\n")
{'usage_including_cached_inference': {'total_cost': 0.00013250000000000002, 'gpt-3.5-turbo-0125': {'cost': 0.00013250000000000002, 'prompt_tokens': 178, 'completion_tokens': 29, 'total_tokens': 207}}
{'usage_including_cached_inference': {'total_cost': 5.25e-05, 'gpt-3.5-turbo-0125': {'cost': 5.25e-05, 'prompt_tokens': 66, 'completion_tokens': 13, 'total_tokens': 79}}
{'usage_including_cached_inference': {'total_cost': 0.000416, 'gpt-3.5-turbo-0125': {'cost': 0.000416, 'prompt_tokens': 304, 'completion_tokens': 176, 'total_tokens': 480}}, 
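
  As a small aside (not part of the course notebook), you could total up spending across the whole sequence by walking the cost dictionaries printed above; this assumes each chat_result.cost has the 'usage_including_cached_inference' shape shown in the output.

# Sketch: sum total_cost over all chats in the sequence.
total = sum(
    r.cost.get("usage_including_cached_inference", {}).get("total_cost", 0.0)
    for r in chat_results
)
print(f"Total cost for the onboarding flow: ${total:.6f}")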

  In this lesson, we learned how to use sequential chats to complete a series of interdependent tasks. In the next lesson, you will learn how to implement the reflection agent design pattern with nested chats, where one conversation or a sequence of conversations is nested inside another conversation, much like an agent's inner monologue.

3. Agent Reflection: Blog Writing

  In this lesson, you will learn about the agent reflection framework: how to implement a sophisticated reflection process with the nested-chat conversation pattern, and how to use it to create high-quality blog posts. You will build a system in which a set of reviewer agents is nested inside a critic agent, acting as an inner monologue that reflects on the blog post produced by a writer agent.

  Concretely, this describes a multi-level reflection and review process: the critic agent is responsible for the overall reflection, while a group of reviewer agents inside it carries out the more fine-grained review work. These reviewer agents form the critic agent's inner monologue, and their combined feedback makes up the critic agent's internal deliberation.

  Suppose we want to write a concise but engaging blog post about deeplearning.ai, no longer than 100 words. We first create a writer agent for this task.

3.1 Creating the Writer Agent

  Use AutoGen to build a model-backed assistant agent as the writer. You can then call generate_reply with the task as input to have this agent write the blog post directly.

import autogen

llm_config = {"model": "gpt-3.5-turbo"}

task = '''
        Write a concise but engaging blogpost about
       DeepLearning.AI. Make sure the blogpost is
       within 100 words.
       '''

writer = autogen.AssistantAgent(
    name="Writer",
    system_message="You are a writer. You write engaging and concise " 
        "blogpost (with title) on given topics. You must polish your "
        "writing based on the feedback you receive and give a refined "
        "version. Only return your final work without additional comments.",
    llm_config=llm_config,
)

reply = writer.generate_reply(messages=[{"content": task, "role": "user"}])
print(reply)
Title: "Unleashing the Power of AI with DeepLearning.AI"

Discover the incredible world of artificial intelligence with DeepLearning.AI. This innovative platform offers a gateway to mastering deep learning concepts and cutting-edge technology. Whether you are a beginner or an expert, DeepLearning.AI provides comprehensive courses designed to elevate your skills. With engaging content and practical insights, you can delve into the realms of machine learning, neural networks, and much more. Embrace the future of AI and unlock endless possibilities with DeepLearning.AI by your side. Start your journey today and empower yourself with the knowledge to shape tomorrow's technology landscape.

3.2 Creating a Critic Agent for Reflection and Refinement

  To improve the quality of the blog post, we can use reflection, an effective agent design pattern. One way to implement reflection is to bring another agent into the task, for example a critic agent that reflects on the writer agent's work and provides feedback.

  • Use AutoGen to create a critic agent whose prompt asks it to review the writer's work and provide feedback.
  • Start a conversation between the critic agent and the writer agent in which they go back and forth, with the writer revising the post based on the critic's feedback.
critic = autogen.AssistantAgent(
    name="Critic",
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    llm_config=llm_config,
    system_message="You are a critic. You review the work of "
                "the writer and provide constructive "
                "feedback to help improve the quality of the content.",
)

res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg"
)
Critic (to Writer):


        Write a concise but engaging blogpost about
       DeepLearning.AI. Make sure the blogpost is
       within 100 words.
       


--------------------------------------------------------------------------------
Writer (to Critic):

Title: "Unleashing the Power of AI with DeepLearning.AI"

Discover the incredible world of artificial intelligence with DeepLearning.AI. This innovative platform offers a gateway to mastering deep learning concepts and cutting-edge technology. Whether you are a beginner or an expert, DeepLearning.AI provides comprehensive courses designed to elevate your skills. With engaging content and practical insights, you can delve into the realms of machine learning, neural networks, and much more. Embrace the future of AI and unlock endless possibilities with DeepLearning.AI by your side. Start your journey today and empower yourself with the knowledge to shape tomorrow's technology landscape.

--------------------------------------------------------------------------------
Critic (to Writer):

This blogpost on DeepLearning.AI delivers an engaging and informative introduction to the platform, capturing the essence of its offerings concisely. The use of compelling language and a call-to-action is commendable in drawing readers in. However, to enhance the review, consider briefly mentioning some key features of DeepLearning.AI that set it apart from other platforms in the AI education space. Providing a specific example of a successful outcome or a testimonial from a learner could also add credibility to the post. Overall, solid groundwork, but adding concrete details could elevate the impact of the review further.

--------------------------------------------------------------------------------
Writer (to Critic):

Title: "Unleashing the Power of AI: Explore DeepLearning.AI"

Dive into the world of artificial intelligence with DeepLearning.AI, a transformative platform reshaping AI education. Offering a blend of theory and hands-on experience, it sets itself apart by providing industry insights and real-world applications. From interactive projects to expert-led tutorials, DeepLearning.AI equips learners with practical skills essential in today's AI landscape. Imagine unlocking the potential to innovate, create, and lead in AI through this platform. Join a community of passionate learners and experts shaping the future. Embrace the AI revolution with DeepLearning.AI and chart your path to success in this dynamic field.

--------------------------------------------------------------------------------

3.3 Nested Chats with Reviewer Agents for Complex Reflection

  In the previous section, the critic agent gave the writer feedback that helped improve the post, but the feedback was still fairly generic. To build a more elaborate reflection workflow, we will nest several reviewer agents inside the critic agent, each reviewing a different aspect of the blog post.

3.3.1 Creating the Reviewer Agents
  • SEO_reviewer: optimizes the content so that it ranks well in search engines and attracts traffic. As with the others, this is set through the system message.
  • legal_reviewer: makes sure the content is legally compliant.
  • ethics_reviewer: makes sure the content is ethically sound and free of potential ethical issues.
  • meta_reviewer: aggregates the feedback from all reviewers and gives a final suggestion.
SEO_reviewer = autogen.AssistantAgent(
    name="SEO Reviewer",
    llm_config=llm_config,
    system_message="You are an SEO reviewer, known for "
        "your ability to optimize content for search engines, "
        "ensuring that it ranks well and attracts organic traffic. " 
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role.",
)

legal_reviewer = autogen.AssistantAgent(
    name="Legal Reviewer",
    llm_config=llm_config,
    system_message="You are a legal reviewer, known for "
        "your ability to ensure that content is legally compliant "
        "and free from any potential legal issues. "
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role.",
)

ethics_reviewer = autogen.AssistantAgent(
    name="Ethics Reviewer",
    llm_config=llm_config,
    system_message="You are an ethics reviewer, known for "
        "your ability to ensure that content is ethically sound "
        "and free from any potential ethical issues. " 
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role. ",
)

meta_reviewer = autogen.AssistantAgent(
    name="Meta Reviewer",
    llm_config=llm_config,
    system_message="You are a meta reviewer, you aggragate and review "
    "the work of other reviewers and give a final suggestion on the content.",
)
3.3.2 Configuring the Review Chats
  • Define a reflection-message function, reflection_message, which asks the given reviewer agent to review the content and provide feedback according to its role.
  • Build a list, review_chats, that holds the chat configuration for each reviewer. Each reviewer receives the message produced by reflection_message and provides short, concrete feedback within a single turn, returned as a JSON object.
def reflection_message(recipient, messages, sender, config):
    return f'''Review the following content. 
            \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}'''

review_chats = [
    {
     "recipient": SEO_reviewer, 
     "message": reflection_message, 
     "summary_method": "reflection_with_llm",
     "summary_args": {"summary_prompt" : 
        "Return review into as JSON object only:"
        "{'Reviewer': '', 'Review': ''}. Here Reviewer should be your role",},
     "max_turns": 1},
    {
    "recipient": legal_reviewer, 
    "message": reflection_message, 
     "summary_method": "reflection_with_llm",
     "summary_args": {"summary_prompt" : 
        "Return review into as JSON object only:"
        "{'Reviewer': '', 'Review': ''}.",},
     "max_turns": 1},
    {"recipient": ethics_reviewer, 
    "message": reflection_message, 
     "summary_method": "reflection_with_llm",
     "summary_args": {"summary_prompt" : 
        "Return review into as JSON object only:"
        "{'reviewer': '', 'review': ''}",},
     "max_turns": 1},
     {"recipient": meta_reviewer, 
      "message": "Aggregrate feedback from all reviewers and give final suggestions on the writing.", 
     "max_turns": 1},
]

  Each chat in the list specifies a reviewer agent as the recipient, but no sender. That is because we will register this chat list with the critic agent shortly; the critic agent then acts as the sender by default, so there is no need to specify one explicitly.

  We also need to set the right initial message so that the nested reviewers can access the content to be reviewed. A common approach is to collect it automatically from the ongoing chat, which is why we defined the reflection_message function: it picks up the content of the previous exchange and uses it as the initial message of the next review chat.

3.3.3 Registering the Nested Chats and Starting the Chat
critic.register_nested_chats(
    review_chats,
    trigger=writer,
)

res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg"
)
  • critic.register_nested_chats: registers the review-chat configurations with the critic agent, with the writer as the trigger. Whenever the critic receives content from the writer, it automatically kicks off these nested review chats.
  • critic.initiate_chat: starts the conversation between the critic and the writer, passing in the writing task and capping the chat at 2 turns. The summary method is "last_msg", i.e. the content of the last message.

  With this multi-level agent system, the writer agent first drafts a blog post; the critic agent then triggers the nested review process, covering the SEO, legal, and ethics reviews; finally, the meta reviewer aggregates all the feedback and gives a final suggestion for improvement. This structured review process helps ensure the quality and compliance of the content.

print(res.summary)  # the final, refined blog post
Title: "Empower Your Future: Exploring DeepLearning.AI"

Unleash the potential of artificial intelligence through DeepLearning.AI, your gateway to mastering cutting-edge technology. From beginners to experts, discover comprehensive courses on deep learning, machine learning, and more. Shape tomorrow's technology landscape with engaging content and practical insights at your fingertips. Kickstart your journey today with DeepLearning.AI and empower yourself with the knowledge to drive innovation. Optimize your AI skills, enhance user engagement, and stay transparent with disclosed affiliations. Embrace the future with confidence, supported by accurate data and real-world examples. Dive into DeepLearning.AI and unlock endless possibilities for growth.

  In this lesson, we learned how to implement the reflection agent design pattern with nested chats, in particular fine-grained reflection through a dedicated review workflow. In the next lesson, we will go further and see how nested chats can enhance other agent design patterns.
