LangChain Tutorial - Expression Language (LCEL) - Building Intelligent Chains


LangChain provides a flexible and powerful expression language, the LangChain Expression Language (LCEL), for building complex logic chains. By composing different runnable objects, LCEL supports sequential chains, nested chains, parallel chains, routing, and dynamic construction, covering a wide range of use cases. This article walks through each of these features and how to implement them.

Sequential Chains

The core of LCEL is composing runnables in sequence, where the output of one runnable is automatically passed as input to the next. A sequential chain can be built with the pipe operator (|) or the explicit .pipe() method.

Here is a simple example:

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"topic": "bears"})
print(result)

Output:

Here's a bear joke for you:

Why did the bear dissolve in water?
Because it was a polar bear!

In this example, the prompt template formats the input into the chat model's input format, the model generates a joke, and the output parser converts the result into a string.
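
The same chain can also be built with the explicit .pipe() method instead of the | operator. A minimal sketch, reusing the prompt, model, and parser defined above:

# Equivalent chain built with .pipe() instead of |
chain_with_pipe = prompt.pipe(model).pipe(StrOutputParser())

result = chain_with_pipe.invoke({"topic": "bears"})
print(result)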

Nested Chains

Nested chains let us compose multiple chains into more complex logic. For example, the joke-generating chain can be combined with a second chain that analyzes how funny the joke is.

analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.invoke({"topic": "bears"})
print(result)

Output:

Haha, that's a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing.
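
A note on the dict literal: when {"joke": chain} is composed with another runnable, it is automatically coerced into a RunnableParallel. The explicit form below is equivalent (a sketch, reusing the objects defined above):

from langchain_core.runnables import RunnableParallel

# Explicit equivalent of the {"joke": chain} coercion
composed_chain_explicit = (
    RunnableParallel(joke=chain) | analysis_prompt | model | StrOutputParser()
)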

Parallel Chains

RunnableParallel runs multiple chains in parallel and combines each chain's result into a single dictionary. This is useful when several tasks need to be processed at the same time.

from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.invoke({"topic": "bear"})
print(result)

Output:

{
 'joke': "Why don't bears like fast food? Because they can't catch it!",
 'poem': "In the quiet of the forest, the bear roams free\nMajestic and wild, a sight to see."
}
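
RunnableParallel can equivalently be constructed from a single dict instead of keyword arguments:

from langchain_core.runnables import RunnableParallel

# Equivalent dict-based construction
parallel_chain = RunnableParallel({"joke": joke_chain, "poem": poem_chain})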

Routing

Routing dynamically selects which sub-chain to execute based on the input. LCEL provides two ways to implement routing:

Using a custom function

Dynamic routing via RunnableLambda:

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="qwen2.5:0.5b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain

full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)
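
The dict on the left of the pipe is itself coerced into a RunnableParallel, so route receives both the classification produced by chain and the original question. A quick way to inspect that intermediate dict (a sketch; the printed classification depends on the model):

from langchain_core.runnables import RunnableParallel

# Inspect the dict that route receives
inputs = RunnableParallel(topic=chain, question=lambda x: x["question"])
print(inputs.invoke({"question": "how do I use LangChain?"}))
# e.g. {'topic': 'LangChain', 'question': 'how do I use LangChain?'}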

Using RunnableBranch

RunnableBranch selects a branch via condition matching; the (condition, runnable) pairs are checked in order, and the final argument serves as the default:

from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)

Dynamic Construction

Dynamic construction generates parts of a chain at runtime based on the input. A function wrapped with the @chain decorator (or RunnableLambda) can return a new Runnable, which is then invoked with the same input.

from operator import itemgetter

from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="qwen2.5:0.5b")

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()

@chain
def contextualize_if_needed(input_: dict):
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")

@chain
def fake_retriever(input_: dict):
    return "egypt's population in 2024 is about 111 million"

qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.invoke({
    "question": "what about egypt",
    "chat_history": [
        ("human", "what's the population of indonesia"),
        ("ai", "about 276 million"),
    ],
})
print(result)

Output:

According to the context provided, Egypt's population in 2024 is estimated to be about 111 million.
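
When the decorated function returns a runnable, that runnable is invoked with the same input. Without chat history, contextualize_if_needed takes the passthrough branch and returns the question unchanged, never calling the LLM (a sketch, reusing the objects above):

# No chat_history: the passthrough branch extracts the question as-is
print(contextualize_if_needed.invoke({"question": "what about egypt"}))
# -> what about egypt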

Complete Code Example

from operator import itemgetter

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

print("\n-----------------------------------\n")

# Simple demo
model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"topic": "bears"})
print(result)

print("\n-----------------------------------\n")

# Compose demo
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.invoke({"topic": "bears"})
print(result)

print("\n-----------------------------------\n")

# Parallel demo
from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.invoke({"topic": "bear"})
print(result)

print("\n-----------------------------------\n")

# Route demo
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="qwen2.5:0.5b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain

full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)

print("\n-----------------------------------\n")

# Branch demo
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)

print("\n-----------------------------------\n")

# Dynamic demo
from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="qwen2.5:0.5b")

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()

@chain
def contextualize_if_needed(input_: dict):
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")

@chain
def fake_retriever(input_: dict):
    return "egypt's population in 2024 is about 111 million"

qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.invoke({
    "question": "what about egypt",
    "chat_history": [
        ("human", "what's the population of indonesia"),
        ("ai", "about 276 million"),
    ],
})
print(result)

print("\n-----------------------------------\n")

J-LangChain implementation of the above examples

J-LangChain - Building Intelligent Chains

Summary

With sequential chains, nested chains, parallel chains, routing, and dynamic construction, LCEL gives developers a powerful toolkit for building complex language tasks. Whether the need is a simple linear flow or complex dynamic decision-making, LCEL handles it efficiently. Used well, these features let developers quickly assemble efficient, flexible chains to support applications across a wide range of scenarios.
