J-LangChain - Streaming Execution of Complex Chains

2025/3/1 0:07:58

Series Index
Getting Started with J-LangChain

Introduction

j-langchain is a Java port of the LangChain development framework with flexible orchestration and streaming execution, designed to simplify and accelerate building large-model applications on the Java platform. It provides a set of practical tools and classes that let developers build LangChain-style Java applications more easily.

github: https://github.com/flower-trees/j-langchain

Examples of Streaming Execution for Complex Chains

1. Branch Routing

Based on the chain input parameter `vendor`, decide whether to use llama3 or gpt-4, or to reply "sorry, I don't know how to do that".

LangChain Implementation

from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnableLambda
from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
def route(info):
    if "ollama" in info["vendor"]:
        return prompt | OllamaLLM(model="llama3:8b")
    elif "chatgpt" in info["vendor"]:
        return prompt | ChatOpenAI(model="gpt-4")
    else:
        return prompt | RunnableLambda(lambda x: "sorry, I don't know how to do that")

chain = RunnableLambda(route) | StrOutputParser()

result = chain.stream({"topic": "bears", "vendor": "ollama"})
for chunk in result:
    print(chunk, end="", flush=True)
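The routing mechanic can be sketched without any model backend: a plain function inspects the input and returns whichever sub-chain should run. The sketch below is stdlib-only and illustrative; the fake chain functions merely stand in for the real `prompt | model` pipelines.

```python
# Minimal sketch of RunnableLambda-style routing; the fake chains below
# stand in for the real prompt | model pipelines.
def ollama_chain(inputs):
    return f"[llama3] joke about {inputs['topic']}"

def openai_chain(inputs):
    return f"[gpt-4] joke about {inputs['topic']}"

def fallback_chain(inputs):
    return "sorry, I don't know how to do that"

def route(inputs):
    # Pick a sub-chain based on the `vendor` field, mirroring the example above.
    if "ollama" in inputs["vendor"]:
        return ollama_chain
    elif "chatgpt" in inputs["vendor"]:
        return openai_chain
    return fallback_chain

def run(inputs):
    # The routed chain is resolved at call time, then invoked with the same input.
    return route(inputs)(inputs)

print(run({"topic": "bears", "vendor": "ollama"}))   # [llama3] joke about bears
print(run({"topic": "bears", "vendor": "unknown"}))  # sorry, I don't know how to do that
```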

J-LangChain Implementation

The flow component's native `.next()` implements branch execution of sub-chains:

FlowInstance chain = chainActor.builder()
            ......
            .next(
                Info.c("vendor == 'ollama'", chatOllama),
                Info.c("vendor == 'chatgpt'", chatOpenAI),
                Info.c(input -> "sorry, I don't know how to do that")
            )
            ......
public void SwitchDemo() {

	BaseRunnable<StringPromptValue, ?> prompt = PromptTemplate.fromTemplate("tell me a joke about ${topic}");
    ChatOllama chatOllama = ChatOllama.builder().model("llama3:8b").build();
    ChatOpenAI chatOpenAI = ChatOpenAI.builder().model("gpt-4").build();

    FlowInstance chain = chainActor.builder()
            .next(prompt)
            .next(
                Info.c("vendor == 'ollama'", chatOllama),
                Info.c("vendor == 'chatgpt'", chatOpenAI),
                Info.c(input -> "sorry, I don't know how to do that")
            )
            .next(new StrOutputParser()).build();

    ChatGenerationChunk chunk = chainActor.stream(chain, Map.of("topic", "bears", "vendor", "ollama"));

    StringBuilder sb = new StringBuilder();
    while (chunk.getIterator().hasNext()) {
        sb.append(chunk.getIterator().next());
        System.out.println(sb);
    }
}
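How the flow framework matches a condition string such as `vendor == 'ollama'` against the input is internal to j-langchain; purely to illustrate the first-match semantics of the `Info.c` branch list, here is a rough stdlib-only Python sketch (all names hypothetical):

```python
# Rough, hypothetical sketch of condition-based branch selection in the
# spirit of Info.c. How j-langchain actually evaluates "vendor == 'ollama'"
# is internal to the flow framework; here we simply eval the expression
# against the input map, and a branch with no condition acts as the default.
def select_branch(branches, inputs):
    """branches: list of (condition_or_None, handler); first match wins."""
    for condition, handler in branches:
        if condition is None or eval(condition, {}, dict(inputs)):
            return handler
    raise ValueError("no branch matched")

branches = [
    ("vendor == 'ollama'", lambda x: "llama3:8b answers"),
    ("vendor == 'chatgpt'", lambda x: "gpt-4 answers"),
    (None, lambda x: "sorry, I don't know how to do that"),  # default branch
]

print(select_branch(branches, {"vendor": "ollama"})({}))  # llama3:8b answers
```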

2. Composition and Nesting

The main chain calls a sub-chain to generate a joke, then evaluates whether the joke is funny.

LangChain Implementation

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = OllamaLLM(model="llama3:8b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.stream({"topic": "bears"})
for chunk in result:
    print(chunk, end="", flush=True)
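The composition boils down to re-keying: the inner chain's output becomes the `joke` variable of the analysis prompt. A stdlib-only sketch, with fake functions standing in for the real `prompt | model | parser` pipelines:

```python
# Stdlib-only sketch of the composition: the joke chain's output is bound
# to the "joke" key, then fed into the analysis step. The fake chains below
# stand in for the real prompt | model | parser pipelines.
def joke_chain(inputs):
    return f"a joke about {inputs['topic']}"

def analysis_chain(inputs):
    return f"is this a funny joke? {inputs['joke']}"

def composed_chain(inputs):
    # {"joke": chain} in the LangChain example corresponds to this re-keying step.
    return analysis_chain({"joke": joke_chain(inputs)})

print(composed_chain({"topic": "bears"}))
# is this a funny joke? a joke about bears
```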

J-LangChain Implementation

The flow component natively supports nested execution.

💡 Notes:

  • `.next(new InvokeChain(chain))` is required here to invoke the nested chain in a single call and return its result.
public void ComposeDemo() throws TimeoutException {

  	ChatOllama llm = ChatOllama.builder().model("llama3:8b").build();
	StrOutputParser parser = new StrOutputParser();
	
	BaseRunnable<StringPromptValue, ?> prompt = PromptTemplate.fromTemplate("tell me a joke about ${topic}");
	FlowInstance chain = chainActor.builder().next(prompt).next(llm).next(parser).build();
	
	BaseRunnable<StringPromptValue, ?> analysisPrompt = PromptTemplate.fromTemplate("is this a funny joke? ${joke}");
	
	FlowInstance analysisChain = chainActor.builder()
	        .next(new InvokeChain(chain)) // invoke the nested chain in a single call
	        .next(input -> { System.out.printf("joke content: '%s' \n\n", input); return input; })
	        .next(input -> Map.of("joke", ((Generation)input).getText()))
	        .next(analysisPrompt)
	        .next(llm)
	        .next(parser).build();
	
	ChatGenerationChunk chunk = chainActor.stream(analysisChain, Map.of("topic", "bears"));
	StringBuilder sb = new StringBuilder();
	while (chunk.getIterator().hasNext()) {
	    sb.append(chunk.getIterator().next());
	    System.out.println(sb);
	}
}

3. Parallel Execution

The main chain executes joke_chain and poem_chain in parallel and interleaves their streamed output.

LangChain Implementation

from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.stream({"topic": "bear"})

joke = "joke: "
poem = "poem: "
for chunk in result:
    if 'joke' in chunk:  
        joke += chunk['joke']
        print(joke, flush=True)
    if 'poem' in chunk:
        poem += chunk['poem']
        print(poem, flush=True)

Output:

joke: Why
joke: Why did
joke: Why did the
poem: Bear
joke: Why did the bear
poem: Bear stands
joke: Why did the bear break
poem: Bear stands tall
joke: Why did the bear break up
poem: Bear stands tall,
joke: Why did the bear break up with
poem: Bear stands tall, wise
joke: Why did the bear break up with his
poem: Bear stands tall, wise and
......
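The interleaved output above comes from two streams racing onto one consumer. A stdlib-only sketch of that mechanic, with fake token lists standing in for the real model streams:

```python
import queue
import threading

# Stdlib-only sketch of how two streaming chains interleave onto one
# consumer: each fake "chain" pushes its tokens onto a shared queue tagged
# with its name. The token lists stand in for the real model streams.
def stream_tokens(name, tokens, out):
    for tok in tokens:
        out.put((name, tok))

def parallel_stream(chains):
    out = queue.Queue()
    threads = [threading.Thread(target=stream_tokens, args=(name, toks, out))
               for name, toks in chains.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Drain the queue in whatever order the producers interleaved in;
    # each producer's own tokens stay in order.
    chunks = []
    while not out.empty():
        chunks.append(out.get())
    return chunks

chunks = parallel_stream({
    "joke": ["Why", " did", " the", " bear"],
    "poem": ["Bear", " stands", " tall"],
})
for name, tok in chunks:
    print(name, tok)
```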

J-LangChain Implementation

The flow component's native `.concurrent()` implements concurrent execution:

FlowInstance chain = chainActor.builder()
		.concurrent(
            (IResult<Map<String, AIMessageChunk>>) (iContextBus, isTimeout) ->
                    Map.of("joke", iContextBus.getResult(jokeChain.getFlowId()), "poem", iContextBus.getResult(poemChain.getFlowId())),
            jokeChain, poemChain
    	).build();

💡 Notes:

  • ChatOllama is not thread-safe; create a new instance for each concurrent branch.
public void ParallelDemo() {

    BaseRunnable<StringPromptValue, ?> joke = PromptTemplate.fromTemplate("tell me a joke about ${topic}");
    BaseRunnable<StringPromptValue, ?> poem = PromptTemplate.fromTemplate("write a 2-line poem about ${topic}");

    FlowInstance jokeChain = chainActor.builder().next(joke).next(ChatOllama.builder().model("llama3:8b").build()).build();
    FlowInstance poemChain = chainActor.builder().next(poem).next(ChatOllama.builder().model("llama3:8b").build()).build();

    FlowInstance chain = chainActor.builder().concurrent(
            (IResult<Map<String, AIMessageChunk>>) (iContextBus, isTimeout) ->
                    Map.of("joke", iContextBus.getResult(jokeChain.getFlowId()), "poem", iContextBus.getResult(poemChain.getFlowId())),
            jokeChain, poemChain
    ).build();

    Map<String, AIMessageChunk> result = chainActor.stream(chain, Map.of("topic", "bears"));

    CompletableFuture.runAsync(() -> {
        AIMessageChunk jokeChunk = result.get("joke");
        StringBuilder jokeSb = new StringBuilder().append("joke: ");
        while (true) {
            try {
                if (!jokeChunk.getIterator().hasNext()) break;
            } catch (TimeoutException e) {
                throw new RuntimeException(e);
            }
            jokeSb.append(jokeChunk.getIterator().next().getContent());
            System.out.println(jokeSb);
        }
    });

    CompletableFuture.runAsync(() -> {
        AIMessageChunk poemChunk = result.get("poem");
        StringBuilder poemSb = new StringBuilder().append("poem: ");
        while (true) {
            try {
                if (!poemChunk.getIterator().hasNext()) break;
            } catch (TimeoutException e) {
                throw new RuntimeException(e);
            }
            poemSb.append(poemChunk.getIterator().next().getContent());
            System.out.println(poemSb);
        }
    }).join();
}

4. Dynamic Routing

Chain 1 summarizes the user question into a `topic`; based on that topic, the main chain dynamically routes to langchain_chain, anthropic_chain, or general_chain.

LangChain Implementation
Dynamic routing via RunnableLambda:

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="llama3:8b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="llama3:8b")
anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="llama3:8b")
general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="llama3:8b")

def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain

full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.stream({"question": "how do I use LangChain?"})
for chunk in result:
    print(chunk, end="", flush=True)
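The two-stage structure (classify first, then route) can be sketched without models; the keyword-based `classify` below is only a stand-in for the classification chain, and the fake expert chains stand in for the real `prompt | model` pipelines:

```python
# Sketch of the two-stage dynamic route: a fake classifier produces `topic`,
# then the router picks the matching expert chain.
def classify(question):
    # Stand-in for the real classification chain (prompt | llm | parser).
    q = question.lower()
    if "langchain" in q:
        return "langchain"
    if "anthropic" in q:
        return "anthropic"
    return "other"

def langchain_chain(inputs):
    return f"As Harrison Chase told me: answer to {inputs['question']!r}"

def anthropic_chain(inputs):
    return f"As Dario Amodei told me: answer to {inputs['question']!r}"

def general_chain(inputs):
    return f"answer to {inputs['question']!r}"

def full_chain(inputs):
    # {"topic": chain, "question": ...} | RunnableLambda(route) maps to this.
    info = {"topic": classify(inputs["question"]), "question": inputs["question"]}
    if "anthropic" in info["topic"]:
        return anthropic_chain(info)
    if "langchain" in info["topic"]:
        return langchain_chain(info)
    return general_chain(info)

print(full_chain({"question": "how do I use LangChain?"}))
```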

J-LangChain Implementation
Dynamic routing via the flow component's native `.next()`:

FlowInstance fullChain = chainActor.builder()
		......
		.next(
		        Info.c("topic == 'anthropic'", anthropicChain),
		        Info.c("topic == 'langchain'", langchainChain),
		        Info.c(generalChain)
		)
		......
public void RouteDemo() throws TimeoutException {

    ChatOllama llm = ChatOllama.builder().model("llama3:8b").build();

    BaseRunnable<StringPromptValue, Object> prompt = PromptTemplate.fromTemplate(
            """
            Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.
    
            Do not respond with more than one word.
    
            <question>
            ${question}
            </question>
    
            Classification:
            """
    );

    FlowInstance chain = chainActor.builder().next(prompt).next(llm).next(new StrOutputParser()).build();

    FlowInstance langchainChain = chainActor.builder().next(PromptTemplate.fromTemplate(
            """
            You are an expert in langchain. \
            Always answer questions starting with "As Harrison Chase told me". \
            Respond to the following question:
            
            Question: ${question}
            Answer:
            """
    )).next(ChatOllama.builder().model("llama3:8b").build()).build();

    FlowInstance anthropicChain = chainActor.builder().next(PromptTemplate.fromTemplate(
            """
            You are an expert in anthropic. \
            Always answer questions starting with "As Dario Amodei told me". \
            Respond to the following question:
        
            Question: ${question}
            Answer:
            """
    )).next(ChatOllama.builder().model("llama3:8b").build()).build();

    FlowInstance generalChain = chainActor.builder().next(PromptTemplate.fromTemplate(
            """
            Respond to the following question:
        
            Question: ${question}
            Answer:
            """
    )).next(ChatOllama.builder().model("llama3:8b").build()).build();

    FlowInstance fullChain = chainActor.builder()
            .next(new InvokeChain(chain)) // invoke the nested classification chain
            .next(input -> { System.out.printf("topic: '%s' \n\n", input); return input; })
            .next(input -> Map.of("topic", input, "question", ((Map<?, ?>)ContextBus.get().getFlowParam()).get("question")))
            .next(input -> { System.out.printf("route input: '%s' \n\n", input); return input; })
            .next(
                    Info.c("topic == 'anthropic'", anthropicChain),
                    Info.c("topic == 'langchain'", langchainChain),
                    Info.c(generalChain)
            ).build();

    AIMessageChunk chunk = chainActor.stream(fullChain, Map.of("question", "how do I use Anthropic?"));
    StringBuilder sb = new StringBuilder();
    while (chunk.getIterator().hasNext()) {
        sb.append(chunk.getIterator().next().getContent());
        System.out.println(sb);
    }
}

5. Dynamic Construction

The main chain calls sub-chain 1 to obtain the processed user question. Sub-chain 1 checks whether the input carries chat history: if so, it calls sub-chain 2 to rewrite the question against that history; otherwise it passes the question through unchanged. The main chain then prepends the system content to the final question and produces the answer.

LangChain Implementation

from operator import itemgetter

from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="llama3:8b")

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()

@chain
def contextualize_if_needed(input_: dict):
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")

@chain
def fake_retriever(input_: dict):
    return "egypt's population in 2024 is about 111 million"

qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.stream({
    "question": "what about egypt",
    "chat_history": [
        ("human", "what's the population of indonesia"),
        ("ai", "about 276 million"),
    ],
})
for chunk in result:
    print(chunk, end="", flush=True)
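`RunnablePassthrough.assign` keeps the original input and merges in newly computed keys. A stdlib-only sketch of that mechanic (the history-rewrite step and the retriever are faked with strings, since there is no model here):

```python
# Stdlib-only sketch of the RunnablePassthrough.assign mechanic: each step
# merges a new key into the running input dict, then the final prompt
# consumes both keys. Rewrite step and retriever are fakes.
def contextualize_if_needed(inputs):
    if inputs.get("chat_history"):
        # Stand-in for the rewrite chain: fold history into the question.
        return f"standalone form of {inputs['question']!r}"
    return inputs["question"]

def fake_retriever(inputs):
    return "egypt's population in 2024 is about 111 million"

def assign(inputs, **steps):
    # Mirrors RunnablePassthrough.assign: keep the input, add computed keys.
    out = dict(inputs)
    for key, fn in steps.items():
        out[key] = fn(out)
    return out

def full_chain(inputs):
    enriched = assign(assign(inputs, question=contextualize_if_needed),
                      context=fake_retriever)
    return (f"Answer the user question given the following context:\n\n"
            f"{enriched['context']}.\nQuestion: {enriched['question']}")

print(full_chain({"question": "what about egypt",
                  "chat_history": [("human", "what's the population of indonesia"),
                                   ("ai", "about 276 million")]}))
```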

J-LangChain Implementation
The flow component's native `.all()` runs the sub-chain to obtain the rewritten question and, together with the system content, produces the answer:

FlowInstance fullChain = chainActor.builder()
           .all(
                   (iContextBus, isTimeout) -> Map.of(
                           "question", iContextBus.getResult(contextualizeIfNeeded.getFlowId()).toString(),
                           "context", iContextBus.getResult("fakeRetriever")),
                   Info.c(contextualizeIfNeeded),
                   Info.c(input -> "egypt's population in 2024 is about 111 million").cAlias("fakeRetriever")
           )
           ......
public void DynamicDemo() throws TimeoutException {

    ChatOllama llm = ChatOllama.builder().model("llama3:8b").build();

    String contextualizeInstructions = """
            Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text).""";

    BaseRunnable<ChatPromptValue, Object> contextualizePrompt = ChatPromptTemplate.fromMessages(
            List.of(
                    Pair.of("system", contextualizeInstructions),
                    Pair.of("placeholder", "${chatHistory}"),
                    Pair.of("human", "${question}")
            )
    );

    FlowInstance contextualizeQuestion = chainActor.builder()
            .next(contextualizePrompt)
            .next(llm)
            .next(new StrOutputParser())
            .build();

    FlowInstance contextualizeIfNeeded = chainActor.builder().next(
            Info.c("chatHistory != null", new InvokeChain(contextualizeQuestion)),
            Info.c(input -> Map.of("question", ((Map<String, String>)input).get("question")))
    ).build();

    String qaInstructions =
            """
            Answer the user question given the following context:\n\n${context}.
            """;
    BaseRunnable<ChatPromptValue, Object>  qaPrompt = ChatPromptTemplate.fromMessages(
            List.of(
                    Pair.of("system", qaInstructions),
                    Pair.of("human", "${question}")
            )
    );

    FlowInstance fullChain = chainActor.builder()
            .all(
                    (iContextBus, isTimeout) -> Map.of(
                            "question", iContextBus.getResult(contextualizeIfNeeded.getFlowId()).toString(),
                            "context", iContextBus.getResult("fakeRetriever")),
                    Info.c(contextualizeIfNeeded),
                    Info.c(input -> "egypt's population in 2024 is about 111 million").cAlias("fakeRetriever")
            )
            .next(qaPrompt)
            .next(input -> { System.out.printf("prompt: '%s' \n\n", JsonUtil.toJson(input)); return input; })
            .next(llm)
            .next(new StrOutputParser())
            .build();

    ChatGenerationChunk chunk = chainActor.stream(fullChain,
            Map.of(
                    "question", "what about egypt",
                    "chatHistory",
                            List.of(
                                    Pair.of("human", "what's the population of indonesia"),
                                    Pair.of("ai", "about 276 million")
                            )
            )
    );
    StringBuilder sb = new StringBuilder();
    while (chunk.getIterator().hasNext()) {
        sb.append(chunk.getIterator().next().getText());
        System.out.println(sb);
    }
}
