LangChain Tutorial - 2. Prompt


Preface

The code for this tutorial series: https://github.com/shar-pen/Langchain-MiniTutorial

I mainly followed the official LangChain tutorials and selectively recorded what I learned.

Here is the tutorial index:

  • 1. Getting started with LangChain
  • 2. Prompt
  • 3. OutputParser / output parsing
  • 4. Model / deploying a model with vLLM and calling it from LangChain
  • 5. DocumentLoader / various document loaders
  • 6. TextSplitter / document splitting
  • 7. Embedding / text vectorization
  • 8. VectorStore / vector database storage and retrieval
  • 9. Retriever / retrievers
  • 10. Reranker / document reranking
  • 11. RAG pipeline / multi-turn conversational RAG
  • 12. Agent / tool definition / agents calling tools / agentic RAG

Prompt Template

Prompt templates are essential for generating dynamic, flexible prompts and are useful in many scenarios, such as conversation history, structured output, and domain-specific queries.

In this tutorial we explore how to create PromptTemplate objects, apply partial variables, manage templates through YAML files, and use advanced tools such as ChatPromptTemplate and MessagesPlaceholder to extend functionality.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
	base_url='http://localhost:5551/v1',
	api_key='EMPTY',
	model_name='Qwen2.5-7B-Instruct',
	temperature=0.2,
)

Creating a PromptTemplate Object

There are two ways to create a PromptTemplate object:

  • 1. Use the from_template() method.
  • 2. Instantiate PromptTemplate directly, supplying the template and its variables together.

Method 1: Using the from_template() method

  • Define the template with {variable} syntax, where variable marks a replaceable value.
from langchain_core.prompts import PromptTemplate

# The text inside {} is a variable
template = "What is the capital of {country}?"

# Create the template with the `from_template` method
prompt = PromptTemplate.from_template(template)
prompt
PromptTemplate(input_variables=['country'], input_types={}, partial_variables={}, template='What is the capital of {country}?')

The class has already parsed out the country variable; assign a value to country to complete the prompt.

# Fill in the variable, similar to str.format
prompt.format(country="United States of America")
'What is the capital of United States of America?'

A chain simplifies the flow further:

template = "What is the capital of {country}?"
prompt = PromptTemplate.from_template(template)
chain = prompt | llm
chain.invoke("United States of America").content
'The capital of the United States of America is Washington, D.C.'

Method 2: Instantiating PromptTemplate directly

  • Explicitly specify input_variables for extra validation.
  • Otherwise, a mismatch between input_variables and the variables in the template string may raise an exception at instantiation time.
from langchain_core.prompts import PromptTemplate
# Define template
template = "What is the capital of {country}?"

# Create a prompt template with `PromptTemplate` object
prompt = PromptTemplate(
    template=template,
    input_variables=["country"],
)
prompt
PromptTemplate(input_variables=['country'], input_types={}, partial_variables={}, template='What is the capital of {country}?')

Partial variables

Partial variables are input variables that can be temporarily pinned to a value: they serve as defaults used whenever the corresponding input is missing. With partial_variables you can also partially apply functions, which is especially useful when common values need to be shared across prompts.

Common examples:

  • Date or time is the typical use case.

For example, suppose you want the current date to appear in a prompt:

  • Hard-coding the date, or passing a date variable manually every time, is inflexible.
  • A better approach is to partially apply a function that returns the current date, so the date is filled in dynamically and the prompt stays adaptable.
from langchain_core.prompts import PromptTemplate
# Define template
template = "What are the capitals of {country1} and {country2}, respectively?"

# Create a prompt template with `PromptTemplate` object
prompt = PromptTemplate(
    template=template,
    input_variables=["country1"],
    partial_variables={
        "country2": "United States of America"  # Pass `partial_variables` in dictionary form
    },
)
prompt
PromptTemplate(input_variables=['country1'], input_types={}, partial_variables={'country2': 'United States of America'}, template='What are the capitals of {country1} and {country2}, respectively?')
prompt.format(country1="South Korea")
'What are the capitals of South Korea and United States of America, respectively?'

Modify or add partial variables with the partial() method, or by assigning to PromptTemplate.partial_variables directly:

  • prompt_partial = prompt.partial(country2="India") creates a new instance while leaving the original intact
  • prompt.partial_variables = {'country2': 'china'} modifies the original instance in place
prompt_partial = prompt.partial(country2="India")
prompt_partial.format(country1="South Korea")
'What are the capitals of South Korea and India, respectively?'
prompt.partial_variables = {'country2':'china'}
prompt.format(country1="South Korea")
'What are the capitals of South Korea and china, respectively?'

A partial variable can be overridden with a new value at format time; the override does not change the default used when the value is missing.

print(prompt_partial.format(country1="South Korea", country2="Canada"))
print(prompt_partial.format(country1="South Korea"))
What are the capitals of South Korea and Canada, respectively?
What are the capitals of South Korea and India, respectively?

A partial variable can also be a function, so no value has to be set manually:

from datetime import datetime

def get_today():
    return datetime.now().strftime("%B %d")

prompt = PromptTemplate(
    template="Today's date is {today}. Please list {n} celebrities whose birthday is today. Please specify their date of birth.",
    input_variables=["n"],
    partial_variables={
        "today": get_today  # Pass `partial_variables` in dictionary form
    },
)

prompt.format(n=3)
"Today's date is January 30. Please list 3 celebrities whose birthday is today. Please specify their date of birth."

Loading a Prompt Template from a YAML File

You can store prompt templates in separate YAML files and load and manage them with load_prompt.

Here is an example YAML file:


_type: "prompt"
template: "What is the color of {fruit}?"
input_variables: ["fruit"]

from langchain_core.prompts import load_prompt

prompt = load_prompt("prompts/fruit_color.yaml", encoding="utf-8")
prompt
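
The YAML file can also be produced from code: prompt templates expose a save() method that serializes to YAML or JSON based on the file suffix. A minimal round-trip sketch (it assumes a writable prompts/ directory, which it creates if missing):

import os
from langchain_core.prompts import PromptTemplate, load_prompt

os.makedirs("prompts", exist_ok=True)

# Serialize a template; the format is inferred from the file suffix
prompt = PromptTemplate.from_template("What is the color of {fruit}?")
prompt.save("prompts/fruit_color.yaml")

# Load it back and use it as usual
loaded = load_prompt("prompts/fruit_color.yaml", encoding="utf-8")
print(loaded.format(fruit="an apple"))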

ChatPromptTemplate

ChatPromptTemplate can incorporate conversation history into a prompt to provide context. Messages are organized as (role, message) tuples and stored in a list.

Roles:

  • "system": system message, typically global instructions or settings that shape the AI's behavior.
  • "human": a message entered by the user.
  • "ai": a response generated by the AI.
from langchain_core.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_template("What is the capital of {country}?")
chat_prompt
ChatPromptTemplate(input_variables=['country'], input_types={}, partial_variables={}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['country'], input_types={}, partial_variables={}, template='What is the capital of {country}?'), additional_kwargs={})])


Note that the prompt has been wrapped in a HumanMessagePromptTemplate and placed inside a list.

chat_prompt.format(country="United States of America")
'Human: What is the capital of United States of America?'

Multiple roles

Use ChatPromptTemplate.from_messages to define the template. It takes a chat list in which each entry is organized as a (role, message) tuple.

from langchain_core.prompts import ChatPromptTemplate

chat_template = ChatPromptTemplate.from_messages(
    [
        # role, message
        ("system", "You are a friendly AI assistant. Your name is {name}."),
        ("human", "Nice to meet you!"),
        ("ai", "Hello! How can I assist you?"),
        ("human", "{user_input}"),
    ]
)

# Create chat messages
messages = chat_template.format_messages(name="Teddy", user_input="What is your name?")
messages
[SystemMessage(content='You are a friendly AI assistant. Your name is Teddy.', additional_kwargs={}, response_metadata={}),
 HumanMessage(content='Nice to meet you!', additional_kwargs={}, response_metadata={}),
 AIMessage(content='Hello! How can I assist you?', additional_kwargs={}, response_metadata={}),
 HumanMessage(content='What is your name?', additional_kwargs={}, response_metadata={})]

The message list above can be passed directly to the model:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
	base_url='http://localhost:5551/v1',
	api_key='EMPTY',
	model_name='Qwen2.5-7B-Instruct',
	temperature=0.2,
)
llm.invoke(messages).content
"My name is Teddy. It's nice to meet you! How can I help you today?"

MessagesPlaceholder

LangChain provides MessagesPlaceholder, which is useful for:

  • Adding flexibility when you are not yet sure which roles will make up part of the message prompt template.
  • Inserting a list of messages at format time, which suits dynamic conversation-history scenarios.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

chat_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a summarization specialist AI assistant. Your mission is to summarize conversations using key points.",
        ),
        MessagesPlaceholder(variable_name="conversation"),
        ("human", "Summarize the conversation so far in {word_count} words."),
    ]
)
chat_prompt
ChatPromptTemplate(input_variables=['conversation', 'word_count'], input_types={'conversation': list[typing.Annotated[typing.Union[typing.Annotated[langchain_core.messages.ai.AIMessage, Tag(tag='ai')], typing.Annotated[langchain_core.messages.human.HumanMessage, Tag(tag='human')], typing.Annotated[langchain_core.messages.chat.ChatMessage, Tag(tag='chat')], typing.Annotated[langchain_core.messages.system.SystemMessage, Tag(tag='system')], typing.Annotated[langchain_core.messages.function.FunctionMessage, Tag(tag='function')], typing.Annotated[langchain_core.messages.tool.ToolMessage, Tag(tag='tool')], typing.Annotated[langchain_core.messages.ai.AIMessageChunk, Tag(tag='AIMessageChunk')], typing.Annotated[langchain_core.messages.human.HumanMessageChunk, Tag(tag='HumanMessageChunk')], typing.Annotated[langchain_core.messages.chat.ChatMessageChunk, Tag(tag='ChatMessageChunk')], typing.Annotated[langchain_core.messages.system.SystemMessageChunk, Tag(tag='SystemMessageChunk')], typing.Annotated[langchain_core.messages.function.FunctionMessageChunk, Tag(tag='FunctionMessageChunk')], typing.Annotated[langchain_core.messages.tool.ToolMessageChunk, Tag(tag='ToolMessageChunk')]], FieldInfo(annotation=NoneType, required=True, discriminator=Discriminator(discriminator=<function _get_type at 0x7ff1a966cfe0>, custom_error_type=None, custom_error_message=None, custom_error_context=None))]]}, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], input_types={}, partial_variables={}, template='You are a summarization specialist AI assistant. Your mission is to summarize conversations using key points.'), additional_kwargs={}), MessagesPlaceholder(variable_name='conversation'), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['word_count'], input_types={}, partial_variables={}, template='Summarize the conversation so far in {word_count} words.'), additional_kwargs={})])
formatted_chat_prompt = chat_prompt.format(
    word_count=5,
    conversation=[
        ("human", "Hello! I’m Teddy. Nice to meet you."),
        ("ai", "Nice to meet you! I look forward to working with you."),
    ],
)

print(formatted_chat_prompt)
System: You are a summarization specialist AI assistant. Your mission is to summarize conversations using key points.
Human: Hello! I’m Teddy. Nice to meet you.
AI: Nice to meet you! I look forward to working with you.
Human: Summarize the conversation so far in 5 words.
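
The StrOutputParser imported above can close the loop: pipe the prompt into the model and parse the reply into a plain string. A minimal sketch, reusing the llm defined earlier (the printed summary is illustrative, not a recorded output):

chain = chat_prompt | llm | StrOutputParser()

summary = chain.invoke(
    {
        "word_count": 5,
        "conversation": [
            ("human", "Hello! I’m Teddy. Nice to meet you."),
            ("ai", "Nice to meet you! I look forward to working with you."),
        ],
    }
)
print(summary)  # e.g. 'Teddy and assistant exchanged greetings.'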

Few-Shot Prompting

LangChain's few-shot prompting provides a powerful framework for guiding language models toward high-quality output by supplying carefully chosen examples. The technique reduces the need for extensive model fine-tuning while keeping results accurate and context-appropriate across applications.

  • Few-shot prompt templates

    • Define the structure and format of a prompt by embedding examples, guiding the model to produce consistent output.
  • Example selection strategies

    • Dynamically select the examples most relevant to a query, strengthening the model's contextual understanding and improving response accuracy (a sketch appears at the end of this section).
  • Chroma vector store

    • Stores and retrieves examples by semantic similarity, providing scalable and efficient prompt construction.

FewShotPromptTemplate

Few-shot prompting is a powerful technique: a handful of well-designed examples steer a language model toward accurate, context-appropriate output. LangChain's FewShotPromptTemplate streamlines the process, letting you build flexible, reusable prompts for tasks such as question answering, summarization, and text correction.

1. Designing few-shot prompts

  • Define examples that demonstrate the desired output structure and style.
  • Make sure the examples cover edge cases to strengthen the model's understanding and performance.

2. Dynamic example selection

  • Use semantic similarity or vector search to select the examples most relevant to the input query.

3. Integrating few-shot prompts

  • Combine the prompt template with a language model to build a powerful chain that produces high-quality responses.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
	base_url='http://localhost:5551/v1',
	api_key='EMPTY',
	model_name='Qwen2.5-7B-Instruct',
	temperature=0.2,
)

# User query
question = "What is the capital of United States of America?"

# Query the model
response = llm.invoke(question)

# Print the response
print(response.content)
The capital of the United States of America is Washington, D.C.

Below is an example CoT (chain-of-thought) style prompt.

from langchain_core.prompts import PromptTemplate, FewShotPromptTemplate

# Define examples for the few-shot prompt
examples = [
    {
        "question": "Who lived longer, Steve Jobs or Einstein?",
        "answer": """Does this question require additional questions: Yes.
Additional Question: At what age did Steve Jobs die?
Intermediate Answer: Steve Jobs died at the age of 56.
Additional Question: At what age did Einstein die?
Intermediate Answer: Einstein died at the age of 76.
The final answer is: Einstein
""",
    },
    {
        "question": "When was the founder of Naver born?",
        "answer": """Does this question require additional questions: Yes.
Additional Question: Who is the founder of Naver?
Intermediate Answer: Naver was founded by Lee Hae-jin.
Additional Question: When was Lee Hae-jin born?
Intermediate Answer: Lee Hae-jin was born on June 22, 1967.
The final answer is: June 22, 1967
""",
    },
    {
        "question": "Who was the reigning king when Yulgok Yi's mother was born?",
        "answer": """Does this question require additional questions: Yes.
Additional Question: Who is Yulgok Yi's mother?
Intermediate Answer: Yulgok Yi's mother is Shin Saimdang.
Additional Question: When was Shin Saimdang born?
Intermediate Answer: Shin Saimdang was born in 1504.
Additional Question: Who was the king of Joseon in 1504?
Intermediate Answer: The king of Joseon in 1504 was Yeonsangun.
The final answer is: Yeonsangun
""",
    },
    {
        "question": "Are the directors of Oldboy and Parasite from the same country?",
        "answer": """Does this question require additional questions: Yes.
Additional Question: Who is the director of Oldboy?
Intermediate Answer: The director of Oldboy is Park Chan-wook.
Additional Question: Which country is Park Chan-wook from?
Intermediate Answer: Park Chan-wook is from South Korea.
Additional Question: Who is the director of Parasite?
Intermediate Answer: The director of Parasite is Bong Joon-ho.
Additional Question: Which country is Bong Joon-ho from?
Intermediate Answer: Bong Joon-ho is from South Korea.
The final answer is: Yes
""",
    },
]

example_prompt = PromptTemplate.from_template(
    "Question:\n{question}\nAnswer:\n{answer}"
)

# Print the first formatted example
print(example_prompt.format(**examples[0]))
Question:
Who lived longer, Steve Jobs or Einstein?
Answer:
Does this question require additional questions: Yes.
Additional Question: At what age did Steve Jobs die?
Intermediate Answer: Steve Jobs died at the age of 56.
Additional Question: At what age did Einstein die?
Intermediate Answer: Einstein died at the age of 76.
The final answer is: Einstein

The FewShotPromptTemplate below prepends the examples, each rendered with example_prompt, to the actual question, which is rendered with the suffix format.

# Initialize the FewShotPromptTemplate
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Question:\n{question}\nAnswer:",
    input_variables=["question"],
)

# Example question
question = "How old was Bill Gates when Google was founded?"

# Generate the final prompt
final_prompt = few_shot_prompt.format(question=question)
print(final_prompt)
Question:
Who lived longer, Steve Jobs or Einstein?
Answer:
Does this question require additional questions: Yes.
Additional Question: At what age did Steve Jobs die?
Intermediate Answer: Steve Jobs died at the age of 56.
Additional Question: At what age did Einstein die?
Intermediate Answer: Einstein died at the age of 76.
The final answer is: Einstein


Question:
When was the founder of Naver born?
Answer:
Does this question require additional questions: Yes.
Additional Question: Who is the founder of Naver?
Intermediate Answer: Naver was founded by Lee Hae-jin.
Additional Question: When was Lee Hae-jin born?
Intermediate Answer: Lee Hae-jin was born on June 22, 1967.
The final answer is: June 22, 1967


Question:
Who was the reigning king when Yulgok Yi's mother was born?
Answer:
Does this question require additional questions: Yes.
Additional Question: Who is Yulgok Yi's mother?
Intermediate Answer: Yulgok Yi's mother is Shin Saimdang.
Additional Question: When was Shin Saimdang born?
Intermediate Answer: Shin Saimdang was born in 1504.
Additional Question: Who was the king of Joseon in 1504?
Intermediate Answer: The king of Joseon in 1504 was Yeonsangun.
The final answer is: Yeonsangun


Question:
Are the directors of Oldboy and Parasite from the same country?
Answer:
Does this question require additional questions: Yes.
Additional Question: Who is the director of Oldboy?
Intermediate Answer: The director of Oldboy is Park Chan-wook.
Additional Question: Which country is Park Chan-wook from?
Intermediate Answer: Park Chan-wook is from South Korea.
Additional Question: Who is the director of Parasite?
Intermediate Answer: The director of Parasite is Bong Joon-ho.
Additional Question: Which country is Bong Joon-ho from?
Intermediate Answer: Bong Joon-ho is from South Korea.
The final answer is: Yes


Question:
How old was Bill Gates when Google was founded?
Answer:
response = llm.invoke(final_prompt)
print(response.content)
Does this question require additional questions: Yes.
Additional Question: When was Google founded?
Intermediate Answer: Google was founded in 1998.
Additional Question: When was Bill Gates born?
Intermediate Answer: Bill Gates was born on October 28, 1955.
The final answer is: Bill Gates was 43 years old when Google was founded.
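
As promised above, here is a minimal sketch of dynamic example selection backed by a Chroma vector store. It assumes the langchain-chroma package is installed and that an embedding model is served from the same vLLM endpoint; the model name bge-m3 is a placeholder for whatever embedding backend you actually run.

from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings

# Embedding client; the model name is a placeholder
embeddings = OpenAIEmbeddings(
    base_url='http://localhost:5551/v1',
    api_key='EMPTY',
    model='bge-m3',
)

# Index the examples in Chroma and retrieve the single most similar one
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,        # the example list defined above
    embeddings,
    Chroma,
    k=1,
)

# The selector replaces the static `examples` list
dynamic_few_shot_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    suffix="Question:\n{question}\nAnswer:",
    input_variables=["question"],
)

print(dynamic_few_shot_prompt.format(question=question))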

Special prompts

RAG document analysis

Processes and answers questions based on retrieved document context, ensuring high accuracy and relevance.

from langchain.prompts import ChatPromptTemplate


system = """You are a precise and helpful AI assistant specializing in question-answering tasks based on provided context.
Your primary task is to:
1. Analyze the provided context thoroughly
2. Answer questions using ONLY the information from the context
3. Preserve technical terms and proper nouns exactly as they appear
4. If the answer cannot be found in the context, respond with: 'The provided context does not contain information to answer this question.'
5. Format responses in clear, readable paragraphs with relevant examples when available
6. Focus on accuracy and clarity in your responses
"""

human = """#Question:
{question}

#Context:
{context}

#Answer:
Please provide a focused, accurate response that directly addresses the question using only the information from the provided context."""

prompt = ChatPromptTemplate.from_messages(
	[
		("system", system), 
		("human", human)
	]
)

prompt
ChatPromptTemplate(input_variables=['context', 'question'], input_types={}, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], input_types={}, partial_variables={}, template="You are a precise and helpful AI assistant specializing in question-answering tasks based on provided context.\nYour primary task is to:\n1. Analyze the provided context thoroughly\n2. Answer questions using ONLY the information from the context\n3. Preserve technical terms and proper nouns exactly as they appear\n4. If the answer cannot be found in the context, respond with: 'The provided context does not contain information to answer this question.'\n5. Format responses in clear, readable paragraphs with relevant examples when available\n6. Focus on accuracy and clarity in your responses\n"), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], input_types={}, partial_variables={}, template='#Question:\n{question}\n\n#Context:\n{context}\n\n#Answer:\nPlease provide a focused, accurate response that directly addresses the question using only the information from the provided context.'), additional_kwargs={})])
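
A quick sketch of how this prompt slots into a chain; the question and context strings below are stand-ins for a real query and whatever your retriever returns:

from langchain_core.output_parsers import StrOutputParser

rag_chain = prompt | llm | StrOutputParser()

answer = rag_chain.invoke(
    {
        "question": "What is LangChain used for?",
        "context": "LangChain is a framework for developing applications powered by language models.",
    }
)
print(answer)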

RAG with source attribution

An enhanced RAG prompt with detailed source tracking and citation, improving traceability and verifiability.

from langchain.prompts import ChatPromptTemplate


system = """You are a precise and thorough AI assistant that provides well-documented answers with source attribution.
Your responsibilities include:
1. Analyzing provided context thoroughly
2. Generating accurate answers based solely on the given context
3. Including specific source references for each key point
4. Preserving technical terminology exactly as presented
5. Maintaining clear citation format [source: page/document]
6. If information is not found in the context, state: 'The provided context does not contain information to answer this question.'

Format your response as:
1. Main Answer
2. Sources Used (with specific locations)
3. Confidence Level (High/Medium/Low)"""

human = """#Question:
{question}

#Context:
{context}

#Answer:
Please provide a detailed response with source citations using only information from the provided context."""

prompt = ChatPromptTemplate.from_messages(
	[
		("system", system), 
		("human", human)
	]
)

In essence, the answer requirements now include source citations.

LLM response evaluation

Comprehensively evaluates an LLM response against multiple quality criteria, with a detailed scoring method.

from langchain.prompts import PromptTemplate


evaluation_prompt = """Evaluate the LLM's response based on the following criteria:

INPUT:
Question: {question}
Context: {context}
LLM Response: {answer}

EVALUATION CRITERIA:
1. Accuracy (0-10)
- Perfect (10): Completely accurate, perfectly aligned with context
- Good (7-9): Minor inaccuracies
- Fair (4-6): Some significant inaccuracies
- Poor (0-3): Major inaccuracies or misalignment

2. Completeness (0-10)
- Perfect (10): Comprehensive coverage of all relevant points
- Good (7-9): Covers most important points
- Fair (4-6): Missing several key points
- Poor (0-3): Critically incomplete

3. Context Relevance (0-10)
- Perfect (10): Optimal use of context
- Good (7-9): Good use with minor omissions
- Fair (4-6): Partial use of relevant context
- Poor (0-3): Poor context utilization

4. Clarity (0-10)
- Perfect (10): Exceptionally clear and well-structured
- Good (7-9): Clear with minor issues
- Fair (4-6): Somewhat unclear
- Poor (0-3): Confusing or poorly structured

SCORING METHOD:
1. Calculate individual scores
2. Compute weighted average:
   - Accuracy: 40%
   - Completeness: 25%
   - Context Relevance: 25%
   - Clarity: 10%
3. Normalize to 0-1 scale

OUTPUT FORMAT:
{{
    "individual_scores": {{
        "accuracy": float,
        "completeness": float,
        "context_relevance": float,
        "clarity": float
    }},
    "weighted_score": float,
    "normalized_score": float,
    "evaluation_notes": string
}}

Return ONLY the normalized_score as a decimal between 0 and 1."""

# The literal JSON braces above are escaped as {{ }} so that only
# {question}, {context}, and {answer} are parsed as template variables.
prompt = PromptTemplate.from_template(evaluation_prompt)
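
A sketch of running the evaluation, with placeholder strings standing in for a real question, context, and model answer:

from langchain_core.output_parsers import StrOutputParser

eval_chain = prompt | llm | StrOutputParser()

score = eval_chain.invoke(
    {
        "question": "What is the capital of France?",
        "context": "France is a country in Europe. Its capital is Paris.",
        "answer": "The capital of France is Paris.",
    }
)
print(score)  # expected: a decimal between 0 and 1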

