[LLM] LangChain in Practice (4): Evaluation and Agents


note

  • Purpose of evaluation: check whether the LLM meets acceptance criteria, and analyze how a change affects the performance of an LLM application.
    • Approach: use a language model and chains to assist in the evaluation itself.
  • Agents:
    • Agents make it easy to connect an LLM to your own information sources (data), APIs, and so on.
    • The PythonREPLTool lets the agent execute Python code for your own scenario, e.g. converting a list of names to pinyin below.

Table of Contents

  • note
  • I. Evaluation
    • 1. Creating test-case data
    • 2. Running the evaluation
  • II. Agents
    • 1. Using the llm-math and wikipedia tools
    • 2. Using the PythonREPLTool
    • 3. Defining your own tool and using it in an agent
  • Reference

I. Evaluation

1. Creating test-case data

  • Evaluating on many different data points gives a fuller picture of model performance.
  • A language model and chains can themselves be used to evaluate other language models, chains, and applications.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Author : andy
@Date   : 2023/7/17 10:21
@Contact: 864934027@qq.com 
@File   : evaluate_test.py 
"""
from langchain.chains import RetrievalQA  # retrieval QA chain: retrieves over documents
from langchain.chat_models import ChatOpenAI  # OpenAI chat model
from langchain.document_loaders import CSVLoader  # document loader for CSV files
from langchain.indexes import VectorstoreIndexCreator  # vector-store index creator
from langchain.vectorstores import DocArrayInMemorySearch  # in-memory vector store

# Load the data
file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file)
data = loader.load()

# Inspect the data
import pandas as pd
test_data = pd.read_csv(file, header=None)
# columns: name, description

'''
Specify the vector-store class; once created, the index is built from the
document loader via a list of loaders.
'''
import os
os.environ["OPENAI_API_KEY"] = ".."
index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])

# Create the retrieval QA chain by specifying the language model, chain type,
# retriever, and verbosity
llm = ChatOpenAI(temperature = 0.0)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever(),
    verbose=True,
    chain_type_kwargs = {
        "document_separator": "<<<<>>>>>"
    }
)

# 1. Create evaluation data points
data[10]  # inspecting a few documents gives a sense of what is in them
# Document(page_content=": 10\nname: Cozy Comfort Pullover Set, Stripe\ndescription: Perfect for lounging, this striped knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out.\n\nSize & Fit\n- Pants are Favorite Fit: Sits lower on the waist.\n- Relaxed Fit: Our most generous fit sits farthest from the body.\n\nFabric & Care\n- In the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features\n- Relaxed fit top with raglan sleeves and rounded hem.\n- Pull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg.\n\nImported.", metadata={'source': 'OutdoorClothingCatalog_1000.csv', 'row': 10})

# 2. Hand-written test cases; this approach does not scale well
examples = [
    {
        "query": "Do the Cozy Comfort Pullover Set\
        have side pockets?",
        "answer": "Yes"
    },
    {
        "query": "What collection is the Ultra-Lofty \
        850 Stretch Down Hooded Jacket from?",
        "answer": "The DownTek collection"
    }
]
  • The QAGenerateChain takes documents and creates a question-answer pair from each one; the generated pairs can then be appended to the existing evaluation dataset.
# 3. Step one: generate test cases with LangChain
from langchain.evaluation.qa import QAGenerateChain  # QA generation chain: takes documents and creates a QA pair from each
example_gen_chain = QAGenerateChain.from_llm(ChatOpenAI())  # build the chain from the chat model
data[:5]

new_examples = example_gen_chain.apply_and_parse(
    [{"doc": t} for t in data[:5]]
)  # generate many examples at once

new_examples  # inspect the generated examples


# 4. Combine the examples
examples += new_examples
qa.run(examples[0]["query"])

# 5. Evaluate
''' 
LangChain's debug mode shows the intermediate steps a single example
goes through inside the chain.
'''
import langchain
langchain.debug = True
qa.run(examples[0]["query"])  # rerun the same example; much more information is printed

The data[10] document above describes fabrics and a knit pullover set; simple test cases can be written by hand from content like this.

[chain/start] [1:chain:RetrievalQA > 3:chain:StuffDocumentsChain > 4:chain:LLMChain] Entering Chain run with input:
{
  "question": "Do the Cozy Comfort Pullover Set        have side pockets?",
  "context": ": 10\nname: Cozy Comfort Pullover Set, Stripe\ndescription: Perfect for lounging, this striped knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out.\n\nSize & Fit\n- Pants are Favorite Fit: Sits lower on the waist.\n- Relaxed Fit: Our most generous fit sits farthest from the body.\n\nFabric & Care\n- In the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features\n- Relaxed fit top with raglan sleeves and rounded hem.\n- Pull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg.\n\nImported.<<<<>>>>>: 73\nname: Cozy Cuddles Knit Pullover Set\ndescription: Perfect for lounging, this knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out. \n\nSize & Fit \nPants are Favorite Fit: Sits lower on the waist. \nRelaxed Fit: Our most generous fit sits farthest from the body. \n\nFabric & Care \nIn the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features \nRelaxed fit top with raglan sleeves and rounded hem. \nPull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg. \nImported.<<<<>>>>>: 632\nname: Cozy Comfort Fleece Pullover\ndescription: The ultimate sweater fleece – made from superior fabric and offered at an unbeatable price. \n\nSize & Fit\nSlightly Fitted: Softly shapes the body. Falls at hip. \n\nWhy We Love It\nOur customers (and employees) love the rugged construction and heritage-inspired styling of our popular Sweater Fleece Pullover and wear it for absolutely everything. From high-intensity activities to everyday tasks, you'll find yourself reaching for it every time.\n\nFabric & Care\nRugged sweater-knit exterior and soft brushed interior for exceptional warmth and comfort. Made from soft, 100% polyester. 
Machine wash and dry.\n\nAdditional Features\nFeatures our classic Mount Katahdin logo. Snap placket. Front princess seams create a feminine shape. Kangaroo handwarmer pockets. Cuffs and hem reinforced with jersey binding. Imported.\n\n – Official Supplier to the U.S. Ski Team\nTHEIR WILL TO WIN, WOVEN RIGHT IN. LEARN MORE<<<<>>>>>: 151\nname: Cozy Quilted Sweatshirt\ndescription: Our sweatshirt is an instant classic with its great quilted texture and versatile weight that easily transitions between seasons. With a traditional fit that is relaxed through the chest, sleeve, and waist, this pullover is lightweight enough to be worn most months of the year. The cotton blend fabric is super soft and comfortable, making it the perfect casual layer. To make dressing easy, this sweatshirt also features a snap placket and a heritage-inspired Mt. Katahdin logo patch. For care, machine wash and dry. Imported."
}
[llm/start] [1:chain:RetrievalQA > 3:chain:StuffDocumentsChain > 4:chain:LLMChain > 5:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n: 10\nname: Cozy Comfort Pullover Set, Stripe\ndescription: Perfect for lounging, this striped knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out.\n\nSize & Fit\n- Pants are Favorite Fit: Sits lower on the waist.\n- Relaxed Fit: Our most generous fit sits farthest from the body.\n\nFabric & Care\n- In the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features\n- Relaxed fit top with raglan sleeves and rounded hem.\n- Pull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg.\n\nImported.<<<<>>>>>: 73\nname: Cozy Cuddles Knit Pullover Set\ndescription: Perfect for lounging, this knit set lives up to its name. We used ultrasoft fabric and an easy design that's as comfortable at bedtime as it is when we have to make a quick run out. \n\nSize & Fit \nPants are Favorite Fit: Sits lower on the waist. \nRelaxed Fit: Our most generous fit sits farthest from the body. \n\nFabric & Care \nIn the softest blend of 63% polyester, 35% rayon and 2% spandex.\n\nAdditional Features \nRelaxed fit top with raglan sleeves and rounded hem. \nPull-on pants have a wide elastic waistband and drawstring, side pockets and a modern slim leg. \nImported.<<<<>>>>>: 632\nname: Cozy Comfort Fleece Pullover\ndescription: The ultimate sweater fleece – made from superior fabric and offered at an unbeatable price. \n\nSize & Fit\nSlightly Fitted: Softly shapes the body. Falls at hip. \n\nWhy We Love It\nOur customers (and employees) love the rugged construction and heritage-inspired styling of our popular Sweater Fleece Pullover and wear it for absolutely everything. 
From high-intensity activities to everyday tasks, you'll find yourself reaching for it every time.\n\nFabric & Care\nRugged sweater-knit exterior and soft brushed interior for exceptional warmth and comfort. Made from soft, 100% polyester. Machine wash and dry.\n\nAdditional Features\nFeatures our classic Mount Katahdin logo. Snap placket. Front princess seams create a feminine shape. Kangaroo handwarmer pockets. Cuffs and hem reinforced with jersey binding. Imported.\n\n – Official Supplier to the U.S. Ski Team\nTHEIR WILL TO WIN, WOVEN RIGHT IN. LEARN MORE<<<<>>>>>: 151\nname: Cozy Quilted Sweatshirt\ndescription: Our sweatshirt is an instant classic with its great quilted texture and versatile weight that easily transitions between seasons. With a traditional fit that is relaxed through the chest, sleeve, and waist, this pullover is lightweight enough to be worn most months of the year. The cotton blend fabric is super soft and comfortable, making it the perfect casual layer. To make dressing easy, this sweatshirt also features a snap placket and a heritage-inspired Mt. Katahdin logo patch. For care, machine wash and dry. Imported.\nHuman: Do the Cozy Comfort Pullover Set        have side pockets?"
  ]
}
[llm/end] [1:chain:RetrievalQA > 3:chain:StuffDocumentsChain > 4:chain:LLMChain > 5:llm:ChatOpenAI] [1.70s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Yes, the Cozy Comfort Pullover Set does have side pockets.",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Yes, the Cozy Comfort Pullover Set does have side pockets.",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 732,
      "completion_tokens": 14,
      "total_tokens": 746
    },
    "model_name": "gpt-3.5-turbo"
  },
  "run": null
}
[chain/end] [1:chain:RetrievalQA > 3:chain:StuffDocumentsChain > 4:chain:LLMChain] [1.70s] Exiting Chain run with output:
{
  "text": "Yes, the Cozy Comfort Pullover Set does have side pockets."
}
[chain/end] [1:chain:RetrievalQA > 3:chain:StuffDocumentsChain] [1.70s] Exiting Chain run with output:
{
  "output_text": "Yes, the Cozy Comfort Pullover Set does have side pockets."
}
[chain/end] [1:chain:RetrievalQA] [11.65s] Exiting Chain run with output:
{
  "result": "Yes, the Cozy Comfort Pullover Set does have side pockets."
}
  • The trace first descends into the retrieval QA chain and then into a documents chain. Since we are using the stuff method, the context being passed along was assembled from the different documents we retrieved. So when question answering returns a wrong result, it is usually not the language model that failed but the retrieval step; inspecting the exact question and context is the way to debug what went wrong.
  • Going one level deeper shows exactly what enters the language model and the OpenAI call itself: the full prompt, including a system message describing the prompt used by the question-answering chain ("use the following pieces of context to answer the user's question").
  • Then comes the previously inserted context, plus more information about the actual return value: not just the answer text but also token usage, so you can see how many tokens were consumed.
  • Since this is a relatively simple chain, the final response ("the Cozy Comfort Pullover Set, Stripe, does have side pockets") then bubbles up through the chain and back to the user. That is how to inspect and debug a single input to the chain.

2. Running the evaluation

''' 
LangChain's debug mode shows the intermediate steps a single example
goes through inside the chain.
'''
import langchain
langchain.debug = True
qa.run(examples[0]["query"])  # rerun the same example; much more information is printed


# Create predictions for all the examples; turn debug mode off first so the
# full trace is not dumped to the screen
langchain.debug = False
# Use the LLM-backed chain to produce predictions
predictions = qa.apply(examples)  # create predictions for all the different examples

''' 
Evaluate the predictions: import the QA evaluation chain and build it
from a language model
'''
from langchain.evaluation.qa import QAEvalChain  # QA evaluation chain
# Use ChatGPT to do the grading
llm = ChatOpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(examples, predictions)  # call evaluate on this chain to grade the answers

# Pass in the examples and predictions, get back graded outputs, and loop over them to print the answers
for i, eg in enumerate(examples):
    print(f"Example {i}:")
    print("Question: " + predictions[i]['query'])
    print("Real Answer: " + predictions[i]['answer'])
    print("Predicted Answer: " + predictions[i]['result'])
    print("Predicted Grade: " + graded_outputs[i]['results'])
    print()
'''
Example 5:
Question: What is the fabric composition of the Refresh Swimwear, V-Neck Tankini Contrasts?
Real Answer: The Refresh Swimwear, V-Neck Tankini Contrasts is made of 82% recycled nylon and 18% Lycra® spandex for the body, and 90% recycled nylon with 10% Lycra® spandex for the lining.
Predicted Answer: The fabric composition of the Refresh Swimwear, V-Neck Tankini Contrasts is 82% recycled nylon with 18% Lycra® spandex for the body, and 90% recycled nylon with 10% Lycra® spandex for the lining.
Predicted Grade: CORRECT
'''

The evaluation idea: with the whole document in front of it, the model can generate a ground-truth answer. The QA chain, using embeddings and a vector database for retrieval, passes the retrieved context to the language model, which produces the predicted answer. The grade is also generated by a language model: the evaluation chain is asked to judge whether each prediction is correct or incorrect. Looping over all the examples and printing them gives a detailed view of each one.
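To make the grading step concrete, the sketch below shows the kind of prompt a grader LLM receives: the question, the reference answer, and the chain's predicted answer. The template wording and the `build_grading_prompt` helper are illustrative assumptions, not QAEvalChain's actual prompt:

```python
# Illustrative only: a hand-rolled grading prompt in the spirit of QAEvalChain.
GRADE_TEMPLATE = """You are a teacher grading a quiz.
Question: {query}
Real Answer: {answer}
Student Answer: {result}
Grade the student answer as CORRECT or INCORRECT based on semantic
equivalence with the real answer, ignoring wording differences.
Grade:"""

def build_grading_prompt(example: dict, prediction: dict) -> str:
    """Fill the template with one example and the chain's prediction."""
    return GRADE_TEMPLATE.format(
        query=example["query"],
        answer=example["answer"],
        result=prediction["result"],
    )

prompt = build_grading_prompt(
    {"query": "Do the Cozy Comfort Pullover Set have side pockets?",
     "answer": "Yes"},
    {"result": "Yes, the Cozy Comfort Pullover Set does have side pockets."},
)
print(prompt)
```

The grader's one-word reply (CORRECT/INCORRECT) is what ends up in `graded_outputs[i]['results']`.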

Why model-graded evaluation: the answers are arbitrary strings. There is no single ground-truth string that is the best possible answer; many different variants with the same semantics should all be graded as similar. Exact matching with regular expressions throws away that semantic information, and most traditional evaluation metrics are not good enough here. One of the most interesting and popular current approaches is to use a language model as the grader.
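The weakness of exact matching is easy to demonstrate with the jacket answer from the hand-written test cases above:

```python
# Two answers with the same meaning but different surface forms.
reference = "The DownTek collection"
predicted = "It is from the DownTek collection."

# Exact string comparison fails, even after basic normalization...
assert reference != predicted
assert reference.lower().strip(".") != predicted.lower().strip(".")

# ...even though both contain the decisive fact, which is why a grader
# that compares meaning rather than characters scores this CORRECT.
assert "downtek" in reference.lower() and "downtek" in predicted.lower()
print("exact match fails; semantic content agrees")
```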

II. Agents

1. Using the llm-math and wikipedia tools

Parameters for initializing the agent:

  • agent: the agent type. Here we use AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, where CHAT means the agent uses a chat-optimized model and REACT means it uses a ReAct-style prompt template.
  • handle_parsing_errors: whether to handle parsing errors. When a parsing error occurs, the error message is fed back to the model so it can correct itself.
  • verbose: whether to print intermediate steps.
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI

# 1. Initialize the language model
# temperature=0.0 reduces randomness in the generated answers
open_api_key = "..."
llm = ChatOpenAI(temperature=0, openai_api_key=open_api_key)

# 2. Load the tools
tools = load_tools(
    ["llm-math", "wikipedia"],
    llm=llm  # the model initialized in step 1
)

# 3. Initialize the agent
agent = initialize_agent(
    tools,  # the tools loaded in step 2
    llm,  # the model initialized in step 1
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,  # agent type
    handle_parsing_errors=True,  # handle parsing errors
    verbose=True  # print intermediate steps
)

# 4. Ask the agent a question
agent("What is 25% of 300?")
# thought, action, observation, result

2. Using the PythonREPLTool

  • The PythonREPLTool lets the agent write and run Python code for your own scenario; below, it converts a list of customer names to pinyin.
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools.python.tool import PythonREPLTool

agent = create_python_agent(
    llm,  # the language model loaded in the previous section
    tool=PythonREPLTool(),  # the Python REPL tool
    verbose=True  # print intermediate steps
)

customer_list = ["小明","小黄","小红","小蓝","小橘","小绿",]
# !pip install -q pinyin
agent.run(f"""Convert these customer names to pinyin \
and print the resulting list: {customer_list}""")

'''
> Entering new AgentExecutor chain...
I need a pinyin conversion library to convert these customer names. I can use Python's pinyin library for this task.
Action: Python_REPL
Action Input: import pinyin
Observation: 
Thought: Now I can use the pinyin library to convert the customer names to pinyin.
Action: Python_REPL
Action Input: names = ['小明', '小黄', '小红', '小蓝', '小橘', '小绿']
pinyin_names = [pinyin.get(i, format='strip', delimiter='') for i in names]
print(pinyin_names)
Observation: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']

Thought: I now know the final answer.
Final Answer: ['xiaoming', 'xiaohuang', 'xiaohong', 'xiaolan', 'xiaoju', 'xiaolv']

> Finished chain.
'''

Running again in debug mode, the six steps above map onto the concrete trace below:

  1. The model thinks about what to do next (Thought)

    • [chain/start] [1:chain:AgentExecutor] Entering Chain run with input

    • [chain/start] [1:chain:AgentExecutor > 2:chain:LLMChain] Entering Chain run with input

    • [llm/start] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] Entering LLM run with input

    • [llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] [12.25s] Exiting LLM run with output

    • [chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [12.25s] Exiting Chain run with output

  2. Based on the thought, the model takes an action (Action). Because a different tool is in use, the Action output differs from before: here it is Python code.

    • [tool/start] [1:chain:AgentExecutor > 4:tool:Python REPL] Entering Tool run with input

    • [tool/end] [1:chain:AgentExecutor > 4:tool:Python REPL] [2.2239999999999998ms] Exiting Tool run with output

  3. The model receives an observation (Observation)

    • [chain/start] [1:chain:AgentExecutor > 5:chain:LLMChain] Entering Chain run with input

  4. Based on the observation, the model thinks about what to do next (Thought)

    • [llm/start] [1:chain:AgentExecutor > 5:chain:LLMChain > 6:llm:ChatOpenAI] Entering LLM run with input

    • [llm/end] [1:chain:AgentExecutor > 5:chain:LLMChain > 6:llm:ChatOpenAI] [6.94s] Exiting LLM run with output

  5. The model gives the final answer (Final Answer)

    • [chain/end] [1:chain:AgentExecutor > 5:chain:LLMChain] [6.94s] Exiting Chain run with output

  6. The final answer is returned.

    • [chain/end] [1:chain:AgentExecutor] [19.20s] Exiting Chain run with output
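The control flow those six trace stages describe can be sketched as a toy ReAct loop. The `fake_llm` below is a hypothetical stand-in that scripts the model's replies; the real AgentExecutor parses genuine LLM output, but the loop is the same shape: parse a Thought/Action, run the tool, feed the Observation back, stop at Final Answer.

```python
# Toy ReAct loop: scripted "LLM" replies stand in for a real model.
def python_repl(code: str) -> str:
    """Minimal stand-in for the Python REPL tool: evaluate one expression."""
    return str(eval(code))  # fine for a toy; never eval untrusted input

SCRIPTED_REPLIES = iter([
    "Thought: I should compute this in Python.\n"
    "Action: Python_REPL\nAction Input: 300 * 0.25",
    "Thought: I now know the final answer.\nFinal Answer: 75.0",
])

def fake_llm(prompt: str) -> str:
    return next(SCRIPTED_REPLIES)

def run_agent(question: str) -> str:
    prompt = question
    for _ in range(5):  # cap the Thought/Action/Observation iterations
        reply = fake_llm(prompt)
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        action_input = reply.split("Action Input:")[1].strip()
        observation = python_repl(action_input)  # Action -> Observation
        prompt += f"\n{reply}\nObservation: {observation}"  # feed back
    raise RuntimeError("agent did not finish")

answer = run_agent("What is 25% of 300?")
print(answer)  # 75.0
```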

3. Defining your own tool and using it in an agent

The @tool decorator can be applied to any function, turning it into a LangChain tool that the agent can call.

The function needs a very detailed docstring so that the agent knows when and how to use it. For the time function below, the agent decides when to call the tool, and how, based on the information in that docstring.

# Remove -q if you want to see the installation log
# !pip install -q DateTime

# Import the tool decorator
from langchain.agents import tool
from datetime import date

@tool
def time(text: str) -> str:
    """
    Returns today's date; use for any question that needs to know today's date.
    The input should always be an empty string; this function always returns
    today's date. Any date calculations should be done outside this function.
    """
    return str(date.today())


# Initialize the agent
agent = initialize_agent(
    tools=[time],  # add the time tool we just created
    llm=llm,  # the model initialized earlier
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,  # agent type
    handle_parsing_errors=True,  # handle parsing errors
    verbose=True  # print intermediate steps
)

agent("What is today's date?")
# {'input': "What is today's date?", 'output': "Today's date is 2023-07-17."}
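The mechanism behind @tool can be pictured in plain Python: the decorator records the function's name and docstring, and the agent's prompt then lists every registered tool with a description of when to use it. This is a simplified illustration under that assumption, not LangChain's actual implementation:

```python
# Toy tool registry: a sketch of what a @tool-style decorator captures.
TOOL_REGISTRY = {}

def tool(func):
    """Toy @tool decorator: register the function's name and docstring."""
    TOOL_REGISTRY[func.__name__] = {
        "description": (func.__doc__ or "").strip(),
        "run": func,
    }
    return func

@tool
def time(text: str) -> str:
    """Returns today's date; use for any question about today's date.
    The input should always be an empty string."""
    from datetime import date
    return str(date.today())

# The agent's prompt would enumerate the registry roughly like this:
for name, spec in TOOL_REGISTRY.items():
    print(f"{name}: {spec['description'].splitlines()[0]}")
```

This is why a vague docstring makes the agent pick the wrong tool, or never pick yours: the docstring is the only description the model sees.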

Summary

  1. The model thinks about what to do next (Thought)

    Thought: I need to use the `time` tool to get today's date

  2. Based on the thought, the model takes an action (Action); with this tool, the Action is a call to the time tool

    Action: use the time tool with an empty string as input

  3. The model receives an observation (Observation)

    Observation: 2023-07-04

  4. Based on the observation, the model thinks about what to do next (Thought)

    Thought: I have successfully retrieved today's date using the time tool

  5. The model gives the final answer (Final Answer)

    Final Answer: Today's date is 2023-07-04.

  6. The final answer is returned.

Other LangChain applications: answering questions over CSV files, querying SQL databases, and interacting with APIs. Many such examples are built by combining chains with different prompts and output parsers.


Reference

[1] Course notes: Andrew Ng x LangChain, "LangChain for LLM Application Development" (part 2)
[2] https://python.langchain.com/docs/modules/agents/
