【LLM RAG】GritLM: A Brief Look at Unifying Embedding and Generation in One Large Language Model


Preface

Today, essentially every text-based language problem can be framed as a generation problem and handled by a single LLM. Viewed this way, however, tasks that depend on embeddings, such as clustering and retrieval, are easily overlooked, even though text embeddings underpin many important real-world applications. In RAG, for instance, documents are vectorized with dedicated embedding models such as BGE or BCE. As a result, generation and embedding are typically handled by separate models, at a cost in both efficiency and performance. The GRIT paper (Generative Representational Instruction Tuning) addresses this by unifying the two kinds of tasks in one model: the instruction tells the model whether to embed or to generate, and the model performs the corresponding task. This unification preserves the performance of both generation and embedding.

Model Architecture

GRITLM architecture (figure)

Representation

  • For embedding tasks, GRITLM processes the input with bidirectional attention and mean-pools the final hidden states to obtain the representation. The embedding objective is a contrastive loss with in-batch negatives:
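    Following the paper's notation (roughly), with batch size M, temperature τ, cosine similarity σ, and pooled representation f_θ, the objective for a query q^(i) and its positive document d^(i) is:

    $$
    \mathcal{L}_{\mathrm{Rep}} = -\frac{1}{M} \sum_{i=1}^{M} \log \frac{\exp\left(\tau^{-1}\, \sigma\left(f_\theta(q^{(i)}),\, f_\theta(d^{(i)})\right)\right)}{\sum_{j=1}^{M} \exp\left(\tau^{-1}\, \sigma\left(f_\theta(q^{(i)}),\, f_\theta(d^{(j)})\right)\right)}
    $$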

Generation

  • For generation tasks, GRITLM uses causal attention. A language modeling head on top of the hidden states predicts the next token, and the loss is computed only on the "{response}</s>" part shown in the figure; the format also supports multi-turn dialogue (indicated by "…"). The generation loss is therefore the cross-entropy between predicted and ground-truth tokens:
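    In the paper's notation (roughly), with language modeling head η and a training sample x of N tokens, this is:

    $$
    \mathcal{L}_{\mathrm{Gen}} = -\frac{1}{N} \sum_{i=1}^{N} \log P_{\theta,\eta}\left(x^{(i)} \mid x^{(<i)}\right)
    $$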

The architecture is trained as a multi-task framework, so the overall loss is a weighted combination of the two objectives:
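With mixing weights λ_Rep and λ_Gen (hyperparameters of the training recipe), this reads:

$$
\mathcal{L}_{\mathrm{GRIT}} = \lambda_{\mathrm{Rep}}\, \mathcal{L}_{\mathrm{Rep}} + \lambda_{\mathrm{Gen}}\, \mathcal{L}_{\mathrm{Gen}}
$$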

Experimental Results

  1. Embedding performance

    The authors compare GRITLM 7B and GRITLM 8X7B against existing generative and embedding models. GRITLM 7B outperforms all previously released open models on MTEB and also beats all generative models of comparable parameter count. GRITLM 8X7B performs best among all open generative language models they evaluated while remaining strong on embedding tasks.

  2. Generation performance

    On a range of generative tasks, GRITLM outperforms other models of similar size, including Llama 2 7B and Mistral 7B.

Simplifying RAG

In a conventional RAG setup, two separate models are needed: a retrieval (embedding) model and a generation model. The user query has to be passed through both, and at generation time the retrieved context must additionally be fed to the generator.

Because GRIT unifies embedding and generation, a single GRITLM can both retrieve and generate: the retrieved context is consumed directly during generation, with no separate retrieval model, which simplifies the RAG pipeline (a minimal sketch follows below).
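The following is a minimal sketch (not code from the paper) of a RAG loop that uses one GritLM instance for both retrieval and generation. It relies only on the `gritlm` API demonstrated in the inference section below (`model.encode`, `model.tokenizer.apply_chat_template`, `model.generate`); the document strings are placeholders.

```python
import numpy as np
from gritlm import GritLM

# One model serves both retrieval (embedding) and generation.
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")

def embed_instruction(instruction):
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

documents = [
    "GRIT trains one large language model for both embedding and generation ...",
    "Bitcoin is a purely peer-to-peer version of electronic cash ...",
]
query = "What is GRIT?"

# 1) Retrieval: embed documents (no instruction) and the query (with instruction).
d_rep = model.encode(documents, instruction=embed_instruction(""))
q_rep = model.encode([query], instruction=embed_instruction("Retrieve the passage that answers the question"))

# Pick the document with the highest cosine similarity to the query.
d_norm = d_rep / np.linalg.norm(d_rep, axis=1, keepdims=True)
q_norm = q_rep / np.linalg.norm(q_rep, axis=1, keepdims=True)
best_doc = documents[int(np.argmax(d_norm @ q_norm[0]))]

# 2) Generation: answer the query conditioned on the retrieved context, using the same model.
messages = [{"role": "user", "content": f"Context:\n{best_doc}\n\nAnswer the question: {query}"}]
encoded = model.tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
gen = model.generate(encoded.to(model.device), max_new_tokens=128, do_sample=False)
print(model.tokenizer.batch_decode(gen)[0])
```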

Traditional RAG (left) vs. simplified RAG (right) (figure)

The paper also proposes several caching strategies to make RAG with GRITLM more efficient: Query Caching, Doc Caching, Query-Doc Caching, and Doc-Query Caching.

  • Query Caching: the key-value states computed for the query during the retrieval forward pass are cached and reused at generation time, so no second forward pass over the query is needed.
  • Doc Caching: the key-value states computed for the documents during retrieval are cached and used directly at generation time, so the retrieved document does not have to be re-encoded.
  • Query-Doc Caching and Doc-Query Caching combine the two: both the query cache and the document cache are reused, differing only in the order in which they are concatenated, to further cut the computation required at generation time. The Caching code in the next section demonstrates these variants.

Inference Code

Out of the box

pip install gritlm
  1. Basic usage

    from gritlm import GritLM
    
    # Loads the model for both capabilities; If you only need embedding pass `mode="embedding"` to save memory (no lm head)
    model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")
    # To load the 8x7B you will likely need multiple GPUs.
    # All the kwargs are passed to HF from_pretrained so you can just do the below to load on multiple GPUs:
    # model = GritLM("GritLM/GritLM-8x7B", torch_dtype="auto", device_map="auto")
    # You can also load other models e.g.
    # model = GritLM("Muennighoff/SGPT-125M-weightedmean-nli-bitfit", pooling_method="weighted_mean", attn=None)
    # model = GritLM("hkunlp/instructor-base", pooling_method="mean", attn=None)
    
    ### Embedding/Representation ###
    instruction = "Given a scientific paper title, retrieve the paper's abstract"
    queries = ['Bitcoin: A Peer-to-Peer Electronic Cash System', 'Generative Representational Instruction Tuning']
    documents = [
        "A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.",
        "All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8X7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm."
    ]
    
    def gritlm_instruction(instruction):
        return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"
    
    # No need to add instruction for retrieval documents
    d_rep = model.encode(documents, instruction=gritlm_instruction(""))
    q_rep = model.encode(queries, instruction=gritlm_instruction(instruction))
    
    from scipy.spatial.distance import cosine
    cosine_sim_q0_d0 = 1 - cosine(q_rep[0], d_rep[0])
    cosine_sim_q0_d1 = 1 - cosine(q_rep[0], d_rep[1])
    cosine_sim_q1_d0 = 1 - cosine(q_rep[1], d_rep[0])
    cosine_sim_q1_d1 = 1 - cosine(q_rep[1], d_rep[1])
    
    print("Cosine similarity between \"%s\" and \"%s\" is: %.3f" % (queries[0][:15], documents[0][:15], cosine_sim_q0_d0))
    # Cosine similarity between "Bitcoin: A Peer" and "A purely peer-t" is: 0.608
    print("Cosine similarity between \"%s\" and \"%s\" is: %.3f" % (queries[0][:15], documents[1][:15], cosine_sim_q0_d1))
    # Cosine similarity between "Bitcoin: A Peer" and "All text-based " is: 0.101
    print("Cosine similarity between \"%s\" and \"%s\" is: %.3f" % (queries[1][:15], documents[0][:15], cosine_sim_q1_d0))
    # Cosine similarity between "Generative Repr" and "A purely peer-t" is: 0.120
    print("Cosine similarity between \"%s\" and \"%s\" is: %.3f" % (queries[1][:15], documents[1][:15], cosine_sim_q1_d1))
    # Cosine similarity between "Generative Repr" and "All text-based " is: 0.533
    
    ### Generation ###
    # We did not finetune GritLM models with system prompts, as you can just include system-like instructions together with your user instruction
    messages = [
        {"role": "user", "content": "Please write me a poem about my recent hike of Mt. Fuji at midnight in the style of Shakespeare."},
    ]
    encoded = model.tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
    encoded = encoded.to(model.device)
    gen = model.generate(encoded, max_new_tokens=256, do_sample=False)
    decoded = model.tokenizer.batch_decode(gen)
    print(decoded[0])
    """
    <s> <|user|>
    Please write me a poem about my recent hike of Mt. Fuji at midnight in the style of Shakespeare.
    <|assistant|>
    Oh, Mt. Fuji, mountain grand,
    A sight to see, a climb to command,
    At midnight, in the dark of night,
    I climbed your slopes, with all my might.
    
    The stars above, they shone so bright,
    A beacon in the darkness, guiding light,
    The wind did blow, with a gentle sigh,
    As I climbed higher, with a steady eye.
    
    The path was steep, the climb was tough,
    But I pressed on, with a steadfast rough,
    For the summit, I longed to see,
    The view from the top, a sight to be.
    
    At last, I reached the peak, and stood,
    With awe and wonder, I gazed aloud,
    The world below, a sight to see,
    A view that's worth the climb, you'll agree.
    
    Mt. Fuji, mountain grand,
    A sight to see, a climb to command,
    At midnight, in the dark of night,
    I climbed your slopes, with all my might.</s>
    """
    
  2. Caching

    import numpy as np
    import torch
    from gritlm import GritLM
    
    # Loads the model for both capabilities; If you only need embedding pass `mode="embedding"` to save memory (no lm head)
    model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")
    # To load the 8x7B you will likely need multiple GPUs.
    # All the kwargs are passed to HF from_pretrained so you can just do the below to load on multiple GPUs:
    # model = GritLM("GritLM/GritLM-8x7B", torch_dtype="auto", device_map="auto")
    # You can also load other models e.g.
    # model = GritLM("Muennighoff/SGPT-125M-weightedmean-nli-bitfit", pooling_method="weighted_mean", attn=None)
    # model = GritLM("hkunlp/instructor-base", pooling_method="mean", attn=None)
    
    queries = ['Please explain to me how Bitcoin works.', 'What is "Generative Representational Instruction Tuning"?']
    documents = [
        "A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.",
        "All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8X7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm."
    ]
    
    CACHE_FORMAT_DOC = "\n<|user|>\n{query}\n\nAnswer the prior query while optionally using the context prior to it\n<|assistant|>\n"
    CACHE_FORMAT_QUERY = "\n<|user|>\n{doc}\n\nOptionally using the prior context answer the query prior to it\n<|assistant|>\n"
    CACHE_FORMAT_QUERY_DOC = "\n<|user|>\nOptionally using the prior context answer the query prior to it\n<|assistant|>\n"
    CACHE_FORMAT_DOC_QUERY = "\n<|user|>\nAnswer the prior query while optionally using the context prior to it\n<|assistant|>\n"
    
    def gritlm_instruction(instruction):
        return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"
    
    ### GRIT DOC CACHING ###
    # cache: Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`
    d_rep, d_cache = model.encode(documents, instruction=gritlm_instruction(""), get_cache=True)
    q_rep = model.encode(queries, instruction=gritlm_instruction(""))
    
    from scipy.spatial.distance import cosine
    sims = {q: [1 - cosine(q_rep[i], d_rep[j]) for j in range(len(d_rep))] for i, q in enumerate(queries)}
    
    for q, q_sims in sims.items():
        sim_idx = np.argmax(q_sims)
        cache = tuple([
            (d_cache[i][0][sim_idx:sim_idx+1], d_cache[i][1][sim_idx:sim_idx+1]) for i, c in enumerate(d_cache)
        ])
        # BOS is already in the cache
        inputs = model.tokenizer(CACHE_FORMAT_DOC.format(query=q), return_tensors="pt", add_special_tokens=False).to(model.device)
        inputs["use_cache"] = True
        # Attend to the cache too
        inputs["attention_mask"] = torch.cat((
            torch.ones((cache[0][0].shape[0], cache[0][0].shape[2]), dtype=torch.long, device=inputs["attention_mask"].device),
            inputs["attention_mask"],
        ), dim=1)
        generation = model.generate(**inputs, max_new_tokens=256, past_key_values=cache, do_sample=False)
        decoded = model.tokenizer.batch_decode(generation)
        print(decoded[0])
    
    """
    <|user|>
    What is "Generative Representational Instruction Tuning"?
    
    Answer the prior query while optionally using the context prior to it
    <|assistant|>
    Generative Representational Instruction Tuning (GRIT) is a method for training language models that can perform both generative and embedding tasks. It involves training a large language model to handle both types of tasks by distinguishing between them through instructions. GRIT is designed to improve the performance of language models on both generative and embedding tasks, and it can be used to unify both types of tasks at no performance loss.</s>
    """
    
    
    ### GRIT QUERY CACHING ###
    # cache: Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`
    d_rep = model.encode(documents, instruction=gritlm_instruction(""))
    q_rep, q_cache = model.encode(queries, instruction=gritlm_instruction(""), get_cache=True)
    
    from scipy.spatial.distance import cosine
    sims = {d: [1 - cosine(q_rep[i], d_rep[j]) for j in range(len(d_rep))] for i, d in enumerate(documents)}
    
    for d, d_sims in sims.items():
        sim_idx = np.argmax(d_sims)
        cache = tuple([
            (q_cache[i][0][sim_idx:sim_idx+1], q_cache[i][1][sim_idx:sim_idx+1]) for i, c in enumerate(q_cache)
        ])
        # BOS is already in the cache
        inputs = model.tokenizer(CACHE_FORMAT_QUERY.format(doc=d), return_tensors="pt", add_special_tokens=False).to(model.device)
        inputs["use_cache"] = True
        # Attend to the cache too
        inputs["attention_mask"] = torch.cat((
            torch.ones((cache[0][0].shape[0], cache[0][0].shape[2]), dtype=torch.long, device=inputs["attention_mask"].device),
            inputs["attention_mask"],
        ), dim=1)
        generation = model.generate(**inputs, max_new_tokens=256, past_key_values=cache, do_sample=False)
        decoded = model.tokenizer.batch_decode(generation)
        print(decoded[0])
    
    """
    <|user|>
    All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8X7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm.
    
    Optionally using the prior context answer the query prior to it
    <|assistant|>
    GRIT stands for generative representational instruction tuning. It is a method for training large language models to handle both generative and embedding tasks by distinguishing between them through instructions. GritLM is a large language model trained using GRIT that sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. GritLM 8X7B is a larger version of GritLM that outperforms all open generative language models that were tried while still being among the best embedding models. GRIT matches training on only generative or embedding data, thus unifying both at no performance loss. This unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at <https://github.com/ContextualAI/gritlm>.</s>
    """
    
    
    ### GRIT QUERY-DOC CACHING ###
    # cache: Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`
    d_rep, d_cache = model.encode(documents, instruction=gritlm_instruction(""), get_cache=True, add_special_tokens=False)
    q_rep, q_cache = model.encode(queries, instruction=gritlm_instruction(""), get_cache=True)
    
    from scipy.spatial.distance import cosine
    sims = {q: [1 - cosine(q_rep[i], d_rep[j]) for j in range(len(d_rep))] for i, q in enumerate(queries)}
    
    for i, (q, q_sims) in enumerate(sims.items()):
        sim_idx = np.argmax(q_sims)
        cache_query = tuple([
            (q_cache[j][0][i:i+1], q_cache[j][1][i:i+1]) for j, c in enumerate(q_cache)
        ])
        cache_doc = tuple([
            (d_cache[j][0][sim_idx:sim_idx+1], d_cache[j][1][sim_idx:sim_idx+1]) for j, c in enumerate(d_cache)
        ])
        # For DOC-QUERY simply swap the order of the cache, change the format to CACHE_FORMAT_DOC_QUERY & set add_special_tokens=True in the `model.encode(..` above
        cache = [(
            torch.cat((layer[0], cache_doc[i][0]), dim=2),
            torch.cat((layer[1], cache_doc[i][1]), dim=2),
        ) for i, layer in enumerate(cache_query)]
        # BOS is already in the cache
        inputs = model.tokenizer(CACHE_FORMAT_QUERY_DOC, return_tensors="pt", add_special_tokens=False).to(model.device)
        inputs["use_cache"] = True
        # Attend to the cache too
        inputs["attention_mask"] = torch.cat((
            torch.ones((cache[0][0].shape[0], cache[0][0].shape[2]), dtype=torch.long, device=inputs["attention_mask"].device),
            inputs["attention_mask"],
        ), dim=1)
        generation = model.generate(**inputs, max_new_tokens=256, past_key_values=cache, do_sample=False)
        decoded = model.tokenizer.batch_decode(generation)
        print(decoded[0])
    
    """
    <|user|>
    Optionally using the prior context answer the query prior to it
    <|assistant|>
    Sure, here's an example of how the prior context could be used to answer a query:
    
    Query: "What is GRIT?"
    
    Prior context: "We introduce generative representation instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions."
    
    Answer: GRIT is a method for training language models to handle both generative and embedding tasks by distinguishing between them through instructions.</s>
    """
    

Summary

This post introduced GRIT, which unifies text embedding and generation in a single model (GRITLM), along with the strategies it proposes for simplifying RAG. It also offers a useful template for multi-task training of large language models.

References

[1] Generative Representational Instruction Tuning, https://arxiv.org/abs/2402.09906

[2] GritLM code repository, https://github.com/ContextualAI/gritlm/tree/main
