Text Summarization with the FLAN-T5 Model

Text Summarization with FLAN-T5 — ROCm Blogs (amd.com)

In this blog, we show how to fine-tune the FLAN-T5 language model with Hugging Face on an AMD GPU + ROCm system to perform text summarization.

Introduction

FLAN-T5 is an open-source large language model released by Google and an enhancement over the earlier T5 model. It is an encoder-decoder model that has been instruction-tuned on datasets of task instructions, which means the model can perform specific tasks such as summarization, classification, and translation out of the box. For more details on FLAN-T5, see the [original paper](https://arxiv.org/pdf/2210.11416.pdf). For the full list of improvements over the previous T5 model, see [this model card](https://huggingface.co/docs/transformers/model_doc/t5v1.1).

Prerequisites

• ROCm (see the ROCm quick start installation guide for Linux)
• PyTorch (see Installing PyTorch for ROCm)
• A Linux OS (see the ROCm system requirements for Linux)
• An AMD GPU (see the ROCm system requirements for Linux)
Make sure the system recognizes your GPU:

! rocm-smi --showproductname
================= ROCm System Management Interface ================
========================= Product Info ============================
GPU[0] : Card series: Instinct MI210
GPU[0] : Card model: 0x0c34
GPU[0] : Card vendor: Advanced Micro Devices, Inc. [AMD/ATI]
GPU[0] : Card SKU: D67301
===================================================================
===================== End of ROCm SMI Log =========================

Let's check that the correct version of ROCm is installed.

!apt show rocm-libs -a
Package: rocm-libs
Version: 5.7.0.50700-63~22.04
Priority: optional
Section: devel
Maintainer: ROCm Libs Support <rocm-libs.support@amd.com>
Installed-Size: 13.3 kB
Depends: hipblas (= 1.1.0.50700-63~22.04), hipblaslt (= 0.3.0.50700-63~22.04), hipfft (= 1.0.12.50700-63~22.04), hipsolver (= 1.8.1.50700-63~22.04), hipsparse (= 2.3.8.50700-63~22.04), miopen-hip (= 2.20.0.50700-63~22.04), rccl (= 2.17.1.50700-63~22.04), rocalution (= 2.1.11.50700-63~22.04), rocblas (= 3.1.0.50700-63~22.04), rocfft (= 1.0.23.50700-63~22.04), rocrand (= 2.10.17.50700-63~22.04), rocsolver (= 3.23.0.50700-63~22.04), rocsparse (= 2.5.4.50700-63~22.04), rocm-core (= 5.7.0.50700-63~22.04), hipblas-dev (= 1.1.0.50700-63~22.04), hipblaslt-dev (= 0.3.0.50700-63~22.04), hipcub-dev (= 2.13.1.50700-63~22.04), hipfft-dev (= 1.0.12.50700-63~22.04), hipsolver-dev (= 1.8.1.50700-63~22.04), hipsparse-dev (= 2.3.8.50700-63~22.04), miopen-hip-dev (= 2.20.0.50700-63~22.04), rccl-dev (= 2.17.1.50700-63~22.04), rocalution-dev (= 2.1.11.50700-63~22.04), rocblas-dev (= 3.1.0.50700-63~22.04), rocfft-dev (= 1.0.23.50700-63~22.04), rocprim-dev (= 2.13.1.50700-63~22.04), rocrand-dev (= 2.10.17.50700-63~22.04), rocsolver-dev (= 3.23.0.50700-63~22.04), rocsparse-dev (= 2.5.4.50700-63~22.04), rocthrust-dev (= 2.18.0.50700-63~22.04), rocwmma-dev (= 1.2.0.50700-63~22.04)
Homepage: https://github.com/RadeonOpenCompute/ROCm
Download-Size: 1012 B
APT-Manual-Installed: yes
APT-Sources: http://repo.radeon.com/rocm/apt/5.7 jammy/main amd64 Packages
Description: Radeon Open Compute (ROCm) Runtime software stack

Make sure PyTorch also recognizes the GPU:

import torch
print(f"number of GPUs: {torch.cuda.device_count()}")
print([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())])
number of GPUs: 1
['AMD Radeon Graphics']
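
As an extra sanity check (not part of the original blog), you can also confirm that this PyTorch build targets ROCm: on ROCm wheels, `torch.version.hip` reports the HIP version, while CUDA builds report None here.

print(torch.version.hip)  # HIP/ROCm version string on ROCm builds; None on CUDA builds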

Before you begin, make sure you have installed all the required libraries:

!pip install -q transformers accelerate einops datasets
!pip install --upgrade SQLAlchemy==1.4.46
!pip install -q alembic==1.4.1 numpy==1.23.4 grpcio-status==1.33.2 protobuf==3.19.6 
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

Next, import the modules used in this blog:

import time 
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Seq2SeqTrainingArguments, Seq2SeqTrainer, DataCollatorForSeq2Seq

Loading the Model

Let's load the model and its tokenizer. FLAN-T5 comes in several sizes, from `small` to `xxl`. We will first run some inference with the `xxl` variant, then show how to fine-tune FLAN-T5 on a text summarization task using the `small` variant.

start_time = time.time()
model_checkpoint = "google/flan-t5-xxl"
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
print(f"Loaded in {time.time() - start_time: .2f} seconds")
print(model)
Loading checkpoint shards: 100%|██████████| 5/5 [01:23<00:00, 16.69s/it]
Loaded in  85.46 seconds
T5ForConditionalGeneration(
  (shared): Embedding(32128, 4096)
  (encoder): T5Stack(
    (embed_tokens): Embedding(32128, 4096)
    (block): ModuleList(
      (0): T5Block(
        (layer): ModuleList(
          (0): T5LayerSelfAttention(
            (SelfAttention): T5Attention(
              (q): Linear(in_features=4096, out_features=4096, bias=False)
              (k): Linear(in_features=4096, out_features=4096, bias=False)
              (v): Linear(in_features=4096, out_features=4096, bias=False)
              (o): Linear(in_features=4096, out_features=4096, bias=False)
              (relative_attention_bias): Embedding(32, 64)
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): T5LayerFF(
            (DenseReluDense): T5DenseGatedActDense(
              (wi_0): Linear(in_features=4096, out_features=10240, bias=False)
              (wi_1): Linear(in_features=4096, out_features=10240, bias=False)
              (wo): Linear(in_features=10240, out_features=4096, bias=False)
              (dropout): Dropout(p=0.1, inplace=False)
              (act): NewGELUActivation()
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (1-23): 23 x T5Block(
        (layer): ModuleList(
          (0): T5LayerSelfAttention(
            (SelfAttention): T5Attention(
              (q): Linear(in_features=4096, out_features=4096, bias=False)
              (k): Linear(in_features=4096, out_features=4096, bias=False)
              (v): Linear(in_features=4096, out_features=4096, bias=False)
              (o): Linear(in_features=4096, out_features=4096, bias=False)
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): T5LayerFF(
            (DenseReluDense): T5DenseGatedActDense(
              (wi_0): Linear(in_features=4096, out_features=10240, bias=False)
              (wi_1): Linear(in_features=4096, out_features=10240, bias=False)
              (wo): Linear(in_features=10240, out_features=4096, bias=False)
              (dropout): Dropout(p=0.1, inplace=False)
              (act): NewGELUActivation()
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
    )
    (final_layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
    (dropout): Dropout(p=0.1, inplace=False)
  )
  (decoder): T5Stack(
    (embed_tokens): Embedding(32128, 4096)
    (block): ModuleList(
      (0): T5Block(
        (layer): ModuleList(
          (0): T5LayerSelfAttention(
            (SelfAttention): T5Attention(
              (q): Linear(in_features=4096, out_features=4096, bias=False)
              (k): Linear(in_features=4096, out_features=4096, bias=False)
              (v): Linear(in_features=4096, out_features=4096, bias=False)
              (o): Linear(in_features=4096, out_features=4096, bias=False)
              (relative_attention_bias): Embedding(32, 64)
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): T5LayerCrossAttention(
            (EncDecAttention): T5Attention(
              (q): Linear(in_features=4096, out_features=4096, bias=False)
              (k): Linear(in_features=4096, out_features=4096, bias=False)
              (v): Linear(in_features=4096, out_features=4096, bias=False)
              (o): Linear(in_features=4096, out_features=4096, bias=False)
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): T5LayerFF(
            (DenseReluDense): T5DenseGatedActDense(
              (wi_0): Linear(in_features=4096, out_features=10240, bias=False)
              (wi_1): Linear(in_features=4096, out_features=10240, bias=False)
              (wo): Linear(in_features=10240, out_features=4096, bias=False)
              (dropout): Dropout(p=0.1, inplace=False)
              (act): NewGELUActivation()
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (1-23): 23 x T5Block(
        (layer): ModuleList(
          (0): T5LayerSelfAttention(
            (SelfAttention): T5Attention(
              (q): Linear(in_features=4096, out_features=4096, bias=False)
              (k): Linear(in_features=4096, out_features=4096, bias=False)
              (v): Linear(in_features=4096, out_features=4096, bias=False)
              (o): Linear(in_features=4096, out_features=4096, bias=False)
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (1): T5LayerCrossAttention(
            (EncDecAttention): T5Attention(
              (q): Linear(in_features=4096, out_features=4096, bias=False)
              (k): Linear(in_features=4096, out_features=4096, bias=False)
              (v): Linear(in_features=4096, out_features=4096, bias=False)
              (o): Linear(in_features=4096, out_features=4096, bias=False)
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (2): T5LayerFF(
            (DenseReluDense): T5DenseGatedActDense(
              (wi_0): Linear(in_features=4096, out_features=10240, bias=False)
              (wi_1): Linear(in_features=4096, out_features=10240, bias=False)
              (wo): Linear(in_features=10240, out_features=4096, bias=False)
              (dropout): Dropout(p=0.1, inplace=False)
              (act): NewGELUActivation()
            )
            (layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
    )
    (final_layer_norm): FusedRMSNorm(torch.Size([4096]), eps=1e-06, elementwise_affine=True)
    (dropout): Dropout(p=0.1, inplace=False)
  )
  (lm_head): Linear(in_features=4096, out_features=32128, bias=False)
)

Running Inference

Notably, we can use the FLAN-T5 model directly, without any fine-tuning. Let's start with some simple inference.

For example, we can ask the model a simple question:

inputs = tokenizer("How to make milk coffee", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Pour a cup of coffee into a mug. Add a tablespoon of milk. Add a pinch of sugar.']
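
Note that, as written, the generation above runs on the CPU. To run it on the GPU instead, move the model and the inputs to the device (a minimal sketch not shown in the blog; on ROCm, AMD GPUs are addressed through PyTorch's `cuda` API):

model = model.to("cuda")  # on ROCm, torch.cuda targets the AMD GPU
inputs = tokenizer("How to make milk coffee", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))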

Or we can ask it to summarize a passage of text:

text = """ summarize: 
Amy: Hey Mark, have you heard about the new movie coming out this weekend?
Mark: Oh, no, I haven't. What's it called?
Amy: It's called "Stellar Odyssey." It's a sci-fi thriller with amazing special effects.
Mark: Sounds interesting. Who's in it?
Amy: The main lead is Emily Stone, and she's fantastic in the trailer. The plot revolves around a journey to a distant galaxy.
Mark: Nice! I'm definitely up for a good sci-fi flick. Want to catch it together on Saturday?
Amy: Sure, that sounds great! Let's meet at the theater around 7 pm.
"""
inputs = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
tokenizer.decode(outputs[0], skip_special_tokens=True)
'Amy and Mark are going to see "Stellar Odyssey" on Saturday at 7 pm.'

Fine-Tuning

In this section, we fine-tune the model for the summarization task, using the code from this tutorial as our guide. As mentioned, we will fine-tune the `small` variant of the model:

start_time = time.time()
model_checkpoint = "google/flan-t5-small"
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
print(f"Loaded in {time.time() - start_time: .2f} seconds")

Loading the Dataset

Our example dataset is the samsum dataset, which contains about 16K Messenger-like conversations with summaries.

from datasets import load_dataset
from evaluate import load

raw_datasets = load_dataset("samsum")

Here is a sample from our dataset:

print('Dialogue: ')
print(raw_datasets['train']['dialogue'][100])
print() 
print('Summary: ', raw_datasets['train']['summary'][100])
Dialogue: 
Gabby: How is you? Settling into the new house OK?
Sandra: Good. The kids and the rest of the menagerie are doing fine. The dogs absolutely love the new garden. Plenty of room to dig and run around.
Gabby: What about the hubby?
Sandra: Well, apart from being his usual grumpy self I guess he's doing OK.
Gabby: :-D yeah sounds about right for Jim.
Sandra: He's a man of few words. No surprises there. Give him a backyard shed and that's the last you'll see of him for months.
Gabby: LOL that describes most men I know.
Sandra: Ain't that the truth! 
Gabby: Sure is. :-) My one might as well move into the garage. Always tinkering and building something in there.
Sandra: Ever wondered what he's doing in there?
Gabby: All the time. But he keeps the place locked.
Sandra: Prolly building a portable teleporter or something. ;-)
Gabby: Or a time machine... LOL
Sandra: Or a new greatly improved Rabbit :-P
Gabby: I wish... Lmfao!

Summary:  Sandra is setting into the new house; her family is happy with it. Then Sandra and Gabby discuss the nature of their men and laugh about their habit of spending time in the garage or a shed.

Setting Up the Metric

Next, we load the metric for this task. For summarization, we typically use the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metrics, which quantify how similar a summary is to the original document. More specifically, these metrics measure the overlap of n-grams (sequences of n consecutive words) between the system summary and the reference summary. For more details on these metrics, see their documentation.

from evaluate import load
metric = load("rouge")
print(metric)
EvaluationModule(name: "rouge", module_type: "metric", features: [{'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), length=-1, id=None)}, {'predictions': Value(dtype='string', id='sequence'), 'references': Value(dtype='string', id='sequence')}], usage: """
Calculates average rouge scores for a list of hypotheses and references
Args:
    predictions: list of predictions to score. Each prediction
        should be a string with tokens separated by spaces.
    references: list of reference for each prediction. Each
        reference should be a string with tokens separated by spaces.
    rouge_types: A list of rouge types to calculate.
        Valid names:
        `"rouge{n}"` (e.g. `"rouge1"`, `"rouge2"`) where: {n} is the n-gram based scoring,
        `"rougeL"`: Longest common subsequence based scoring.
        `"rougeLsum"`: rougeLsum splits text using `"
"`.
        See details in https://github.com/huggingface/datasets/issues/617
    use_stemmer: Bool indicating whether Porter stemmer should be used to strip word suffixes.
    use_aggregator: Return aggregates if this is set to True
Returns:
    rouge1: rouge_1 (f1),
    rouge2: rouge_2 (f1),
    rougeL: rouge_l (f1),
    rougeLsum: rouge_lsum (f1)
Examples:

    >>> rouge = evaluate.load('rouge')
    >>> predictions = ["hello there", "general kenobi"]
    >>> references = ["hello there", "general kenobi"]
    >>> results = rouge.compute(predictions=predictions, references=references)
    >>> print(results)
    {'rouge1': 1.0, 'rouge2': 1.0, 'rougeL': 1.0, 'rougeLsum': 1.0}
""", stored examples: 0)

We need to create a function that computes the ROUGE metrics:

import nltk
nltk.download('punkt')
import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # We need to replace -100 in the labels since we can't decode it 
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    
    # Add a newline after each sentence, since rougeLsum expects newline-separated sentences
    decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
    
    # compute metrics 
    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True, use_aggregator=True)
    # Extract a few results
    result = {key: value * 100 for key, value in result.items()}
    
    # compute the average length of the generated text
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = np.mean(prediction_lens)
    
    return {k: round(v, 4) for k, v in result.items()}

Processing the Data

Let's create a function to process the data, which involves tokenizing the input and output for each sample document. We also set length thresholds for truncating the inputs and outputs.

prefix = "summarize: "

max_input_length = 1024
max_target_length = 128

def preprocess_function(examples):
    inputs = [prefix + doc for doc in examples["dialogue"]]
    model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)

    # Setup the tokenizer for targets
    labels = tokenizer(text_target=examples["summary"], max_length=max_target_length, truncation=True)

    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
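
As a quick sanity check (not part of the original tutorial), we can inspect one processed example to confirm that the prefix was added and the lengths are within our thresholds:

sample = tokenized_datasets["train"][0]
print(len(sample["input_ids"]), len(sample["labels"]))  # both should be within the truncation limits
print(tokenizer.decode(sample["input_ids"][:10]))       # should start with "summarize: "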

Training the Model

To train our model, we need a few things:

1. A data collator, which dynamically pads sentences to the longest length in each batch during collation, rather than padding the entire dataset to the maximum length (see the quick check after the collator is created below).
2. A `Seq2SeqTrainingArguments` class for customizing how the model is trained.
3. A Trainer class, an API for training in PyTorch.

First, we create the data collator:

data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
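
To see the dynamic padding in action (an illustrative check, not in the original blog), we can collate two examples of different lengths; the collator pads `input_ids` with the tokenizer's pad token and pads `labels` with -100 so padded positions are ignored by the loss:

features = [
    {k: tokenized_datasets["train"][i][k] for k in ("input_ids", "attention_mask", "labels")}
    for i in range(2)
]
batch = data_collator(features)
print(batch["input_ids"].shape, batch["labels"].shape)  # both padded to the longest sequence in the batch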

Next, let's set up our `Seq2SeqTrainingArguments`:

batch_size = 16
model_name = model_checkpoint.split("/")[-1]
args = Seq2SeqTrainingArguments(
    f"{model_name}-finetuned-samsum",
    evaluation_strategy = "epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=2,
    predict_with_generate=True,
    fp16=False,
    push_to_hub=False,
)

Note: we found that, because the model was pre-trained on Google TPUs rather than GPUs, we need to set `fp16=False` or `bf16=True`. Otherwise we run into overflow issues that produce NaN loss values. This is likely due to the differences between the half-precision floating-point formats `fp16` and `bf16`.
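
For instance, on hardware with bfloat16 support, such as the MI210 used here, the mixed-precision alternative would be constructed like this (a sketch; all other arguments stay the same):

args = Seq2SeqTrainingArguments(
    f"{model_name}-finetuned-samsum",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=2,
    predict_with_generate=True,
    bf16=True,  # bfloat16 keeps fp32's exponent range, avoiding the fp16 overflow/NaN issue
    push_to_hub=False,
)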

Finally, we need to set up the Trainer API:

trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics
)

With that, we are ready to train our model!

trainer.train()
 [1842/1842 05:37, Epoch 2/2]
Epoch   Training Loss   Validation Loss   Rouge1      Rouge2      Rougel      Rougelsum   Gen Len
1       1.865700        1.693366          43.551000   20.046200   36.170400   40.096200   16.926700
2       1.816700        1.685862          43.506000   19.934800   36.278300   40.156700   16.837400

Running the trainer above produces a local folder, `flan-t5-small-finetuned-samsum`, that stores our model checkpoints.
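
If you prefer a fixed path over the numbered checkpoints, you can also save the final model and tokenizer explicitly (optional; the path here is just an example):

trainer.save_model("flan-t5-small-finetuned-samsum/final")    # writes the model weights and config
tokenizer.save_pretrained("flan-t5-small-finetuned-samsum/final")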

Inference

Once we have the fine-tuned model, we can use it for inference! Let's first reload the tokenizer and the fine-tuned model from our local checkpoint.

model = AutoModelForSeq2SeqLM.from_pretrained("flan-t5-small-finetuned-samsum/checkpoint-1500")
tokenizer = AutoTokenizer.from_pretrained("flan-t5-small-finetuned-samsum/checkpoint-1500")

Next, we need some text to summarize. It is important to prepend the prefix, as shown below:

text = """ summarize: 
Hannah: Hey, Mark, have you decided on your New Year's resolution yet?
Mark: Yeah, I'm thinking of finally hitting the gym regularly. What about you?
Hannah: I'm planning to read more books this year, at least one per month.
Mark: That sounds like a great goal. Any particular genre you're interested in?
Hannah: I want to explore more classic literature. Maybe start with some Dickens or Austen.
Mark: Nice choice. I'll hold you to it. We can discuss our progress over coffee.
Hannah: Deal! Accountability partners it is.
"""

Finally, we encode the input and generate the summary:

inputs = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
tokenizer.decode(outputs[0], skip_special_tokens=True)
'Hannah is planning to read more books this year. Mark will hold Hannah to it.'
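
Equivalently, the fine-tuned checkpoint can be wrapped in a Hugging Face pipeline for convenience (a usage sketch, assuming the same local checkpoint path as above):

from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="flan-t5-small-finetuned-samsum/checkpoint-1500",
    tokenizer="flan-t5-small-finetuned-samsum/checkpoint-1500",
)
print(summarizer(text, max_new_tokens=100)[0]["summary_text"])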
