An Overall Introduction to T5 [Hands-On Code]


  • 0. Preface
  • 1. Header
  • 2. Summary
  • 3. T5 model
    • 3.1 forward
    • 3.2 Pre-training tasks
      • 3.2.1 multi sentence pairs
    • 3.3 Running tasks

0. Preface

This post gives an overview of the T5 pre-trained model and shows how to run it on a few tasks; a link to the complete code will be added later.

1. Header

import torch
from torch import nn
import torch.nn.functional as F
import transformers
# from transformers_utils import get_params
from transformers import pipeline
# ~/.cache/huggingface/hub
from transformers import AutoTokenizer, AutoConfig, AutoModel
# ~/.cache/huggingface/datasets
from datasets import load_dataset
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
from IPython.display import Image

# trainable parameters of the model
def get_params(model):
    # keep only the parameters that require gradient updates
    model_parameters = filter(lambda p: p.requires_grad, model.parameters())
    # np.prod(p.size()) is the element count of one parameter tensor, e.g. a
    # (2, 3) weight has 2 * 3 = 6 elements; summing gives the total count
    params = sum([np.prod(p.size()) for p in model_parameters])
    return params

# default: 100
mpl.rcParams['figure.dpi'] = 150
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device

'cuda'

2. Summary

  • t5: Text-To-Text Transfer Transformer
    • https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
    • https://huggingface.co/docs/transformers/model_doc/t5
Image('t5.png')

[Figure t5.png: the T5 text-to-text framework — every task is expressed as feeding text in and generating text out]
As the figure shows, the tasks include: 1. translation; 2. judging whether a sentence is linguistically acceptable (CoLA); 3. computing the similarity between two sentences (STS-B); 4. summarization.

  • CoLA: Linguistic Acceptability
    • CoLA (The Corpus of Linguistic Acceptability) is an English acceptability-judgment dataset created in 2018 by researchers at New York University. It provides a benchmark for evaluating how well NLP models judge the grammatical acceptability of sentences.

    • The dataset contains 10,657 English sentences drawn from published linguistics literature. Each sentence is labeled as acceptable or unacceptable: acceptable sentences are grammatically well-formed, while unacceptable ones exhibit syntactic or semantic problems.

    • CoLA is a typical binary classification task that probes a model's grasp of English syntax and semantics; T5 handles it purely as text generation (see the sketch after this list).

    • The dataset is part of the GLUE benchmark and is widely used for evaluating and improving language-understanding models.

  • STSB: Semantic Textual Similarity Benchmark
    • STS-B (The Semantic Textual Similarity Benchmark) is a dataset for evaluating how well NLP models capture the semantic similarity of sentence pairs. It was assembled from the SemEval STS shared tasks (2012-2017) and is also part of the GLUE benchmark.

    • The dataset contains roughly 8,600 sentence pairs drawn from news headlines, image and video captions, and forum data. Each pair is annotated with a similarity score from 0 to 5 (0 means the sentences are unrelated, 5 means they are semantically equivalent).

    • The model must predict a real-valued similarity score for each pair, so STS-B is a regression task rather than a classification task.

    • STS-B is one of the standard benchmarks for sentence- and text-similarity research and is directly relevant to applications such as question answering and text matching.
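
The figure above shows how T5 casts both of these tasks as plain text generation. The following sketch is only an illustration of that format, not code from the original post: it assumes the GLUE dataset names ("glue"/"cola", "glue"/"stsb") on the Hugging Face hub and the task prefixes used by the original T5 checkpoints ("cola sentence: ...", "stsb sentence1: ... sentence2: ..."), and it uses t5-small just to keep the download small.

from datasets import load_dataset
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("t5-small")
t5 = T5ForConditionalGeneration.from_pretrained("t5-small")

# CoLA as text-to-text: the model generates "acceptable" or "unacceptable"
cola = load_dataset("glue", "cola", split="validation")
prompt = "cola sentence: " + cola[0]["sentence"]
out = t5.generate(tok(prompt, return_tensors="pt").input_ids, max_length=5)
print(tok.decode(out[0], skip_special_tokens=True))

# STS-B as text-to-text: the model generates a score string such as "3.8"
stsb = load_dataset("glue", "stsb", split="validation")
pair = stsb[0]
prompt = f"stsb sentence1: {pair['sentence1']} sentence2: {pair['sentence2']}"
out = t5.generate(tok(prompt, return_tensors="pt").input_ids, max_length=4)
print(tok.decode(out[0], skip_special_tokens=True), "| gold:", pair["label"])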

3. T5 model

  • vocabulary size: 32128

model      params    hidden dim (d_kv * heads -> d_model)    encoder/decoder layers
t5-small   61M       512 (64*8) -> 512                       6
t5-base    223M      768 (64*12) -> 768                      12
t5-large   738M      1024 (64*16) -> 1024                    24
t5-3b      2.85B     4096 (128*32) -> 1024                   24
t5-11b     11B       16384 (128*128) -> 1024                 24
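
Before downloading a multi-gigabyte checkpoint, the table can be sanity-checked against the smallest one; this is just an optional use of the get_params helper defined above, not part of the original notebook.

# optional sanity check of the table with the smallest checkpoint
small = AutoModel.from_pretrained("t5-small")
print(format(get_params(small), ","))   # roughly 60M trainable parameters (the 61M row)
print(small.config.d_model, small.config.d_kv,
      small.config.num_heads, small.config.num_layers)  # 512 64 8 6
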
# t5-small
# t5-base
# t5-large
# t5-3b
# t5-11b
model_ckpt = 't5-3b'
# AutoModel returns the bare T5Model; T5ForConditionalGeneration (used later)
# adds an extra hidden_state -> vocab_size LM-head projection on top
model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
config = AutoConfig.from_pretrained(model_ckpt)
config

T5Config {
  "_name_or_path": "t5-3b",
  "architectures": [
    "T5WithLMHeadModel"
  ],
  "d_ff": 16384,
  "d_kv": 128,
  "d_model": 1024,
  "decoder_start_token_id": 0,
  "dense_act_fn": "relu",
  "dropout_rate": 0.1,
  "eos_token_id": 1,
  "feed_forward_proj": "relu",
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "is_gated_act": false,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "n_positions": 512,
  "num_decoder_layers": 24,
  "num_heads": 32,
  "num_layers": 24,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_max_distance": 128,
  "relative_attention_num_buckets": 32,
  "task_specific_params": {
    "summarization": {
      "early_stopping": true,
      "length_penalty": 2.0,
      "max_length": 200,
      "min_length": 30,
      "no_repeat_ngram_size": 3,
      "num_beams": 4,
      "prefix": "summarize: "
    },
    "translation_en_to_de": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to German: "
    },
    "translation_en_to_fr": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to French: "
    },
    "translation_en_to_ro": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to Romanian: "
    }
  },
  "transformers_version": "4.28.0",
  "use_cache": true,
  "vocab_size": 32128
}

From task_specific_params we can see the built-in task prefixes: 1. summarize:, 2. translate English to German:, 3. translate English to French:, 4. translate English to Romanian:.
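
These prefixes are exactly what the high-level pipeline API (imported in the header but not used so far) prepends for you. As an aside, here is a minimal hedged example, with t5-small substituted only to keep the download small.

# the task prefix is added automatically by the pipeline API
translator = pipeline("translation_en_to_de", model="t5-small")
print(translator("The house is wonderful."))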

model

T5Model(
(shared): Embedding(32128, 1024)
(encoder): T5Stack(
(embed_tokens): Embedding(32128, 1024)
(block): ModuleList(
(0): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
(relative_attention_bias): Embedding(32, 32)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseActDense(
(wi): Linear(in_features=1024, out_features=16384, bias=False)
(wo): Linear(in_features=16384, out_features=1024, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): ReLU()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1-23): 23 x T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseActDense(
(wi): Linear(in_features=1024, out_features=16384, bias=False)
(wo): Linear(in_features=16384, out_features=1024, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): ReLU()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(final_layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(decoder): T5Stack(
(embed_tokens): Embedding(32128, 1024)
(block): ModuleList(
(0): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
(relative_attention_bias): Embedding(32, 32)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseActDense(
(wi): Linear(in_features=1024, out_features=16384, bias=False)
(wo): Linear(in_features=16384, out_features=1024, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): ReLU()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1-23): 23 x T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseActDense(
(wi): Linear(in_features=1024, out_features=16384, bias=False)
(wo): Linear(in_features=16384, out_features=1024, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): ReLU()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(final_layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)

format(get_params(model), ',')

'2,851,598,336'

The model has roughly 2.85 billion trainable parameters, which matches the t5-3b name and the table above.
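
Out of curiosity, get_params can also be applied per sub-module to see where these parameters live. This is a small exploratory sketch, not from the original notebook; note that the token embedding is shared (tied) between encoder and decoder, so it is counted inside both stacks below.

# rough breakdown of where the 2.85B parameters sit
for name, module in [("shared embedding", model.shared),
                     ("encoder", model.encoder),
                     ("decoder", model.decoder)]:
    print(name, format(get_params(module), ","))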

3.1 forward

input_ids = tokenizer(
    "Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids  # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  
# preprocess: Prepend decoder_input_ids with start token which is pad token for T5Model.
# This is not needed for torch's T5ForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
input_ids

tensor([[6536, 43, 118, 2008, 24, 293, 53, 3, 9, 1782, 19, 207,
21, 25, 1]])
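
To make the _shift_right preprocessing above concrete, you can print the decoder ids before and after the call. This inspection sketch is not part of the original code; per the config printed earlier, decoder_start_token_id and pad_token_id are both 0, and _shift_right prepends that id while dropping the last position.

# inspect what model._shift_right does to the decoder inputs
raw_ids = tokenizer("Studies show that", return_tensors="pt").input_ids
print(raw_ids)                      # original ids, ending with </s> (id 1)
print(model._shift_right(raw_ids))  # 0 prepended, last token dropped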

model.eval()
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
last_hidden_states

tensor([[[-0.1611, -0.0524, 0.2812, …, -0.0113, -0.5561, -0.1034],
[-0.0441, 0.0494, 0.0101, …, 0.2337, 0.1868, 0.0204],
[-0.1586, -0.0830, -0.0067, …, 0.1704, 0.0040, 0.1689],
[-0.0349, -0.0160, 0.0020, …, 0.1688, -0.0871, 0.1037]]],
grad_fn=<MulBackward0>)

def t5_forward(model, input_ids, decoder_input_ids):
    encoder_outputs = model.encoder(input_ids=input_ids)
#     print(encoder_outputs)
    hidden_states = encoder_outputs[0]
    decoder_outputs = model.decoder(input_ids=decoder_input_ids, 
                                    encoder_hidden_states=hidden_states,)
    return decoder_outputs.last_hidden_state
t5_forward(model, input_ids, decoder_input_ids)

tensor([[[-0.1611, -0.0524, 0.2812, …, -0.0113, -0.5561, -0.1034],
[-0.0441, 0.0494, 0.0101, …, 0.2337, 0.1868, 0.0204],
[-0.1586, -0.0830, -0.0067, …, 0.1704, 0.0040, 0.1689],
[-0.0349, -0.0160, 0.0020, …, 0.1688, -0.0871, 0.1037]]],
grad_fn=<MulBackward0>)

The two outputs are identical: T5Model.forward is simply the encoder pass followed by the decoder pass.

3.2 Pre-training tasks

  • Unsupervised denoising training
    • MLM-style masking (Masked Language Model)
    • span masking: contiguous spans are replaced by sentinel tokens such as <extra_id_0>
  • Supervised training
    • seq2seq tasks such as translation and summarization

from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(model_ckpt)
model

T5ForConditionalGeneration(
(shared): Embedding(32128, 1024)
(encoder): T5Stack(
(embed_tokens): Embedding(32128, 1024)
(block): ModuleList(
(0): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
(relative_attention_bias): Embedding(32, 32)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseActDense(
(wi): Linear(in_features=1024, out_features=16384, bias=False)
(wo): Linear(in_features=16384, out_features=1024, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): ReLU()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1-23): 23 x T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseActDense(
(wi): Linear(in_features=1024, out_features=16384, bias=False)
(wo): Linear(in_features=16384, out_features=1024, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): ReLU()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(final_layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(decoder): T5Stack(
(embed_tokens): Embedding(32128, 1024)
(block): ModuleList(
(0): T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
(relative_attention_bias): Embedding(32, 32)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseActDense(
(wi): Linear(in_features=1024, out_features=16384, bias=False)
(wo): Linear(in_features=16384, out_features=1024, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): ReLU()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(1-23): 23 x T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerCrossAttention(
(EncDecAttention): T5Attention(
(q): Linear(in_features=1024, out_features=4096, bias=False)
(k): Linear(in_features=1024, out_features=4096, bias=False)
(v): Linear(in_features=1024, out_features=4096, bias=False)
(o): Linear(in_features=4096, out_features=1024, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(2): T5LayerFF(
(DenseReluDense): T5DenseActDense(
(wi): Linear(in_features=1024, out_features=16384, bias=False)
(wo): Linear(in_features=16384, out_features=1024, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
(act): ReLU()
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(final_layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(lm_head): Linear(in_features=1024, out_features=32128, bias=False)
)

Compared with T5Model, T5ForConditionalGeneration adds one final layer, (lm_head): Linear(in_features=1024, out_features=32128, bias=False), i.e. the language-model head that projects decoder hidden states onto the vocabulary.
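
A quick way to see what lm_head adds is to compare output shapes: the bare T5Model returned hidden states of width d_model = 1024 in section 3.1, whereas T5ForConditionalGeneration returns logits over the 32128-token vocabulary. A small sketch, assuming the input_ids and decoder_input_ids from section 3.1 are still in scope:

# with the LM head, the forward pass returns vocabulary logits
with torch.no_grad():
    out = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
print(out.logits.shape)  # torch.Size([1, decoder_len, 32128])
print(model.lm_head)     # Linear(in_features=1024, out_features=32128, bias=False)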

# Unsupervised denoising training
# mlm
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids

# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.item()

1.9458732604980469
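
Besides the loss, it can be instructive to let the model fill in the masked spans itself by generating from the corrupted input. This is an illustrative sketch; the decoded string interleaves the <extra_id_*> sentinel tokens with the predicted span texts.

# let the model generate its guesses for the masked spans
masked = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt")
span_pred = model.generate(masked.input_ids, max_length=20)
# expected to look like "<pad> <extra_id_0> ... <extra_id_1> ... <extra_id_2> ..."
print(tokenizer.decode(span_pred[0], skip_special_tokens=False))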

# Supervised training
# seq2seq

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.item()

0.9009745717048645
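
The same forward pass also exposes the teacher-forced predictions, which gives a feel for how close the model already is to the target translation. A small inspection sketch on the pair above, not part of the original post:

# greedy (argmax) predictions under teacher forcing for the pair above
with torch.no_grad():
    out = model(input_ids=input_ids, labels=labels)
pred_ids = out.logits.argmax(dim=-1)
print(tokenizer.decode(pred_ids[0], skip_special_tokens=True))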

3.2.1 multi sentence pairs

# the following 2 hyperparameters are task-specific
max_source_length = 512
max_target_length = 128

# Suppose we have the following 2 training examples:
input_sequence_1 = "Welcome to NYC"
output_sequence_1 = "Bienvenue à NYC"

input_sequence_2 = "HuggingFace is a company"
output_sequence_2 = "HuggingFace est une entreprise"

# encode the inputs
task_prefix = "translate English to French: "
input_sequences = [input_sequence_1, input_sequence_2]

encoding = tokenizer(
    [task_prefix + sequence for sequence in input_sequences],
    padding="longest",
    max_length=max_source_length,
    truncation=True,
    return_tensors="pt",
)

input_ids, attention_mask = encoding.input_ids, encoding.attention_mask

# encode the targets
target_encoding = tokenizer(
    [output_sequence_1, output_sequence_2],
    padding="longest",
    max_length=max_target_length,
    truncation=True,
    return_tensors="pt",
)
labels = target_encoding.input_ids

# replace padding token id's of the labels by -100 so it's ignored by the loss
labels[labels == tokenizer.pad_token_id] = -100

# forward pass
loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
loss.item()

0.19245588779449463
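
This batched loss is exactly what a fine-tuning loop would minimize. Below is a minimal single-step sketch; the AdamW optimizer and the learning rate are illustrative choices, not from the original post, and a checkpoint as large as t5-3b would normally need a large-memory GPU (or a smaller checkpoint) for this.

# one illustrative fine-tuning step on the batch prepared above
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss.item())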

3.3 Running tasks

input_ids = tokenizer.encode("translate English to German: Hello, my dog is cute", return_tensors="pt") 
result = model.generate(input_ids)
tokenizer.decode(result[0])

' Hallo, mein Hund ist süß'

# inference
input_ids = tokenizer(
    "summarize: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.", 
    return_tensors="pt"
).input_ids  # Batch size 1
outputs = model.generate(input_ids, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

transfer learning has emerged as a powerful technique in natural language processing (NLP) in this paper, we explore the landscape of transfer learning techniques for NLP. we introduce a unified framework that converts every language problem into a text-to-text format.

Reference:
https://github.com/chunhuizhang/bert_t5_gpt/blob/main/tutorials/09_t5_overall.ipynb
