Sentiment Analysis Classification with BERT + BiLSTM (Source Code Included)


Contents

1. Dataset

2. Data Cleaning and Splitting

2.1 Installing Dependencies

2.2 Cleaning and Splitting

3. Downloading the BERT Model

4. Training and Testing the Model


This article implements sentiment classification with BERT and BiLSTM. It draws on several blog posts; see the reference links at the end.

The source code is available on Gitee: bert-bilstm-in-Sentiment-classification.

1. Dataset

The dataset consists of 100,000 Weibo comments posted during the COVID-19 outbreak, each labeled with a sentiment of -1 (negative), 0 (neutral), or 1 (positive). It can be downloaded here: Weibo nCoV Data.
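A quick sanity check on the raw file before cleaning can confirm the shape and label values. A minimal sketch (the path matches the cleaning script below):

import pandas as pd

df = pd.read_csv('data/nCoV_100k_train.labled.csv')
print(df.shape)                 # roughly 100,000 rows
print(df['情感倾向'].unique())  # mostly -1/0/1 (stored as strings), plus noisy values such as '4' that the cleaning step drops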

2. Data Cleaning and Splitting

2.1 Installing Dependencies

Install the following dependencies; in mainland China, switching pip to a domestic mirror speeds up the download. A combined install command is shown after the list.

pandas
scikit-learn
matplotlib
seaborn
torch
torchvision
datasets
transformers
pytorch_lightning
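
These can be installed in one command; a typical invocation, using the Tsinghua mirror as one example of a domestic source:

pip install pandas scikit-learn matplotlib seaborn torch torchvision datasets transformers pytorch_lightning -i https://pypi.tuna.tsinghua.edu.cn/simple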

2.2 Cleaning and Splitting

The implementation is shown below.

import pandas as pd
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import seaborn as sns

# Read Data
df = pd.read_csv('data/nCoV_100k_train.labled.csv')

# Only need text and labels
df = df[['微博中文内容', '情感倾向']]
df = df.rename(columns={'微博中文内容': 'text', '情感倾向': 'label'})
print(df)

# Observing data balance
print(df.label.value_counts())
print(df.label.value_counts() / df.shape[0] * 100)
plt.figure(figsize=(8, 4))
sns.countplot(x='label', data=df)
plt.show()
# print(df_train[df_train.label > 5.0])
# print(df_train[(df_train.label < -1.1)])
# # discarding outliers
# df_train.drop(df_train[(df_train.label < -1.1) | (df_train.label > 5)].index, inplace=True, axis=0)
# df_train.reset_index(inplace=True, drop=True)
# print(df_train.label.value_counts())
# sns.countplot(x='label', data=df_train)
# plt.show()

df.drop(df[(df.label == '4') |
           (df.label == '-') |
           (df.label == '·') |
          (df.label == '-2') |
          (df.label == '10') |
           (df.label == '9')].index, inplace=True, axis=0)
df.reset_index(inplace=True, drop=True)
print(df.label.value_counts())
sns.countplot(x='label', data=df)
plt.show()


# checking for empty rows
print(df.isnull().sum())
# deleting empty row data
df.dropna(axis=0, how='any', inplace=True)
df.reset_index(inplace=True, drop=True)
print(df.isnull().sum())

# examining duplicate data
print(df.duplicated().sum())
print(df[df.duplicated()==True])
# deleting duplicate data
index = df[df.duplicated() == True].index
df.drop(index, axis=0, inplace=True)
df.reset_index(inplace=True, drop=True)
print(df.duplicated().sum())

# We also need to address duplicate data where the text is the same but the label is different
print(df['text'].duplicated().sum())
print(df[df['text'].duplicated() == True])


# viewing examples
print(df[df['text'] == df.iloc[1473]['text']])
print(df[df['text'] == df.iloc[1814]['text']])

# removing data where the text is the same but the label is different
index = df[df['text'].duplicated() == True].index
df.drop(index, axis=0, inplace=True)
df.reset_index(inplace=True, drop=True)

# checking
print(df['text'].duplicated().sum())  # 0
print(df)


# inspecting shapes and indices
print("======data-clean======")
print(df.tail())
print(df.shape)

# viewing the maximum length of text
print(df['text'].str.len().sort_values())
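# the observed lengths inform the tokenizer's max_length (set to 300 in the training script)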


# Split dataset. 0.6/0.2/0.2
train, test = train_test_split(df, test_size=0.2)
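# taking 0.25 of the remaining 80% leaves a 60/20/20 train/val/test split overall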
train, val = train_test_split(train, test_size=0.25)
print(train.shape)
print(test.shape)
print(val.shape)

train.to_csv('./data/clean/train.csv', index=None)
val.to_csv('./data/clean/val.csv', index=None)
test.to_csv('./data/clean/test.csv', index=None)

3. Downloading the BERT Model

Download the model files from Hugging Face at google-bert/bert-base-chinese. Model files downloaded from the ModelScope community may fail at runtime with the following error:

safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

If this error occurs, re-download the model files from Hugging Face.
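
Alternatively, the files can be fetched programmatically with the huggingface_hub client (note: huggingface_hub is not in the dependency list above and must be installed separately):

from huggingface_hub import snapshot_download

# Download the whole repository into the directory the training script expects.
snapshot_download(repo_id='google-bert/bert-base-chinese', local_dir='./model/bert-base-chinese')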

4. Training and Testing the Model

The training and testing code is shown below.

# https://www.kaggle.com/code/isseyice/sentiment-classification-based-on-bert-and-lstm
# https://github.com/iceissey/issey_Kaggle/blob/main/Bert_BiLSTM/BiLSTM_lighting.py#L36
# https://www.cnblogs.com/chuanzhang053/p/17653381.html
import torch
import datasets
import pandas as pd
from datasets import load_dataset  # hugging-face dataset
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
import torch.nn as nn
from transformers import BertTokenizer, BertModel
import torch.optim as optim
from torch.nn.functional import one_hot
import pytorch_lightning as pl
from pytorch_lightning import Trainer
from torchmetrics.functional import accuracy, recall, precision, f1_score  # evaluation metrics used with Lightning
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.callbacks import ModelCheckpoint


# Define hyperparameters
batch_size = 16
epochs = 5
dropout = 0.1
rnn_hidden = 768
rnn_layer = 1
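# note: rnn_layer is defined but unused; the LSTM below hard-codes num_layers=2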
class_num = 3
lr = 0.001
PATH = './model'  # model checkpoint path
# Tokenizer
token = BertTokenizer.from_pretrained('./model/bert-base-chinese')

# Custom dataset
class MydataSet(Dataset):
    def __init__(self, path, split):
        # self.dataset = load_dataset('csv', data_files=path, split=split)  # TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'.
        self.df = pd.read_csv(path)
        self.dataset = datasets.Dataset.from_pandas(self.df)

    def __getitem__(self, item):
        text = self.dataset[item]['text']
        label = self.dataset[item]['label']
        return text, label

    def __len__(self):
        return len(self.dataset)


# Define the batch collation function
def collate_fn(data):
    sents = [i[0] for i in data]
    labels = [i[1] for i in data]

    # Tokenize and encode
    data = token.batch_encode_plus(
        batch_text_or_text_pairs=sents,  # individual sentences are encoded
        truncation=True,  # truncate sentences longer than max_length
        padding='max_length',  # pad everything to max_length
        max_length=300,
        return_tensors='pt',  # return PyTorch tensors; options are tf/pt/np, default is a Python list
        return_length=True,
    )

    # input_ids: the token ids after encoding
    # attention_mask: 0 at padded positions, 1 everywhere else
    input_ids = data['input_ids']  # the encoded tokens
    attention_mask = data['attention_mask']  # 0 where padded, 1 elsewhere
    token_type_ids = data['token_type_ids']  # for sentence pairs: 0 for the first sentence and special tokens, 1 for the second
    labels = torch.LongTensor(labels)  # labels for this batch

    # print(data['length'], data['length'].max())
    return input_ids, attention_mask, token_type_ids, labels


# Model definition: a pretrained BERT encoder upstream, a bidirectional LSTM for the downstream task, and a final fully connected layer
class BiLSTMClassifier(nn.Module):
    def __init__(self, drop, hidden_dim, output_dim):
        super(BiLSTMClassifier, self).__init__()
        self.drop = drop
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim

        # Load the pretrained Chinese BERT model
        self.embedding = BertModel.from_pretrained('./model/bert-base-chinese')

        # Freeze the upstream BERT parameters (no fine-tuning of the pretrained model)
        for param in self.embedding.parameters():
            param.requires_grad_(False)
        
        # Build the downstream LSTM and fully connected layers
        self.lstm = nn.LSTM(input_size=768, hidden_size=self.hidden_dim, num_layers=2, batch_first=True,
                            bidirectional=True, dropout=self.drop)
        self.fc = nn.Linear(self.hidden_dim * 2, self.output_dim)
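        # hidden_dim * 2: the final forward and backward hidden states are concatenated before the linear layer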
        # No activation is needed when using CrossEntropyLoss: it combines softmax, log, and NLLLoss internally.

    def forward(self, input_ids, attention_mask, token_type_ids):
        embedded = self.embedding(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        # last_hidden_state (i.e., embedded[0]) holds the token embeddings we need
        embedded = embedded.last_hidden_state
        out, (h_n, c_n) = self.lstm(embedded)
        output = torch.cat((h_n[-2, :, :], h_n[-1, :, :]), dim=1)  # concatenate the last layer's final forward (h_n[-2]) and backward (h_n[-1]) hidden states
        output = self.fc(output)
        return output


# Define PyTorch Lightning
class BiLSTMLighting(pl.LightningModule):
    def __init__(self, drop, hidden_dim, output_dim):
        super(BiLSTMLighting, self).__init__()
        self.model = BiLSTMClassifier(drop, hidden_dim, output_dim)  # Set up model
        self.criterion = nn.CrossEntropyLoss()  # Set up loss function
        self.train_dataset = MydataSet('./data/clean/train.csv', 'train')
        self.val_dataset = MydataSet('./data/clean/val.csv', 'train')
        self.test_dataset = MydataSet('./data/clean/test.csv', 'train')

    def configure_optimizers(self):
        optimizer = optim.AdamW(self.parameters(), lr=lr)
        return optimizer

    def forward(self, input_ids, attention_mask, token_type_ids):  # forward(self,x)
        return self.model(input_ids, attention_mask, token_type_ids)

    def train_dataloader(self):
        train_loader = DataLoader(dataset=self.train_dataset, batch_size=batch_size, collate_fn=collate_fn,
                                  shuffle=True, num_workers=3)
        return train_loader

    def training_step(self, batch, batch_idx):
        input_ids, attention_mask, token_type_ids, labels = batch  # x, y = batch
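        # labels arrive as -1/0/1; shifting by +1 maps them to class indices 0/1/2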
        y = one_hot(labels + 1, num_classes=3)
        # Convert the one-hot labels to float
        y = y.to(dtype=torch.float)
        # forward pass
        y_hat = self.model(input_ids, attention_mask, token_type_ids)
        # y_hat = y_hat.squeeze()  # squeeze [128, 1, 3] down to [128, 3]
        loss = self.criterion(y_hat, y)  # criterion(input, target)
        self.log('train_loss', loss, prog_bar=True, logger=True, on_step=True, on_epoch=True)  # log the loss to the console
        return loss  # the loss must be returned for Lightning to use it

    def val_dataloader(self):
        val_loader = DataLoader(dataset=self.val_dataset, batch_size=batch_size, collate_fn=collate_fn, shuffle=False, num_workers=3)
        return val_loader

    def validation_step(self, batch, batch_idx):
        input_ids, attention_mask, token_type_ids, labels = batch
        y = one_hot(labels + 1, num_classes=3)
        y = y.to(dtype=torch.float)
        # forward pass
        y_hat = self.model(input_ids, attention_mask, token_type_ids)
        # y_hat = y_hat.squeeze()
        loss = self.criterion(y_hat, y)
        self.log('val_loss', loss, prog_bar=False, logger=True, on_step=True, on_epoch=True)
        return loss

    def test_dataloader(self):
        test_loader = DataLoader(dataset=self.test_dataset, batch_size=batch_size, collate_fn=collate_fn, shuffle=False, num_workers=3)
        return test_loader

    def test_step(self, batch, batch_idx):
        input_ids, attention_mask, token_type_ids, labels = batch
        target = labels + 1  # Used for calculating accuracy and F1-score later on
        y = one_hot(target, num_classes=3)
        y = y.to(dtype=torch.float)
        # forward pass
        y_hat = self.model(input_ids, attention_mask, token_type_ids)
        # y_hat = y_hat.squeeze()
        pred = torch.argmax(y_hat, dim=1)
        acc = (pred == target).float().mean()

        loss = self.criterion(y_hat, y)
        self.log('loss', loss)
        # task: Literal["binary", "multiclass", "multilabel"], corresponding to [binary classification, multiclass classification, multilabel classification]
        # average=None: outputs scores for each class separately; without it, the scores are averaged.
        re = recall(pred, target, task="multiclass", num_classes=class_num, average=None)
        pre = precision(pred, target, task="multiclass", num_classes=class_num, average=None)
        f1 = f1_score(pred, target, task="multiclass", num_classes=class_num, average=None)

        def log_score(name, scores):
            for i, score_class in enumerate(scores):
                self.log(f"{name}_class{i}", score_class)

        log_score("recall", re)
        log_score("precision", pre)
        log_score("f1", f1)
        self.log('acc', accuracy(pred, target, task="multiclass", num_classes=class_num))
        self.log('avg_recall', recall(pred, target, task="multiclass", num_classes=class_num, average="weighted"))
        self.log('avg_precision', precision(pred, target, task="multiclass", num_classes=class_num, average="weighted"))
        self.log('avg_f1', f1_score(pred, target, task="multiclass", num_classes=class_num, average="weighted"))

# Training
def train():
    # Add a checkpoint callback that keeps the best model; this is very handy
    checkpoint_callback = ModelCheckpoint(
        monitor='val_loss',  # Monitor object: 'val_loss'
        dirpath='./model/checkpoints/',  # Path to save the model
        filename='model-{epoch:02d}-{val_loss:.2f}',  # Name of the best model
        save_top_k=1,  # Save only the best one
        mode='min'  # When the monitored metric is at its lowest.
    )

    # The Trainer helps with debugging: fast dev runs, testing on a small subset of data, sanity checks, etc.
    # See the official docs: https://lightning.ai/docs/pytorch/latest/debug/debugging_basic.html
    # devices="auto" adapts to the number of available GPUs
    trainer = Trainer(max_epochs=epochs, log_every_n_steps=10, accelerator='gpu', devices="auto", fast_dev_run=False, callbacks=[checkpoint_callback])
    model = BiLSTMLighting(drop=dropout, hidden_dim=rnn_hidden, output_dim=class_num)
    trainer.fit(model)
    
    return model

# Testing
def test(model=None):
    # Load the parameters of the previously trained best model.
    if model is None:
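        # note: PATH was set to './model' above (a directory); load_from_checkpoint expects the actual .ckpt file saved by ModelCheckpoint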
        model = BiLSTMLighting.load_from_checkpoint(checkpoint_path=PATH,
                                                    drop=dropout, hidden_dim=rnn_hidden, output_dim=class_num)
    trainer = Trainer(fast_dev_run=False)
    result = trainer.test(model)
    print(result)

if __name__ == '__main__':

    model = train()
    test(model)
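
Once training has saved a checkpoint, a short inference sketch like the following can classify a single sentence. This is not part of the original script: it assumes the code above is saved as main.py, and the checkpoint filename is hypothetical.

import torch
from main import BiLSTMLighting, token, dropout, rnn_hidden, class_num

# Load the best checkpoint saved by ModelCheckpoint (hypothetical filename).
ckpt = './model/checkpoints/model-epoch=04-val_loss=0.57.ckpt'
model = BiLSTMLighting.load_from_checkpoint(ckpt, drop=dropout, hidden_dim=rnn_hidden, output_dim=class_num)
model.eval()

text = '今天天气真好'
enc = token(text, truncation=True, padding='max_length', max_length=300, return_tensors='pt')
with torch.no_grad():
    logits = model(enc['input_ids'], enc['attention_mask'], enc['token_type_ids'])
print(torch.argmax(logits, dim=1).item() - 1)  # map class index 0/1/2 back to label -1/0/1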

Run it with the following command:

python main.py

The output of the run:

root@dsw-398300-bf64cb7b7-f28cl:/mnt/workspace/bert-bilstm-in-sentiment-classification# python main.py 
2024-07-13 13:18:25.494250: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-07-13 13:18:25.933228: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-13 13:18:27.151707: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Some weights of the model checkpoint at ./model/bert-base-chinese were not used when initializing BertModel: ['cls.seq_relationship.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Missing logger folder: /mnt/workspace/bert-bilstm-in-sentiment-classification/lightning_logs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name      | Type             | Params
-----------------------------------------------
0 | model     | BiLSTMClassifier | 125 M 
1 | criterion | CrossEntropyLoss | 0     
-----------------------------------------------
23.6 M    Trainable params
102 M     Non-trainable params
125 M     Total params
503.559   Total estimated model params size (MB)
Epoch 4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4506/4506 [13:41<00:00,  5.48it/s, loss=0.52, v_num=0, train_loss_step=0.654, train_loss_epoch=0.566]`Trainer.fit` stopped: `max_epochs=5` reached.                                                                                                                                                                                                                                              
Epoch 4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4506/4506 [13:42<00:00,  5.48it/s, loss=0.52, v_num=0, train_loss_step=0.654, train_loss_epoch=0.566]
root@dsw-398300-bf64cb7b7-f28cl:/mnt/workspace/bert-bilstm-in-sentiment-classification# python main.py 
2024-07-13 15:10:59.831283: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-07-13 15:10:59.868774: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-13 15:11:00.420343: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Some weights of the model checkpoint at ./model/bert-base-chinese were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name      | Type             | Params
-----------------------------------------------
0 | model     | BiLSTMClassifier | 125 M 
1 | criterion | CrossEntropyLoss | 0     
-----------------------------------------------
23.6 M    Trainable params
102 M     Non-trainable params
125 M     Total params
503.559   Total estimated model params size (MB)
Epoch 4: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4506/4506 [13:41<00:00,  5.48it/s, loss=0.544, v_num=1, train_loss_step=0.435, train_loss_epoch=0.568]`Trainer.fit` stopped: `max_epochs=5` reached.                                                                                                                                                                                                                                              
Epoch 4: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4506/4506 [13:41<00:00,  5.48it/s, loss=0.544, v_num=1, train_loss_step=0.435, train_loss_epoch=0.568]
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1764: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
  rank_zero_warn(
Testing DataLoader 0:  61%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉                                                                                      | 690/1127 [46:08<29:13,  4.01s/it]
Testing DataLoader 0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1127/1127 [1:17:28<00:00,  4.12s/it]
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃        Test metric        ┃       DataLoader 0        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│            acc            │    0.7424814105033875     │
│          avg_f1           │    0.7335338592529297     │
│       avg_precision       │    0.7671085596084595     │
│        avg_recall         │    0.7424814105033875     │
│         f1_class0         │    0.5128293633460999     │
│         f1_class1         │    0.7843108177185059     │
│         f1_class2         │    0.6716415286064148     │
│           loss            │    0.5851244330406189     │
│     precision_class0      │     0.637470543384552     │
│     precision_class1      │    0.7558891773223877     │
│     precision_class2      │    0.7077022790908813     │
│       recall_class0       │    0.4741062521934509     │
│       recall_class1       │    0.8350556492805481     │
│       recall_class2       │    0.6949878334999084     │
└───────────────────────────┴───────────────────────────┘
[{'loss': 0.5851244330406189, 'recall_class0': 0.4741062521934509, 'recall_class1': 0.8350556492805481, 'recall_class2': 0.6949878334999084, 'precision_class0': 0.637470543384552, 'precision_class1': 0.7558891773223877, 'precision_class2': 0.7077022790908813, 'f1_class0': 0.5128293633460999, 'f1_class1': 0.7843108177185059, 'f1_class2': 0.6716415286064148, 'acc': 0.7424814105033875, 'avg_recall': 0.7424814105033875, 'avg_precision': 0.7671085596084595, 'avg_f1': 0.7335338592529297}]
root@dsw-398300-bf64cb7b7-f28cl:/mnt/workspace/bert-bilstm-in-sentiment-classification# 

References:

[1] 使用huggingface实现BERT+BILSTM情感3分类(附数据集源代码), CSDN blog

[2] https://huggingface.co/google-bert/bert-base-chinese/tree/main

[3] 【NLP实战】基于Bert和双向LSTM的情感分类【上篇】, CSDN blog

[4] 【NLP实战】基于Bert和双向LSTM的情感分类【中篇】, CSDN blog

[5] https://github.com/iceissey/issey_Kaggle/tree/main/Bert_BiLSTM

[6] https://www.kaggle.com/code/isseyice/sentiment-classification-based-on-bert-and-lstm#Part-2:-Training-and-Evaluating-the-Model

[7] https://www.kaggle.com/datasets/liangqingyuan/chinese-text-multi-classification?resource=download

[8] 千言 (LUGE), a comprehensive collection of open-source Chinese datasets
