Python ResNet example: a ResNet-based medical image classification task with residual networks (hands-on medical image processing)


1. Dataset introduction

Data preprocessing:

All images are processed to the same size:

The datasets:

PathMNIST: histology slides of colorectal cancer; ChestMNIST: a chest X-ray dataset derived from the NIH-ChestXray14 dataset; DermaMNIST: multi-source dermatoscopic images of pigmented skin lesions; and others.

Let's look at one dataset in detail:

"pathmnist": {
        "description": "PathMNIST: A dataset based on a prior study for predicting survival from colorectal cancer histology slides, which provides a dataset NCT-CRC-HE-100K of 100,000 non-overlapping image patches from hematoxylin & eosin stained histological images, and a test dataset CRC-VAL-HE-7K of 7,180 image patches from a different clinical center. 9 types of tissues are involved, resulting a multi-class classification task. We resize the source images of 3 x 224 x 224 into 3 x 28 x 28, and split NCT-CRC-HE-100K into training and valiation set with a ratio of 9:1.",
        "url": "https://zenodo.org/record/4269852/files/pathmnist.npz?download=1",
        "MD5": "a8b06965200029087d5bd730944a56c1",
        "task": "multi-class",
        "label": {
            "0": "adipose", 
            "1": "background",
            "2": "debris",
            "3": "lymphocytes",
            "4": "mucus",
            "5": "smooth muscle",
            "6": "normal colon mucosa",
            "7": "cancer-associated stroma",
            "8": "colorectal adenocarcinoma epithelium"
        }
    }
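
To get a feel for the raw data, you can load the .npz file directly. A minimal sketch, assuming pathmnist.npz has already been downloaded to ./medmnist (the array keys come from dataset.py below):

import numpy as np

npz = np.load('./medmnist/pathmnist.npz')
print(npz.files)                      # train/val/test images and labels
print(npz['train_images'].shape)      # e.g. (N, 28, 28, 3) uint8 patches
print(npz['train_labels'].shape)      # e.g. (N, 1) integer class labels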

2. Basic network structure

The basic ResNet structure:

To overcome the degradation problem of plain CNNs, where accuracy gets worse as depth increases, we use a residual network.

The core module is the residual block, formulated below. The effect: residual networks keep improving as depth grows, where plain networks of the same depth start to degrade.
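
In symbols, a residual block computes the standard ResNet mapping:

y = F(x, {W_i}) + x

where F is the residual branch (two 3x3 conv + batch-norm layers in the BasicBlock below) and x passes through an identity shortcut, or a 1x1 projection when the shapes differ. Because the gradient of y with respect to x is dF/dx + I, every block provides an identity path for gradients, which is what lets very deep stacks keep training.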

 

3. Building the model

The basic code units: dataset.py, models.py, evaluator.py, train.py

First, the model code in models.py:

import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_planes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(
            in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)

        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion*planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion*planes,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion*planes)
            )

    def forward(self, x):
        #print(x.shape)
        out = F.relu(self.bn1(self.conv1(x)))
        #print(out.shape)
        out = self.bn2(self.conv2(out))
        #print(out.shape)
        out += self.shortcut(x)
        #print(out.shape)
        out = F.relu(out)
        return out


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_planes, planes, stride=1):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, self.expansion *
                               planes, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(self.expansion*planes)

        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion*planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion*planes,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion*planes)
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self, block, num_blocks, in_channels=1, num_classes=2):
        super(ResNet, self).__init__()
        self.in_planes = 64

        self.conv1 = nn.Conv2d(in_channels, 64, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1]*(num_blocks-1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        #print(x.shape)
        out = F.relu(self.bn1(self.conv1(x)))
        #print(out.shape)
        out = self.layer1(out)  # channel count unchanged (64 in, 64 out), so the shortcuts need no projection
        #print(out.shape)
        out = self.layer2(out)
        #print(out.shape)
        out = self.layer3(out)
        #print(out.shape)
        out = self.layer4(out)
        #print(out.shape)
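        # with the 28x28 MedMNIST inputs, the feature map is 4x4 at this point,
        # so this average pooling reduces it to 1x1 before the linear layer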
        out = F.avg_pool2d(out, 4)
        #print(out.shape)
        out = out.view(out.size(0), -1)
        #print(out.shape)
        out = self.linear(out)
        #print(out.shape)
        return out


def ResNet18(in_channels, num_classes):
    return ResNet(BasicBlock, [2, 2, 2, 2], in_channels=in_channels, num_classes=num_classes)


def ResNet50(in_channels, num_classes):
    return ResNet(Bottleneck, [3, 4, 6, 3], in_channels=in_channels, num_classes=num_classes)
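
A quick shape sanity check (a minimal sketch, assuming the module is importable as medmnist.models and inputs are 3-channel 28x28 images as in PathMNIST):

import torch
from medmnist.models import ResNet18

model = ResNet18(in_channels=3, num_classes=9)
x = torch.randn(2, 3, 28, 28)   # dummy batch of two RGB 28x28 images
logits = model(x)
print(logits.shape)             # expected: torch.Size([2, 9])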

Data handling: dataset.py

import os
import json
import numpy as np
from PIL import Image
from torch.utils.data import Dataset


INFO = "medmnist/medmnist.json"


class MedMNIST(Dataset):

    flag = ...

    def __init__(self, root, split='train', transform=None, target_transform=None, download=False):
        ''' dataset
        :param split: 'train', 'val' or 'test', select subset
        :param transform: data transformation
        :param target_transform: target transformation
    
        '''

        with open(INFO, 'r') as f:
            self.info = json.load(f)[self.flag]

        self.root = root

        if download:
            self.download()

        if not os.path.exists(os.path.join(self.root, "{}.npz".format(self.flag))):
            raise RuntimeError('Dataset not found.' +
                               ' You can use download=True to download it')

        npz_file = np.load(os.path.join(self.root, "{}.npz".format(self.flag)))

        self.split = split
        self.transform = transform
        self.target_transform = target_transform

        if self.split == 'train':
            self.img = npz_file['train_images']
            self.label = npz_file['train_labels']
        elif self.split == 'val':
            self.img = npz_file['val_images']
            self.label = npz_file['val_labels']
        elif self.split == 'test':
            self.img = npz_file['test_images']
            self.label = npz_file['test_labels']
        else:
            raise ValueError("split should be 'train', 'val' or 'test'")

    def __getitem__(self, index):
        img, target = self.img[index], self.label[index].astype(int)
        img = Image.fromarray(np.uint8(img))

        if self.transform is not None:
            img = self.transform(img)

        if self.target_transform is not None:
            target = self.target_transform(target)

        return img, target

    def __len__(self):
        return self.img.shape[0]

    def __repr__(self):
        '''Adapted from torchvision.
        '''
        _repr_indent = 4
        head = "Dataset " + self.__class__.__name__
        
        body = ["Number of datapoints: {}".format(self.__len__())]
        body.append("Root location: {}".format(self.root))
        body.append("Split: {}".format(self.split))
        body.append("Task: {}".format(self.info["task"]))
        body.append("Number of channels: {}".format(self.info["n_channels"]))
        body.append("Meaning of labels: {}".format(self.info["label"]))
        body.append("Number of samples: {}".format(self.info["n_samples"]))
        body.append("Description: {}".format(self.info["description"]))
        body.append("License: {}".format(self.info["license"]))

        if hasattr(self, "transforms") and self.transforms is not None:
            body += [repr(self.transforms)]
        lines = [head] + [" " * _repr_indent + line for line in body]
        return '\n'.join(lines)

    def download(self):
        try:
            from torchvision.datasets.utils import download_url
            download_url(url=self.info["url"], root=self.root,
                         filename="{}.npz".format(self.flag), md5=self.info["MD5"])
        except Exception:
            raise RuntimeError('Something went wrong when downloading! ' +
                               'Go to the homepage to download manually. ' +
                               'https://github.com/MedMNIST/MedMNIST')


class PathMNIST(MedMNIST):
    flag = "pathmnist"


class OCTMNIST(MedMNIST):
    flag = "octmnist"


class PneumoniaMNIST(MedMNIST):
    flag = "pneumoniamnist"


class ChestMNIST(MedMNIST):
    flag = "chestmnist"


class DermaMNIST(MedMNIST):
    flag = "dermamnist"


class RetinaMNIST(MedMNIST):
    flag = "retinamnist"


class BreastMNIST(MedMNIST):
    flag = "breastmnist"


class OrganMNISTAxial(MedMNIST):
    flag = "organmnist_axial"


class OrganMNISTCoronal(MedMNIST):
    flag = "organmnist_coronal"


class OrganMNISTSagittal(MedMNIST):
    flag = "organmnist_sagittal"

Training the model: train.py

import os
import argparse
import json
from tqdm import trange
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import torchvision.transforms as transforms

from medmnist.models import ResNet18, ResNet50
from medmnist.dataset import INFO, PathMNIST, ChestMNIST, DermaMNIST, OCTMNIST, PneumoniaMNIST, RetinaMNIST, \
    BreastMNIST, OrganMNISTAxial, OrganMNISTCoronal, OrganMNISTSagittal
from medmnist.evaluator import getAUC, getACC, save_results


def main(flag, input_root, output_root, end_epoch, download):
    ''' main function
    :param flag: name of subset

    '''

    dataclass = {
        "pathmnist": PathMNIST,
        "chestmnist": ChestMNIST,
        "dermamnist": DermaMNIST,
        "octmnist": OCTMNIST,
        "pneumoniamnist": PneumoniaMNIST,
        "retinamnist": RetinaMNIST,
        "breastmnist": BreastMNIST,
        "organmnist_axial": OrganMNISTAxial,
        "organmnist_coronal": OrganMNISTCoronal,
        "organmnist_sagittal": OrganMNISTSagittal,
    }

    with open(INFO, 'r') as f:
        info = json.load(f)
        task = info[flag]['task']
        n_channels = info[flag]['n_channels']
        n_classes = len(info[flag]['label'])

    start_epoch = 0
    lr = 0.001
    batch_size = 128
    val_auc_list = []
    dir_path = os.path.join(output_root, '%s_checkpoints' % (flag))
    if not os.path.exists(dir_path):
        os.makedirs(dir_path)

    print('==> Preparing data...')
    train_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[.5], std=[.5])
    ])

    val_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[.5], std=[.5])
    ])

    test_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[.5], std=[.5])
    ])

    train_dataset = dataclass[flag](root=input_root, split='train', transform=train_transform, download=download)
    train_loader = data.DataLoader(
        dataset=train_dataset, batch_size=batch_size, shuffle=True)
    val_dataset = dataclass[flag](root=input_root, split='val', transform=val_transform, download=download)
    val_loader = data.DataLoader(
        dataset=val_dataset, batch_size=batch_size, shuffle=True)
    test_dataset = dataclass[flag](root=input_root, split='test', transform=test_transform, download=download)
    test_loader = data.DataLoader(
        dataset=test_dataset, batch_size=batch_size, shuffle=True)

    print('==> Building and training model...')
    print(torch.cuda.is_available())
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = ResNet18(in_channels=n_channels, num_classes=n_classes).to(device)

    if task == "multi-label, binary-class":
        criterion = nn.BCEWithLogitsLoss()
    else:
        criterion = nn.CrossEntropyLoss()

    optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for epoch in trange(start_epoch, end_epoch):
        train(model, optimizer, criterion, train_loader, device, task)
        val(model, val_loader, device, val_auc_list, task, dir_path, epoch)

    auc_list = np.array(val_auc_list)
    index = auc_list.argmax()
    print('epoch %s is the best model' % (index))

    print('==> Testing model...')
    restore_model_path = os.path.join(dir_path, 'ckpt_%d_auc_%.5f.pth' % (index, auc_list[index]))
    model.load_state_dict(torch.load(restore_model_path)['net'])
    test(model, 'train', train_loader, device, flag, task, output_root=output_root)
    test(model, 'val', val_loader, device, flag, task, output_root=output_root)
    test(model, 'test', test_loader, device, flag, task, output_root=output_root)


def train(model, optimizer, criterion, train_loader, device, task):
    ''' training function
    :param model: the model to train
    :param optimizer: optimizer used in training
    :param criterion: loss function
    :param train_loader: DataLoader of training set
    :param device: cpu or cuda
    :param task: task of current dataset, binary-class/multi-class/multi-label, binary-class

    '''

    model.train()
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        optimizer.zero_grad()
        outputs = model(inputs.to(device))

        if task == 'multi-label, binary-class':
            targets = targets.to(torch.float32).to(device)
            loss = criterion(outputs, targets)
        else:
            targets = targets.squeeze().long().to(device)
            loss = criterion(outputs, targets)

        loss.backward()
        optimizer.step()


def val(model, val_loader, device, val_auc_list, task, dir_path, epoch):
    ''' validation function
    :param model: the model to validate
    :param val_loader: DataLoader of validation set
    :param device: cpu or cuda
    :param val_auc_list: the list to save AUC score of each epoch
    :param task: task of current dataset, binary-class/multi-class/multi-label, binary-class
    :param dir_path: where to save model
    :param epoch: current epoch

    '''

    model.eval()
    y_true = torch.tensor([]).to(device)
    y_score = torch.tensor([]).to(device)
    with torch.no_grad():
        for batch_idx, (inputs, targets) in enumerate(val_loader):
            outputs = model(inputs.to(device))

            if task == 'multi-label, binary-class':
                targets = targets.to(torch.float32).to(device)
                m = nn.Sigmoid()
                outputs = m(outputs).to(device)
            else:
                targets = targets.squeeze().long().to(device)
                m = nn.Softmax(dim=1)
                outputs = m(outputs).to(device)
                targets = targets.float().resize_(len(targets), 1)

            y_true = torch.cat((y_true, targets), 0)
            y_score = torch.cat((y_score, outputs), 0)

        y_true = y_true.cpu().numpy()
        y_score = y_score.detach().cpu().numpy()
        auc = getAUC(y_true, y_score, task)
        val_auc_list.append(auc)

    state = {
        'net': model.state_dict(),
        'auc': auc,
        'epoch': epoch,
    }

    path = os.path.join(dir_path, 'ckpt_%d_auc_%.5f.pth' % (epoch, auc))
    torch.save(state, path)


def test(model, split, data_loader, device, flag, task, output_root=None):
    ''' testing function
    :param model: the model to test
    :param split: the data to test, 'train/val/test'
    :param data_loader: DataLoader of data
    :param device: cpu or cuda
    :param flag: subset name
    :param task: task of current dataset, binary-class/multi-class/multi-label, binary-class

    '''

    model.eval()
    y_true = torch.tensor([]).to(device)
    y_score = torch.tensor([]).to(device)

    with torch.no_grad():
        for batch_idx, (inputs, targets) in enumerate(data_loader):
            outputs = model(inputs.to(device))

            if task == 'multi-label, binary-class':
                targets = targets.to(torch.float32).to(device)
                m = nn.Sigmoid()
                outputs = m(outputs).to(device)
            else:
                targets = targets.squeeze().long().to(device)
                m = nn.Softmax(dim=1)
                outputs = m(outputs).to(device)
                targets = targets.float().resize_(len(targets), 1)

            y_true = torch.cat((y_true, targets), 0)
            y_score = torch.cat((y_score, outputs), 0)

        y_true = y_true.cpu().numpy()
        y_score = y_score.detach().cpu().numpy()
        auc = getAUC(y_true, y_score, task)
        acc = getACC(y_true, y_score, task)
        print('%s AUC: %.5f ACC: %.5f' % (split, auc, acc))

        if output_root is not None:
            output_dir = os.path.join(output_root, flag)
            if not os.path.exists(output_dir):
                os.mkdir(output_dir)
            output_path = os.path.join(output_dir, '%s.csv' % (split))
            save_results(y_true, y_score, output_path)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='RUN Baseline model of MedMNIST')
    # argument setup: dataset selection, input root, output root
    """
    --data_name pathmnist
    --input_root ./medmnist
    --output_root ./output
    """
    parser.add_argument('--data_name', default='pathmnist', help='subset of MedMNIST', type=str)
    parser.add_argument('--input_root', default='./medmnist', help='input root, the source of dataset files', type=str)
    parser.add_argument('--output_root', default='./output', help='output root, where to save models and results',
                        type=str)
    parser.add_argument('--num_epoch', default=10, help='num of epochs of training', type=int)
    # note: type=bool would treat any non-empty string (even 'False') as True, so use a flag
    parser.add_argument('--download', action='store_true', help='whether to download the dataset or not')

    args = parser.parse_args()
    data_name = args.data_name.lower()
    input_root = args.input_root
    output_root = args.output_root
    end_epoch = args.num_epoch
    download = args.download
    main(data_name, input_root, output_root, end_epoch=end_epoch, download=download)

Note: you need to configure the command-line arguments when running the main function.
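
A typical invocation, assuming train.py is run from the project root with the medmnist package on the path:

python train.py --data_name pathmnist --input_root ./medmnist --output_root ./output --num_epoch 10 --download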

Model evaluation with AUC and ACC: evaluator.py

from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score
import numpy as np
import pandas as pd


def getAUC(y_true, y_score, task):
    '''AUC metric.
    :param y_true: the ground truth labels, shape: (n_samples, n_classes) for multi-label, and (n_samples,) for other tasks
    :param y_score: the predicted score of each class, shape: (n_samples, n_classes)
    :param task: the task of current dataset

    '''
    if task == 'binary-class':
        y_score = y_score[:,-1]
        return roc_auc_score(y_true, y_score)
    elif task == 'multi-label, binary-class':
        auc = 0
        for i in range(y_score.shape[1]):
            label_auc = roc_auc_score(y_true[:, i], y_score[:, i])
            auc += label_auc
        return auc / y_score.shape[1]
    else:
        auc = 0
        zero = np.zeros_like(y_true)
        one = np.ones_like(y_true)
        for i in range(y_score.shape[1]):
            y_true_binary = np.where(y_true == i, one, zero)
            y_score_binary = y_score[:, i]
            auc += roc_auc_score(y_true_binary, y_score_binary)
        return auc / y_score.shape[1]


def getACC(y_true, y_score, task, threshold=0.5):
    '''Accuracy metric.
    :param y_true: the ground truth labels, shape: (n_samples, n_classes) for multi-label, and (n_samples,) for other tasks
    :param y_score: the predicted score of each class, shape: (n_samples, n_classes)
    :param task: the task of current dataset
    :param threshold: the threshold for multilabel and binary-class tasks

    '''
    if task == 'multi-label, binary-class':
        zero = np.zeros_like(y_score)
        one = np.ones_like(y_score)
        y_pre = np.where(y_score < threshold, zero, one)
        acc = 0
        for label in range(y_true.shape[1]):
            label_acc = accuracy_score(y_true[:, label], y_pre[:, label])
            acc += label_acc
        return acc / y_true.shape[1]
    elif task == 'binary-class':
        y_pre = np.zeros_like(y_true)
        for i in range(y_score.shape[0]):
            y_pre[i] = (y_score[i][-1] > threshold)
        return accuracy_score(y_true, y_pre)
    else:
        y_pre = np.zeros_like(y_true)
        for i in range(y_score.shape[0]):
            y_pre[i] = np.argmax(y_score[i])
        return accuracy_score(y_true, y_pre)


def save_results(y_true, y_score, outputpath):
    '''Save ground truth and scores
    :param y_true: the ground truth labels, shape: (n_samples, n_classes) for multi-label, and (n_samples,) for other tasks
    :param y_score: the predicted score of each class, shape: (n_samples, n_classes)
    :param outputpath: path to save the result csv

    '''
    columns = ['id']
    columns += ['true_%s' % i for i in range(y_true.shape[1])]
    columns += ['score_%s' % i for i in range(y_score.shape[1])]

    # collect all rows first, then build the DataFrame in one go
    # (DataFrame.append was deprecated in pandas 1.4 and removed in 2.0)
    rows = []
    for idx in range(y_score.shape[0]):
        row = {'id': idx}
        for i in range(y_true.shape[1]):
            row['true_%s' % i] = y_true[idx][i]
        for i in range(y_score.shape[1]):
            row['score_%s' % i] = y_score[idx][i]
        rows.append(row)

    df = pd.DataFrame(rows, columns=columns)
    df.to_csv(outputpath, sep=',', index=False, header=True, encoding="utf_8_sig")
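
A quick sanity check of the metrics on synthetic scores (a minimal sketch; shapes follow the docstrings above):

import numpy as np

y_true = np.array([0, 1, 2, 1])            # (n_samples,) for a multi-class task
y_score = np.array([[0.8, 0.1, 0.1],
                    [0.2, 0.7, 0.1],
                    [0.1, 0.2, 0.7],
                    [0.3, 0.4, 0.3]])      # (n_samples, n_classes) softmax outputs
print(getAUC(y_true, y_score, 'multi-class'))  # one-vs-rest AUC averaged over classes
print(getACC(y_true, y_score, 'multi-class'))  # argmax accuracy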

 

 


1、手机设置&#xff08;Android手机-readMI k50&#xff09;&#xff1a; 1.1开发者模式设置 入口&#xff1a;设置–我的设备–全部参数与信息&#xff1b; 连续点击MIUI版本7下&#xff0c;进入开发者模式 1.2、开发者选项设置 入口&#xff1a;设置–更多设置–开发者选项…