Common Loss Functions for Medical Image Segmentation (with PyTorch and Keras Code)


I don't know much about loss functions beyond the fact that they are important, so I collected some loss functions commonly used in medical image segmentation to study them!

Common Loss Functions for Medical Image Segmentation

  • Preface
    • 1 Dice Loss
    • 2 BCE-Dice Loss
    • 3 Jaccard/Intersection over Union (IoU) Loss
    • 4 Focal Loss
    • 5 Tversky Loss
    • 6 Focal Tversky Loss
    • 7 Lovasz Hinge Loss
    • 8 Combo Loss
    • 9 References

Preface

Segmentation loss functions fall roughly into four categories: distribution-based losses, compound losses, region-based losses, and boundary-based losses.

[Figure: taxonomy of segmentation loss functions by category]

Since some of these losses are evaluation metrics repurposed as loss functions, and backpropagation drives the loss toward 0, such metric-based losses are applied as 1 - metric.
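A minimal illustration of the 1 - metric conversion (ad hoc tensors; Dice is used as the metric):

import torch

pred   = torch.tensor([1., 1., 0., 0.])   # a perfect prediction...
target = torch.tensor([1., 1., 0., 0.])   # ...of this target
dice = 2 * (pred * target).sum() / (pred.sum() + target.sum())
print(1 - dice)   # tensor(0.): the loss is 0 exactly when the metric is 1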

1 Dice Loss

The Dice coefficient is a commonly used evaluation metric for pixel-wise segmentation, and it can also be adapted into a loss function:
Formula:

Dice = \frac{2|X \cap Y|}{|X| + |Y|}

where X is the ground-truth region and Y is the predicted region.
PyTorch code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiceLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceLoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        
        #comment out if your model contains a sigmoid or equivalent activation layer
        inputs = torch.sigmoid(inputs)       
        
        #flatten label and prediction tensors
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        
        intersection = (inputs * targets).sum()                            
        dice = (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)  
        
        return 1 - dice

Test:
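A quick sanity check of the class above (a minimal sketch; the shapes and values here are arbitrary):

criterion = DiceLoss()
logits  = torch.randn(2, 1, 4, 4)                    # raw model outputs
targets = torch.randint(0, 2, (2, 1, 4, 4)).float()  # binary masks
print(criterion(logits, targets))                    # scalar tensor in [0, 1)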
Keras code:

import keras
import keras.backend as K

def DiceLoss(targets, inputs, smooth=1e-6):
    
    #flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)
    
    #element-wise product, then sum, gives |X ∩ Y|
    intersection = K.sum(targets * inputs)
    dice = (2*intersection + smooth) / (K.sum(targets) + K.sum(inputs) + smooth)
    return 1 - dice

2 BCE-Dice Loss

This loss combines Dice loss with the standard binary cross-entropy (BCE) loss, which is usually the default for segmentation models. Combining the two adds some diversity to the loss while benefiting from the stability of BCE.
Formula:

Dice + BCE = \frac{2|X \cap Y|}{|X| + |Y|} + \frac{1}{N}\sum_{n=1}^{N} H(p_n, q_n)

(in the implementation the Dice term enters as 1 - Dice, so the whole expression is minimized)

PyTorch code:

class DiceBCELoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceBCELoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        
        #comment out if your model contains a sigmoid or equivalent activation layer
        inputs = torch.sigmoid(inputs)       
        
        #flatten label and prediction tensors
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        
        intersection = (inputs * targets).sum()                            
        dice_loss = 1 - (2.*intersection + smooth)/(inputs.sum() + targets.sum() + smooth)   # note: 1 - dice is already applied here
        BCE = F.binary_cross_entropy(inputs, targets, reduction='mean')
        Dice_BCE = BCE + dice_loss
        
        return Dice_BCE
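If the model outputs raw logits, a numerically more stable variant (a sketch, not from the original post) computes the BCE term directly on the logits with binary_cross_entropy_with_logits:

class DiceBCEWithLogitsLoss(nn.Module):
    def forward(self, logits, targets, smooth=1):
        # BCE on raw logits avoids a separate sigmoid + log round trip
        BCE = F.binary_cross_entropy_with_logits(logits, targets, reduction='mean')
        probs = torch.sigmoid(logits).view(-1)
        targets = targets.view(-1)
        intersection = (probs * targets).sum()
        dice_loss = 1 - (2.*intersection + smooth)/(probs.sum() + targets.sum() + smooth)
        return BCE + dice_loss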

Keras code:

def DiceBCELoss(targets, inputs, smooth=1e-6):    
       
    #flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)
    
    BCE = K.mean(K.binary_crossentropy(targets, inputs))
    intersection = K.sum(targets * inputs)    
    dice_loss = 1 - (2*intersection + smooth) / (K.sum(targets) + K.sum(inputs) + smooth)
    Dice_BCE = BCE + dice_loss
    
    return Dice_BCE

3 Jaccard/Intersection over Union (IoU) Loss

The IoU metric, or Jaccard index, is similar to the Dice metric: it is computed as the ratio of the overlap between the positive instances of two sets to their combined (union) area. Like the Dice metric, it is a common way to evaluate the performance of pixel-wise segmentation models.
Formula:

J(A,B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}

where A is the ground-truth segmentation region and B is the predicted segmentation region.
PyTorch code:

class IoULoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(IoULoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        
        #comment out if your model contains a sigmoid or equivalent activation layer
        inputs = torch.sigmoid(inputs)       
        
        #flatten label and prediction tensors
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        
        #intersection is equivalent to True Positive count
        #union is the mutually inclusive area of all labels & predictions 
        intersection = (inputs * targets).sum()
        total = (inputs + targets).sum()
        union = total - intersection 
        
        IoU = (intersection + smooth)/(union + smooth)
                
        return 1 - IoU

Keras code:

def IoULoss(targets, inputs, smooth=1e-6):
    
    #flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)
    
    intersection = K.sum(targets * inputs)
    total = K.sum(targets) + K.sum(inputs)
    union = total - intersection
    
    IoU = (intersection + smooth) / (union + smooth)
    return 1 - IoU

4 Focal Loss

The focal loss was proposed by Lin et al. of Facebook AI Research in 2017 as a means of combating extremely imbalanced datasets.
Formula:
See the paper Focal Loss for Dense Object Detection:

FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)

PyTorch code:

class FocalLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(FocalLoss, self).__init__()

    def forward(self, inputs, targets, alpha=0.8, gamma=2, smooth=1):
        
        #comment out if your model contains a sigmoid or equivalent activation layer
        inputs = torch.sigmoid(inputs)       
        
        #flatten label and prediction tensors
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        
        #first compute binary cross-entropy 
        BCE = F.binary_cross_entropy(inputs, targets, reduction='mean')
        BCE_EXP = torch.exp(-BCE)
        focal_loss = alpha * (1-BCE_EXP)**gamma * BCE
                       
        return focal_loss
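Note that the implementation above applies the (1 - p_t)^γ modulating factor to the already-averaged BCE. A per-pixel variant closer to the paper's formulation (a sketch, with ad hoc names) would be:

def focal_loss_per_pixel(inputs, targets, alpha=0.8, gamma=2):
    probs = torch.sigmoid(inputs).view(-1)
    targets = targets.view(-1)
    # per-element BCE, so the modulating factor is applied pixel-wise
    bce = F.binary_cross_entropy(probs, targets, reduction='none')
    p_t = torch.exp(-bce)   # p_t = p if y = 1, else 1 - p
    return (alpha * (1 - p_t)**gamma * bce).mean()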

Keras code:

def FocalLoss(targets, inputs, alpha=0.8, gamma=2):    
    
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)
    
    BCE = K.binary_crossentropy(targets, inputs)
    BCE_EXP = K.exp(-BCE)
    focal_loss = K.mean(alpha * K.pow((1-BCE_EXP), gamma) * BCE)
    
    return focal_loss

5 Tversky Loss

Formula:
See the paper Tversky loss function for image segmentation using 3D fully convolutional deep networks:

TI = \frac{TP}{TP + \alpha FP + \beta FN}

As the formula shows, the Tversky index simply weights the different error terms. The paper notes that with α = β = 0.5 it reduces to the Dice coefficient, and with α = β = 1 it becomes the Jaccard/IoU index.

PyTorch code:

class TverskyLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(TverskyLoss, self).__init__()

    def forward(self, inputs, targets, smooth=1, alpha=0.5, beta=0.5):
        
        #comment out if your model contains a sigmoid or equivalent activation layer
        inputs = torch.sigmoid(inputs)       
        
        #flatten label and prediction tensors
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        
        #True Positives, False Positives & False Negatives
        TP = (inputs * targets).sum()    
        FP = ((1-targets) * inputs).sum()
        FN = (targets * (1-inputs)).sum()
       
        Tversky = (TP + smooth) / (TP + alpha*FP + beta*FN + smooth)  
        
        return 1 - Tversky
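As a quick numerical check of the α = β = 0.5 claim above (reusing the DiceLoss class from section 1; shapes are arbitrary):

logits  = torch.randn(1, 1, 8, 8)
targets = torch.randint(0, 2, (1, 1, 8, 8)).float()
# with alpha = beta = 0.5, Tversky agrees with Dice up to the smoothing constant
print(TverskyLoss()(logits, targets, alpha=0.5, beta=0.5))
print(DiceLoss()(logits, targets))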

Keras code:

def TverskyLoss(targets, inputs, alpha=0.5, beta=0.5, smooth=1e-6):
    
    #flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)
    
    #True Positives, False Positives & False Negatives
    TP = K.sum(inputs * targets)
    FP = K.sum((1-targets) * inputs)
    FN = K.sum(targets * (1-inputs))
   
    Tversky = (TP + smooth) / (TP + alpha*FP + beta*FN + smooth)  
    
    return 1 - Tversky

6 Focal Tversky Loss

This simply integrates the focal idea into the Tversky loss.
Formula:

FocalTversky = (1 - Tversky)^{\gamma}

where γ is the focal exponent (gamma in the code below).

PyTorch code:

class FocalTverskyLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(FocalTverskyLoss, self).__init__()

    def forward(self, inputs, targets, smooth=1, alpha=0.5, beta=0.5, gamma=2):
        
        #comment out if your model contains a sigmoid or equivalent activation layer
        inputs = torch.sigmoid(inputs)       
        
        #flatten label and prediction tensors
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        
        #True Positives, False Positives & False Negatives
        TP = (inputs * targets).sum()    
        FP = ((1-targets) * inputs).sum()
        FN = (targets * (1-inputs)).sum()
        
        Tversky = (TP + smooth) / (TP + alpha*FP + beta*FN + smooth)  
        FocalTversky = (1 - Tversky)**gamma
                       
        return FocalTversky

Keras code:

def FocalTverskyLoss(targets, inputs, alpha=0.5, beta=0.5, gamma=2, smooth=1e-6):
    
    #flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)
    
    #True Positives, False Positives & False Negatives
    TP = K.sum(inputs * targets)
    FP = K.sum((1-targets) * inputs)
    FN = K.sum(targets * (1-inputs))
           
    Tversky = (TP + smooth) / (TP + alpha*FP + beta*FN + smooth)  
    FocalTversky = K.pow((1 - Tversky), gamma)
    
    return FocalTversky

7 Lovasz Hinge Loss

This loss function was introduced by Berman, Triki, and Blaschko in their paper "The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks".
It is designed to directly optimize the intersection-over-union score for semantic segmentation, particularly for multi-class instances. Specifically, it sorts the predictions by their errors, then cumulatively computes the effect of each error on the IoU score. This gradient vector is then multiplied with the initial error vector, so that the predictions which reduce the IoU score the most are penalized most strongly.
Code link:

PyTorch code: https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytorch/lovasz_losses.py

"""
Lovasz-Softmax and Jaccard hinge loss in PyTorch
Maxim Berman 2018 ESAT-PSI KU Leuven (MIT License)
"""

from __future__ import print_function, division

import torch
from torch.autograd import Variable
import torch.nn.functional as F
import numpy as np
try:
    from itertools import  ifilterfalse
except ImportError: # py3k
    from itertools import  filterfalse as ifilterfalse


def lovasz_grad(gt_sorted):
    """
    Computes gradient of the Lovasz extension w.r.t sorted errors
    See Alg. 1 in paper
    """
    p = len(gt_sorted)
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.float().cumsum(0)
    union = gts + (1 - gt_sorted).float().cumsum(0)
    jaccard = 1. - intersection / union
    if p > 1: # cover 1-pixel case
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard


def iou_binary(preds, labels, EMPTY=1., ignore=None, per_image=True):
    """
    IoU for foreground class
    binary: 1 foreground, 0 background
    """
    if not per_image:
        preds, labels = (preds,), (labels,)
    ious = []
    for pred, label in zip(preds, labels):
        intersection = ((label == 1) & (pred == 1)).sum()
        union = ((label == 1) | ((pred == 1) & (label != ignore))).sum()
        if not union:
            iou = EMPTY
        else:
            iou = float(intersection) / float(union)
        ious.append(iou)
    iou = mean(ious)    # mean across images if per_image
    return 100 * iou


def iou(preds, labels, C, EMPTY=1., ignore=None, per_image=False):
    """
    Array of IoU for each (non ignored) class
    """
    if not per_image:
        preds, labels = (preds,), (labels,)
    ious = []
    for pred, label in zip(preds, labels):
        iou = []    
        for i in range(C):
            if i != ignore: # The ignored label is sometimes among predicted classes (ENet - CityScapes)
                intersection = ((label == i) & (pred == i)).sum()
                union = ((label == i) | ((pred == i) & (label != ignore))).sum()
                if not union:
                    iou.append(EMPTY)
                else:
                    iou.append(float(intersection) / float(union))
        ious.append(iou)
    ious = [mean(iou) for iou in zip(*ious)] # mean across images if per_image
    return 100 * np.array(ious)


# --------------------------- BINARY LOSSES ---------------------------


def lovasz_hinge(logits, labels, per_image=True, ignore=None):
    """
    Binary Lovasz hinge loss
      logits: [B, H, W] Variable, logits at each pixel (between -\infty and +\infty)
      labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
      per_image: compute the loss per image instead of per batch
      ignore: void class id
    """
    if per_image:
        loss = mean(lovasz_hinge_flat(*flatten_binary_scores(log.unsqueeze(0), lab.unsqueeze(0), ignore))
                          for log, lab in zip(logits, labels))
    else:
        loss = lovasz_hinge_flat(*flatten_binary_scores(logits, labels, ignore))
    return loss


def lovasz_hinge_flat(logits, labels):
    """
    Binary Lovasz hinge loss
      logits: [P] Variable, logits at each prediction (between -\infty and +\infty)
      labels: [P] Tensor, binary ground truth labels (0 or 1)
      ignore: label to ignore
    """
    if len(labels) == 0:
        # only void pixels, the gradients should be 0
        return logits.sum() * 0.
    signs = 2. * labels.float() - 1.
    errors = (1. - logits * Variable(signs))
    errors_sorted, perm = torch.sort(errors, dim=0, descending=True)
    perm = perm.data
    gt_sorted = labels[perm]
    grad = lovasz_grad(gt_sorted)
    loss = torch.dot(F.relu(errors_sorted), Variable(grad))
    return loss


def flatten_binary_scores(scores, labels, ignore=None):
    """
    Flattens predictions in the batch (binary case)
    Remove labels equal to 'ignore'
    """
    scores = scores.view(-1)
    labels = labels.view(-1)
    if ignore is None:
        return scores, labels
    valid = (labels != ignore)
    vscores = scores[valid]
    vlabels = labels[valid]
    return vscores, vlabels


class StableBCELoss(torch.nn.modules.Module):
    def __init__(self):
         super(StableBCELoss, self).__init__()
    def forward(self, input, target):
         neg_abs = - input.abs()
         loss = input.clamp(min=0) - input * target + (1 + neg_abs.exp()).log()
         return loss.mean()


def binary_xloss(logits, labels, ignore=None):
    """
    Binary Cross entropy loss
      logits: [B, H, W] Variable, logits at each pixel (between -\infty and +\infty)
      labels: [B, H, W] Tensor, binary ground truth masks (0 or 1)
      ignore: void class id
    """
    logits, labels = flatten_binary_scores(logits, labels, ignore)
    loss = StableBCELoss()(logits, Variable(labels.float()))
    return loss


# --------------------------- MULTICLASS LOSSES ---------------------------


def lovasz_softmax(probas, labels, classes='present', per_image=False, ignore=None):
    """
    Multi-class Lovasz-Softmax loss
      probas: [B, C, H, W] Variable, class probabilities at each prediction (between 0 and 1).
              Interpreted as binary (sigmoid) output with outputs of size [B, H, W].
      labels: [B, H, W] Tensor, ground truth labels (between 0 and C - 1)
      classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average.
      per_image: compute the loss per image instead of per batch
      ignore: void class labels
    """
    if per_image:
        loss = mean(lovasz_softmax_flat(*flatten_probas(prob.unsqueeze(0), lab.unsqueeze(0), ignore), classes=classes)
                          for prob, lab in zip(probas, labels))
    else:
        loss = lovasz_softmax_flat(*flatten_probas(probas, labels, ignore), classes=classes)
    return loss


def lovasz_softmax_flat(probas, labels, classes='present'):
    """
    Multi-class Lovasz-Softmax loss
      probas: [P, C] Variable, class probabilities at each prediction (between 0 and 1)
      labels: [P] Tensor, ground truth labels (between 0 and C - 1)
      classes: 'all' for all, 'present' for classes present in labels, or a list of classes to average.
    """
    if probas.numel() == 0:
        # only void pixels, the gradients should be 0
        return probas * 0.
    C = probas.size(1)
    losses = []
    class_to_sum = list(range(C)) if classes in ['all', 'present'] else classes
    for c in class_to_sum:
        fg = (labels == c).float() # foreground for class c
        if (classes == 'present' and fg.sum() == 0):
            continue
        if C == 1:
            if len(classes) > 1:
                raise ValueError('Sigmoid output possible only with 1 class')
            class_pred = probas[:, 0]
        else:
            class_pred = probas[:, c]
        errors = (Variable(fg) - class_pred).abs()
        errors_sorted, perm = torch.sort(errors, 0, descending=True)
        perm = perm.data
        fg_sorted = fg[perm]
        losses.append(torch.dot(errors_sorted, Variable(lovasz_grad(fg_sorted))))
    return mean(losses)


def flatten_probas(probas, labels, ignore=None):
    """
    Flattens predictions in the batch
    """
    if probas.dim() == 3:
        # assumes output of a sigmoid layer
        B, H, W = probas.size()
        probas = probas.view(B, 1, H, W)
    B, C, H, W = probas.size()
    probas = probas.permute(0, 2, 3, 1).contiguous().view(-1, C)  # B * H * W, C = P, C
    labels = labels.view(-1)
    if ignore is None:
        return probas, labels
    valid = (labels != ignore)
    vprobas = probas[valid.nonzero().squeeze()]
    vlabels = labels[valid]
    return vprobas, vlabels

def xloss(logits, labels, ignore=None):
    """
    Cross entropy loss
    """
    return F.cross_entropy(logits, Variable(labels), ignore_index=255)


# --------------------------- HELPER FUNCTIONS ---------------------------
def isnan(x):
    return x != x
    
    
def mean(l, ignore_nan=False, empty=0):
    """
    nanmean compatible with generators.
    """
    l = iter(l)
    if ignore_nan:
        l = ifilterfalse(isnan, l)
    try:
        n = 1
        acc = next(l)
    except StopIteration:
        if empty == 'raise':
            raise ValueError('Empty mean')
        return empty
    for n, v in enumerate(l, 2):
        acc += v
    if n == 1:
        return acc
    return acc / n
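A minimal usage sketch for the binary hinge variant (assuming the functions above are in scope; shapes are arbitrary):

logits = torch.randn(2, 16, 16)            # [B, H, W] raw per-pixel scores
labels = torch.randint(0, 2, (2, 16, 16))  # binary ground-truth masks
print(lovasz_hinge(logits, labels, per_image=True))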

8 Combo Loss

This loss function was introduced by Taghanaki et al. in their paper "Combo loss: Handling input and output imbalance in multi-organ segmentation". Combo loss is a combination of Dice loss and a modified BCE term which, like the Tversky loss, carries extra constants that penalize either false positives or false negatives more heavily.

Note: the commonly circulated version of this code has two problems (the alpha weighting parentheses in the modified cross-entropy term are misplaced, and a final sign flip inverts the loss); both are corrected below.
PyTorch code:

import torch
import torch.nn as nn
import torch.nn.functional as F

ALPHA = 0.5    # < 0.5 penalises FP more, > 0.5 penalises FN more
CE_RATIO = 0.5 # weighted contribution of modified CE loss compared to Dice loss

class ComboLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(ComboLoss, self).__init__()

    def forward(self, inputs, targets, smooth=1, alpha=ALPHA, ce_ratio=CE_RATIO, eps=1e-9):
        
        inputs = torch.sigmoid(inputs)
        #flatten label and prediction tensors
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        
        intersection = (inputs * targets).sum()    
        dice = (2. * intersection + smooth) / (inputs.sum() + targets.sum() + smooth)
        
        #modified cross-entropy: alpha weights the positive term,
        #(1 - alpha) weights the negative term
        inputs = torch.clamp(inputs, eps, 1.0 - eps)       
        out = - (alpha * (targets * torch.log(inputs))
                 + (1 - alpha) * ((1.0 - targets) * torch.log(1.0 - inputs)))
        weighted_ce = out.mean()
        combo = (ce_ratio * weighted_ce) - ((1 - ce_ratio) * dice)
        
        return combo

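A quick sanity check of the corrected class (a minimal sketch; shapes are arbitrary):

criterion = ComboLoss()
logits  = torch.randn(2, 1, 8, 8)
targets = torch.randint(0, 2, (2, 1, 8, 8)).float()
print(criterion(logits, targets))   # scalar; lower is better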

9 References

Common evaluation metrics for medical image segmentation

SegLoss

Kaggle — SegLoss

