[SuperPoint] A Comparative Experiment on Image Feature Extraction


1. Both SIFT and SuperPoint extract keypoints from an image and output feature descriptors. This article explores the strengths and weaknesses of the two by comparing the number of keypoints extracted and the number of correctly matched keypoints.

[Figure: keypoints detected by SIFT and SuperPoint on the same image pair]
SuperPoint extracts fewer keypoints. This is understandable: SuperPoint was pre-trained on a synthetic dataset full of simple shapes, in which only points such as the corners of line segments are labeled, whereas SIFT responds to changes in pixel intensity.
[Figure: feature matching results of SIFT and SuperPoint]
In keypoint matching there is no obvious difference at first glance, but SuperPoint is clearly more robust: SIFT produces many wrong matches. Take the milk carton in the third SIFT image: since the object undergoes no vertical displacement between the two views, any slanted connecting line can be regarded as a mismatch (a rough way to flag such matches is sketched below).
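
A quick sanity check for that claim (my own sketch, not from the original post): when two views differ mainly by a horizontal shift, a correct match should keep roughly the same y-coordinate, so matches with a large vertical jump can be flagged as suspect. The matches/kps1/kps2 names stand for the cv2.DMatch list and the two cv2.KeyPoint lists produced by either script below; the y_tol value is an assumption.

def count_suspect_matches(matches, kps1, kps2, y_tol=10.0):
    # Count matches whose connecting line is noticeably slanted, i.e. whose
    # endpoints differ in y by more than y_tol pixels.
    suspects = 0
    for m in matches:
        y1 = kps1[m.queryIdx].pt[1]
        y2 = kps2[m.trainIdx].pt[1]
        if abs(y1 - y2) > y_tol:
            suspects += 1
    return suspects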
In scenes with more complex shapes
As noted above, SuperPoint responds strongly to images that contain many shapes.

[Figures: detection and matching results on the shape-rich test images]
It is also worth noting that SuperPoint failed to detect the keypoints on the window in the first image.

2. Summary

When detecting keypoints, SuperPoint is sensitive to shape features, while SIFT is sensitive to pixel-intensity changes.
When matching keypoints, SuperPoint's descriptors are more robust (a RANSAC-based way to count correct matches is sketched after this list).
Under large viewpoint changes, neither method performs satisfactorily.
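
To make "number of correct matches" concrete, one hedged option (not part of the original post) is to fit a homography with RANSAC and count the inliers as correct matches, using cv2.findHomography. good_matches and the keypoint lists come from the scripts below; the 3.0-pixel reprojection threshold is an assumption.

import numpy as np
import cv2

def count_inlier_matches(good_matches, kps1, kps2, ransac_thresh=3.0):
    # Fit a homography to the ratio-test survivors and count RANSAC inliers
    # as geometrically consistent ("correct") matches.
    if len(good_matches) < 4:  # cv2.findHomography needs at least 4 point pairs
        return 0
    src = np.float32([kps1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst = np.float32([kps2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return 0 if mask is None else int(mask.sum())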

Code
SIFT.py:

from __future__ import print_function
import cv2 as cv
import numpy as np
import argparse

pic1 = "./1.ppm"
pic2 = "./6.ppm"


parser = argparse.ArgumentParser(description='Code for Feature Matching with FLANN tutorial.')
parser.add_argument('--input1', help='Path to input image 1.', default=pic1)
parser.add_argument('--input2', help='Path to input image 2.', default=pic2)
args = parser.parse_args()
img_object = cv.imread(args.input1)
img_scene = cv.imread(args.input2)
if img_object is None or img_scene is None:
    print('Could not open or find the images!')
    exit(0)

#-- Step 1: Detect the keypoints using the SIFT detector, compute the descriptors
detector = cv.SIFT_create()
keypoints_obj, descriptors_obj = detector.detectAndCompute(img_object, None)
keypoints_scene, descriptors_scene = detector.detectAndCompute(img_scene, None)

#-- Step 2: Matching descriptor vectors with a FLANN based matcher
# Since SIFT is a floating-point descriptor NORM_L2 is used
matcher = cv.DescriptorMatcher_create(cv.DescriptorMatcher_FLANNBASED)
knn_matches = matcher.knnMatch(descriptors_obj, descriptors_scene, 2)

#-- Filter matches using Lowe's ratio test
ratio_thresh = 0.75
good_matches = []
for m,n in knn_matches:
    if m.distance < ratio_thresh * n.distance:
        good_matches.append(m)

print("The number of keypoints in image1 is", len(keypoints_obj))
print("The number of keypoints in image2 is", len(keypoints_scene))
#-- Draw matches
img_matches = np.empty((max(img_object.shape[0], img_scene.shape[0]), img_object.shape[1]+img_scene.shape[1], 3), dtype=np.uint8)
cv.drawMatches(img_object, keypoints_obj, img_scene, keypoints_scene, good_matches, img_matches, flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

cv.namedWindow("Good Matches of SIFT", 0)
cv.resizeWindow("Good Matches of SIFT", 1024, 1024)
cv.imshow('Good Matches of SIFT', img_matches)
cv.waitKey()

To use SIFT.py, simply change the default image paths pic1 and pic2 at the top of the file, or pass --input1/--input2 on the command line.
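
For example, assuming the test pair 1.ppm and 6.ppm sits next to the script:

python SIFT.py --input1 ./1.ppm --input2 ./6.ppm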

superpoint.py:

import numpy as np
import os
import cv2
import torch



# Jet colormap for visualization (kept from the official demo; unused in this trimmed script).
myjet = np.array([[0., 0., 0.5],
                  [0., 0., 0.99910873],
                  [0., 0.37843137, 1.],
                  [0., 0.83333333, 1.],
                  [0.30044276, 1., 0.66729918],
                  [0.66729918, 1., 0.30044276],
                  [1., 0.90123457, 0.],
                  [1., 0.48002905, 0.],
                  [0.99910873, 0.07334786, 0.],
                  [0.5, 0., 0.]])


class SuperPointNet(torch.nn.Module):
    """ Pytorch definition of SuperPoint Network. """

    def __init__(self):
        super(SuperPointNet, self).__init__()
        self.relu = torch.nn.ReLU(inplace=True)
        self.pool = torch.nn.MaxPool2d(kernel_size=2, stride=2)
        c1, c2, c3, c4, c5, d1 = 64, 64, 128, 128, 256, 256
        # Shared Encoder.
        self.conv1a = torch.nn.Conv2d(1, c1, kernel_size=3, stride=1, padding=1)
        self.conv1b = torch.nn.Conv2d(c1, c1, kernel_size=3, stride=1, padding=1)
        self.conv2a = torch.nn.Conv2d(c1, c2, kernel_size=3, stride=1, padding=1)
        self.conv2b = torch.nn.Conv2d(c2, c2, kernel_size=3, stride=1, padding=1)
        self.conv3a = torch.nn.Conv2d(c2, c3, kernel_size=3, stride=1, padding=1)
        self.conv3b = torch.nn.Conv2d(c3, c3, kernel_size=3, stride=1, padding=1)
        self.conv4a = torch.nn.Conv2d(c3, c4, kernel_size=3, stride=1, padding=1)
        self.conv4b = torch.nn.Conv2d(c4, c4, kernel_size=3, stride=1, padding=1)
        # Detector Head.
        self.convPa = torch.nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1)
        self.convPb = torch.nn.Conv2d(c5, 65, kernel_size=1, stride=1, padding=0)
        # Descriptor Head.
        self.convDa = torch.nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1)
        self.convDb = torch.nn.Conv2d(c5, d1, kernel_size=1, stride=1, padding=0)

    def forward(self, x):
        """ Forward pass that jointly computes unprocessed point and descriptor
        tensors.
        Input
          x: Image pytorch tensor shaped N x 1 x H x W.
        Output
          semi: Output point pytorch tensor shaped N x 65 x H/8 x W/8.
          desc: Output descriptor pytorch tensor shaped N x 256 x H/8 x W/8.
        """
        # Shared Encoder.
        x = self.relu(self.conv1a(x))
        x = self.relu(self.conv1b(x))
        x = self.pool(x)
        x = self.relu(self.conv2a(x))
        x = self.relu(self.conv2b(x))
        x = self.pool(x)
        x = self.relu(self.conv3a(x))
        x = self.relu(self.conv3b(x))
        x = self.pool(x)
        x = self.relu(self.conv4a(x))
        x = self.relu(self.conv4b(x))
        # Detector Head.
        cPa = self.relu(self.convPa(x))
        semi = self.convPb(cPa)
        # Descriptor Head.
        cDa = self.relu(self.convDa(x))
        desc = self.convDb(cDa)
        dn = torch.norm(desc, p=2, dim=1)  # Compute the norm.
        desc = desc.div(torch.unsqueeze(dn, 1))  # Divide by norm to normalize.
        return semi, desc


class SuperPointFrontend(object):
    """ Wrapper around pytorch net to help with pre and post image processing. """

    def __init__(self, weights_path, nms_dist, conf_thresh, nn_thresh,
                 cuda=False):
        self.name = 'SuperPoint'
        self.cuda = cuda
        self.nms_dist = nms_dist
        self.conf_thresh = conf_thresh
        self.nn_thresh = nn_thresh  # L2 descriptor distance for good match.
        self.cell = 8  # Size of each output cell. Keep this fixed.
        self.border_remove = 4  # Remove points this close to the border.

        # Load the network in inference mode.
        self.net = SuperPointNet()
        if cuda:
            # Train on GPU, deploy on GPU.
            self.net.load_state_dict(torch.load(weights_path))
            self.net = self.net.cuda()
        else:
            # Train on GPU, deploy on CPU.
            self.net.load_state_dict(torch.load(weights_path,
                                                map_location=lambda storage, loc: storage))
        self.net.eval()

    def nms_fast(self, in_corners, H, W, dist_thresh):
        """
        Run a faster approximate Non-Max-Suppression on numpy corners shaped:
          3xN [x_i,y_i,conf_i]^T

        Algo summary: Create a grid sized HxW. Assign each corner location a 1, rest
        are zeros. Iterate through all the 1's and convert them either to -1 or 0.
        Suppress points by setting nearby values to 0.

        Grid Value Legend:
        -1 : Kept.
         0 : Empty or suppressed.
         1 : To be processed (converted to either kept or supressed).

        NOTE: The NMS first rounds points to integers, so NMS distance might not
        be exactly dist_thresh. It also assumes points are within image boundaries.

        Inputs
          in_corners - 3xN numpy array with corners [x_i, y_i, confidence_i]^T.
          H - Image height.
          W - Image width.
          dist_thresh - Distance to suppress, measured as an infinty norm distance.
        Returns
          nmsed_corners - 3xN numpy matrix with surviving corners.
          nmsed_inds - N length numpy vector with surviving corner indices.
        """
        grid = np.zeros((H, W)).astype(int)  # Track NMS data.
        inds = np.zeros((H, W)).astype(int)  # Store indices of points.
        # Sort by confidence and round to nearest int.
        inds1 = np.argsort(-in_corners[2, :])
        corners = in_corners[:, inds1]
        rcorners = corners[:2, :].round().astype(int)  # Rounded corners.
        # Check for edge case of 0 or 1 corners.
        if rcorners.shape[1] == 0:
            return np.zeros((3, 0)).astype(int), np.zeros(0).astype(int)
        if rcorners.shape[1] == 1:
            out = np.vstack((rcorners, in_corners[2])).reshape(3, 1)
            return out, np.zeros((1)).astype(int)
        # Initialize the grid.
        for i, rc in enumerate(rcorners.T):
            grid[rcorners[1, i], rcorners[0, i]] = 1
            inds[rcorners[1, i], rcorners[0, i]] = i
        # Pad the border of the grid, so that we can NMS points near the border.
        pad = dist_thresh
        grid = np.pad(grid, ((pad, pad), (pad, pad)), mode='constant')
        # Iterate through points, highest to lowest conf, suppress neighborhood.
        count = 0
        for i, rc in enumerate(rcorners.T):
            # Account for top and left padding.
            pt = (rc[0] + pad, rc[1] + pad)
            if grid[pt[1], pt[0]] == 1:  # If not yet suppressed.
                grid[pt[1] - pad:pt[1] + pad + 1, pt[0] - pad:pt[0] + pad + 1] = 0
                grid[pt[1], pt[0]] = -1
                count += 1
        # Get all surviving -1's and return sorted array of remaining corners.
        keepy, keepx = np.where(grid == -1)
        keepy, keepx = keepy - pad, keepx - pad
        inds_keep = inds[keepy, keepx]
        out = corners[:, inds_keep]
        values = out[-1, :]
        inds2 = np.argsort(-values)
        out = out[:, inds2]
        out_inds = inds1[inds_keep[inds2]]
        return out, out_inds

    def run(self, img):
        """ Process a numpy image to extract points and descriptors.
        Input
          img - HxW numpy float32 input image in range [0,1].
        Output
          corners - 3xN numpy array with corners [x_i, y_i, confidence_i]^T.
          desc - 256xN numpy array of corresponding unit normalized descriptors.
          heatmap - HxW numpy heatmap in range [0,1] of point confidences.
          """
        assert img.ndim == 2, 'Image must be grayscale.'
        assert img.dtype == np.float32, 'Image must be float32.'
        H, W = img.shape[0], img.shape[1]
        inp = torch.from_numpy(img.copy()).view(1, 1, H, W)
        if self.cuda:
            inp = inp.cuda()
        # Forward pass of network (no gradients needed at inference time).
        with torch.no_grad():
            semi, coarse_desc = self.net(inp)
        # Convert pytorch -> numpy.
        semi = semi.data.cpu().numpy().squeeze()
        # --- Process points.
        # C = np.max(semi)
        # dense = np.exp(semi - C)  # Softmax.
        # dense = dense / (np.sum(dense))  # Should sum to 1.
        dense = np.exp(semi)  # Softmax.
        dense = dense / (np.sum(dense, axis=0) + .00001)  # Should sum to 1.
        # Remove dustbin.
        nodust = dense[:-1, :, :]
        # Reshape to get full resolution heatmap.
        Hc = int(H / self.cell)
        Wc = int(W / self.cell)
        nodust = nodust.transpose(1, 2, 0)
        heatmap = np.reshape(nodust, [Hc, Wc, self.cell, self.cell])
        heatmap = np.transpose(heatmap, [0, 2, 1, 3])
        heatmap = np.reshape(heatmap, [Hc * self.cell, Wc * self.cell])
        # np.where returns (row, col) indices, i.e. y first, then x.
        xs, ys = np.where(heatmap >= self.conf_thresh)  # Confidence threshold.
        if len(xs) == 0:
            return np.zeros((3, 0)), None, None
        pts = np.zeros((3, len(xs)))  # Populate point data sized 3xN.
        pts[0, :] = ys
        pts[1, :] = xs
        pts[2, :] = heatmap[xs, ys]
        pts, _ = self.nms_fast(pts, H, W, dist_thresh=self.nms_dist)  # Apply NMS.
        inds = np.argsort(pts[2, :])
        pts = pts[:, inds[::-1]]  # Sort by confidence.
        # Remove points along border.
        bord = self.border_remove
        toremoveW = np.logical_or(pts[0, :] < bord, pts[0, :] >= (W - bord))
        toremoveH = np.logical_or(pts[1, :] < bord, pts[1, :] >= (H - bord))
        toremove = np.logical_or(toremoveW, toremoveH)
        pts = pts[:, ~toremove]
        # --- Process descriptor.
        D = coarse_desc.shape[1]
        if pts.shape[1] == 0:
            desc = np.zeros((D, 0))
        else:
            # Interpolate into descriptor map using 2D point locations.
            samp_pts = torch.from_numpy(pts[:2, :].copy())
            samp_pts[0, :] = (samp_pts[0, :] / (float(W) / 2.)) - 1.
            samp_pts[1, :] = (samp_pts[1, :] / (float(H) / 2.)) - 1.
            samp_pts = samp_pts.transpose(0, 1).contiguous()
            samp_pts = samp_pts.view(1, 1, -1, 2)
            samp_pts = samp_pts.float()
            if self.cuda:
                samp_pts = samp_pts.cuda()
            # align_corners=True matches the pre-1.3 PyTorch default this demo was written for.
            desc = torch.nn.functional.grid_sample(coarse_desc, samp_pts, align_corners=True)
            desc = desc.data.cpu().numpy().reshape(D, -1)
            desc /= np.linalg.norm(desc, axis=0)[np.newaxis, :]
        return pts, desc, heatmap



if __name__ == '__main__':
    print('==> Loading pre-trained network.')
    # This class runs the SuperPoint network and processes its outputs.
    fe = SuperPointFrontend(weights_path="superpoint_v1.pth",
                            nms_dist=4,
                            conf_thresh=0.015,
                            nn_thresh=0.7,
                            cuda=torch.cuda.is_available())  # fall back to CPU if no GPU
    print('==> Successfully loaded pre-trained network.')

    pic1 = "./1.ppm"
    pic2 = "./6.ppm"

    image1_origin = cv2.imread(pic1)
    image2_origin = cv2.imread(pic2)
    if image1_origin is None or image2_origin is None:
        print('Could not open or find the images!')
        exit(0)

    # SuperPoint expects grayscale float32 input in [0, 1].
    image1 = cv2.imread(pic1, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.
    image2 = cv2.imread(pic2, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.

    # -- Step 1: Detect the keypoints and compute the descriptors with SuperPoint

    keypoints_obj, descriptors_obj, h1 = fe.run(image1)
    keypoints_scene, descriptors_scene, h2 = fe.run(image2)

    # Convert the 3xN point arrays into lists of cv2.KeyPoint objects.
    keypoints_obj = [cv2.KeyPoint(keypoints_obj[0][i], keypoints_obj[1][i], 1)
                     for i in range(keypoints_obj.shape[1])]
    keypoints_scene = [cv2.KeyPoint(keypoints_scene[0][i], keypoints_scene[1][i], 1)
                       for i in range(keypoints_scene.shape[1])]
    print("The number of keypoints in image1 is", len(keypoints_obj))
    print("The number of keypoints in image2 is", len(keypoints_scene))

    # -- Step 2: Matching descriptor vectors with a FLANN based matcher
    # Since SuperPoint descriptors are floating-point, NORM_L2 is used
    matcher = cv2.DescriptorMatcher_create(cv2.DescriptorMatcher_FLANNBASED)
    knn_matches = matcher.knnMatch(np.ascontiguousarray(descriptors_obj.T),
                                   np.ascontiguousarray(descriptors_scene.T), 2)

    # -- Filter matches using Lowe's ratio test
    ratio_thresh = 0.75
    good_matches = []
    for m, n in knn_matches:
        if m.distance < ratio_thresh * n.distance:
            good_matches.append(m)

    # -- Draw matches
    img_matches = np.empty((max(image1_origin.shape[0], image2_origin.shape[0]), image1_origin.shape[1] + image2_origin.shape[1], 3),
                           dtype=np.uint8)
    cv2.drawMatches(image1_origin, keypoints_obj, image2_origin, keypoints_scene, good_matches, img_matches,
                    flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

    cv2.namedWindow("Good Matches of SuperPoint", 0)
    cv2.resizeWindow("Good Matches of SuperPoint", 1024, 1024)
    cv2.imshow('Good Matches of SuperPoint', img_matches)
    cv2.waitKey()

superpoint.py is adapted from the official demo code. Usage:

Download the pretrained weights file (superpoint_v1.pth) from the official repository https://github.com/magicleap/SuperPointPretrainedNetwork and place it next to the script.
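
Note that the FLANN matcher above never actually uses the nn_thresh value passed to SuperPointFrontend. As an alternative, here is a minimal sketch of two-way nearest-neighbor matching with an L2 distance cutoff, modeled on the matcher shipped with the official demo; it assumes desc1 and desc2 are the 256xN, column-normalized descriptor arrays returned by fe.run().

import numpy as np

def nn_match_two_way(desc1, desc2, nn_thresh=0.7):
    # desc1: 256 x N1, desc2: 256 x N2, both L2-normalized column-wise.
    dmat = np.dot(desc1.T, desc2)                # N1 x N2 dot products
    dmat = np.sqrt(np.clip(2 - 2 * dmat, 0, 4))  # L2 distance for unit vectors
    idx = np.argmin(dmat, axis=1)                # best image-2 index per image-1 descriptor
    scores = dmat[np.arange(dmat.shape[0]), idx]
    keep = scores < nn_thresh                    # distance cutoff
    idx2 = np.argmin(dmat, axis=0)               # best image-1 index per image-2 descriptor
    keep = np.logical_and(keep, np.arange(len(idx)) == idx2[idx])  # mutual NN check
    return np.vstack((np.where(keep)[0], idx[keep], scores[keep]))  # 3 x M: idx1, idx2, dist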


3. The author also ran the pipeline on a short video:

[Animation: SuperPoint matching results on the video]

Reference: SIFT, SuperPoint — a comparative experiment on image feature extraction
