Refactoring OpenCV's Multi-Band Blending Source Code for Image Stitching


Image stitching is a common problem in computer vision, and OpenCV ships a fairly complete library of algorithm classes for it. I stitch images with OpenCV 4.6.0, which provides the algorithms commonly used in stitching pipelines, including exposure compensation, optimal seam finding, and multi-band blending; in my tests the multi-band blender gives excellent results. To understand it better, I refactored the multi-band blending code from the OpenCV source, keeping only the CPU code path, and share it here.
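
For reference, the stock implementation lives in cv::detail::MultiBandBlender (opencv2/stitching/detail/blenders.hpp). Below is a minimal sketch of how the stock class is normally driven; the function name and the two-image setup are purely illustrative, and the inputs are assumed to be already-warped CV_16SC3 images with CV_8U masks:

#include "opencv2/opencv.hpp"
#include "opencv2/stitching/detail/blenders.hpp"

// Illustrative helper (not from the article): blend two warped images with the stock OpenCV blender.
void blendWithStockOpenCV(const cv::Mat &img0_s, const cv::Mat &mask0, cv::Point tl0,
                          const cv::Mat &img1_s, const cv::Mat &mask1, cv::Point tl1,
                          cv::Mat &result, cv::Mat &result_mask)
{
    cv::detail::MultiBandBlender blender(false /*try_gpu*/, 5 /*num_bands*/, CV_32F /*weight_type*/);

    // Tell the blender the bounding rectangle of the final panorama.
    std::vector<cv::Point> corners = { tl0, tl1 };
    std::vector<cv::Size>  sizes   = { img0_s.size(), img1_s.size() };
    blender.prepare(corners, sizes);

    // Feed each warped image (CV_16SC3) together with its CV_8U mask and top-left corner.
    blender.feed(img0_s, mask0, tl0);
    blender.feed(img1_s, mask1, tl1);

    // Collapse the pyramids; the result comes back as CV_16SC3.
    blender.blend(result, result_mask);
    result.convertTo(result, CV_8U);
}

The refactored KBlender/KMultiBandBlender classes below keep the same prepare/feed/blend interface.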

Refactored Source Code

#ifndef STITCHINGBLENDER_H
#define STITCHINGBLENDER_H

#include "opencv2/opencv.hpp"

class KBlender
{
public:
    virtual ~KBlender() {}
    virtual void prepare(const std::vector<cv::Point> &corners, const std::vector<cv::Size> &sizes);
    virtual void prepare(cv::Rect dst_roi);
    virtual void feed(cv::InputArray img, cv::InputArray mask, cv::Point tl);
    virtual void blend(cv::InputOutputArray dst, cv::InputOutputArray dst_mask);

protected:
    cv::UMat dst_, dst_mask_;
    cv::Rect dst_roi_;
};


class KMultiBandBlender : public KBlender
{
public:
    KMultiBandBlender(int num_bands = 5, int weight_type = CV_32F);

    int numBands() const { return actual_num_bands_; }
    void setNumBands(int val) { actual_num_bands_ = val; }

    void prepare(cv::Rect dst_roi) CV_OVERRIDE;
    void feed(cv::InputArray img, cv::InputArray mask, cv::Point tl) CV_OVERRIDE;
    void blend(cv::InputOutputArray dst, cv::InputOutputArray dst_mask) CV_OVERRIDE;

private:
    int actual_num_bands_, num_bands_;
    std::vector<cv::UMat> dst_pyr_laplace_;
    std::vector<cv::UMat> dst_band_weights_;
    cv::Rect dst_roi_final_;
    int weight_type_; //CV_32F or CV_16S
};


//
// Auxiliary functions
void normalizeUsingWeightMap(cv::InputArray weight, CV_IN_OUT cv::InputOutputArray src);
void createLaplacePyr(cv::InputArray img, int num_levels, CV_IN_OUT std::vector<cv::UMat>& pyr);
void restoreImageFromLaplacePyr(CV_IN_OUT std::vector<cv::UMat>& pyr);


#endif // STITCHINGBLENDER_H
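
Below is the corresponding implementation file (presumably stitchingblender.cpp; the qDebug() calls only print timing information and can be replaced with any logger):
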
#include "stitchingblender.h"
#include <QDebug>

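// WEIGHT_EPS guards against division by zero when the accumulated band weights are normalized
// (see normalizeUsingWeightMap below); it is also the threshold used to build the final mask in blend().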
static const float WEIGHT_EPS = 1e-5f;

void KBlender::prepare(const std::vector<cv::Point> &corners, const std::vector<cv::Size> &sizes)
{
    prepare(cv::detail::resultRoi(corners, sizes));
}


void KBlender::prepare(cv::Rect dst_roi)
{
    dst_.create(dst_roi.size(), CV_16SC3);
    dst_.setTo(cv::Scalar::all(0));
    dst_mask_.create(dst_roi.size(), CV_8U);
    dst_mask_.setTo(cv::Scalar::all(0));
    dst_roi_ = dst_roi;
}


void KBlender::feed(cv::InputArray _img, cv::InputArray _mask, cv::Point tl)
{
    cv::Mat img = _img.getMat();
    cv::Mat mask = _mask.getMat();
    cv::Mat dst = dst_.getMat(cv::ACCESS_RW);
    cv::Mat dst_mask = dst_mask_.getMat(cv::ACCESS_RW);

    CV_Assert(img.type() == CV_16SC3);
    CV_Assert(mask.type() == CV_8U);
    int dx = tl.x - dst_roi_.x;
    int dy = tl.y - dst_roi_.y;

    for (int y = 0; y < img.rows; ++y)
    {
        const cv::Point3_<short> *src_row = img.ptr<cv::Point3_<short> >(y);
        cv::Point3_<short> *dst_row = dst.ptr<cv::Point3_<short> >(dy + y);
        const uchar *mask_row = mask.ptr<uchar>(y);
        uchar *dst_mask_row = dst_mask.ptr<uchar>(dy + y);

        for (int x = 0; x < img.cols; ++x)
        {
            if (mask_row[x])
                dst_row[dx + x] = src_row[x];
            dst_mask_row[dx + x] |= mask_row[x];
        }
    }
}


void KBlender::blend(cv::InputOutputArray dst, cv::InputOutputArray dst_mask)
{
    cv::UMat mask;
    compare(dst_mask_, 0, mask, cv::CMP_EQ);
    dst_.setTo(cv::Scalar::all(0), mask);
    dst.assign(dst_);
    dst_mask.assign(dst_mask_);
    dst_.release();
    dst_mask_.release();
}


KMultiBandBlender::KMultiBandBlender(int num_bands, int weight_type)
{
    num_bands_ = 0;
    setNumBands(num_bands);

    CV_Assert(weight_type == CV_32F || weight_type == CV_16S);
    weight_type_ = weight_type;
}

void KMultiBandBlender::prepare(cv::Rect dst_roi)
{
    dst_roi_final_ = dst_roi;

    // Crop unnecessary bands
    double max_len = static_cast<double>(std::max(dst_roi.width, dst_roi.height));
    num_bands_ = std::min(actual_num_bands_, static_cast<int>(ceil(std::log(max_len) / std::log(2.0))));
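    // e.g. for a 2000 x 1500 panorama, ceil(log2(2000)) = 11, so the default 5 bands are kept unchanged.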

    // Add border to the final image, to ensure sizes are divided by (1 << num_bands_)
    dst_roi.width += ((1 << num_bands_) - dst_roi.width % (1 << num_bands_)) % (1 << num_bands_);
    dst_roi.height += ((1 << num_bands_) - dst_roi.height % (1 << num_bands_)) % (1 << num_bands_);

    KBlender::prepare(dst_roi);

    dst_pyr_laplace_.resize(num_bands_ + 1);
    dst_pyr_laplace_[0] = dst_;
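    // Note: level 0 of the destination Laplacian pyramid shares memory with dst_ allocated in KBlender::prepare().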

    dst_band_weights_.resize(num_bands_ + 1);
    dst_band_weights_[0].create(dst_roi.size(), weight_type_);
    dst_band_weights_[0].setTo(0);

    for (int i = 1; i <= num_bands_; ++i)
    {
        dst_pyr_laplace_[i].create((dst_pyr_laplace_[i - 1].rows + 1) / 2,
                (dst_pyr_laplace_[i - 1].cols + 1) / 2, CV_16SC3);
        dst_band_weights_[i].create((dst_band_weights_[i - 1].rows + 1) / 2,
                (dst_band_weights_[i - 1].cols + 1) / 2, weight_type_);
        dst_pyr_laplace_[i].setTo(cv::Scalar::all(0));
        dst_band_weights_[i].setTo(0);
    }
}

void KMultiBandBlender::feed(cv::InputArray _img, cv::InputArray mask, cv::Point tl)
{
    int64 t = cv::getTickCount();

    cv::UMat img;
    img = _img.getUMat();

    CV_Assert(img.type() == CV_16SC3 || img.type() == CV_8UC3);
    CV_Assert(mask.type() == CV_8U);

    // Keep source image in memory with small border
    int gap = 3 * (1 << num_bands_);
    cv::Point tl_new(std::max(dst_roi_.x, tl.x - gap),
                     std::max(dst_roi_.y, tl.y - gap));
    cv::Point br_new(std::min(dst_roi_.br().x, tl.x + img.cols + gap),
                     std::min(dst_roi_.br().y, tl.y + img.rows + gap));

    // Ensure coordinates of top-left, bottom-right corners are divided by (1 << num_bands_).
    // After that scale between layers is exactly 2.
    //
    // We do it to avoid interpolation problems when keeping sub-images only. There is no such problem when
    // image is bordered to have size equal to the final image size, but this is too memory hungry approach.
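    // For example, with the default num_bands_ = 5 the alignment unit is (1 << 5) = 32, so tl_new and
    // br_new are snapped to offsets that are multiples of 32 relative to dst_roi_.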
    tl_new.x = dst_roi_.x + (((tl_new.x - dst_roi_.x) >> num_bands_) << num_bands_);
    tl_new.y = dst_roi_.y + (((tl_new.y - dst_roi_.y) >> num_bands_) << num_bands_);
    int width = br_new.x - tl_new.x;
    int height = br_new.y - tl_new.y;
    width += ((1 << num_bands_) - width % (1 << num_bands_)) % (1 << num_bands_);
    height += ((1 << num_bands_) - height % (1 << num_bands_)) % (1 << num_bands_);
    br_new.x = tl_new.x + width;
    br_new.y = tl_new.y + height;
    int dy = std::max(br_new.y - dst_roi_.br().y, 0);
    int dx = std::max(br_new.x - dst_roi_.br().x, 0);
    tl_new.x -= dx; br_new.x -= dx;
    tl_new.y -= dy; br_new.y -= dy;

    int top = tl.y - tl_new.y;
    int left = tl.x - tl_new.x;
    int bottom = br_new.y - tl.y - img.rows;
    int right = br_new.x - tl.x - img.cols;

    // Create the source image Laplacian pyramid
    cv::UMat img_with_border;
    copyMakeBorder(_img, img_with_border, top, bottom, left, right, cv::BORDER_REFLECT);

    qDebug() << "  Add border to the source image, time: " << ((cv::getTickCount() - t) / cv::getTickFrequency())*1000 << " ms";

    t = cv::getTickCount();

    std::vector<cv::UMat> src_pyr_laplace;
    createLaplacePyr(img_with_border, num_bands_, src_pyr_laplace);

    qDebug() << "  Create the source image Laplacian pyramid, time: " << ((cv::getTickCount() - t) / cv::getTickFrequency())*1000 << " ms";

    t = cv::getTickCount();

    // Create the weight map Gaussian pyramid
    cv::UMat weight_map;
    std::vector<cv::UMat> weight_pyr_gauss(num_bands_ + 1);

    if (weight_type_ == CV_32F)
    {
        mask.getUMat().convertTo(weight_map, CV_32F, 1./255.);
    }
    else // weight_type_ == CV_16S
    {
        mask.getUMat().convertTo(weight_map, CV_16S);
        cv::UMat add_mask;
        compare(mask, 0, add_mask, cv::CMP_NE);
        add(weight_map, cv::Scalar::all(1), weight_map, add_mask);
    }

    copyMakeBorder(weight_map, weight_pyr_gauss[0], top, bottom, left, right, cv::BORDER_CONSTANT);

    for (int i = 0; i < num_bands_; ++i)
        pyrDown(weight_pyr_gauss[i], weight_pyr_gauss[i + 1]);

    qDebug() << "  Create the weight map Gaussian pyramid, time: " << ((cv::getTickCount() - t) / cv::getTickFrequency())*1000 << " ms";

    t = cv::getTickCount();

    int y_tl = tl_new.y - dst_roi_.y;
    int y_br = br_new.y - dst_roi_.y;
    int x_tl = tl_new.x - dst_roi_.x;
    int x_br = br_new.x - dst_roi_.x;

    // Add weighted layer of the source image to the final Laplacian pyramid layer
    for (int i = 0; i <= num_bands_; ++i)
    {
        cv::Rect rc(x_tl, y_tl, x_br - x_tl, y_br - y_tl);
        {
            cv::Mat _src_pyr_laplace = src_pyr_laplace[i].getMat(cv::ACCESS_READ);
            cv::Mat _dst_pyr_laplace = dst_pyr_laplace_[i](rc).getMat(cv::ACCESS_RW);
            cv::Mat _weight_pyr_gauss = weight_pyr_gauss[i].getMat(cv::ACCESS_READ);
            cv::Mat _dst_band_weights = dst_band_weights_[i](rc).getMat(cv::ACCESS_RW);
            if (weight_type_ == CV_32F)
            {
                for (int y = 0; y < rc.height; ++y)
                {
                    const cv::Point3_<short>* src_row = _src_pyr_laplace.ptr<cv::Point3_<short> >(y);
                    cv::Point3_<short>* dst_row = _dst_pyr_laplace.ptr<cv::Point3_<short> >(y);
                    const float* weight_row = _weight_pyr_gauss.ptr<float>(y);
                    float* dst_weight_row = _dst_band_weights.ptr<float>(y);

                    for (int x = 0; x < rc.width; ++x)
                    {
                        dst_row[x].x += static_cast<short>(src_row[x].x * weight_row[x]);
                        dst_row[x].y += static_cast<short>(src_row[x].y * weight_row[x]);
                        dst_row[x].z += static_cast<short>(src_row[x].z * weight_row[x]);
                        dst_weight_row[x] += weight_row[x];
                    }
                }
            }
            else // weight_type_ == CV_16S
            {
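                // Fixed-point path: the CV_16S weight map holds values in [0, 256] (non-zero mask pixels were
                // mapped to 1..256 above), so each weighted product is scaled back with >> 8;
                // normalizeUsingWeightMap() later compensates with << 8.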
                for (int y = 0; y < y_br - y_tl; ++y)
                {
                    const cv::Point3_<short>* src_row = _src_pyr_laplace.ptr<cv::Point3_<short> >(y);
                    cv::Point3_<short>* dst_row = _dst_pyr_laplace.ptr<cv::Point3_<short> >(y);
                    const short* weight_row = _weight_pyr_gauss.ptr<short>(y);
                    short* dst_weight_row = _dst_band_weights.ptr<short>(y);

                    for (int x = 0; x < x_br - x_tl; ++x)
                    {
                        dst_row[x].x += short((src_row[x].x * weight_row[x]) >> 8);
                        dst_row[x].y += short((src_row[x].y * weight_row[x]) >> 8);
                        dst_row[x].z += short((src_row[x].z * weight_row[x]) >> 8);
                        dst_weight_row[x] += weight_row[x];
                    }
                }
            }
        }

        x_tl /= 2; y_tl /= 2;
        x_br /= 2; y_br /= 2;
    }

    qDebug() << "  Add weighted layer of the source image to the final Laplacian pyramid layer, time: " << ((cv::getTickCount() - t) / cv::getTickFrequency())*1000 << " ms";
}


void KMultiBandBlender::blend(cv::InputOutputArray dst, cv::InputOutputArray dst_mask)
{
    cv::Rect dst_rc(0, 0, dst_roi_final_.width, dst_roi_final_.height);

    cv::UMat dst_band_weights_0;

    for (int i = 0; i <= num_bands_; ++i)
        normalizeUsingWeightMap(dst_band_weights_[i], dst_pyr_laplace_[i]);

    restoreImageFromLaplacePyr(dst_pyr_laplace_);

    dst_ = dst_pyr_laplace_[0](dst_rc);
    dst_band_weights_0 = dst_band_weights_[0];

    dst_pyr_laplace_.clear();
    dst_band_weights_.clear();

    compare(dst_band_weights_0(dst_rc), WEIGHT_EPS, dst_mask_, cv::CMP_GT);

    KBlender::blend(dst, dst_mask);
}


//
// Auxiliary functions

void normalizeUsingWeightMap(cv::InputArray _weight, cv::InputOutputArray _src)
{
    cv::Mat src;
    cv::Mat weight;

    src = _src.getMat();
    weight = _weight.getMat();

    CV_Assert(src.type() == CV_16SC3);

    if (weight.type() == CV_32FC1)
    {
        for (int y = 0; y < src.rows; ++y)
        {
            cv::Point3_<short> *row = src.ptr<cv::Point3_<short> >(y);
            const float *weight_row = weight.ptr<float>(y);

            for (int x = 0; x < src.cols; ++x)
            {
                row[x].x = static_cast<short>(row[x].x / (weight_row[x] + WEIGHT_EPS));
                row[x].y = static_cast<short>(row[x].y / (weight_row[x] + WEIGHT_EPS));
                row[x].z = static_cast<short>(row[x].z / (weight_row[x] + WEIGHT_EPS));
            }
        }
    }
    else
    {
        CV_Assert(weight.type() == CV_16SC1);

        for (int y = 0; y < src.rows; ++y)
        {
            const short *weight_row = weight.ptr<short>(y);
            cv::Point3_<short> *row = src.ptr<cv::Point3_<short> >(y);

            for (int x = 0; x < src.cols; ++x)
            {
                int w = weight_row[x] + 1;
                row[x].x = static_cast<short>((row[x].x << 8) / w);
                row[x].y = static_cast<short>((row[x].y << 8) / w);
                row[x].z = static_cast<short>((row[x].z << 8) / w);
            }
        }
    }
}


void createLaplacePyr(cv::InputArray img, int num_levels, std::vector<cv::UMat> &pyr)
{
    pyr.resize(num_levels + 1);

    if(img.depth() == CV_8U)
    {
        if(num_levels == 0)
        {
            img.getUMat().convertTo(pyr[0], CV_16S);
            return;
        }

        cv::UMat downNext;
        cv::UMat current = img.getUMat();
        pyrDown(img, downNext);

        for(int i = 1; i < num_levels; ++i)
        {
            cv::UMat lvl_up;
            cv::UMat lvl_down;

            pyrDown(downNext, lvl_down);
            pyrUp(downNext, lvl_up, current.size());
            subtract(current, lvl_up, pyr[i-1], cv::noArray(), CV_16S);

            current = downNext;
            downNext = lvl_down;
        }

        {
            cv::UMat lvl_up;
            pyrUp(downNext, lvl_up, current.size());
            subtract(current, lvl_up, pyr[num_levels-1], cv::noArray(), CV_16S);

            downNext.convertTo(pyr[num_levels], CV_16S);
        }
    }
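    // Input is not 8-bit (e.g. the CV_16SC3 images fed by KMultiBandBlender::feed): build the Gaussian
    // pyramid in place, then subtract the upsampled coarser level from each finer level to get the
    // Laplacian bands.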
    else
    {
        pyr[0] = img.getUMat();
        for (int i = 0; i < num_levels; ++i)
            pyrDown(pyr[i], pyr[i + 1]);
        cv::UMat tmp;
        for (int i = 0; i < num_levels; ++i)
        {
            pyrUp(pyr[i + 1], tmp, pyr[i].size());
            subtract(pyr[i], tmp, pyr[i]);
        }
    }
}


void restoreImageFromLaplacePyr(std::vector<cv::UMat> &pyr)
{
    if (pyr.empty())
        return;
    cv::UMat tmp;
    for (size_t i = pyr.size() - 1; i > 0; --i)
    {
        pyrUp(pyr[i], tmp, pyr[i - 1].size());
        add(tmp, pyr[i - 1], pyr[i - 1]);
    }
}

Test Code

cv::Mat img0 = cv::imread("E:/test/google_satellite_0000.bmp", cv::IMREAD_COLOR);
cv::Mat img1 = cv::imread("E:/test/google_satellite_0001.bmp", cv::IMREAD_COLOR);

cv::Mat mask0 = cv::Mat_<uchar>(img0.size(), 255);
cv::Mat mask1 = cv::Mat_<uchar>(img1.size(), 255);

std::vector<cv::UMat>   imgs_warped;
std::vector<cv::UMat>   masks_warped;
std::vector<cv::Point>  corners_warped;
std::vector<cv::Size>   sizes_warped;

imgs_warped.push_back(img0.getUMat(cv::ACCESS_READ));
imgs_warped.push_back(img1.getUMat(cv::ACCESS_READ));

masks_warped.push_back(mask0.getUMat(cv::ACCESS_READ));
masks_warped.push_back(mask1.getUMat(cv::ACCESS_READ));

corners_warped.push_back(cv::Point(0,0));
corners_warped.push_back(cv::Point(0, img0.rows / 2)); // Assume img0 and img1 overlap vertically by half of img0. In a real stitching pipeline this relative placement would already be known from image registration.

sizes_warped.push_back(img0.size());
sizes_warped.push_back(img1.size());

std::vector<cv::UMat> imgs_warped_f(imgs_warped.size());
for (unsigned int i = 0; i < imgs_warped.size(); ++i)
    imgs_warped[i].convertTo(imgs_warped_f[i], CV_32F);

// Optimal seam finding with OpenCV
cv::Ptr<cv::detail::SeamFinder> seam_finder;
seam_finder = cv::makePtr<cv::detail::DpSeamFinder>(cv::detail::DpSeamFinder::COLOR);
seam_finder->find(imgs_warped_f, corners_warped, masks_warped);

// Multi-band blending
cv::Ptr<KBlender> blender = cv::makePtr<KMultiBandBlender>();
KMultiBandBlender* mblender = dynamic_cast<KMultiBandBlender*>(blender.get());
mblender->setNumBands(5);

blender->prepare(corners_warped, sizes_warped);

for (unsigned int i = 0; i < imgs_warped.size(); i++)
{
    cv::Mat img_warped_s;
    imgs_warped[i].convertTo(img_warped_s, CV_16S);

    cv::Mat mask_warped = masks_warped[i].getMat(cv::ACCESS_READ);
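    // Dilate the seam mask as in OpenCV's stitching_detailed.cpp sample. Note that dilated_mask & mask_warped
    // equals mask_warped here; the sample ANDs the dilated seam mask with the full (pre-seam) warp mask.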
    cv::Mat dilated_mask;
    cv::dilate(mask_warped, dilated_mask, cv::Mat());
    mask_warped = dilated_mask & mask_warped;

    blender->feed(img_warped_s, mask_warped, corners_warped[i]);
}

cv::Mat result, result_mask;
blender->blend(result, result_mask);

result.convertTo(result, CV_8U);
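
To inspect the output, the blended image and its mask can be written to disk or displayed; a minimal sketch (the output paths are placeholders):

cv::imwrite("E:/test/stitch_result.png", result);            // save the blended panorama (hypothetical path)
cv::imwrite("E:/test/stitch_result_mask.png", result_mask);  // save the coverage mask
cv::imshow("Multi-band blending result", result);
cv::waitKey(0);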

Results

Figure: input image (top), img0
Figure: input image (bottom), img1
Figure: seam-finding result, mask for the top image (img0)
Figure: seam-finding result, mask for the bottom image (img1)
Figure: stitching result after multi-band blending

