Data source closes idle connections after 10 minutes, causing query exceptions (no available connection)


Due to my limited ability, this article only reflects my own understanding; please point out any mistakes promptly, and contact me if anything here infringes on your rights.

1 Background

At work we introduced Druid to manage database connections. The data source forcibly closes any connection that has been idle for more than 10 minutes, so roughly every 10 minutes a query failed with a "no available connection" exception. Business traffic is extremely low (perhaps one request per hour). I therefore dug into the Druid source code and worked out the following solutions.
(1) Set maxEvictableIdleTimeMillis to 300000, so connections idle for more than 5 minutes are forcibly evicted.
The next request then establishes a fresh connection.
Pros: suits scheduled jobs or workloads with extremely low request volume.
(2) Keep-alive:
keepAlive: true
keepAliveBetweenTimeMillis: 120000
Pros: valid connections are kept alive continuously, so business requests are served immediately.
Cons: the cost of holding idle connections.
A configuration sketch of both options follows this list.
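
As a rough illustration of the two options, here is a minimal sketch of the equivalent programmatic configuration (the URL and credentials are placeholders; the Spring Boot spring.datasource.druid.* properties map onto the same setters):

import com.alibaba.druid.pool.DruidDataSource;

public class DruidIdleConfigSketch {
    // Option (1): shorten the eviction window so idle connections are dropped
    // before the data source has a chance to kill them on its side.
    static void configureShortEviction(DruidDataSource ds) {
        ds.setMinEvictableIdleTimeMillis(300_000L); // must not exceed the max below
        ds.setMaxEvictableIdleTimeMillis(300_000L); // evict anything idle longer than 5 min
    }

    // Option (2): hold minIdle connections and validate them periodically.
    static void configureKeepAlive(DruidDataSource ds) {
        ds.setMinIdle(2);
        ds.setKeepAlive(true);
        ds.setKeepAliveBetweenTimeMillis(120_000L);    // must be > timeBetweenEvictionRunsMillis
        ds.setTimeBetweenEvictionRunsMillis(60_000L);
        ds.setValidationQuery("SELECT 1");             // validity check (MySQL may use a ping instead)
    }

    public static void main(String[] args) {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo"); // placeholder connection details
        ds.setUsername("demo");
        ds.setPassword("demo");
        configureKeepAlive(ds); // or configureShortEviction(ds)
    }
}

Only one of the two helpers should be applied; they implement the two alternative strategies described above.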

2 Technical walkthrough

2.1 Adding Druid and the default configuration

Add the Maven dependency:
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.2.23</version>
        </dependency>
// SPI entry that plugs Druid into Spring Boot auto-configuration (META-INF/spring.factories)
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure
@Configuration
@ConditionalOnProperty(name = "spring.datasource.type",
        havingValue = "com.alibaba.druid.pool.DruidDataSource",
        matchIfMissing = true)
@ConditionalOnClass(DruidDataSource.class)
@AutoConfigureBefore(DataSourceAutoConfiguration.class)
@EnableConfigurationProperties({DruidStatProperties.class, DataSourceProperties.class})
@Import({DruidSpringAopConfiguration.class,
        DruidStatViewServletConfiguration.class,
        DruidWebStatFilterConfiguration.class,
        DruidFilterConfiguration.class})
public class DruidDataSourceAutoConfigure {
    private static final Logger LOGGER = LoggerFactory.getLogger(DruidDataSourceAutoConfigure.class);
    @Bean
    @ConditionalOnMissingBean({DruidDataSourceWrapper.class,
        DruidDataSource.class,
        DataSource.class})
    public DruidDataSourceWrapper dataSource() {
        LOGGER.info("Init DruidDataSource");
        return new DruidDataSourceWrapper();
    }
}
@ConfigurationProperties("spring.datasource.druid")
public class DruidDataSourceWrapper extends DruidDataSource implements InitializingBean {
	xxx
}
// Key default values declared in DruidAbstractDataSource
    // initial pool size = 0
    public static final int DEFAULT_INITIAL_SIZE = 0;
    // maximum number of active connections = 8
    public static final int DEFAULT_MAX_ACTIVE_SIZE = 8;
    // maximum number of idle connections = 8
    public static final int DEFAULT_MAX_IDLE = 8;
    // minimum number of idle connections = 0
    public static final int DEFAULT_MIN_IDLE = 0;
    // maximum wait time when acquiring a connection = -1 (wait indefinitely)
    public static final int DEFAULT_MAX_WAIT = -1;
    // no validation query by default
    public static final String DEFAULT_VALIDATION_QUERY = null;
    // by default the pool does not validate a connection when it is borrowed
    public static final boolean DEFAULT_TEST_ON_BORROW = false;
    // by default the pool does not validate a connection when it is returned
    public static final boolean DEFAULT_TEST_ON_RETURN = false;
    // idle connections are checked during eviction runs by default
    public static final boolean DEFAULT_WHILE_IDLE = true;
    // interval between eviction runs = 1 min
    public static final long DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS = 60 * 1000L;
    // retry interval after a connection error = 0.5 s
    public static final long DEFAULT_TIME_BETWEEN_CONNECT_ERROR_MILLIS = 500;
    public static final int DEFAULT_NUM_TESTS_PER_EVICTION_RUN = 3;
    // connect timeout = 10 s
    public static final int DEFAULT_TIME_CONNECT_TIMEOUT_MILLIS = 10_000;
    // socket timeout = 10 s
    public static final int DEFAULT_TIME_SOCKET_TIMEOUT_MILLIS = 10_000;
    // minimum idle time before a connection becomes evictable = 30 min
    public static final long DEFAULT_MIN_EVICTABLE_IDLE_TIME_MILLIS = 1000L * 60L * 30L;
    // maximum idle time before a connection is forcibly evicted = 7 h
    public static final long DEFAULT_MAX_EVICTABLE_IDLE_TIME_MILLIS = 1000L * 60L * 60L * 7;
    // physical connection timeout disabled by default
    public static final long DEFAULT_PHY_TIMEOUT_MILLIS = -1;
    // auto-commit enabled by default
    protected volatile boolean defaultAutoCommit = true;
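
If you want to double-check these defaults for the exact Druid version on your classpath, a small sketch like the following reads them back through the public getters (it only constructs the pool object, touches no database, and assumes no druid.* system properties override the defaults):

import com.alibaba.druid.pool.DruidDataSource;

public class DruidDefaultsProbe {
    public static void main(String[] args) {
        DruidDataSource ds = new DruidDataSource(); // not initialized, so no database is needed
        System.out.println("maxActive                     = " + ds.getMaxActive());
        System.out.println("minIdle                       = " + ds.getMinIdle());
        System.out.println("timeBetweenEvictionRunsMillis = " + ds.getTimeBetweenEvictionRunsMillis());
        System.out.println("minEvictableIdleTimeMillis    = " + ds.getMinEvictableIdleTimeMillis());
        System.out.println("maxEvictableIdleTimeMillis    = " + ds.getMaxEvictableIdleTimeMillis());
        System.out.println("keepAlive                     = " + ds.isKeepAlive());
    }
}

With 1.2.23 this should print 8, 0, 60000, 1800000 (30 min), 25200000 (7 h) and false, which is exactly why a connection the server kills after 10 idle minutes can still sit in the pool.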

2.2 Initialization at project startup

    @Bean
    @ConditionalOnMissingBean({DruidDataSourceWrapper.class,
        DruidDataSource.class,
        DataSource.class})
    public DruidDataSourceWrapper dataSource() {
        LOGGER.info("Init DruidDataSource");
        return new DruidDataSourceWrapper();
    }
	public DruidDataSource() {
        this(false); // unfair lock by default
    }
    public DruidDataSource(boolean fairLock) {
        super(fairLock);
        // accept configuration passed in via druid.* system properties
        configFromPropeties(System.getProperties());
    }
    // initialize the pool lock (unfair by default)
    public DruidAbstractDataSource(boolean lockFair) {
        lock = new ReentrantLock(lockFair);
        notEmpty = lock.newCondition();
        empty = lock.newCondition();
    }
@ConfigurationProperties("spring.datasource.druid")
public class DruidDataSourceWrapper extends DruidDataSource implements InitializingBean {
	xxx
    @Override
    public void afterPropertiesSet() throws Exception {
		xxx
        init(); // performs initialization, which calls com.alibaba.druid.pool.DruidDataSource#init
    }
    xxx
}
public void init() throws SQLException {
        if (inited) {
            return;
        }

        // bug fixed for dead lock, for issue #2980
        DruidDriver.getInstance();

        final ReentrantLock lock = this.lock;
        try {
            lock.lockInterruptibly();
        } catch (InterruptedException e) {
            throw new SQLException("interrupt", e);
        }

        boolean init = false;
        try {
            if (inited) {
                return;
            }

            initStackTrace = Utils.toString(Thread.currentThread().getStackTrace());

            this.id = DruidDriver.createDataSourceId();
            if (this.id > 1) {
                long delta = (this.id - 1) * 100000;
                connectionIdSeedUpdater.addAndGet(this, delta);
                statementIdSeedUpdater.addAndGet(this, delta);
                resultSetIdSeedUpdater.addAndGet(this, delta);
                transactionIdSeedUpdater.addAndGet(this, delta);
            }

            if (this.jdbcUrl != null) {
                this.jdbcUrl = this.jdbcUrl.trim();
                initFromWrapDriverUrl();
            }
            initTimeoutsFromUrlOrProperties();

            for (Filter filter : filters) {
                filter.init(this);
            }

            if (this.dbTypeName == null || this.dbTypeName.length() == 0) {
                this.dbTypeName = JdbcUtils.getDbType(jdbcUrl, null);
            }

            DbType dbType = DbType.of(this.dbTypeName);
            if (JdbcUtils.isMysqlDbType(dbType)) {
                boolean cacheServerConfigurationSet = false;
                if (this.connectProperties.containsKey("cacheServerConfiguration")) {
                    cacheServerConfigurationSet = true;
                } else if (this.jdbcUrl.indexOf("cacheServerConfiguration") != -1) {
                    cacheServerConfigurationSet = true;
                }
                if (cacheServerConfigurationSet) {
                    this.connectProperties.put("cacheServerConfiguration", "true");
                }
            }

            if (maxActive <= 0) {
                throw new IllegalArgumentException("illegal maxActive " + maxActive);
            }

            if (maxActive < minIdle) {
                throw new IllegalArgumentException("illegal maxActive " + maxActive);
            }

            if (getInitialSize() > maxActive) {
                throw new IllegalArgumentException("illegal initialSize " + this.initialSize + ", maxActive " + maxActive);
            }

            if (timeBetweenLogStatsMillis > 0 && useGlobalDataSourceStat) {
                throw new IllegalArgumentException("timeBetweenLogStatsMillis not support useGlobalDataSourceStat=true");
            }

            if (maxEvictableIdleTimeMillis < minEvictableIdleTimeMillis) {
                throw new SQLException("maxEvictableIdleTimeMillis must be grater than minEvictableIdleTimeMillis");
            }

            if (keepAlive && keepAliveBetweenTimeMillis <= timeBetweenEvictionRunsMillis) {
                throw new SQLException("keepAliveBetweenTimeMillis must be greater than timeBetweenEvictionRunsMillis");
            }

            if (this.driverClass != null) {
                this.driverClass = driverClass.trim();
            }

            initFromSPIServiceLoader();

            resolveDriver();

            initCheck();

            this.netTimeoutExecutor = new SynchronousExecutor();

            initExceptionSorter();
            initValidConnectionChecker();
            validationQueryCheck();

            if (isUseGlobalDataSourceStat()) {
                dataSourceStat = JdbcDataSourceStat.getGlobal();
                if (dataSourceStat == null) {
                    dataSourceStat = new JdbcDataSourceStat("Global", "Global", this.dbTypeName);
                    JdbcDataSourceStat.setGlobal(dataSourceStat);
                }
                if (dataSourceStat.getDbType() == null) {
                    dataSourceStat.setDbType(this.dbTypeName);
                }
            } else {
                dataSourceStat = new JdbcDataSourceStat(this.name, this.jdbcUrl, this.dbTypeName, this.connectProperties);
            }
            dataSourceStat.setResetStatEnable(this.resetStatEnable);

            connections = new DruidConnectionHolder[maxActive];
            evictConnections = new DruidConnectionHolder[maxActive];
            keepAliveConnections = new DruidConnectionHolder[maxActive];
            nullConnections = new DruidConnectionHolder[maxActive];

            SQLException connectError = null;

            if (createScheduler != null && asyncInit) {
                for (int i = 0; i < initialSize; ++i) {
                    submitCreateTask(true);
                }
            } else if (!asyncInit) {
                // init connections
                while (poolingCount < initialSize) {
                    try {
                        PhysicalConnectionInfo pyConnectInfo = createPhysicalConnection();
                        DruidConnectionHolder holder = new DruidConnectionHolder(this, pyConnectInfo);
                        connections[poolingCount++] = holder;
                    } catch (SQLException ex) {
                        LOG.error("init datasource error, url: " + this.getUrl(), ex);
                        if (initExceptionThrow) {
                            connectError = ex;
                            break;
                        } else {
                            Thread.sleep(3000);
                        }
                    }
                }

                if (poolingCount > 0) {
                    poolingPeak = poolingCount;
                    poolingPeakTime = System.currentTimeMillis();
                }
            }

            createAndLogThread();
            createAndStartCreatorThread();
            createAndStartDestroyThread();

            // await threads initedLatch to support dataSource restart.
            if (createConnectionThread != null) {
                createConnectionThread.getInitedLatch().await();
            }
            if (destroyConnectionThread != null) {
                destroyConnectionThread.getInitedLatch().await();
            }

            init = true;

            initedTime = new Date();
            registerMbean();

            if (connectError != null && poolingCount == 0) {
                throw connectError;
            }

            if (keepAlive) {
                if (createScheduler != null) {
                    // async fill to minIdle
                    for (int i = 0; i < minIdle - initialSize; ++i) {
                        submitCreateTask(true);
                    }
                } else {
                    empty.signal();
                }
            }

        } catch (SQLException e) {
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;
        } catch (InterruptedException e) {
            throw new SQLException(e.getMessage(), e);
        } catch (RuntimeException e) {
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;
        } catch (Error e) {
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;

        } finally {
            inited = true;
            lock.unlock();

            if (init && LOG.isInfoEnabled()) {
                String msg = "{dataSource-" + this.getID();

                if (this.name != null && !this.name.isEmpty()) {
                    msg += ",";
                    msg += this.name;
                }

                msg += "} inited";

                LOG.info(msg);
            }
        }
    }
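
Two of the argument checks at the top of init() matter for the configurations discussed here: maxEvictableIdleTimeMillis must be greater than or equal to minEvictableIdleTimeMillis, and with keepAlive enabled keepAliveBetweenTimeMillis must be greater than timeBetweenEvictionRunsMillis. Below is a hedged sketch of a manual (non-Spring) initialization that satisfies both; the URL and credentials are placeholders, and init() really opens initialSize physical connections:

import com.alibaba.druid.pool.DruidDataSource;
import java.sql.Connection;

public class ManualInitSketch {
    public static void main(String[] args) throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo");  // placeholder connection details
        ds.setUsername("demo");
        ds.setPassword("demo");
        ds.setInitialSize(1);                           // must be <= maxActive
        ds.setMinIdle(1);                               // must be <= maxActive
        ds.setMaxActive(8);
        ds.setKeepAlive(true);
        ds.setTimeBetweenEvictionRunsMillis(60_000L);
        ds.setKeepAliveBetweenTimeMillis(120_000L);     // > timeBetweenEvictionRunsMillis, or init() throws
        ds.setMinEvictableIdleTimeMillis(300_000L);
        ds.setMaxEvictableIdleTimeMillis(600_000L);     // >= minEvictableIdleTimeMillis, or init() throws

        ds.init(); // the same method DruidDataSourceWrapper#afterPropertiesSet ends up calling
        try (Connection conn = ds.getConnection()) {
            System.out.println("connected: " + !conn.isClosed());
        } finally {
            ds.close();
        }
    }
}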

2.3 How idle connections are reclaimed

public class DestroyConnectionThread extends Thread {
       xxx
        public void run() {
            initedLatch.countDown();
            for (; !Thread.currentThread().isInterrupted(); ) {
                // eviction starts from the head of the array
                try { // stop the loop once the pool is closed or closing
                    if (closed || closing) {
                        break;
                    }
                    // run once every timeBetweenEvictionRunsMillis
                    if (timeBetweenEvictionRunsMillis > 0) {
                        Thread.sleep(timeBetweenEvictionRunsMillis);
                    } else { // otherwise run once per second
                        Thread.sleep(1000);
                    }
                    if (Thread.interrupted()) {
                        break;
                    }
                    destroyTask.run();
                } catch (InterruptedException e) {
                    break;
                }
            }
        }
    }
    public class DestroyTask implements Runnable {
        public DestroyTask() {
        }
        @Override
        public void run() {
            // shrink the pool, i.e. reclaim idle connections
            shrink(true, keepAlive);
            if (isRemoveAbandoned()) {
                removeAbandoned();
            }
        }
    }
    // checkTime is true here; keepAlive defaults to false
    public void shrink(boolean checkTime, boolean keepAlive) {
        if (poolingCount == 0) {
            return;
        }

        final Lock lock = this.lock;
        try {
            lock.lockInterruptibly();
        } catch (InterruptedException e) {
            return;
        }

        boolean needFill = false;
        int evictCount = 0;
        int keepAliveCount = 0;
        int fatalErrorIncrement = fatalErrorCount - fatalErrorCountLastShrink;
        fatalErrorCountLastShrink = fatalErrorCount;

        try {
            if (!inited) {
                return;
            }

            final int checkCount = poolingCount - minIdle;
            final long currentTimeMillis = System.currentTimeMillis();
            // remaining is the position of the next connection should be retained in the pool.
            int remaining = 0;
            int i = 0;
            for (; i < poolingCount; ++i) {
                DruidConnectionHolder connection = connections[i];

                if ((onFatalError || fatalErrorIncrement > 0) && (lastFatalErrorTimeMillis > connection.connectTimeMillis)) {
                    keepAliveConnections[keepAliveCount++] = connection;
                    continue;
                }

                if (checkTime) {
                    if (phyTimeoutMillis > 0) {
                        long phyConnectTimeMillis = currentTimeMillis - connection.connectTimeMillis;
                        if (phyConnectTimeMillis > phyTimeoutMillis) {
                            evictConnections[evictCount++] = connection;
                            continue;
                        }
                    }

                    long idleMillis = currentTimeMillis - connection.lastActiveTimeMillis;

                    if (idleMillis < minEvictableIdleTimeMillis
                            && idleMillis < keepAliveBetweenTimeMillis) {
                        break;
                    }
                    // idle time >= minEvictableIdleTimeMillis
                    if (idleMillis >= minEvictableIdleTimeMillis) {
                        if (i < checkCount) {
                            evictConnections[evictCount++] = connection;
                            continue;
                        // idle time > maxEvictableIdleTimeMillis
                        } else if (idleMillis > maxEvictableIdleTimeMillis) {
                            // add to the eviction array and increment the eviction count
                            evictConnections[evictCount++] = connection;
                            continue;
                        }
                    }
                    // if keepAlive is enabled and the connection has been idle >= keepAliveBetweenTimeMillis
                    if (keepAlive && idleMillis >= keepAliveBetweenTimeMillis
                            && currentTimeMillis - connection.lastKeepTimeMillis >= keepAliveBetweenTimeMillis) {
                        keepAliveConnections[keepAliveCount++] = connection;
                    } else {
                        if (i != remaining) {
                            // move the connection to the new position for retaining it in the pool.
                            connections[remaining] = connection;
                        }
                        remaining++;
                    }
                } else {
                    if (i < checkCount) {
                        evictConnections[evictCount++] = connection;
                    } else {
                        break;
                    }
                }
            }

            // shrink connections by HotSpot intrinsic function _arraycopy for performance optimization.
            int removeCount = evictCount + keepAliveCount;
            if (removeCount > 0) {
                int breakedCount = poolingCount - i;
                if (breakedCount > 0) {
                    // retains the connections that start at the break position.
                    System.arraycopy(connections, i, connections, remaining, breakedCount);
                    remaining += breakedCount;
                }
                // clean the old references of the connections that have been moved forward to the new positions.
                System.arraycopy(nullConnections, 0, connections, remaining, removeCount);
                poolingCount -= removeCount;
            }
            keepAliveCheckCount += keepAliveCount;

            if (keepAlive && poolingCount + activeCount < minIdle) {
                needFill = true;
            }
        } finally {
            lock.unlock();
        }

        if (evictCount > 0) {
            // close every connection collected in the eviction array
            for (int i = 0; i < evictCount; ++i) {
                DruidConnectionHolder item = evictConnections[i];
                Connection connection = item.getConnection();
                JdbcUtils.close(connection);
                destroyCountUpdater.incrementAndGet(this);
            }
            // use HotSpot intrinsic function _arraycopy for performance optimization.
            System.arraycopy(nullConnections, 0, evictConnections, 0, evictConnections.length);
        }

        if (keepAliveCount > 0) {
            // keep order
            for (int i = keepAliveCount - 1; i >= 0; --i) {
                DruidConnectionHolder holder = keepAliveConnections[i];
                Connection connection = holder.getConnection();
                holder.incrementKeepAliveCheckCount();

                boolean validate = false;
                try {
                    this.validateConnection(connection);
                    validate = true;
                } catch (Throwable error) {
                    keepAliveCheckErrorLast = error;
                    keepAliveCheckErrorCountUpdater.incrementAndGet(this);
                    if (LOG.isDebugEnabled()) {
                        LOG.debug("keepAliveErr", error);
                    }
                }

                boolean discard = !validate;
                if (validate) {
                    holder.lastKeepTimeMillis = System.currentTimeMillis();
                    boolean putOk = put(holder, 0L, true);
                    if (!putOk) {
                        discard = true;
                    }
                }

                if (discard) {
                    try {
                        connection.close();
                    } catch (Exception error) {
                        discardErrorLast = error;
                        discardErrorCountUpdater.incrementAndGet(DruidDataSource.this);
                        if (LOG.isErrorEnabled()) {
                            LOG.error("discard connection error", error);
                        }
                    }

                    if (holder.socket != null) {
                        try {
                            holder.socket.close();
                        } catch (Exception error) {
                            discardErrorLast = error;
                            discardErrorCountUpdater.incrementAndGet(DruidDataSource.this);
                            if (LOG.isErrorEnabled()) {
                                LOG.error("discard connection error", error);
                            }
                        }
                    }

                    lock.lock();
                    try {
                        holder.discard = true;
                        discardCount++;

                        if (activeCount + poolingCount + createTaskCount < minIdle) {
                            needFill = true;
                        }
                    } finally {
                        lock.unlock();
                    }
                }
            }
            this.getDataSourceStat().addKeepAliveCheckCount(keepAliveCount);
            // use HotSpot intrinsic function _arraycopy for performance optimization.
            System.arraycopy(nullConnections, 0, keepAliveConnections, 0, keepAliveConnections.length);
        }

        if (needFill) {
            lock.lock();
            try {
                int fillCount = minIdle - (activeCount + poolingCount + createTaskCount);
                emptySignal(fillCount);
            } finally {
                lock.unlock();
            }
        } else if (fatalErrorIncrement > 0) {
            lock.lock();
            try {
                emptySignal();
            } finally {
                lock.unlock();
            }
        }
    }
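
To make the branch logic above easier to follow, here is a small self-contained illustration (not Druid API, just a simplified restatement that ignores phyTimeout, lastKeepTimeMillis and the early break) of how shrink() classifies a single idle connection:

public class ShrinkDecisionSketch {
    enum Action { KEEP, EVICT, KEEP_ALIVE_CHECK }

    // beyondMinIdle corresponds to "i < poolingCount - minIdle" in the original loop,
    // i.e. this connection can be evicted without dropping the pool below minIdle.
    static Action classify(long idleMillis, boolean beyondMinIdle,
                           long minEvictableIdleTimeMillis, long maxEvictableIdleTimeMillis,
                           boolean keepAlive, long keepAliveBetweenTimeMillis) {
        if (idleMillis >= minEvictableIdleTimeMillis) {
            if (beyondMinIdle) {
                return Action.EVICT;                    // excess over minIdle: evict
            } else if (idleMillis > maxEvictableIdleTimeMillis) {
                return Action.EVICT;                    // even within minIdle: force-evict
            }
        }
        if (keepAlive && idleMillis >= keepAliveBetweenTimeMillis) {
            return Action.KEEP_ALIVE_CHECK;             // validate and put back if still alive
        }
        return Action.KEEP;
    }

    public static void main(String[] args) {
        // With the defaults (minEvictable 30 min, maxEvictable 7 h, keepAlive off), a connection
        // idle for 11 minutes is simply kept, even though the data source in this article
        // has already closed it on its side. That is the root cause of the exception.
        System.out.println(classify(11 * 60_000L, false, 30 * 60_000L, 7 * 3_600_000L, false, 0L));
    }
}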

2.4 After all connections are reclaimed, new requests re-create connections

    @Override
    public DruidPooledConnection getConnection() throws SQLException {
        return getConnection(maxWait);
    }

Open the Druid monitoring page and you can watch the pool size change. Once requests stop, after another maxEvictableIdleTimeMillis has elapsed the pool's connection count drops to 0.
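
One hedged way to see this from code rather than the console is to time two consecutive borrows after the pool has drained: the first getConnection() pays the cost of establishing a new physical connection, the second reuses it (URL and credentials below are placeholders):

import com.alibaba.druid.pool.DruidDataSource;
import java.sql.Connection;

public class FirstBorrowAfterDrainSketch {
    // Borrow one connection and report how long the borrow took plus the pool counters.
    static void timedBorrow(DruidDataSource ds) throws Exception {
        long start = System.currentTimeMillis();
        try (Connection conn = ds.getConnection()) {
            System.out.println("borrowed in " + (System.currentTimeMillis() - start) + " ms"
                    + ", poolingCount=" + ds.getPoolingCount()
                    + ", createCount=" + ds.getCreateCount());
        }
    }

    public static void main(String[] args) throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo"); // placeholder connection details
        ds.setUsername("demo");
        ds.setPassword("demo");
        ds.setMinEvictableIdleTimeMillis(300_000L);    // option (1) from the background section
        ds.setMaxEvictableIdleTimeMillis(300_000L);

        timedBorrow(ds); // first borrow pays the physical-connection cost
        timedBorrow(ds); // second borrow reuses the pooled connection
        ds.close();
    }
}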

2.5 Setting keepAlive=true

      keepAlive: true
      keepAliveBetweenTimeMillis: 120000
      timeBetweenEvictionRunsMillis: 5000 # interval between eviction runs: 5 s
      minEvictableIdleTimeMillis: 120000 # minimum idle time before a connection may be evicted: 2 min
      maxEvictableIdleTimeMillis: 420000 # maximum idle time before a connection is forcibly evicted: 7 min
With keepAlive enabled, shrink() also refills the pool back up to minIdle:
            if (keepAlive && poolingCount + activeCount < minIdle) {
                needFill = true; // recreate physical connections to keep the pool at minIdle
            }

Log in to the Druid console at http://localhost:8080/druid/index.html and you can see that the pool consistently holds minIdle connections.
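
If you cannot keep the console open, a similar observation can be made from code. The sketch below uses a placeholder URL and credentials plus the same settings as the YAML above, and samples the pool every 30 seconds; with keepAlive on, poolingCount should settle at minIdle instead of falling to 0 as in section 2.4:

import com.alibaba.druid.pool.DruidDataSource;

public class KeepAliveObserverSketch {
    public static void main(String[] args) throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo"); // placeholder connection details
        ds.setUsername("demo");
        ds.setPassword("demo");
        ds.setMinIdle(2);                              // keepAlive only holds connections up to minIdle
        ds.setKeepAlive(true);
        ds.setTimeBetweenEvictionRunsMillis(5_000L);
        ds.setKeepAliveBetweenTimeMillis(120_000L);
        ds.setMinEvictableIdleTimeMillis(120_000L);
        ds.setMaxEvictableIdleTimeMillis(420_000L);
        ds.init();

        // Sample the pool every 30 s; poolingCount should stabilize at minIdle
        // while destroyCount stays flat, matching what the console shows.
        for (int i = 0; i < 10; i++) {
            System.out.println("poolingCount=" + ds.getPoolingCount()
                    + ", destroyCount=" + ds.getDestroyCount());
            Thread.sleep(30_000L);
        }
        ds.close();
    }
}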
