Notes on a mid-to-large-scale database migration, from MySQL to PostgreSQL.

I'll skip the decision process behind moving from MySQL to PostgreSQL; this was my first time using PostgreSQL, so I'm in no position to judge it anyway. The decision was made, so here is how the migration actually went.

1. Basic facts about the data

Server: 4 CPU cores, 8 GB RAM, 1 TB disk, 8 Mbit/s network bandwidth.

Database: MySQL 5.5 Community, 492 GB of data including indexes and logs.

With less than 300 GB of disk left on the server, there was no way to run MySQL and PostgreSQL side by side on it for the migration, so I ran PostgreSQL locally and migrated the data to my own machine first.

2. Migrating with generic code

Since Java is what I know best, I decided to do the migration in Java. (To cut down the work, I chose to stand on the shoulders of giants.) I found an article, 自己动手写一个Mysql到PostgreSQL数据库迁移工具 ("write your own MySQL-to-PostgreSQL migration tool"), that looked promising, copied its code locally, and adapted and improved it slightly: tables whose primary key is an integer are migrated incrementally. The code is as follows:

package springDemo;


import java.sql.Connection;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import javax.sql.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.util.Assert;

import com.zaxxer.hikari.HikariDataSource;


public class DataTableMigration{

	private static final Logger LOG = LoggerFactory.getLogger(DataTableMigration.class);
	
    private final JdbcTemplate targetJdbc;
    private final JdbcTemplate sourceJdbc;
    private final String tableName;
    private final String primaryKey;
    private final String[] columnNamesInSourceDB;
    private final String[] columnNamesInTargetDB;

    private final Map<String, String> columnMappings;

    public DataTableMigration(DataSource sourceDataSource, String tableName, DataSource targetDataSource) throws SQLException {
    	
    	this.tableName = tableName.toLowerCase();
        
        // long bulk transfers: stretch the pool's lifetime and timeout so
        // connections aren't recycled mid-migration
        if (sourceDataSource instanceof HikariDataSource) {
            HikariDataSource hikariDataSource = (HikariDataSource) sourceDataSource;
            hikariDataSource.setMaxLifetime(86400000); // 24 hours
            hikariDataSource.setConnectionTimeout(600000); // 10 minutes
            hikariDataSource.setReadOnly(true);
        }
        if (targetDataSource instanceof HikariDataSource) {
            HikariDataSource hikariDataSource = (HikariDataSource) targetDataSource;
            hikariDataSource.setMaxLifetime(86400000); // 24 hours
            hikariDataSource.setConnectionTimeout(600000); // 10 minutes
        }
        
        this.sourceJdbc = new JdbcTemplate(sourceDataSource);
        this.targetJdbc = new JdbcTemplate(targetDataSource);
        System.out.println(sourceDataSource);
        System.out.println(targetDataSource);
        
        // look up the primary key and column lists via JDBC metadata;
        // try-with-resources returns each connection to the pool afterwards
        try (Connection srcConn = sourceDataSource.getConnection()) {
            this.primaryKey = MigrationUtils.getPrimaryKeyByTableName(srcConn, this.tableName);
            this.columnNamesInSourceDB = MigrationUtils.getColumnsByTableName(srcConn, this.tableName);
        }
        Assert.isTrue(this.columnNamesInSourceDB != null && this.columnNamesInSourceDB.length > 0,
                "can't find column info from source db for the table " + this.tableName);

        try (Connection tgtConn = targetDataSource.getConnection()) {
            this.columnNamesInTargetDB = MigrationUtils.getColumnsByTableName(tgtConn, this.tableName);
        }
        Assert.isTrue(this.columnNamesInTargetDB != null && this.columnNamesInTargetDB.length > 0,
                "can't find column info from target db for the table " + this.tableName);
        
        this.columnMappings = new HashMap<>();
    }

    protected JdbcTemplate getSourceJdbc() {
      return this.sourceJdbc;
    }

    protected JdbcTemplate getTargetJdbc() {
        return this.targetJdbc;
      }


    protected List<Map<String, Object>> queryForList(String querySql, long offset, long stepLength) {
        return getSourceJdbc().queryForList(querySql, offset, stepLength);
    }

    // build the insert parameters for one row: for each target column, look up the
    // corresponding source column (columnMappings overrides, identity by default)
    private Object[] rowToParam(Map<String, Object> row) {
        return Arrays.stream(columnNamesInTargetDB)
                .map(colInTarget -> columnMappings.getOrDefault(colInTarget, colInTarget))
                .map(row::get)
                .toArray();
    }

    protected String getInsertSQL() {
        return String.format("insert into %s (%s) values(%s) ",
                this.tableName,
                String.join(",", columnNamesInTargetDB),
                IntStream.range(0, columnNamesInTargetDB.length)
                        .mapToObj(n -> "?")
                        .collect(Collectors.joining(",")));
    }
    
    protected String getInsertSQLOnConflict() {
        return String.format("insert into %s (%s) values(%s) ON CONFLICT (%s) DO NOTHING",
                this.tableName,
                String.join(",", columnNamesInTargetDB),
                IntStream.range(0, columnNamesInTargetDB.length).mapToObj(n -> "?").collect(Collectors.joining(",")),
                this.primaryKey);
    }

    protected int getStepLength() {
        return 1000000;
    }

	// highest primary key value in the source table
	protected long getSourceMaxIndex() {
		return getSourceJdbc().queryForObject("select max(" + primaryKey + ") from " + tableName, Long.class);
	}

	// resume point: the highest primary key already in the target,
	// or (source min - 1) when the target table is still empty
	protected long getTargetMaxIndex() {
		long count = getTargetJdbc().queryForObject("select count(1) from " + tableName, Long.class);

		if (count > 0)
			count = getTargetJdbc().queryForObject("select max(" + primaryKey + ") from " + tableName, Long.class);
		else
			count = getSourceJdbc().queryForObject("select min(" + primaryKey + ") from " + tableName, Long.class) - 1;
		return count;
	}
    public void migrateIntegerIndexTable() throws Exception {

        LOG.info("start to migrate data from source db to target db");

        String sql = String.format("select %s from %s where %s > ? order by %s asc limit ?;",
        		String.join(",", columnNamesInSourceDB), this.tableName, this.primaryKey, this.primaryKey);

        long maxRecords = getSourceMaxIndex();
        long stepLength = getStepLength();
		// re-reading the target's max PK each pass means an interrupted run resumes where it stopped
		for (long offset = getTargetMaxIndex(); offset < maxRecords; offset = getTargetMaxIndex()) {
			List<Map<String, Object>> rows = queryForList(sql, offset, stepLength);
			LOG.info("get records From source");
	        getTargetJdbc().batchUpdate(getInsertSQL(),
                	rows.stream().map(this::rowToParam).collect(Collectors.toList()));
			LOG.info("moved {} records", offset);
		}
    }
    
    public void migrateIntegerIndexTableJust1Line(long id) throws Exception {

        LOG.info("start to migrate data from source db to target db");

        String sql = String.format("select %s from %s where %s = ? limit ?;",
        		String.join(",", columnNamesInSourceDB), this.tableName, this.primaryKey);

		List<Map<String, Object>> rows = queryForList(sql, id, 1);
		LOG.info("get records From source");
	    getTargetJdbc().batchUpdate(getInsertSQL(),
	        	rows.stream().map(this::rowToParam).collect(Collectors.toList()));
		LOG.info("moved {} record", id);
    }

	// total row count in the source table
	protected int getSourceTotalRecords() {
		int count = getSourceJdbc().queryForObject("select count(1) from " + tableName, Integer.class);
		LOG.info("source db has {} records", count);
		return count;
	}

	// rows already stored in the target table
	protected int getTargetTotalRecords() {
		int count = getTargetJdbc().queryForObject("select count(1) from " + tableName, Integer.class);
		LOG.info("target db has {} records", count);
		return count;
	}
    public void migrateStringIndexTable() throws SQLException {

        LOG.info("start to migrate data from source db to target db");

        String sql = String.format("select %s from %s order by %s asc limit ?, ?;",
				String.join(",", columnNamesInSourceDB), this.tableName, this.primaryKey);

        int maxRecords = getSourceTotalRecords();
        int stepLength = getStepLength();
		for (int offset = 0; offset < maxRecords; offset = offset + stepLength) {
			List<Map<String, Object>> rows = queryForList(sql, offset, stepLength);
			LOG.info("get records From source, " + rows.size());
	        getTargetJdbc().batchUpdate(getInsertSQLOnCconflict(),
                	rows.stream().map(this::rowToParam).collect(Collectors.toList()));
			LOG.info("moved {} records", offset);
		}    	
    }
    
    public void close() {
        // HikariDataSource implements Closeable; closing it shuts the whole pool down.
        // (Closing a single pooled Connection would only return it to the pool.)
        if (sourceJdbc != null && sourceJdbc.getDataSource() instanceof HikariDataSource) {
            ((HikariDataSource) sourceJdbc.getDataSource()).close();
        }
        if (targetJdbc != null && targetJdbc.getDataSource() instanceof HikariDataSource) {
            ((HikariDataSource) targetJdbc.getDataSource()).close();
        }
    }
    
    public static void main(String[] args) {
    	Config cf = new Config();

    	System.setProperty("spring.jdbc.getParameterType.ignore","true");
    	
        try {
			DataTableMigration dtmStr = new DataTableMigration(cf.sourceDataSource(), "target", cf.targetDataSource());
			dtmStr.migrateStringIndexTable();
			dtmStr.close();

			String[] tableNames = { "dailyexchange", "movingavg", "stats" };
			for (String name : tableNames) {
				DataTableMigration dtmInt = new DataTableMigration(cf.sourceDataSource(), name, cf.targetDataSource());
				dtmInt.migrateIntegerIndexTable();
				dtmInt.close();
			}

//			DataTableMigration dtmInt = new DataTableMigration(cf.sourceDataSource(), "min1", cf.targetDataSource());
//			dtmInt.migrateIntegerIndexTable();
//			dtmInt.close();
            
		} catch (Exception e) {
			e.printStackTrace();
		}
    }
}
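
The MigrationUtils and Config classes used above come from the referenced article and are not reproduced here. For completeness, here is a minimal sketch of what they need to provide, reconstructed on top of standard JDBC DatabaseMetaData and HikariCP; treat it as an assumption about their shape, not the original article's code:

package springDemo;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// reconstructed sketch: schema lookups via standard JDBC metadata
public class MigrationUtils {

	// name of the first primary-key column of the table
	public static String getPrimaryKeyByTableName(Connection conn, String tableName) throws SQLException {
		try (ResultSet rs = conn.getMetaData().getPrimaryKeys(null, null, tableName)) {
			return rs.next() ? rs.getString("COLUMN_NAME") : null;
		}
	}

	// all column names of the table
	public static String[] getColumnsByTableName(Connection conn, String tableName) throws SQLException {
		List<String> columns = new ArrayList<>();
		try (ResultSet rs = conn.getMetaData().getColumns(null, null, tableName, null)) {
			while (rs.next()) {
				columns.add(rs.getString("COLUMN_NAME"));
			}
		}
		return columns.toArray(new String[0]);
	}
}

Config only has to hand main() the two pooled data sources; all connection details below are placeholders:

package springDemo;

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariDataSource;

// reconstructed sketch: one Hikari pool per database (values are placeholders)
public class Config {

	public DataSource sourceDataSource() {
		HikariDataSource ds = new HikariDataSource();
		ds.setJdbcUrl("jdbc:mysql://source-host/cf_stock?useCompression=true");
		ds.setUsername("user");
		ds.setPassword("password");
		return ds;
	}

	public DataSource targetDataSource() {
		HikariDataSource ds = new HikariDataSource();
		ds.setJdbcUrl("jdbc:postgresql://localhost/cf_stock");
		ds.setUsername("user");
		ds.setPassword("password");
		return ds;
	}
}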

The first few tables were small, just a few million rows each, and were done within a few hours. Then came the largest table, min1, at 3.4 billion rows, where this speed was no longer acceptable. Since every round trip costs time, the obvious move was to make each batch as large as possible. After raising the batch size to 1,000,000 rows (the maximum being 1,048,576), the speed improved somewhat, reaching about 10 minutes per million rows:

HikariDataSource (null)
HikariDataSource (null)
2023-04-12T07:31:49.370+08:00  INFO   --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2023-04-12T07:31:50.701+08:00  INFO   --- [           main] com.zaxxer.hikari.pool.HikariPool        : HikariPool-1 - Added connection com.mysql.cj.jdbc.ConnectionImpl@3cce5371
2023-04-12T07:31:50.704+08:00  INFO   --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2023-04-12T07:31:51.056+08:00  INFO   --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-2 - Starting...
2023-04-12T07:31:51.148+08:00  INFO   --- [           main] com.zaxxer.hikari.pool.HikariPool        : HikariPool-2 - Added connection org.postgresql.jdbc.PgConnection@19b93fa8
2023-04-12T07:31:51.148+08:00  INFO   --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-2 - Start completed.
2023-04-12T07:31:51.164+08:00  INFO   --- [           main] springDemo.DataTableMigration            : start to migrate data from source db to target db
2023-04-12T07:40:24.912+08:00  WARN   --- [           main] com.zaxxer.hikari.pool.PoolBase          : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@3016fd5e (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
2023-04-12T07:40:29.923+08:00  WARN   --- [           main] com.zaxxer.hikari.pool.PoolBase          : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@6c45ee6e (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
2023-04-12T07:40:34.928+08:00  WARN   --- [           main] com.zaxxer.hikari.pool.PoolBase          : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@6b3e12b5 (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
2023-04-12T07:40:39.933+08:00  WARN   --- [           main] com.zaxxer.hikari.pool.PoolBase          : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@5aac4250 (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
2023-04-12T07:40:44.936+08:00  WARN   --- [           main] com.zaxxer.hikari.pool.PoolBase          : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@1338fb5 (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
2023-04-12T07:40:49.938+08:00  WARN   --- [           main] com.zaxxer.hikari.pool.PoolBase          : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@42463763 (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
2023-04-12T07:40:54.941+08:00  WARN   --- [           main] com.zaxxer.hikari.pool.PoolBase          : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@59f63e24 (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
2023-04-12T07:40:59.947+08:00  WARN   --- [           main] com.zaxxer.hikari.pool.PoolBase          : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@7ca33c24 (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
2023-04-12T07:41:20.733+08:00  INFO   --- [           main] springDemo.DataTableMigration            : get records From source
2023-04-12T07:41:33.743+08:00  INFO   --- [           main] springDemo.DataTableMigration            : moved 2990509187 records

At this rate, moving all 3.4 billion rows would take roughly 24 days ((3.4 billion ÷ 1 million) × 10 minutes ÷ 1,440 minutes per day ≈ 23.6 days), which was still unacceptable. From a related article (which I can no longer find), I learned that generic migration code like this spends a large share of its time building the List<Map<String, Object>> row mappings.
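
For context, this is roughly what happens inside queryForList for every single row (a simplified stand-in for Spring's row mapping, not its actual source): each of the million rows in a batch is materialized as its own Map, with a string-keyed hash insertion and a boxed value per column, only to be unpacked again into insert parameters by rowToParam.

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.LinkedHashMap;
import java.util.Map;

// simplified illustration of the per-row cost of the generic path
class RowToMap {
	static Map<String, Object> mapRow(ResultSet rs) throws SQLException {
		ResultSetMetaData meta = rs.getMetaData();
		int columnCount = meta.getColumnCount();
		Map<String, Object> row = new LinkedHashMap<>(columnCount);
		for (int i = 1; i <= columnCount; i++) {
			// one hash insert plus boxing for every column of every row
			row.put(meta.getColumnLabel(i), rs.getObject(i));
		}
		return row;
	}
}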

3. Migrating with dedicated code

Being lazy and reusing generic code clearly wasn't going to work for the big table, so there was no way around writing dedicated migration code. Credit where due: ChatGPT. The code is as follows:

package pack;

import java.sql.*;
import java.time.LocalTime;
import java.io.InputStream;
import java.util.Properties;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class MysqlToPostgres {

	/*
	 * To migrate a MySQL table into PostgreSQL with batch inserts, you build a
	 * Java program on top of JDBC. The steps are:
	 * 
	 * 1. Connect to both the MySQL and the PostgreSQL database via JDBC.
	 * 
	 * 2. Select the table to be migrated from MySQL.
	 * 
	 * 3. Read its rows out of MySQL in batches.
	 * 
	 * 4. Batch-insert the rows into the corresponding PostgreSQL table.
	 * 
	 */
	
	// note: the logger is keyed to Min1.class, which is why the sample output below shows "pack.Min1"
	private static final Logger logger = Logger.getLogger(Min1.class);
	private Connection mysqlConn = null;
	private Connection pgConn = null;

	public static void main(String[] args) {

		System.out.println(System.getProperty("java.class.path"));
		PropertyConfigurator.configure("log4j.properties");

		MysqlToPostgres m2p = new MysqlToPostgres();
		m2p.init();
		long flag = m2p.getTargetMaxIndex();
		long end = m2p.getSourceMaxIndex();

		logger.info("source line count:" + end);

		// resume from the highest recordID already in the target and loop until caught up
		for (; flag < end; flag = m2p.getTargetMaxIndex()) {
			logger.info("target line count:" + flag);
			m2p.migrate(flag);
		}

		m2p.uninit();
	}

	public void init() {
		Properties props = new Properties();
		String filename = "consts.properties";

		// read connection settings from a properties file on the classpath
		try (InputStream input = MysqlToPostgres.class.getClassLoader().getResourceAsStream(filename)) {
			if (input == null) {
				System.out.println("Sorry, unable to find " + filename);
				return;
			}

			props.load(input);
			String sourceIP = props.getProperty("sourceIP");
			String targetIP = props.getProperty("targetIP");
			String username = props.getProperty("DBUserName");
			String password = props.getProperty("DBPassword");
			System.out.println(getMinute() + " " + username);

			// connect to MySQL (legacy driver class name; Connector/J 8 uses com.mysql.cj.jdbc.Driver)
			Class.forName("com.mysql.jdbc.Driver");
			mysqlConn = DriverManager.getConnection("jdbc:mysql://" + sourceIP + "/cf_stock?useCompression=true", username, password);

			// connect to PostgreSQL
			Class.forName("org.postgresql.Driver");
			pgConn = DriverManager.getConnection("jdbc:postgresql://" + targetIP + "/cf_stock", username, password);
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
	
	protected long getSourceMaxIndex() {
		long count = 0;
		try {
			Statement mysqlStmt = mysqlConn.createStatement();

			// the largest recordID in the source table
			ResultSet mysqlRs = mysqlStmt.executeQuery("select max(recordID) from min1;");
			if (mysqlRs.next()) {
				count = mysqlRs.getLong(1);
			}
			mysqlStmt.close();
		} catch (Exception e) {
			e.printStackTrace();
		}
		return count;
	}
	
	protected long getTargetMaxIndex() {
		long count = 0;

		try {
			Statement pgStmt = pgConn.createStatement();

			// the largest recordID already migrated to the target table;
			// max() over an empty table yields NULL, which getLong maps to 0
			ResultSet pgRs = pgStmt.executeQuery("select max(recordID) from min1;");
			if (pgRs.next()) {
				count = pgRs.getLong(1);
			}
			pgStmt.close();
		} catch (Exception e) {
			e.printStackTrace();
		}
		return count;
	}

	public void migrate(long flag) {
		PreparedStatement pgStmt = null;
		PreparedStatement mysqlStmt = null;

		try {
			String sql = "INSERT INTO min1 "
					+ "(recordID, dayRecordID, targetID, date, minute, "
					+ "open, high, low, close, average, shareVolume, moneyVolume, openInterest) "
					+ "VALUES (?,?,?,?,?, ?,?,?,?, ?,?,?,?) "; 
			pgStmt = pgConn.prepareStatement(sql);
			
			// read the next batch of rows from MySQL, starting just past the watermark
			String mysqlSql = "select * from min1 where recordID > ? order by recordID asc limit 1000000;";
			mysqlStmt = mysqlConn.prepareStatement(mysqlSql);
			mysqlStmt.setLong(1, flag);
			ResultSet mysqlRs = mysqlStmt.executeQuery();
			logger.info(getMinute()+" get records from mysql.");

			int i = 0;
			while (mysqlRs.next()) {
				Min1 m1 = new Min1(mysqlRs);

				// queue the row for the PostgreSQL batch insert
				pgStmt.setLong		(1, m1.recordID);
				pgStmt.setLong		(2, m1.dayRecordID);
				pgStmt.setString	(3, m1.targetID);
				pgStmt.setDate		(4, m1.date);
				pgStmt.setShort		(5, m1.minute);
				
				pgStmt.setFloat	(6, m1.open);
				pgStmt.setFloat	(7, m1.high);
				pgStmt.setFloat	(8, m1.low);
				pgStmt.setFloat	(9, m1.close);
			
				pgStmt.setFloat	(10, m1.average);
				pgStmt.setLong	(11, m1.shareVolume);
				pgStmt.setLong	(12, m1.moneyVolume);
				pgStmt.setLong	(13, m1.openInterest);
								
				pgStmt.addBatch();
				
				i++;
				if (i % 500000 == 0) {
					System.out.println(i);
				}
			}

			// flush the accumulated batch to PostgreSQL
			logger.info(getMinute() + " combine all sql into a batch.");
			pgStmt.executeBatch();
			logger.info(getMinute() + " after excute batch.");
			pgStmt.clearBatch();

			mysqlRs.close();
			mysqlStmt.close();
			pgStmt.close();
		
		} catch (Exception e) {
			e.printStackTrace();
		}
	}

	public void uninit() {
		try {
			mysqlConn.close();
			pgConn.close();
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
	
	// wall-clock time as H:M:S (no zero padding), used for coarse progress timing
	public String getMinute() {
		LocalTime now = LocalTime.now();
		return "" + now.getHour() + ":" + now.getMinute() + ":" + now.getSecond();
	}
}
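
Two things referenced above are not shown in the article. First, init() expects a consts.properties file on the classpath; a minimal example might look like this (all values are placeholders):

# consts.properties -- all values are placeholders
sourceIP=192.0.2.10
targetIP=127.0.0.1
DBUserName=migrator
DBPassword=changeme

Second, Min1 is a plain row holder. The actual class isn't included in the article; the following reconstruction is inferred from the setLong/setString/setDate/setShort/setFloat calls in migrate():

package pack;

import java.sql.Date;
import java.sql.ResultSet;
import java.sql.SQLException;

// reconstructed sketch of the row holder; field types inferred from migrate()
public class Min1 {
	long recordID;
	long dayRecordID;
	String targetID;
	Date date;
	short minute;
	float open, high, low, close, average;
	long shareVolume, moneyVolume, openInterest;

	Min1(ResultSet rs) throws SQLException {
		recordID = rs.getLong("recordID");
		dayRecordID = rs.getLong("dayRecordID");
		targetID = rs.getString("targetID");
		date = rs.getDate("date");
		minute = rs.getShort("minute");
		open = rs.getFloat("open");
		high = rs.getFloat("high");
		low = rs.getFloat("low");
		close = rs.getFloat("close");
		average = rs.getFloat("average");
		shareVolume = rs.getLong("shareVolume");
		moneyVolume = rs.getLong("moneyVolume");
		openInterest = rs.getLong("openInterest");
	}
}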

Running it, the throughput was acceptable: about 2 minutes per million rows, which works out to roughly 5 days for the whole table (3,474 million rows × 2 minutes ≈ 6,948 minutes ≈ 4.8 days):

[main] INFO pack.Min1 - source line count:3474392405
[main] INFO pack.Min1 - target line count:2991509187
[main] INFO pack.Min1 - 7:44:14 get records from mysql.
500000
1000000
[main] INFO pack.Min1 - 7:44:15 combine all sql into a batch.
[main] INFO pack.Min1 - 7:44:29 after excute batch.
[main] INFO pack.Min1 - target line count:2992509187
[main] INFO pack.Min1 - 7:45:54 get records from mysql.
500000
1000000
[main] INFO pack.Min1 - 7:45:56 combine all sql into a batch.
[main] INFO pack.Min1 - 7:46:10 after excute batch.
[main] INFO pack.Min1 - target line count:2993509187

Done.

