Contents
I. Problem List
II. Solutions
III. References
IV. Configuration Details
V. Database Operations
I. Problem List
1. Problem 1: the console window opened by startup.bat flashes up and closes immediately.
2. Problem 2: after startup, the log file reports the following error:
ERROR com.alibaba.otter.canal.deployer.CanalLauncher - ## Something goes wrong when starting up the canal Server:
java.lang.IllegalStateException: Extension instance(name: kafka, class: interface com.alibaba.otter.canal.connector.core.spi.CanalMQProducer) could not be instantiated: class could not be found
at com.alibaba.otter.canal.connector.core.spi.ExtensionLoader.createExtension(ExtensionLoader.java:169) ~[connector.core-1.1.7-SNAPSHOT.jar:na]
at com.alibaba.otter.canal.connector.core.spi.ExtensionLoader.getExtension(ExtensionLoader.java:121) ~[connector.core-1.1.7-SNAPSHOT.jar:na]
at com.alibaba.otter.canal.deployer.CanalStarter.start(CanalStarter.java:69) ~[canal.deployer-1.1.7-SNAPSHOT.jar:na]
at com.alibaba.otter.canal.deployer.CanalLauncher.main(CanalLauncher.java:124) ~[canal.deployer-1.1.7-SNAPSHOT.jar:na]
3. Problem 3: the following error is reported on startup:
2019-04-22 18:45:04.182 [destination = example , address = /127.0.0.1:3306 , EventParser] ERROR c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - dump address /127.0.0.1:3306 has an error, retrying. caused by
java.io.IOException: Received error packet: errno = 1236, sqlstate = HY000 errmsg = Could not find first log file name in binary log index file
at com.alibaba.otter.canal.parse.inbound.mysql.dbsync.DirectLogFetcher.fetch(DirectLogFetcher.java:102) ~[canal.parse-1.1.0.jar:na]
at com.alibaba.otter.canal.parse.inbound.mysql.MysqlConnection.dump(MysqlConnection.java:213) ~[canal.parse-1.1.0.jar:na]
at com.alibaba.otter.canal.parse.inbound.AbstractEventParser$3.run(AbstractEventParser.java:248) ~[canal.parse-1.1.0.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_202]
II. Solutions
1. Solution to Problem 1: edit startup.bat and append a pause line at the very end, so the window stays open and the actual error message can be read.
2. Solution to Problem 2: copy the jar packages from the plugin directory into the lib directory, so the kafka connector class can be found on the classpath.
3. Solution to Problem 3: delete the meta.dat file generated under the instance directory in conf (for the default instance, conf/example/meta.dat), then restart; the binlog position stored there points at a log file that no longer exists on the server, which is what errno 1236 complains about. The corresponding commands are sketched below this list.
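For reference, the three fixes might look as follows on Windows (a sketch assuming the default layout of the Canal distribution and the default example instance; adjust paths to your installation):
REM Problem 1: keep the console window open by appending pause to the startup script
echo pause >> bin\startup.bat
REM Problem 2: put the connector jars from plugin\ onto the main classpath
copy plugin\*.jar lib\
REM Problem 3: discard the stale binlog position, then restart
del conf\example\meta.dat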
III. References
1. Canal deployment and installation guide: https://blog.csdn.net/qq_40309183/article/details/124497076
2. Canal download page: https://github.com/alibaba/canal/releases
IV. Configuration Details
1. The conf\example\instance.properties file
Usually only the following parameters need to be changed; everything else can keep its default value:
canal.instance.master.address=127.0.0.1:3306 // replace with your own MySQL address
canal.instance.defaultDatabaseName=test // replace with your own database name
#################################################
## mysql serverId , v1.0.26+ will autoGen
# canal.instance.mysql.slaveId=0 // each instance masquerades as a MySQL slave, so this id must be unique from the upstream MySQL server's point of view; the same instance running on different Canal servers of one cluster, however, should keep the same slaveId
# enable gtid use true/false
canal.instance.gtidon=false
# position info
canal.instance.master.address=127.0.0.1:3306 // address and port of the MySQL server to connect to
canal.instance.master.journal.name= // binlog file to start reading from
canal.instance.master.position= // offset within that binlog file to start from
canal.instance.master.timestamp= // timestamp to start reading from
canal.instance.master.gtid=
# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=
# table meta tsdb info
canal.instance.tsdb.enable=true // added in v1.0.25; whether to keep a time-series record of table meta versions
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb // added in v1.0.25; where the table meta time-series data is stored, defaults to the instance directory
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal
#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=
# username/password
canal.instance.dbUsername=canal // database username
canal.instance.dbPassword=canal // database password
canal.instance.connectionCharset=UTF-8 // charset used when decoding database content
canal.instance.defaultDatabaseName=test // default schema for the connection
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==
# table regex
canal.instance.filter.regex=.*\\..* // tables whose changes canal parses, as a Perl regular expression (see the examples after this listing)
# table black regex
canal.instance.filter.black.regex= // tables matching this blacklist regex will not be parsed or delivered
#################################################
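For reference, common values for canal.instance.filter.regex, as listed in the official Canal wiki (multiple rules are comma-separated, and the double backslash escapes the dot between schema and table):
.*\\..* // all tables in all schemas
canal\\..* // all tables under the canal schema
canal\\.canal.* // tables under canal whose names start with canal
canal.test1 // one specific table
canal\\..*,mysql.test1,mysql.test2 // several rules combined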
2. The conf\canal.properties file
Usually only the following parameter needs to be changed:
canal.serverMode = tcp // replace with the middleware you actually use: tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
Then fill in the address settings for that middleware in the matching section at the bottom of the file. (For serverMode = tcp, a minimal client sketch is given after the listing below.)
#################################################
######### common argument #############
#################################################
canal.id = 1 # unique identifier of this canal server instance
canal.ip = # local IP the canal server binds to; if left empty, a local IP is picked automatically
canal.port = 11111 # port of the canal server's TCP socket service
canal.metrics.pull.port=11112
canal.zkServers = # connection string of the zookeeper cluster used by the canal server
# flush data to zk
canal.zookeeper.flush.period = 1000 # how often canal flushes its state to zookeeper, in milliseconds
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = tcp # server mode
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir} # directory where canal persists its state to file
canal.file.flush.period = 1000 # how often canal flushes its state to file, in milliseconds
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384 # number of records the in-memory store can buffer; must be a power of 2
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024 # unit size of one buffered record, default 1KB; combined with buffer.size this determines the total memory use
## memory store gets mode, either MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE # caching mode of the in-memory store
1. ITEMSIZE: limited by buffer.size alone, i.e. by record count only
2. MEMSIZE: limited by buffer.size * buffer.memunit, i.e. by the total bytes cached
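For example, with the defaults above, MEMSIZE mode caps the store at 16384 * 1024 bytes = 16 MB.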
canal.instance.memory.rawEntry = true
## detecting config
canal.instance.detecting.enable = false # whether to enable heartbeat checks
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1 # heartbeat check SQL
canal.instance.detecting.interval.time = 3 # heartbeat check interval, in seconds
canal.instance.detecting.retry.threshold = 3 # number of failed heartbeat checks tolerated before acting
## Important: choose interval.time * retry.threshold with the database's typical failure-recovery time (ask your DBA) in mind.
## Too short, and cluster roles will flap between servers; too long, and the liveness check loses its point: the cluster becomes less sensitive and consumers are more likely to see broken connections.
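## With the defaults above, that is 3 s * 3 retries = roughly 9 seconds before the failover logic kicks in.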
canal.instance.detecting.heartbeatHaEnable = false # whether to fail over to another MySQL server automatically after heartbeat checks fail
# i.e. once heartbeat failures exceed the threshold and this is set to true, canal automatically connects to the MySQL standby to fetch binlog data
# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024 # maximum transaction size parsed as one unit; a transaction longer than this may be split into several commits to the canal store, so full transaction visibility is no longer guaranteed
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60 # when canal fails over to a new MySQL server, how far back (in seconds) to rewind when locating the binlog position on the new server
Note: the MySQL primary and standby may have replication lag or unsynchronized clocks, so canal rewinds by this interval to make sure no data is lost.
# network config
canal.instance.network.receiveBufferSize = 16384 # network parameter, SocketOptions.SO_RCVBUF
canal.instance.network.sendBufferSize = 16384 # network parameter, SocketOptions.SO_SNDBUF
canal.instance.network.soTimeout = 30 # network parameter, SocketOptions.SO_TIMEOUT
# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false # whether to ignore DCL query statements, e.g. grant / create user
canal.instance.filter.query.dml = false # whether to ignore DML query statements such as insert/update/delete (MySQL 5.6 ROW mode may carry statement-style query records)
canal.instance.filter.query.ddl = false # whether to ignore DDL query statements such as create table / alter table / drop table / rename table / create index / drop index (only table-level DDL is currently covered; create database/trigger/procedure are treated as DCL for now)
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB
# binlog ddl isolation
canal.instance.get.ddl.isolation = false # whether to deliver DDL statements in isolation; when enabled, each batch returns at most one DDL entry, never mixed with DML (used by otter DDL sync)
# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256
# table meta tsdb info // time-series versioning of table meta
canal.instance.tsdb.enable=true
canal.instance.tsdb.dir=${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url=jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername=canal
canal.instance.tsdb.dbPassword=canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval=24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire=360
# rds oss binlog account
canal.instance.rds.accesskey =
canal.instance.rds.secretkey =
#################################################
######### destinations #############
#################################################
canal.destinations= example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true # enable automatic instance scanning
If set to true, changes under canal.conf.dir are picked up automatically:
a. a new instance directory appears: its config is loaded and, when lazy is true, the instance is started automatically
b. an instance directory is removed: its config is unloaded and, if running, the instance is stopped
c. an instance.properties file changes: the instance config is reloaded and, if running, the instance is restarted
canal.auto.scan.interval = 5 # interval of the automatic scan, in seconds
canal.instance.tsdb.spring.xml=classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml=classpath:spring/tsdb/mysql-tsdb.xml
canal.instance.global.mode = spring # instance management mode; spring is required at production level
canal.instance.global.lazy = false # global lazy mode
#canal.instance.global.manager.address = 127.0.0.1:1099 # connection info for the global manager-based configuration
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml # component file for the global spring-based configuration
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml
(memory-instance keeps all state in memory and loses the parse position on restart; file-instance, used here, persists the position to a local file; default-instance persists it to zookeeper and is the usual choice for HA clusters)
##################################################
######### MQ Properties #############
##################################################
# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=
canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8
##################################################
######### Kafka #############
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = 1
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0
kafka.kerberos.enable = false
kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf
# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \\n username=\"alice\" \\npassword=\"alice-secret\";
# kafka.sasl.mechanism = SCRAM-SHA-512
# kafka.security.protocol = SASL_PLAINTEXT
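To verify that messages are actually reaching Kafka, the standard console consumer can be run on the broker host (a sketch assuming a local broker and the default topic name example from instance.properties):
kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic example --from-beginning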
##################################################
######### RocketMQ #############
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =
##################################################
######### RabbitMQ #############
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.deliveryMode =
##################################################
######### Pulsar #############
##################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =
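As mentioned above, when canal.serverMode = tcp consumers pull data from canal.port directly via the Canal client library. A minimal sketch, assuming the com.alibaba.otter:canal.client dependency, a local server, and the default example destination (error handling omitted):

import java.net.InetSocketAddress;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.Message;

public class SimpleCanalClient {
    public static void main(String[] args) throws InterruptedException {
        // connect to the canal server configured above (canal.ip / canal.port)
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
        connector.connect();
        connector.subscribe(".*\\..*"); // same syntax as canal.instance.filter.regex
        while (true) {
            Message message = connector.getWithoutAck(100); // fetch up to 100 entries
            long batchId = message.getId();
            if (batchId == -1 || message.getEntries().isEmpty()) {
                Thread.sleep(1000); // nothing new, poll again shortly
            } else {
                message.getEntries().forEach(entry ->
                        System.out.println(entry.getHeader().getSchemaName() + "."
                                + entry.getHeader().getTableName() + " : " + entry.getEntryType()));
                connector.ack(batchId); // confirm this batch was consumed
            }
        }
    }
}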
V. Database Operations
1. Check whether binlog is enabled. If the Value column in the output is ON, binlog is enabled; if it is OFF, it is not:
SHOW VARIABLES LIKE 'log_bin';
2. Check the binlog status. The File column of the output is the current binlog file name and the Position column is the current offset within it; these are exactly the values that canal.instance.master.journal.name and canal.instance.master.position refer to:
SHOW MASTER STATUS;
3. Check the binlog mode. There are three modes: row, statement, and mixed; confirm that it is row:
SHOW VARIABLES LIKE '%binlog_format%';
Note: inspecting binlog status and contents requires elevated privileges (REPLICATION CLIENT or SUPER).
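If binlog is off or not in row mode, it can be enabled in the MySQL server configuration (a sketch for the [mysqld] section of my.cnf / my.ini; a server restart is required, and server-id must be unique within the replication topology):
[mysqld]
log-bin=mysql-bin # enable binlog; files are named mysql-bin.NNNNNN
binlog-format=ROW # canal requires row mode
server-id=1 # any unique, non-zero id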
4. Create the canal user and grant it the privileges it needs:
-- create the canal user
CREATE USER canal IDENTIFIED BY 'canal';
-- grant the replication privileges canal needs
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- reload the privilege tables
FLUSH PRIVILEGES;
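The grants can then be verified with:
SHOW GRANTS FOR 'canal'@'%';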