Common Kafka Issues


Unable to connect to Kafka; the following error is thrown:

2024-12-31 21:04:59.505 ERROR 35092 --- [ad | producer-1] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='{"id":null,"payerName":"payer756","payerAcc":"payer_acc756","payeeName":"payee756","payeeAcc":"payee...' to topic Fraud_acc:

org.apache.kafka.common.KafkaException: Producer is closed forcefully.
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:760) [kafka-clients-3.0.2.jar:na]
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:747) [kafka-clients-3.0.2.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:283) ~[kafka-clients-3.0.2.jar:na]
	at java.lang.Thread.run(Thread.java:750) ~[na:1.8.0_351]

(The identical exception and stack trace repeat for every buffered record, payer756 through payer761, as the accumulator aborts all incomplete batches.)
Solution

Set the parameter below in server.properties: the broker was advertising localhost, so connections from other hosts were lost.
(screenshot of the server.properties change)
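The screenshot is not reproduced here. The setting in question is the broker's advertised listener; a minimal server.properties sketch, assuming the broker's LAN address is 192.168.1.112 (the address the producer config below points at):

```properties
# Accept connections on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# Advertise an address clients can actually reach.
# With the default localhost, remote clients resolve the broker
# to 127.0.0.1 and the connection is lost.
advertised.listeners=PLAINTEXT://192.168.1.112:9092
```

Restart the broker after changing this file.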

2025-01-01 09:18:04.991  WARN 6240 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-my-group-1, groupId=my-group] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2025-01-01 09:18:05.736  INFO 6240 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2025-01-01 09:18:05.736  INFO 6240 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2025-01-01 09:18:05.737  INFO 6240 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 1 ms
2025-01-01 09:18:05.843  INFO 6240 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
	acks = -1
	batch.size = 16384
	bootstrap.servers = [192.168.1.112:9092]
	buffer.memory = 33554432
	client.dns.lookup = use_all_dns_ips
	client.id = producer-1
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = true
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metadata.max.idle.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

2025-01-01 09:18:05.862  INFO 6240 --- [nio-8080-exec-1] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-1] Instantiated an idempotent producer.
2025-01-01 09:18:05.882  INFO 6240 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 3.0.2
2025-01-01 09:18:05.882  INFO 6240 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: 25b1aea02e37da14
2025-01-01 09:18:05.882  INFO 6240 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1735694285881
2025-01-01 09:18:06.127  INFO 6240 --- [ad | producer-1] org.apache.kafka.clients.Metadata        : [Producer clientId=producer-1] Resetting the last seen epoch of partition my-topic-0 to 0 since the associated topicId changed from null to ZiipuoTKS22oBX6HbBpMbQ
2025-01-01 09:18:06.128  INFO 6240 --- [ad | producer-1] org.apache.kafka.clients.Metadata        : [Producer clientId=producer-1] Cluster ID: DioEcCfQQNi6Ea50_-07Ag
2025-01-01 09:18:06.160  INFO 6240 --- [ad | producer-1] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-1] ProducerId set to 16 with epoch 0
2025-01-01 09:18:08.127  WARN 6240 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-my-group-1, groupId=my-group] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

Kafka consumer error

2025-01-01 17:04:38.425 ERROR 38716 --- [. Out (20/24)#0] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-my-group-49, groupId=my-group] Offset commit failed on partition FraudAcc-0 at offset 25: The coordinator is not aware of this member.
2025-01-01 17:04:38.425  INFO 38716 --- [. Out (20/24)#0] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-my-group-49, groupId=my-group] OffsetCommit failed with Generation{generationId=-1, memberId='', protocol='null'}: The coordinator is not aware of this member.
2025-01-01 17:04:38.425  INFO 38716 --- [. Out (20/24)#0] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-my-group-49, groupId=my-group] Resetting generation due to: encountered UNKNOWN_MEMBER_ID from OFFSET_COMMIT response
2025-01-01 17:04:38.425  INFO 38716 --- [. Out (20/24)#0] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-my-group-49, groupId=my-group] Request joining group due to: encountered UNKNOWN_MEMBER_ID from OFFSET_COMMIT response
2025-01-01 17:04:38.425  WARN 38716 --- [. Out (20/24)#0] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-my-group-49, groupId=my-group] Asynchronous auto-commit of offsets {FraudAcc-0=OffsetAndMetadata{offset=25, leaderEpoch=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.

Solution

From the log, the root cause is that the consumer tried to commit offsets, but the coordinator no longer recognizes the member. This typically happens in the following situations:

  1. Consumer group rebalance: when consumers join or leave the group, Kafka triggers a rebalance. During it, every consumer temporarily loses ownership of its partitions and must rejoin the group to receive a new assignment. A consumer that fails to rejoin correctly after the rebalance cannot commit offsets.

  2. max.poll.interval.ms exceeded: this setting bounds how long a consumer may spend processing the records returned by one poll() call before it must call poll() again. If processing takes longer, the consumer is considered dead and a rebalance is triggered. This usually means message handling is too slow, or the consumer is stuck and never gets back to poll().

  3. session.timeout.ms exceeded: this is the liveness check based on heartbeats, which the consumer sends from a background thread. If the broker receives no heartbeat within this window, it also declares the consumer dead and triggers a rebalance.

  4. max.poll.records too high: if max.poll.records is large, each poll() call can return a big batch of records, lengthening the processing time per batch and making a max.poll.interval.ms timeout more likely.
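The interplay of these timeouts can be sketched as raw consumer client settings (the values are illustrative, not recommendations):

```properties
# Background heartbeat thread: send several heartbeats per session window
heartbeat.interval.ms=3000
# Broker declares the member dead if no heartbeat arrives in this window
session.timeout.ms=10000
# Separate limit: maximum gap between successive poll() calls
max.poll.interval.ms=300000
# Cap on the number of records returned by one poll()
max.poll.records=500
```

Kafka's documentation suggests keeping heartbeat.interval.ms at no more than one third of session.timeout.ms.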

Suggested fixes

You can address the problem by adjusting the following settings:

  • Increase max.poll.interval.ms: if the consumer legitimately needs more time to process a batch, raise this value. The default is 5 minutes (300,000 ms). In Spring Boot, raw client settings pass through the consumer properties map:
    spring.kafka.consumer.properties.max.poll.interval.ms=600000  # e.g. 10 minutes
    
  • Reduce max.poll.records: returning fewer records per poll() call shrinks each batch's processing time, so the consumer finishes sooner and gets back to poll():
    spring.kafka.consumer.max-poll-records=100  # adjust to your workload
    
  • Optimize the processing logic: keep per-record work as fast as possible and avoid long blocking operations such as database queries or network calls; consider asynchronous processing or other ways to speed things up.
  • Harden the consumer: make sure it cannot hang or crash on unexpected input. Proper exception handling and monitoring help you detect and resolve problems quickly.
  • Use distinct consumer group IDs: confirm that each application uses its own group ID, unless the instances are genuinely meant to share one group and subscribe to the same topics.
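Putting the suggestions together, a hedged application.properties sketch for the Spring Boot consumer (the group ID and values are placeholders to adapt):

```properties
spring.kafka.bootstrap-servers=192.168.1.112:9092
spring.kafka.consumer.group-id=my-group
# Fewer records per poll() shortens each processing cycle
spring.kafka.consumer.max-poll-records=100
# Raw client settings go through the consumer properties map
spring.kafka.consumer.properties.max.poll.interval.ms=600000
spring.kafka.consumer.properties.session.timeout.ms=10000
```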

Log analysis

The log shows consumer consumer-my-group-49 attempting to commit an offset, but the coordinator does not recognize the member, so the commit fails. The consumer then resets its generation and requests to rejoin the group. The warning that follows names the core problem: the gap between two poll() calls exceeded max.poll.interval.ms, which usually means message processing took too long.
