Configuring SSL for encryption and authentication in Kafka 2.x


Background: I went through a lot of material, but it was all over the place and I still couldn't get it working, so I finally decided to follow the official documentation.

Official guide: Apache Kafka

Prerequisite knowledge: keytool and openssl.

Reference for keytool: keytool的使用 - CSDN博客

Reference for openssl: openssl常用命令大全_openssl命令参数大全 - CSDN博客

For now we will only look at the SSL security mechanism.

Apache Kafka allows clients to connect over SSL. SSL is disabled by default but can be turned on as needed.

1.1 Generate SSL keys and certificates for each Kafka broker

The first step in deploying one or more SSL-enabled brokers is to generate a key and certificate for each machine in the cluster. You can use Java's keytool utility to accomplish this task. We will initially generate the key into a temporary keystore so that it can be exported and signed with the CA later.

keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA

You need to specify two parameters in the command above:

  1. keystore: the keystore file that stores the certificate. The keystore file contains the private key of the certificate, so it needs to be kept securely. Here it is server.keystore.jks.
  2. validity: the validity period of the certificate, in days. Here it is 700 days.
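For reference, the same key can also be generated non-interactively. This is only a sketch, assuming the test1234 password used later in this article and adding a SAN entry for localhost, which the default hostname verification in Kafka 2.x expects:

keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA -storepass test1234 -keypass test1234 -dname "CN=localhost" -ext SAN=DNS:localhost,IP:127.0.0.1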

You can see that the corresponding files have been generated in the directory.

You can then run the following command to verify the contents of the generated certificate:

keytool -list -v -keystore server.keystore.jks

1.2 Create your own CA

After the first step, each machine in the cluster has a public/private key pair and a certificate that identifies the machine. However, that certificate is unsigned, which means an attacker could create such a certificate to impersonate any machine.

It is therefore important to prevent forged certificates by signing the certificate of every machine in the cluster. A certificate authority (CA) is responsible for signing certificates. A CA works like a government that issues passports: the government stamps (signs) each passport so that it is hard to forge, and other governments verify the stamp to confirm that the passport is authentic. In the same way, the CA signs certificates, and cryptography guarantees that a signed certificate is computationally hard to forge. As long as the CA is a genuine and trusted authority, clients can therefore be highly confident that they are connecting to the real machines.

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
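For reference, the CA can also be created non-interactively. This is only a sketch; the subject name is a placeholder, and the password 123456 matches the CA password used in the signing step later in this article:

openssl req -new -x509 -keyout ca-key -out ca-cert -days 700 -subj "/CN=Kafka-Test-CA" -passout pass:123456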

The generated CA is simply a public/private key pair plus a certificate, and its purpose is to sign other certificates.
The next step is to add the generated CA to the truststore so that the brokers and clients can trust this CA; here it is imported into the broker truststore:

keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert

Unlike the keystore in step 1, which stores each machine's own identity, a truststore stores all the certificates that the machine should trust. Importing a certificate into a truststore also means trusting every certificate signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all the passports (certificates) it has issued. This property is called a chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster: you can sign all certificates in the cluster with a single CA and have all machines share the same truststore that trusts that CA. That way, all machines can authenticate all other machines.

1.3 Sign the certificates

The next step is to sign all certificates generated in step 1 with the CA generated in step 2. The CA certificate also needs to be imported into the client truststore, and then a certificate signing request is exported from the broker keystore:

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file

Then sign it with the CA:

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 700 -CAcreateserial -passin pass:{ca-password}

Finally, you need to import both the CA certificate and the signed certificate into the keystore:

keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert

keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed

The parameters are defined as follows:

  1. keystore: the location of the keystore
  2. ca-cert: the certificate of the CA
  3. ca-key: the private key of the CA
  4. ca-password: the passphrase of the CA
  5. cert-file: the exported, unsigned certificate of the server
  6. cert-signed: the signed certificate of the server
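Putting sections 1.1 to 1.3 together, a non-interactive version of the whole signing flow might look like the sketch below (the passwords are the placeholders used throughout this article; adjust them to your own values):

# export an unsigned certificate request from the broker keystore
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file -storepass test1234
# sign the request with the CA
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 700 -CAcreateserial -passin pass:123456
# import the CA certificate and the signed certificate back into the broker keystore
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert -storepass test1234 -noprompt
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed -storepass test1234 -noprompt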

1.4 Configure the Kafka brokers

Kafka brokers support listening for connections on multiple ports. We need to configure the following property in server.properties, which must contain one or more comma-separated values:

If SSL is not enabled for inter-broker communication (see below for how to enable it), both a PLAINTEXT and an SSL listener are needed, each on its own port, for example:

listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093

(With SSL also used for inter-broker communication, as configured further below, a single SSL listener such as SSL://localhost:9092 is sufficient; that is what the client commands and logs later in this article assume.)

The broker side needs the following SSL configuration:

ssl.keystore.location=/home/lighthouse/server.keystore.jks

ssl.keystore.password=test1234

ssl.key.password=test1234

ssl.truststore.location=/home/lighthouse/server.truststore.jks

ssl.truststore.password=test1234

Note: ssl.truststore.password is technically optional but strongly recommended. If no password is set, access to the truststore still works, but integrity checking is disabled. Optional settings worth considering:

  1. ssl.client.auth=none ("required" => client authentication is required, "requested" => client authentication is requested, but clients without certificates can still connect. Using "requested" is discouraged, as it provides a false sense of security and misconfigured clients will still connect successfully.)
  2. ssl.cipher.suites (optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL protocol. (The default is an empty list.)
  3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (lists the SSL protocols that will be accepted from clients. Note that SSL is deprecated in favor of TLS, and using SSL in production is not recommended.)
  4. ssl.keystore.type=JKS
  5. ssl.truststore.type=JKS
  6. ssl.secure.random.implementation=SHA1PRNG

If you want to enable SSL for inter-broker communication, add the following to server.properties (the default is PLAINTEXT):

security.inter.broker.protocol=SSL
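Putting the broker settings together, the SSL-related part of server.properties in this walkthrough would look roughly like the sketch below (paths and passwords as assumed above; this is not a complete broker configuration):

listeners=SSL://localhost:9092
security.inter.broker.protocol=SSL
ssl.keystore.location=/home/lighthouse/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/home/lighthouse/server.truststore.jks
ssl.truststore.password=test1234
ssl.client.auth=none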

1.5 Configure Kafka clients

SSL is supported only by the new Kafka producer and consumer, not by the older APIs. The SSL configuration is the same for producers and consumers.
If client authentication is not required by the broker, the following is a minimal configuration example:

security.protocol=SSL

ssl.truststore.location=/var/private/ssl/client.truststore.jks

ssl.truststore.password=test1234

Note: ssl.truststore.password is technically optional but strongly recommended. If no password is set, access to the truststore still works, but integrity checking is disabled. If client authentication is required, a keystore must be created as in step 1, and the following must also be configured:

ssl.keystore.location=/var/private/ssl/client.keystore.jks

ssl.keystore.password=test1234

ssl.key.password=test1234

Depending on our requirements and the broker configuration, other configuration settings may also be needed:

  1. ssl.provider (optional). The name of the security provider used for SSL connections. The default is the default security provider of the JVM.
  2. ssl.cipher.suites (optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL protocol.
  3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side.
  4. ssl.truststore.type=JKS
  5. ssl.keystore.type=JKS

The client-ssl.properties file shared by the producer and consumer looks like the sketch below.
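Reconstructed from the settings above (using the /home/lighthouse paths that appear in the error logs below; the keystore lines are only needed when the broker sets ssl.client.auth=required):

security.protocol=SSL
ssl.truststore.location=/home/lighthouse/client.truststore.jks
ssl.truststore.password=test1234
# only needed when the broker requires client authentication
ssl.keystore.location=/home/lighthouse/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234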

Examples using console-producer and console-consumer:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --consumer.config ./config/client-ssl.properties

This failed with an error.

The following commands also had to be run in the home directory so that the client trusts the CA:

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert

If the password is wrong, an error like the following is reported:

lighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:431)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:299)
        at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:44)
        at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
        at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:73)
        at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
        at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:67)
        at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:99)
        at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:439)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:420)
        ... 3 more
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
        at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:144)
        at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
        ... 8 more
Caused by: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
        at org.apache.kafka.common.security.ssl.SslFactory$SecurityStore.load(SslFactory.java:357)
        at org.apache.kafka.common.security.ssl.SslFactory.createSSLContext(SslFactory.java:248)
        at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:141)
        ... 9 more
Caused by: java.io.IOException: keystore password was incorrect
        at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2092)
        at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:243)
        at java.base/java.security.KeyStore.load(KeyStore.java:1479)
        at org.apache.kafka.common.security.ssl.SslFactory$SecurityStore.load(SslFactory.java:354)
        ... 11 more
Caused by: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
        ... 15 more

I then double-checked the configuration (including the passwords) in client-ssl.properties and started the producer again, which reported the following error:

lighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
>[2024-03-19 13:42:49,783] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:49,835] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:49,937] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:50,140] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:50,543] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:51,298] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:52,203] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:53,158] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:54,264] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:55,220] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:56,376] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
^C^Clighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$

After verifying the passwords in server.properties as well, starting Kafka still failed; the key part of the error is as follows:

[2024-03-19 14:34:31,955] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 14:34:31,957] WARN SSL handshake failed (kafka.utils.CoreUtils$)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: No name matching localhost found
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:360)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:303)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:298)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
        at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1076)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1063)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1010)
        at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:402)
        at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:484)
        at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:340)
        at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:265)
        at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:170)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
        at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:74)
        at kafka.server.KafkaServer.doControlledShutdown$1(KafkaServer.scala:510)
        at kafka.server.KafkaServer.controlledShutdown(KafkaServer.scala:563)
        at kafka.server.KafkaServer.$anonfun$shutdown$2(KafkaServer.scala:585)
        at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:86)
        at kafka.server.KafkaServer.shutdown(KafkaServer.scala:585)
        at kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:48)
        at kafka.Kafka$$anon$1.run(Kafka.scala:72)
Caused by: java.security.cert.CertificateException: No name matching localhost found
        at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
        at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:461)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:435)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632)
        ... 24 more
[2024-03-19 14:34:31,957] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:31,960] INFO [/config/changes-event-process-thread]: Shutting down (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,961] INFO [/config/changes-event-process-thread]: Shutdown completed (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,961] INFO [/config/changes-event-process-thread]: Stopped (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,962] INFO [SocketServer brokerId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2024-03-19 14:34:31,979] INFO [SocketServer brokerId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2024-03-19 14:34:31,980] INFO [data-plane Kafka Request Handler on Broker 0], shutting down (kafka.server.KafkaRequestHandlerPool)
[2024-03-19 14:34:31,988] INFO [data-plane Kafka Request Handler on Broker 0], shut down completely (kafka.server.KafkaRequestHandlerPool)
[2024-03-19 14:34:31,995] INFO [KafkaApi-0] Shutdown complete. (kafka.server.KafkaApis)
[2024-03-19 14:34:31,997] INFO [ExpirationReaper-0-topic]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,059] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
^C[2024-03-19 14:34:32,114] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:32,132] INFO [ExpirationReaper-0-topic]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,132] INFO [ExpirationReaper-0-topic]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,134] INFO [TransactionCoordinator id=0] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-19 14:34:32,135] INFO [ProducerId Manager 0]: Shutdown complete: last producerId assigned 1000 (kafka.coordinator.transaction.ProducerIdManager)
[2024-03-19 14:34:32,136] INFO [Transaction State Manager 0]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager)
[2024-03-19 14:34:32,136] INFO [Transaction Marker Channel Manager 0]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,139] INFO [Transaction Marker Channel Manager 0]: Stopped (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,140] INFO [Transaction Marker Channel Manager 0]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,141] INFO [TransactionCoordinator id=0] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-19 14:34:32,141] INFO [GroupCoordinator 0]: Shutting down. (kafka.coordinator.group.GroupCoordinator)
[2024-03-19 14:34:32,144] INFO [ExpirationReaper-0-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,160] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,261] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,362] INFO [ExpirationReaper-0-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,362] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,363] INFO [ExpirationReaper-0-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,363] INFO [GroupCoordinator 0]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator)
[2024-03-19 14:34:32,364] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager)
[2024-03-19 14:34:32,364] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,366] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,366] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,368] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager)
[2024-03-19 14:34:32,369] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager)
[2024-03-19 14:34:32,370] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager)
[2024-03-19 14:34:32,370] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[2024-03-19 14:34:32,370] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,463] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,564] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,666] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,692] INFO [ExpirationReaper-0-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,692] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,693] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,768] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,870] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,893] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,893] INFO [ExpirationReaper-0-ElectPreferredLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,897] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager)
[2024-03-19 14:34:32,898] INFO Shutting down. (kafka.log.LogManager)
[2024-03-19 14:34:32,934] INFO Shutdown complete. (kafka.log.LogManager)
[2024-03-19 14:34:32,960] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2024-03-19 14:34:32,964] INFO Session: 0x100cd6124170002 closed (org.apache.zookeeper.ZooKeeper)
[2024-03-19 14:34:32,966] INFO EventThread shut down for session: 0x100cd6124170002 (org.apache.zookeeper.ClientCnxn)
[2024-03-19 14:34:32,966] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2024-03-19 14:34:32,968] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
^C[2024-03-19 14:34:33,740] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
^C[2024-03-19 14:34:33,972] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:34,170] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:34,170] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:34,172] INFO [SocketServer brokerId=0] Shutting down socket server (kafka.network.SocketServer)
^C[2024-03-19 14:34:34,204] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
[2024-03-19 14:34:34,204] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:34,206] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)

After cleaning up the ca-*, cert-*, client-* and server-* files, I regenerated the keys, certificates, CA and signatures as follows:

1. Generate SSL keys and certificates
keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore server.truststore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore client.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore client.truststore.jks -alias localhost -validity 700 -genkey -keyalg RSA

2. Create my own CA
openssl req -new -x509 -keyout ca-key -out ca-cert -days 700
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 700 -CAcreateserial -passin pass:123456

3. Sign the certificates
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed

keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert

keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
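(For comparison only: when the broker requires client authentication, the client keystore normally goes through the same request/sign/import cycle as the server keystore. The sketch below reuses the CA and the placeholder passwords from above and is not part of the steps that were actually run.)

keytool -keystore client.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA -storepass test1234 -keypass test1234 -dname "CN=localhost"
keytool -keystore client.keystore.jks -alias localhost -certreq -file client-cert-file -storepass test1234
openssl x509 -req -CA ca-cert -CAkey ca-key -in client-cert-file -out client-cert-signed -days 700 -CAcreateserial -passin pass:123456
keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert -storepass test1234 -noprompt
keytool -keystore client.keystore.jks -alias localhost -import -file client-cert-signed -storepass test1234 -noprompt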

I started ZooKeeper and Kafka again and ran the producer command, but it still failed:

[2024-03-19 17:52:38,773] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,876] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,876] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 17:52:38,876] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,979] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,980] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 17:52:38,980] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
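Judging from the earlier "No name matching localhost found" exception, a likely cause is TLS hostname verification, which Kafka enables by default since version 2.0 (ssl.endpoint.identification.algorithm=https): the broker certificate generated above carries no SAN entry matching localhost. Two common remedies, stated here as assumptions rather than something verified in this article, are to regenerate the broker key with a matching SAN and re-sign it, or to disable hostname verification on both the broker and the clients:

# option 1: regenerate the broker key with a SAN matching the advertised host name, then repeat the signing and import steps
keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA -ext SAN=DNS:localhost,IP:127.0.0.1

# option 2: disable hostname verification (add to both server.properties and client-ssl.properties)
ssl.endpoint.identification.algorithm=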
