Using C to Work with Kafka


Contents

  • 1 Installing librdkafka
  • 2 Starting the Kafka services
    • 2.1 Starting ZooKeeper
    • 2.2 Starting Kafka
    • 2.3 Creating a topic
  • 3 C examples for working with Kafka
    • 3.1 Consumer
    • 3.2 Producer
    • 3.3 Producer and consumer interaction
  • Summary

1 Installing librdkafka

git clone https://github.com/edenhill/librdkafka.git
cd librdkafka
git checkout v1.7.0
./configure
make
sudo make install
sudo ldconfig
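
A quick way to confirm that the headers and shared library were installed correctly is to compile a tiny test program against librdkafka. The sketch below is not part of the librdkafka examples, and the file name check_rdkafka.c is just a placeholder:

/* check_rdkafka.c - print the installed librdkafka version.
 * Build with: gcc check_rdkafka.c -o check_rdkafka -lrdkafka */
#include <stdio.h>
#include <librdkafka/rdkafka.h>

int main(void) {
        printf("librdkafka version: %s (0x%08x)\n",
               rd_kafka_version_str(), (unsigned)rd_kafka_version());
        return 0;
}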

The examples directory of librdkafka contains sample programs. For instance, the consumer example takes the following arguments:

./consumer <broker> <group.id> <topic1> <topic2>..

That is, the broker address, the consumer group id, and one or more topics to subscribe to. Example:

./consumer localhost:9092 0 test


2 Starting the Kafka services

2.1 Starting ZooKeeper

ZooKeeper can be started with the script below. You could also set up a standalone ZooKeeper cluster yourself, but here we simply use the ZooKeeper that ships with Kafka.

cd bin/
# Run in the foreground:
sh zookeeper-server-start.sh  ../config/zookeeper.properties

# Run in the background (as a daemon):
sh zookeeper-server-start.sh -daemon ../config/zookeeper.properties

You can check whether ZooKeeper started successfully with lsof -i:2181:

$ lsof -i:2181
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    74930  fly   96u  IPv6 734467      0t0  TCP *:2181 (LISTEN)

2.2 Starting Kafka

Start Kafka from the bin directory of the Kafka installation; it listens on port 9092 by default.

sh kafka-server-start.sh -daemon ../config/server.properties

2.3 Creating a topic

sh kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Parameter description:

--create is the action flag that creates a topic.
--zookeeper specifies the address of the ZooKeeper service that Kafka is connected to. (On Kafka 2.2 and later, kafka-topics.sh also accepts --bootstrap-server localhost:9092 instead; the --zookeeper option was removed in Kafka 3.0.)
--replication-factor specifies the replication factor, i.e. how many copies of the topic's data are kept on different brokers. It is set to 1 here, meaning a single copy.
--partitions specifies the number of partitions; multiple partitions act like multiple lanes on a road and allow parallel consumption.
--topic specifies the name of the topic to create, here test.
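
Instead of the kafka-topics.sh script, the topic can also be created from C through librdkafka's Admin API. The following is a minimal sketch of that approach; it is not taken from the librdkafka examples, assumes a broker at localhost:9092, and keeps error handling to a minimum.

/* create_topic.c - hypothetical sketch, not part of the librdkafka examples.
 * Creates the "test" topic (1 partition, replication factor 1) via the Admin API.
 * Build with: gcc create_topic.c -o create_topic -lrdkafka */
#include <stdio.h>
#include <librdkafka/rdkafka.h>

int main(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();

        if (rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                fprintf(stderr, "%s\n", errstr);
                return 1;
        }

        /* Any client type can issue admin requests; a producer handle is used here. */
        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
        if (!rk) {
                fprintf(stderr, "%% Failed to create client: %s\n", errstr);
                return 1;
        }

        /* Topic specification mirroring the kafka-topics.sh command above. */
        rd_kafka_NewTopic_t *newt =
                rd_kafka_NewTopic_new("test", 1 /*partitions*/, 1 /*replication*/,
                                      errstr, sizeof(errstr));
        if (!newt) {
                fprintf(stderr, "%% NewTopic failed: %s\n", errstr);
                rd_kafka_destroy(rk);
                return 1;
        }

        /* The Admin API is asynchronous: the result comes back as an event. */
        rd_kafka_queue_t *queue = rd_kafka_queue_new(rk);
        rd_kafka_CreateTopics(rk, &newt, 1, NULL /*default options*/, queue);

        /* A robust program would loop until the CreateTopics event arrives;
         * a single poll is enough for this sketch. */
        rd_kafka_event_t *ev = rd_kafka_queue_poll(queue, 10 * 1000);
        if (ev && rd_kafka_event_type(ev) == RD_KAFKA_EVENT_CREATETOPICS_RESULT) {
                const rd_kafka_CreateTopics_result_t *res =
                        rd_kafka_event_CreateTopics_result(ev);
                size_t cnt, i;
                const rd_kafka_topic_result_t **topics =
                        rd_kafka_CreateTopics_result_topics(res, &cnt);
                for (i = 0; i < cnt; i++)
                        fprintf(stderr, "%% Topic %s: %s\n",
                                rd_kafka_topic_result_name(topics[i]),
                                rd_kafka_err2str(rd_kafka_topic_result_error(topics[i])));
        } else {
                fprintf(stderr, "%% CreateTopics result not received\n");
        }

        if (ev)
                rd_kafka_event_destroy(ev);
        rd_kafka_NewTopic_destroy(newt);
        rd_kafka_queue_destroy(queue);
        rd_kafka_destroy(rk);
        return 0;
}

If the topic already exists, the per-topic result typically carries RD_KAFKA_RESP_ERR_TOPIC_ALREADY_EXISTS rather than failing the whole request.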

3 C examples for working with Kafka

3.1 Consumer

Under librdkafka/examples there is a consumer.c file, a C example of consuming messages from Kafka. Its contents are shown below.

/**
 * Simple high-level balanced Apache Kafka consumer
 * using the Kafka driver from librdkafka
 * (https://github.com/edenhill/librdkafka)
 */

#include <stdio.h>
#include <signal.h>
#include <string.h>
#include <ctype.h>


/* Typical include path would be <librdkafka/rdkafka.h>, but this program
 * is built from within the librdkafka source tree and thus differs. */
//#include <librdkafka/rdkafka.h>
#include "rdkafka.h"


static volatile sig_atomic_t run = 1;

/**
 * @brief Signal termination of program
 */
static void stop (int sig) {
        run = 0;
}



/**
 * @returns 1 if all bytes are printable, else 0.
 */
static int is_printable (const char *buf, size_t size) {
        size_t i;

        for (i = 0 ; i < size ; i++)
                if (!isprint((int)buf[i]))
                        return 0;

        return 1;
}


int main (int argc, char **argv) {
        rd_kafka_t *rk;          /* Consumer instance handle */
        rd_kafka_conf_t *conf;   /* Temporary configuration object */
        rd_kafka_resp_err_t err; /* librdkafka API error code */
        char errstr[512];        /* librdkafka API error reporting buffer */
        const char *brokers;     /* Argument: broker list */
        const char *groupid;     /* Argument: Consumer group id */
        char **topics;           /* Argument: list of topics to subscribe to */
        int topic_cnt;           /* Number of topics to subscribe to */
        rd_kafka_topic_partition_list_t *subscription; /* Subscribed topics */
        int i;

        /*
         * Argument validation
         */
        if (argc < 4) {
                fprintf(stderr,
                        "%% Usage: "
                        "%s <broker> <group.id> <topic1> <topic2>..\n",
                        argv[0]);
                return 1;
        }

        brokers   = argv[1];
        groupid   = argv[2];
        topics    = &argv[3];
        topic_cnt = argc - 3;


        /*
         * Create Kafka client configuration place-holder
         */
        conf = rd_kafka_conf_new();	// Create the client configuration object

        /* Set bootstrap broker(s) as a comma-separated list of
         * host or host:port (default port 9092).
         * librdkafka will use the bootstrap brokers to acquire the full
         * set of brokers from the cluster. */
        if (rd_kafka_conf_set(conf, "bootstrap.servers", brokers,
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                fprintf(stderr, "%s\n", errstr);
                rd_kafka_conf_destroy(conf);
                return 1;
        }

        /* Set the consumer group id.
         * All consumers sharing the same group id will join the same
         * group, and the subscribed topic' partitions will be assigned
         * according to the partition.assignment.strategy
         * (consumer config property) to the consumers in the group. */
        if (rd_kafka_conf_set(conf, "group.id", groupid,
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                fprintf(stderr, "%s\n", errstr);
                rd_kafka_conf_destroy(conf);
                return 1;
        }

        /* If there is no previously committed offset for a partition
         * the auto.offset.reset strategy will be used to decide where
         * in the partition to start fetching messages.
         * By setting this to earliest the consumer will read all messages
         * in the partition if there was no previously committed offset. */
        if (rd_kafka_conf_set(conf, "auto.offset.reset", "earliest",
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                fprintf(stderr, "%s\n", errstr);
                rd_kafka_conf_destroy(conf);
                return 1;
        }

        /*
         * Create consumer instance.
         *
         * NOTE: rd_kafka_new() takes ownership of the conf object
         *       and the application must not reference it again after
         *       this call.
         */
         // Create a Kafka consumer instance
        rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
        if (!rk) {
                fprintf(stderr,
                        "%% Failed to create new consumer: %s\n", errstr);
                return 1;
        }

        conf = NULL; /* Configuration object is now owned, and freed,
                      * by the rd_kafka_t instance. */


        /* Redirect all messages from per-partition queues to
         * the main queue so that messages can be consumed with one
         * call from all assigned partitions.
         *
         * The alternative is to poll the main queue (for events)
         * and each partition queue separately, which requires setting
         * up a rebalance callback and keeping track of the assignment:
         * but that is more complex and typically not recommended. */
        rd_kafka_poll_set_consumer(rk); // Redirect per-partition queues to the consumer poll queue


        /* Convert the list of topics to a format suitable for librdkafka */
        // Build the list of topics to subscribe to
        subscription = rd_kafka_topic_partition_list_new(topic_cnt);
        for (i = 0 ; i < topic_cnt ; i++)
                rd_kafka_topic_partition_list_add(subscription,
                                                  topics[i],
                                                  /* the partition is ignored
                                                   * by subscribe() */
                                                  RD_KAFKA_PARTITION_UA);

        /* Subscribe to the list of topics */
        err = rd_kafka_subscribe(rk, subscription);
        if (err) {
                fprintf(stderr,
                        "%% Failed to subscribe to %d topics: %s\n",
                        subscription->cnt, rd_kafka_err2str(err));
                rd_kafka_topic_partition_list_destroy(subscription);
                rd_kafka_destroy(rk);
                return 1;
        }

        fprintf(stderr,
                "%% Subscribed to %d topic(s), "
                "waiting for rebalance and messages...\n",
                subscription->cnt);

        rd_kafka_topic_partition_list_destroy(subscription);


        /* Signal handler for clean shutdown */
        signal(SIGINT, stop);

        /* Subscribing to topics will trigger a group rebalance
         * which may take some time to finish, but there is no need
         * for the application to handle this idle period in a special way
         * since a rebalance may happen at any time.
         * Start polling for messages. */

        while (run) {
                rd_kafka_message_t *rkm;
				
                rkm = rd_kafka_consumer_poll(rk, 100);
                if (!rkm)
                        continue; /* Timeout: no message within 100ms,
                                   *  try again. This short timeout allows
                                   *  checking for `run` at frequent intervals.
                                   */

                /* consumer_poll() will return either a proper message
                 * or a consumer error (rkm->err is set). */
                if (rkm->err) {
                        /* Consumer errors are generally to be considered
                         * informational as the consumer will automatically
                         * try to recover from all types of errors. */
                        fprintf(stderr,
                                "%% Consumer error: %s\n",
                                rd_kafka_message_errstr(rkm));
                        rd_kafka_message_destroy(rkm);
                        continue;
                }

                /* Proper message. */
                printf("Message on %s [%"PRId32"] at offset %"PRId64":\n",
                       rd_kafka_topic_name(rkm->rkt), rkm->partition,
                       rkm->offset);

                /* Print the message key. */
                if (rkm->key && is_printable(rkm->key, rkm->key_len))
                        printf(" Key: %.*s\n",
                               (int)rkm->key_len, (const char *)rkm->key);
                else if (rkm->key)
                        printf(" Key: (%d bytes)\n", (int)rkm->key_len);

                /* Print the message value/payload. */
                if (rkm->payload && is_printable(rkm->payload, rkm->len))
                        printf(" Value: %.*s\n",
                               (int)rkm->len, (const char *)rkm->payload);
                else if (rkm->payload)
                        printf(" Value: (%d bytes)\n", (int)rkm->len);

                rd_kafka_message_destroy(rkm);
        }


        /* Close the consumer: commit final offsets and leave the group. */
        fprintf(stderr, "%% Closing consumer\n");
        rd_kafka_consumer_close(rk);


        /* Destroy the consumer */
        rd_kafka_destroy(rk);

        return 0;
}

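The example above leaves enable.auto.commit at its default (true), so librdkafka commits offsets in the background. For at-least-once processing it is common to commit manually after each message has been handled. The loop below is a sketch of that variation, not part of consumer.c; it assumes enable.auto.commit was set to "false" on the conf object before rd_kafka_new(), and it reuses the rk and run variables from the example.

        /* Assumed extra configuration, applied before rd_kafka_new():
         *   rd_kafka_conf_set(conf, "enable.auto.commit", "false",
         *                     errstr, sizeof(errstr));
         */
        while (run) {
                rd_kafka_message_t *rkm = rd_kafka_consumer_poll(rk, 100);
                if (!rkm)
                        continue;

                if (rkm->err) {
                        fprintf(stderr, "%% Consumer error: %s\n",
                                rd_kafka_message_errstr(rkm));
                        rd_kafka_message_destroy(rkm);
                        continue;
                }

                /* ... process rkm->payload here ... */

                /* Synchronously commit this message's offset only after it has
                 * been processed, so a crash cannot skip unprocessed records. */
                rd_kafka_resp_err_t cerr = rd_kafka_commit_message(rk, rkm, 0 /*sync*/);
                if (cerr)
                        fprintf(stderr, "%% Offset commit failed: %s\n",
                                rd_kafka_err2str(cerr));

                rd_kafka_message_destroy(rkm);
        }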

3.2 Producer

Under librdkafka/examples there is also a producer.c file, a C example of producing messages to Kafka. Its contents are shown below.

/**
 * Simple Apache Kafka producer
 * using the Kafka driver from librdkafka
 * (https://github.com/edenhill/librdkafka)
 */

#include <stdio.h>
#include <signal.h>
#include <string.h>


/* Typical include path would be <librdkafka/rdkafka.h>, but this program
 * is built from within the librdkafka source tree and thus differs. */
#include "rdkafka.h"


static volatile sig_atomic_t run = 1;

/**
 * @brief Signal termination of program
 */
static void stop (int sig) {
        run = 0;
        fclose(stdin); /* abort fgets() */
}


/**
 * @brief Message delivery report callback.
 *
 * This callback is called exactly once per message, indicating if
 * the message was successfully delivered
 * (rkmessage->err == RD_KAFKA_RESP_ERR_NO_ERROR) or permanently
 * failed delivery (rkmessage->err != RD_KAFKA_RESP_ERR_NO_ERROR).
 *
 * The callback is triggered from rd_kafka_poll() and executes on
 * the application's thread.
 */
static void dr_msg_cb (rd_kafka_t *rk,
                       const rd_kafka_message_t *rkmessage, void *opaque) {
        if (rkmessage->err)
                fprintf(stderr, "%% Message delivery failed: %s\n",
                        rd_kafka_err2str(rkmessage->err));
        else
                fprintf(stderr,
                        "%% Message delivered (%zd bytes, "
                        "partition %"PRId32")\n",
                        rkmessage->len, rkmessage->partition);

        /* The rkmessage is destroyed automatically by librdkafka */
}



int main (int argc, char **argv) {
        rd_kafka_t *rk;         /* Producer instance handle */
        rd_kafka_conf_t *conf;  /* Temporary configuration object */
        char errstr[512];       /* librdkafka API error reporting buffer */
        char buf[512];          /* Message value temporary buffer */
        const char *brokers;    /* Argument: broker list */
        const char *topic;      /* Argument: topic to produce to */

        /*
         * Argument validation
         */
        if (argc != 3) {
                fprintf(stderr, "%% Usage: %s <broker> <topic>\n", argv[0]);
                return 1;
        }

        brokers = argv[1];
        topic   = argv[2];


        /*
         * Create Kafka client configuration place-holder
         */
        conf = rd_kafka_conf_new();

        /* Set bootstrap broker(s) as a comma-separated list of
         * host or host:port (default port 9092).
         * librdkafka will use the bootstrap brokers to acquire the full
         * set of brokers from the cluster. */
        if (rd_kafka_conf_set(conf, "bootstrap.servers", brokers,
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                fprintf(stderr, "%s\n", errstr);
                return 1;
        }

        /* Set the delivery report callback.
         * This callback will be called once per message to inform
         * the application if delivery succeeded or failed.
         * See dr_msg_cb() above.
         * The callback is only triggered from rd_kafka_poll() and
         * rd_kafka_flush(). */
        rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb);

        /*
         * Create producer instance.
         *
         * NOTE: rd_kafka_new() takes ownership of the conf object
         *       and the application must not reference it again after
         *       this call.
         */
        rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
        if (!rk) {
                fprintf(stderr,
                        "%% Failed to create new producer: %s\n", errstr);
                return 1;
        }

        /* Signal handler for clean shutdown */
        signal(SIGINT, stop);

        fprintf(stderr,
                "%% Type some text and hit enter to produce message\n"
                "%% Or just hit enter to only serve delivery reports\n"
                "%% Press Ctrl-C or Ctrl-D to exit\n");

        while (run && fgets(buf, sizeof(buf), stdin)) {
                size_t len = strlen(buf);
                rd_kafka_resp_err_t err;

                if (buf[len-1] == '\n') /* Remove newline */
                        buf[--len] = '\0';

                if (len == 0) {
                        /* Empty line: only serve delivery reports */
                        rd_kafka_poll(rk, 0/*non-blocking */);
                        continue;
                }

                /*
                 * Send/Produce message.
                 * This is an asynchronous call, on success it will only
                 * enqueue the message on the internal producer queue.
                 * The actual delivery attempts to the broker are handled
                 * by background threads.
                 * The previously registered delivery report callback
                 * (dr_msg_cb) is used to signal back to the application
                 * when the message has been delivered (or failed).
                 */
        retry:
                err = rd_kafka_producev(
                        /* Producer handle */
                        rk,
                        /* Topic name */
                        RD_KAFKA_V_TOPIC(topic),
                        /* Make a copy of the payload. */
                        RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
                        /* Message value and length */
                        RD_KAFKA_V_VALUE(buf, len),
                        /* Per-Message opaque, provided in
                         * delivery report callback as
                         * msg_opaque. */
                        RD_KAFKA_V_OPAQUE(NULL),
                        /* End sentinel */
                        RD_KAFKA_V_END);

                if (err) {
                        /*
                         * Failed to *enqueue* message for producing.
                         */
                        fprintf(stderr,
                                "%% Failed to produce to topic %s: %s\n",
                                topic, rd_kafka_err2str(err));

                        if (err == RD_KAFKA_RESP_ERR__QUEUE_FULL) {
                                /* If the internal queue is full, wait for
                                 * messages to be delivered and then retry.
                                 * The internal queue represents both
                                 * messages to be sent and messages that have
                                 * been sent or failed, awaiting their
                                 * delivery report callback to be called.
                                 *
                                 * The internal queue is limited by the
                                 * configuration property
                                 * queue.buffering.max.messages */
                                rd_kafka_poll(rk, 1000/*block for max 1000ms*/);
                                goto retry;
                        }
                } else {
                        fprintf(stderr, "%% Enqueued message (%zd bytes) "
                                "for topic %s\n",
                                len, topic);
                }


                /* A producer application should continually serve
                 * the delivery report queue by calling rd_kafka_poll()
                 * at frequent intervals.
                 * Either put the poll call in your main loop, or in a
                 * dedicated thread, or call it after every
                 * rd_kafka_produce() call.
                 * Just make sure that rd_kafka_poll() is still called
                 * during periods where you are not producing any messages
                 * to make sure previously produced messages have their
                 * delivery report callback served (and any other callbacks
                 * you register). */
                rd_kafka_poll(rk, 0/*non-blocking*/);
        }


        /* Wait for final messages to be delivered or fail.
         * rd_kafka_flush() is an abstraction over rd_kafka_poll() which
         * waits for all messages to be delivered. */
        fprintf(stderr, "%% Flushing final messages..\n");
        rd_kafka_flush(rk, 10*1000 /* wait for max 10 seconds */);

        /* If the output queue is still not empty there is an issue
         * with producing messages to the clusters. */
        if (rd_kafka_outq_len(rk) > 0)
                fprintf(stderr, "%% %d message(s) were not delivered\n",
                        rd_kafka_outq_len(rk));

        /* Destroy the producer instance */
        rd_kafka_destroy(rk);

        return 0;
}

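producer.c sends messages without a key, so the broker spreads them across partitions and the consumer's key-printing branch above never fires. A small variation, sketched below (it is not part of producer.c, and "user-42" is only a placeholder key), attaches a key with RD_KAFKA_V_KEY so that all messages carrying the same key are hashed to the same partition:

                /* Replacement for the rd_kafka_producev() call inside the loop above. */
                const char *key = "user-42";   /* placeholder key */

                err = rd_kafka_producev(
                        rk,
                        RD_KAFKA_V_TOPIC(topic),
                        RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
                        RD_KAFKA_V_KEY(key, strlen(key)),  /* message key and length */
                        RD_KAFKA_V_VALUE(buf, len),
                        RD_KAFKA_V_OPAQUE(NULL),
                        RD_KAFKA_V_END);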

3.3 Producer and consumer interaction

(1) Start the consumer.

./consumer localhost:9092 0 test

Output:

% Subscribed to 1 topic(s), waiting for rebalance and messages...

(2) Start the producer.

./producer localhost:9092 test

Output:

% Type some text and hit enter to produce message
% Or just hit enter to only serve delivery reports
% Press Ctrl-C or Ctrl-D to exit

(3) Exchange messages.
The producer sends a hello message:

$ ./producer localhost:9092 test
% Type some text and hit enter to produce message
% Or just hit enter to only serve delivery reports
% Press Ctrl-C or Ctrl-D to exit
hello consumer
% Enqueued message (14 bytes) for topic test

The consumer receives it:

$ ./consumer localhost:9092 0 test
% Subscribed to 1 topic(s), waiting for rebalance and messages...
Message on test [0] at offset 4:
 Value: hello consumer

Summary

  1. Within a consumer group, a partition is read by only one consumer. If a topic has only one partition and several consumers in the same group subscribe to it, only one of them will receive data; pointing multiple consumers at a single partition achieves nothing.

filebeat是一个轻量级的日志收集工具&#xff0c;所使用的系统资源比logstash部署和启动使用的资源要小的多 filebeat可以允许在非java环境&#xff0c;他可以代替logstash在非java环境上收集日志 filebeat无法实现数据的过滤&#xff0c;一般是结合logstash的数据过滤功能一…