Filebeat Study Notes


Filebeat Basic Concepts

Introduction

Filebeat is a lightweight log shipper with several built-in modules (auditd, Apache, Nginx, System, MySQL, and more) that greatly simplify collecting, parsing, and visualizing logs in common formats; a single command is enough. This is possible because each module combines automatic default paths (which differ by operating system) with Elasticsearch ingest node pipeline definitions and Kibana dashboards. On top of that, several Filebeat modules also include pre-configured Machine Learning jobs. One more point worth stating: depending on the kind of data being collected, there is a whole family of Beats made up of multiple shippers. Beats are open-source data shippers that you install as agents on your servers to send operational data to Elasticsearch, either directly or via Logstash, where the data can be further processed and enriched before being visualized in Kibana.
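For example, enabling one of the built-in modules really is a one-command affair. A minimal sketch (assuming Filebeat 8.x unpacked locally, with Elasticsearch and Kibana reachable at their default addresses):

./filebeat modules enable nginx   # turns modules.d/nginx.yml.disabled into nginx.yml
./filebeat setup -e               # loads the index template and Kibana dashboards for enabled modules
./filebeat -e                     # starts shipping, logging to stderr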

The Beats family is made up of the following shippers:

| Data type | Collection component (Beat) | Notes |
| --- | --- | --- |
| Audit data | Auditbeat | Lightweight audit log shipper |
| Log files | Filebeat | Lightweight log shipper |
| Availability | Heartbeat | Lightweight uptime monitoring shipper |
| Metrics | Metricbeat | Lightweight metrics shipper |
| Network traffic | Packetbeat | Lightweight network data shipper |
| Windows event logs | Winlogbeat | Lightweight Windows event log shipper |


Filebeat Features

  • Lightweight log shipper: it consumes few resources and places very low demands on the machine it runs on.
  • Simple to operate: collected log data can be sent directly to an Elasticsearch cluster, to Logstash, or to a message queue such as a Kafka cluster (a Kafka sketch follows this list).
  • After an unexpected interruption, a restarted Filebeat resumes from where it stopped (log offsets are recorded in the registry under ${filebeat_home}/data/registry).
  • Data is transferred with a backpressure-sensitive protocol: when Logstash is busy, Filebeat slows down its read and send rate, and returns to its normal speed once Logstash recovers.
  • Filebeat ships with internal modules (auditd, Apache, Nginx, System, and MySQL) that simplify collecting, parsing, and visualizing common log formats with a single command.

For reference, a minimal Logstash pipeline (reading from stdin and writing to stdout) can be started with:
bin/logstash -e 'input { stdin{} } output { stdout{} }'
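As referenced in the second bullet above, a minimal sketch of shipping logs to a Kafka cluster; the broker addresses, topic name, and log path are assumptions, not values from these notes:

filebeat.inputs:
- type: filestream
  id: app-logs                  # assumed input id
  enabled: true
  paths:
    - /var/log/app/*.log        # assumed path

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]   # assumed broker addresses
  topic: "app-logs"                       # assumed topic name
  required_acks: 1
  compression: gzip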

Filebeat vs. Logstash

  • Filebeat is a lightweight data shipper that you install as an agent on your servers to send specific types of operational data to Elasticsearch. Compared with Logstash, it has a smaller footprint and uses fewer system resources.
  • Logstash has a larger footprint, but provides a large number of input, filter, and output plugins for collecting, enriching, and transforming data from a wide variety of sources.
  • Logstash runs on the JVM (written in Java, with plugins written in JRuby), so its hardware requirements are noticeably higher; for log collection it uses far more CPU and memory than Filebeat.
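In practice the two are often combined: Filebeat does the lightweight collection on each host, and Logstash does the heavy parsing. A minimal sketch of that pairing, using the conventional Beats port 5044 (all hosts and ports here are assumptions). In filebeat.yml, point the output at Logstash instead of Elasticsearch:

output.logstash:
  hosts: ["localhost:5044"]

On the Logstash side, a pipeline such as beats.conf (a hypothetical file name) receives the events, optionally enriches them, and writes them to Elasticsearch:

input {
  beats {
    port => 5044
  }
}
filter {
  # parsing and enrichment (grok, mutate, ...) would go here
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}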

Installing Filebeat

Filebeat itself places little demand on machine performance; after collecting data it forwards it via HTTP requests (when shipping to Elasticsearch).

Download link: https://www.elastic.co/cn/downloads/beats/filebeat

Make sure the downloaded version matches the rest of your stack (Elasticsearch/Kibana) to avoid compatibility problems.

Upload the downloaded filebeat-8.9.0-linux-x86_64.tar.gz file to /usr/local/software/.

cd /usr/local/software/
tar -xzvf filebeat-8.9.0-linux-x86_64.tar.gz
mv filebeat-8.9.0-linux-x86_64 filebeat-8.9.0
cd filebeat-8.9.0
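
Before editing anything, the unpacked binary and the configuration can be sanity-checked with Filebeat's own subcommands (a sketch; run from the filebeat-8.9.0 directory):

./filebeat version                      # confirm the installed version
./filebeat test config -c filebeat.yml  # verify that filebeat.yml parses correctly
./filebeat test output                  # check connectivity to the configured output (once one is set up)
./filebeat -e -c filebeat.yml           # run in the foreground, logging to stderr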

Official documentation: https://www.elastic.co/guide/en/beats/filebeat/current/index.html

Configuration is done by modifying the filebeat.yml file:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  # The input is disabled by default; change this to true to enable it
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  # Change these to the log files to be monitored
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    # (Windows example)

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
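
  # (Illustration added to these notes, not part of the stock example file.)
  # For multi-line records such as Java stack traces, a filestream parser can
  # join continuation lines into a single event; the pattern below is only an
  # assumed example and must match the timestamp format of your logs:
  #parsers:
  #  - multiline:
  #      type: pattern
  #      pattern: '^\d{4}-\d{2}-\d{2}'
  #      negate: true
  #      match: after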

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
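  # (Illustration added to these notes, not part of the stock example file.)
  # Processors can also filter events before they are shipped; for example,
  # dropping health-check noise (the matched string is an assumption):
  #- drop_event:
  #    when:
  #      contains:
  #        message: "GET /health"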

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
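
Putting it together, a minimal working filebeat.yml distilled from the example above might look like the following sketch; the input path and the Elasticsearch address are assumptions and should be adjusted to your environment:

filebeat.inputs:
- type: filestream
  id: system-logs               # any unique id
  enabled: true
  paths:
    - /var/log/*.log            # same path as in the example above

output.elasticsearch:
  hosts: ["localhost:9200"]     # assumed single-node Elasticsearch

Start it in the foreground with ./filebeat -e -c filebeat.yml; once events are flowing, the data should show up under the filebeat-* index pattern (a data stream in 8.x) in Kibana.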
