Two Ways to Collect Spring Boot Logs with ELK


Approach 1: Filebeat + Logstash 7.5.1 (Docker) + ES (Docker) + Spring Boot log files

We use an ELFK-style architecture to collect logs: the log files the application writes are read directly, with no changes to the Spring Boot logging itself. Filebeat tails the log files at their configured paths, ships their contents to Logstash, and Logstash then forwards the logs on to Elasticsearch.

1. Filebeat configuration file

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# The log input collects log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-avicit-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - E:\AVICIT\changfei\code\avicit-vpdm\logs\*.log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["172.17.4.55:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.17.4.55:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true


The key settings are these two places: the paths list under filebeat.inputs, which must point at the Spring Boot log files, and the hosts entry under output.logstash, which must point at the Logstash endpoint (the original post illustrated both with screenshots).
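
One addition worth considering: Java stack traces span multiple lines, and by default the log input ships each line as a separate event. The multiline options of the log input type used above can merge continuation lines into the preceding event. A minimal sketch to add under the same "- type: log" input — the pattern assumes log lines start with a date and must be adjusted to the real log format:

  # Merge any line that does NOT start with a date into the previous
  # event (e.g. stack-trace frames).
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after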

2. Logstash configuration file

input {
  beats {
    port => 5044
    host => "172.17.4.55"
  }
}
output {
  elasticsearch {
    hosts => ["http://172.17.4.55:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "elastic"
    #document_id => "%{id}"
  }
  stdout { codec => rubydebug }
}
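
If you want structured fields in ES rather than the raw line, a filter block can sit between input and output. A hedged sketch using grok — the pattern here is purely illustrative and has to be adapted to the actual Spring Boot log layout:

filter {
  grok {
    # Illustrative: pull a timestamp and level out of each line,
    # keeping the remainder in log_message.
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} +%{LOGLEVEL:level} +%{GREEDYDATA:log_message}" }
  }
}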

3. Console log output

We start Filebeat first. Since the project runs locally, the Windows build of Filebeat is installed.

1. Start the local Filebeat first

.\filebeat -e -c filebeat.yml 
pause
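
For long-running use, instead of keeping a console window open, the Windows Filebeat bundle ships a PowerShell script that installs Filebeat as a Windows service. A sketch, run as administrator from the Filebeat directory (script name as shipped in the official package):

PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1
Start-Service filebeat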

2. Then start Logstash in the container

docker run -p 5044:5044 -it -v /var/lib/docker/volumes/logstash/config:/usr/share/logstash/config  --name logstash  logstash:7.5.1
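
Note that the official Logstash image loads pipeline definitions from /usr/share/logstash/pipeline. If the pipeline .conf file lives outside the mounted config directory, a variant of the command that also mounts a pipeline directory might look like this (the host paths are assumptions based on the volume layout above):

docker run -p 5044:5044 -it \
  -v /var/lib/docker/volumes/logstash/config:/usr/share/logstash/config \
  -v /var/lib/docker/volumes/logstash/pipeline:/usr/share/logstash/pipeline \
  --name logstash logstash:7.5.1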

Once Filebeat and Logstash are running, you can see the pre-existing Spring Boot log files being read automatically and shipped into Logstash. After the Spring Boot application starts, newly written log lines are streamed to the Logstash console in real time (the original post shows a screenshot of this).

Testing the Logstash 7.5.1 (Docker) setup showed that logs only reached Logstash when paired with Filebeat, and in my environment the logs then could not be forwarded from Logstash 7.5.1 (Docker) to ES. If you hit the same problem, Approach 2 below is an alternative.

Approach 2: Logstash 7.0.1 + ES (Docker) + Spring Boot

1. Shipping Spring Boot logs directly to a Logstash TCP port

Logging configuration

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="2 seconds">
    <timestamp key="TIMESTAMP" datePattern="yyyy-MM-dd"/>
    <springProperty scope="context" name="application.name" source="spring.application.name" defaultValue="xxxxx"/>
    <property name="LOGPATH" value="log"/>
    <!-- Read the Logstash address and port from the Spring Boot configuration -->
    <springProperty scope="context" name="logstash-host" source="log.logstash-host"/>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] [traceId=%X{traceId} spanId=%X{spanId}] %-5level %logger{100} - %msg%n
            </pattern>
        </encoder>
    </appender>

    <appender name="rollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
     …………………………
    </appender>
  <!--  <appender name="AsyncAppender" class="ch.qos.logback.classic.AsyncAppender">
  ……………………
    </appender>

    <appender name="errorFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    ……………………
    </appender>
    <!-- Appender that ships logs to Logstash -->
   <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
       <!-- &lt;!&ndash;可以访问的logstash日志收集端口&ndash;&gt;-->
        <destination>172.17.4.55:5044</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <!-- project default level -->
    <logger name="xxxxx" level="info"/>

    <root level="info">
        <appender-ref ref="console"/>
        <appender-ref ref="rollingFile"/>
        <!-- <appender-ref ref="AsyncAppender"/>-->
        <appender-ref ref="errorFile"/>
     <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>
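
LogstashTcpSocketAppender and LogstashEncoder come from the logstash-logback-encoder library, so the application needs it on the classpath. A Maven sketch — the version number is an assumption; pick one compatible with your Logback/Spring Boot versions:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <!-- version is an assumption; use one matching your Logback version -->
    <version>6.6</version>
</dependency>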

The logging configuration must define where logs are sent and in what format — that is, the LOGSTASH appender and the logstash-host property. Note that in the sample above the destination is hard-coded; it can instead reference the property as ${logstash-host} so the address comes from the Spring Boot configuration.
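
For the springProperty lookup to resolve, the Spring Boot configuration must define the log.logstash-host key it reads from. A minimal sketch — the value is the Logstash TCP endpoint used above:

# application.yml
log:
  logstash-host: 172.17.4.55:5044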

2. The logstash.conf configuration file for Logstash

While testing, first print the logs received from Spring Boot to the Logstash console; once that works, switch the output to ES.

2.1 Configuration for printing to the Logstash console first

Create the springbootlog_logstash.conf configuration file:

input {
  #file {
  #  path => "/home/test/logstash/ml-25m/movies.csv"
  #  start_position => "beginning"
  #  sincedb_path => "/dev/null"
  #}
  tcp {
    port => 5044
    host => "172.17.4.55"
  }
}
output {
  #elasticsearch {
  #  hosts => "http://172.17.4.55:9200"
  #  index => "movies"
  #  document_id => "%{id}"
  #}

  stdout {
    codec => rubydebug
  }
}

Start it:

 ../bin/logstash -f ./springbootlog_logstash.conf
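
Optionally, the pipeline syntax can be validated before starting, or automatic reloading enabled while iterating on the file (both are standard Logstash flags):

# Check the config file for syntax errors, then exit
../bin/logstash -f ./springbootlog_logstash.conf --config.test_and_exit

# Or start with automatic reloading of the pipeline file on change
../bin/logstash -f ./springbootlog_logstash.conf --config.reload.automatic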

The incoming events are then printed to the console (the original post shows a screenshot of the output).

Once the console output looks correct, modify springbootlog_logstash.conf to write to ES:

input {
  #file {
  #  path => "/home/test/logstash/ml-25m/movies.csv"
  #  start_position => "beginning"
  #  sincedb_path => "/dev/null"
  #}
  tcp {
    port => 5044
    host => "172.17.4.55"
  }
}
output {
  elasticsearch {
    hosts => "http://172.17.4.55:9200"
    index => "avicit-vpdm-%{+yyyy-MM-dd}.log"
    #document_id => "%{id}"
  }

  #stdout {
  #  codec => rubydebug
  #}
}

After restarting, nothing is printed to the Logstash console, because the stdout output is now commented out. Checking in ES instead, the logs are successfully written and a log index is created.
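
A quick way to confirm the index exists is the _cat API (add -u elastic:<password> if security is enabled on the cluster):

curl "http://172.17.4.55:9200/_cat/indices/avicit-vpdm-*?v"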


2.2 The index created in ES

The index we created is now visible in the index management view (the original post shows screenshots).
