[Cloud Native] Deploying an EFK Log Analysis System on Kubernetes


Deploying an EFK Log Analysis System on Kubernetes

Table of Contents

  • Deploying an EFK Log Analysis System on Kubernetes
    • 1. Prerequisites
      • 1.1. Which logs should a k8s cluster collect?
      • 1.2. Popular log collection solutions for k8s
      • 1.3. Comparing fluentd, filebeat, and logstash
        • 1.3.1. Logstash
        • 1.3.2. Filebeat
        • 1.3.3. Fluentd
      • 1.4. How EFK works
    • Resource list
    • Base environment
    • 2. Verify the K8S cluster is healthy
      • 2.1. Check Pod status
      • 2.2. Check node status
      • 2.3. Check component status
    • 3. Deploy EFK
      • 3.1. Pull images on all nodes
      • 3.2. Create the namespace
      • 3.3. Install Elasticsearch
      • 3.4. Install Kibana
      • 3.5. Install Fluentd
      • 3.6. View Pods and exposed ports
      • 3.7. Access Kibana
    • 4. Using Kibana
      • 4.1. Open Kibana
      • 4.2. Select data
      • 4.3. Create an index pattern
      • 4.4. Add fields
      • 4.5. Open Discover

1. Prerequisites

1.1. Which logs should a k8s cluster collect?

  • Logs of the k8s system components: kube-apiserver, kube-scheduler, kubelet
  • Logs of the applications deployed in the k8s cluster (a sketch of viewing both kinds by hand follows this list)
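
A minimal sketch of viewing each kind with kubectl (the control-plane pod name matches the cluster used below; my-app-pod is a hypothetical application pod):

# Component logs: the control-plane components run as static pods in kube-system
kubectl logs -n kube-system kube-apiserver-master --tail=5
# Application logs: any workload pod (hypothetical name)
kubectl logs my-app-pod --tail=5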

1.2. Popular log collection solutions for k8s

  • The Elasticsearch, Fluentd, and Kibana (EFK) stack, which is also an officially recommended solution

1.3. Comparing fluentd, filebeat, and logstash

1.3.1. Logstash
  • Logstash is an open-source data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize it into the destinations of your choice.

Advantages

  • Logstash's main strength is flexibility: it has many plugins, detailed documentation, and a straightforward configuration format, so it fits a wide range of scenarios. Resources for almost any problem are easy to find online.

Disadvantages

  • Logstash's fatal weakness is how much server CPU and memory it consumes.
1.3.2. Filebeat
  • Filebeat is a lightweight log shipper that exists precisely to make up for Logstash's weaknesses. As a lightweight shipper, Filebeat can push logs to a central Logstash.

Advantages

  • Filebeat is a single binary with no dependencies, and its resource footprint is tiny.
1.3.3. Fluentd
  • Fluentd is an open-source data collector. Its rich plugin system unifies the collection and consumption of data, making the data easier to use and understand. Fluentd structures data as JSON and then routes it to whichever log storage system the user specifies.

Advantages

  • Fluentd is lighter and more resource-efficient than Logstash, which makes it well suited to running as the log collector on each k8s node; it also has a larger and more open set of plugins and a stronger community. With so many plugins it is very flexible, and the rules are not complicated.

1.4. How EFK works

  • Pod logs in a k8s cluster live under the /var/log/containers/ directory (see the sketch below);
  • Fluentd is installed on the k8s cluster nodes (it is run through a DaemonSet controller so that one Pod runs on every node). Fluentd tails the Docker container logs on each node, filters and transforms the log data, then forwards it to the Elasticsearch cluster for indexing and storage; finally the log data is presented through Kibana.
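
A quick way to confirm that layout (a sketch; run it on any worker node). The files under /var/log/containers/ are symlinks into /var/lib/docker/containers/, which is why the Fluentd DaemonSet later mounts both paths:

# Each container log is a symlink named <pod>_<namespace>_<container>-<id>.log
ls -l /var/log/containers/ | head -5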

Resource list

OS           Spec   Hostname   IP               Pre-deployed
CentOS 7.9   2C4G   master     192.168.93.101   kubeadm cluster
CentOS 7.9   2C4G   node1      192.168.93.102   kubeadm cluster
CentOS 7.9   2C4G   node2      192.168.93.103   kubeadm cluster

Base environment

  • Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
  • Disable SELinux (the kernel security mechanism)
setenforce 0
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
  • Upstream manifest source (one way to fetch it is sketched after the link)
https://github.com/kubernetes/kubernetes/tree/9682b7248fb69733c2a0ee53618856e87b067f16/cluster/addons/fluentd-elasticsearch
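
A possible way to pull just the pinned manifests locally (a sketch, assuming git is installed and that GitHub permits fetching this commit by SHA, which it normally does):

# Shallow-fetch only the pinned commit instead of cloning the whole repository
git init kubernetes && cd kubernetes
git remote add origin https://github.com/kubernetes/kubernetes.git
git fetch --depth 1 origin 9682b7248fb69733c2a0ee53618856e87b067f16
git checkout FETCH_HEAD -- cluster/addons/fluentd-elasticsearch
ls cluster/addons/fluentd-elasticsearch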

2. Verify the K8S cluster is healthy

  • Deploy a K8S cluster in advance, following the resource list above

2.1. Check Pod status

[root@master ~]# kubectl get pod -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-gznfs            1/1     Running   1 (13d ago)   13d
kube-flannel   kube-flannel-ds-jb9vq            1/1     Running   1 (13d ago)   13d
kube-flannel   kube-flannel-ds-xl6dr            1/1     Running   1 (13d ago)   13d
kube-system    coredns-6d8c4cb4d-g5nfk          1/1     Running   1 (13d ago)   13d
kube-system    coredns-6d8c4cb4d-xqvh6          1/1     Running   1 (13d ago)   13d
kube-system    etcd-master                      1/1     Running   1 (13d ago)   13d
kube-system    kube-apiserver-master            1/1     Running   1 (13d ago)   13d
kube-system    kube-controller-manager-master   1/1     Running   1 (13d ago)   13d
kube-system    kube-proxy-9pftl                 1/1     Running   1 (13d ago)   13d
kube-system    kube-proxy-gdqk7                 1/1     Running   1 (13d ago)   13d
kube-system    kube-proxy-h7gm2                 1/1     Running   1 (13d ago)   13d
kube-system    kube-scheduler-master            1/1     Running   1 (13d ago)   13d

2.2. Check node status

[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   13d   v1.23.0
node1    Ready    <none>                 13d   v1.23.0
node2    Ready    <none>                 13d   v1.23.0

2.3. Check component status

[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
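
As the warning says, the v1 ComponentStatus API has been deprecated since v1.19. A sketch of the modern equivalent, which queries the API server's aggregated health endpoint instead:

# Per-check control-plane health (etcd, informers, ...) via /readyz
kubectl get --raw='/readyz?verbose'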

3. Deploy EFK

3.1. Pull images on all nodes

  • The images are large and can take a while to pull; a pre-pull sketch follows the commands below
docker pull quay.io/fluentd_elasticsearch/elasticsearch:v7.4.3
docker pull docker.elastic.co/kibana/kibana-oss:7.4.2
docker pull quay.io/fluentd_elasticsearch/fluentd:v3.1.0
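
To avoid each node pulling lazily at scheduling time, a minimal pre-pull sketch (it assumes password-less SSH from master to the worker nodes in the resource list):

# Pre-pull all three images on every worker node
for node in node1 node2; do
  ssh root@$node \
    'docker pull quay.io/fluentd_elasticsearch/elasticsearch:v7.4.3 &&
     docker pull docker.elastic.co/kibana/kibana-oss:7.4.2 &&
     docker pull quay.io/fluentd_elasticsearch/fluentd:v3.1.0'
done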

3.2. Create the namespace

  • All of the manifests below can be copied and pasted as-is
# Create an EFK working directory; all remaining steps are run from it
[root@master ~]# mkdir efk
[root@master ~]# cd efk
# Create the manifest
[root@master efk]# cat create-logging-namespace.yaml 
kind: Namespace
apiVersion: v1
metadata:
  name: logging
  labels:
    k8s-app: logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile


# Apply the manifest
[root@master efk]# kubectl apply -f create-logging-namespace.yaml 
namespace/logging created


# Verify the namespace
[root@master efk]# kubectl get ns | grep logging
logging           Active   24s

3.3. Install Elasticsearch

  • All of the manifests below can be copied and pasted as-is
# Create the StatefulSet manifest
[root@master efk]# cat elasticsearch-stateful.yaml 
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups:
      - ""
    resources:
      - "services"
      - "namespaces"
      - "endpoints"
    verbs:
      - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: elasticsearch-logging
    namespace: logging
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
    version: v7.4.3
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v7.4.3
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v7.4.3
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
        - image: quay.io/fluentd_elasticsearch/elasticsearch:v7.4.3
          name: elasticsearch-logging
          imagePullPolicy: Always
          resources:
            # need more cpu upon initialization, therefore burstable class
            limits:
              cpu: 1000m
              memory: 3Gi
            requests:
              cpu: 100m
              memory: 3Gi
          ports:
            - containerPort: 9200
              name: db
              protocol: TCP
            - containerPort: 9300
              name: transport
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 5
            timeoutSeconds: 10
          readinessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 5
            timeoutSeconds: 10
          volumeMounts:
            - name: elasticsearch-logging
              mountPath: /data
          env:
            - name: "NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: "MINIMUM_MASTER_NODES"
              value: "1"
      volumes:
        - name: elasticsearch-logging
          emptyDir: {}
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
        - image: alpine:3.6
          command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
          name: elasticsearch-logging-init
          securityContext:
            privileged: true
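
Note the privileged init container above: Elasticsearch needs vm.max_map_count to be at least 262144. An alternative (a sketch) is to set it persistently on every node and remove the init container:

# Run on each node: make the kernel setting survive reboots
echo 'vm.max_map_count=262144' > /etc/sysctl.d/99-elasticsearch.conf
sysctl --system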


# Apply the manifest
[root@master efk]# kubectl apply -f elasticsearch-stateful.yaml 
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
# Create the Service manifest
[root@master efk]# cat elasticsearch-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  clusterIP: None
  ports:
    - name: db
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: transport
      port: 9300
      protocol: TCP
      targetPort: 9300
  publishNotReadyAddresses: true
  selector:
    k8s-app: elasticsearch-logging
  sessionAffinity: None
  type: ClusterIP


# Apply the manifest
[root@master efk]# kubectl apply -f elasticsearch-svc.yaml 
service/elasticsearch-logging created
# Check the deployed resources
[root@master efk]# kubectl get pod -n logging | grep elasticsearch 
elasticsearch-logging-0   1/1     Running   7 (3m46s ago)   12m
elasticsearch-logging-1   0/1     Running   0               36s


[root@master efk]# kubectl get svc -n logging | grep elasticsearch-logging
elasticsearch-logging   ClusterIP   None         <none>        9200/TCP,9300/TCP   4m5s
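
Once both replicas are Ready, an optional sanity check (a sketch) queries the Elasticsearch REST API through the headless Service from a throwaway pod; expect "status" to be green or yellow:

# The pod is deleted again after the command returns
kubectl run es-check -n logging --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -s 'http://elasticsearch-logging:9200/_cluster/health?pretty'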

3.4. Install Kibana

  • All of the manifests below can be copied and pasted as-is
# Create the Deployment manifest
[root@master efk]# cat kibana-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: logging
  labels:
    k8s-app: kibana-logging
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kibana-logging
          image: docker.elastic.co/kibana/kibana-oss:7.4.2
          resources:
            # need more cpu upon initialization, therefore burstable class
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch-logging:9200
            - name: SERVER_NAME
              value: kibana-logging
            - name: SERVER_BASEPATH
              value: ""
           #   value: /api/v1/namespaces/logging/services/kibana-logging/proxy
           # - name: SERVER_REWRITEBASEPATH
           #   value: "false"
          ports:
            - containerPort: 5601
              name: ui
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /api/status
          #    port: ui
          #  initialDelaySeconds: 5
          #  timeoutSeconds: 10
          #readinessProbe:
          #  httpGet:
          #    path: /api/status
          #    port: ui
          #  initialDelaySeconds: 5
          #  timeoutSeconds: 10


# Apply the manifest
[root@master efk]# kubectl apply -f kibana-deployment.yaml 
deployment.apps/kibana-logging created
# Create the Service manifest
[root@master efk]# cat kibana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: logging
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging


# Apply the manifest
[root@master efk]# kubectl apply -f kibana-svc.yaml 
service/kibana-logging created
# Check the deployed resources
[root@master efk]# kubectl get pod -n logging | grep kibana
kibana-logging-f6bb87f47-thnqp   1/1     Running   0               2m32s


[root@master efk]# kubectl get svc -n logging | grep kibana-logging
kibana-logging          NodePort    10.1.237.34   <none>        5601:32191/TCP      112s

3.5. Install Fluentd

# Create the Fluentd ConfigMap manifest
[root@master efk]# cat fluentd-es-config.yaml 
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.2.1
  namespace: logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
 
  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID
    # to the docker logs for pods in the /var/log/containers directory on the host.
    # If running this fluentd configuration in a Docker container, the /var/log
    # directory should be mounted in the container.
    #
    # These logs are then submitted to Elasticsearch which assumes the
    # installation of the fluent-plugin-elasticsearch & the
    # fluent-plugin-kubernetes_metadata_filter plugins.
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    #
    # Example
    # =======
    # A line in the Docker log file might look like this JSON:
    #
    # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #  "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z"}
    #
    # The time_format specification below makes sure we properly
    # parse the time format produced by Docker. This will be
    # submitted to Elasticsearch and should appear like:
    # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
    # ...
    # {
    #      "_index" : "logstash-2014.09.25",
    #      "_type" : "fluentd",
    #      "_id" : "VBrbor2QTuGpsQyTCdfzqA",
    #      "_score" : 1.0,
    #      "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
    #                 "stream":"stderr","tag":"docker.container.all",
    #                 "@timestamp":"2014-09-25T22:45:50+00:00"}
    #    },
    # ...
    #
    # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
    # record & add labels to the log record if properly configured. This enables users
    # to filter & search logs on any metadata.
    # For example a Docker container's logs might be in the directory:
    #
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
    #
    # and in the file:
    #
    #  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # where 997599971ee6... is the Docker ID of the running container.
    # The Kubernetes kubelet makes a symbolic link to this file on the host machine
    # in the /var/log/containers directory which includes the pod name and the Kubernetes
    # container name:
    #
    #    synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #    ->
    #    /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # The /var/log directory on the host is mapped to the /var/log directory in the container
    # running this instance of Fluentd and we end up collecting the file:
    #
    #   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # This results in the tag:
    #
    #  var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
    # which are added to the log message as a kubernetes field object & the Docker container ID
    # is also added under the docker field object.
    # The final tag is:
    #
    #   kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # And the final log record look like:
    #
    # {
    #   "log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #   "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z",
    #   "kubernetes": {
    #     "namespace": "default",
    #     "pod_name": "synthetic-logger-0.25lps-pod",
    #     "container_name": "synth-lgr"
    #   },
    #   "docker": {
    #     "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
    #   }
    # }
    #
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).
    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>
    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>
  system.input.conf: |-
    # Example:
    # 2015-12-21 23:17:22,066 [salt.state       ][INFO    ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
    <source>
      @id minion
      @type tail
      format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format %Y-%m-%d %H:%M:%S
      path /var/log/salt/minion
      pos_file /var/log/salt.pos
      tag salt
    </source>
    # Example:
    # Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
    <source>
      @id startupscript.log
      @type tail
      format syslog
      path /var/log/startupscript.log
      pos_file /var/log/es-startupscript.log.pos
      tag startupscript
    </source>
    # Examples:
    # time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
    # time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id docker.log
      @type tail
      format /^time="(?<time>[^"]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=(?<status_code>\d+))?/
      path /var/log/docker.log
      pos_file /var/log/es-docker.log.pos
      tag docker
    </source>
    # Example:
    # 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
    <source>
      @id etcd.log
      @type tail
      # Not parsing this, because it doesn't have anything particularly useful to
      # parse out of it (like severities).
      format none
      path /var/log/etcd.log
      pos_file /var/log/es-etcd.log.pos
      tag etcd
    </source>
 
    # Multi-line parsing is required for all the kube logs because very large log
    # statements, such as those that include entire object bodies, get split into
    # multiple lines by glog.
 
    # Example:
    # I0204 07:32:30.020537    3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
    <source>
      @id kubelet.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kubelet.log
      pos_file /var/log/es-kubelet.log.pos
      tag kubelet
    </source>
 
    # Example:
    # I1118 21:26:53.975789       6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
    <source>
      @id kube-proxy.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-proxy.log
      pos_file /var/log/es-kube-proxy.log.pos
      tag kube-proxy
    </source>
 
    # Example:
    # I0204 07:00:19.604280       5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
    <source>
      @id kube-apiserver.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-apiserver.log
      pos_file /var/log/es-kube-apiserver.log.pos
      tag kube-apiserver
    </source>
 
    # Example:
    # I0204 06:55:31.872680       5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
    <source>
      @id kube-controller-manager.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-controller-manager.log
      pos_file /var/log/es-kube-controller-manager.log.pos
      tag kube-controller-manager
    </source>
    # Example:
    # W0204 06:49:18.239674       7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
    <source>
      @id kube-scheduler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-scheduler.log
      pos_file /var/log/es-kube-scheduler.log.pos
      tag kube-scheduler
    </source>
    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id glbc.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/glbc.log
      pos_file /var/log/es-glbc.log.pos
      tag glbc
    </source>
    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id cluster-autoscaler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/es-cluster-autoscaler.log.pos
      tag cluster-autoscaler
    </source>
    # Logs from systemd-journal for interesting services.
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id journald-docker
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-docker.pos
      </storage>
      read_from_head true
      tag docker
    </source>
    <source>
      @id journald-container-runtime
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "{{ fluentd_container_runtime_service }}.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-container-runtime.pos
      </storage>
      read_from_head true
      tag container-runtime
    </source>
    <source>
      @id journald-kubelet
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kubelet.pos
      </storage>
      read_from_head true
      tag kubelet
    </source>
    <source>
      @id journald-node-problem-detector
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-node-problem-detector.pos
      </storage>
      read_from_head true
      tag node-problem-detector
    </source>
    <source>
      @id kernel
      @type systemd
      matches [{ "_TRANSPORT": "kernel" }]
      <storage>
        @type local
        persistent true
        path /var/log/kernel.pos
      </storage>
      <entry>
        fields_strip_underscores true
        fields_lowercase true
      </entry>
      read_from_head true
      tag kernel
    </source>
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @id forward
      @type forward
    </source>
  monitoring.conf: |-
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @id prometheus
      @type prometheus
    </source>
    <source>
      @id monitor_agent
      @type monitor_agent
    </source>
    # input plugin that collects metrics from MonitorAgent
    <source>
      @id prometheus_monitor
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>
    # input plugin that collects metrics for output plugin
    <source>
      @id prometheus_output_monitor
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>
    # input plugin that collects metrics for in_tail plugin
    <source>
      @id prometheus_tail_monitor
      @type prometheus_tail_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>
  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      type_name _doc
      include_tag_key true
      host elasticsearch-logging
      port 9200
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        total_limit_size 500M
        overflow_action block
      </buffer>
    </match>


# Apply the manifest
[root@master efk]# kubectl apply -f fluentd-es-config.yaml --validate=false
configmap/fluentd-es-config-v0.2.1 created
# Create the DaemonSet manifest (with its RBAC objects)
[root@master efk]# cat fluentd-es-ds.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v3.1.1
  namespace: logging
  labels:
    k8s-app: fluentd-es
    version: v3.1.1
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v3.1.1
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        version: v3.1.1
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: quay.io/fluentd_elasticsearch/fluentd:v3.1.0
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
        ports:
        - containerPort: 24231
          name: prometheus
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: prometheus
          initialDelaySeconds: 5
          timeoutSeconds: 10
        readinessProbe:
          tcpSocket:
            port: prometheus
          initialDelaySeconds: 5
          timeoutSeconds: 10
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.1


# Apply the manifest
[root@master efk]# kubectl apply -f fluentd-es-ds.yaml 
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v3.1.1 created
# Check the deployed resources
[root@master efk]# kubectl get pod -n logging | grep fluentd
fluentd-es-v3.1.1-bsgxx          1/1     Running   0             34s
fluentd-es-v3.1.1-gt289          1/1     Running   0             34s
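
With a Fluentd pod Running on each worker node, a quick way (a sketch) to confirm that logs are flowing is to list the logstash-* indices Fluentd creates in Elasticsearch:

# Expect one index per day, e.g. logstash-2024.07.05
kubectl run es-idx -n logging --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -s 'http://elasticsearch-logging:9200/_cat/indices/logstash-*?v'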

3.6. View Pods and exposed ports

[root@master efk]# kubectl get pod,svc -n logging
NAME                                 READY   STATUS    RESTARTS      AGE
pod/elasticsearch-logging-0          1/1     Running   7 (14m ago)   23m
pod/elasticsearch-logging-1          1/1     Running   0             11m
pod/fluentd-es-v3.1.1-bsgxx          1/1     Running   0             103s
pod/fluentd-es-v3.1.1-gt289          1/1     Running   0             103s
pod/kibana-logging-f6bb87f47-thnqp   1/1     Running   0             7m57s

NAME                            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch-logging   ClusterIP   None          <none>        9200/TCP,9300/TCP   13m
#####################################################################
service/kibana-logging          NodePort    10.1.237.34   <none>        5601:32191/TCP      6m55s
#####################################################################

3.7. Access Kibana

  • URL: http://192.168.93.101:32191 (replace the port with the NodePort exposed by your own Service, highlighted with # above); a reachability sketch follows
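
Before opening a browser, a quick reachability check from any host that can reach the node (a sketch; Kibana serves a status API):

# Expect 200 once Kibana has finished connecting to Elasticsearch
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.93.101:32191/api/status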

4. Using Kibana

  • URL: http://192.168.93.101:32191

4.1. Open Kibana

(screenshot)

4.2. Select data

(screenshot)

4.3. Create an index pattern

  • Enter logstash-* and the matching indices are displayed
    (screenshot)

4.4. Add fields

(screenshot)

4.5. Open Discover

(screenshot)
