【k8s Management - Cluster Log Management with ELK】


1. ELKF Log Deployment Architecture

(Figure: ELKF log collection and processing architecture)

  • In a Docker-based k8s cluster, all container logs end up under one directory on each node: /var/log/containers/
  • 1. Filebeat is a lightweight log collection tool; its main job is simply to collect logs.
  • 2. Logstash can also collect logs, but it is rarely used for that: it depends on a Java runtime and consumes a lot of resources when the data volume is large, so Logstash is generally used only for data cleaning and transformation.
  • 3. The data cleaned by Logstash is handed to Elasticsearch for storage.
  • 4. Users view the logs through Kibana's web UI; Kibana's main job is data presentation, similar to Grafana.
  • 5. The data shown in Kibana is obtained by searching Elasticsearch through its API (see the query sketch after this list).
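
Because Kibana only issues search requests against Elasticsearch, the same data can be queried directly through the ES REST API. A minimal sketch, to be run from inside the cluster; the service name and the daily logstash-* index pattern follow the manifests later in this article:

# search the daily logstash-* indices directly through the Elasticsearch API
curl -s "http://elasticsearch-logging.kube-logging:9200/logstash-*/_search?size=1&pretty"
# list the indices and their document counts
curl -s "http://elasticsearch-logging.kube-logging:9200/_cat/indices/logstash-*?v"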

2. ELK Deployment

2.1 Create the configuration files

2.1.1 Namespace configuration

apiVersion: v1
kind: Namespace
metadata:
  name: kube-logging

2.1.2 Elasticsearch configuration

--- 
apiVersion: v1 
kind: Service 
metadata: 
  name: elasticsearch-logging 
  namespace: kube-logging 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
    kubernetes.io/name: "Elasticsearch" 
spec: 
  ports: 
  - port: 9200 
    protocol: TCP 
    targetPort: db 
  selector: 
    k8s-app: elasticsearch-logging 
--- 
# RBAC authn and authz 
apiVersion: v1 
kind: ServiceAccount 
metadata: 
  name: elasticsearch-logging 
  namespace: kube-logging 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
--- 
kind: ClusterRole 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: 
  name: elasticsearch-logging 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
rules: 
- apiGroups: 
  - "" 
  resources: 
  - "services" 
  - "namespaces" 
  - "endpoints" 
  verbs: 
  - "get" 
--- 
kind: ClusterRoleBinding 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: 
  namespace: kube-logging 
  name: elasticsearch-logging 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
subjects: 
- kind: ServiceAccount 
  name: elasticsearch-logging 
  namespace: kube-logging 
  apiGroup: "" 
roleRef: 
  kind: ClusterRole 
  name: elasticsearch-logging 
  apiGroup: "" 
--- 
# Elasticsearch deployment itself 
apiVersion: apps/v1 
kind: StatefulSet # the Pods are created by a StatefulSet 
metadata: 
  name: elasticsearch-logging # Pods created by a StatefulSet get stable, ordered names 
  namespace: kube-logging  # namespace 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
    srv: srv-elasticsearch 
spec: 
  serviceName: elasticsearch-logging # links the StatefulSet to its governing Service; with a headless Service each Pod gets a stable DNS name (elasticsearch-logging-0.elasticsearch-logging.kube-logging.svc.cluster.local) 
  replicas: 1 # replica count; single node 
  selector: 
    matchLabels: 
      k8s-app: elasticsearch-logging # must match the labels in the pod template 
  template: 
    metadata: 
      labels: 
        k8s-app: elasticsearch-logging 
        kubernetes.io/cluster-service: "true" 
    spec: 
      serviceAccountName: elasticsearch-logging 
      containers: 
      - image: docker.io/library/elasticsearch:7.9.3 
        name: elasticsearch-logging 
        resources: 
          # need more cpu upon initialization, therefore burstable class 
          limits: 
            cpu: 1000m 
            memory: 2Gi 
          requests: 
            cpu: 100m 
            memory: 500Mi 
        ports: 
        - containerPort: 9200 
          name: db 
          protocol: TCP 
        - containerPort: 9300 
          name: transport 
          protocol: TCP 
        volumeMounts: 
        - name: elasticsearch-logging 
          mountPath: /usr/share/elasticsearch/data/   # mount point 
        env: 
        - name: "NAMESPACE" 
          valueFrom: 
            fieldRef: 
              fieldPath: metadata.namespace 
        - name: "discovery.type"  #定义单节点类型 
          value: "single-node" 
        - name: ES_JAVA_OPTS #设置Java的内存参数,可以适当进行加大调整 
          value: "-Xms512m -Xmx2g"  
      volumes: 
      - name: elasticsearch-logging 
        hostPath: 
          path: /data/es/ 
      nodeSelector: # add a nodeSelector if the data should land on specific nodes 
        es: data 
      tolerations: 
      - effect: NoSchedule 
        operator: Exists 
      # Elasticsearch requires vm.max_map_count to be at least 262144. 
      # If your OS already sets up this number to a higher value, feel free 
      # to remove this init container. 
      initContainers: # operations run before the main container starts 
      - name: elasticsearch-logging-init 
        image: alpine:3.6 
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"] #添加mmap计数限制,太低可能造成内存不足的错误 
        securityContext:  #仅应用到指定的容器上,并且不会影响Volume 
          privileged: true #运行特权容器 
      - name: increase-fd-ulimit 
        image: busybox 
        imagePullPolicy: IfNotPresent 
        command: ["sh", "-c", "ulimit -n 65536"] #修改文件描述符最大数量 
        securityContext: 
          privileged: true 
      - name: elasticsearch-volume-init # initialize the ES data directory on the host with 777 permissions 
        image: alpine:3.6 
        command: 
          - chmod 
          - -R 
          - "777" 
          - /usr/share/elasticsearch/data/ 
        volumeMounts: 
        - name: elasticsearch-logging 
          mountPath: /usr/share/elasticsearch/data/
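
Once these manifests are applied (section 2.2), Elasticsearch health can be verified with something like the following sketch; it assumes curl is available inside the image, which is the case for the official Elasticsearch images:

# check cluster health from inside the pod
kubectl -n kube-logging exec elasticsearch-logging-0 -- curl -s "http://localhost:9200/_cluster/health?pretty"
# or reach it from a workstation through a temporary port-forward
kubectl -n kube-logging port-forward svc/elasticsearch-logging 9200:9200 &
curl -s "http://127.0.0.1:9200/_cat/indices?v"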

2.1.3 Logstash configuration

--- 
apiVersion: v1 
kind: Service 
metadata: 
  name: logstash 
  namespace: kube-logging 
spec: 
  ports: 
  - port: 5044 
    targetPort: beats 
  selector: 
    type: logstash 
  clusterIP: None 
--- 
apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: logstash 
  namespace: kube-logging 
spec: 
  selector: 
    matchLabels: 
      type: logstash 
  template: 
    metadata: 
      labels: 
        type: logstash 
        srv: srv-logstash 
    spec: 
      containers: 
      - image: docker.io/kubeimages/logstash:7.9.3 # this image supports both arm64 and amd64 
        name: logstash 
        ports: 
        - containerPort: 5044 
          name: beats 
        command: 
        - logstash 
        - '-f' 
        - '/etc/logstash_c/logstash.conf' 
        env: 
        - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS" 
          value: "http://elasticsearch-logging:9200" 
        volumeMounts: 
        - name: config-volume 
          mountPath: /etc/logstash_c/ 
        - name: config-yml-volume 
          mountPath: /usr/share/logstash/config/ 
        - name: timezone 
          mountPath: /etc/localtime 
        resources: # always set resource limits on logstash to keep it from starving other workloads 
          limits: 
            cpu: 1000m 
            memory: 2048Mi 
          requests: 
            cpu: 512m 
            memory: 512Mi 
      volumes: 
      - name: config-volume 
        configMap: 
          name: logstash-conf 
          items: 
          - key: logstash.conf 
            path: logstash.conf 
      - name: timezone 
        hostPath: 
          path: /etc/localtime 
      - name: config-yml-volume 
        configMap: 
          name: logstash-yml 
          items: 
          - key: logstash.yml 
            path: logstash.yml 
 
--- 
apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: logstash-conf 
  namespace: kube-logging 
  labels: 
    type: logstash 
data: 
  logstash.conf: |- 
    input {
      beats { 
        port => 5044 
      } 
    } 
    filter {
      # process ingress logs 
      if [kubernetes][container][name] == "nginx-ingress-controller" {
        json {
          source => "message" 
          target => "ingress_log" 
        }
        if [ingress_log][requesttime] { 
          mutate { 
            convert => ["[ingress_log][requesttime]", "float"] 
          }
        }
        if [ingress_log][upstremtime] { 
          mutate { 
            convert => ["[ingress_log][upstremtime]", "float"] 
          }
        } 
        if [ingress_log][status] { 
          mutate { 
            convert => ["[ingress_log][status]", "float"] 
          }
        }
        if  [ingress_log][httphost] and [ingress_log][uri] {
          mutate { 
            add_field => {"[ingress_log][entry]" => "%{[ingress_log][httphost]}%{[ingress_log][uri]}"} 
          } 
          mutate { 
            split => ["[ingress_log][entry]","/"] 
          } 
          if [ingress_log][entry][1] { 
            mutate { 
              add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/%{[ingress_log][entry][1]}"} 
              remove_field => "[ingress_log][entry]" 
            }
          } else { 
            mutate { 
              add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/"} 
              remove_field => "[ingress_log][entry]" 
            }
          }
        }
      }
      # process logs of business services whose container name starts with srv 
      if [kubernetes][container][name] =~ /^srv*/ { 
        json { 
          source => "message" 
          target => "tmp" 
        } 
        if [kubernetes][namespace] == "kube-logging" { 
          drop{} 
        } 
        if [tmp][level] { 
          mutate{ 
            add_field => {"[applog][level]" => "%{[tmp][level]}"} 
          } 
          if [applog][level] == "debug"{ 
            drop{} 
          } 
        } 
        if [tmp][msg] { 
          mutate { 
            add_field => {"[applog][msg]" => "%{[tmp][msg]}"} 
          } 
        } 
        if [tmp][func] { 
          mutate { 
            add_field => {"[applog][func]" => "%{[tmp][func]}"} 
          } 
        } 
        if [tmp][cost]{ 
          if "ms" in [tmp][cost] { 
            mutate { 
              split => ["[tmp][cost]","m"] 
              add_field => {"[applog][cost]" => "%{[tmp][cost][0]}"} 
              convert => ["[applog][cost]", "float"] 
            } 
          } else { 
            mutate { 
              add_field => {"[applog][cost]" => "%{[tmp][cost]}"} 
            }
          }
        }
        if [tmp][method] { 
          mutate { 
            add_field => {"[applog][method]" => "%{[tmp][method]}"} 
          }
        }
        if [tmp][request_url] { 
          mutate { 
            add_field => {"[applog][request_url]" => "%{[tmp][request_url]}"} 
          } 
        }
        if [tmp][meta._id] { 
          mutate { 
            add_field => {"[applog][traceId]" => "%{[tmp][meta._id]}"} 
          } 
        } 
        if [tmp][project] { 
          mutate { 
            add_field => {"[applog][project]" => "%{[tmp][project]}"} 
          }
        }
        if [tmp][time] { 
          mutate { 
            add_field => {"[applog][time]" => "%{[tmp][time]}"} 
          }
        }
        if [tmp][status] { 
          mutate { 
            add_field => {"[applog][status]" => "%{[tmp][status]}"} 
            convert => ["[applog][status]", "float"] 
          }
        }
      }
      mutate { 
        rename => ["kubernetes", "k8s"] 
        remove_field => "beat" 
        remove_field => "tmp" 
        remove_field => "[k8s][labels][app]" 
      }
    }
    output { 
      elasticsearch { 
        hosts => ["http://elasticsearch-logging:9200"] 
        codec => json 
        index => "logstash-%{+YYYY.MM.dd}" #索引名称以logstash+日志进行每日新建 
      }
    } 
---
 
apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: logstash-yml 
  namespace: kube-logging 
  labels: 
    type: logstash 
data: 
  logstash.yml: |- 
    http.host: "0.0.0.0" 
    xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
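
After the Deployment is running (section 2.2), the pipeline can be sanity-checked with commands like the sketch below; the resource names are the ones defined above, and the exact log wording may differ between Logstash versions:

# confirm the pipeline started and the beats input is listening on 5044
kubectl -n kube-logging logs deployment/logstash | grep -Ei "pipeline started|5044"
# confirm both ConfigMaps are mounted where the container command expects them
kubectl -n kube-logging exec deploy/logstash -- ls /etc/logstash_c/ /usr/share/logstash/config/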

2.1.4 Filebeat configuration

--- 
apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: filebeat-config 
  namespace: kube-logging 
  labels: 
    k8s-app: filebeat 
data: 
  filebeat.yml: |- 
    filebeat.inputs: 
    - type: container 
      enabled: true
      paths: 
        - /var/log/containers/*.log # the container log directory mounted into the filebeat pod 
      processors: 
        - add_kubernetes_metadata: # enrich events with Kubernetes metadata for the later cleaning stage 
            host: ${NODE_NAME}
            matchers: 
            - logs_path: 
                logs_path: "/var/log/containers/" 
    #output.kafka:  # if log volume is high and logs reach ES with a delay, Kafka can be inserted between filebeat and logstash 
    #  hosts: ["kafka-log-01:9092", "kafka-log-02:9092", "kafka-log-03:9092"] 
    #  topic: 'topic-test-log' 
    #  version: 2.0.0 
    output.logstash: # logstash handles the data cleaning, so filebeat ships its events to logstash 
       hosts: ["logstash:5044"] 
       enabled: true 
--- 
apiVersion: v1 
kind: ServiceAccount 
metadata: 
  name: filebeat 
  namespace: kube-logging 
  labels: 
    k8s-app: filebeat
--- 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole 
metadata: 
  name: filebeat 
  labels: 
    k8s-app: filebeat 
rules: 
- apiGroups: [""] # "" indicates the core API group 
  resources: 
  - namespaces 
  - pods 
  verbs: ["get", "watch", "list"] 
--- 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: filebeat 
subjects: 
- kind: ServiceAccount 
  name: filebeat 
  namespace: kube-logging 
roleRef: 
  kind: ClusterRole 
  name: filebeat 
  apiGroup: rbac.authorization.k8s.io 
--- 
apiVersion: apps/v1 
kind: DaemonSet 
metadata: 
  name: filebeat 
  namespace: kube-logging 
  labels: 
    k8s-app: filebeat 
spec: 
  selector: 
    matchLabels: 
      k8s-app: filebeat 
  template: 
    metadata: 
      labels: 
        k8s-app: filebeat 
    spec: 
      serviceAccountName: filebeat 
      terminationGracePeriodSeconds: 30 
      containers: 
      - name: filebeat 
        image: docker.io/kubeimages/filebeat:7.9.3 # this image supports both arm64 and amd64 
        args: [ 
          "-c", "/etc/filebeat.yml", 
          "-e","-httpprof","0.0.0.0:6060" 
        ] 
        #ports: 
        #  - containerPort: 6060 
        #    hostPort: 6068 
        env: 
        - name: NODE_NAME 
          valueFrom: 
            fieldRef: 
              fieldPath: spec.nodeName 
        - name: ELASTICSEARCH_HOST 
          value: elasticsearch-logging 
        - name: ELASTICSEARCH_PORT 
          value: "9200" 
        securityContext: 
          runAsUser: 0 
          # If using Red Hat OpenShift uncomment this: 
          #privileged: true 
        resources: 
          limits: 
            memory: 1000Mi 
            cpu: 1000m 
          requests: 
            memory: 100Mi 
            cpu: 100m 
        volumeMounts: 
        - name: config # the filebeat configuration file 
          mountPath: /etc/filebeat.yml 
          readOnly: true 
          subPath: filebeat.yml 
        - name: data # persist filebeat registry data on the host 
          mountPath: /usr/share/filebeat/data 
        - name: varlibdockercontainers # mounts the host directory holding the raw container logs; if docker/containerd still writes logs to the standard path, /var/lib is sufficient 
          mountPath: /var/lib
          readOnly: true 
        - name: varlog # mounts the host's /var/log, which holds the /var/log/pods and /var/log/containers symlinks 
          mountPath: /var/log/ 
          readOnly: true 
        - name: timezone 
          mountPath: /etc/localtime 
      volumes: 
      - name: config 
        configMap: 
          defaultMode: 0600 
          name: filebeat-config 
      - name: varlibdockercontainers 
        hostPath: # if docker/containerd still writes logs to the standard path, /var/lib is sufficient 
          path: /var/lib
      - name: varlog 
        hostPath: 
          path: /var/log/ 
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart 
      - name: data 
        hostPath: 
          path: /data/filebeat-data 
          type: DirectoryOrCreate 
      - name: timezone 
        hostPath: 
          path: /etc/localtime 
      tolerations: # tolerations so the DaemonSet can be scheduled onto every node 
      - effect: NoExecute 
        key: dedicated 
        operator: Equal 
        value: gpu 
      - effect: NoSchedule 
        operator: Exists 

2.1.5 Kibana configuration

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-logging
  name: kibana-config
  labels:
    k8s-app: kibana
data:
  kibana.yml: |-
    server.name: kibana
    server.host: "0.0.0.0"
    i18n.locale: zh-CN                      # set the default UI language to Chinese
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}         # ES connection address; everything runs in k8s in the same namespace, so the service name can be used directly
--- 
apiVersion: v1 
kind: Service 
metadata: 
  name: kibana 
  namespace: kube-logging 
  labels: 
    k8s-app: kibana 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
    kubernetes.io/name: "Kibana" 
    srv: srv-kibana 
spec: 
  type: NodePort
  ports: 
  - port: 5601 
    protocol: TCP 
    targetPort: ui 
  selector: 
    k8s-app: kibana 
--- 
apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: kibana 
  namespace: kube-logging 
  labels: 
    k8s-app: kibana 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
    srv: srv-kibana 
spec: 
  replicas: 1 
  selector: 
    matchLabels: 
      k8s-app: kibana 
  template: 
    metadata: 
      labels: 
        k8s-app: kibana 
    spec: 
      containers: 
      - name: kibana 
        image: docker.io/kubeimages/kibana:7.9.3 # this image supports both arm64 and amd64 
        resources: 
          # need more cpu upon initialization, therefore burstable class 
          limits: 
            cpu: 1000m 
          requests: 
            cpu: 100m 
        env: 
          - name: ELASTICSEARCH_HOSTS 
            value: http://elasticsearch-logging:9200 
        ports: 
        - containerPort: 5601 
          name: ui 
          protocol: TCP 
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
--- 
apiVersion: networking.k8s.io/v1
kind: Ingress 
metadata: 
  name: kibana 
  namespace: kube-logging 
spec: 
  ingressClassName: nginx
  rules: 
  - host: kibana.lan-he.com.cn
    http: 
      paths: 
      - path: / 
        pathType: Prefix
        backend: 
          service:
            name: kibana 
            port:
              number: 5601 
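
Kibana is exposed both through the NodePort Service and through the Ingress host kibana.lan-he.com.cn. A quick reachability sketch once everything is running; replace the placeholders with a real node IP and the assigned NodePort:

# find the NodePort assigned to the kibana service
kubectl -n kube-logging get svc kibana
# then test from a machine that can reach the cluster nodes
curl -I http://<node-ip>:<node-port>/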

2.2 Deploy the ELK services

2.2.1 Apply the manifests and check the status

[root@k8s-master elk]# kubectl apply -f namespace.yaml
namespace/kube-logging created
[root@k8s-master elk]# kubectl apply -f es.yaml
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created

[root@k8s-master elk]# kubectl apply -f logstash.yaml
service/logstash created
deployment.apps/logstash created
configmap/logstash-conf created
configmap/logstash-yml created

[root@k8s-master elk]# kubectl apply -f kibana.yaml
configmap/kibana-config created
service/kibana created
deployment.apps/kibana created
ingress.networking.k8s.io/kibana created


[root@k8s-master elk]# kubectl apply -f filebeat.yaml
configmap/filebeat-config created
serviceaccount/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
daemonset.apps/filebeat created

[root@k8s-master elk]# kubectl get all -n kube-logging
NAME                            READY   STATUS              RESTARTS   AGE
pod/elasticsearch-logging-0     0/1     Pending             0          27s
pod/filebeat-2fz26              1/1     Running             0          2m17s
pod/filebeat-j4qf2              1/1     Running             0          2m17s
pod/filebeat-r97lm              1/1     Running             0          2m17s
pod/kibana-86ddf46d47-9q79f     0/1     ContainerCreating   0          8s
pod/logstash-5f774b55bb-992qj   0/1     Pending             0          20s

NAME                            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/elasticsearch-logging   ClusterIP   10.1.16.54    <none>        9200/TCP         28s
service/kibana                  NodePort    10.1.181.71   <none>        5601:32274/TCP   8s
service/logstash                ClusterIP   None          <none>        5044/TCP         21s

NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/filebeat   3         3         2       3            2           <none>          3m26s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana     0/1     1            0           8s
deployment.apps/logstash   0/1     1            0           21s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-86ddf46d47     1         1         0       8s
replicaset.apps/logstash-5f774b55bb   1         1         0       20s

NAME                                     READY   AGE
statefulset.apps/elasticsearch-logging   0/1     27s

2.2.2 Q1: The ES pod is stuck in Pending

2.2.2.1 Inspect the pod description
  • Message 1: node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }

    • One node (the control-plane node) carries the taint node-role.kubernetes.io/control-plane and the Pod has no toleration for it. By convention the control-plane (master) node should not run ordinary workloads.
  • Message 2: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }:

    • One node is considered unreachable, so the Pod cannot be scheduled onto it.
  • Message 3: 2 Insufficient memory. preemption:

    • Two nodes do not have enough free memory to satisfy the Pod's resource requests.
  • Message 4: 0/3 nodes are available

    • Even taking preemption into account, no node can accept the Pod.
  • Message 5: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

    • No lower-priority Pods can be preempted (terminated and rescheduled) to make room, and preemption would not help scheduling anyway. The commands sketched below can confirm the taints and the remaining memory on each node.
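
The following commands are a sketch for confirming what the scheduler is complaining about; the node names are the ones used in this cluster and will differ elsewhere:

# show the taints on every node
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
# compare a node's already-allocated memory with the pod's requests
kubectl describe node k8s-node-01 | grep -A 7 "Allocated resources"
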
[root@k8s-master elk]# kubectl describe -n kube-logging  po elasticsearch-logging-0
Name:             elasticsearch-logging-0
Namespace:        kube-logging
Priority:         0
Service Account:  elasticsearch-logging
Node:             <none>
Labels:           controller-revision-hash=elasticsearch-logging-bcd67584
                  k8s-app=elasticsearch-logging
                  kubernetes.io/cluster-service=true
                  statefulset.kubernetes.io/pod-name=elasticsearch-logging-0
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    StatefulSet/elasticsearch-logging
Init Containers:
  elasticsearch-logging-init:
    Image:      alpine:3.6
    Port:       <none>
    Host Port:  <none>
    Command:
      /sbin/sysctl
      -w
      vm.max_map_count=262144
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwbvr (ro)
  increase-fd-ulimit:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      ulimit -n 65536
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwbvr (ro)
  elasticsearch-volume-init:
    Image:      alpine:3.6
    Port:       <none>
    Host Port:  <none>
    Command:
      chmod
      -R
      777
      /usr/share/elasticsearch/data/
    Environment:  <none>
    Mounts:
      /usr/share/elasticsearch/data/ from elasticsearch-logging (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwbvr (ro)
Containers:
  elasticsearch-logging:
    Image:       docker.io/elastic/elasticsearch:8.12.0
    Ports:       9200/TCP, 9300/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     100m
      memory:  500Mi
    Environment:
      NAMESPACE:       kube-logging (v1:metadata.namespace)
      discovery.type:  single-node
      ES_JAVA_OPTS:    -Xms512m -Xmx2g
    Mounts:
      /usr/share/elasticsearch/data/ from elasticsearch-logging (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwbvr (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  elasticsearch-logging:
    Type:          HostPath (bare host directory volume)
    Path:          /data/es/
    HostPathType:
  kube-api-access-fwbvr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              es=data
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  2m37s  default-scheduler  0/3 nodes are available: 2 Insufficient memory, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
2.2.2.2 Label the master node
  • The nodeSelector in the ES yaml (es: data) shows which node label the pod requires, as in the figure below.
    (Figure: the nodeSelector es: data in es.yaml)
[root@k8s-master elk]# kubectl label nodes k8s-master  es='data'
node/k8s-master labeled
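
To confirm that the nodeSelector now has a matching node, a quick check (a sketch):

kubectl get nodes -l es=data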



[root@k8s-master elk]# kubectl get all -n kube-logging
NAME                            READY   STATUS              RESTARTS   AGE
pod/elasticsearch-logging-0     0/1     Init:0/3            0          16m
pod/filebeat-2fz26              1/1     Running             0          2m17s
pod/filebeat-j4qf2              1/1     Running             0          2m17s
pod/filebeat-r97lm              1/1     Running             0          2m17s
pod/kibana-86ddf46d47-6vmrb     0/1     ContainerCreating   0          59s
pod/kibana-86ddf46d47-9q79f     1/1     Terminating         0          16m
pod/logstash-5f774b55bb-992qj   0/1     Pending             0          16m

NAME                            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/elasticsearch-logging   ClusterIP   10.1.16.54    <none>        9200/TCP         16m
service/kibana                  NodePort    10.1.181.71   <none>        5601:32274/TCP   16m
service/logstash                ClusterIP   None          <none>        5044/TCP         16m

NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/filebeat   3         3         2       3            2           <none>          3m26s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana     0/1     1            0           16m
deployment.apps/logstash   0/1     1            0           16m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-86ddf46d47     1         1         0       16m
replicaset.apps/logstash-5f774b55bb   1         1         0       16m

NAME                                     READY   AGE
statefulset.apps/elasticsearch-logging   0/1     16m

2.2.3 Q2: The logstash pod is also Pending

2.2.3.1 Inspect the pod description
  • The problem is essentially the same as the one seen when ES started up
[root@k8s-master elk]# kubectl describe -n kube-logging po logstash-5f774b55bb-992qj
Name:             logstash-5f774b55bb-992qj
Namespace:        kube-logging
Priority:         0
Service Account:  default
Node:             <none>
Labels:           pod-template-hash=5f774b55bb
                  srv=srv-logstash
                  type=logstash
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/logstash-5f774b55bb
Containers:
  logstash:
    Image:      docker.io/elastic/logstash:8.12.0
    Port:       5044/TCP
    Host Port:  0/TCP
    Command:
      logstash
      -f
      /etc/logstash_c/logstash.conf
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     512m
      memory:  512Mi
    Environment:
      XPACK_MONITORING_ELASTICSEARCH_HOSTS:  http://elasticsearch-logging:9200
    Mounts:
      /etc/localtime from timezone (rw)
      /etc/logstash_c/ from config-volume (rw)
      /usr/share/logstash/config/ from config-yml-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2x2t5 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      logstash-conf
    Optional:  false
  timezone:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:
  config-yml-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      logstash-yml
    Optional:  false
  kube-api-access-2x2t5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  5m59s (x2 over 10m)  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  3m23s (x5 over 21m)  default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
2.2.3.2 Free up resources and add the label
[root@k8s-master ~]# kubectl label nodes k8s-node-01  type="logstash"
node/k8s-node-01 labeled

# delete the previously created prometheus resources to free up memory
[root@k8s-master ~]# kubectl delete -f  kube-prometheus-0.12.0/manifests/
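
If other workloads cannot simply be deleted, an alternative is to lower the logstash memory request so the pod fits on an existing node. A sketch; the value is illustrative and should be tuned to the cluster:

kubectl -n kube-logging patch deployment logstash --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/resources/requests/memory","value":"256Mi"}]'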

2.2.4 Q3: Kibana is in CrashLoopBackOff

[root@k8s-master ~]# kubectl  get po  -n kube-logging   -o wide
NAME                        READY   STATUS             RESTARTS        AGE    IP          NODE          NOMINATED NODE   READINESS GATES
elasticsearch-logging-0     1/1     Running            3 (4m54s ago)   71m    10.2.0.4    k8s-master    <none>           <none>
filebeat-2fz26              1/1     Running            0               6m7s   10.2.0.5    k8s-master    <none>           <none>
filebeat-j4qf2              1/1     Running            0               6m7s   10.2.1.20   k8s-node-01   <none>           <none>
filebeat-r97lm              1/1     Running            0               6m7s   10.2.2.17   k8s-node-02   <none>           <none>
kibana-86ddf46d47-6vmrb     0/1     CrashLoopBackOff   20 (43s ago)    93m    10.2.2.12   k8s-node-02   <none>           <none>
logstash-5f774b55bb-992qj   1/1     Running            0               108m   10.2.1.19   k8s-node-01   <none>           <none>
2.2.4.1 Inspect the pod description
[root@k8s-master ~]# kubectl describe -n kube-logging po kibana-86ddf46d47-d97cj
Name:             kibana-86ddf46d47-d97cj
Namespace:        kube-logging
Priority:         0
Service Account:  default
Node:             k8s-node-02/192.168.0.191
Start Time:       Sat, 02 Mar 2024 15:23:10 +0800
Labels:           k8s-app=kibana
                  pod-template-hash=86ddf46d47
Annotations:      <none>
Status:           Running
IP:               10.2.2.15
IPs:
  IP:           10.2.2.15
Controlled By:  ReplicaSet/kibana-86ddf46d47
Containers:
  kibana:
    Container ID:   docker://c5a6e6ee69e7766dd38374be22bad6d44c51deae528f36f00624e075e4ffe1d2
    Image:          docker.io/elastic/kibana:8.12.0
    Image ID:       docker-pullable://elastic/kibana@sha256:596700740fdb449c5726e6b6a6dd910c2140c8a37e68a301a5186e86b0834c9f
    Port:           5601/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 02 Mar 2024 15:24:50 +0800
      Finished:     Sat, 02 Mar 2024 15:24:54 +0800
    Ready:          False
    Restart Count:  4
    Limits:
      cpu:  1
    Requests:
      cpu:  100m
    Environment:
      ELASTICSEARCH_HOSTS:  http://elasticsearch-logging:9200
    Mounts:
      /usr/share/kibana/config/kibana.yml from config (ro,path="kibana.yml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fpcqx (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kibana-config
    Optional:  false
  kube-api-access-fpcqx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m12s                default-scheduler  Successfully assigned kube-logging/kibana-86ddf46d47-d97cj to k8s-node-02
  Normal   Pulled     32s (x5 over 2m10s)  kubelet            Container image "docker.io/elastic/kibana:8.12.0" already present on machine
  Normal   Created    32s (x5 over 2m10s)  kubelet            Created container kibana
  Normal   Started    32s (x5 over 2m10s)  kubelet            Started container kibana
  Warning  BackOff    1s (x9 over 2m)      kubelet            Back-off restarting failed container
2.2.4.2 Check the container logs
# 1. View the logs of the kibana pod

[root@k8s-master ~]# kubectl logs -n kube-logging kibana-86ddf46d47-d97cj
Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/8.12/production.html#openssl-legacy-provider
{"log.level":"info","@timestamp":"2024-03-02T07:24:52.627Z","log.logger":"elastic-apm-node","ecs.version":"8.10.0","agentVersion":"4.2.0","env":{"pid":7,"proctitle":"/usr/share/kibana/bin/../node/bin/node","os":"linux 3.10.0-1160.105.1.el7.x86_64","arch":"x64","host":"kibana-86ddf46d47-d97cj","timezone":"UTC+00","runtime":"Node.js v18.18.2"},"config":{"active":{"source":"start","value":true},"breakdownMetrics":{"source":"start","value":false},"captureBody":{"source":"start","value":"off","commonName":"capture_body"},"captureHeaders":{"source":"start","value":false},"centralConfig":{"source":"start","value":false},"contextPropagationOnly":{"source":"start","value":true},"environment":{"source":"start","value":"production"},"globalLabels":{"source":"start","value":[["git_rev","e9092c0a17923f4ed984456b8a5db619b0a794b3"]],"sourceValue":{"git_rev":"e9092c0a17923f4ed984456b8a5db619b0a794b3"}},"logLevel":{"source":"default","value":"info","commonName":"log_level"},"metricsInterval":{"source":"start","value":120,"sourceValue":"120s"},"serverUrl":{"source":"start","value":"https://kibana-cloud-apm.apm.us-east-1.aws.found.io/","commonName":"server_url"},"transactionSampleRate":{"source":"start","value":0.1,"commonName":"transaction_sample_rate"},"captureSpanStackTraces":{"source":"start","sourceValue":false},"secretToken":{"source":"start","value":"[REDACTED]","commonName":"secret_token"},"serviceName":{"source":"start","value":"kibana","commonName":"service_name"},"serviceVersion":{"source":"start","value":"8.12.0","commonName":"service_version"}},"activationMethod":"require","message":"Elastic APM Node.js Agent v4.2.0"}
[2024-03-02T07:24:54.467+00:00][INFO ][root] Kibana is starting
[2024-03-02T07:24:54.550+00:00][INFO ][root] Kibana is shutting down
[2024-03-02T07:24:54.554+00:00][FATAL][root] Reason: [config validation of [server].host]: value must be a valid hostname (see RFC 1123).
Error: [config validation of [server].host]: value must be a valid hostname (see RFC 1123).
    at ObjectType.validate (/usr/share/kibana/node_modules/@kbn/config-schema/src/types/type.js:94:13)
    at ConfigService.validateAtPath (/usr/share/kibana/node_modules/@kbn/config/src/config_service.js:253:19)
    at /usr/share/kibana/node_modules/@kbn/config/src/config_service.js:263:204
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/map.js:10:37
    at OperatorSubscriber._this._next (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/OperatorSubscriber.js:33:21)
    at OperatorSubscriber.Subscriber.next (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Subscriber.js:51:18)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/distinctUntilChanged.js:18:28
    at OperatorSubscriber._this._next (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/OperatorSubscriber.js:33:21)
    at OperatorSubscriber.Subscriber.next (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Subscriber.js:51:18)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/map.js:10:24
    at OperatorSubscriber._this._next (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/OperatorSubscriber.js:33:21)
    at OperatorSubscriber.Subscriber.next (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Subscriber.js:51:18)
    at ReplaySubject._subscribe (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/ReplaySubject.js:54:24)
    at ReplaySubject.Observable._trySubscribe (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:41:25)
    at ReplaySubject.Subject._trySubscribe (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Subject.js:123:47)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:35:31
    at Object.errorContext (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/errorContext.js:22:9)
    at ReplaySubject.Observable.subscribe (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:26:24)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/share.js:66:18
    at OperatorSubscriber.<anonymous> (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/lift.js:14:28)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:30:30
    at Object.errorContext (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/errorContext.js:22:9)
    at Observable.subscribe (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:26:24)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/map.js:9:16
    at OperatorSubscriber.<anonymous> (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/lift.js:14:28)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:30:30
    at Object.errorContext (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/errorContext.js:22:9)
    at Observable.subscribe (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:26:24)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/distinctUntilChanged.js:13:16
    at OperatorSubscriber.<anonymous> (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/lift.js:14:28)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:30:30
    at Object.errorContext (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/errorContext.js:22:9)
    at Observable.subscribe (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:26:24)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/operators/map.js:9:16
    at SafeSubscriber.<anonymous> (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/lift.js:14:28)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:30:30
    at Object.errorContext (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/util/errorContext.js:22:9)
    at Observable.subscribe (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/Observable.js:26:24)
    at /usr/share/kibana/node_modules/rxjs/dist/cjs/internal/firstValueFrom.js:24:16
    at new Promise (<anonymous>)
    at firstValueFrom (/usr/share/kibana/node_modules/rxjs/dist/cjs/internal/firstValueFrom.js:8:12)
    at EnvironmentService.preboot (/usr/share/kibana/node_modules/@kbn/core-environment-server-internal/src/environment_service.js:50:169)
    at Server.preboot (/usr/share/kibana/node_modules/@kbn/core-root-server-internal/src/server.js:146:55)
    at Root.preboot (/usr/share/kibana/node_modules/@kbn/core-root-server-internal/src/root/index.js:47:32)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at bootstrap (/usr/share/kibana/node_modules/@kbn/core-root-server-internal/src/bootstrap.js:97:9)
    at Command.<anonymous> (/usr/share/kibana/src/cli/serve/serve.js:241:5)

 FATAL  Error: [config validation of [server].host]: value must be a valid hostname (see RFC 1123).
2.2.4.3 The elasticsearch hosts entry in the kibana config file was written incorrectly

(Figure: the incorrect elasticsearch hosts entry in the kibana-config ConfigMap)
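
The corrected file is only shown as a screenshot in the original article, so the snippet below is a reconstruction of a kibana.yml that passes validation; it assumes the Deployment keeps injecting ELASTICSEARCH_HOSTS and uses the flat elasticsearch.hosts key instead of a nested block:

server.name: kibana
server.host: "0.0.0.0"
i18n.locale: zh-CN
# the literal URL ["http://elasticsearch-logging:9200"] would also work here
elasticsearch.hosts: [ "${ELASTICSEARCH_HOSTS}" ]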

# 1. Update the configuration
[root@k8s-master ~]# kubectl replace  -f elk/kibana.yaml
configmap/kibana-config replaced
service/kibana replaced
deployment.apps/kibana replaced
ingress.networking.k8s.io/kibana replaced


# 2. Delete the old pod
[root@k8s-master ~]# kubectl delete  po kibana-86ddf46d47-d97cj   -n kube-logging
pod "kibana-86ddf46d47-d97cj" deleted

# 3. All services are now running 
[root@k8s-master ~]# kubectl  get all   -n kube-logging
NAME                            READY   STATUS    RESTARTS      AGE
pod/elasticsearch-logging-0     1/1     Running   3 (14m ago)   80m
pod/filebeat-2fz26              1/1     Running   0             4m20s
pod/filebeat-j4qf2              1/1     Running   0             4m20s
pod/filebeat-r97lm              1/1     Running   0             4m20s
pod/kibana-86ddf46d47-d2ddz     1/1     Running   0             3s
pod/logstash-5f774b55bb-992qj   1/1     Running   0             117m

NAME                            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/elasticsearch-logging   ClusterIP   10.1.16.54    <none>        9200/TCP         117m
service/kibana                  NodePort    10.1.181.71   <none>        5601:32274/TCP   117m
service/logstash                ClusterIP   None          <none>        5044/TCP         117m

NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/filebeat   3         3         2       3            2           <none>          3m26s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana     1/1     1            1           117m
deployment.apps/logstash   1/1     1            1           117m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-86ddf46d47     1         1         1       117m
replicaset.apps/logstash-5f774b55bb   1         1         1       117m

NAME                                     READY   AGE
statefulset.apps/elasticsearch-logging   1/1     117m

2.2.5 The ELK web UI shows "Kibana server is not ready yet"

# 1. Check the ingress controller class
[root@k8s-master ~]# kubectl get ingressclasses.networking.k8s.io   
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       12h

# 2. Check the ingress in the kube-logging namespace
[root@k8s-master ~]# kubectl get ingress  -n kube-logging
NAME     CLASS   HOSTS                  ADDRESS       PORTS   AGE
kibana   nginx   kibana.lan-he.com.cn   10.1.86.240   80      131m

# 3. Find which node the ingress-nginx controller pod runs on
[root@k8s-master ~]# kubectl get po   -n monitoring  -o wide
NAME                             READY   STATUS    RESTARTS       AGE   IP              NODE          NOMINATED NODE   READINESS GATES
ingress-nginx-controller-6gz5k   1/1     Running   2 (110m ago)   11h   10.10.10.177   k8s-node-01   <none>           <none>


# 4. Check the kibana logs, as shown in the figure below
[root@k8s-master elk]# kubectl logs -f kibana-86ddf46d47-d2ddz  -n kube-logging

Note: this article does not cover installing the ingress-nginx controller, so it must be installed in advance before Kibana can be reached through the Ingress. Also pay attention to the permissions of the ES data volume shared from the host: without the right permissions the installation fails and Kibana cannot reach the ES service, as shown in the figure below.

(Figure: Kibana reporting that it cannot reach the Elasticsearch service)
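
A quick way to test access through the Ingress from a workstation is sketched below; it assumes the ingress-nginx controller uses hostNetwork, so the pod IP shown above is also the node's address:

# map the Ingress host to the node running the ingress-nginx controller
echo "10.10.10.177  kibana.lan-he.com.cn" >> /etc/hosts
curl -I http://kibana.lan-he.com.cn/
# Kibana is also reachable through the NodePort service (32274 in this cluster)
curl -I http://<any-node-ip>:32274/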

[root@k8s-master elk]# kubectl apply -f kibana.yaml
Warning: resource configmaps/kibana-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/kibana-config configured
Warning: resource services/kibana is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/kibana configured
Warning: resource deployments/kibana is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/kibana configured
Warning: resource ingresses/kibana is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
ingress.networking.k8s.io/kibana configured

