Kubernetes Log Collection: Deploying EFK

EFK Architecture and Workflow

The workflow: a Fluentd DaemonSet on every node tails the container logs under /var/log/containers, enriches each record with Kubernetes metadata, and ships it to the Elasticsearch cluster over HTTPS; Kibana provides search and visualization on top of Elasticsearch, and a daily CronJob prunes old log indices.

Deployment Notes

ECK (Elastic Cloud on Kubernetes): 2.7
Kubernetes: 1.23.0
Elasticsearch / Kibana: 8.12.2 (per the manifests below)

File Preparation

crds.yaml
Download: https://download.elastic.co/downloads/eck/2.7.0/crds.yaml

operator.yaml
Download: https://download.elastic.co/downloads/eck/2.7.0/operator.yaml
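Both manifests can be fetched directly from the URLs above:

wget https://download.elastic.co/downloads/eck/2.7.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.7.0/operator.yaml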

elastic.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: efk-elastic
  namespace: elastic-system
spec:
  version: 8.12.2
  nodeSets:
    - name: default
      count: 3
      config:
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
            storageClassName: openebs-hostpath
      podTemplate:
        spec:
          containers:
          - name: elasticsearch
            env:
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            resources:
              requests:
                memory: 2Gi
                cpu: 2
              limits:
                memory: 4Gi
                cpu: 2
          # https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
              command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]


---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic-system
  labels:
    app: efk-elastic-nodeport
  name: efk-elastic-nodeport
spec:
  sessionAffinity: None
  selector:
    common.k8s.elastic.co/type: elasticsearch
    elasticsearch.k8s.elastic.co/cluster-name: efk-elastic
  ports:
    - name: http-9200
      protocol: TCP
      targetPort: 9200
      port: 9200
      nodePort: 30920
    - name: http-9300
      protocol: TCP
      targetPort: 9300
      port: 9300
      nodePort: 30921
  type: NodePort


---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: efk-kibana
  namespace: elastic-system
spec:
  version: 8.12.2
  count: 1
  elasticsearchRef:
    name: efk-elastic
    namespace: elastic-system

---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic-system
  labels:
    app: efk-kibana-nodeport
  name: efk-kibana-nodeport
spec:
  sessionAffinity: None
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: efk-kibana
  ports:
    - name: http-5601
      protocol: TCP
      targetPort: 5601
      port: 5601
      nodePort: 30922
  type: NodePort
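Once everything is applied (see the deployment steps below), a quick sanity check against the Elasticsearch NodePort confirms the cluster is reachable. The node IP 192.168.1.180 matches the environment shown at the end of this post; -k is needed because the operator issues self-signed certificates:

# Read the auto-generated elastic password, then query cluster health
PASSWORD=$(kubectl get secret efk-elastic-es-elastic-user -n elastic-system -o=jsonpath='{.data.elastic}' | base64 --decode)
curl -k -u "elastic:${PASSWORD}" "https://192.168.1.180:30920/_cluster/health?pretty"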

fluentd-es-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config
  namespace: elastic-system
data:
  fluent.conf: |-
    # https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.11/debian-elasticsearch7/conf/fluent.conf

    @include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
    @include "#{ENV['FLUENTD_PROMETHEUS_CONF'] || 'prometheus'}.conf"
    @include kubernetes.conf
    @include conf.d/*.conf

    <match kubernetes.**>
      # https://github.com/kubernetes/kubernetes/issues/23001
      @type elasticsearch_dynamic
      @id  kubernetes_elasticsearch
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix logstash-${record['kubernetes']['namespace_name']}
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>

    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
  kubernetes.conf: |-
    # https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.11/debian-elasticsearch7/conf/kubernetes.conf

    <label @FLUENT_LOG>
      <match fluent.**>
        @type null
        @id ignore_fluent_logs
      </match>
    </label>

    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>
    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

    <source>
      @type tail
      @id in_tail_minion
      path /var/log/salt/minion
      pos_file /var/log/fluentd-salt.pos
      tag salt
      <parse>
        @type regexp
        expression /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
        time_format %Y-%m-%d %H:%M:%S
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_startupscript
      path /var/log/startupscript.log
      pos_file /var/log/fluentd-startupscript.log.pos
      tag startupscript
      <parse>
        @type syslog
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_docker
      path /var/log/docker.log
      pos_file /var/log/fluentd-docker.log.pos
      tag docker
      <parse>
        @type regexp
        expression /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_etcd
      path /var/log/etcd.log
      pos_file /var/log/fluentd-etcd.log.pos
      tag etcd
      <parse>
        @type none
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kubelet
      multiline_flush_interval 5s
      path /var/log/kubelet.log
      pos_file /var/log/fluentd-kubelet.log.pos
      tag kubelet
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_proxy
      multiline_flush_interval 5s
      path /var/log/kube-proxy.log
      pos_file /var/log/fluentd-kube-proxy.log.pos
      tag kube-proxy
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_apiserver
      multiline_flush_interval 5s
      path /var/log/kube-apiserver.log
      pos_file /var/log/fluentd-kube-apiserver.log.pos
      tag kube-apiserver
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_controller_manager
      multiline_flush_interval 5s
      path /var/log/kube-controller-manager.log
      pos_file /var/log/fluentd-kube-controller-manager.log.pos
      tag kube-controller-manager
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_scheduler
      multiline_flush_interval 5s
      path /var/log/kube-scheduler.log
      pos_file /var/log/fluentd-kube-scheduler.log.pos
      tag kube-scheduler
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_rescheduler
      multiline_flush_interval 5s
      path /var/log/rescheduler.log
      pos_file /var/log/fluentd-rescheduler.log.pos
      tag rescheduler
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_glbc
      multiline_flush_interval 5s
      path /var/log/glbc.log
      pos_file /var/log/fluentd-glbc.log.pos
      tag glbc
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_cluster_autoscaler
      multiline_flush_interval 5s
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/fluentd-cluster-autoscaler.log.pos
      tag cluster-autoscaler
      <parse>
        @type kubernetes
      </parse>
    </source>

    # Example:
    # 2017-02-09T00:15:57.992775796Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" ip="104.132.1.72" method="GET" user="kubecfg" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
    # 2017-02-09T00:15:57.993528822Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" response="200"
    <source>
      @type tail
      @id in_tail_kube_apiserver_audit
      multiline_flush_interval 5s
      path /var/log/kubernetes/kube-apiserver-audit.log
      pos_file /var/log/kube-apiserver-audit.log.pos
      tag kube-apiserver-audit
      <parse>
        @type multiline
        format_firstline /^\S+\s+AUDIT:/
        # Fields must be explicitly captured by name to be parsed into the record.
        # Fields may not always be present, and order may change, so this just looks
        # for a list of key="\"quoted\" value" pairs separated by spaces.
        # Unknown fields are ignored.
        # Note: We can't separate query/response lines as format1/format2 because
        #       they don't always come one after the other for a given query.
        format1 /^(?<time>\S+) AUDIT:(?: (?:id="(?<id>(?:[^"\\]|\\.)*)"|ip="(?<ip>(?:[^"\\]|\\.)*)"|method="(?<method>(?:[^"\\]|\\.)*)"|user="(?<user>(?:[^"\\]|\\.)*)"|groups="(?<groups>(?:[^"\\]|\\.)*)"|as="(?<as>(?:[^"\\]|\\.)*)"|asgroups="(?<asgroups>(?:[^"\\]|\\.)*)"|namespace="(?<namespace>(?:[^"\\]|\\.)*)"|uri="(?<uri>(?:[^"\\]|\\.)*)"|response="(?<response>(?:[^"\\]|\\.)*)"|\w+="(?:[^"\\]|\\.)*"))*/
        time_format %Y-%m-%dT%T.%L%Z
      </parse>
    </source>
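Note that the kubernetes.** match above uses elasticsearch_dynamic with logstash_prefix logstash-${record['kubernetes']['namespace_name']}, so each namespace gets its own daily index (for example logstash-default-2024.03.15, an illustrative name). Once logs are flowing, the resulting indices can be listed with the PASSWORD variable from the health check above:

curl -k -u "elastic:${PASSWORD}" "https://192.168.1.180:30920/_cat/indices/logstash-*?v&s=index"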

fluentd-es-ds.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: elastic-system
  labels:
    app: fluentd-es
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    app: fluentd-es
rules:
  - apiGroups:
      - ""
    resources:
      - "namespaces"
      - "pods"
    verbs:
      - "get"
      - "watch"
      - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    app: fluentd-es
subjects:
  - kind: ServiceAccount
    name: fluentd-es
    namespace: elastic-system
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: elastic-system
  labels:
    app: fluentd-es
spec:
  selector:
    matchLabels:
      app: fluentd-es
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      serviceAccountName: fluentd-es
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd-es
          image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-2
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: efk-elastic-es-http
            # default user
            - name: FLUENT_ELASTICSEARCH_USER
              value: elastic
            # the password secret is created automatically by the ECK operator
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: efk-elastic-es-elastic-user
                  key: elastic
            # elasticsearch standard port
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            # the ECK operator serves Elasticsearch over HTTPS by default
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "https"
            # systemd logs are not needed for now
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
            # the operator's certificates are self-signed, so verification must be disabled
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "false"
            # to avoid issue https://github.com/uken/fluent-plugin-elasticsearch/issues/525
            - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
              value: "false"
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config-volume
              mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: config-volume
          configMap:
            name: fluentd-es-config

index-cleaner.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: index-cleaner
  namespace: elastic-system
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          containers:
            - name: index-cleaner
              image: hcd1129/es-index-cleaner:latest
              env:
                - name: DAYS
                  value: "2"
                - name: PREFIX
                  value: logstash-*
                - name: ES_HOST
                  value: https://efk-elastic-es-http:9200
                - name: ES_USER
                  value: elastic
                - name: ES_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: elastic
                      name: efk-elastic-es-elastic-user
          restartPolicy: Never

The es-index-cleaner image (hcd1129/es-index-cleaner) deletes Elasticsearch indices matching PREFIX whose date suffix is older than DAYS days.
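The image itself is a third-party build, so its exact contents aren't reproduced here; a minimal sketch of the logic it would need, using only the environment variables defined in the CronJob above (and assuming bash plus GNU date), looks roughly like this:

#!/bin/bash
# Delete daily indices whose date suffix is older than DAYS days.
# String comparison works because the %Y.%m.%d suffix sorts lexicographically.
CUTOFF=$(date -d "-${DAYS} days" +%Y.%m.%d)
for IDX in $(curl -sk -u "${ES_USER}:${ES_PASSWORD}" "${ES_HOST}/_cat/indices/${PREFIX}?h=index"); do
  if [[ "${IDX##*-}" < "${CUTOFF}" ]]; then
    curl -sk -u "${ES_USER}:${ES_PASSWORD}" -X DELETE "${ES_HOST}/${IDX}"
  fi
done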

Running the Deployment

# Apply in order

# Install the ECK operator
kubectl create -f crds.yaml
kubectl apply -f operator.yaml

# Install Elasticsearch and Kibana
kubectl apply -f elastic.yaml

# Install Fluentd
kubectl apply -f fluentd-es-configmap.yaml
kubectl apply -f fluentd-es-ds.yaml

# Deploy the CronJob that periodically cleans up old log indices
kubectl apply -f index-cleaner.yaml

Deployment Result

kubectl get all -n elastic-system
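The ECK operator also reports health through its own custom resources; both should eventually show a green HEALTH column:

kubectl -n elastic-system get elasticsearch efk-elastic
kubectl -n elastic-system get kibana efk-kibana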


Credentials

# ES username: elastic
# Retrieve the auto-generated ES password:
kubectl get secret efk-elastic-es-elastic-user -n elastic-system -o=jsonpath='{.data.elastic}' | base64 --decode; echo

Results

Elasticsearch: https://192.168.1.180:30920/
Kibana: https://192.168.1.180:30922/

Log in with the elastic user and the password retrieved above.
