Prometheus + Grafana Monitoring in Kubernetes


1. Introduction

        Prometheus: a very popular open-source monitoring and alerting system.

        How it works: it periodically scrapes the state of monitored components over HTTP. The HTTP endpoint that exposes a monitored component's metrics is called an exporter.

        Most common components have ready-made exporters, e.g. HAProxy, Nginx, MySQL, and Linux system metrics (disk, memory, CPU, network, etc.).
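
        As a quick illustration of the exporter model, any exporter's metrics endpoint can be fetched with plain HTTP. A minimal sketch, assuming a node_exporter is already listening on localhost:9100 (it is deployed in section 4 below):

# Manually scrape an exporter endpoint; Prometheus does the same on a schedule.
curl -s http://localhost:9100/metrics | head -n 5
# Output is plain text in the Prometheus exposition format, e.g.:
# # HELP node_cpu_seconds_total Seconds the cpus spent in each mode.
# # TYPE node_cpu_seconds_total counter
# node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67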

        Key features of Prometheus:

  • A multi-dimensional data model (a time series is identified by a metric name plus a set of key/value labels).
  • Highly efficient storage: an average sample takes about 3.5 bytes; 3.2 million time series sampled every 30 seconds and retained for 60 days consume roughly 228 GB of disk.
  • A flexible query language (PromQL); see the query sketch after this list.
  • No dependence on external storage; supports both local and remote storage models.
  • Collects data over HTTP using a pull model.
  • Monitoring targets can be defined via service discovery or static configuration.
  • Multiple modes of graphing and dashboard support.
  • Supports pushing time series through an intermediary gateway (for short-lived jobs).
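
        A minimal PromQL sketch, run through Prometheus's HTTP query API rather than the web UI. It assumes the NodePort Service from section 2.4 (port 30090) and the node_exporter metrics from section 4; replace NodeIP with a real node address:

# Instant query: per-node CPU usage over the last 5 minutes,
# derived from node_exporter's node_cpu_seconds_total counter.
curl -s 'http://NodeIP:30090/api/v1/query' \
  --data-urlencode 'query=1 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance)'
# The response is JSON: {"status":"success","data":{"resultType":"vector","result":[...]}}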

        Grafana: an open-source system for visualizing large volumes of time-series data; it can be used to visualize Prometheus metrics.

        The Prometheus architecture, in brief:

        Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. It stores all scraped samples locally and runs rules over this data to aggregate and record new time series from existing data, or to generate alerts. Grafana or other API consumers can then visualize the collected data.

2. Deploying Prometheus

        2.1 Authorization with RBAC

[root@k8s-node01 k8s-prometheus]# cat prometheus-rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile 
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - nodes/metrics
      - services
      - endpoints
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system



[root@k8s-node01 k8s-prometheus]# kubectl apply -f prometheus-rbac.yaml 
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
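
        Before moving on, the binding can be sanity-checked by impersonating the new ServiceAccount; this verification step is an addition, not part of the original procedure:

# Should print "yes" for verbs granted by the ClusterRole...
kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:prometheus
kubectl auth can-i get configmaps --as=system:serviceaccount:kube-system:prometheus
# ...and "no" for anything outside it.
kubectl auth can-i delete pods --as=system:serviceaccount:kube-system:prometheus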

        2.2 Configuration management

        Use a ConfigMap to store configuration that does not need to be encrypted; just change the node IPs in the YAML below to match your environment.

[root@k8s-node01 k8s-prometheus]# cat prometheus-configmap.yaml 
# Prometheus configuration format https://prometheus.io/docs/prometheus/latest/configuration/configuration/
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system 
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  prometheus.yml: |
    rule_files:
    - /etc/config/rules/*.rules

    scrape_configs:
    - job_name: prometheus
      static_configs:
      - targets:
        - localhost:9090

    - job_name: kubernetes-nodes
      scrape_interval: 30s
      static_configs:
      - targets:
        - 11.0.1.13:9100
        - 11.0.1.14:9100

    - job_name: kubernetes-apiservers
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - action: keep
        regex: default;kubernetes;https
        source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_service_name
        - __meta_kubernetes_endpoint_port_name
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
 
    - job_name: kubernetes-nodes-kubelet
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

    - job_name: kubernetes-nodes-cadvisor
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __metrics_path__
        replacement: /metrics/cadvisor
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

    - job_name: kubernetes-service-endpoints
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scrape
      - action: replace
        regex: (https?)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scheme
        target_label: __scheme__
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_service_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_service_name
        target_label: kubernetes_name

    - job_name: kubernetes-services
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module:
        - http_2xx
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_probe
      - source_labels:
        - __address__
        target_label: __param_target
      - replacement: blackbox
        target_label: __address__
      - source_labels:
        - __param_target
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - source_labels:
        - __meta_kubernetes_service_name
        target_label: kubernetes_name

    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_scrape
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_pod_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: kubernetes_pod_name
    alerting:
      alertmanagers:
      - static_configs:
          - targets: ["alertmanager:80"]

[root@k8s-node01 k8s-prometheus]# kubectl apply -f prometheus-configmap.yaml 
configmap/prometheus-config created
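
        Note that the kubernetes-service-endpoints job above only keeps targets whose Service carries the prometheus.io/scrape annotation. As a sketch of how a workload opts in, the commands below annotate a hypothetical Service named my-app that serves metrics on port 8080 at /metrics:

# The relabel rules read these annotations to build the scrape target.
kubectl annotate service my-app \
  prometheus.io/scrape="true" \
  prometheus.io/port="8080" \
  prometheus.io/path="/metrics"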

        2.3 Deploying Prometheus as a StatefulSet

        Here a StorageClass provides dynamic provisioning to persist Prometheus's data.

[root@k8s-node01 k8s-prometheus]# cat prometheus-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus 
  namespace: kube-system
  labels:
    k8s-app: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v2.2.1
spec:
  serviceName: "prometheus"
  replicas: 1
  podManagementPolicy: "Parallel"
  updateStrategy:
   type: "RollingUpdate"
  selector:
    matchLabels:
      k8s-app: prometheus
  template:
    metadata:
      labels:
        k8s-app: prometheus
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: prometheus
      initContainers:
      - name: "init-chown-data"
        image: "busybox:latest"
        imagePullPolicy: "IfNotPresent"
        command: ["chown", "-R", "65534:65534", "/data"]
        volumeMounts:
        - name: prometheus-data
          mountPath: /data
          subPath: ""
      containers:
        - name: prometheus-server-configmap-reload
          image: "jimmidyson/configmap-reload:v0.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --volume-dir=/etc/config
            - --webhook-url=http://localhost:9090/-/reload
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
              readOnly: true
          resources:
            limits:
              cpu: 10m
              memory: 10Mi
            requests:
              cpu: 10m
              memory: 10Mi

        - name: prometheus-server
          image: "prom/prometheus:v2.2.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --config.file=/etc/config/prometheus.yml
            - --storage.tsdb.path=/data
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            - --web.enable-lifecycle
          ports:
            - containerPort: 9090
          readinessProbe:
            httpGet:
              path: /-/ready
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
          # based on 10 running nodes with 30 pods each
          resources:
            limits:
              cpu: 200m
              memory: 1000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
            
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: prometheus-data
              mountPath: /data
              subPath: ""
            - name: prometheus-rules
              mountPath: /etc/config/rules

      terminationGracePeriodSeconds: 300
      volumes:
        - name: config-volume
          configMap:
            name: prometheus-config
        - name: prometheus-rules
          configMap:
            name: prometheus-rules

  volumeClaimTemplates:
  - metadata:
      name: prometheus-data
    spec:
      storageClassName: managed-nfs-storage 
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "1Gi"

[root@k8s-node01 k8s-prometheus]# kubectl apply -f prometheus-statefulset.yaml 
Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
statefulset.apps/prometheus created

[root@k8s-node01 k8s-prometheus]# kubectl get pod -n kube-system |grep prometheus
NAME                                    READY   STATUS    RESTARTS   AGE
prometheus-0                            2/2     Running   6          1d
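
        Since the volumeClaimTemplates go through the managed-nfs-storage StorageClass, a PVC should have been provisioned and bound automatically; a quick check (exact output depends on your provisioner):

# StatefulSet claims are named <template>-<pod>, hence prometheus-data-prometheus-0.
kubectl get pvc -n kube-system prometheus-data-prometheus-0
kubectl get sc managed-nfs-storage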

        2.4 Creating a Service to expose the access port

[root@k8s-node01 k8s-prometheus]# cat prometheus-service.yaml 
kind: Service
apiVersion: v1
metadata: 
  name: prometheus
  namespace: kube-system
  labels: 
    kubernetes.io/name: "Prometheus"
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec: 
  type: NodePort
  ports: 
    - name: http 
      port: 9090
      protocol: TCP
      targetPort: 9090
      nodePort: 30090
  selector: 
    k8s-app: prometheus

[root@k8s-master prometheus-k8s]# kubectl apply -f prometheus-service.yaml

        2.5 Web access

        Access Prometheus with any node IP plus the NodePort: http://NodeIP:Port (30090 per the Service above).
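
        Beyond the browser UI (Status -> Targets), target health can also be checked from the command line; a sketch assuming the NodePort 30090 configured above (jq is optional and only used for formatting):

# Prometheus's own health endpoint.
curl -s http://NodeIP:30090/-/healthy
# List discovered scrape targets and their health as JSON.
curl -s http://NodeIP:30090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'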

3. Deploying Grafana

        

[root@k8s-master prometheus-k8s]# cat grafana.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: grafana
  namespace: kube-system
spec:
  serviceName: "grafana"
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana
        ports:
          - containerPort: 3000
            protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 256Mi
        volumeMounts:
          - name: grafana-data
            mountPath: /var/lib/grafana
            subPath: grafana
      securityContext:
        fsGroup: 472
        runAsUser: 472
  volumeClaimTemplates:
  - metadata:
      name: grafana-data
    spec:
      storageClassName: managed-nfs-storage # same StorageClass as Prometheus
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "1Gi"

---

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30091
  selector:
    app: grafana

[root@k8s-master prometheus-k8s]# kubectl apply -f grafana.yaml

Access:

Use any node IP plus the NodePort: http://NodeIP:Port (30091 here). The default username and password are both admin.
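
After logging in, Prometheus still has to be added as a data source, either in the UI (Configuration -> Data Sources) or through Grafana's HTTP API as sketched below. The in-cluster URL is an assumption that both Services live in kube-system as deployed above:

# Create a Prometheus data source via Grafana's API (default admin:admin credentials).
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://NodeIP:30091/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","access":"proxy","url":"http://prometheus.kube-system.svc:9090","isDefault":true}'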

4. Monitoring Pod, Node, and resource-object data in the K8s cluster

Pod:
The kubelet on each node exposes cAdvisor's metrics endpoint, which provides performance metrics for all Pods and containers on that node; it is enabled by default when kubelet is installed.
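
Once the kubernetes-nodes-cadvisor job from section 2.2 is scraping, per-container series become queryable. A sketch using two standard cAdvisor metrics (note: on newer Kubernetes versions the label is pod rather than pod_name):

# Container CPU usage (cores) per pod over the last 5 minutes.
curl -s 'http://NodeIP:30090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod_name)'
# Container working-set memory per pod, in bytes.
curl -s 'http://NodeIP:30090/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_working_set_bytes{image!=""}) by (pod_name)'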

Node:

Use the node_exporter collector to gather node resource utilization.

Deploy node_exporter on every server with the node_exporter.sh script; it can be run as-is, with no modification needed.

[root@k8s-master prometheus-k8s]# cat node_exporter.sh 
#!/bin/bash
wget https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz

tar zxf node_exporter-0.17.0.linux-amd64.tar.gz
mv node_exporter-0.17.0.linux-amd64 /usr/local/node_exporter

cat <<EOF >/usr/lib/systemd/system/node_exporter.service
[Unit]
Description=https://prometheus.io

[Service]
Restart=on-failure
ExecStart=/usr/local/node_exporter/node_exporter --collector.systemd --collector.systemd.unit-whitelist=(docker|kubelet|kube-proxy|flanneld).service

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable node_exporter
systemctl restart node_exporter
[root@k8s-master prometheus-k8s]# ./node_exporter.sh

[root@k8s-master prometheus-k8s]# ps -ef|grep node_exporter
root       6227      1  0 Oct08 ?        00:06:43 /usr/local/node_exporter/node_exporter --collector.systemd --collector.systemd.unit-whitelist=(docker|kubelet|kube-proxy|flanneld).service
root     118269 117584  0 23:27 pts/0    00:00:00 grep --color=auto node_exporter
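
With the service running, the exporter answers on port 9100; a quick local check (the metric names below are standard in node_exporter v0.17):

# Verify the systemd unit and that the metrics endpoint responds.
systemctl status node_exporter --no-pager | head -n 3
curl -s http://localhost:9100/metrics | grep -E '^node_(load1|memory_MemAvailable_bytes)'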

Resource objects:

kube-state-metrics collects state information about the various resource objects in Kubernetes; it only needs to be deployed on the master node.

        1. Create the RBAC YAML to authorize kube-state-metrics

[root@k8s-master prometheus-k8s]# cat kube-state-metrics-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-state-metrics-resizer
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs: ["get"]
- apiGroups: ["extensions"]
  resources:
  - deployments
  resourceNames: ["kube-state-metrics"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-state-metrics-resizer
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system
[root@k8s-master prometheus-k8s]# kubectl apply -f kube-state-metrics-rbac.yaml

        2. Write the Deployment and ConfigMap YAML to deploy the kube-state-metrics Pod; no modification is needed

[root@k8s-master prometheus-k8s]# cat kube-state-metrics-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    k8s-app: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.3.0
spec:
  selector:
    matchLabels:
      k8s-app: kube-state-metrics
      version: v1.3.0
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-state-metrics
        version: v1.3.0
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: lizhenliang/kube-state-metrics:v1.3.0
        ports:
        - name: http-metrics
          containerPort: 8080
        - name: telemetry
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
      - name: addon-resizer
        image: lizhenliang/addon-resizer:1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 30Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
          - name: config-volume
            mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --container=kube-state-metrics
          - --cpu=100m
          - --extra-cpu=1m
          - --memory=100Mi
          - --extra-memory=2Mi
          - --threshold=5
          - --deployment=kube-state-metrics
      volumes:
        - name: config-volume
          configMap:
            name: kube-state-metrics-config
---
# Config map for resource configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-state-metrics-config
  namespace: kube-system
  labels:
    k8s-app: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
[root@k8s-master prometheus-k8s]# kubectl apply -f kube-state-metrics-deployment.yaml

        3. Write the Service YAML to expose the metrics ports

[root@k8s-master prometheus-k8s]# cat kube-state-metrics-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "kube-state-metrics"
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
  - name: http-metrics
    port: 8080
    targetPort: http-metrics
    protocol: TCP
  - name: telemetry
    port: 8081
    targetPort: telemetry
    protocol: TCP
  selector:
    k8s-app: kube-state-metrics
[root@k8s-master prometheus-k8s]# kubectl apply -f kube-state-metrics-service.yaml

[root@k8s-master prometheus-k8s]# kubectl get pod,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/alertmanager-5d75d5688f-fmlq6           2/2     Running   0          9d
pod/coredns-5bd5f9dbd9-wv45t                1/1     Running   1          9d
pod/grafana-0                               1/1     Running   2          15d
pod/kube-state-metrics-7c76bdbf68-kqqgd     2/2     Running   6          14d
pod/kubernetes-dashboard-7d77666777-d5ng4   1/1     Running   5          16d
pod/prometheus-0                            2/2     Running   6          15d

NAME                           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/alertmanager           ClusterIP   10.0.0.207   <none>        80/TCP              13d
service/grafana                NodePort    10.0.0.74    <none>        80:30091/TCP        15d
service/kube-dns               ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP       14d
service/kube-state-metrics     ClusterIP   10.0.0.194   <none>        8080/TCP,8081/TCP   14d
service/kubernetes-dashboard   NodePort    10.0.0.127   <none>        443:30001/TCP       17d
service/prometheus             NodePort    10.0.0.33    <none>        9090:30090/TCP      14d
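
kube-state-metrics is ClusterIP-only, so query it from inside the cluster or via a port-forward; the series below are standard kube-state-metrics v1.x metric names:

# Forward the metrics port locally, then inspect a few object-state series.
kubectl -n kube-system port-forward svc/kube-state-metrics 8080:8080 &
curl -s http://localhost:8080/metrics | grep -E '^kube_(pod_status_phase|deployment_status_replicas)' | head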

Error 1: applying the YAML in step 2.1 fails with "ensure CRDs are installed first"
[root@k8s-node01 k8s-prometheus]# kubectl apply -f prometheus-rbac.yaml
serviceaccount/prometheus unchanged
resource mapping not found for name: "prometheus" namespace: "" from "prometheus-rbac.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "prometheus" namespace: "" from "prometheus-rbac.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first

The original YAML from the attachment fails because the API version it uses has been removed: manually change apiVersion: rbac.authorization.k8s.io/v1beta1 to apiVersion: rbac.authorization.k8s.io/v1 (as in the manifests shown above).

