97. Prometheus YAML Files

Command review
[root@master01 ~]# kubectl explain ingress


KIND:     Ingress
VERSION:  networking.k8s.io/v1

DESCRIPTION:
     Ingress is a collection of rules that allow inbound connections to reach
     the endpoints defined by a backend. An Ingress can be configured to give
     services externally-reachable urls, load balance traffic, terminate SSL,
     offer name based virtual hosting etc.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec	<Object>
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status	<Object>
     Status is the current state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

[root@master01 ~]# kubectl describe ingress


Name:             nginx-daemon-ingress
Namespace:        default
Address:          10.96.183.19
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  tls.secret terminates www.xy102.com
Rules:
  Host           Path  Backends
  ----           ----  --------
  www.xy102.com  
                 /   nginx-daemon-svc:80 (<none>)
Annotations:     <none>
Events:          <none>

1. Prometheus

node_exporter

The per-node metrics collector.

daemonset--------> guarantees one collector runs on every node

prometheus-------> the main monitoring program

grafana-------> visualization (dashboards)

alertmanager----> alerting module

Installing the node_exporter component

[root@master01 opt]# mkdir prometheus
[root@master01 opt]# cd prometheus/
[root@master01 prometheus]# vim node_exporter.yaml
[root@master01 prometheus]# 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
     name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100
        resources:
          limits:
            cpu: "0.5"
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '^/(sys|proc|dev|host|etc)($|/)'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /
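The `--collector.filesystem.ignored-mount-points` flag takes a regular expression. A quick check (Python here, purely illustrative) shows which mount points the pattern `^/(sys|proc|dev|host|etc)($|/)` excludes; note that the pattern only needs single quotes in the YAML args list, because wrapping it in an extra pair of double quotes would make the literal `"` characters part of the regex and prevent any match:

```python
import re

# The pattern passed to node_exporter (without extra shell quoting).
pattern = re.compile(r'^/(sys|proc|dev|host|etc)($|/)')

mounts = ['/proc', '/sys/fs/cgroup', '/dev/shm', '/etc/hostname',
          '/', '/data', '/var/lib/docker']
ignored = [m for m in mounts if pattern.search(m)]   # skipped by the filesystem collector
kept = [m for m in mounts if not pattern.search(m)]  # still reported as metrics

print(ignored)
print(kept)
```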
            
            
            
[root@master01 ~]# cd /opt/
[root@master01 opt]# kubectl create ns monitor-sa
namespace/monitor-sa created
[root@master01 opt]# ls
cni                                 ingress
cni_bak                             jenkins-2.396-1.1.noarch.rpm
cni-plugins-linux-amd64-v0.8.6.tgz  k8s-yaml
configmap                           kube-flannel.yml
containerd                          nginx-de.yaml
data1                               secret
flannel.tar                         test
ingree.contro-0.30.0.tar            update-kubeadm-cert.sh
ingree.contro-0.30.0.tar.gz
[root@master01 opt]# mkdir prometheus
[root@master01 opt]# cd prometheus/
[root@master01 prometheus]# vim node_exporter.yaml
[root@master01 prometheus]# kubectl apply -f node_exporter.yaml 
daemonset.apps/node-exporter created

[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS             RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-7mfnf   0/1     ErrImagePull       0          2m29s   192.168.168.81   master01   <none>           <none>
node-exporter-c6hq2   0/1     ImagePullBackOff   0          13m     192.168.168.82   node01     <none>           <none>
node-exporter-jgz96   0/1     ImagePullBackOff   0          13m     192.168.168.83   node02     <none>           <none>

## Image pull failed

## Since the image cannot be pulled from the registry, import it from a local archive
[root@master01 prometheus]# rz -E
rz waiting to receive.
[root@master01 prometheus]# ls
node_exporter.yaml  node.tar
[root@master01 prometheus]# docker load -i node.tar    ## load the image on every node

[root@node01 opt]# mkdir prometheus
[root@node01 opt]# rz -E
rz waiting to receive.
[root@node01 opt]# docker load -i node.tar
1e604deea57d: Loading layer  1.458MB/1.458MB
6b83872188a9: Loading layer  2.455MB/2.455MB
4f3f7dd00054: Loading layer   20.5MB/20.5MB
Loaded image: prom/node-exporter:v1



[root@node02 ~]# cd /opt/
[root@node02 opt]# mkdir prometheus
[root@node02 opt]# cd prometheus/
[root@node02 prometheus]# rz -E
rz waiting to receive.
[root@node02 prometheus]# docker load -i node.tar
1e604deea57d: Loading layer  1.458MB/1.458MB
6b83872188a9: Loading layer  2.455MB/2.455MB
4f3f7dd00054: Loading layer   20.5MB/20.5MB
Loaded image: prom/node-exporter:v1





Edit the manifest so the image field uses the tag that was just loaded (prom/node-exporter:v1):

[root@master01 prometheus]# vim node_exporter.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
     name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1
        ports:
        - containerPort: 9100
        resources:
          limits:
            cpu: "0.5"
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '^/(sys|proc|dev|host|etc)($|/)'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /



[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS             RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-7mfnf   0/1     ErrImagePull       0          2m29s   192.168.168.81   master01   <none>           <none>
node-exporter-c6hq2   0/1     ImagePullBackOff   0          13m     192.168.168.82   node01     <none>           <none>
node-exporter-jgz96   0/1     ImagePullBackOff   0          13m     192.168.168.83   node02     <none>           <none>


## The image has been loaded; restart the pod

[root@master01 prometheus]# kubectl delete pod node-exporter-7mfnf -n monitor-sa 
pod "node-exporter-7mfnf" deleted


[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS             RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-76nkz   1/1     Running            0          26s   192.168.168.81   master01   <none>           <none>
node-exporter-c6hq2   0/1     ImagePullBackOff   0          14m   192.168.168.82   node01     <none>           <none>
node-exporter-jgz96   0/1     ImagePullBackOff   0          13m   192.168.168.83   node02     <none>           <none>


## The image has been loaded; restart the pod
[root@master01 prometheus]# kubectl delete pod node-exporter-c6hq2 -n monitor-sa 
pod "node-exporter-c6hq2" deleted

[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS              RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-487lb   1/1     Running             0          55s   192.168.168.82   node01     <none>           <none>
node-exporter-76nkz   1/1     Running             0          98s   192.168.168.81   master01   <none>           <none>
node-exporter-jj92l   0/1     ContainerCreating   0          10s   192.168.168.83   node02     <none>           <none>


## The image has been loaded; restart the pod
[root@master01 prometheus]# kubectl delete pod node-exporter-jgz96 -n monitor-sa 
pod "node-exporter-jgz96" deleted


[root@master01 prometheus]# kubectl get pod -n monitor-sa -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
node-exporter-487lb   1/1     Running   0          12m   192.168.168.82   node01     <none>           <none>
node-exporter-76nkz   1/1     Running   0          13m   192.168.168.81   master01   <none>           <none>
node-exporter-jj92l   1/1     Running   0          12m   192.168.168.83   node02     <none>           <none>


http://192.168.168.81:9100/metrics






[root@master01 prometheus]# kubectl create serviceaccount monitor -n monitor-sa
serviceaccount/monitor created


[root@master01 prometheus]# kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor-sa --clusterrole=cluster-admin  --serviceaccount=monitor-sa:monitor
clusterrolebinding.rbac.authorization.k8s.io/monitor-clusterrolebinding created
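The two imperative commands above can also be written declaratively. This is a sketch of the equivalent ServiceAccount and ClusterRoleBinding manifests (binding `cluster-admin` to the monitoring ServiceAccount is convenient for a lab, but over-privileged for production):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitor
  namespace: monitor-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitor-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: monitor
  namespace: monitor-sa
```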


192.168.168.81:9100/metrics
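The `/metrics` endpoint serves plain text in the Prometheus exposition format. As a sketch (the sample payload below is made up, not real node_exporter output), parsing the simple case is straightforward:

```python
# Minimal parser for the Prometheus text exposition format.
# The sample payload is illustrative; real node_exporter output has
# hundreds of series such as node_cpu_seconds_total and node_memory_*.
sample = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
# HELP node_memory_MemFree_bytes Memory information field MemFree_bytes.
# TYPE node_memory_MemFree_bytes gauge
node_memory_MemFree_bytes 1073741824
"""

def parse_metrics(text):
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith('#'):
            continue  # skip blank lines and HELP/TYPE metadata
        name, value = line.rsplit(' ', 1)
        metrics[name] = float(value)
    return metrics

print(parse_metrics(sample))
```

In the cluster above you would fetch the real text with `curl http://192.168.168.81:9100/metrics` first.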


Configure alerting

[root@master01 prometheus]# rz -E
rz waiting to receive.
[root@master01 prometheus]# ls
node_exporter.yaml  node.tar  prometheus-alertmanager-cfg.yaml

[root@master01 prometheus]# vim prometheus-alertmanager-cfg.yaml

Relevant excerpts (the leading numbers are line positions inside the file); change the target IPs to match your own nodes:

120       - targets: ['192.168.168.81:10251']
121     - job_name: 'kubernetes-controller-manager'
122       scrape_interval: 5s
123       static_configs:
124       - targets: ['192.168.168.81:10252']
125     - job_name: 'kubernetes-kube-proxy'
126       scrape_interval: 5s
127       static_configs:
128       - targets: ['192.168.168.81:10249','192.168.168.82:10249','192.168.168.83:10249']


137       - targets: ['192.168.168.81:2379']

221       - alert: kube-state-metrics CPU usage above 90%
222         expr: rate(process_cpu_seconds_total{k8s_app=~"kube-state-metrics"}[1m]) * 100 > 90

    description: "{{$labels.mountpoint }} disk partition usage above 80% (currently {{$value}}%)"
      - alert: HighPodCpuUsage
        # alert name (used as the subject of the alert e-mail)
        expr: sum(rate(container_cpu_usage_seconds_total{namespace="default", pod=~".+"}[5m])) by (pod) > 0.9
        # the metric expression that is evaluated
        for: 5m
        # CPU must stay above 90% for 5 minutes before the alert fires
        labels:
          severity: warning
        annotations:
        # alert body
          description: "CPU usage of {{ $labels.pod }} is above 90%."
          summary: "Pod {{ $labels.pod }} has high CPU usage"
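The `expr` above relies on PromQL's `rate()`, which turns a monotonically increasing counter (`container_cpu_usage_seconds_total`, measured in CPU-seconds) into an average per-second rate over the window. A rough sketch of the arithmetic on synthetic samples:

```python
# Synthetic counter samples (timestamp_seconds, cpu_seconds_total) for one pod.
# A counter growing by 0.95 CPU-seconds per wall-clock second corresponds to
# ~95% of one core, which satisfies the "> 0.9" alert condition.
samples = [(0, 100.0), (60, 157.0), (120, 214.0),
           (180, 271.0), (240, 328.0), (300, 385.0)]

def simple_rate(samples):
    """Per-second increase between first and last sample (ignores counter resets)."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

cpu_cores = simple_rate(samples)   # (385 - 100) / 300 = 0.95 cores
print(cpu_cores, cpu_cores > 0.9)  # the alert fires once this holds for 5m
```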


[root@master01 prometheus]# kubectl apply -f prometheus-alertmanager-cfg.yaml 
configmap/prometheus-config created





Mail notification settings

Prometheus Service

Alertmanager Service

Deploying Prometheus with NodePort

Creating the Secret resource

Grafana YAML file

[root@master01 prometheus]# vim alter-mail.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.qq.com:25'
      smtp_from: '1435678619@qq.com'
      smtp_auth_username: '1435678619@qq.com'
      smtp_auth_password: 'yniumbpaclkggfcc'
      smtp_require_tls: false
    route:
      group_by: [alertname]
      group_wait: 10s
      group_interval: 10s
      repeat_interval: 10m 
      receiver: default-receiver
    receivers:
    - name: 'default-receiver'
      email_configs:
      - to: '1435678619@qq.com'
        send_resolved: true

[root@master01 prometheus]# kubectl apply -f alter-mail.yaml 
configmap/alertmanager created




## Prometheus Service
[root@master01 prometheus]# vim prometheus-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      protocol: TCP
  selector:
    app: prometheus
    component: server
    
    
## Alertmanager Service
[root@master01 prometheus]# vim prometheus-alter.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: 'true'
  name: alertmanager
  namespace: monitor-sa
spec:
  ports:
  - name: alertmanager
    nodePort: 30066
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort
  


## Deploy Prometheus, exposed via NodePort

[root@master01 prometheus]# vim prometheus-deploy.yaml

  
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      serviceAccountName: monitor
      initContainers:
      - name: init-chmod
        image: busybox:latest
        command: ['sh','-c','chmod -R 777 /prometheus;chmod -R 777 /etc']
        volumeMounts:
        - mountPath: /prometheus
          name: prometheus-storage-volume
        - mountPath: /etc/localtime
          name: timezone
      containers:
      - name: prometheus
        image: prom/prometheus:v2.45.0
        command:
          - prometheus
          - --config.file=/etc/prometheus/prometheus.yml
          - --storage.tsdb.path=/prometheus
          - --storage.tsdb.retention=720h
          - --web.enable-lifecycle
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus/
        - mountPath: /prometheus/
          name: prometheus-storage-volume
        - name: timezone
          mountPath: /etc/localtime
        - name: k8s-certs
          mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
      - name: alertmanager
        image: prom/alertmanager:v0.20.0
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        - "--log.level=debug"
        ports:
        - containerPort: 9093
          protocol: TCP
          name: alertmanager
        volumeMounts:
        - name: alertmanager-config
          mountPath: /etc/alertmanager
        - name: alertmanager-storage
          mountPath: /alertmanager
        - name: localtime
          mountPath: /etc/localtime
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
            defaultMode: 0777
        - name: prometheus-storage-volume
          hostPath:
            path: /data
            type: DirectoryOrCreate
        - name: k8s-certs
          secret:
            secretName: etcd-certs
        - name: timezone
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
        - name: alertmanager-config
          configMap:
            name: alertmanager
        - name: alertmanager-storage
          hostPath:
            path: /data/alertmanager
            type: DirectoryOrCreate
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
  
  
  
[root@master01 prometheus]# kubectl apply -f prometheus-deploy.yaml 
deployment.apps/prometheus-server created
[root@master01 prometheus]# kubectl apply -f prometheus-svc.yaml 
service/prometheus created
[root@master01 prometheus]# kubectl apply -f prometheus-alter.yaml 
service/alertmanager created


## Create the Secret resource (the etcd certificates mounted by the deployment)
[root@master01 prometheus]# kubectl -n monitor-sa create secret generic etcd-certs --from-file=/etc/kubernetes/pki/etcd/server.key --from-file=/etc/kubernetes/pki/etcd/server.crt --from-file=/etc/kubernetes/pki/etcd/ca.crt
secret/etcd-certs created


[root@master01 prometheus]# kubectl describe pod -n monitor-sa 



## Prometheus startup status
[root@master01 prometheus]# kubectl get pod -n monitor-sa 
NAME                                 READY   STATUS    RESTARTS   AGE
node-exporter-487lb                  1/1     Running   0          3h50m
node-exporter-76nkz                  1/1     Running   0          3h51m
node-exporter-jj92l                  1/1     Running   0          3h50m
prometheus-server-55d866cb44-6n2bf   2/2     Running   0          4m4s

## Check the service ports in the namespace
[root@master01 prometheus]# kubectl get svc -n monitor-sa 
NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
alertmanager   NodePort   10.96.54.65   <none>        9093:30066/TCP   5m25s
prometheus     NodePort   10.96.29.5    <none>        9090:30493/TCP   5m40s
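With `type: NodePort`, each service is reachable on every node's IP at the mapped port. As a small illustrative sketch, the URLs used later in this post follow directly from the table above (192.168.168.81 is master01):

```python
def nodeport_url(node_ip, nodeport, path='/'):
    """Build the externally reachable URL for a NodePort service."""
    return f'http://{node_ip}:{nodeport}{path}'

node = '192.168.168.81'
print(nodeport_url(node, 30493))               # Prometheus web UI (9090 -> 30493)
print(nodeport_url(node, 30066, '/#/alerts'))  # Alertmanager (9093 -> 30066)
```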



## Grafana YAML file

[root@master01 prometheus]# vim pro-gra.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:7.5.11
        securityContext:
          runAsUser: 104
          runAsGroup: 107
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: false
        - mountPath: /var
          name: grafana-storage
        - mountPath: /var/lib/grafana
          name: graf-test
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
      - name: graf-test
        persistentVolumeClaim:
          claimName: grafana
---
apiVersion: v1
kind: Service
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort


[root@master01 prometheus]# kubectl apply -f pro-gra.yaml 
persistentvolumeclaim/grafana created
deployment.apps/monitoring-grafana created
service/monitoring-grafana created



[root@master01 prometheus]# kubectl get svc -n kube-system 
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   23d
monitoring-grafana   NodePort    10.96.131.109   <none>        80:30901/TCP             39s



http://192.168.168.81:30066/#/alerts

http://192.168.168.81:30493/

http://192.168.168.81:30901/

[root@master01 prometheus]# kubectl edit configmap kube-proxy -n kube-system

// Fix the kube-proxy scrape targets
kubectl edit configmap kube-proxy -n kube-system
......
metricsBindAddress: "0.0.0.0:10249"
# kube-proxy's metrics port 10249 listens only on 127.0.0.1 by default;
# it must bind to the node's interfaces so Prometheus can scrape it

configmap/kube-proxy edited


# Restart kube-proxy so the new setting takes effect
kubectl get pods -n kube-system | grep kube-proxy |awk '{print $1}' | xargs kubectl delete pods -n kube-system


[root@master01 prometheus]# kubectl get pods -n kube-system | grep kube-proxy |awk '{print $1}' | xargs kubectl delete pods -n kube-system
pod "kube-proxy-d5fnf" deleted
pod "kube-proxy-kpvs2" deleted
pod "kube-proxy-nrszf" deleted
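The reason the default `metricsBindAddress` breaks scraping is that a socket bound to 127.0.0.1 only accepts connections arriving over the loopback interface, so Prometheus running on another node can never reach port 10249. A self-contained sketch of the distinction:

```python
import socket

# Bind a listener to loopback only, the way kube-proxy's default
# metricsBindAddress (127.0.0.1:10249) does.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
loopback.listen(1)
port = loopback.getsockname()[1]

# A client on the same host can connect via 127.0.0.1 ...
client = socket.create_connection(('127.0.0.1', port), timeout=1)
client.close()
loopback.close()

# ... but binding to 0.0.0.0 is what makes the port reachable on the
# node's real interfaces too, which is why the configmap is changed
# to metricsBindAddress: "0.0.0.0:10249".
any_iface = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
any_iface.bind(('0.0.0.0', 0))
print('bound to', any_iface.getsockname()[0])
any_iface.close()
```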

In Grafana, add Prometheus as the data source using the in-cluster service DNS name:

http://prometheus.monitor-sa.svc:9090

Stress test

[root@master01 prometheus]# vim ylcs.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-test
  labels:
    hpa: test
spec:
  replicas: 1
  selector:
    matchLabels:
      hpa: test
  template:
    metadata:
      labels:
        hpa: test
    spec:
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash", "-c", "yum install -y stress --nogpgcheck && sleep 3600"]
        volumeMounts:
        - name: yum
          mountPath: /etc/yum.repos.d/
      volumes:
      - name: yum
        hostPath:
          path: /etc/yum.repos.d/




[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS             RESTARTS   AGE
hpa-test-c9b658d84-7pvc8   0/1     CrashLoopBackOff   6          10m
nfs1-76f66b958-68wpl       1/1     Running            1          13d


[root@master01 prometheus]# kubectl logs -f hpa-test-c9b658d84-7pvc8 
Loaded plugins: fastestmirror, ovl
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Determining fastest mirrors
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Unknown error"


 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: base/7/x86_64


[root@master01 prometheus]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2024-09-19 14:31:18--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving host mirrors.aliyun.com (mirrors.aliyun.com)... 114.232.93.242, 58.218.92.241, 114.232.93.243, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|114.232.93.242|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: "/etc/yum.repos.d/CentOS-Base.repo"

100%[==================================>] 2,523       --.-K/s  in 0s      

2024-09-19 14:31:18 (106 MB/s) - "/etc/yum.repos.d/CentOS-Base.repo" saved [2523/2523]

Fixing the error (the pod mounts the host's /etc/yum.repos.d via hostPath, so repairing the repo files on the host fixes yum inside the container)

[root@master01 prometheus]# kubectl delete -f ylcs.yaml 
deployment.apps "hpa-test" deleted
[root@master01 prometheus]# kubectl apply -f ylcs.yaml 
deployment.apps/hpa-test created



[root@master01 prometheus]# cd /etc/yum.repos.d/
[root@master01 yum.repos.d]# ls
backup            CentOS-Debuginfo.repo  CentOS-Vault.repo  kubernetes.repo
Centos-7.repo     CentOS-fasttrack.repo  docker-ce.repo     local.repo
CentOS-Base.repo  CentOS-Media.repo      epel.repo
CentOS-CR.repo    CentOS-Sources.repo    epel-testing.repo
[root@master01 yum.repos.d]# rm -rf local.repo 
[root@master01 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
--2024-09-19 14:38:36--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving host mirrors.aliyun.com (mirrors.aliyun.com)... 114.232.93.240, 58.218.92.243, 114.232.93.241, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|114.232.93.240|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: "/etc/yum.repos.d/CentOS-Base.repo"

100%[==================================>] 2,523       --.-K/s  in 0s      

2024-09-19 14:38:36 (73.3 MB/s) - "/etc/yum.repos.d/CentOS-Base.repo" saved [2523/2523]

[root@master01 yum.repos.d]# cd -
/opt/prometheus
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
hpa-test-c9b658d84-bs457   0/1     Error     3          50s
nfs1-76f66b958-68wpl       1/1     Running   1          13d
[root@master01 prometheus]# kubectl delete -f ylcs.yaml 
deployment.apps "hpa-test" deleted
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS        RESTARTS   AGE
hpa-test-c9b658d84-bs457   0/1     Terminating   3          56s
nfs1-76f66b958-68wpl       1/1     Running       1          13d
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS        RESTARTS   AGE
hpa-test-c9b658d84-bs457   0/1     Terminating   3          57s
nfs1-76f66b958-68wpl       1/1     Running       1          13d
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS        RESTARTS   AGE
hpa-test-c9b658d84-bs457   0/1     Terminating   3          58s
nfs1-76f66b958-68wpl       1/1     Running       1          13d
[root@master01 prometheus]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
nfs1-76f66b958-68wpl   1/1     Running   1          13d
[root@master01 prometheus]# kubectl apply -f ylcs.yaml 
deployment.apps/hpa-test created
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
hpa-test-c9b658d84-h9xvf   1/1     Running   0          1s
nfs1-76f66b958-68wpl       1/1     Running   1          13d

[root@node01 ~]# cd /etc/yum.repos.d/
[root@node01 yum.repos.d]# ls
backup            CentOS-Debuginfo.repo  CentOS-Vault.repo  kubernetes.repo
Centos-7.repo     CentOS-fasttrack.repo  docker-ce.repo     local.repo
CentOS-Base.repo  CentOS-Media.repo      epel.repo
CentOS-CR.repo    CentOS-Sources.repo    epel-testing.repo
[root@node01 yum.repos.d]# rm -rf local.repo 



[root@node02 ~]# cd /etc/yum.repos.d/
[root@node02 yum.repos.d]# ls
backup            CentOS-Debuginfo.repo  CentOS-Vault.repo  kubernetes.repo
Centos-7.repo     CentOS-fasttrack.repo  docker-ce.repo     local.repo
CentOS-Base.repo  CentOS-Media.repo      epel.repo
CentOS-CR.repo    CentOS-Sources.repo    epel-testing.repo
[root@node02 yum.repos.d]# rm -rf local.repo 
[root@node02 yum.repos.d]# ls



[root@node01 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@node02 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@master01 prometheus]# kubectl apply -f ylcs.yaml 
deployment.apps/hpa-test created
[root@master01 prometheus]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
hpa-test-c9b658d84-cqklr   1/1     Running   0          3s
nfs1-76f66b958-68wpl       1/1     Running   1          13d

[root@master01 prometheus]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
hpa-test-c9b658d84-cqklr   1/1     Running   0          110s   10.244.2.251   node02   <none>           <none>
nfs1-76f66b958-68wpl       1/1     Running   1          13d    10.244.2.173   node02   <none>           <none>


## Check the CPU usage with `top` on node02
[root@node02 yum.repos.d]#


[root@master01 prometheus]# kubectl exec -it hpa-test-c9b658d84-cqklr bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@hpa-test-c9b658d84-cqklr /]# stress -c 4
stress: info: [64] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd

