Pod Controllers in Detail [Part 5]


Table of Contents

  • 5. Pod Controllers in Detail
    • 5.1 Introduction to Pod Controllers
    • 5.2 ReplicaSet (RS)
    • 5.3 Deployment (Deploy)
    • 5.4 Horizontal Pod Autoscaler (HPA)
    • 5.5 DaemonSet (DS)
    • 5.6 Job
    • 5.7 CronJob (CJ)

5. Pod Controllers in Detail

5.1 Introduction to Pod Controllers

A Pod is the smallest management unit in Kubernetes. Based on how they are created, pods fall into two categories:

  • Self-managed pods: pods created directly in kubernetes; once deleted they are gone and are not recreated
  • Controller-managed pods: pods created by kubernetes through a controller; if such a pod is deleted, it is automatically recreated
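A quick way to see the difference (a small sketch of my own; the names mypod and mydeploy are just examples): kubectl run creates a bare, self-managed pod, while kubectl create deployment creates pods through a controller, so deleting one of its pods triggers an automatic replacement.

# Self-managed pod: once deleted, it is gone
kubectl run mypod --image=nginx:1.25.1 -n dev

# Controller-managed pods: delete one and the Deployment recreates it
kubectl create deployment mydeploy --image=nginx:1.25.1 --replicas=2 -n dev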

What is a Pod controller?

A Pod controller is an intermediate layer for managing pods. With a Pod controller you only need to tell it how many pods of what kind you want; it creates pods that meet those requirements and keeps every pod in the target state the user desires. If a pod fails while running, the controller re-arranges it according to the specified policy.

Kubernetes offers many types of pod controllers, each suited to its own scenarios. The common ones are:

  • ReplicationController: a fairly primitive pod controller; it is deprecated and has been replaced by ReplicaSet
  • ReplicaSet: keeps the replica count at the desired value, and supports scaling the number of pods and upgrading or downgrading the image version
  • Deployment: controls Pods by controlling ReplicaSets, and adds rolling updates and version rollback
  • Horizontal Pod Autoscaler: automatically adjusts the number of Pods horizontally based on cluster load, smoothing out load peaks and troughs
  • DaemonSet: runs exactly one replica on specified Nodes in the cluster; generally used for daemon-like tasks
  • Job: the pods it creates exit as soon as their task completes, without restart or recreation; used for one-off tasks
  • CronJob: its Pods handle periodic tasks and do not need to keep running in the background
  • StatefulSet: manages stateful applications
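To confirm which of these controller types your own cluster exposes (a verification sketch, not part of the original walkthrough), you can list the API resources of the relevant API groups:

# ReplicaSet, Deployment, DaemonSet, StatefulSet live in the apps group
kubectl api-resources --api-group=apps
# Job and CronJob live in the batch group
kubectl api-resources --api-group=batch
# HorizontalPodAutoscaler lives in the autoscaling group
kubectl api-resources --api-group=autoscaling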

5.2 ReplicaSet(RS)

The main job of a ReplicaSet is to keep a given number of pods running normally. It continuously monitors the state of those Pods, and as soon as a Pod fails it restarts or recreates it. It also supports scaling the number of pods and upgrading or downgrading the image version.

ReplicaSet resource manifest file:

apiVersion: apps/v1 # API version
kind: ReplicaSet # resource type
metadata: # metadata
  name: # rs name
  namespace: # namespace it belongs to
  labels: # labels
    controller: rs
spec: # detailed description
  replicas: 3 # number of replicas
  selector: # selector; specifies which pods this controller manages
    matchLabels:      # label matching rules
      app: nginx-pod
    matchExpressions: # expression matching rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template; when the replica count falls short, pods are created from this template
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.25.1
        ports:
        - containerPort: 80

The new configuration items to understand here are these options under spec:

  • replicas: the number of replicas, i.e. how many pods this RS creates; defaults to 1

  • selector: the selector, which establishes the association between the pod controller and its pods using the Label Selector mechanism

    Define labels on the pod template and a matching selector on the controller, and the controller knows which pods it manages

  • template: the template this controller uses to create pods; its content is exactly the pod definition covered in the previous chapter
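Because the controller-to-pod association is purely label based, you can preview which pods a selector would match with an ordinary label query (a small sketch assuming the app=nginx-pod label used above):

# Pods matching the matchLabels rule {app: nginx-pod}
kubectl get pod -n dev -l app=nginx-pod
# The same match written as a set-based selector, like matchExpressions
kubectl get pod -n dev -l 'app in (nginx-pod)'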

Creating a ReplicaSet

Create a pc-replicaset.yaml file with the following content:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pc-replicaset
  namespace: dev
spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - rs-pod-nginx1
  template:
    metadata:
      labels:
        app: rs-pod-nginx1
    spec:
      containers:
      - name: rs-nginx1-container
        image: nginx:1.24.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: nginx-port
          protocol: "TCP"
# Create the RS
[root@k8s-master replicasets]# pwd
/root/manifest/replicasets
[root@k8s-master replicasets]# kubectl apply -f pc-replicaset.yaml 
replicaset.apps/pc-replicaset created

# View the RS
# DESIRED: desired number of replicas
# CURRENT: current number of replicas
# READY: number of replicas that are ready to serve
[root@k8s-master ~]# kubectl get replicaset pc-replicaset -n dev -o wide
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS            IMAGES         SELECTOR
pc-replicaset   3         3         3       38s   rs-nginx1-container   nginx:1.24.0   app in (rs-pod-nginx1)
[root@k8s-master ~]# 


# View the pods created by this controller
# Note that each pod name is the controller name with a random -xxxxx suffix appended
[root@k8s-master ~]# kubectl get pod -n dev 
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-8w9t5   1/1     Running   0          114s
pc-replicaset-ds7x6   1/1     Running   0          114s
pc-replicaset-rlmhn   1/1     Running   0          114s
[root@k8s-master ~]# 

Scaling up and down

# Edit the RS replica count: change spec.replicas to 5
[root@k8s-master ~]# kubectl edit replicaset pc-replicaset -n dev
......
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pc-replicaset
  namespace: dev
spec:
  replicas: 5   # changed from the original 3 to 5
  selector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - rs-pod-nginx1
......
replicaset.apps/pc-replicaset edited

# View the pods
[root@k8s-master ~]# kubectl get pod -n dev 
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-6zntj   1/1     Running   0          8s
pc-replicaset-7g7xs   1/1     Running   0          8s
pc-replicaset-8w9t5   1/1     Running   0          3m35s
pc-replicaset-ds7x6   1/1     Running   0          3m35s
pc-replicaset-rlmhn   1/1     Running   0          3m35s
[root@k8s-master ~]# 

# Of course, this can also be done directly with a command
# Use the scale command; --replicas=n specifies the target count
[root@k8s-master ~]# kubectl scale --replicas=2 rs pc-replicaset -n dev
replicaset.apps/pc-replicaset scaled
[root@k8s-master ~]# 


# Right after the command finishes, check immediately: 3 pods are already preparing to exit
[root@k8s-master ~]# kubectl get pod -n dev -w
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-6zntj   1/1     Running   0          4m14s  # will be deleted
pc-replicaset-7g7xs   1/1     Running   0          4m14s  # will be deleted
pc-replicaset-8w9t5   1/1     Running   0          7m41s  # kept
pc-replicaset-ds7x6   1/1     Running   0          7m41s  # kept
pc-replicaset-rlmhn   1/1     Running   0          7m41s  # will be deleted
pc-replicaset-6zntj   1/1     Terminating   0          6m20s
pc-replicaset-7g7xs   1/1     Terminating   0          6m20s
pc-replicaset-rlmhn   1/1     Terminating   0          9m47s
pc-replicaset-rlmhn   0/1     Terminating   0          9m48s
pc-replicaset-7g7xs   0/1     Terminating   0          6m21s
pc-replicaset-6zntj   0/1     Terminating   0          6m21s
pc-replicaset-rlmhn   0/1     Terminating   0          9m48s
pc-replicaset-6zntj   0/1     Terminating   0          6m21s
pc-replicaset-7g7xs   0/1     Terminating   0          6m21s
pc-replicaset-rlmhn   0/1     Terminating   0          9m48s
pc-replicaset-rlmhn   0/1     Terminating   0          9m48s
pc-replicaset-7g7xs   0/1     Terminating   0          6m21s
pc-replicaset-7g7xs   0/1     Terminating   0          6m21s
pc-replicaset-6zntj   0/1     Terminating   0          6m21s
pc-replicaset-6zntj   0/1     Terminating   0          6m21s


# After a moment, only 2 remain
[root@k8s-master ~]# kubectl get pod -n dev 
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-8w9t5   1/1     Running   0          13m
pc-replicaset-ds7x6   1/1     Running   0          13m
[root@k8s-master ~]# 

Image upgrade

# Check the image version: nginx:1.24.0
[root@k8s-master ~]# kubectl get replicaset pc-replicaset -n dev -o wide
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS            IMAGES         SELECTOR
pc-replicaset   2         2         2       14m   rs-nginx1-container   nginx:1.24.0   app in (rs-pod-nginx1)
[root@k8s-master ~]# 

# Edit the RS container image to - image: nginx:1.25.1
[root@k8s-master ~]# kubectl edit replicaset pc-replicaset -n dev
replicaset.apps/pc-replicaset edited
[root@k8s-master ~]# 


# Check again: the image version has changed
[root@k8s-master ~]# kubectl get replicaset pc-replicaset -n dev -o wide
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS            IMAGES         SELECTOR
pc-replicaset   2         2         2       16m   rs-nginx1-container   nginx:1.25.1   app in (rs-pod-nginx1)
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get pod -n dev
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-8w9t5   1/1     Running   0          16m
pc-replicaset-ds7x6   1/1     Running   0          16m
[root@k8s-master ~]# 

# The same thing can also be done with the [set] command
# kubectl set image rs <rs-name> <container-name>=<image:tag> -n <namespace>
[root@k8s-master ~]# kubectl set image replicaset pc-replicaset rs-nginx1-container=nginx:1.24.0 -n dev
replicaset.apps/pc-replicaset image updated
[root@k8s-master ~]# 

# Check again: the image version has changed
[root@k8s-master ~]# kubectl get rs pc-replicaset -n dev -w -o wide
NAME            DESIRED   CURRENT   READY   AGE   CONTAINERS            IMAGES         SELECTOR
pc-replicaset   2         2         2       21m   rs-nginx1-container   nginx:1.25.1   app in (rs-pod-nginx1)
pc-replicaset   2         2         2       21m   rs-nginx1-container   nginx:1.24.0   app in (rs-pod-nginx1)
pc-replicaset   2         2         2       21m   rs-nginx1-container   nginx:1.24.0   app in (rs-pod-nginx1)

Deleting a ReplicaSet

# kubectl delete removes this RS together with the Pods it manages
# Before deleting the RS, Kubernetes scales its replicas down to 0, waits for all Pods to be deleted, and then deletes the RS object
[root@k8s-master ~]# kubectl delete replicaset pc-replicaset -n dev
replicaset.apps "pc-replicaset" deleted
[root@k8s-master ~]# 

[root@k8s-master ~]# kubectl get rs pc-replicaset -n dev -o wide 
Error from server (NotFound): replicasets.apps "pc-replicaset" not found
[root@k8s-master ~]# 



[root@k8s-master replicasets]# kubectl create -f pc-replicaset.yaml 
replicaset.apps/pc-replicaset created
[root@k8s-master ~]# kubectl get pod -n dev
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-gclzb   1/1     Running   0          3m25s
pc-replicaset-jjxwj   1/1     Running   0          3m25s
pc-replicaset-tpgcs   1/1     Running   0          3m25s

# To delete only the RS object and keep the Pods, add --cascade=false (now --cascade=orphan) to kubectl delete (not recommended).
[root@k8s-master ~]# kubectl delete rs pc-replicaset -n dev --cascade=false
warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan.
replicaset.apps "pc-replicaset" deleted
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get rs pc-replicaset -n dev # the ReplicaSet controller no longer exists
Error from server (NotFound): replicasets.apps "pc-replicaset" not found
[root@k8s-master ~]# kubectl get pod -n dev  # the Pods are still there
NAME                  READY   STATUS    RESTARTS   AGE
pc-replicaset-gclzb   1/1     Running   0          4m39s
pc-replicaset-jjxwj   1/1     Running   0          4m39s
pc-replicaset-tpgcs   1/1     Running   0          4m39s
[root@k8s-master ~]# 


# You can also delete via the yaml file directly (recommended)
[root@k8s-master replicasets]# kubectl delete -f pc-replicaset.yaml 
replicaset.apps "pc-replicaset" deleted
[root@k8s-master replicasets]# pwd
/root/manifest/replicasets

5.3 Deployment(Deploy)

To better solve the problem of service orchestration, Kubernetes introduced the Deployment controller in v1.2. Notably, this controller does not manage pods directly; it manages Pods indirectly by managing ReplicaSets: Deployment manages ReplicaSet, and ReplicaSet manages Pods. A Deployment is therefore more powerful than a ReplicaSet.

The main features of Deployment are:

  • Supports all features of ReplicaSet
  • Supports pausing and resuming a rollout
  • Supports rolling updates and version rollback

Deployment resource manifest file:

apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata: # metadata
  name: # deploy name
  namespace: # namespace it belongs to
  labels: # labels
    controller: deploy
spec: # detailed description
  replicas: 3 # number of replicas
  revisionHistoryLimit: 3 # number of old revisions to keep for rollback
  paused: false # whether the deployment is paused; default false
  progressDeadlineSeconds: 500 # deployment progress deadline in seconds; default 600
  strategy: # update strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate: # rolling update settings
      maxSurge: 30% # maximum number of extra replicas allowed during the update; percentage or integer
      maxUnavailable: 30% # maximum number of unavailable Pods during the update; percentage or integer
  selector: # selector; specifies which pods this controller manages
    matchLabels:      # label matching rules
      app: nginx-pod
    matchExpressions: # expression matching rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template; when the replica count falls short, pods are created from this template
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.25.1
        ports:
        - containerPort: 80

Creating a Deployment

Create pc-deployment.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx-containers
        image: nginx:1.20.1
        imagePullPolicy: IfNotPresent
# Create the deployment
[root@k8s-master ~]# kubectl apply -f manifest/deployment/pc-deployment.yaml --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/pc-deployment created


# View the deployment
# UP-TO-DATE: number of pods at the latest version
# AVAILABLE:  number of pods currently available
[root@k8s-master deployment]# kubectl get deployment pc-deployment -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   3/3     3            3           25s


# View the RS
# Note the RS name is the deployment name with a random hash suffix appended
[root@k8s-master ~]# kubectl get rs -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5489bd5584   3         3         3       43s

# View the pods
[root@k8s-master ~]# kubectl get pod -n dev -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE        ......
pc-deployment-5489bd5584-fbtnj   1/1     Running   0          69s   10.244.2.163   k8s-node2   ......
pc-deployment-5489bd5584-h9scd   1/1     Running   0          69s   10.244.2.162   k8s-node2   ......
pc-deployment-5489bd5584-tm4hz   1/1     Running   0          69s   10.244.2.161   k8s-node2   ......

[root@k8s-master ~]# kubectl exec pc-deployment-5489bd5584-fbtnj -itn dev -- /bin/sh
# echo '10.244.2.163' > /usr/share/nginx/html/index.html
# exit
[root@k8s-master ~]# curl 10.244.2.163
10.244.2.163
[root@k8s-master ~]# kubectl exec pc-deployment-5489bd5584-h9scd -itn dev -- /bin/sh
# echo '10.244.2.162' > /usr/share/nginx/html/index.html
# exit
[root@k8s-master ~]# kubectl exec pc-deployment-5489bd5584-tm4hz -itn dev -- /bin/sh
# echo '10.244.2.161' > /usr/share/nginx/html/index.html
# exit
[root@k8s-master ~]# curl 10.244.2.161 
10.244.2.161
[root@k8s-master ~]# 

Scaling up and down

# Change the replica count to 5
[root@k8s-master ~]# kubectl scale --replicas=5 deployment pc-deployment -n dev
deployment.apps/pc-deployment scaled
[root@k8s-master ~]# 


# View the deployment
[root@k8s-master ~]# kubectl get deployment pc-deployment -n dev -o wide -w
NAME            READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS         IMAGES         SELECTOR
pc-deployment   5/5     5            5           7m33s   nginx-containers   nginx:1.20.1   app=nginx-deployment

# View the pods
[root@k8s-master ~]# kubectl get pod -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5489bd5584-2bqpk   1/1     Running   0          85s
pc-deployment-5489bd5584-fbtnj   1/1     Running   0          7m46s
pc-deployment-5489bd5584-h9scd   1/1     Running   0          7m46s
pc-deployment-5489bd5584-l6qtr   1/1     Running   0          85s
pc-deployment-5489bd5584-tm4hz   1/1     Running   0          7m46s
[root@k8s-master ~]# 


# Edit the deployment replica count: change spec.replicas to 4
[root@k8s-master ~]# kubectl edit deployment pc-deployment -n dev
deployment.apps/pc-deployment edited

# View the pods; one of them has already been terminated
[root@k8s-master ~]# kubectl get pod -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5489bd5584-2bqpk   1/1     Running   0          64s
pc-deployment-5489bd5584-fbtnj   1/1     Running   0          7m25s
pc-deployment-5489bd5584-h9scd   1/1     Running   0          7m25s
pc-deployment-5489bd5584-l6qtr   1/1     Running   0          64s
pc-deployment-5489bd5584-tm4hz   1/1     Running   0          7m25s
pc-deployment-5489bd5584-2bqpk   1/1     Terminating   0          107s
pc-deployment-5489bd5584-2bqpk   0/1     Terminating   0          107s
pc-deployment-5489bd5584-2bqpk   0/1     Terminating   0          108s
pc-deployment-5489bd5584-2bqpk   0/1     Terminating   0          108s
pc-deployment-5489bd5584-2bqpk   0/1     Terminating   0          108s

Image updates

Deployment supports two update strategies: Recreate and RollingUpdate. The strategy type is set via strategy, which supports two attributes:

strategy: specifies how new Pods replace old Pods; supports two attributes:
  type: the strategy type; two strategies are supported
    Recreate: all existing Pods are killed before the new Pods are created
    RollingUpdate: rolling update; a portion of old Pods is killed while new ones are started, so two Pod versions exist during the update
  rollingUpdate: takes effect when type is RollingUpdate; sets the RollingUpdate parameters; supports two attributes:
    maxUnavailable: the maximum number of Pods allowed to be unavailable during the upgrade; default 25%.
    maxSurge: the maximum number of Pods allowed above the desired count during the upgrade; default 25%.
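A quick worked example of how these two numbers bound a rollout (my own arithmetic, assuming replicas: 4 and the 25% defaults): maxSurge is rounded up and maxUnavailable is rounded down, so

maxSurge       = ceil(4 * 25%)  = 1   # at most 4 + 1 = 5 pods exist during the update
maxUnavailable = floor(4 * 25%) = 1   # at least 4 - 1 = 3 pods stay available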

Recreate update

  1. Create pc-deployment-Recreate.yaml, adding the update strategy under the spec node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  strategy: # strategy
    type: Recreate # recreate update
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx-containers
        image: nginx:1.20.0
        imagePullPolicy: IfNotPresent
  2. Create the deploy and verify
# Apply it so it takes effect
[root@k8s-master deployment]# pwd
/root/inventory/deployment
[root@k8s-master deployment]# kubectl apply -f pc-deployment-Recreate.yaml --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/pc-deployment created

# Check the image version
[root@k8s-master ~]# kubectl get deployment pc-deployment -n dev -o wide
NAME            READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS         IMAGES         SELECTOR
pc-deployment   3/3     3            3           3m29s   nginx-containers   nginx:1.20.0   app=nginx-deployment
[root@k8s-master ~]# 

# View the Pods
[root@k8s-master ~]# kubectl get pod -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-758f8df9d5-ldcbk   1/1     Running   0          2m25s
pc-deployment-758f8df9d5-lhtlw   1/1     Running   0          2m25s
pc-deployment-758f8df9d5-slcsv   1/1     Running   0          2m25s


# Change the image
[root@k8s-master ~]# kubectl set image deployment pc-deployment nginx-containers=nginx:1.20.1 -n dev
deployment.apps/pc-deployment image updated
[root@k8s-master ~]# 



# Watch the upgrade process
[root@k8s-master ~]# kubectl get pod -n dev -w
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-758f8df9d5-ldcbk   1/1     Running   0          2m25s
pc-deployment-758f8df9d5-lhtlw   1/1     Running   0          2m25s
pc-deployment-758f8df9d5-slcsv   1/1     Running   0          2m25s

# The old-version Pods (nginx:1.20.0) are deleted first
pc-deployment-758f8df9d5-slcsv   1/1     Terminating   0          7m58s
pc-deployment-758f8df9d5-lhtlw   1/1     Terminating   0          7m58s
pc-deployment-758f8df9d5-ldcbk   1/1     Terminating   0          7m58s
pc-deployment-758f8df9d5-ldcbk   0/1     Terminating   0          7m59s
pc-deployment-758f8df9d5-lhtlw   0/1     Terminating   0          7m59s
pc-deployment-758f8df9d5-slcsv   0/1     Terminating   0          7m59s
pc-deployment-758f8df9d5-slcsv   0/1     Terminating   0          8m
pc-deployment-758f8df9d5-lhtlw   0/1     Terminating   0          8m
pc-deployment-758f8df9d5-lhtlw   0/1     Terminating   0          8m
pc-deployment-758f8df9d5-lhtlw   0/1     Terminating   0          8m
pc-deployment-758f8df9d5-slcsv   0/1     Terminating   0          8m
pc-deployment-758f8df9d5-slcsv   0/1     Terminating   0          8m
pc-deployment-758f8df9d5-ldcbk   0/1     Terminating   0          8m
pc-deployment-758f8df9d5-ldcbk   0/1     Terminating   0          8m
pc-deployment-758f8df9d5-ldcbk   0/1     Terminating   0          8m

# Then the new Pods (nginx:1.20.1) wait to be created
pc-deployment-5489bd5584-l4vrz   0/1     Pending       0          0s
pc-deployment-5489bd5584-srl6w   0/1     Pending       0          0s
pc-deployment-5489bd5584-l4vrz   0/1     Pending       0          0s
pc-deployment-5489bd5584-xfzhz   0/1     Pending       0          0s
pc-deployment-5489bd5584-srl6w   0/1     Pending       0          0s
pc-deployment-5489bd5584-xfzhz   0/1     Pending       0          0s

# Pods being created
pc-deployment-5489bd5584-l4vrz   0/1     ContainerCreating   0          0s
pc-deployment-5489bd5584-srl6w   0/1     ContainerCreating   0          0s
pc-deployment-5489bd5584-xfzhz   0/1     ContainerCreating   0          0s

# Pods running
pc-deployment-5489bd5584-xfzhz   1/1     Running             0          1s
pc-deployment-5489bd5584-l4vrz   1/1     Running             0          1s
pc-deployment-5489bd5584-srl6w   1/1     Running             0          1s

# The image upgrade as seen on the deployment
[root@k8s-master ~]# kubectl get deployment pc-deployment -n dev -o wide -w
NAME            READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS         IMAGES         SELECTOR
pc-deployment   3/3     3            3           5m12s   nginx-containers   nginx:1.20.0   app=nginx-deployment
pc-deployment   3/3     3            3           7m58s   nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   3/3     0            3           7m58s   nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   0/3     0            0           7m59s   nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   0/3     0            0           8m      nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   0/3     0            0           8m      nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   0/3     3            0           8m      nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   1/3     3            1           8m1s    nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   2/3     3            2           8m1s    nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   3/3     3            3           8m1s    nginx-containers   nginx:1.20.1   app=nginx-deployment

Rolling update

  1. Edit pc-deployment.yaml, adding the update strategy under the spec node
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  strategy: # strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate:
      maxSurge: 25% 
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx-containers
        image: nginx:1.20.2      # change the image to 1.20.2
        imagePullPolicy: IfNotPresent
  2. Create the deploy and verify
# Apply the update so it takes effect
[root@k8s-master ~]# cd  inventory/deployment/
[root@k8s-master deployment]# kubectl apply -f pc-deployment-Recreate.yaml --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/pc-deployment configured
[root@k8s-master deployment]# 



# Change the image
[root@k8s-master ~]# kubectl set image deployment pc-deployment nginx-containers=nginx:1.22.1 -n dev
deployment.apps/pc-deployment image updated

# Watch the image upgrade
[root@k8s-master ~]# kubectl get deployment pc-deployment -n dev -o wide -w
NAME            READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS         IMAGES         SELECTOR
pc-deployment   3/3     3            3           5m12s   nginx-containers   nginx:1.20.0   app=nginx-deployment
pc-deployment   3/3     3            3           7m58s   nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   3/3     0            3           7m58s   nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   0/3     0            0           7m59s   nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   0/3     0            0           8m      nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   0/3     0            0           8m      nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   0/3     3            0           8m      nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   1/3     3            1           8m1s    nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   2/3     3            2           8m1s    nginx-containers   nginx:1.20.1   app=nginx-deployment
pc-deployment   3/3     3            3           8m1s    nginx-containers   nginx:1.20.1   app=nginx-deployment

pc-deployment   3/3     3            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   3/3     3            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   3/3     0            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   3/3     1            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   4/3     1            4           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   3/3     1            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   3/3     2            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   4/3     2            4           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   3/3     2            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   3/3     3            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   4/3     3            4           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment
pc-deployment   3/3     3            3           39m     nginx-containers   nginx:1.20.2   app=nginx-deployment

# It finally settles at 3 replicas


# Watch the upgrade process
[root@k8s-master ~]# kubectl get pod -n dev -w
NAME                            READY   STATUS    RESTARTS   AGE
pc-deployment-5489bd5584-xfzhz   1/1     Running             0          1s
pc-deployment-5489bd5584-l4vrz   1/1     Running             0          1s
pc-deployment-5489bd5584-srl6w   1/1     Running             0          1s

pc-deployment-d9d486c6c-qbwbc    0/1     Pending             0          0s
pc-deployment-d9d486c6c-qbwbc    0/1     Pending             0          0s
pc-deployment-d9d486c6c-qbwbc    0/1     ContainerCreating   0          0s
pc-deployment-d9d486c6c-qbwbc    1/1     Running             0          1s

pc-deployment-5489bd5584-l4vrz   1/1     Terminating         0          31m
pc-deployment-5489bd5584-l4vrz   0/1     Terminating         0          31m
pc-deployment-5489bd5584-l4vrz   0/1     Terminating         0          31m
pc-deployment-5489bd5584-l4vrz   0/1     Terminating         0          31m
pc-deployment-5489bd5584-l4vrz   0/1     Terminating         0          31m

pc-deployment-d9d486c6c-448ml    0/1     Pending             0          0s
pc-deployment-d9d486c6c-448ml    0/1     Pending             0          0s
pc-deployment-d9d486c6c-448ml    0/1     ContainerCreating   0          0s
pc-deployment-d9d486c6c-448ml    1/1     Running             0          1s

pc-deployment-5489bd5584-srl6w   1/1     Terminating         0          31m
pc-deployment-5489bd5584-srl6w   0/1     Terminating         0          31m
pc-deployment-5489bd5584-srl6w   0/1     Terminating         0          31m
pc-deployment-5489bd5584-srl6w   0/1     Terminating         0          31m

pc-deployment-d9d486c6c-z7c9w    0/1     Pending             0          0s
pc-deployment-d9d486c6c-z7c9w    0/1     Pending             0          0s
pc-deployment-d9d486c6c-z7c9w    0/1     ContainerCreating   0          0s
pc-deployment-5489bd5584-srl6w   0/1     Terminating         0          31m
pc-deployment-d9d486c6c-z7c9w    1/1     Running             0          1s

pc-deployment-5489bd5584-xfzhz   1/1     Terminating         0          31m
pc-deployment-5489bd5584-xfzhz   0/1     Terminating         0          31m
pc-deployment-5489bd5584-xfzhz   0/1     Terminating         0          31m
pc-deployment-5489bd5584-xfzhz   0/1     Terminating         0          31m
pc-deployment-5489bd5584-xfzhz   0/1     Terminating         0          31m


[root@k8s-master ~]# kubectl get pod -n dev -w
NAME                            READY   STATUS    RESTARTS   AGE
pc-deployment-d9d486c6c-448ml   1/1     Running   0          4m29s
pc-deployment-d9d486c6c-qbwbc   1/1     Running   0          4m30s
pc-deployment-d9d486c6c-z7c9w   1/1     Running   0          4m28s

# At this point the new-version pods have been created and the old-version pods have been destroyed
# The process is rolling: pods are destroyed and created at the same time

During the rolling update, new-version Pods are brought up step by step while old-version Pods are terminated, so the two versions briefly coexist until the rollout completes.

How the RS changes during an image update

# View the RS: the original RS still exists, but its pod count has dropped to 0, and a new RS has been created with 3 pods
# This is exactly what makes version rollback possible for a deployment; it is explained in detail below
[root@k8s-master ~]# kubectl get replicaset -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5489bd5584   0         0         0       37m
pc-deployment-758f8df9d5   0         0         0       45m
pc-deployment-d9d486c6c    3         3         3       5m18s

Version rollback

Deployment supports pausing and resuming an upgrade, rolling back versions, and more. Let's look at these in detail.

kubectl rollout: version-upgrade related features; it supports the following subcommands:

  • status: show the current rollout status
  • history: show the rollout history
  • pause: pause the rollout
  • resume: resume a paused rollout
  • restart: restart the rollout
  • undo: roll back to the previous revision (use --to-revision to roll back to a specific revision)
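The general command forms look like this (a syntax sketch; substitute your own deployment name and namespace):

kubectl rollout status  deployment <deploy-name> -n <namespace>
kubectl rollout history deployment <deploy-name> -n <namespace>
kubectl rollout pause   deployment <deploy-name> -n <namespace>
kubectl rollout resume  deployment <deploy-name> -n <namespace>
kubectl rollout restart deployment <deploy-name> -n <namespace>
kubectl rollout undo    deployment <deploy-name> -n <namespace> --to-revision=1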
# Check the current rollout status
[root@k8s-master ~]# kubectl rollout status deployment pc-deployment -n dev
deployment "pc-deployment" successfully rolled out

# View the rollout history
[root@k8s-master ~]# kubectl rollout history deployment -n dev
deployment.apps/pc-deployment 
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=pc-deployment-Recreate.yaml --record=true
2         kubectl apply --filename=pc-deployment-Recreate.yaml --record=true
3         kubectl apply --filename=pc-deployment-Recreate.yaml --record=true

# There are three revision records, which means two upgrades have been performed

# Roll back the version
# --to-revision=1 rolls straight back to revision 1; if the option is omitted, it rolls back to the previous revision, i.e. revision 2
[root@k8s-master ~]# kubectl rollout undo deployment pc-deployment --to-revision=1 -n dev
deployment.apps/pc-deployment rolled back
[root@k8s-master ~]# 


# Checking the nginx image version confirms we are back at revision 1
[root@k8s-master ~]# kubectl get deployment pc-deployment -n dev -o wide -w
NAME            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES         SELECTOR
pc-deployment   3/3     3            3           49m   nginx-containers   nginx:1.20.0   app=nginx-deployment


# View the RS: the first RS now has 3 pods running, while the RS of the other two revisions have no pods running
# A deployment can roll back versions precisely because it keeps these historical RS around:
# to roll back to a given revision, it simply scales the current revision's pods down to 0 and scales the target revision's pods up to the desired count
[root@k8s-master ~]# kubectl get replicaset -n dev
NAME                       DESIRED   CURRENT   READY   AGE
pc-deployment-5489bd5584   0         0         0       41m
pc-deployment-758f8df9d5   3         3         3       49m
pc-deployment-d9d486c6c    0         0         0       10m
[root@k8s-master ~]# 

Canary release

The Deployment controller offers fine-grained control over the update process, such as pausing (pause) and resuming (resume) an update.

For example: once a first batch of new Pods is created, pause the update immediately. At that point only a small portion of the application runs the new version while the majority still runs the old one. Then route a small share of user requests to the new-version Pods and observe whether they run stably as expected. If everything looks good, resume and finish the remaining rolling update; otherwise, roll back immediately. This is what is known as a canary release.
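Summarized as commands, the canary flow is roughly the following sketch (it uses the deployment name from the example below; adjust it to your own environment):

# 1. Push the new image and pause the rollout right away
kubectl set image deployment pc-deployment-nginx nginx-containers=nginx:1.21.1 -n dev
kubectl rollout pause deployment pc-deployment-nginx -n dev
# 2. Observe the small batch of new-version pods; if they are healthy, resume
kubectl rollout resume deployment pc-deployment-nginx -n dev
# 2'. Otherwise, roll back immediately
kubectl rollout undo deployment pc-deployment-nginx -n dev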

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment-nginx
  namespace: dev
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx-containers
        image: nginx:1.21.0
        imagePullPolicy: IfNotPresent
# Apply it
[root@k8s-master deployment]# kubectl apply -f pc-deployment-nginx1.yaml --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/pc-deployment-nginx created

# View the Pods
[root@k8s-master ~]# kubectl get pod -n dev
NAME                                  READY   STATUS    RESTARTS   AGE
pc-deployment-nginx-85b67ff56-gt685   1/1     Running   0          27s
pc-deployment-nginx-85b67ff56-p5svt   1/1     Running   0          27s
pc-deployment-nginx-85b67ff56-zqg4f   1/1     Running   0          27s

# View the deployment
[root@k8s-master ~]# kubectl get deployment -n dev -o wide
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES         SELECTOR
pc-deployment-nginx   3/3     3            3           67s   nginx-containers   nginx:1.21.0   app=nginx-deployment
[root@k8s-master ~]# 


# Update the deployment's image and immediately pause the deployment
[root@k8s-master ~]# kubectl set image deployment pc-deployment-nginx nginx-containers=nginx:1.21.1 -n dev && kubectl rollout pause deployment pc-deployment-nginx -n dev
deployment.apps/pc-deployment-nginx image updated
deployment.apps/pc-deployment-nginx paused
[root@k8s-master ~]# 


# Watch the update status
[root@k8s-master ~]# kubectl rollout status deployment pc-deployment-nginx -n dev
Waiting for deployment "pc-deployment-nginx" rollout to finish: 1 out of 3 new replicas have been updated...

# Monitoring the update, you can see one new pod has been added, but no old pod has been deleted as would normally happen, precisely because of the pause command

[root@k8s-master ~]#  kubectl get replicaset -n dev -o wide
NAME                             DESIRED   CURRENT   READY   AGE     CONTAINERS         IMAGES         SELECTOR
pc-deployment-nginx-557bbbd685   1         1         1       45s     nginx-containers   nginx:1.21.1   ......
pc-deployment-nginx-85b67ff56    3         3         3       7m49s   nginx-containers   nginx:1.21.0   ......

[root@k8s-master ~]# kubectl get pod  -n dev
NAME                                   READY   STATUS    RESTARTS   AGE
pc-deployment-nginx-557bbbd685-wffhj   1/1     Running   0          2m1s
pc-deployment-nginx-85b67ff56-gt685    1/1     Running   0          9m5s
pc-deployment-nginx-85b67ff56-p5svt    1/1     Running   0          9m5s
pc-deployment-nginx-85b67ff56-zqg4f    1/1     Running   0          9m5s
[root@k8s-master ~]# 

# Check the image versions
[root@k8s-master ~]# kubectl describe pod pc-deployment-nginx-557bbbd685-wffhj  -n dev | grep -i image:
    Image:          nginx:1.21.1
[root@k8s-master ~]# kubectl describe pod pc-deployment-nginx-85b67ff56-gt685  -n dev | grep -i image:
    Image:          nginx:1.21.0
[root@k8s-master ~]# kubectl describe pod pc-deployment-nginx-85b67ff56-p5svt  -n dev | grep -i image:
    Image:          nginx:1.21.0
[root@k8s-master ~]# kubectl describe pod pc-deployment-nginx-85b67ff56-zqg4f  -n dev | grep -i image:
    Image:          nginx:1.21.0
[root@k8s-master ~]# 


# Once the updated pod is confirmed to be fine, resume the update
[root@k8s-master ~]# kubectl rollout resume deploy pc-deployment-nginx -n dev
deployment.apps/pc-deployment-nginx resumed


# Check the final state of the update
[root@k8s-master ~]#  kubectl get rs -n dev -o wide
NAME                             DESIRED   CURRENT   READY   AGE    CONTAINERS         IMAGES         SELECTOR
pc-deployment-nginx-557bbbd685   3         3         3       8m7s   nginx-containers   nginx:1.21.1   ...
pc-deployment-nginx-85b67ff56    0         0         0       15m    nginx-containers   nginx:1.21.0   ...

[root@k8s-master ~]# kubectl rollout status deployment pc-deployment-nginx -n dev
Waiting for deployment "pc-deployment-nginx" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "pc-deployment-nginx" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment-nginx" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment-nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment-nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment-nginx" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "pc-deployment-nginx" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "pc-deployment-nginx" rollout to finish: 1 old replicas are pending termination...
deployment "pc-deployment-nginx" successfully rolled out  
# Image update complete

# Watch the Pods
[root@k8s-master ~]# kubectl get pod  -n dev -w
NAME                                   READY   STATUS    RESTARTS   AGE
pc-deployment-nginx-557bbbd685-wffhj   1/1     Running   0          2m37s  # image version 1.21.1
pc-deployment-nginx-85b67ff56-gt685    1/1     Running   0          9m41s
pc-deployment-nginx-85b67ff56-p5svt    1/1     Running   0          9m41s
pc-deployment-nginx-85b67ff56-zqg4f    1/1     Running   0          9m41s

pc-deployment-nginx-85b67ff56-gt685    1/1     Terminating   0          13m   
pc-deployment-nginx-557bbbd685-94n59   0/1     Pending       0          0s
pc-deployment-nginx-557bbbd685-94n59   0/1     Pending       0          0s
pc-deployment-nginx-557bbbd685-94n59   0/1     ContainerCreating   0          0s
pc-deployment-nginx-85b67ff56-gt685    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-gt685    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-gt685    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-gt685    0/1     Terminating         0          13m
pc-deployment-nginx-557bbbd685-94n59   1/1     Running             0          2s
pc-deployment-nginx-85b67ff56-zqg4f    1/1     Terminating         0          13m
pc-deployment-nginx-557bbbd685-b8xjf   0/1     Pending             0          0s
pc-deployment-nginx-557bbbd685-b8xjf   0/1     Pending             0          0s
pc-deployment-nginx-557bbbd685-b8xjf   0/1     ContainerCreating   0          0s
pc-deployment-nginx-85b67ff56-zqg4f    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-zqg4f    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-zqg4f    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-zqg4f    0/1     Terminating         0          13m
pc-deployment-nginx-557bbbd685-b8xjf   1/1     Running             0          1s
pc-deployment-nginx-85b67ff56-p5svt    1/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-p5svt    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-p5svt    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-p5svt    0/1     Terminating         0          13m
pc-deployment-nginx-85b67ff56-p5svt    0/1     Terminating         0          13m



[root@k8s-master ~]#  kubectl get pods -n dev
NAME                                   READY   STATUS    RESTARTS   AGE
pc-deployment-nginx-557bbbd685-94n59   1/1     Running   0          2m47s
pc-deployment-nginx-557bbbd685-b8xjf   1/1     Running   0          2m45s
pc-deployment-nginx-557bbbd685-wffhj   1/1     Running   0          8m59s
[root@k8s-master ~]# 

Deleting a Deployment

# Deleting the deployment also deletes the RS and pods under it
[root@k8s-master ~]# kubectl delete -f pc-deployment.yaml
deployment.apps "pc-deployment" deleted

5.4 Horizontal Pod Autoscaler(HPA)

In the previous sections we could already scale Pods manually by running kubectl scale, but that clearly falls short of Kubernetes' goal of automation and intelligence. Kubernetes aims to adjust the number of pods automatically by monitoring Pod usage, and that is what the Horizontal Pod Autoscaler (HPA) controller is for.

HPA obtains the utilization of each Pod, compares it against the target defined in the HPA, computes the concrete scaling amount, and finally adjusts the number of Pods. Like a Deployment, an HPA is itself a Kubernetes resource object: it tracks and analyzes the load of all target Pods managed by the referenced controller to determine whether the target replica count needs to be adjusted. That is how HPA works.
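The scaling decision itself follows a simple proportional rule (this formula comes from the upstream HPA documentation rather than the original text): the desired replica count is the current count scaled by the ratio of the current metric value to the target, rounded up and clamped between minReplicas and maxReplicas.

desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )
# e.g. 1 replica at 341% CPU with a 3% target -> ceil(1 * 341 / 3) = 114, clamped to maxReplicas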

Next, let's run an experiment.

1 Install metrics-server

metrics-server collects resource usage information from the cluster.

# On GitHub, search for metrics-server
[root@k8s-master ~]# wget  https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
...... (output omitted)
100%[================================================================================================>] 4,186       2.94KB/s   in 1.4s   

2023-12-17 13:56:04 (2.94 KB/s) - ‘components.yaml’ saved [4186/4186]
[root@k8s-master ~]# ls
anaconda-ks.cfg  components.yaml  init  inventory  kube-flannel.yml
[root@k8s-master ~]# 

# Before modification
[root@k8s-master ~]# grep -C4 '\- args' components.yaml 
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:                # add the flag - --kubelet-insecure-tls below this line
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
[root@k8s-master ~]# 

# Change the image to: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.4
[root@k8s-master ~]# grep -A3 'image' components.yaml 
        image: registry.k8s.io/metrics-server/metrics-server:v0.6.4 
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
[root@k8s-master ~]# 

# Change 20 to 300
[root@k8s-master ~]# grep  'initialDelaySeconds' components.yaml 
          initialDelaySeconds: 20
[root@k8s-master ~]# 


# After modification
[root@k8s-master ~]# grep -C4 '\- args' components.yaml 
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
[root@k8s-master ~]# 


[root@k8s-master ~]# grep -A3 'image' components.yaml 
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
[root@k8s-master ~]# 


[root@k8s-master ~]# grep  'initialDelaySeconds' components.yaml 
          initialDelaySeconds: 300
[root@k8s-master ~]# 

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS        AGE
coredns-66f779496c-v4mhb             1/1     Running   6 (4h49m ago)   5d22h
coredns-66f779496c-xsbvs             1/1     Running   6 (4h49m ago)   5d22h
etcd-k8s-master                      1/1     Running   6 (4h49m ago)   5d22h
kube-apiserver-k8s-master            1/1     Running   6 (4h49m ago)   5d22h
kube-controller-manager-k8s-master   1/1     Running   6 (4h49m ago)   5d22h
kube-proxy-5khd8                     1/1     Running   6 (4h49m ago)   5d22h
kube-proxy-slgjr                     1/1     Running   6 (4h49m ago)   5d22h
kube-proxy-wbvk4                     1/1     Running   6 (4h49m ago)   5d22h
kube-scheduler-k8s-master            1/1     Running   7 (4h49m ago)   5d22h
[root@k8s-master ~]# 

# Apply the components.yaml file
[root@k8s-master ~]# kubectl apply -f components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

[root@k8s-master ~]# kubectl describe pod metrics-server-f84486d6f-c5mq4 -n kube-system
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  102s  default-scheduler  Successfully assigned kube-system/metrics-server-f84486d6f-c5mq4 to k8s-node2
  Normal  Pulling    102s  kubelet            Pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.4"
  Normal  Pulled     88s   kubelet            Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.4" in 13.602s (13.602s including waiting)
  Normal  Created    88s   kubelet            Created container metrics-server
  Normal  Started    88s   kubelet            Started container metrics-server


[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS        AGE
coredns-66f779496c-v4mhb             1/1     Running   6 (4h56m ago)   5d22h
coredns-66f779496c-xsbvs             1/1     Running   6 (4h56m ago)   5d22h
etcd-k8s-master                      1/1     Running   6 (4h56m ago)   5d22h
kube-apiserver-k8s-master            1/1     Running   6 (4h56m ago)   5d22h
kube-controller-manager-k8s-master   1/1     Running   6 (4h56m ago)   5d22h
kube-proxy-5khd8                     1/1     Running   6 (4h56m ago)   5d22h
kube-proxy-slgjr                     1/1     Running   6 (4h56m ago)   5d22h
kube-proxy-wbvk4                     1/1     Running   6 (4h56m ago)   5d22h
kube-scheduler-k8s-master            1/1     Running   7 (4h56m ago)   5d22h
metrics-server-f84486d6f-c5mq4       1/1     Running   0               6m8s   # this one takes a while to start; be patient

# Use kubectl top node to view resource usage
[root@k8s-master ~]# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   106m         5%     1066Mi          62%       
k8s-node1    19m          0%     416Mi           24%       
k8s-node2    23m          1%     488Mi           28%       
[root@k8s-master ~]# 

[root@k8s-master ~]#  kubectl top pod -n kube-system
NAME                                 CPU(cores)   MEMORY(bytes)   
coredns-66f779496c-v4mhb             2m           29Mi            
coredns-66f779496c-xsbvs             2m           19Mi            
etcd-k8s-master                      19m          99Mi            
kube-apiserver-k8s-master            48m          282Mi           
kube-controller-manager-k8s-master   15m          85Mi            
kube-proxy-5khd8                     1m           20Mi            
kube-proxy-slgjr                     1m           33Mi            
kube-proxy-wbvk4                     1m           18Mi            
kube-scheduler-k8s-master            3m           42Mi            
metrics-server-f84486d6f-c5mq4       3m           20Mi            
[root@k8s-master ~]# 
# At this point, the metrics-server installation is complete

2 Prepare a deployment and service

Create a pc-hpa-pod.yaml file with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  strategy: # strategy
    type: RollingUpdate # rolling update strategy
  replicas: 1
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        resources: # resource quota
          limits:  # resource limits (upper bound)
            cpu: "1" # CPU limit, in cores
          requests: # resource requests (lower bound)
            cpu: "100m"  # CPU request, in cores
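Note that the requests value is what HPA measures against: CPU utilization is reported relative to the pod's CPU request, not its limit (a worked example of my own, consistent with the figures seen later in the test):

# with requests.cpu = 100m, an actual usage of 341m is reported as
utilization = 341m / 100m = 341%   # this is the value the HPA compares with its target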
# Create the service
[root@k8s-master hpa]# kubectl expose deployment nginx --port=80 --type=NodePort -n dev
service/nginx exposed

# View the service
[root@k8s-master hpa]# kubectl get service -n dev
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.110.86.225   <none>        80:31293/TCP   8s
[root@k8s-master hpa]# 

# View the Pods
[root@k8s-master hpa]# kubectl get pod -n dev
NAME                    READY   STATUS    RESTARTS   AGE
nginx-fcdb96fc4-f8lxw   1/1     Running   0          2m49s
[root@k8s-master hpa]# 

3 Deploy the HPA

Create a pc-hpa.yaml file with the following content:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1  # minimum number of pods
  maxReplicas: 10 # maximum number of pods
  targetCPUUtilizationPercentage: 3 # target CPU utilization percentage
  scaleTargetRef:   # the target to scale, i.e. the nginx deployment above
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
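The manifest above uses the older autoscaling/v1 API. On recent clusters (autoscaling/v2 is generally available from Kubernetes v1.23) the same HPA can be written with the v2 API, where the CPU target moves under metrics; this is an equivalent sketch, not something required by the original walkthrough:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 3   # same 3% CPU target as above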
# Create the hpa
[root@k8s-master hpa]# pwd
/root/inventory/hpa
[root@k8s-master hpa]# kubectl apply -f pc-hpa.yaml 
horizontalpodautoscaler.autoscaling/pc-hpa created
[root@k8s-master hpa]# 


# View the hpa
[root@k8s-master hpa]# kubectl get hpa -n dev
NAME     REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   0%/3%     1         10        1          26s
[root@k8s-master hpa]# 

4 Test

Run a load test against the service address 10.10.10.148:31293, then watch from the console how the hpa and the pods change.

[root@localhost ~]# ip a show ens33
3: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:d3:d9:4e brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.156/24 brd 10.10.10.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::602c:6093:47b:c27e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@localhost ~]# 

# Install the ab command for load testing
[root@localhost ~]# yum -y install  httpd-tools

# Simulate 10 concurrent users sending 30000 requests in total
[root@localhost ~]# ab -c 10 -n 30000 http://10.10.10.148:31293/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.10.10.148 (be patient)
Completed 3000 requests
Completed 6000 requests
Completed 9000 requests
Completed 12000 requests
Completed 15000 requests
Completed 18000 requests
Completed 21000 requests
Completed 24000 requests
Completed 27000 requests
Completed 30000 requests
Finished 30000 requests


Server Software:        nginx/1.17.1
Server Hostname:        10.10.10.148
Server Port:            31293

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      10
Time taken for tests:   7.991 seconds
Complete requests:      30000
Failed requests:        0
Write errors:           0
Total transferred:      25350000 bytes
HTML transferred:       18360000 bytes
Requests per second:    3754.40 [#/sec] (mean)
Time per request:       2.664 [ms] (mean)
Time per request:       0.266 [ms] (mean, across all concurrent requests)
Transfer rate:          3098.12 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.3      1      11
Processing:     0    2   0.6      2      14
Waiting:        0    2   0.6      2      14
Total:          1    3   0.6      3      16

Percentage of the requests served within a certain time (ms)
  50%      3
  66%      3
  75%      3
  80%      3
  90%      3
  95%      3
  98%      4
  99%      4
 100%     16 (longest request)
[root@localhost ~]# 

HPA changes

[root@k8s-master ~]# kubectl get hpa -n dev -w
NAME     REFERENCE          TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/nginx   <unknown>/3%   1         10        0          5s
pc-hpa   Deployment/nginx   0%/3%          1         10        1          15s
pc-hpa   Deployment/nginx   341%/3%        1         10        1          60s
pc-hpa   Deployment/nginx   311%/3%        1         10        4          75s
pc-hpa   Deployment/nginx   118%/3%        1         10        8          90s
pc-hpa   Deployment/nginx   91%/3%         1         10        10         105s
pc-hpa   Deployment/nginx   76%/3%         1         10        10         2m
pc-hpa   Deployment/nginx   0%/3%          1         10        10         2m15s

Deployment changes

[root@k8s-master ~]# kubectl get deployment -n dev -w
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           43s

nginx   1/4     1            1           100s
nginx   1/4     1            1           100s
nginx   1/4     1            1           100s
nginx   1/4     4            1           100s
nginx   2/4     4            2           101s
nginx   3/4     4            3           101s
nginx   4/4     4            4           101s
nginx   4/8     4            4           115s
nginx   4/8     4            4           115s
nginx   4/8     4            4           115s
nginx   4/8     8            4           115s
nginx   5/8     8            5           116s
nginx   6/8     8            6           116s
nginx   7/8     8            7           116s
nginx   8/8     8            8           116s
nginx   8/10    8            8           2m10s
nginx   8/10    8            8           2m10s
nginx   8/10    8            8           2m10s
nginx   8/10    10           8           2m10s
nginx   9/10    10           9           2m11s
nginx   10/10   10           10          2m11s

Pod changes

[root@k8s-master ~]#  kubectl get pod -n dev -w
NAME                    READY   STATUS    RESTARTS   AGE
nginx-fcdb96fc4-g9wzw   1/1     Running   0          75s

nginx-fcdb96fc4-c8xxl   0/1     Pending   0          0s
nginx-fcdb96fc4-c8xxl   0/1     Pending   0          0s
nginx-fcdb96fc4-7zlzv   0/1     Pending   0          0s
nginx-fcdb96fc4-j97t9   0/1     Pending   0          0s
nginx-fcdb96fc4-j97t9   0/1     Pending   0          0s
nginx-fcdb96fc4-7zlzv   0/1     Pending   0          0s
nginx-fcdb96fc4-c8xxl   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-7zlzv   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-j97t9   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-7zlzv   1/1     Running             0          1s
nginx-fcdb96fc4-j97t9   1/1     Running             0          1s
nginx-fcdb96fc4-c8xxl   1/1     Running             0          1s
nginx-fcdb96fc4-4xk2g   0/1     Pending             0          0s
nginx-fcdb96fc4-4xk2g   0/1     Pending             0          0s
nginx-fcdb96fc4-9rg4h   0/1     Pending             0          0s
nginx-fcdb96fc4-dvj5p   0/1     Pending             0          0s
nginx-fcdb96fc4-dvj5p   0/1     Pending             0          0s
nginx-fcdb96fc4-9rg4h   0/1     Pending             0          0s
nginx-fcdb96fc4-4xk2g   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-57kg7   0/1     Pending             0          0s
nginx-fcdb96fc4-57kg7   0/1     Pending             0          0s
nginx-fcdb96fc4-dvj5p   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-9rg4h   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-57kg7   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-dvj5p   1/1     Running             0          1s
nginx-fcdb96fc4-4xk2g   1/1     Running             0          1s
nginx-fcdb96fc4-9rg4h   1/1     Running             0          1s
nginx-fcdb96fc4-57kg7   1/1     Running             0          1s
nginx-fcdb96fc4-nfwj8   0/1     Pending             0          0s
nginx-fcdb96fc4-nfwj8   0/1     Pending             0          0s
nginx-fcdb96fc4-kt2tz   0/1     Pending             0          0s
nginx-fcdb96fc4-kt2tz   0/1     Pending             0          0s
nginx-fcdb96fc4-nfwj8   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-kt2tz   0/1     ContainerCreating   0          0s
nginx-fcdb96fc4-nfwj8   1/1     Running             0          1s
nginx-fcdb96fc4-kt2tz   1/1     Running             0          1s

5.5 DaemonSet(DS)

A DaemonSet controller ensures that one replica runs on every node (or on specified nodes) in the cluster. It is typically used for log collection, node monitoring and similar scenarios. In other words, if a Pod provides node-level functionality (each node needs exactly one copy), that Pod is a good fit for a DaemonSet controller.


Characteristics of the DaemonSet controller:

  • Whenever a node is added to the cluster, the specified Pod replica is also added to that node
  • When a node is removed from the cluster, the Pod is garbage collected

Let's first look at the DaemonSet resource manifest file:

apiVersion: apps/v1 # API version
kind: DaemonSet # resource type
metadata: # metadata
  name: # ds name
  namespace: # namespace it belongs to
  labels: # labels
    controller: daemonset
spec: # detailed description
  revisionHistoryLimit: 3 # number of old revisions to keep
  updateStrategy: # update strategy
    type: RollingUpdate # rolling update strategy
    rollingUpdate: # rolling update settings
      maxUnavailable: 1 # maximum number of unavailable Pods during the update; percentage or integer
  selector: # selector; specifies which pods this controller manages
    matchLabels:      # label matching rules
      app: nginx-pod
    matchExpressions: # expression matching rules
      - {key: app, operator: In, values: [nginx-pod]}
  template: # template used to create pod replicas
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

Create pc-daemonset.yaml with the following content:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pc-daemonset
  namespace: dev
spec:
  selector:
    matchLabels:
      app: pc-daemon-controller
  template:
    metadata:
      labels:
        app: pc-daemon-controller
    spec:
      containers:
      - name: nginx-controller
        image: nginx:1.20.0
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
          protocol: "TCP"
# Create the daemonset
[root@k8s-master daemonset]# pwd
/root/inventory/daemonset
[root@k8s-master daemonset]# kubectl apply -f pc-daemonset.yaml 
daemonset.apps/pc-daemonset created


# View the daemonset
[root@k8s-master ~]# kubectl get daemonset -n dev
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
pc-daemonset   2         2         2       2            2           <none>          102s


# View the pods; one pod runs on each Node
[root@k8s-master ~]# kubectl get pod -n dev -o wide | grep pc-daemon
pc-daemonset-652pd      1/1     Running   0          2m22s   10.244.2.81   k8s-node2   ......
pc-daemonset-ffgsv      1/1     Running   0          2m22s   10.244.1.76   k8s-node1   ......
[root@k8s-master ~]# 


# Delete the daemonset
[root@k8s-master daemonset]# kubectl delete -f pc-daemonset.yaml 
daemonset.apps "pc-daemonset" deleted

Consider a scenario: the DaemonSet controller should run only on two designated worker nodes.

# Label the two worker nodes
[root@k8s-master ~]# kubectl label node k8s-node1 nodes=node-daemonset-nginx
node/k8s-node1 labeled
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl label node k8s-node2 nodes=node-daemonset-nginx
node/k8s-node2 labeled
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get node k8s-node1 --show-labels | grep nodes=node-daemon-nginx
[root@k8s-master ~]# kubectl get node k8s-node1 --show-labels | grep nodes=node-daemonset-nginx
k8s-node1   Ready    <none>   6d    v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,nodes=node-daemonset-nginx
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get node k8s-node2 --show-labels | grep nodes=node-daemonset-nginx
k8s-node2   Ready    <none>   6d    v1.28.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,nodes=node-daemonset-nginx
[root@k8s-master ~]# 

[root@k8s-master daemonset]# pwd
/root/inventory/daemonset
[root@k8s-master daemonset]# cat pc-daemonset.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pc-daemonset
  namespace: dev
spec:
  selector:
    matchLabels:
      app: pc-daemon-controller
  template:
    metadata:
      labels:
        app: pc-daemon-controller
    spec:
      nodeSelector:   # schedule only onto nodes with this label
        nodes: node-daemonset-nginx
      containers:
      - name: nginx-controller
        image: nginx:1.20.0
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
          protocol: "TCP"
[root@k8s-master daemonset]# 
[root@k8s-master daemonset]# kubectl get pod -n dev -o wide | grep daemonset
pc-daemonset-6jznh      1/1     Running   0          54s   10.244.1.81   k8s-node1   ......
pc-daemonset-8dljc      1/1     Running   0          54s   10.244.2.85   k8s-node2   ......
[root@k8s-master daemonset]# 

5.6 Job

A Job is mainly used for batch processing (handling a specified number of tasks in one go) of short-lived, one-off tasks (each task runs only once and then finishes). A Job has the following characteristics:

  • When a pod created by the Job finishes successfully, the Job records the number of successfully finished pods
  • When the number of successfully finished pods reaches the specified count, the Job completes
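How many pods run at once and how many must succeed are controlled by the completions and parallelism fields described in the manifest below; as a minimal sketch (using the 6/3 values from the experiment at the end of this section):

spec:
  completions: 6   # the Job is done once 6 pods have finished successfully
  parallelism: 3   # at most 3 of those pods run at the same time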


The Job resource manifest file:

apiVersion: batch/v1 # API version
kind: Job # resource type
metadata: # metadata
  name: # job name
  namespace: # namespace it belongs to
  labels: # labels
    controller: job
spec: # detailed description
  completions: 1 # number of Pods the job must run to successful completion. Default: 1
  parallelism: 1 # number of Pods the job should run concurrently at any given time. Default: 1
  activeDeadlineSeconds: 30 # time limit for the job; if it has not finished in time, the system will try to terminate it.
  backoffLimit: 5 # number of retries after a failure. Default: 6
  manualSelector: true # whether a selector may be used to select pods; default false
  selector: # selector; specifies which pods this controller manages
    matchLabels:      # label matching rules
      app: counter-pod
    matchExpressions: # expression matching rules
      - {key: app, operator: In, values: [counter-pod]}
  template: # template used to create pod replicas
    metadata:
      labels:
        app: counter-pod
    spec:
      restartPolicy: Never # 重启策略只能设置为Never或者OnFailure
      containers:
      - name: counter
        image: busybox:1.30
        command: ["bin/sh","-c","for i in 9 8 7 5 5 4 3 2 1; do echo $i;sleep 2;done"]
Notes on the restart policy:
    If set to OnFailure, the job restarts the container when the pod fails instead of creating a new pod, and the failed count does not change
    If set to Never, the job creates a new pod when the pod fails; the failed pod is kept (neither removed nor restarted) and the failed count increases by 1
    Always is not allowed: it would keep restarting the container, which means the job's task would run over and over again, so the restart policy cannot be set to Always
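To see the difference in practice, a deliberately failing Job works well. The sketch below is an illustration added here (the name pc-job-fail is hypothetical and not part of the original example); with restartPolicy: Never you should see new failed pods pile up until backoffLimit is exceeded, while with OnFailure the same pod's container is restarted and its RESTARTS counter grows instead:

apiVersion: batch/v1
kind: Job
metadata:
  name: pc-job-fail        # hypothetical name, for illustration only
  namespace: dev
spec:
  backoffLimit: 2          # stop retrying after 2 failures
  template:
    spec:
      restartPolicy: Never   # try OnFailure here to compare the behaviour
      containers:
      - name: fail
        image: busybox:1.30
        command: ["/bin/sh","-c","exit 1"]   # always fails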

Create pc-job.yaml with the following content:

apiVersion: batch/v1
kind: Job
metadata:
  name: pc-job
  namespace: dev
spec:
  manualSelector: true
  selector:
    matchLabels:
      app: controller-job
  template:
    metadata:
      labels:
        app: controller-job
    spec:
      restartPolicy: Never
      containers:
      - name: busybox-job
        image: busybox:1.30
        imagePullPolicy: IfNotPresent
        command: ["bin/sh","-c","for i in 9 8 7 5 5 4 3 2 1; do echo $i;sleep 3;done"]
# Create the job
[root@k8s-master job]# pwd
/root/inventory/job
[root@k8s-master job]# 
[root@k8s-master job]# kubectl apply -f pc-job.yaml 
job.batch/pc-job created


# Check the job
[root@k8s-master ~]# kubectl get job -n dev -o wide -w
NAME     COMPLETIONS   DURATION   AGE   CONTAINERS    IMAGES         SELECTOR
pc-job   0/1                      0s    busybox-job   busybox:1.30   app=controller-job
pc-job   0/1           0s         0s    busybox-job   busybox:1.30   app=controller-job
pc-job   0/1           3s         3s    busybox-job   busybox:1.30   app=controller-job
pc-job   0/1           30s        30s   busybox-job   busybox:1.30   app=controller-job
pc-job   0/1           31s        31s   busybox-job   busybox:1.30   app=controller-job
pc-job   1/1           31s        31s   busybox-job   busybox:1.30   app=controller-job


# Watching the pod status shows that once a pod finishes its task, it moves to the Completed state
[root@k8s-master ~]#  kubectl get pods -n dev -w
NAME                              READY   STATUS            RESTARTS     AGE
pc-job-gxx55                      0/1     Pending             0          0s
pc-job-gxx55                      0/1     Pending             0          0s
pc-job-gxx55                      0/1     ContainerCreating   0          0s
pc-job-gxx55                      1/1     Running             0          2s
pc-job-gxx55                      0/1     Completed           0          29s
# The task finishes after about 27 seconds of running


# Next, adjust the total number of pods and the degree of parallelism, i.e. set these two options under spec:
#  completions: 6 # the job must run pods to successful completion 6 times
#  parallelism: 3 # the job runs 3 pods concurrently at any time
#  Then re-run the job and observe: the job runs 3 pods at a time, 6 pods in total
[root@k8s-master ~]#  kubectl get pods -n dev -w
NAME                              READY   STATUS    RESTARTS   AGE
pc-job-w4jz5                      0/1     Pending   0          0s
pc-job-9p2f4                      0/1     Pending   0          0s
pc-job-xhfg2                      0/1     Pending   0          0s
pc-job-w4jz5                      0/1     Pending   0          0s
pc-job-9p2f4                      0/1     Pending   0          0s
pc-job-xhfg2                      0/1     Pending   0          0s
pc-job-9p2f4                      0/1     ContainerCreating   0          0s
pc-job-xhfg2                      0/1     ContainerCreating   0          0s
pc-job-w4jz5                      0/1     ContainerCreating   0          0s
pc-job-w4jz5                      1/1     Running             0          1s
pc-job-9p2f4                      1/1     Running             0          2s
pc-job-xhfg2                      1/1     Running             0          2s
pc-job-w4jz5                      0/1     Completed           0          29s
pc-job-9p2f4                      0/1     Completed           0          29s
pc-job-xhfg2                      0/1     Completed           0          29s
pc-job-9p2f4                      0/1     Completed           0          30s
pc-job-w4jz5                      0/1     Completed           0          30s
pc-job-xhfg2                      0/1     Completed           0          30s
pc-job-w4jz5                      0/1     Completed           0          31s
pc-job-9p2f4                      0/1     Completed           0          31s
pc-job-tvkvh                      0/1     Pending             0          0s
pc-job-tvkvh                      0/1     Pending             0          0s
pc-job-7cdsk                      0/1     Pending             0          0s
pc-job-7cdsk                      0/1     Pending             0          0s
pc-job-wdm4z                      0/1     Pending             0          0s
pc-job-tvkvh                      0/1     ContainerCreating   0          0s
pc-job-wdm4z                      0/1     Pending             0          0s
pc-job-9p2f4                      0/1     Completed           0          31s
pc-job-w4jz5                      0/1     Completed           0          31s
pc-job-xhfg2                      0/1     Completed           0          31s
pc-job-7cdsk                      0/1     ContainerCreating   0          0s
pc-job-wdm4z                      0/1     ContainerCreating   0          0s
pc-job-xhfg2                      0/1     Completed           0          31s
pc-job-7cdsk                      1/1     Running             0          1s
pc-job-tvkvh                      1/1     Running             0          1s
pc-job-wdm4z                      1/1     Running             0          1s
pc-job-7cdsk                      0/1     Completed           0          28s
pc-job-tvkvh                      0/1     Completed           0          28s
pc-job-wdm4z                      0/1     Completed           0          28s
pc-job-tvkvh                      0/1     Completed           0          29s
pc-job-7cdsk                      0/1     Completed           0          29s
pc-job-wdm4z                      0/1     Completed           0          29s
pc-job-tvkvh                      0/1     Completed           0          30s
pc-job-7cdsk                      0/1     Completed           0          30s
pc-job-wdm4z                      0/1     Completed           0          30s
pc-job-tvkvh                      0/1     Completed           0          30s
pc-job-7cdsk                      0/1     Completed           0          30s
pc-job-wdm4z                      0/1     Completed           0          30s
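For reference, the adjusted part of pc-job.yaml used for the run above would look roughly like this (only completions and parallelism are new; the selector and template stay exactly as before):

spec:
  completions: 6    # run pods to successful completion 6 times in total
  parallelism: 3    # run at most 3 pods at the same time
  manualSelector: true
  selector:
    matchLabels:
      app: controller-job
  template:
    ...             # unchanged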

# Delete the job
[root@k8s-master ~]# kubectl delete -f pc-job.yaml
job.batch "pc-job" deleted

5.7 CronJob(CJ)

A CronJob controller takes Job resources as its managed objects and manages pods through them. A Job starts running its task immediately once the Job resource is created, whereas a CronJob controls when, and how repeatedly, its jobs run, much like periodic cron jobs on a Linux system. In other words, a CronJob can run a job task (repeatedly) at specific points in time.


The CronJob resource manifest template:

apiVersion: batch/v1 # API version (batch/v1beta1 was removed in Kubernetes 1.25)
kind: CronJob # resource type
metadata: # metadata
  name: # cronjob name
  namespace: # namespace it belongs to
  labels: # labels
    controller: cronjob
spec: # detailed description
  schedule: # cron-format schedule that controls when the job runs
  concurrencyPolicy: # concurrency policy: whether and how to run the next job if the previous run has not finished
  failedJobsHistoryLimit: # number of failed job runs to keep in history, default 1
  successfulJobsHistoryLimit: # number of successful job runs to keep in history, default 3
  startingDeadlineSeconds: # deadline (in seconds) for starting a job that has missed its scheduled time
  jobTemplate: # job template used by the cronjob controller to generate job objects; what follows is just a job definition
    metadata:
    spec:
      completions: 1
      parallelism: 1
      activeDeadlineSeconds: 30
      backoffLimit: 5
      manualSelector: true
      selector:
        matchLabels:
          app: counter-pod
        matchExpressions: # Expressions matching rules
          - {key: app, operator: In, values: [counter-pod]}
      template:
        metadata:
          labels:
            app: counter-pod
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: ["/bin/sh","-c","for i in 9 8 7 6 5 4 3 2 1; do echo $i;sleep 20;done"]
Key options that deserve attention:
schedule: a cron expression that specifies when the task runs
    */1      *      *              *       *
    <minute> <hour> <day-of-month> <month> <day-of-week>

    minute: 0 to 59
    hour: 0 to 23
    day of month: 1 to 31
    month: 1 to 12
    day of week: 0 to 6, where 0 means Sunday
    Multiple values can be separated by commas; a range can be given with a hyphen; * is a wildcard; / means "every ..."
concurrencyPolicy:
    Allow:   allow Jobs to run concurrently (default)
    Forbid:  forbid concurrent runs; if the previous run has not finished yet, skip the next one
    Replace: cancel the currently running job and replace it with the new one
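A few more schedule values, for reference (standard cron syntax, not taken from the original example):

schedule: "*/5 * * * *"    # every 5 minutes
schedule: "0 2 * * *"      # every day at 02:00
schedule: "30 8 * * 1-5"   # 08:30 on weekdays (Monday to Friday)
schedule: "0 0 1 * *"      # midnight on the 1st day of every month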

Create pc-cronjob.yaml with the following content:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: pc-cronjob
  namespace: dev
  labels:
    controller: cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: counter
            image: busybox:1.30
            command: ["bin/sh","-c","for i in 9 8 7 5 5 4 3 2 1; do echo $i;sleep 3;done"]
# Create the cronjob
[root@k8s-master cronjob]# pwd
/root/inventory/cronjob
[root@k8s-master cronjob]# kubectl apply -f pc-cronjob.yaml 
cronjob.batch/pc-cronjob created

# Check the cronjob
[root@k8s-master cronjob]# kubectl get cronjob -n dev
NAME         SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pc-cronjob   */1 * * * *   False     0        <none>          14s


# Check the jobs
[root@k8s-master cronjob]# kubectl get jobs -n dev
NAME                  COMPLETIONS   DURATION   AGE
pc-cronjob-28380122   0/1           6s         6s


# Watch the pods
[root@k8s-master cronjob]# kubectl get pod -n dev -w
pc-cronjob-28380124-ljfpf         0/1     Pending     0          0s
pc-cronjob-28380124-ljfpf         0/1     Pending     0          0s
pc-cronjob-28380124-ljfpf         0/1     ContainerCreating   0          0s
pc-cronjob-28380124-ljfpf         1/1     Running             0          1s
pc-cronjob-28380124-ljfpf         0/1     Completed           0          28s
pc-cronjob-28380124-ljfpf         0/1     Completed           0          29s
pc-cronjob-28380124-ljfpf         0/1     Completed           0          30s
pc-cronjob-28380124-ljfpf         0/1     Completed           0          30s
pc-cronjob-28380125-bbf6j         0/1     Pending             0          0s
pc-cronjob-28380125-bbf6j         0/1     Pending             0          0s
pc-cronjob-28380125-bbf6j         0/1     ContainerCreating   0          0s
pc-cronjob-28380125-bbf6j         1/1     Running             0          1s
pc-cronjob-28380125-bbf6j         0/1     Completed           0          28s
pc-cronjob-28380125-bbf6j         0/1     Completed           0          29s
pc-cronjob-28380125-bbf6j         0/1     Completed           0          30s
pc-cronjob-28380125-bbf6j         0/1     Completed           0          30s
pc-cronjob-28380122-26cnq         0/1     Terminating         0          3m30s
pc-cronjob-28380122-26cnq         0/1     Terminating         0          3m30s
pc-cronjob-28380126-bdmkd         0/1     Pending             0          0s
pc-cronjob-28380126-bdmkd         0/1     Pending             0          0s
pc-cronjob-28380126-bdmkd         0/1     ContainerCreating   0          0s
pc-cronjob-28380126-bdmkd         1/1     Running             0          1s
pc-cronjob-28380126-bdmkd         0/1     Completed           0          28s
pc-cronjob-28380126-bdmkd         0/1     Completed           0          29s
pc-cronjob-28380126-bdmkd         0/1     Completed           0          30s
pc-cronjob-28380126-bdmkd         0/1     Completed           0          30s
pc-cronjob-28380123-t6htc         0/1     Terminating         0          3m30s
pc-cronjob-28380123-t6htc         0/1     Terminating         0          3m30s


# One minute later, a new pod is created to run the for loop again
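Instead of deleting a CronJob outright, it can also be paused by setting spec.suspend to true (the SUSPEND column in the kubectl get cronjob output above reflects this field). A minimal sketch, assuming pc-cronjob still exists:

# Pause the cronjob: no new jobs are scheduled while it is suspended
[root@k8s-master cronjob]# kubectl patch cronjob pc-cronjob -n dev -p '{"spec":{"suspend":true}}'

# Resume it later
[root@k8s-master cronjob]# kubectl patch cronjob pc-cronjob -n dev -p '{"spec":{"suspend":false}}'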

# Delete the cronjob
[root@k8s-master cronjob]# kubectl delete -f pc-cronjob.yaml 
cronjob.batch "pc-cronjob" deleted
