Kubernetes Basics (Part 1): Pod Creation, Labels, Taints, Affinity, and ReplicaSets


I. Creating a Pod


[root@master01 ~]# kubectl create ns prod
[root@master01 ~]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: prod
  labels:
     app: myapp
spec:
  containers:
  - name: test1
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 100000000000"
  - name: test2
    image: busybox:latest
    args: ["sleep","100000000000000"]
    
[root@master01 ~]# kubectl create -f pod.yaml        ### create the pod
[root@master01 ~]# kubectl describe pod pod-demo -n prod   ### view pod details and events
[root@master01 ~]# kubectl get pod pod-demo -n prod -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP          NODE       NOMINATED NODE   READINESS GATES
pod-demo   2/2     Running   0          4m41s   10.0.0.70   master03   <none>           <none>


[root@master01 ~]# kubectl -n prod exec -it pod-demo -c test1 -- sh      ### enter the test1 container
/ #

[root@master01 ~]# kubectl delete -f pod.yaml      ### delete the pod

II. Using Labels


Precedence of the scheduling mechanisms covered below: nodeName -> nodeSelector -> taints (Taints) -> affinity (Affinity). Note that nodeName bypasses the scheduler entirely, which is why it takes effect ahead of everything else.

1. Creating an httpd pod

[root@master01 ~]# cat web-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
     app: as
     rel: stable
spec:
  containers:
  - name: web
    image: httpd:latest
    ports:
    - containerPort: 80

[root@master01 ~]# kubectl apply -f web-pod.yaml

2. Common label operations


[root@master01 ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          75s

[root@master01 ~]# kubectl get pod --show-labels     ### show all labels
NAME   READY   STATUS    RESTARTS   AGE   LABELS
web    1/1     Running   0          87s   app=as,rel=stable

[root@master01 ~]# kubectl get pod -l app=as       ### select pods with label app=as
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          2m

[root@master01 ~]# kubectl get pod -l app!=test   ### select pods where app is not test
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          2m23s

[root@master01 ~]# kubectl get pod -l 'app in (test,as)'       ### select pods where app is test or as
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          3m7s
[root@master01 ~]# kubectl get pod -l 'app notin (test,as)'   ### select pods where app is neither test nor as
No resources found in default namespace.

[root@master01 ~]# kubectl get pod -L app               ### show the value of the app label as a column
NAME   READY   STATUS    RESTARTS   AGE     APP
web    1/1     Running   0          3m34s   as

[root@master01 ~]# kubectl get pod --show-labels
NAME   READY   STATUS    RESTARTS   AGE     LABELS
web    1/1     Running   0          4m21s   app=as,rel=stable

[root@master01 ~]# kubectl label pod web rel=canary --overwrite        ### modify an existing label
[root@master01 ~]# kubectl get pod --show-labels
NAME   READY   STATUS    RESTARTS   AGE     LABELS
web    1/1     Running   0          4m49s   app=as,rel=canary
[root@master01 ~]# kubectl label nodes master02 disktype=ssd     ### label a node
[root@master01 ~]# kubectl label nodes master03 env=prod
[root@master01 ~]# kubectl get nodes --show-labels
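
The selector flavors used above (equality-based `-l key=value` / `key!=value` and set-based `in` / `notin`) can be sketched in a few lines of Python. This is a minimal illustration only; the helper names are mine, and the real matching happens inside the Kubernetes API server, not in client code.

```python
# Illustrative sketch of kubectl's label-selector semantics.

def match_equality(labels: dict, key: str, value: str, negate: bool = False) -> bool:
    """Mimic -l key=value (or key!=value when negate=True)."""
    hit = labels.get(key) == value
    return not hit if negate else hit

def match_in(labels: dict, key: str, values: list, negate: bool = False) -> bool:
    """Mimic -l 'key in (...)' (or 'key notin (...)' when negate=True)."""
    hit = labels.get(key) in values
    return not hit if negate else hit

# The web pod from the transcript carries app=as,rel=stable.
web = {"app": "as", "rel": "stable"}

print(match_equality(web, "app", "as"))                  # app=as            -> True
print(match_equality(web, "app", "test", negate=True))   # app!=test         -> True
print(match_in(web, "app", ["test", "as"]))              # app in (test,as)  -> True
print(match_in(web, "app", ["test", "as"], negate=True)) # notin (test,as)   -> False
```

This mirrors the transcript: `web` shows up for the first three selectors and disappears for `notin (test,as)`.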


3. Using nodeSelector

[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      nodeSelector:
        disktype: ssd

[root@master01 ~]# kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx-deployment created
[root@master01 ~]# kubectl get pod -owide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-697c567f87-gddsh   1/1     Running   0          9s    10.0.1.77    master02   <none>           <none>
nginx-deployment-697c567f87-lltln   1/1     Running   0          9s    10.0.1.144   master02   <none>           <none>
web                                 1/1     Running   0          25m   10.0.0.38    master03   <none>           <none>
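
The nodeSelector check itself is simple: a pod is eligible for a node only if every key/value pair in its nodeSelector is present among the node's labels. A hypothetical sketch (node label sets abbreviated from the cluster above):

```python
# Sketch of the nodeSelector filter: the selector must be a subset
# of the node's labels (exact key/value matches).

def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    return all(node_labels.get(k) == v for k, v in node_selector.items())

nodes = {
    "master02": {"disktype": "ssd", "kubernetes.io/hostname": "master02"},
    "master03": {"env": "prod", "kubernetes.io/hostname": "master03"},
}
selector = {"disktype": "ssd"}

eligible = [name for name, labels in nodes.items()
            if node_selector_matches(labels, selector)]
print(eligible)  # ['master02']
```

Only master02 passes the filter, which is why both replicas above landed there.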

4. Using nodeName

[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      nodeName: master02

[root@master01 ~]# kubectl apply -f nginx-deploy.yaml

III. Taints and Tolerations


1. Viewing taints

[root@master01 ~]# kubectl describe node master01|grep Taints
Taints:             <none>

[root@master01 ~]# kubectl taint node master01 node-role.kubernetes.io/master="":NoSchedule
[root@master01 ~]# kubectl describe node master01|grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule

[root@master01 ~]# kubectl taint node master01 node-role.kubernetes.io/master-     ### remove the taint
[root@master01 ~]# kubectl get pod -A -owide
[root@master01 ~]# kubectl describe pod cilium-operator-58bf55d99b-zxmxp -n kube-system|grep Tolerations
Tolerations:                 op=Exists

2. Adding a taint

[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl taint node master02 node-type=production:NoSchedule
[root@master01 ~]# kubectl delete -f nginx-deploy.yaml
[root@master01 ~]# kubectl apply -f nginx-deploy.yaml

# Pods already running on master02 are not evicted; NoSchedule only blocks new scheduling
[root@master01 ~]# kubectl get pod -A -owide|grep nginx
default       nginx-deployment-66b9f7ff85-6hxqf   1/1     Running             0              22s    10.0.4.110    master01   <none>           <none>
default       nginx-deployment-66b9f7ff85-d8jsz   1/1     Running             0              22s    10.0.2.168    node01     <none>           <none>
default       nginx-deployment-66b9f7ff85-dmptx   1/1     Running             0              22s    10.0.0.223    master03   <none>           <none>
default       nginx-deployment-66b9f7ff85-h5wts   1/1     Running             0              22s    10.0.4.201    master01   <none>           <none>
default       nginx-deployment-66b9f7ff85-jxjw4   0/1     ContainerCreating   0              22s    <none>        node02     <none>           <none>

3. Tolerating taints

(1) Tolerating NoSchedule

[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: node-type
        operator: Equal
        value: production
        effect: NoSchedule
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl delete -f nginx-deploy.yaml
[root@master01 ~]# kubectl apply -f nginx-deploy.yaml
[root@master01 ~]# kubectl get pod -A -owide|grep nginx
default       nginx-deployment-7cffd544c8-7d677   1/1     Running   0              11s    10.0.2.15     node01     <none>           <none>
default       nginx-deployment-7cffd544c8-nszsr   1/1     Running   0              11s    10.0.0.229    master03   <none>           <none>
default       nginx-deployment-7cffd544c8-v4cw2   1/1     Running   0              11s    10.0.4.192    master01   <none>           <none>
default       nginx-deployment-7cffd544c8-v75dd   1/1     Running   0              11s    10.0.1.52     master02   <none>           <none>
default       nginx-deployment-7cffd544c8-x9mdm   1/1     Running   0              11s    10.0.3.93     node02     <none>           <none>

(2) The NoExecute effect

Adding a NoExecute taint immediately evicts running pods on master03 that do not tolerate it:

[root@master01 ~]# kubectl get pod -A -owide|grep nginx
default       nginx-deployment-7cffd544c8-7d677   1/1     Running   0              4m10s   10.0.2.15     node01     <none>           <none>
default       nginx-deployment-7cffd544c8-nszsr   1/1     Running   0              4m10s   10.0.0.229    master03   <none>           <none>
default       nginx-deployment-7cffd544c8-v4cw2   1/1     Running   0              4m10s   10.0.4.192    master01   <none>           <none>
default       nginx-deployment-7cffd544c8-v75dd   1/1     Running   0              4m10s   10.0.1.52     master02   <none>           <none>
default       nginx-deployment-7cffd544c8-x9mdm   1/1     Running   0              4m10s   10.0.3.93     node02     <none>           <none>
[root@master01 ~]# kubectl taint node master03 node-type=test:NoExecute
[root@master01 ~]# kubectl get pod -A -owide|grep nginx
default       nginx-deployment-7cffd544c8-7d677   1/1     Running   0              4m41s   10.0.2.15     node01     <none>           <none>
default       nginx-deployment-7cffd544c8-fpn9l   1/1     Running   0              6s      10.0.4.252    master01   <none>           <none>
default       nginx-deployment-7cffd544c8-v4cw2   1/1     Running   0              4m41s   10.0.4.192    master01   <none>           <none>
default       nginx-deployment-7cffd544c8-v75dd   1/1     Running   0              4m41s   10.0.1.52     master02   <none>           <none>
default       nginx-deployment-7cffd544c8-x9mdm   1/1     Running   0              4m41s   10.0.3.93     node02     <none>           <none>

(3) Setting tolerationSeconds

[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: node-type
        operator: Equal
        value: production
        effect: NoSchedule
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 300
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 30
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl delete -f nginx-deploy.yaml
[root@master01 ~]# kubectl apply -f nginx-deploy.yaml
[root@master01 ~]# kubectl get pod -A -owide|grep nginx
default       nginx-deployment-5c5996fdc-7fjl9   1/1     Running   0              9m47s   10.0.1.124    master02   <none>           <none>
default       nginx-deployment-5c5996fdc-8qh2r   1/1     Running   0              9m47s   10.0.4.109    master01   <none>           <none>
default       nginx-deployment-5c5996fdc-b2nbz   1/1     Running   0              9m47s   10.0.3.200    node02     <none>           <none>
default       nginx-deployment-5c5996fdc-hdnsl   1/1     Running   0              9m47s   10.0.2.251    node01     <none>           <none>
default       nginx-deployment-5c5996fdc-j78nd   1/1     Running   0              9m47s   10.0.4.135    master01   <none>           <none>

[root@master01 ~]# kubectl describe pod nginx-deployment-5c5996fdc-7fjl9 |grep -A 3 Tolerations
Tolerations:                 node-type=production:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 30s
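
The matching rule behind these tolerations can be modeled compactly. This is an illustrative simplification (the dict shapes and function name are mine, not a Kubernetes API): a toleration matches a taint when the effects line up and either the `Equal` operator finds the same key and value, or the `Exists` operator finds the key alone.

```python
# Toy model of taint/toleration matching:
#   Equal  -> key and value must both match the taint
#   Exists -> the key alone is enough (an empty key tolerates everything)

def tolerates(toleration: dict, taint: dict) -> bool:
    # An empty effect on the toleration matches any effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return toleration.get("key") in (None, taint["key"])
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint.get("value"))

taint = {"key": "node-type", "value": "production", "effect": "NoSchedule"}

ok = {"key": "node-type", "operator": "Equal",
      "value": "production", "effect": "NoSchedule"}
wrong_value = {"key": "node-type", "operator": "Equal",
               "value": "test", "effect": "NoSchedule"}

print(tolerates(ok, taint))           # True  -> the tainted node stays eligible
print(tolerates(wrong_value, taint))  # False -> the node is filtered out
```

With the matching toleration in the Deployment, master02 becomes schedulable again, which is exactly what the earlier `kubectl get pod` output showed.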

(4) Removing the earlier taints

[root@master01 ~]# kubectl taint node master02 node-type-
[root@master01 ~]# kubectl taint node master03 node-type-

IV. Affinity


1. Node affinity

requiredDuringSchedulingIgnoredDuringExecution is a hard rule: pods are placed only on nodes that match the expression (here, gpu=true). If no matching node has capacity, the pods stay Pending rather than falling back to other nodes.

[root@master01 ~]# kubectl label nodes master02 gpu=true
node/master02 labeled
[root@master01 ~]# kubectl get nodes --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
master01   Ready    <none>   24h   v1.26.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node.kubernetes.io/node=
master02   Ready    <none>   24h   v1.26.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,gpu=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=master02,kubernetes.io/os=linux,node.kubernetes.io/node=
master03   Ready    <none>   24h   v1.26.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=prod,kubernetes.io/arch=amd64,kubernetes.io/hostname=master03,kubernetes.io/os=linux,node.kubernetes.io/node=
node01     Ready    <none>   24h   v1.26.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,node.kubernetes.io/node=
node02     Ready    <none>   24h   v1.26.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,node.kubernetes.io/node=
[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: gpu
                operator: In
                values:
                - "true"
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl apply -f nginx-deploy.yaml
[root@master01 ~]# kubectl get pod -owide|grep nginx
nginx-deployment-596d5dfbb-6v9wc   1/1     Running   0          19s   10.0.1.49    master02   <none>           <none>
nginx-deployment-596d5dfbb-gzsx7   1/1     Running   0          19s   10.0.1.213   master02   <none>           <none>
nginx-deployment-596d5dfbb-ktwhw   1/1     Running   0          19s   10.0.1.179   master02   <none>           <none>
nginx-deployment-596d5dfbb-r96cz   1/1     Running   0          19s   10.0.1.183   master02   <none>           <none>
nginx-deployment-596d5dfbb-tznqh   1/1     Running   0          19s   10.0.1.210   master02   <none>           <none>

2. preferredDuringSchedulingIgnoredDuringExecution

[root@master01 ~]# kubectl label node master02 available-zone=zone1
[root@master01 ~]# kubectl label node master03 available-zone=zone2
[root@master01 ~]# kubectl label nodes master02 share-type=dedicated
[root@master01 ~]# kubectl label nodes master03 share-type=shared
[root@master01 ~]# kubectl get nodes --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
master01   Ready    <none>   24h   v1.26.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node.kubernetes.io/node=
master02   Ready    <none>   24h   v1.26.5   available-zone=zone1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,gpu=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=master02,kubernetes.io/os=linux,node.kubernetes.io/node=,share-type=dedicated
master03   Ready    <none>   24h   v1.26.5   available-zone=zone2,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=prod,kubernetes.io/arch=amd64,kubernetes.io/hostname=master03,kubernetes.io/os=linux,node.kubernetes.io/node=,share-type=shared
node01     Ready    <none>   24h   v1.26.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,node.kubernetes.io/node=
node02     Ready    <none>   24h   v1.26.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,node.kubernetes.io/node=
[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80
            preference:
              matchExpressions:
              - key: available-zone
                operator: In
                values:
                - zone1
          - weight: 20
            preference:
              matchExpressions:
              - key: share-type
                operator: In
                values:
                - dedicated

      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl delete -f nginx-deploy.yaml
[root@master01 ~]# kubectl apply -f nginx-deploy.yaml
[root@master01 ~]# kubectl get pod -A -o wide|grep nginx
default       nginx-deployment-5794574555-22xqh   1/1     Running   0               13s     10.0.1.28     master02   <none>           <none>
default       nginx-deployment-5794574555-5b8kq   1/1     Running   0               13s     10.0.1.168    master02   <none>           <none>
default       nginx-deployment-5794574555-6fm9h   1/1     Running   0               13s     10.0.1.152    master02   <none>           <none>
default       nginx-deployment-5794574555-7gqm7   1/1     Running   0               13s     10.0.3.163    node02     <none>           <none>
default       nginx-deployment-5794574555-7mp2p   1/1     Running   0               13s     10.0.1.19     master02   <none>           <none>
default       nginx-deployment-5794574555-8rrmw   1/1     Running   0               13s     10.0.4.107    master01   <none>           <none>
default       nginx-deployment-5794574555-c6s7z   1/1     Running   0               13s     10.0.1.52     master02   <none>           <none>
default       nginx-deployment-5794574555-f94p6   1/1     Running   0               13s     10.0.0.125    master03   <none>           <none>
default       nginx-deployment-5794574555-hznds   1/1     Running   0               13s     10.0.1.163    master02   <none>           <none>
default       nginx-deployment-5794574555-vx85n   1/1     Running   0               13s     10.0.2.179    node01     <none>           <none>
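
The preferred-affinity weights explain this spread. In simplified form, each node earns the weight of every preference term it satisfies, and higher-scoring nodes are favored (the real scheduler also folds in many other scoring plugins, so this sketch is illustrative only):

```python
# Simplified scoring for preferredDuringSchedulingIgnoredDuringExecution:
# a node accumulates the weight of each preference term its labels satisfy.

def preference_score(node_labels: dict, preferences: list) -> int:
    score = 0
    for pref in preferences:
        if node_labels.get(pref["key"]) in pref["values"]:
            score += pref["weight"]
    return score

prefs = [
    {"weight": 80, "key": "available-zone", "values": ["zone1"]},
    {"weight": 20, "key": "share-type", "values": ["dedicated"]},
]
nodes = {
    "master02": {"available-zone": "zone1", "share-type": "dedicated"},
    "master03": {"available-zone": "zone2", "share-type": "shared"},
    "node01": {},
}
for name, labels in nodes.items():
    print(name, preference_score(labels, prefs))
# master02 scores 100; master03 and node01 score 0.
```

master02 wins on score, so most replicas land there, while the remainder spill onto other nodes once resource scoring and spreading kick in; a preference never makes a node mandatory.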

3. Pod affinity

(1) Creating a busybox pod

It landed on master03:

[root@master01 ~]# kubectl run backend -l app=backend --image busybox -- sleep 99999999
[root@master01 ~]# kubectl get pod -owide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
backend                             1/1     Running   0          7m35s   10.0.0.242   master03   <none>           <none>


[root@master01 ~]# kubectl get pod --show-labels
NAME      READY   STATUS    RESTARTS   AGE   LABELS
backend   1/1     Running   0          45s   app=backend

(2) nginx affinity to busybox (method 1: matchExpressions)

[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - backend

      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl apply -f nginx-deploy.yaml
[root@master01 ~]# kubectl delete -f nginx-deploy.yaml

(3) nginx affinity to busybox (method 2: matchLabels)

[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app: backend

      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl delete -f nginx-deploy.yaml
[root@master01 ~]# kubectl apply -f nginx-deploy.yaml

(4) Preferred pod affinity

Replicas are preferentially placed on nodes that satisfy the rule, but may spill onto other nodes:


[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80
            podAffinityTerm:
              topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  app: backend

      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80


[root@master01 ~]# kubectl delete -f nginx-deploy.yaml
[root@master01 ~]# kubectl apply -f nginx-deploy.yaml
[root@master01 ~]# kubectl get pod -owide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
backend                             1/1     Running   0          19m   10.0.0.242   master03   <none>           <none>
nginx-deployment-6b6b6646cf-2g2nt   1/1     Running   0          30s   10.0.0.158   master03   <none>           <none>
nginx-deployment-6b6b6646cf-62xlh   1/1     Running   0          30s   10.0.0.4     master03   <none>           <none>
nginx-deployment-6b6b6646cf-6nhl4   1/1     Running   0          30s   10.0.0.123   master03   <none>           <none>
nginx-deployment-6b6b6646cf-gfbxt   1/1     Running   0          30s   10.0.0.125   master03   <none>           <none>
nginx-deployment-6b6b6646cf-sg6bx   1/1     Running   0          30s   10.0.4.206   master01   <none>           <none>

(5) Pod anti-affinity

All 5 nodes already host one matching replica each, so the sixth replica cannot be scheduled:

[root@master01 ~]# cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 6
  template:
    metadata:
      labels:
        app: frontend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app: frontend

      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@master01 ~]# kubectl apply -f nginx-deploy.yaml
[root@master01 ~]# kubectl get pod -owide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
backend                    1/1     Running   0          32m   10.0.0.242   master03   <none>           <none>
frontend-84bdd684b-825hs   1/1     Running   0          74s   10.0.1.145   master02   <none>           <none>
frontend-84bdd684b-b8vd9   0/1     Pending   0          74s   <none>       <none>     <none>           <none>
frontend-84bdd684b-m7d79   1/1     Running   0          74s   10.0.2.20    node01     <none>           <none>
frontend-84bdd684b-pj9vv   1/1     Running   0          74s   10.0.4.42    master01   <none>           <none>
frontend-84bdd684b-qn6lb   1/1     Running   0          74s   10.0.3.91    node02     <none>           <none>
frontend-84bdd684b-tl84p   1/1     Running   0          74s   10.0.0.214   master03   <none>           <none>

[root@master01 ~]# kubectl describe pod frontend-84bdd684b-b8vd9
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  102s (x2 over 103s)  default-scheduler  0/5 nodes are available: 5 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod..
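
The Pending pod follows directly from the anti-affinity rule. A toy model (names and shapes are mine, purely illustrative): with `topologyKey: kubernetes.io/hostname`, a node is feasible only if no pod on it already matches the anti-affinity selector.

```python
# Toy feasibility check for required pod anti-affinity keyed on hostname:
# a node is excluded as soon as one of its pods matches the selector.

def schedulable_nodes(nodes_with_pods: dict, selector: dict) -> list:
    def matches(pod_labels: dict) -> bool:
        return all(pod_labels.get(k) == v for k, v in selector.items())
    return [node for node, pods in nodes_with_pods.items()
            if not any(matches(p) for p in pods)]

# Five nodes, each already running one app=frontend replica.
cluster = {node: [{"app": "frontend"}] for node in
           ["master01", "master02", "master03", "node01", "node02"]}

print(schedulable_nodes(cluster, {"app": "frontend"}))  # [] -> the 6th pod stays Pending
```

An empty feasible set is exactly what the scheduler reports above: "0/5 nodes are available: 5 node(s) didn't match pod anti-affinity rules."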

V. RC and RS

RC/RS suit workloads that never need updating; their biggest limitation is the lack of rolling-update support.
The main thing ReplicaSet (RS) adds over ReplicationController (RC) is set-based label selectors.

1. Trying out RC

[root@master01 ~]# cat rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-test
spec:
  replicas: 3
  selector:
    app: rc-pod
  template:
    metadata:
      labels:
        app: rc-pod
    spec:
      containers:
      - name: rc-test
        image: httpd:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl apply -f rc.yaml
[root@master01 ~]# kubectl get pod -owide
NAME            READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
rc-test-bjrqn   1/1     Running   0          35s   10.0.0.244   master03   <none>           <none>
rc-test-srbfn   1/1     Running   0          35s   10.0.4.207   master01   <none>           <none>
rc-test-tjrmf   1/1     Running   0          35s   10.0.0.67    master03   <none>           <none>

[root@master01 ~]# kubectl get rc
NAME      DESIRED   CURRENT   READY   AGE
rc-test   3         3         3       94s


[root@master01 ~]# kubectl describe rc rc-test|grep -i Replicas
Replicas:     3 current / 3 desired

2. First look at RS

[root@master01 ~]# cat rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-test
  labels:
    app: guestbool
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: rs-test
        image: httpd:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

[root@master01 ~]# kubectl apply -f rs.yaml
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
rs-test-7dbz8   1/1     Running   0          2m18s   10.0.4.245   master01   <none>           <none>
rs-test-m8tqv   1/1     Running   0          2m18s   10.0.1.202   master02   <none>           <none>
rs-test-pp6b6   1/1     Running   0          2m18s   10.0.0.93    master03   <none>           <none>

[root@master01 ~]# kubectl get pod rs-test-7dbz8 -oyaml      # ownership is recorded in ownerReferences via the RS UID


3. RS adopting existing pods

Note: adoption requires a pod's labels to match the RS selector. Since pod1 and pod2 already match, the RS counts them toward its 3 replicas and only creates one additional pod.

[root@master01 ~]# cat pod-rs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    tier: frontend
spec:
  containers:
  - name: test1
    image: httpd:latest
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    tier: frontend
spec:
  containers:
  - name: test2
    image: httpd:latest
    imagePullPolicy: IfNotPresent

[root@master01 ~]# kubectl delete -f rs.yaml
[root@master01 ~]# kubectl apply -f pod-rs.yaml
[root@master01 ~]# kubectl apply -f rs.yaml
[root@master01 ~]# kubectl get pod
NAME            READY   STATUS    RESTARTS   AGE
pod1            1/1     Running   0          102s
pod2            1/1     Running   0          102s
rs-test-zw2sj   1/1     Running   0          3s

[root@master01 ~]# kubectl delete -f rs.yaml

4. Testing label changes

[root@master01 ~]# kubectl apply -f rs.yaml
[root@master01 ~]# kubectl get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
rs-test-cn6cc   1/1     Running   0          11s   tier=frontend
rs-test-jxwfb   1/1     Running   0          11s   tier=frontend
rs-test-sztrt   1/1     Running   0          11s   tier=frontend
[root@master01 ~]# kubectl label pod rs-test-cn6cc tier=canary --overwrite
[root@master01 ~]# kubectl get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
rs-test-cn6cc   1/1     Running   0          60s   tier=canary
rs-test-jxwfb   1/1     Running   0          60s   tier=frontend
rs-test-qw5fj   1/1     Running   0          14s   tier=frontend
rs-test-sztrt   1/1     Running   0          60s   tier=frontend

[root@master01 ~]# kubectl get pod  -l tier=frontend
NAME            READY   STATUS    RESTARTS   AGE
rs-test-jxwfb   1/1     Running   0          3m30s
rs-test-qw5fj   1/1     Running   0          2m44s
rs-test-sztrt   1/1     Running   0          3m30s

[root@master01 ~]# kubectl get pod  -l 'tier in (canary,frontend)'
NAME            READY   STATUS    RESTARTS   AGE
rs-test-cn6cc   1/1     Running   0          3m48s
rs-test-jxwfb   1/1     Running   0          3m48s
rs-test-qw5fj   1/1     Running   0          3m2s
rs-test-sztrt   1/1     Running   0          3m48s

[root@master01 ~]# kubectl get pod  -l 'tier notin (canary)'
NAME            READY   STATUS    RESTARTS   AGE
rs-test-jxwfb   1/1     Running   0          4m2s
rs-test-qw5fj   1/1     Running   0          3m16s
rs-test-sztrt   1/1     Running   0          4m2s

5. Multiple selector terms

(1) Testing the RS

[root@master01 ~]# cat rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-test
  labels:
    app: guestbool
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - key: tier
        operator: In
        values:
          - frontend
          - canary
      - {key: app, operator: In, values: [guestbool, test]}
  template:
    metadata:
      labels:
        tier: frontend
        app: guestbool
    spec:
      containers:
      - name: rs-test
        image: httpd:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80


[root@master01 ~]# kubectl delete -f rs.yaml
[root@master01 ~]# kubectl apply -f rs.yaml
replicaset.apps/rs-test created
[root@master01 ~]# kubectl get pod --show-labels
NAME            READY   STATUS              RESTARTS   AGE   LABELS
rs-test-2srdx   1/1     Running             0          3s    app=guestbool,tier=frontend
rs-test-ftzsk   1/1     Running             0          3s    app=guestbool,tier=frontend
rs-test-v2nlj   0/1     ContainerCreating   0          3s    app=guestbool,tier=frontend

(2) Testing pod-rs: this time pod1 and pod2 carry only a tier label; without an app label they do not satisfy the RS selector, so the RS does not adopt them

[root@master01 ~]# kubectl label pod rs-test-2srdx tier=canary --overwrite
[root@master01 ~]# kubectl get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE     LABELS
rs-test-2srdx   1/1     Running   0          2m46s   app=guestbool,tier=canary
rs-test-4t548   1/1     Running   0          4s      app=guestbool,tier=frontend
rs-test-ftzsk   1/1     Running   0          2m46s   app=guestbool,tier=frontend
rs-test-v2nlj   1/1     Running   0          2m46s   app=guestbool,tier=frontend

[root@master01 ~]# cat pod-rs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    tier: canary
spec:
  containers:
  - name: test1
    image: httpd:latest
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    tier: frontend
spec:
  containers:
  - name: test2
    image: httpd:latest
    imagePullPolicy: IfNotPresent

[root@master01 ~]# kubectl apply -f pod-rs.yaml
[root@master01 ~]# kubectl get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE     LABELS
pod1            1/1     Running   0          9s      tier=canary
pod2            1/1     Running   0          9s      tier=frontend
rs-test-2srdx   1/1     Running   0          3m49s   app=guestbool,tier=canary
rs-test-4t548   1/1     Running   0          67s     app=guestbool,tier=frontend
rs-test-ftzsk   1/1     Running   0          3m49s   app=guestbool,tier=frontend
rs-test-v2nlj   1/1     Running   0          3m49s   app=guestbool,tier=frontend
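
Why pod1 and pod2 escape adoption here falls out of how a combined selector is evaluated: matchLabels and every matchExpressions term are ANDed together. A hypothetical sketch (helper name is mine; only the `In` operator from the manifest is modeled):

```python
# Sketch of ReplicaSet selector evaluation: matchLabels and all
# matchExpressions clauses must hold simultaneously (logical AND).

def selector_matches(pod_labels: dict, match_labels: dict, match_exprs: list) -> bool:
    if not all(pod_labels.get(k) == v for k, v in match_labels.items()):
        return False
    for expr in match_exprs:
        if expr["operator"] == "In":
            if pod_labels.get(expr["key"]) not in expr["values"]:
                return False
    return True

# Selector from rs.yaml above.
match_labels = {"tier": "frontend"}
match_exprs = [
    {"key": "tier", "operator": "In", "values": ["frontend", "canary"]},
    {"key": "app", "operator": "In", "values": ["guestbool", "test"]},
]

rs_pod = {"app": "guestbool", "tier": "frontend"}  # created by the RS template
pod2 = {"tier": "frontend"}                        # from pod-rs.yaml, no app label

print(selector_matches(rs_pod, match_labels, match_exprs))  # True  -> managed
print(selector_matches(pod2, match_labels, match_exprs))    # False -> not adopted
```

pod2 satisfies matchLabels but fails the `app in (guestbool, test)` expression, so it stays outside the ReplicaSet, matching the output above.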
