k8s Study Notes


Table of Contents

I. Pre-installation Preparation

II. Installation

1. Install kubelet, kubeadm, and kubectl

2. Bootstrap the cluster with kubeadm

  1) Pull the images each machine needs

  2) Initialize the master node

  3) Join worker nodes

3. Deploy the dashboard

  1) Install on the master node

  2) Set the access port

  3) Create an access account

  4) Get the login token

III. Hands-on Practice

1. Ways to create resources

2. Namespace

3. Pod

  Command-line approach

  Creating a Pod from YAML

  Running multiple containers in one Pod

  Test: starting two nginx containers in one Pod (port conflict)

4. Deployment

  1) Self-healing

  2) Multiple replicas

  3) Scaling

  4) Failover

  5) Rolling updates

  6) Version rollback

  7) More

5. Service


I. Pre-installation Preparation

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based Linux distributions, as well as distributions without a package manager.
  • 2 GB or more of RAM per machine (less will leave little room for your applications).
  • 2 or more CPU cores.
  • Full network connectivity among all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node (see the official docs for details; a quick check follows this list).
  • Certain ports must be open on the machines (see the official docs for details).
  • Swap disabled. You must disable swap for the kubelet to work properly.
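A quick way to verify that MAC addresses and product_uuid are unique across nodes (standard Linux commands, as suggested by the upstream docs):

# list network interfaces with their MAC addresses
ip link
# print this machine's product_uuid (needs root)
sudo cat /sys/class/dmi/id/product_uuid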
# Give each machine in the cluster a unique hostname
hostnamectl set-hostname xxxx

# Set SELinux to permissive mode (effectively disabling it) (a Linux security setting)
# Disable temporarily
sudo setenforce 0
# Disable permanently
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Check current memory usage (-m shows values in mebibytes)
free -m

# Disable swap
# Temporarily
swapoff -a
# Permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the sysctl settings
sudo sysctl --system
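An optional check that the module and the sysctls above actually took effect:

# confirm the br_netfilter module is loaded
lsmod | grep br_netfilter
# both values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables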

II. Installation

1. Install kubelet, kubeadm, and kubectl

# Tell the package manager where to download Kubernetes packages from
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Install
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

# Start kubelet and enable it on boot
# After starting, systemctl status kubelet will show it flapping between starting and stopping:
# it is stuck in a restart loop waiting for instructions from kubeadm, which is expected at this point
sudo systemctl enable --now kubelet

2. Bootstrap the cluster with kubeadm

1) Pull the images each machine needs

# Everything except kubelet runs as a container image; kubelet pulls and runs the other components
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh
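An optional check that the pull succeeded:

# the seven images above should now be listed locally
docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images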

2) Initialize the master node

# On every machine, add a hosts entry for the master
echo "192.168.31.27  cluster-endpoint" >> /etc/hosts

# Initialize the master node
# Run this on the master node only
# The advertise address must be the master's IP, and the control-plane endpoint must be the master's hostname
kubeadm init \
--apiserver-advertise-address=192.168.31.27 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.169.0.0/16

# None of these CIDR ranges may overlap; note that Docker already uses 172.17.0.1/16 after installation

# On the master node, this command can be used to check whether the install succeeded

kubectl get nodes

Record the output of kubeadm init; it will be needed later.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
# Step 1: copy and run the following
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

# Step 2: a pod network add-on must be installed (done below with Calico)
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

# To add more control-plane (master) nodes
  kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

# To add worker nodes; this token is valid for 24 hours
kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6
# Download the Calico network plugin manifest
curl https://docs.projectcalico.org/manifests/calico.yaml -O
# kubectl version shows v1.20.9, which maps to Calico v3.20
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O

# Calico assumes --pod-network-cidr=192.168.0.0/16 (the 192.168 range) by default;
# the cluster was initialized with 192.169.0.0/16 instead, so the corresponding CIDR in calico.yaml must be changed to match

# Install Calico into the cluster
kubectl apply -f calico.yaml

# List the applications deployed in the cluster; what Docker calls a container, k8s wraps in a pod
# -A shows all namespaces; without it only the default namespace is listed
# -w watches and streams status changes, e.g. a pod starting up
# watch -n 1 kubectl get pods -A   re-runs the query every second
# kubectl get pod -owide shows more detail, including the pod IP
kubectl get pods -A

3) Join worker nodes

# On each worker node, run the join command saved from the kubeadm init output
kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6

If the command hangs without output, turn off the firewall on the master node.

[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
        [WARNING Hostname]: hostname "k8s-node1" could not be reached
        [WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 192.168.31.1:53: no such host

What if the token has expired? Generate a new one (run on the master node):

kubeadm token create --print-join-command
kubeadm join cluster-endpoint:6443 --token qwzp8v.qfwfeh7x3pdc3a1r     --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6

3. Deploy the dashboard

1) Install on the master node

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

# For reference, the contents of recommended.yaml applied above:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

2) Set the access port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort

# Find the assigned NodePort and open it in the security group / firewall
kubectl get svc -A |grep kubernetes-dashboard

Access: https://<any node IP in the cluster>:<NodePort> (mine is 30427)

https://139.198.165.238:30427
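If you prefer not to edit the service interactively, a one-line patch does the same thing (a sketch; the service name and namespace come from the manifest above):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'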

3) Create an access account

# Create an access account: prepare a yaml file, e.g. vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash.yaml

4) Get the login token

# Retrieve the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"


eyJhbGciOiJSUzI1NiIsImtpZCI6IklYTTRxZHNTb0lkclltRnN0aDY2OXJ3RzlhUkxucjNISG1tbW44X3VFdVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWR6aHE0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjYzY0ODdiYy1mMWFhLTQwN2ItOTFkZC0yN2I3ODdlZGU2MjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.d9rUEo5u0-DRYnXfUn3nRhVTncCWDsijRQYQwTmeNdL0U8Dv8k_yUrJ4W1kV2AP9VArt-pv4U3eXM2ts875CT-3L6vpg6JE42WDtJy4ama92NLiX4n7HFdugThhoowAV53Ac_6O4YaTc7o-TROplowLkHZ4hDjo9OYo1u21QhhGfq9uGkBz6jsvUhCe5oTpxFmmjimUN3_yUsUFf6nwS0dWk_d986A-de0hLfj4-wC1_soWpFVIK7j0wjHk2brQbultH07YPsXb-c_brixl0QvsUqtCka9OUxSQ1nlgCqoVVWK30RwSw7GbDkzh798zfkONu_ofHejw_srxvmeqoPw

III. Hands-on Practice

1. Ways to create resources

  • Command line
  • YAML

2. Namespace

Namespaces partition and isolate cluster resources. By default they only isolate resources, not the network (network isolation requires extra configuration; a minimal sketch follows).
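A minimal NetworkPolicy sketch of mine (assuming the Calico CNI installed earlier, which enforces NetworkPolicy) that blocks all ingress traffic to pods in the hello namespace created below:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: hello            # the example namespace created below
spec:
  podSelector: {}             # empty selector = every pod in the namespace
  policyTypes:
  - Ingress                   # no ingress rules are listed, so all ingress is denied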

# List the namespaces in the cluster (ns is short for namespace)
kubectl get ns

# List the pods in a given namespace. Without -n the default namespace is used; -A lists all namespaces.
# Resources created without an explicit namespace go into the default namespace
kubectl get pods -n kubernetes-dashboard

# Create a custom namespace
kubectl create ns hello

# Delete a custom namespace. Do not delete system namespaces; deleting default is refused.
# Deleting a namespace also deletes the resources inside it.
kubectl delete ns hello

Creating a namespace from YAML:

apiVersion: v1
kind: Namespace
metadata:
  name: hello

If a resource was created from YAML, it is best to delete it with the same YAML:

kubectl apply -f hello.yaml

kubectl delete -f hello.yaml

3. Pod

A Pod is a group of running containers; it is the smallest unit of an application in Kubernetes.

(k8s wraps Docker containers one more layer: a Pod may hold one container or several, and the group forms a single atomic Pod.)

Command-line approach

# Create a pod
kubectl run mynginx --image=nginx

# Show the pod's description
kubectl describe pod mynginx

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned default/mynginx to k8s-node2
  Normal  Pulling    13m   kubelet            Pulling image "nginx"
  Normal  Pulled     12m   kubelet            Successfully pulled image "nginx" in 49.9333842s
  Normal  Created    12m   kubelet            Created container mynginx
  Normal  Started    12m   kubelet            Started container mynginx

# The pod was scheduled onto worker node k8s-node2; under the hood it is still a Docker container and shows up in docker ps there

# Delete the pod
kubectl delete pod mynginx
# -n specifies the namespace
#kubectl delete pod mynginx -n default
# Delete several pods at once by separating their names with spaces
kubectl delete pod myapp mynginx -n default

Creating a Pod from YAML

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
  namespace: default
spec:
  containers:
  - image: nginx
    name: mynginx

kubectl apply -f pod.yaml

kubectl delete -f pod.yaml

# View the pod's logs; only pods have logs, so the word "pod" can be omitted. -f follows the log stream
kubectl logs mynginx
kubectl logs -f mynginx

# k8s assigns every pod its own IP
# --pod-network-cidr=192.169.0.0/16 was set when the master node was initialized
# The application is reachable at <pod IP>:<container port>
# Any machine in the cluster, and any application running in it, can reach the Pod through that IP
# At this point the pod is NOT reachable from outside the cluster
# curl 192.169.169.132
kubectl get pod -owide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
mynginx   1/1     Running   0          4m17s   192.169.169.132   k8s-node2   <none>           <none>

# Get a shell inside the pod (the dashboard's Exec button does the same)
kubectl exec -it mynginx -- /bin/bash

When creating a pod in the dashboard, choose the right namespace first; otherwise the namespace must be set in the YAML.

The dashboard pages also provide logs, describe, delete, exec, and so on, matching the commands above.

Running multiple containers in one Pod

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat

# Check the IPs
kubectl get pod -owide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
myapp     2/2     Running   0          3m53s   192.169.36.66     k8s-node1   <none>           <none>
mynginx   1/1     Running   0          36m     192.169.169.132   k8s-node2   <none>           <none>

# Access nginx
curl 192.169.36.66

# Access tomcat
curl 192.169.36.66:8080

# Containers in the same pod reach each other over 127.0.0.1
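For example (a usage sketch; assumes curl is present in the nginx image), Tomcat answers on localhost from inside the nginx container, because containers in one pod share the network namespace:

# run curl inside the nginx container of the myapp pod
kubectl exec -it myapp -c nginx -- curl http://127.0.0.1:8080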

 

Test: starting two nginx containers in one Pod (port conflict)

# myapp-2 fails
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
myapp     2/2     Running   0          19m
myapp-2   1/2     Error     1          51s
mynginx   1/1     Running   0          51m

# Troubleshooting
kubectl describe pod myapp-2
# Check the logs, either on the command line or in the dashboard
# -c selects a container inside the pod; required when the pod has more than one container
# This container is fine
kubectl logs -c nginx01 myapp-2
# "Address already in use"; k8s keeps retrying
kubectl logs -c nginx02 myapp-2

Food for thought: what if two nginx containers really are needed in one pod? Give them different ports? How? (One possible approach is sketched below.)
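One way to do it (my own sketch, not part of the original notes): make the second nginx listen on a different port by mounting a custom config from a ConfigMap. The names nginx-8080-conf and myapp-3 are made up for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-8080-conf
data:
  default.conf: |
    server {
      listen 8080;                         # second nginx listens on 8080 instead of 80
      location / { root /usr/share/nginx/html; }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-3
spec:
  containers:
  - image: nginx
    name: nginx01                          # keeps the default config, port 80
  - image: nginx
    name: nginx02
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d         # replaces the default server block
  volumes:
  - name: conf
    configMap:
      name: nginx-8080-conf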

4. Deployment

A Deployment controls pods, giving them multiple replicas, self-healing, scaling, and other capabilities.

1) Self-healing

# Create a pod the plain way
kubectl run mynginx --image=nginx

# Create a pod through a deployment (deployment can be shortened to deploy)
kubectl create deployment mytomcat --image=tomcat:8.5.68

# Compare the two approaches: k8s self-healing
# After kubectl delete pod mynginx, kubectl get pod shows that mynginx really is gone
# A deployment-created pod gets a random name, e.g. mytomcat-6f5f895f4f-668dp; if it is deleted,
# a replacement starts immediately, like restarting after a crash (self-healing)

# List deployments (deploy for short)
# -n namespace
kubectl get deployment

# Delete a deployment (deploy for short)
kubectl delete deployment -n default mytomcat

2) Multiple replicas

Command-line deployment:

# Create three replicas at once
kubectl create deploy my-dep --image=nginx --replicas=3

YAML deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx

The same deployment can also be created through the dashboard's form.

3) Scaling

# Scale up
kubectl scale deploy/my-dep --replicas=5

# Scale down; pods are chosen at random and the corresponding number are shut down
kubectl scale deploy/my-dep --replicas=2

# Scaling by editing the YAML
# Change replicas under spec
kubectl edit deploy my-dep

The scale action is also available in the dashboard.
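A non-interactive alternative to kubectl edit (a sketch) is to patch only the replica count:

# set the deployment to 4 replicas in one command
kubectl patch deploy/my-dep -p '{"spec":{"replicas":4}}'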

4) Failover

NAME                      READY   STATUS    RESTARTS   AGE   IP                NODE        NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-5wp9t   1/1     Running   0          28m   192.169.169.134   k8s-node2   <none>           <none>
my-dep-5b7868d854-cnlxs   1/1     Running   0          28m   192.169.36.70     k8s-node1   <none>           <none>
my-dep-5b7868d854-djbfq   1/1     Running   0          28m   192.169.169.135   k8s-node2   <none>           <none>

# Self-healing
# docker stop xxx on the node running my-dep-5b7868d854-cnlxs simulates a container crash
# k8s starts a new container; the old one shows up as Exited in docker ps -a
kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running   0          29m
my-dep-5b7868d854-djbfq   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   0/1     Completed   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          29m

# Failover
# After node1 is shut down manually, roughly 5 minutes later cnlxs is terminated and a new pod k9977 is started on node2: this is failover
# Until node1 comes back up, cnlxs stays in Terminating; it is only cleaned up after the node is running again
kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running   0          29m
my-dep-5b7868d854-djbfq   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   0/1     Completed   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          37m
my-dep-5b7868d854-cnlxs   1/1     Terminating   1          42m
my-dep-5b7868d854-k9977   0/1     Pending       0          0s
my-dep-5b7868d854-k9977   0/1     Pending       0          0s
my-dep-5b7868d854-k9977   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   0/1     ContainerCreating   0          9s
my-dep-5b7868d854-k9977   1/1     Running             0          11s

kubectl get pod -owide
NAME                      READY   STATUS        RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-5wp9t   1/1     Running       0          46m     192.169.169.134   k8s-node2   <none>           <none>
my-dep-5b7868d854-cnlxs   0/1     Terminating   1          46m     <none>            k8s-node1   <none>           <none>
my-dep-5b7868d854-djbfq   1/1     Running       0          46m     192.169.169.135   k8s-node2   <none>           <none>
my-dep-5b7868d854-k9977   1/1     Running       0          3m54s   192.169.169.137   k8s-node2   <none>           <none>

5) Rolling updates

Similar to a zero-downtime upgrade: instead of stopping the old pods outright, a new pod is started and then an old one is stopped, one at a time.

# Get the deployment as YAML to find the current image (- image: nginx); the pod description also shows the version
kubectl get deploy my-dep -oyaml

# In nginx=nginx:1.16.1, the left side is the container name and the right side is the new image version
# In practice updates are usually driven by YAML
kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record

kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          18h
my-dep-5b7868d854-djbfq   1/1     Running   0          18h
my-dep-5b7868d854-k9977   1/1     Running   0          17h
my-dep-6b48cbf4f9-sgnfc   0/1     Pending   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     Pending   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     ContainerCreating   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     ContainerCreating   0          0s
my-dep-6b48cbf4f9-sgnfc   1/1     Running             0          40s
my-dep-5b7868d854-k9977   1/1     Terminating         0          17h
my-dep-6b48cbf4f9-tfpb8   0/1     Pending             0          0s
my-dep-6b48cbf4f9-tfpb8   0/1     Pending             0          0s
my-dep-6b48cbf4f9-tfpb8   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   1/1     Terminating         0          17h
my-dep-6b48cbf4f9-tfpb8   0/1     ContainerCreating   0          2s
my-dep-6b48cbf4f9-tfpb8   1/1     Running             0          3s
my-dep-5b7868d854-djbfq   1/1     Terminating         0          18h
my-dep-6b48cbf4f9-kndkc   0/1     Pending             0          0s
my-dep-6b48cbf4f9-kndkc   0/1     Pending             0          0s
my-dep-6b48cbf4f9-kndkc   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-5b7868d854-djbfq   1/1     Terminating         0          18h
my-dep-6b48cbf4f9-kndkc   0/1     ContainerCreating   0          1s
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-6b48cbf4f9-kndkc   1/1     Running             0          17s
my-dep-5b7868d854-5wp9t   1/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   1/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h

In real use, how would a rolling update be written in YAML, and what exactly differs between the two versions? (A sketch follows below.)
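A hedged sketch of the YAML-driven approach: keep the same Deployment manifest as in the multi-replica example, change only the image tag, and kubectl apply it again; the rollout pacing can be tuned with the strategy block. Nothing here beyond the my-dep example comes from the original notes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most one extra pod during the rollout
      maxUnavailable: 1        # at most one pod down at a time
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx:1.16.1    # the only line that differs between the two versions
        name: nginx

# re-applying the edited manifest triggers the same rolling update
kubectl apply -f my-dep.yaml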

6) Version rollback

# Show the rollout history; every change made with --record is listed
kubectl rollout history deploy/my-dep

# Show the details of one revision
kubectl rollout history deploy/my-dep --revision=2

# Roll back to the previous revision; again one pod is started as another is stopped
kubectl rollout undo deploy/my-dep

# Roll back to a specific revision
kubectl rollout undo deploy/my-dep --to-revision=2

7) More

Besides Deployment, k8s has other resource types such as StatefulSet, DaemonSet, and Job; collectively they are called workloads.

Stateful applications are deployed with a StatefulSet, stateless applications with a Deployment.

Workload resources | Kubernetes (official documentation)

5. Service

Service provides service discovery and load balancing for Pods: an abstraction that exposes a group of Pods as a single network service.

# Expose the deployment: the service listens on port 8000 and forwards to port 80 inside the pods
kubectl expose deploy my-dep --port=8000 --target-port=80

# Check
kubectl get service

# Note: a service selects its group of pods by labels taken from the deployment (app:{name} by default); --show-labels displays them

# curl serviceIp:servicePort
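A rough YAML equivalent of the expose command above (a sketch of mine; the selector reuses the app: my-dep label from the Deployment earlier):

apiVersion: v1
kind: Service
metadata:
  name: my-dep
spec:
  selector:
    app: my-dep            # picks up the pods created by the my-dep Deployment
  ports:
  - port: 8000             # port the service listens on
    targetPort: 80         # port of the container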

* The notes above are a record of learning from Lei Fengyang's video course, plus some of my own understanding; corrections are welcome.
