A Complete Deployment of a Kubernetes v1.28.0 Cluster

How do you deploy a complete Kubernetes v1.28.0 cluster from scratch?

1. Environment Overview

Hypervisor: ESXi 6.7

OS: CentOS 7.9 (2009), x86_64

Specs: 4 vCPUs / 8 GB RAM per node (the official minimum is 2 CPUs / 2 GB)

192.168.0.137   master node

192.168.0.139   node2

192.168.0.138   node1 (used later for a node scale-out exercise)

2. Host Configuration

2.1. Configure the firewall on all nodes. This is a lab environment, so the firewall is simply turned off for convenience. In production, unless the network is properly isolated from the public internet, do not disable it; keep it running and open only the ports you actually need.

systemctl stop firewalld      # stop the firewall
systemctl disable firewalld   # do not start it at boot

2.2. Disable SELinux on all nodes

# edit /etc/selinux/config and set SELINUX=permissive
vi /etc/selinux/config
or
# set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

2.3. Disable swap on all nodes

# permanently disable swap: delete or comment out the swap entry in /etc/fstab
nano /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
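
The fstab change only takes effect on the next reboot. To turn swap off immediately for the current session as well, run:

swapoff -a    # disable all swap devices right away
free -h       # verify: the Swap line should read 0B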

2.4. Set the timezone

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
date

2.5. Configure /etc/hosts on all nodes

192.168.0.137   master
192.168.0.139   node2
192.168.0.138   node1
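
For example, the entries can be appended on each node with a heredoc (adjust the IPs to your environment):

cat <<EOF >> /etc/hosts
192.168.0.137   master
192.168.0.139   node2
192.168.0.138   node1
EOF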

2.6. Enable bridge-nf-call-iptables

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# apply the sysctl parameters without rebooting
sudo sysctl --system

Confirm that the `br_netfilter` and `overlay` modules are loaded:

lsmod | grep br_netfilter
lsmod | grep overlay

Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl configuration:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

3. Install containerd on all nodes

3.1. Install containerd

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install containerd.io

3.2. Generate the default config.toml

containerd config default > /etc/containerd/config.toml

3.3. Configure the systemd cgroup driver in /etc/containerd/config.toml

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

After the change, the runc options section should look like this:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
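
You can confirm the setting took effect with:

grep -n 'SystemdCgroup' /etc/containerd/config.toml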

Also change the sandbox_image download address to the Aliyun mirror:

[plugins."io.containerd.grpc.v1.cri"]
    ...
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
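
If you prefer to script this edit, a sed one-liner along these lines should work regardless of the default value (a sketch; double-check the resulting line afterwards):

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml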

3.4. Start containerd and enable it at boot

systemctl restart containerd && systemctl enable containerd

4. Configure the Aliyun yum repository for Kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name = Kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled = 1
gpgcheck = 0
repo_gpgcheck = 0
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5. Install kubeadm, kubelet, and kubectl via yum

These instructions apply to Kubernetes 1.28. At the time of writing, the Aliyun yum repository only carried these packages up to version 1.28.0, so the version must be pinned explicitly:

yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0 --disableexcludes=kubernetes
systemctl enable kubelet
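
Confirm the pinned versions were installed:

kubeadm version -o short
kubectl version --client
kubelet --version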

6. Initialize the master node
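
Optionally, pre-pull the control-plane images first; this speeds up init and surfaces registry problems early:

kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0

Then run the initialization: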

kubeadm init \
--apiserver-advertise-address=192.168.0.137 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

Output like the following indicates the control plane was initialized successfully:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.137:6443 --token 2piab7.b39dqm9kpadxynkm \
        --discovery-token-ca-cert-hash sha256:c0bc36fedc05d4613ad03c1d6b8639dedb3fd3136d6a6be400e179410e0a0bff

Then follow the prompts above step by step:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

The master node is now visible:

kubectl get node

7. Join the worker nodes to the cluster

kubeadm join 192.168.0.137:6443 --token 2piab7.b39dqm9kpadxynkm \
        --discovery-token-ca-cert-hash sha256:c0bc36fedc05d4613ad03c1d6b8639dedb3fd3136d6a6be400e179410e0a0bff

If the join command hangs, the token has most likely expired. Go back to the master node and create a new one:

kubeadm token create
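
Alternatively, a single command prints a complete, ready-to-run join command with a fresh token:

kubeadm token create --print-join-command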

Substitute the new token into the join command and run it again. Both the master and the worker nodes should now be visible:

kubectl get node

8. Deploy the CNI network plugin

8.1. Download the CNI plugins

wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
mkdir -pv /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/

8.2. Install Flannel on the master

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Some networks may not be able to reach that URL. If so, save the following manifest locally (e.g. as kube-flannel.yml) and apply it instead:
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.24.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.24.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
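
Apply the saved manifest (assuming the local filename kube-flannel.yml suggested above):

kubectl apply -f kube-flannel.yml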

8.3. Check the nodes

kubectl get node
[root@master containerd]# kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
master    Ready    control-plane   115m   v1.28.0
worker2   Ready    <none>          112m   v1.28.0

Both nodes have reached Ready. Next, on the master, check the status of all pods:

kubectl get pods -A
[root@master containerd]# kubectl get pods -A
NAMESPACE      NAME                                READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-knclw               1/1     Running   0          99m
kube-flannel   kube-flannel-ds-psnhd               1/1     Running   0          99m
kube-system    coredns-66f779496c-65t9r            1/1     Running   0          116m
kube-system    coredns-66f779496c-sfzz6            1/1     Running   0          116m
kube-system    etcd-master                         1/1     Running   1          116m
kube-system    kube-apiserver-master               1/1     Running   1          117m
kube-system    kube-controller-manager-master      1/1     Running   1          117m
kube-system    kube-proxy-sfrr8                    1/1     Running   0          113m
kube-system    kube-proxy-vwn6z                    1/1     Running   0          116m
kube-system    kube-scheduler-master               1/1     Running   1          116m
testing-sc     server-dashboard-7cfc5c6cb6-jrs9d   1/1     Running   0          25m
[root@master containerd]#

9. Dashboard

For a dashboard, I still personally recommend Kuboard (https://kuboard.cn/).

10. Troubleshooting issues during deployment

crictl ps

The command reports the following warnings and error:

[root@worker2 containerd]# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
E0105 11:02:34.298539   32345 remote_runtime.go:390] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},}"
FATA[0000] listing containers: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"

Cause

crictl probes its default runtime endpoints in order. The first one, unix:///var/run/dockershim.sock, does not exist, so the command fails. You need to tell crictl explicitly which container runtime your Kubernetes cluster uses.

Since Kubernetes 1.24 removed dockershim, Docker-based setups use cri-dockerd instead, in which case run:

crictl config runtime-endpoint unix:///var/run/cri-dockerd.sock

If your runtime is containerd, as in this guide, point it at the containerd socket instead:

crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

After that, crictl commands work as expected.

Note: the generated configuration is stored in /etc/crictl.yaml and can be edited at any time.
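
For reference, a typical /etc/crictl.yaml for a containerd runtime looks roughly like this (the values are an assumption; adjust them to your setup):

runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false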

Configuring a private image registry

In /etc/containerd/config.toml, locate the [plugins."io.containerd.grpc.v1.cri".registry] section:

[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."your.harbor.registry"]
      endpoint = ["https://your.harbor.registry"]  # match the registry's scheme: https if it serves https, http if it serves http
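
If the registry uses a self-signed certificate, TLS verification can additionally be relaxed in the same file; a sketch using containerd's registry.configs options (restart containerd afterwards):

[plugins."io.containerd.grpc.v1.cri".registry.configs]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."your.harbor.registry".tls]
    insecure_skip_verify = true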

11. Command differences between Docker, containerd, and crictl

| Operation | Docker | containerd (ctr) | crictl (K8s) |
| --- | --- | --- | --- |
| List running containers | docker ps | ctr task ls | crictl ps |
| List images | docker images | ctr image ls | crictl images |
| View container logs | docker logs | - | crictl logs |
| Inspect a container | docker inspect | ctr container info | crictl inspect |
| View container resource usage | docker stats | - | crictl stats |
| Start/stop an existing container | docker start/stop | ctr task start/kill | crictl start/stop |
| Run a new container | docker run | ctr run | - |
| Tag an image | docker tag | ctr image tag | - |
| Create a new container | docker create | ctr container create | crictl create |
| Import an image | docker load | ctr image import | - |
| Export an image | docker save | ctr image export | - |
| Delete a container | docker rm | ctr container rm | crictl rm |
| Delete an image | docker rmi | ctr image rm | crictl rmi |
| Pull an image | docker pull | ctr image pull | crictl pull |
| Push an image | docker push | ctr image push | - |
| Exec into a running container | docker exec | - | crictl exec |
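
One caveat when using ctr on a Kubernetes node: containerd keeps Kubernetes-managed images and containers in the k8s.io namespace, so pass -n k8s.io explicitly. For example:

ctr -n k8s.io image ls                                    # list the images kubelet sees
ctr -n k8s.io image pull docker.io/library/nginx:latest   # pull into the k8s.io namespace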

12. Deploy the ingress-nginx controller

Save the following manifest as ingress-nginx.yaml:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-nginx-leader
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "false"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.0
    spec:
      hostNetwork: true
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: giantswarm/ingress-nginx-controller:v1.9.0
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.0
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: dyrnq/kube-webhook-certgen:v20230407
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.9.0
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: dyrnq/kube-webhook-certgen:v20230407
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.0
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None

Deploy ingress-nginx:

kubectl apply -f ingress-nginx.yaml

# verify that ingress-nginx deployed successfully
[root@master containerd]# kubectl  get all -n ingress-nginx
NAME                                           READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-mr7t8       0/1     Completed   0          70m
pod/ingress-nginx-admission-patch-hnv5n        0/1     Completed   0          70m
pod/ingress-nginx-controller-8dbf764f7-dzwtl   1/1     Running     0          3m14s

NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.97.182.16     <none>        80:32542/TCP,443:31704/TCP   70m
service/ingress-nginx-controller-admission   ClusterIP   10.102.179.254   <none>        443/TCP                      70m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           70m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-544b486766   0         0         0       70m
replicaset.apps/ingress-nginx-controller-8dbf764f7    1         1         1       3m14s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           15s        70m
job.batch/ingress-nginx-admission-patch    1/1           18s        70m

Test ingress-nginx with the following manifest, saved as nginx-tomcat-test.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: testing-sc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: testing-sc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat-pod
  template: 
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.0-alpine
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: testing-sc
spec:
  selector:
    app: nginx-pod
  type: ClusterIP
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: testing-sc
spec:
  selector:
    app: tomcat-pod
  type: ClusterIP
  ports:
  - port: 8080
    name: http
    protocol: TCP
    targetPort: 8080


---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: testing-sc
spec:
  ingressClassName: nginx
  rules:
  - host: "tomcat.demo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080
  - host: "nginx.demo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-service
            port:
              number: 80


# Note: I omitted the namespace definition because the testing-sc namespace already existed in my cluster. If yours does not, create it with:
kubectl create ns testing-sc
# or with the following YAML:
apiVersion: v1
kind: Namespace
metadata:
   name: testing-sc

Once everything is ready, apply the manifest to the cluster:

kubectl apply -f nginx-tomcat-test.yaml

# check the deployment status
kubectl get all -n testing-sc

Add name resolution entries to the hosts file of the client machine:

192.168.0.139 nginx.demo.com tomcat.demo.com
# use the IP of whichever node the ingress-nginx pod was scheduled on (the pod can also be pinned to a specific node)
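
Routing can then be verified with curl; the Host header can also be set directly instead of editing hosts (the IP below assumes the controller pod landed on node2):

curl -H "Host: nginx.demo.com" http://192.168.0.139/    # should return the nginx welcome page
curl -H "Host: tomcat.demo.com" http://192.168.0.139/   # should return the Tomcat landing page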
