CentOS 7.9 + Kubernetes 1.28.3 + Docker 24.0.6 HA Cluster Binary Deployment


  • Check the version relationships
## kubeadm is included in the extracted kubernetes-server-linux-amd64.tar.gz
]# ./kubeadm config images list
W1022 20:06:05.647976   29233 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1022 20:06:05.648046   29233 version.go:105] falling back to the local client version: v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

1. Overall Planning

1.1 Obtaining the software

All software is open source; the source files come from official download links or official mirror links.

1.1.1 CentOS 7.9

https://mirrors.tuna.tsinghua.edu.cn/centos/7.9.2009/isos/x86_64/
Either the 2009 build or the 2207 build works.
Download: https://mirrors.tuna.tsinghua.edu.cn/centos/7.9.2009/isos/x86_64/CentOS-7-x86_64-Minimal-2009.iso
Download: https://mirrors.tuna.tsinghua.edu.cn/centos/7.9.2009/isos/x86_64/CentOS-7-x86_64-Minimal-2207-02.iso

1.1.2 Kubernetes v1.28.3

https://github.com/kubernetes/kubernetes/tree/v1.28.3
Download: https://dl.k8s.io/v1.28.3/kubernetes-server-linux-amd64.tar.gz

1.1.3 Docker 24.0.6

https://download.docker.com/linux/static/stable/x86_64
Download: https://download.docker.com/linux/static/stable/x86_64/docker-24.0.6.tgz

1.1.4 cri-dockerd 0.3.5

https://github.com/Mirantis/cri-dockerd
Download: https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.5/cri-dockerd-0.3.5.amd64.tgz

1.1.5 etcd v3.5.9

https://github.com/etcd-io/etcd/
Download: https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz

1.1.6 cfssl v1.6.4

https://github.com/cloudflare/cfssl/releases
Download: https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64
Download: https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64
Download: https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl-certinfo_1.6.4_linux_amd64

1.1.7 flannel v0.22.3

https://github.com/flannel-io/flannel
Download: https://github.com/flannel-io/flannel/releases/download/v0.22.3/flanneld-amd64
Download: https://github.com/flannel-io/flannel/releases/download/v0.22.3/flannel-v0.22.3-linux-amd64.tar.gz

1.1.8 cni-plugins v1.2.0, cni-plugin v1.2.0

Download: https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
Download: https://github.com/flannel-io/cni-plugin/releases/download/v1.2.0/cni-plugin-flannel-linux-amd64-v1.2.0.tgz

1.1.9 nginx 1.24.0

http://nginx.org/en/download.html
Download: http://nginx.org/download/nginx-1.24.0.tar.gz

1.2 Cluster host planning

1.2.1 VM installation and host setup
  • Hostnames and local hosts resolution

Hostname       Host IP         Docker bip      Role
k8s1.28.3-1    192.168.26.51   10.26.51.1/24   master&worker, etcd, docker, flannel
k8s1.28.3-2    192.168.26.52   10.26.52.1/24   master&worker, etcd, docker, flannel
k8s1.28.3-3    192.168.26.53   10.26.53.1/24   master&worker, etcd, docker, flannel
[root@vm51 ~]# hostname      ## change it with: hostnamectl set-hostname k8s-51
k8s-51

src]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.26.51 k8s1.28.3-1 vm51 etcd_node1 k8s-51
192.168.26.52 k8s1.28.3-2 vm52 etcd_node2 k8s-52
192.168.26.53 k8s1.28.3-3 vm53 etcd_node3 k8s-53
  • Disable the firewall
 ~]# systemctl disable --now firewalld
 Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
 Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
 ~]# firewall-cmd --state
 not running
  • Disable SELinux
 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
 ~]# reboot
 ~]# sestatus
 SELinux status:                 disabled

Changing the SELinux configuration requires an OS reboot.

  • Disable swap
 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
 ~]# swapoff -a && sysctl -w vm.swappiness=0
 ~]# cat /etc/fstab
 #UUID=2ee3ecc4-e9e9-47c5-9afe-7b7b46fd6bab swap                    swap    defaults        0 0
 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         171        3460          19         138        3402
Swap:             0           0           0
  • Configure ulimit
 ~]# ulimit -SHn 65535
 ~]# cat << EOF >> /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
1.2.2 Install the ipvs management tools and load the modules

For offline installation, obtain the RPM packages first; see the appendix: Installing the ipvs tools.

~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
 ...
~]# cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
 
~]# systemctl restart systemd-modules-load.service
 
~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  4
ip_vs                 145458  10 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv6      18935  3
nf_defrag_ipv6         35104  1 nf_conntrack_ipv6
nf_conntrack_netlink    40492  0
nf_conntrack_ipv4      19149  5
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          143360  10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
1.2.3 Load the containerd-related kernel modules
 Load the modules immediately:
 ~]# modprobe overlay
 ~]# modprobe br_netfilter
 
 Load the modules persistently:
 ~]# cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
 
 Enable loading at boot:
 ~]# systemctl restart systemd-modules-load.service   ## systemctl enable --now systemd-modules-load.service
1.2.4 Enable kernel IP forwarding and bridge filtering
 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
 
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
 ~]# sysctl --system
 After configuring the kernel parameters on all nodes, reboot the servers to make sure the settings are still applied after a restart.
 ~]# reboot
 
 Check the ipvs modules after the reboot:
 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
 
 Check the containerd-related modules after the reboot:
 ~]# lsmod | egrep 'br_netfilter|overlay'
1.2.5 Host time synchronization
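
Keep the clocks of all nodes in sync. A minimal sketch using chrony (assumptions: the stock CentOS 7 chrony package and a reachable NTP source; for an offline environment, point the server lines in /etc/chrony.conf at a local NTP server):
 ~]# yum install chrony -y
 ~]# systemctl enable --now chronyd
 ~]# chronyc sources      ## verify the time sources are reachable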

1.2.6 Create the deployment directories
  • /opt/app  stores the deployment resource files
  • /opt/cfg  stores the configuration files
  • /opt/cert stores the certificates
  • /opt/bin  holds symlinks to the files under /opt/app, with version numbers stripped from the link names, so an upgrade only requires re-pointing the links.
 ]# mkdir -p /opt/{app,cfg,cert,bin}

1.3 Cluster network planning

Network          CIDR              Notes
Node network     192.168.26.0/24   Node IP: the node's address, i.e. the physical (host) NIC address.
Service network  10.168.0.0/16     Cluster IP (Service IP): addresses for Services; --service-cluster-ip-range defines this range.
Pod network      10.26.0.0/16      Pod IP: addresses for Pods, assigned from the container (docker0) bridge; --cluster-cidr defines this CIDR.

Configuration:

 apiserver:
 --service-cluster-ip-range 10.168.0.0/16    ## Service network 10.168.0.0/16
 
 controller:
 --cluster-cidr 10.26.0.0/16   ## Pod network 10.26.0.0/16
 --service-cluster-ip-range 10.168.0.0/16   ## Service network 10.168.0.0/16
 
 kubelet:
 --cluster-dns 10.168.0.2   ## resolves Services; 10.168.0.2
 
 proxy:
 --cluster-cidr 10.26.0.0/16   ## Pod network 10.26.0.0/16

1.4 Manifests and images

Type      Name                               Download link / image                                                                                Notes
Manifest  coredns.yaml.base                  https://github.com/kubernetes/kubernetes/blob/v1.27.6/cluster/addons/dns/coredns/coredns.yaml.base   deploy
Manifest  recommended.yaml                   https://github.com/kubernetes/dashboard/blob/v2.7.0/aio/deploy/recommended.yaml                      deploy
Manifest  components.yaml (metrics server)   https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.4/components.yaml           deploy
Image     metrics-server:v0.6.4              registry.aliyuncs.com/google_containers/metrics-server:v0.6.4                                        deploy
Image     coredns:v1.10.1                    registry.aliyuncs.com/google_containers/coredns:v1.10.1                                              deploy
Image     dashboard:v2.7.0                   kubernetesui/dashboard:v2.7.0                                                                        deploy
Image     metrics-scraper:v1.0.8             kubernetesui/metrics-scraper:v1.0.8                                                                  deploy
Image     pause:3.9                          registry.aliyuncs.com/google_containers/pause:3.9                                                    deploy

1.5 Deploy the certificate generation tools

app]# mv cfssl_1.6.4_linux_amd64 /usr/local/bin/cfssl
app]# mv cfssl-certinfo_1.6.4_linux_amd64 /usr/local/bin/cfssl-certinfo
app]# mv cfssljson_1.6.4_linux_amd64 /usr/local/bin/cfssljson
app]# chmod +x /usr/local/bin/cfssl*
app]# cfssl version
Version: 1.6.4
Runtime: go1.18

2. Installing Docker

2.1 Extract, create symlinks, distribute

~]# cd /opt/app/
app]# tar zxvf docker-24.0.6.tgz
app]# mv docker /opt/bin/docker-24.0.6
app]# ls  /opt/bin/docker-24.0.6
containerd  containerd-shim-runc-v2  ctr  docker  dockerd  docker-init  docker-proxy  runc
app]# ln -s /opt/bin/docker-24.0.6/containerd /usr/bin/containerd
app]# ln -s /opt/bin/docker-24.0.6/containerd-shim-runc-v2 /usr/bin/containerd-shim-runc-v2
app]# ln -s /opt/bin/docker-24.0.6/ctr /usr/bin/ctr
app]# ln -s /opt/bin/docker-24.0.6/docker /usr/bin/docker
app]# ln -s /opt/bin/docker-24.0.6/dockerd /usr/bin/dockerd
app]# ln -s /opt/bin/docker-24.0.6/docker-init /usr/bin/docker-init
app]# ln -s /opt/bin/docker-24.0.6/docker-proxy /usr/bin/docker-proxy
app]# ln -s /opt/bin/docker-24.0.6/runc /usr/bin/runc
app]# docker -v
Docker version 24.0.6, build ed223bc
## Copy to k8s-51 and k8s-52
app]# scp -r /opt/bin/docker-24.0.6/ root@k8s-51:/opt/bin/.
app]# scp -r /opt/bin/docker-24.0.6/ root@k8s-52:/opt/bin/.
## Create the symlinks on k8s-51 and k8s-52: same as above (see the loop sketch below)
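A loop sketch for the symlink step on k8s-51 and k8s-52 (the same eight binaries and paths as above):
]# for f in containerd containerd-shim-runc-v2 ctr docker dockerd docker-init docker-proxy runc; do ln -s /opt/bin/docker-24.0.6/$f /usr/bin/$f; done
]# docker -v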

2.2 Create directories and config files

  • On all three hosts:
]# mkdir -p /data/docker /etc/docker
## k8s-51
]# cat /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32310"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "10.26.51.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

## k8s-52
]# cat /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32310"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "10.26.52.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

## k8s-53
]# cat /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32310"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "10.26.53.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
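
A malformed daemon.json keeps dockerd from starting; a quick syntax check before starting the service (CentOS 7 ships Python 2.7):
]# python -m json.tool /etc/docker/daemon.json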

2.3 Create the systemd unit file

]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target

2.4 Start and check

]# systemctl daemon-reload
]# systemctl start docker; systemctl enable docker
app]# docker info
Client:
 Version:    24.0.6
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 24.0.6
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7880925980b188f4c97b462f709d0db8e8962aff
 runc version: v1.1.9-0-gccaecfc
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 3.10.0-1160.90.1.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.682GiB
 Name: k8s-53
 ID: 8467715c-5b8b-4086-b703-4017ddf38e12
 Docker Root Dir: /data/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  harbor.oss.com:32310
  127.0.0.0/8
 Registry Mirrors:
  https://5gce61mx.mirror.aliyuncs.com/
 Live Restore Enabled: true
 Product License: Community Engine
app]# docker version
Client:
 Version:           24.0.6
 API version:       1.43
 Go version:        go1.20.7
 Git commit:        ed223bc
 Built:             Mon Sep  4 12:30:51 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.6
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.7
  Git commit:       1a79695
  Built:            Mon Sep  4 12:32:17 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.7.3
  GitCommit:        7880925980b188f4c97b462f709d0db8e8962aff
 runc:
  Version:          1.1.9
  GitCommit:        v1.1.9-0-gccaecfc
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
app]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c6:99:5e brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.51/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec6:995e/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:43:aa:de:72 brd ff:ff:ff:ff:ff:ff
    inet 10.26.51.1/24 brd 10.26.51.255 scope global docker0
       valid_lft forever preferred_lft forever

2.5 Pull an image

On k8s-53:

k8s-53 app]# docker pull centos
...
app]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
centos       latest    5d0da3dc9764   2 years ago   231MB
k8s-53 app]# docker run -i -t --name test centos /bin/bash
[root@3add8e00e63b /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:0a:1a:35:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.26.53.2/24 brd 10.26.53.255 scope global eth0
       valid_lft forever preferred_lft forever

k8s-53 app]# docker ps -a
CONTAINER ID   IMAGE     COMMAND       CREATED          STATUS                       PORTS     NAMES
3add8e00e63b   centos    "/bin/bash"   47 seconds ago   Exited (127) 4 seconds ago             test
k8s-53 app]# docker rm test

3. Installing cri-dockerd

Why install the cri-dockerd plugin?

  • When Kubernetes was first open-sourced it had no container engine of its own, while Docker was extremely popular and practically synonymous with containers, so the kubelet embedded Docker integration code (the dockershim). Before v1.24, Docker was therefore the default runtime and cri-dockerd was not needed.

  • Kubernetes 1.24 removed the dockershim code, and Docker Engine does not natively implement the CRI standard, so the two could no longer integrate directly. To bridge Docker Engine to the CRI spec, Mirantis and Docker jointly created cri-dockerd, which lets Docker continue to serve as a Kubernetes container engine.

  • As of October 20, 2023, Kubernetes has reached v1.28.3. Since v1.24, Docker cannot serve as the runtime directly, because dockershim, a component maintained by the Kubernetes team rather than by Docker, was removed. Docker and Kubernetes are thus no longer as tightly coupled as before, and operators need a CRI-compliant runtime such as containerd or CRI-O. This does not mean newer Kubernetes abandons Docker entirely (given Docker's vast ecosystem and user base, that would be hard to do): on a host that already runs Docker, installing cri-dockerd satisfies the CRI requirement. In a sense, cri-dockerd is a reimplementation of dockershim.

3.1 Extract, create symlinks, distribute

k8s-53 app]# tar -zxvf cri-dockerd-0.3.5.amd64.tgz
k8s-53 app]# mv cri-dockerd /opt/bin/cri-dockerd-0.3.5
k8s-53 app]# ll /opt/bin/cri-dockerd-0.3.5
total 41568
-rwxrwxr-x 1 1000 1000 42565632 Oct 17 05:49 cri-dockerd
k8s-53 app]# ln -s /opt/bin/cri-dockerd-0.3.5/cri-dockerd /usr/local/bin/cri-dockerd
k8s-53 app]# cri-dockerd --version
cri-dockerd 0.3.5 (cd730ff8)
Copy:
k8s-53 app]# scp -r /opt/bin/cri-dockerd-0.3.5 root@k8s-51:/opt/bin/.
k8s-53 app]# scp -r /opt/bin/cri-dockerd-0.3.5 root@k8s-52:/opt/bin/.
Create the symlinks: same as above (see the sketch below).
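The remote symlinks can also be created over SSH (a sketch, assuming root SSH access as already used for scp):
k8s-53 app]# ssh root@k8s-51 "ln -s /opt/bin/cri-dockerd-0.3.5/cri-dockerd /usr/local/bin/cri-dockerd"
k8s-53 app]# ssh root@k8s-52 "ln -s /opt/bin/cri-dockerd-0.3.5/cri-dockerd /usr/local/bin/cri-dockerd"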

3.2 Adjust the configuration and start

]# cat /usr/lib/systemd/system/cri-dockerd.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket ## if startup fails, comment out this line

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
]# systemctl daemon-reload
]# systemctl enable cri-dockerd && systemctl start cri-dockerd

]# systemctl status cri-dockerd
● cri-dockerd.service - CRI Interface for Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/cri-dockerd.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-10-20 16:11:13 CST; 7s ago
   ...

4. Creating Certificates

Create everything under /opt/cert on k8s-53, then distribute to the other nodes.

4.1 Create the root CA certificate

  • ca-csr.json
cert]# cat > ca-csr.json   << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cert]# ls ca*pem
ca-key.pem  ca.pem
  • ca-config.json
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
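Optionally inspect the generated CA; the validity should reflect the 876000h (about 100 years) configured above:
cert]# openssl x509 -in ca.pem -noout -subject -dates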

4.2 Create the etcd certificates

  • etcd-ca-csr.json
cat > etcd-ca-csr.json  << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
  • etcd-csr.json
cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF
cert]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
cert]# ls etcd-ca*pem
etcd-ca-key.pem  etcd-ca.pem
 
cert]# cfssl gencert \
   -ca=./etcd-ca.pem \
   -ca-key=./etcd-ca-key.pem \
   -config=./ca-config.json \
   -hostname=127.0.0.1,k8s-51,k8s-52,k8s-53,192.168.26.51,192.168.26.52,192.168.26.53 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare ./etcd
cert]# ls etcd*pem
etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem

4.3 Create the kube-apiserver certificate

  • apiserver-csr.json
cat > apiserver-csr.json << EOF
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cert]# cfssl gencert   \
-ca=./ca.pem   \
-ca-key=./ca-key.pem   \
-config=./ca-config.json   \
-hostname=127.0.0.1,k8s-51,k8s-52,k8s-53,192.168.26.51,192.168.26.52,192.168.26.53,10.168.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local  \
-profile=kubernetes   apiserver-csr.json | cfssljson -bare ./apiserver
cert]# ls apiserver*pem
apiserver-key.pem  apiserver.pem
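Optionally verify that the SANs in the certificate match the -hostname list above:
cert]# openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"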
  • front-proxy-ca-csr.json
cat > front-proxy-ca-csr.json  << EOF
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  },
  "ca": {
    "expiry": "876000h"
  }
}
EOF
  • front-proxy-client-csr.json
cat > front-proxy-client-csr.json  << EOF
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
 ## Generate the kube-apiserver aggregation (front-proxy) certificates
cert]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare ./front-proxy-ca
cert]# ls front-proxy-ca*pem
front-proxy-ca-key.pem  front-proxy-ca.pem

cert]# cfssl gencert  \
-ca=./front-proxy-ca.pem   \
-ca-key=./front-proxy-ca-key.pem   \
-config=./ca-config.json   \
-profile=kubernetes front-proxy-client-csr.json | cfssljson -bare ./front-proxy-client
cert]# ls front-proxy-client*pem
front-proxy-client-key.pem  front-proxy-client.pem

4.4 Create the kube-controller-manager certificate

  • manager-csr.json, used to generate controller-manager.kubeconfig
cat > manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cert]# cfssl gencert \
   -ca=./ca.pem \
   -ca-key=./ca-key.pem \
   -config=./ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare ./controller-manager
cert]# ls controller-manager*pem
controller-manager-key.pem  controller-manager.pem
  • admin-csr.json, used to generate admin.kubeconfig
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cert]# cfssl gencert \
   -ca=./ca.pem \
   -ca-key=./ca-key.pem \
   -config=./ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare ./admin
cert]# ls admin*pem
admin-key.pem  admin.pem

4.5 Create the kube-scheduler certificate

  • scheduler-csr.json, used to generate scheduler.kubeconfig
cat > scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cert]# cfssl gencert \
   -ca=./ca.pem \
   -ca-key=./ca-key.pem \
   -config=./ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare ./scheduler
cert]# ls scheduler*pem
scheduler-key.pem  scheduler.pem

4.6 Create the kube-proxy certificate

  • kube-proxy-csr.json, used to generate kube-proxy.kubeconfig
cat > kube-proxy-csr.json  << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
cert]# cfssl gencert \
   -ca=./ca.pem \
   -ca-key=./ca-key.pem \
   -config=./ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare ./kube-proxy
cert]# ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem

4.7 Create the ServiceAccount key pair

cert]# openssl genrsa -out /opt/cert/sa.key 2048
cert]# openssl rsa -in /opt/cert/sa.key -pubout -out /opt/cert/sa.pub
cert]# ls /opt/cert/sa*
/opt/cert/sa.key  /opt/cert/sa.pub

4.8 Distribute the certificates to the other nodes

cert]# scp /opt/cert/* root@k8s-51:/opt/cert/.
cert]# scp /opt/cert/* root@k8s-52:/opt/cert/.
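A quick consistency check after copying (assumes the same /opt/cert layout on every node):
cert]# md5sum /opt/cert/ca.pem
cert]# ssh root@k8s-51 "md5sum /opt/cert/ca.pem"
cert]# ssh root@k8s-52 "md5sum /opt/cert/ca.pem"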

5. Deploying etcd

5.1 Prepare the etcd binaries

 ## Extract the package and place the binaries under /opt/bin
k8s-53 app]# tar -xf etcd-v3.5.9-linux-amd64.tar.gz -C /opt/bin
k8s-53 app]# ls /opt/bin/etcd-v3.5.9-linux-amd64/ -l
total 52408
drwxr-xr-x 3 528287 89939       40 May 11 19:40 Documentation
-rwxr-xr-x 1 528287 89939 22474752 May 11 19:40 etcd
-rwxr-xr-x 1 528287 89939 16998400 May 11 19:40 etcdctl
-rwxr-xr-x 1 528287 89939 14118912 May 11 19:40 etcdutl
-rw-r--r-- 1 528287 89939    42066 May 11 19:40 README-etcdctl.md
-rw-r--r-- 1 528287 89939     7359 May 11 19:40 README-etcdutl.md
-rw-r--r-- 1 528287 89939     9394 May 11 19:40 README.md
-rw-r--r-- 1 528287 89939     7896 May 11 19:40 READMEv2-etcdctl.md
k8s-53 app]# ln -s /opt/bin/etcd-v3.5.9-linux-amd64/etcdctl /usr/local/bin/etcdctl
k8s-53 app]# ln -s /opt/bin/etcd-v3.5.9-linux-amd64/etcd /opt/bin/etcd
k8s-53 app]# etcdctl version
etcdctl version: 3.5.9
 ## Copy the binaries to the other nodes
]# scp -r /opt/bin/etcd-v3.5.9-linux-amd64/ root@k8s-51:/opt/bin/
]# scp -r /opt/bin/etcd-v3.5.9-linux-amd64/ root@k8s-52:/opt/bin/
## Create the symlinks on the other nodes
]# ln -s /opt/bin/etcd-v3.5.9-linux-amd64/etcdctl /usr/local/bin/etcdctl
]# ln -s /opt/bin/etcd-v3.5.9-linux-amd64/etcd /opt/bin/etcd

5.2 Startup configuration files

  • k8s-51: /opt/cfg/etcd.config.yml
cat > /opt/cfg/etcd.config.yml << EOF
name: 'k8s-51'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.26.51:2380'
listen-client-urls: 'https://192.168.26.51:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.26.51:2380'
advertise-client-urls: 'https://192.168.26.51:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-51=https://192.168.26.51:2380,k8s-52=https://192.168.26.52:2380,k8s-53=https://192.168.26.53:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/cert/etcd.pem'
  key-file: '/opt/cert/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/cert/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/cert/etcd.pem'
  key-file: '/opt/cert/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/cert/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

Without enable-v2: true in the etcd startup configuration, etcd defaults to the v3 API, so ETCDCTL_API=3 can be omitted on the command line.

  • k8s-52: /opt/cfg/etcd.config.yml
cat > /opt/cfg/etcd.config.yml << EOF
name: 'k8s-52'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.26.52:2380'
listen-client-urls: 'https://192.168.26.52:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.26.52:2380'
advertise-client-urls: 'https://192.168.26.52:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-51=https://192.168.26.51:2380,k8s-52=https://192.168.26.52:2380,k8s-53=https://192.168.26.53:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/cert/etcd.pem'
  key-file: '/opt/cert/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/cert/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/cert/etcd.pem'
  key-file: '/opt/cert/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/cert/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
  • k8s-53: /opt/cfg/etcd.config.yml
cat > /opt/cfg/etcd.config.yml << EOF
name: 'k8s-53'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.26.53:2380'
listen-client-urls: 'https://192.168.26.53:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.26.53:2380'
advertise-client-urls: 'https://192.168.26.53:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-51=https://192.168.26.51:2380,k8s-52=https://192.168.26.52:2380,k8s-53=https://192.168.26.53:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/cert/etcd.pem'
  key-file: '/opt/cert/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/cert/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/cert/etcd.pem'
  key-file: '/opt/cert/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/cert/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
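
The three files differ only in the node name and the node's own IPs; initial-cluster is identical everywhere. A sketch that generates the k8s-51 and k8s-52 copies from the k8s-53 file instead of maintaining three near-duplicates (run on k8s-53; the initial-cluster* lines are deliberately excluded from the IP substitution):
]# for n in 51 52; do sed -e "s/^name: 'k8s-53'/name: 'k8s-$n'/" -e "/initial-cluster/!s/192\.168\.26\.53/192.168.26.$n/g" /opt/cfg/etcd.config.yml | ssh root@k8s-$n "cat > /opt/cfg/etcd.config.yml"; done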

5.3 Create the service, start, check

On the etcd nodes k8s-51, k8s-52, and k8s-53:

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/opt/bin/etcd --config-file=/opt/cfg/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

EOF

Start and check:

]# systemctl daemon-reload
]# systemctl enable --now etcd
]# systemctl status etcd
● etcd.service - Etcd Service
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-10-21 07:59:54 CST; 24s ago
     Docs: https://coreos.com/etcd/docs/latest/
 Main PID: 1564 (etcd)
    Tasks: 8
   Memory: 26.4M
   CGroup: /system.slice/etcd.service
           └─1564 /opt/bin/etcd --config-file=/opt/cfg/etcd.config.yml
    ...
## ETCDCTL_API=3 is the default
]# etcdctl --endpoints="192.168.26.51:2379,192.168.26.52:2379,192.168.26.53:2379" \
--cacert=/opt/cert/etcd-ca.pem --cert=/opt/cert/etcd.pem --key=/opt/cert/etcd-key.pem  endpoint health --write-out=table
+--------------------+--------+-------------+-------+
|      ENDPOINT      | HEALTH |    TOOK     | ERROR |
+--------------------+--------+-------------+-------+
| 192.168.26.51:2379 |   true | 10.751752ms |       |
| 192.168.26.53:2379 |   true | 11.157143ms |       |
| 192.168.26.52:2379 |   true | 29.083563ms |       |
+--------------------+--------+-------------+-------+
]# etcdctl --endpoints="192.168.26.51:2379,192.168.26.52:2379,192.168.26.53:2379" \
--cacert=/opt/cert/etcd-ca.pem --cert=/opt/cert/etcd.pem --key=/opt/cert/etcd-key.pem  endpoint status --write-out=table
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.26.51:2379 | f0355985a6ae32e9 |   3.5.9 |   29 kB |      true |      false |         2 |         11 |                 11 |        |
| 192.168.26.52:2379 | 675c2a43b7236ee3 |   3.5.9 |   20 kB |     false |      false |         2 |         11 |                 11 |        |
| 192.168.26.53:2379 | 1bd9fbe011426d48 |   3.5.9 |   20 kB |     false |      false |         2 |         11 |                 11 |        |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
]# etcdctl --endpoints="192.168.26.51:2379,192.168.26.52:2379,192.168.26.53:2379" \
--cacert=/opt/cert/etcd-ca.pem --cert=/opt/cert/etcd.pem --key=/opt/cert/etcd-key.pem member list --write-out=table
+------------------+---------+--------+----------------------------+----------------------------+------------+
|        ID        | STATUS  |  NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+--------+----------------------------+----------------------------+------------+
| 1bd9fbe011426d48 | started | k8s-53 | https://192.168.26.53:2380 | https://192.168.26.53:2379 |      false |
| 675c2a43b7236ee3 | started | k8s-52 | https://192.168.26.52:2380 | https://192.168.26.52:2379 |      false |
| f0355985a6ae32e9 | started | k8s-51 | https://192.168.26.51:2380 | https://192.168.26.51:2379 |      false |
+------------------+---------+--------+----------------------------+----------------------------+------------+
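With the cluster healthy, this is also a convenient point to confirm that backups work (same certificate flags; snapshot save targets a single endpoint):
]# etcdctl --endpoints="192.168.26.51:2379" \
--cacert=/opt/cert/etcd-ca.pem --cert=/opt/cert/etcd.pem --key=/opt/cert/etcd-key.pem \
snapshot save /opt/app/etcd-snapshot.db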

6. Deploying flannel

Official references:

  • https://github.com/flannel-io/flannel/blob/v0.22.3/Documentation/configuration.md
  • https://github.com/flannel-io/flannel/blob/v0.22.3/Documentation/running.md

6.1 Confirm etcd is healthy

See the etcd section above.

6.2 Check the current routes

k8s-51:
]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
10.26.51.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32

k8s-52:
]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
10.26.52.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32

k8s-53:
]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
10.26.53.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32

Remove any other routes first, e.g.: route del -net 10.26.57.0 netmask 255.255.255.0 gw 192.168.26.53

6.3 Write the flannel network configuration into etcd

  • Run on any one etcd node
~]# etcdctl --endpoints="192.168.26.51:2379,192.168.26.52:2379,192.168.26.53:2379" \
--cacert=/opt/cert/etcd-ca.pem --cert=/opt/cert/etcd.pem --key=/opt/cert/etcd-key.pem \
put /coreos.com/network/config '{"Network": "10.26.0.0/16", "Backend": {"Type": "vxlan"}}'
  • Verify
~]# etcdctl --endpoints="192.168.26.51:2379,192.168.26.52:2379,192.168.26.53:2379" \
--cacert=/opt/cert/etcd-ca.pem --cert=/opt/cert/etcd.pem --key=/opt/cert/etcd-key.pem get /coreos.com/network/config
/coreos.com/network/config
{"Network": "10.26.0.0/16", "Backend": {"Type": "vxlan"}}

6.4 Extract flannel and create symlinks

k8s-53:

]# cd /opt/app
k8s-53 app]# mkdir /opt/bin/flannel-v0.22.3-linux-amd64
k8s-53 app]# tar zxvf flannel-v0.22.3-linux-amd64.tar.gz -C /opt/bin/flannel-v0.22.3-linux-amd64
flanneld
mk-docker-opts.sh
README.md
k8s-53 app]# scp -r /opt/bin/flannel-v0.22.3-linux-amd64 root@k8s-51:/opt/bin/.
k8s-53 app]# scp -r /opt/bin/flannel-v0.22.3-linux-amd64 root@k8s-52:/opt/bin/.
## Create the symlink
]# ln -s /opt/bin/flannel-v0.22.3-linux-amd64/flanneld /opt/bin/flanneld

6.5 Add the configuration file

Per node: k8s-51 uses --public-ip=192.168.26.51, k8s-52 uses --public-ip=192.168.26.52, k8s-53 uses --public-ip=192.168.26.53 (see the sed sketch below).

cat > /opt/cfg/kube-flanneld.conf << EOF
KUBE_FLANNELD_OPTS="--public-ip=192.168.26.51 \\
--etcd-endpoints=https://192.168.26.51:2379,https://192.168.26.52:2379,https://192.168.26.53:2379 \\
--etcd-keyfile=/opt/cert/etcd-key.pem \\
--etcd-certfile=/opt/cert/etcd.pem \\
--etcd-cafile=/opt/cert/etcd-ca.pem \\
--kube-subnet-mgr=false \\
--iface=ens32 \\
--iptables-resync=5 \\
--subnet-file=/run/flannel/subnet.env \\
--healthz-port=2401"
EOF
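After copying this file to the other nodes, only --public-ip changes; e.g. on k8s-52 (a sketch):
k8s-52 ~]# sed -i 's/--public-ip=192.168.26.51/--public-ip=192.168.26.52/' /opt/cfg/kube-flanneld.conf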

6.6 Add the systemd unit

cat > /usr/lib/systemd/system/kube-flanneld.service << EOF
[Unit]
Description=Kubernetes flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
EnvironmentFile=/opt/cfg/kube-flanneld.conf
## ExecStartPost=/usr/bin/mk-docker-opts.sh -- keep this line commented out
ExecStart=/opt/bin/flanneld \$KUBE_FLANNELD_OPTS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

6.7 Start and check

]# systemctl daemon-reload && systemctl start kube-flanneld && systemctl enable kube-flanneld
]# systemctl status kube-flanneld
● kube-flanneld.service - Kubernetes flanneld
   Loaded: loaded (/usr/lib/systemd/system/kube-flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-10-21 08:44:01 CST; 19s ago
  ...
Inspect the generated /run/flannel/subnet.env:
]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.26.0.0/16
FLANNEL_SUBNET=10.26.25.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
k8s-51 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c6:99:5e brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.51/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec6:995e/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9b:c4:eb:d4 brd ff:ff:ff:ff:ff:ff
    inet 10.26.51.1/24 brd 10.26.51.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether d6:f5:01:c5:16:88 brd ff:ff:ff:ff:ff:ff
    inet 10.26.6.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::d4f5:1ff:fec5:1688/64 scope link
       valid_lft forever preferred_lft forever

k8s-52 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:0d:04:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.52/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe0d:4bf/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:17:74:7a:4f brd ff:ff:ff:ff:ff:ff
    inet 10.26.52.1/24 brd 10.26.52.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 2a:86:94:58:85:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.26.75.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::2886:94ff:fe58:85c4/64 scope link
       valid_lft forever preferred_lft forever

k8s-53 app]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:24:1d:cf brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.53/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe24:1dcf/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:82:02:5d:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.26.53.1/24 brd 10.26.53.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 36:96:d4:5a:3a:71 brd ff:ff:ff:ff:ff:ff
    inet 10.26.25.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3496:d4ff:fe5a:3a71/64 scope link
       valid_lft forever preferred_lft forever
  • k8s-51
k8s-51 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
10.26.25.0      10.26.25.0      255.255.255.0   UG    0      0        0 flannel.1
10.26.51.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
10.26.75.0      10.26.75.0      255.255.255.0   UG    0      0        0 flannel.1
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
  • k8s-52
k8s-52 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
10.26.6.0       10.26.6.0       255.255.255.0   UG    0      0        0 flannel.1
10.26.25.0      10.26.25.0      255.255.255.0   UG    0      0        0 flannel.1
10.26.52.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
  • k8s-53
k8s-53 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
10.26.6.0       10.26.6.0       255.255.255.0   UG    0      0        0 flannel.1
10.26.53.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
10.26.75.0      10.26.75.0      255.255.255.0   UG    0      0        0 flannel.1
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32

6.8 Integrate with Docker

On each node, set "bip" in /etc/docker/daemon.json to the FLANNEL_SUBNET value from /run/flannel/subnet.env and restart Docker (a scripted sketch follows the listings below):

k8s-53 ~]# cat /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32310"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "10.26.25.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

k8s-53 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:24:1d:cf brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.53/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe24:1dcf/64 scope link
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 76:3e:f4:2d:6a:59 brd ff:ff:ff:ff:ff:ff
    inet 10.26.25.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::743e:f4ff:fe2d:6a59/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a3:cb:ee:74 brd ff:ff:ff:ff:ff:ff
    inet 10.26.25.1/24 brd 10.26.25.255 scope global docker0
       valid_lft forever preferred_lft forever
k8s-52 ~]# cat /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32310"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "10.26.75.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

k8s-52 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:0d:04:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.52/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe0d:4bf/64 scope link
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 02:e3:00:4d:cd:33 brd ff:ff:ff:ff:ff:ff
    inet 10.26.75.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::e3:ff:fe4d:cd33/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:6d:94:06:92 brd ff:ff:ff:ff:ff:ff
    inet 10.26.75.1/24 brd 10.26.75.255 scope global docker0
       valid_lft forever preferred_lft forever
k8s-51 ~]# cat /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["harbor.oss.com:32310"],
  "registry-mirrors": ["https://5gce61mx.mirror.aliyuncs.com"],
  "bip": "10.26.6.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

k8s-51 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c6:99:5e brd ff:ff:ff:ff:ff:ff
    inet 192.168.26.51/24 brd 192.168.26.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec6:995e/64 scope link
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 2e:2b:9b:b9:83:3d brd ff:ff:ff:ff:ff:ff
    inet 10.26.6.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::2c2b:9bff:feb9:833d/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:e3:ae:92:cf brd ff:ff:ff:ff:ff:ff
    inet 10.26.6.1/24 brd 10.26.6.255 scope global docker0
       valid_lft forever preferred_lft forever
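
The bip edits above follow a single rule: bip must equal the FLANNEL_SUBNET value from /run/flannel/subnet.env on that node. A sketch that applies the rule on any node (assumes the "bip" key already exists in daemon.json, as above):
~]# source /run/flannel/subnet.env
~]# sed -i "s#\"bip\": \"[^\"]*\"#\"bip\": \"${FLANNEL_SUBNET}\"#" /etc/docker/daemon.json
~]# systemctl restart docker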

6.9 Verify container-to-container and container-to-host access

Start a container on each of k8s-51, k8s-52, and k8s-53:

51 ~]# docker run -i -t --name node51 centos /bin/bash
[root@5b05cd016a5f /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:0a:1a:06:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.26.6.2/24 brd 10.26.6.255 scope global eth0
       valid_lft forever preferred_lft forever
52 ~]# docker run -i -t --name node52 centos /bin/bash
[root@41f15af3fdaf /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:0a:1a:4b:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.26.75.2/24 brd 10.26.75.255 scope global eth0
       valid_lft forever preferred_lft forever
53 ~]# docker run -i -t --name node53 centos /bin/bash
[root@890f2c8150dc /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:0a:1a:19:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.26.25.2/24 brd 10.26.25.255 scope global eth0
       valid_lft forever preferred_lft forever
6.9.1 Container-to-container access across hosts: from the container on k8s-51 to containers on other hosts
## ping the container 10.26.75.2 on k8s-52
[root@5b05cd016a5f /]# ping 10.26.75.2 -c 3
PING 10.26.75.2 (10.26.75.2) 56(84) bytes of data.
64 bytes from 10.26.75.2: icmp_seq=1 ttl=62 time=0.452 ms
64 bytes from 10.26.75.2: icmp_seq=2 ttl=62 time=0.540 ms
64 bytes from 10.26.75.2: icmp_seq=3 ttl=62 time=0.513 ms

--- 10.26.75.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.452/0.501/0.540/0.044 ms
## ping the container 10.26.25.2 on k8s-53
[root@5b05cd016a5f /]# ping 10.26.25.2 -c 3
PING 10.26.25.2 (10.26.25.2) 56(84) bytes of data.
64 bytes from 10.26.25.2: icmp_seq=1 ttl=62 time=0.497 ms
64 bytes from 10.26.25.2: icmp_seq=2 ttl=62 time=0.518 ms
64 bytes from 10.26.25.2: icmp_seq=3 ttl=62 time=0.528 ms

--- 10.26.25.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.497/0.514/0.528/0.022 ms
6.9.2 Container-to-host access across hosts: from the container on k8s-51 to all hosts
[root@5b05cd016a5f /]# ping 192.168.26.52 -c 3
PING 192.168.26.52 (192.168.26.52) 56(84) bytes of data.
64 bytes from 192.168.26.52: icmp_seq=1 ttl=63 time=0.377 ms
64 bytes from 192.168.26.52: icmp_seq=2 ttl=63 time=0.482 ms
64 bytes from 192.168.26.52: icmp_seq=3 ttl=63 time=0.482 ms

--- 192.168.26.52 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.377/0.447/0.482/0.049 ms
[root@5b05cd016a5f /]# ping 192.168.26.53 -c 3
PING 192.168.26.53 (192.168.26.53) 56(84) bytes of data.
64 bytes from 192.168.26.53: icmp_seq=1 ttl=63 time=0.308 ms
64 bytes from 192.168.26.53: icmp_seq=2 ttl=63 time=0.540 ms
64 bytes from 192.168.26.53: icmp_seq=3 ttl=63 time=0.533 ms

--- 192.168.26.53 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.308/0.460/0.540/0.109 ms
6.9.3 Host-to-container access across hosts: from the k8s-51 host to the containers on all hosts
k8s-51 ~]# ping 10.26.6.2 -c 3
PING 10.26.6.2 (10.26.6.2) 56(84) bytes of data.
64 bytes from 10.26.6.2: icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from 10.26.6.2: icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from 10.26.6.2: icmp_seq=3 ttl=64 time=0.041 ms

--- 10.26.6.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.036/0.040/0.044/0.006 ms

k8s-51 ~]# ping 10.26.75.2 -c 3
PING 10.26.75.2 (10.26.75.2) 56(84) bytes of data.
64 bytes from 10.26.75.2: icmp_seq=1 ttl=63 time=0.429 ms
64 bytes from 10.26.75.2: icmp_seq=2 ttl=63 time=0.535 ms
64 bytes from 10.26.75.2: icmp_seq=3 ttl=63 time=0.558 ms

--- 10.26.75.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.429/0.507/0.558/0.059 ms

k8s-51 ~]# ping 10.26.25.2 -c 3
PING 10.26.25.2 (10.26.25.2) 56(84) bytes of data.
64 bytes from 10.26.25.2: icmp_seq=1 ttl=63 time=0.463 ms
64 bytes from 10.26.25.2: icmp_seq=2 ttl=63 time=0.477 ms
64 bytes from 10.26.25.2: icmp_seq=3 ttl=63 time=0.499 ms

--- 10.26.25.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.463/0.479/0.499/0.029 ms
6.9.4 Access Baidu from a container (test only when external network access is available)
[root@5b05cd016a5f /]# ping www.baidu.com -c 3
PING www.baidu.com (14.119.104.254) 56(84) bytes of data.
64 bytes from www.baidu.com (14.119.104.254): icmp_seq=1 ttl=127 time=10.5 ms
64 bytes from www.baidu.com (14.119.104.254): icmp_seq=2 ttl=127 time=9.96 ms
64 bytes from www.baidu.com (14.119.104.254): icmp_seq=3 ttl=127 time=15.2 ms

--- www.baidu.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 9.964/11.893/15.195/2.348 ms
6.9.5 Tests on k8s-52 and k8s-53

Same steps as above.

7. Kubernetes Core Components

7.1 Preparation

 ## Extract the package and place the binaries under /opt/bin
app]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /opt/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
]# ls -l /opt/bin/kube*
-rwxr-xr-x 1 root root 121683968 Oct 18 20:02 /opt/bin/kube-apiserver
-rwxr-xr-x 1 root root 117706752 Oct 18 20:02 /opt/bin/kube-controller-manager
-rwxr-xr-x 1 root root  49872896 Oct 18 20:02 /opt/bin/kubectl
-rwxr-xr-x 1 root root 110780416 Oct 18 20:02 /opt/bin/kubelet
-rwxr-xr-x 1 root root  55050240 Oct 18 20:02 /opt/bin/kube-proxy
-rwxr-xr-x 1 root root  56016896 Oct 18 20:02 /opt/bin/kube-scheduler
]# ln -s /opt/bin/kubectl /usr/local/bin/kubectl
]# /opt/bin/kubelet --version
Kubernetes v1.28.3
 
 ## Copy the binaries to the other nodes and create the symlinks
]# scp /opt/bin/kube* root@k8s-51:/opt/bin/
]# scp /opt/bin/kube* root@k8s-52:/opt/bin/
]# ln -s /opt/bin/kubectl /usr/local/bin/kubectl

7.2 Nginx HA scheme

With the nginx scheme, the components' kubeconfig files point at the local proxy: --server=https://127.0.0.1:8443

  • Compile and install nginx
## Build on one host that has a development toolchain, then copy the result to the cluster nodes
app]# tar xvf nginx-1.24.0.tar.gz
app]# cd nginx-1.24.0
nginx-1.24.0]# ./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
nginx-1.24.0]# make && make install
nginx-1.24.0]# ls -l /usr/local/nginx
drwxr-xr-x 2 root root 333 Jul  3 10:55 conf
drwxr-xr-x 2 root root  40 Jul  3 10:55 html
drwxr-xr-x 2 root root   6 Jul  3 10:55 logs
drwxr-xr-x 2 root root  19 Jul  3 10:55 sbin
  • nginx configuration file /usr/local/nginx/conf/kube-nginx.conf
 # Write the nginx configuration
cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections  1024;
}
stream {
    upstream backend {
        hash \$remote_addr consistent;
        server 192.168.26.51:6443        max_fails=3 fail_timeout=30s;
        server 192.168.26.52:6443        max_fails=3 fail_timeout=30s;
        server 192.168.26.53:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
  • Copy to the cluster nodes
## Copy to the cluster nodes
nginx-1.24.0]# scp -r /usr/local/nginx root@k8s-51:/usr/local/
nginx-1.24.0]# scp -r /usr/local/nginx root@k8s-52:/usr/local/
nginx-1.24.0]# scp -r /usr/local/nginx root@k8s-53:/usr/local/
  • Unit file /etc/systemd/system/kube-nginx.service
# Write the systemd unit
cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF
  • Enable at boot and check that it started
 # Enable at boot
 ]# systemctl enable --now  kube-nginx
 ]# systemctl restart kube-nginx
 ]# systemctl status kube-nginx
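kube-apiserver is not running yet, so the meaningful check at this point is that nginx is listening on the local proxy port:
 ]# ss -lntp | grep 8443      ## expect nginx listening on 127.0.0.1:8443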

7.3 Deploy kube-apiserver

  • k8s-51: /usr/lib/systemd/system/kube-apiserver.service
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.26.51 \\
      --service-cluster-ip-range=10.168.0.0/16  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.26.51:2379,https://192.168.26.52:2379,https://192.168.26.53:2379 \\
      --etcd-cafile=/opt/cert/etcd-ca.pem  \\
      --etcd-certfile=/opt/cert/etcd.pem  \\
      --etcd-keyfile=/opt/cert/etcd-key.pem  \\
      --client-ca-file=/opt/cert/ca.pem  \\
      --tls-cert-file=/opt/cert/apiserver.pem  \\
      --tls-private-key-file=/opt/cert/apiserver-key.pem  \\
      --kubelet-client-certificate=/opt/cert/apiserver.pem  \\
      --kubelet-client-key=/opt/cert/apiserver-key.pem  \\
      --service-account-key-file=/opt/cert/sa.pub  \\
      --service-account-signing-key-file=/opt/cert/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/opt/cert/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/opt/cert/front-proxy-client.pem  \\
      --proxy-client-key-file=/opt/cert/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/opt/cert/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF
  • k8s-52:/usr/lib/systemd/system/kube-apiserver.service
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.26.52 \\
      --service-cluster-ip-range=10.168.0.0/16  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.26.51:2379,https://192.168.26.52:2379,https://192.168.26.53:2379 \\
      --etcd-cafile=/opt/cert/etcd-ca.pem  \\
      --etcd-certfile=/opt/cert/etcd.pem  \\
      --etcd-keyfile=/opt/cert/etcd-key.pem  \\
      --client-ca-file=/opt/cert/ca.pem  \\
      --tls-cert-file=/opt/cert/apiserver.pem  \\
      --tls-private-key-file=/opt/cert/apiserver-key.pem  \\
      --kubelet-client-certificate=/opt/cert/apiserver.pem  \\
      --kubelet-client-key=/opt/cert/apiserver-key.pem  \\
      --service-account-key-file=/opt/cert/sa.pub  \\
      --service-account-signing-key-file=/opt/cert/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/opt/cert/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/opt/cert/front-proxy-client.pem  \\
      --proxy-client-key-file=/opt/cert/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/opt/cert/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF
  • k8s-53:/usr/lib/systemd/system/kube-apiserver.service
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.26.53 \\
      --service-cluster-ip-range=10.168.0.0/16  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.26.51:2379,https://192.168.26.52:2379,https://192.168.26.53:2379 \\
      --etcd-cafile=/opt/cert/etcd-ca.pem  \\
      --etcd-certfile=/opt/cert/etcd.pem  \\
      --etcd-keyfile=/opt/cert/etcd-key.pem  \\
      --client-ca-file=/opt/cert/ca.pem  \\
      --tls-cert-file=/opt/cert/apiserver.pem  \\
      --tls-private-key-file=/opt/cert/apiserver-key.pem  \\
      --kubelet-client-certificate=/opt/cert/apiserver.pem  \\
      --kubelet-client-key=/opt/cert/apiserver-key.pem  \\
      --service-account-key-file=/opt/cert/sa.pub  \\
      --service-account-signing-key-file=/opt/cert/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/opt/cert/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/opt/cert/front-proxy-client.pem  \\
      --proxy-client-key-file=/opt/cert/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/opt/cert/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

EOF
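
The three unit files differ only in --advertise-address. If you prefer, generate the files for k8s-52 and k8s-53 from the k8s-51 copy instead of writing them by hand (a sketch, run on k8s-51 after the file above is in place):

]# for n in 52 53; do \
  sed "s/--advertise-address=192.168.26.51/--advertise-address=192.168.26.${n}/" \
    /usr/lib/systemd/system/kube-apiserver.service > /tmp/kube-apiserver.service.${n}; \
  scp /tmp/kube-apiserver.service.${n} root@k8s-${n}:/usr/lib/systemd/system/kube-apiserver.service; \
done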
  • Start the apiserver (all master nodes)
 ]# systemctl daemon-reload && systemctl enable --now kube-apiserver
 
 ## Check that the service started correctly
 ]# systemctl status kube-apiserver
 
]# curl -k --cacert /opt/cert/ca.pem \
--cert /opt/cert/apiserver.pem \
--key /opt/cert/apiserver-key.pem \
https://192.168.26.51:6443/healthz
ok

 ]# ip_head='192.168.26';for i in 51 52 53;do \
curl -k --cacert /opt/cert/ca.pem \
--cert /opt/cert/apiserver.pem \
--key /opt/cert/apiserver-key.pem \
https://${ip_head}.${i}:6443/healthz; \
done
okokok
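
The same health check can also go through the local nginx balancer, which is the endpoint the kubeconfigs below will use; it should likewise return ok:

]# curl -k --cacert /opt/cert/ca.pem \
--cert /opt/cert/apiserver.pem \
--key /opt/cert/apiserver-key.pem \
https://127.0.0.1:8443/healthz
ok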

7.4 kubectl Configuration

  • Create admin.kubeconfig. With the nginx scheme, --server=https://127.0.0.1:8443. Run once on a single node.
kubectl config set-cluster kubernetes     \
  --certificate-authority=/opt/cert/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/opt/cert/admin.kubeconfig

kubectl config set-credentials kubernetes-admin  \
  --client-certificate=/opt/cert/admin.pem     \
  --client-key=/opt/cert/admin-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/opt/cert/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes    \
  --cluster=kubernetes     \
  --user=kubernetes-admin     \
  --kubeconfig=/opt/cert/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/opt/cert/admin.kubeconfig

]# mkdir ~/.kube
]# cp /opt/cert/admin.kubeconfig ~/.kube/config
]# scp -r ~/.kube root@k8s-52:~/.
]# scp -r ~/.kube root@k8s-51:~/.
  • Configure kubectl command completion
~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc
 
~]# yum -y install bash-completion
~]# source /usr/share/bash-completion/bash_completion
~]# source <(kubectl completion bash)
 
~]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
controller-manager   Unhealthy   Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
etcd-0               Healthy
etcd-2               Healthy
etcd-1               Healthy
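
scheduler and controller-manager report Unhealthy because they have not been deployed yet (sections 7.5 and 7.6); at this stage that is expected.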

7.5 Deploy kube-controller-manager

Configure on all master nodes; the configuration is identical on each. 10.26.0.0/16 is the pod network segment; set your own range as needed.

  • Create the unit file: /usr/lib/systemd/system/kube-controller-manager.service
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/bin/kube-controller-manager \\
      --v=2 \\
      --bind-address=0.0.0.0 \\
      --root-ca-file=/opt/cert/ca.pem \\
      --cluster-signing-cert-file=/opt/cert/ca.pem \\
      --cluster-signing-key-file=/opt/cert/ca-key.pem \\
      --service-account-private-key-file=/opt/cert/sa.key \\
      --kubeconfig=/opt/cert/controller-manager.kubeconfig \\
      --leader-elect=true \\
      --use-service-account-credentials=true \\
      --node-monitor-grace-period=40s \\
      --node-monitor-period=5s \\
      --controllers=*,bootstrapsigner,tokencleaner \\
      --allocate-node-cidrs=true \\
      --service-cluster-ip-range=10.168.0.0/16 \\
      --cluster-cidr=10.26.0.0/16 \\
      --node-cidr-mask-size-ipv4=24 \\
      --requestheader-client-ca-file=/opt/cert/front-proxy-ca.pem 

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF
  • Create controller-manager.kubeconfig. With the nginx scheme, --server=https://127.0.0.1:8443. Run once on a single node.
kubectl config set-cluster kubernetes \
     --certificate-authority=/opt/cert/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/opt/cert/controller-manager.kubeconfig
## Set a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/opt/cert/controller-manager.kubeconfig
## Set a credentials entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/opt/cert/controller-manager.pem \
     --client-key=/opt/cert/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/opt/cert/controller-manager.kubeconfig
## Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/opt/cert/controller-manager.kubeconfig

## Copy /opt/cert/controller-manager.kubeconfig to the other nodes
]# scp /opt/cert/controller-manager.kubeconfig root@k8s-51:/opt/cert/controller-manager.kubeconfig
]# scp /opt/cert/controller-manager.kubeconfig root@k8s-52:/opt/cert/controller-manager.kubeconfig
  • Start kube-controller-manager and check its status
]# systemctl daemon-reload
]# systemctl enable --now kube-controller-manager
]# systemctl  status kube-controller-manager

]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
controller-manager   Healthy     ok
etcd-0               Healthy
etcd-2               Healthy
etcd-1               Healthy

7.6 Deploy kube-scheduler

  • Create the unit file: /usr/lib/systemd/system/kube-scheduler.service
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/bin/kube-scheduler \\
      --v=2 \\
      --bind-address=0.0.0.0 \\
      --leader-elect=true \\
      --kubeconfig=/opt/cert/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF
  • Create scheduler.kubeconfig. With the nginx scheme, --server=https://127.0.0.1:8443. Run once on a single node.
kubectl config set-cluster kubernetes \
     --certificate-authority=/opt/cert/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/opt/cert/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/opt/cert/scheduler.pem \
     --client-key=/opt/cert/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/opt/cert/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/opt/cert/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/opt/cert/scheduler.kubeconfig

## Copy /opt/cert/scheduler.kubeconfig to the other nodes
]# scp /opt/cert/scheduler.kubeconfig  root@k8s-51:/opt/cert/scheduler.kubeconfig
]# scp /opt/cert/scheduler.kubeconfig  root@k8s-52:/opt/cert/scheduler.kubeconfig
  • Start and check the service status
]# systemctl daemon-reload
]# systemctl enable --now kube-scheduler
]# systemctl status kube-scheduler
]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok

7.7 Configure bootstrapping

  • Create bootstrap-kubelet.kubeconfig. With the nginx scheme, --server=https://127.0.0.1:8443. Run once on a single node.
kubectl config set-cluster kubernetes     \
--certificate-authority=/opt/cert/ca.pem     \
--embed-certs=true     --server=https://127.0.0.1:8443     \
--kubeconfig=/opt/cert/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user     \
--token=bc5692.ebcfbe81d917383c \
--kubeconfig=/opt/cert/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/opt/cert/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/opt/cert/bootstrap-kubelet.kubeconfig

## Copy /opt/cert/bootstrap-kubelet.kubeconfig to the other nodes
]# scp /opt/cert/bootstrap-kubelet.kubeconfig root@k8s-51:/opt/cert/bootstrap-kubelet.kubeconfig
]# scp /opt/cert/bootstrap-kubelet.kubeconfig root@k8s-52:/opt/cert/bootstrap-kubelet.kubeconfig

The token comes from bootstrap.secret.yaml (see Appendix: yaml files, 01 bootstrap.secret.yaml); if you change it, change it in that file.

## Generate a token (you can also define your own)
~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
bc5692ebcfbe81d917383c89e60d4388
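
The token-id is the first 6 characters and the token-secret the next 16; they can be split off directly (a sketch using bash substring expansion):

~]# TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
~]# echo "token-id: ${TOKEN:0:6}  token-secret: ${TOKEN:6:16}"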
  • bootstrap.secret.yaml
## Modify:
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-bc5692 ## change to the first 6 characters of the token
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: bc5692 ## change to the first 6 characters of the token
  token-secret: ebcfbe81d917383c ## change to characters 7-22 of the token (16 characters)
  ...
yaml]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-bc5692 created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created 
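
The secret can be inspected afterwards to confirm the token fields:

yaml]# kubectl -n kube-system get secret bootstrap-token-bc5692 -o yaml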

7.8 Deploy kubelet

  • Create the data directory
~]# mkdir /data/kubernetes/data/kubelet -p
  • Unit file /usr/lib/systemd/system/kubelet.service. Docker is used as the runtime by default.
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=cri-dockerd.service
Requires=cri-dockerd.service

[Service]
WorkingDirectory=/data/kubernetes/data/kubelet
ExecStart=/opt/bin/kubelet \\
  --bootstrap-kubeconfig=/opt/cert/bootstrap-kubelet.kubeconfig \\
  --cert-dir=/opt/cert \\
  --kubeconfig=/opt/cert/kubelet.kubeconfig \\
  --config=/opt/cfg/kubelet.json \\
  --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock \\
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 \\
  --root-dir=/data/kubernetes/data/kubelet \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

/opt/cert/kubelet.kubeconfig is generated automatically; delete it first if it already exists.

  • Create the kubelet configuration file /opt/cfg/kubelet.json on all k8s nodes
cat > /opt/cfg/kubelet.json << EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/opt/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.26.51",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",                    
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.168.0.2"]
}
EOF

Note: set "address" per node: "192.168.26.51" on k8s-51, "192.168.26.52" on k8s-52, and "192.168.26.53" on k8s-53 (for example with the sed sketch below).
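
A sketch for the other nodes (only the address changes):

k8s-52 ~]# sed -i 's/"address": "192.168.26.51"/"address": "192.168.26.52"/' /opt/cfg/kubelet.json
k8s-53 ~]# sed -i 's/"address": "192.168.26.51"/"address": "192.168.26.53"/' /opt/cfg/kubelet.json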

  • Start
~]# systemctl daemon-reload
~]# systemctl enable --now kubelet
~]# systemctl status kubelet
~]# kubectl get nodes -o wide
NAME     STATUS     ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-51   NotReady   <none>   4m2s    v1.28.3   192.168.26.51   <none>        CentOS Linux 7 (Core)   3.10.0-1160.90.1.el7.x86_64   docker://24.0.6
k8s-52   NotReady   <none>   3m29s   v1.28.3   192.168.26.52   <none>        CentOS Linux 7 (Core)   3.10.0-1160.90.1.el7.x86_64   docker://24.0.6
k8s-53   NotReady   <none>   5m50s   v1.28.3   192.168.26.53   <none>        CentOS Linux 7 (Core)   3.10.0-1160.90.1.el7.x86_64   docker://24.0.6
## View the kubelet certificate signing requests
~]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
node-csr-CfHXXoZlsfKqXMmSk3SiSSdte1lvZBCnHrHAH9rP3YU   4m11s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bc5692   <none>              Approved,Issued
node-csr-OFq3aEDV9qALiEcPTAdd-KhCphY3oNuNo3FY5Bmwf6Y   6m32s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bc5692   <none>              Approved,Issued
node-csr-PWhp9MLFoGuZBW1PQ8s-_QP0V--W46FtuURrgviMtRU   4m44s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bc5692   <none> 
## If a request is Pending, approve it
~]# kubectl certificate approve node-csr-......

If the nodes remain NotReady, the flannel CNI plugin still needs to be installed (see the core add-ons deployment below).

~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
k8s-51   NotReady   <none>   5m33s   v1.28.3
k8s-52   NotReady   <none>   5m      v1.28.3
k8s-53   NotReady   <none>   7m21s   v1.28.3

7.9 Deploy kube-proxy

  • Create kube-proxy.kubeconfig. With the nginx scheme, --server=https://127.0.0.1:8443. Run once on a single node.
]# kubectl config set-cluster kubernetes     \
  --certificate-authority=/opt/cert/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/opt/cert/kube-proxy.kubeconfig

]# kubectl config set-credentials kube-proxy  \
  --client-certificate=/opt/cert/kube-proxy.pem     \
  --client-key=/opt/cert/kube-proxy-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/opt/cert/kube-proxy.kubeconfig

]# kubectl config set-context kube-proxy@kubernetes    \
  --cluster=kubernetes     \
  --user=kube-proxy     \
  --kubeconfig=/opt/cert/kube-proxy.kubeconfig

]# kubectl config use-context kube-proxy@kubernetes  --kubeconfig=/opt/cert/kube-proxy.kubeconfig

## Copy /opt/cert/kube-proxy.kubeconfig to each node
]# scp  /opt/cert/kube-proxy.kubeconfig root@k8s-51:/opt/cert/kube-proxy.kubeconfig
]# scp  /opt/cert/kube-proxy.kubeconfig root@k8s-52:/opt/cert/kube-proxy.kubeconfig
  • Add the kube-proxy unit file on all k8s nodes
cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy \\
  --config=/opt/cfg/kube-proxy.yaml \\
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF
  • Add the kube-proxy configuration on all k8s nodes
cat > /opt/cfg/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/cert/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.26.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms

EOF
  • Start
]# systemctl daemon-reload
]# systemctl enable --now kube-proxy
]# systemctl status kube-proxy
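
To confirm kube-proxy really runs in ipvs mode, query its metrics port and list the virtual servers (assuming the ipvsadm tool is installed):

]# curl 127.0.0.1:10249/proxyMode
ipvs
]# ipvsadm -Ln | head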

8. k8s Core Add-ons

8.1 Deploy the CNI Network Plugin

  • Official links

https://github.com/containernetworking/plugins

Download cni-plugins-linux-amd64-v1.2.0.tgz: https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz

https://github.com/flannel-io/cni-plugin

Download cni-plugin-flannel-linux-amd64-v1.2.0.tgz: https://github.com/flannel-io/cni-plugin/releases/download/v1.2.0/cni-plugin-flannel-linux-amd64-v1.2.0.tgz

  • Download, unpack, distribute
]# mkdir -p /opt/cni/bin
]# cd /opt/cni/bin
]# curl -O -L https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
]# curl -O -L https://github.com/flannel-io/cni-plugin/releases/download/v1.2.0/cni-plugin-flannel-linux-amd64-v1.2.0.tgz
]# tar -C /opt/cni/bin -xzf cni-plugins-linux-amd64-v1.2.0.tgz
]# tar -C /opt/cni/bin -xzf cni-plugin-flannel-linux-amd64-v1.2.0.tgz
]# mv flannel-amd64 flannel
]# ls -l /opt/cni/bin
total 71296
-rwxr-xr-x 1 root root  3859475 Jan 17  2023 bandwidth
-rwxr-xr-x 1 root root  4299004 Jan 17  2023 bridge
-rwxr-xr-x 1 root root 10167415 Jan 17  2023 dhcp
-rwxr-xr-x 1 root root  3986082 Jan 17  2023 dummy
-rwxr-xr-x 1 root root  4385098 Jan 17  2023 firewall
-rwxr-xr-x 1 root root  2414517 Jul 21 23:04 flannel
-rwxr-xr-x 1 root root  3870731 Jan 17  2023 host-device
-rwxr-xr-x 1 root root  3287319 Jan 17  2023 host-local
-rwxr-xr-x 1 root root  3999593 Jan 17  2023 ipvlan
-rwxr-xr-x 1 root root  3353028 Jan 17  2023 loopback
-rwxr-xr-x 1 root root  4029261 Jan 17  2023 macvlan
-rwxr-xr-x 1 root root  3746163 Jan 17  2023 portmap
-rwxr-xr-x 1 root root  4161070 Jan 17  2023 ptp
-rwxr-xr-x 1 root root  3550152 Jan 17  2023 sbr
-rwxr-xr-x 1 root root  2845685 Jan 17  2023 static
-rwxr-xr-x 1 root root  3437180 Jan 17  2023 tuning
-rwxr-xr-x 1 root root  3993252 Jan 17  2023 vlan

]# scp -r /opt/cni/bin root@k8s-51:/opt/cni/.
]# scp -r /opt/cni/bin root@k8s-52:/opt/cni/.
  • Create /etc/cni/net.d/10-flannel.conflist
~]# mkdir /etc/cni/net.d -p
~]# vi /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
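
The same file must exist on every node; create it once and copy it out, e.g. from k8s-51 (a sketch, assuming the directory does not yet exist on the other nodes):

~]# for n in 52 53; do ssh root@k8s-${n} "mkdir -p /etc/cni/net.d"; \
  scp /etc/cni/net.d/10-flannel.conflist root@k8s-${n}:/etc/cni/net.d/; done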
  • Check the node status
~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
k8s-51   Ready    <none>   4h33m   v1.28.3
k8s-52   Ready    <none>   4h32m   v1.28.3
k8s-53   Ready    <none>   4h34m   v1.28.3

8.2 Deploy the Service Discovery Add-on CoreDNS

  • Download: coredns.yaml.base (see Appendix: 02 coredns.yaml.base)
  • Modify (the sed sketch after this list applies all of these in one pass):
 __DNS__DOMAIN__  change to:  cluster.local
 __DNS__MEMORY__LIMIT__ change to: 150Mi
 __DNS__SERVER__ change to: 10.168.0.2
 image: change to registry.aliyuncs.com/google_containers/coredns:v1.10.1
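
A sketch, assuming coredns.yaml.base from the appendix is in the current directory:

cfg]# sed -e 's/__DNS__DOMAIN__/cluster.local/g' \
  -e 's/__DNS__MEMORY__LIMIT__/150Mi/g' \
  -e 's/__DNS__SERVER__/10.168.0.2/g' \
  -e 's#registry.k8s.io/coredns/coredns#registry.aliyuncs.com/google_containers/coredns#' \
  coredns.yaml.base > coredns.yaml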
  • Apply:
cfg]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

cfg]# kubectl get pod -A -o wide
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-877fbbc8f-6jp22   0/1     Running   0          21s   10.26.25.10   k8s-53   <none>           <none>
  • Handling an abnormal READY state
~]# kubectl -n kube-system describe pod coredns-6dffdfd7fb-6jp22
## The following events appear
...
... Warning  Unhealthy  24m (x199 over 79m)    kubelet  Readiness probe failed: Get "http://10.26.25.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
... Warning  Unhealthy  14m (x13 over 77m)     kubelet  Readiness probe failed: Get "http://10.26.25.8:8181/ready": dial tcp 10.26.25.8:8181: connect: no route to host
... Warning  BackOff    9m47s (x179 over 70m)  kubelet  Back-off restarting failed container coredns in pod coredns-6dffdfd7fb-6jp22_kube-system(0b37a912-2233-4d96-9d3c-ec67a9d1b004)
... Warning  Unhealthy  4m54s (x97 over 78m)   kubelet  Liveness probe failed: Get "http://10.26.25.8:8080/health": dial tcp 10.26.25.8:8080: connect: no route to host
...
## On k8s-53, ping 10.26.25.10: the host cannot reach a pod running on itself
## Check the routing table: there are two routes for 10.26.25.0
k8s-53 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.26.2    0.0.0.0         UG    100    0        0 ens32
10.26.6.0       10.26.6.0       255.255.255.0   UG    0      0        0 flannel.1
10.26.25.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
10.26.25.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.26.75.0      10.26.75.0      255.255.255.0   UG    0      0        0 flannel.1
192.168.26.0    0.0.0.0         255.255.255.0   U     100    0        0 ens32
## Delete the duplicate route
k8s-53 ~]# route del -net 10.26.25.0 netmask 255.255.255.0 gw 0.0.0.0
## Wait a moment and check the pod status again: normal
]# kubectl get pod -A -owide
NAMESPACE     NAME                              READY   STATUS    RESTARTS         AGE   IP            NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6dffdfd7fb-6jp22          1/1     Running   130 (9m9s ago)   21h   10.26.25.10   k8s-53   <none>           <none>

8.3 Deploy the Resource Monitoring Add-on Metrics-server

In Kubernetes, system resource collection is done by Metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

  • Download: https://github.com/kubernetes-sigs/metrics-server/
https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.4/components.yaml
https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml


  • Version compatibility: https://github.com/kubernetes-sigs/metrics-server/#readme
  • Modify the configuration:
yaml]# vi components.yaml
# 1
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/opt/cert/front-proxy-ca.pem
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
# 2
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /opt/cert
# 3
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /opt/cert
# 4 Change the image
image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.4
  • Apply
cfg]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

cfg]# kubectl get pod -n kube-system | grep metrics
metrics-server-7747f5f88b-hxfzn   0/1     Running            0             27s
## If READY does not become 1/1, refer to the earlier CoreDNS handling

]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-51   92m          4%     706Mi           19%
k8s-52   83m          4%     682Mi           18%
k8s-53   105m         5%     702Mi           19%
]# kubectl top pod -n kube-system
NAME                              CPU(cores)   MEMORY(bytes)
coredns-6dffdfd7fb-6jp22          2m           16Mi
metrics-server-7747f5f88b-hxfzn   3m           24Mi

8.4 Deploy the Dashboard

  • Download: https://github.com/kubernetes/dashboard

https://github.com/kubernetes/dashboard/blob/v2.7.0/aio/deploy/recommended.yaml

  • Modify: ]# vi recommended.yaml
...
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: IfNotPresent # modified
...
          image: kubernetesui/metrics-scraper:v1.0.8
          imagePullPolicy: IfNotPresent # modified
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32310   # added
  selector:
    k8s-app: kubernetes-dashboard
...
  • Apply
]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-5657497c4c-5x67m   1/1     Running   0          27s
pod/kubernetes-dashboard-5b749d9495-qrhpk        1/1     Running   0          27s

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.168.39.58   <none>        8000/TCP        27s
service/kubernetes-dashboard        NodePort    10.168.91.23   <none>        443:32310/TCP   27s
  • Create the admin-user: ]# vi dashboard-adminuser.yaml

    https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard 
]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
## Create a login token
]# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6Im1uRzB2bllQbEkxd2pocnZKS2JUSXlFSnQ3SXMwTjRncW9XOGF1cFlJOVkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjk4MDMwODExLCJpYXQiOjE2OTgwMjcyMTEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiNjI2MmNmMGQtYzMyMi00NzcxLTk3ZDctYWY2YjUwYTAzYmUzIn19LCJuYmYiOjE2OTgwMjcyMTEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.o3GG09-WyCE1bRFm_6z81DR7KVBkZkKuBtAy7H1GslMCFemSrimlTH_F2DEiSYZoT8ud1W95q0Xi0YYYl0GIC6xd3Jmo5XkKYqvWbQiRl90PrSXtQNG4wJIc2nwNXvlS5zKqjnKZNhpmw1_QhkUQJjvPZyKo9Zah5D5JU_cC6bJUpQhec5vkk6AtaxvkoaD8MB2L6RBoBv5l5bNmRkVPa52BbHpbUoQmY2hAaMIMdLEzbDv7w6Bi__HckVsIsNppFY1nn2hIaUPlrJz2Ns-XnUUywcWRmrTJ0oewZl2nnsuPOQny_JJeSQFYz8vu5kCP6SqVjGjnJb69Ukh9mgr5yg
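
The token above is short-lived; kubectl can issue one with a longer validity if needed:

]# kubectl -n kubernetes-dashboard create token admin-user --duration=24h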
  • Access via browser: https://192.168.26.51:32310

9. Cluster Verification and Summary

9.1 Pod deployment verification

]# vi busybox.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    imagePullPolicy: IfNotPresent
    command:
      - sleep
      - "3600"
  restartPolicy: Always
]# kubectl apply -f busybox.yaml
pod/busybox created
]# kubectl  get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          59s

9.2 Service deployment verification

cat >  nginx.yaml  << "EOF"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
EOF
]# kubectl apply -f nginx.yaml
]# kubectl get pods -o wide
NAME              READY   STATUS              RESTARTS       AGE   IP          NODE     NOMINATED NODE   READINESS GATES
busybox           1/1     Running             16 (52m ago)   34h   10.26.6.6   k8s-51   <none>           <none>
nginx-web-68gpj   0/1     ContainerCreating   0              95s   <none>      k8s-53   <none>           <none>
nginx-web-zjk4b   0/1     ContainerCreating   0              95s   <none>      k8s-52   <none>           <none>
## The images must be pulled first; wait a moment
]# kubectl get all
...
]# kubectl get service
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes               ClusterIP   10.168.0.1       <none>        443/TCP        47h
nginx-service-nodeport   NodePort    10.168.238.209   <none>        80:30001/TCP   3m4s

Access via browser: http://192.168.26.53:30001/
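
Or check from the command line; the service should answer with the nginx welcome page:

]# curl -sI http://192.168.26.53:30001/ | head -n1
HTTP/1.1 200 OK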

9.3 Create 3 replicas on different nodes

cat > nginx-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

EOF
]# kubectl  apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
]# kubectl get pod -owide |grep nginx-deployment
nginx-deployment-64888c994d-4qqb6   1/1     Running   0              3m6s   10.26.25.13   k8s-53   <none>           <none>
nginx-deployment-64888c994d-h76v9   1/1     Running   0              3m6s   10.26.6.8     k8s-51   <none>           <none>
nginx-deployment-64888c994d-n85lg   1/1     Running   0              3m6s   10.26.75.11   k8s-52   <none>           <none>
]# kubectl delete -f nginx-deployment.yaml
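
Note that nothing in this Deployment forces the replicas onto different nodes; the scheduler simply spread them. To guarantee one replica per node, a podAntiAffinity clause could be added to the pod template (a sketch):

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx
            topologyKey: kubernetes.io/hostname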

9.4 Resolve the kubernetes service in the default namespace from a pod

]# kubectl get svc
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes               ClusterIP   10.168.0.1       <none>        443/TCP        47h
nginx-service-nodeport   NodePort    10.168.238.209   <none>        80:30001/TCP   8m54s

]# kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.168.0.2
Address 1: 10.168.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.168.0.1 kubernetes.default.svc.cluster.local

9.5 Test cross-namespace resolution

]# kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
Server:    10.168.0.2
Address 1: 10.168.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.168.0.2 kube-dns.kube-system.svc.cluster.local

9.6 Every node must be able to reach the kubernetes svc on 443 and the kube-dns service on 53

]# kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.168.0.1       <none>        443/TCP                  47h
default                nginx-service-nodeport      NodePort    10.168.238.209   <none>        80:30001/TCP             10m
kube-system            kube-dns                    ClusterIP   10.168.0.2       <none>        53/UDP,53/TCP,9153/TCP   22h
kube-system            metrics-server              ClusterIP   10.168.241.18    <none>        443/TCP                  82m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.168.39.58     <none>        8000/TCP                 45m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.168.91.23     <none>        443:32310/TCP            45m

~]# telnet 10.168.0.1 443
Trying 10.168.0.1...
Connected to 10.168.0.1.
Escape character is '^]'.

~]# telnet 10.168.0.2 53
Trying 10.168.0.2...
Connected to 10.168.0.2.
Escape character is '^]'.

~]# curl 10.168.0.1:443
Client sent an HTTP request to an HTTPS server.

~]# curl 10.168.0.2:53
curl: (52) Empty reply from server

9.7 Pods can reach other hosts and pods

~]# kubectl get po -owide -A
default                busybox                                      1/1     Running   17 (4m7s ago)   34h     10.26.6.6     k8s-51   <none>           <none>
default                nginx-deployment-64888c994d-4qqb6            1/1     Running   0               7m54s   10.26.25.13   k8s-53   <none>           <none>
default                nginx-deployment-64888c994d-h76v9            1/1     Running   0               7m54s   10.26.6.8     k8s-51   <none>           <none>
default                nginx-deployment-64888c994d-n85lg            1/1     Running   0               7m54s   10.26.75.11   k8s-52   <none>           <none>
default                nginx-web-68gpj                              1/1     Running   0               12m     10.26.25.12   k8s-53   <none>           <none>
default                nginx-web-zjk4b                              1/1     Running   0               12m     10.26.75.10   k8s-52   <none>           <none>
kube-system            coredns-6dffdfd7fb-6jp22                     1/1     Running   130 (92m ago)   22h     10.26.25.10   k8s-53   <none>           <none>
kube-system            metrics-server-7747f5f88b-hxfzn              1/1     Running   2 (83m ago)     84m     10.26.75.9    k8s-52   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-5x67m   1/1     Running   0               47m     10.26.25.11   k8s-53   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-5b749d9495-qrhpk        1/1     Running   0               47m     10.26.6.7     k8s-51   <none>           <none>

Exec into busybox and ping pods on other nodes. Successful connectivity proves the pod can communicate across namespaces and across hosts.

]# kubectl exec -ti busybox -- sh
## From a pod on k8s-51, ping host k8s-53 (pod - cross-host)
/ # ping 192.168.26.53 -c 3
PING 192.168.26.53 (192.168.26.53): 56 data bytes
64 bytes from 192.168.26.53: seq=0 ttl=63 time=0.227 ms
64 bytes from 192.168.26.53: seq=1 ttl=63 time=0.434 ms
64 bytes from 192.168.26.53: seq=2 ttl=63 time=0.817 ms
...
## From a pod on k8s-51, ping a pod on k8s-53 (cross-host, cross-namespace)
/ # ping 10.26.25.13 -c 3
PING 10.26.25.13 (10.26.25.13): 56 data bytes
64 bytes from 10.26.25.13: seq=0 ttl=62 time=0.503 ms
64 bytes from 10.26.25.13: seq=1 ttl=62 time=0.526 ms
64 bytes from 10.26.25.13: seq=2 ttl=62 time=0.571 ms
...
## From host k8s-51, ping a pod on k8s-52 (host - cross-host pod)
k8s-51 ~]# ping 10.26.75.11 -c 3
PING 10.26.75.11 (10.26.75.11) 56(84) bytes of data.
64 bytes from 10.26.75.11: icmp_seq=1 ttl=63 time=0.595 ms
64 bytes from 10.26.75.11: icmp_seq=2 ttl=63 time=0.541 ms
64 bytes from 10.26.75.11: icmp_seq=3 ttl=63 time=0.432 ms
...

At this point, the binary deployment of the Kubernetes 1.28.3 + Docker 24.0.6 high-availability cluster is complete.

9.8 Summary

The following issues came up along the way and may take a few hours or more to work through.

  • Flannel v0.22.3 deployed from binaries uses the etcd v3 API, unlike earlier versions that used v2; consult the official documentation for the differences.
  • After deploying the flannel network plugin in vxlan mode, the bip parameter in the docker configuration must be updated to match the generated FLANNEL_SUBNET.
  • FLANNEL_SUBNET is regenerated on restart from the data written to etcd, so editing the values in /run/flannel/subnet.env has no effect. A later deployment using host-gw mode with a manually created /run/flannel/subnet.env avoids this problem and lets docker use the planned IP ranges. (To be verified.)
  • Deploy the CNI plugin so that nodes reach the normal Ready state.
  • The ipvs management tool and kernel modules must be installed, otherwise kube-proxy reports errors.
  • Handling abnormal pod states when deploying CoreDNS and Metrics-server.

Appendix: learn k8s map

Appendix: yaml files

https://www.notion.so/yaml-6d8f19758949478b96d2934fbd554283?pvs=21

01 bootstrap.secret.yaml

cat > bootstrap.secret.yaml << EOF 
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-bc5692
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: bc5692
  token-secret: ebcfbe81d917383c
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

02 coredns.yaml.base

# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes __DNS__DOMAIN__ in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: registry.k8s.io/coredns/coredns:v1.10.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: __DNS__MEMORY__LIMIT__
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: __DNS__SERVER__
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

03 components.yaml (metrics-server)

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.k8s.io/metrics-server/metrics-server:v0.6.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

04 recommended.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30237
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
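
The full Dashboard manifest ends here. Below is a minimal verification sketch, assuming the manifest above is saved as recommended.yaml (the file name is an assumption) and using a node IP from this cluster's plan:

~]# kubectl apply -f recommended.yaml
~]# kubectl -n kubernetes-dashboard get pods,svc -o wide
## The Service above is exposed as NodePort 30237, so once the pods are Running
## the UI should be reachable at https://192.168.26.51:30237 (any node IP works;
## accept the self-signed certificate in the browser).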

05 dashboard-adminuser.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
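
With the admin-user ServiceAccount bound to cluster-admin, a login token can be generated for the Dashboard. A minimal sketch; `kubectl create token` is the v1.24+ mechanism and is available in v1.28.3:

~]# kubectl apply -f dashboard-adminuser.yaml
~]# kubectl -n kubernetes-dashboard create token admin-user
## prints a bearer token; paste it into the Dashboard login page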

06 nginx.yaml

---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
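
ReplicationController is a legacy controller (the Deployment in 07 is the current replacement), but it still works for a quick smoke test. A sketch, assuming the manifest is saved as nginx.yaml:

~]# kubectl apply -f nginx.yaml
~]# kubectl get rc,pods -l name=nginx -o wide
~]# curl -I http://192.168.26.51:30001   ## any node IP works; expect HTTP/1.1 200 OK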

07 nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
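
The usual Deployment checks, as a sketch (the file name nginx-deployment.yaml is an assumption):

~]# kubectl apply -f nginx-deployment.yaml
~]# kubectl rollout status deployment/nginx-deployment
~]# kubectl get pods -l app=nginx -o wide
~]# kubectl scale deployment/nginx-deployment --replicas=5   ## optional scaling test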

08 busybox.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    imagePullPolicy: IfNotPresent
    command:
      - sleep
      - "3600"
  restartPolicy: Always
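
busybox:1.28 is pinned deliberately: it is the classic image for verifying in-cluster DNS (later busybox releases have a known nslookup regression). A minimal check, assuming CoreDNS and the Service from 06 are running:

~]# kubectl apply -f busybox.yaml
~]# kubectl exec -it busybox -- nslookup kubernetes.default
~]# kubectl exec -it busybox -- nslookup nginx-service-nodeport.default.svc.cluster.local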
