Building a highly available Kubernetes cluster with kubeadm

Building a highly available Kubernetes cluster with kubeadm

  • Architecture
  • Deploying the highly available Kubernetes cluster
    • Preparation
      • Common server configuration
      • Configure hostnames
      • Set up passwordless SSH
    • Deploy the etcd cluster
      • Step 1: Generate the configuration files on master01
      • Step 2: Start the etcd service on each server
      • Step 3: Verify the etcd cluster
    • Deploy the load balancer (HAProxy + Keepalived)
      • Step 1: Install HAProxy and Keepalived
      • Step 2: Write the Keepalived and HAProxy configuration files
      • Step 3: Start the HAProxy and Keepalived services
    • Deploy the Kubernetes cluster
      • Step 1: Install the Kubernetes packages on all servers
      • Step 2: Install Docker and cri-dockerd on all servers
      • Step 3: Generate the cluster init configuration file
      • Step 4: Bootstrap the master01 node
      • Step 5: Join the master02/master03 control-plane nodes
      • Step 6: Join the worker nodes
  • References

Architecture

External etcd cluster + load balancer (HAProxy + Keepalived) + Kubernetes cluster

IP               hostname     role     components
192.168.20.1     master01     master   etcd, haproxy, apiserver, controller-manager, scheduler
192.168.20.2     master02     master   etcd, haproxy, apiserver, controller-manager, scheduler
192.168.20.3     master03     master   etcd, haproxy, apiserver, controller-manager, scheduler
192.168.20.4     worker01     worker   app
192.168.20.5     worker02     worker   app
192.168.20.121   virtual_ip   VIP      LB

The installation scripts and steps are available on GitHub.

Topology of a Kubernetes cluster deployed with an external etcd cluster:
[Figure: external etcd cluster + Kubernetes control plane]
Operating system and software versions:

  • CentOS 7
  • Linux 3.10
  • kubernetes v1.24
  • docker-ce 3:20.10.17-3.el7
  • kubelet v1.24.2
  • kubeadm v1.24.2
  • kubectl v1.24.2

Deployment workflow

  1. Configure all machines uniformly (passwordless SSH, disable the firewall, configure package repositories, time synchronization, kernel upgrade, etc.)
  2. Deploy the etcd cluster
  3. Deploy the load balancer (HAProxy + Keepalived)
  4. Deploy the Kubernetes cluster

Deploying the highly available Kubernetes cluster

Preparation

Before starting the deployment, perform the following on every server to satisfy the prerequisites:

  • Disable the firewall
  • Set up passwordless SSH
  • Install iptables
  • Disable SELinux
  • Disable swap
  • Configure yum to use a domestic mirror (if the servers cannot reach the default repositories)
  • Set up time synchronization
  • Upgrade the kernel
  • Enable IPVS support
  • Configure /etc/hosts

Common server configuration

# Run on every machine
echo "step1 disable the firewall"
systemctl disable firewalld
systemctl stop firewalld
echo "success: firewall disabled"

echo "step2 install iptables"
yum -y install iptables-services
# Open port 6443 in iptables: edit /etc/sysconfig/iptables and add the
# following rule before the COMMIT line
# -A INPUT -m state --state NEW -m tcp -p tcp --dport 6443 -j ACCEPT
systemctl start iptables
systemctl enable iptables
iptables -F
service iptables save
iptables -L
echo "success: iptables installed"

echo "step3 关闭selinux"
# 临时禁用selinux
setenforce 0
# 永久关闭 修改/etc/sysconfig/selinux文件设置
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
echo "success 关闭selinux"

echo "step4 禁用交换分区"
swapoff -a
# 永久禁用,打开/etc/fstab注释掉swap那一行。
sed -i 's/.*swap.*/#&/g' /etc/fstab
echo "success 禁用交换分区"

echo "step5 执行配置CentOS阿里云源"
rm -rfv /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
echo "success 执行配置CentOS阿里云源"

echo "step6 时间同步"
yum install -y chrony
systemctl enable chronyd.service
systemctl restart chronyd.service
systemctl status chronyd.service

echo "step7 更新内核"
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# 设置内核
#更新yum源仓库
yum -y update
#查看可用的系统内核包
yum --disablerepo="*" --enablerepo=elrepo-kernel list available
#安装内核,注意先要查看可用内核,我安装的是5.19版本的内核
yum --enablerepo=elrepo-kernel install  kernel-ml -y
# yum --enablerepo=elrepo-kernel install kernel-ml -y 
#查看目前可用内核
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
echo "使用序号为0的内核,序号0是前面查出来的可用内核编号"
grub2-set-default 0
#生成 grub 配置文件并重启
grub2-mkconfig -o /boot/grub2/grub.cfg
echo "success 更新内核"

# Enable IPVS so kube-proxy can run in ipvs mode (otherwise ClusterIPs / Service names may be unreachable inside the cluster)
echo "step8 enable IPVS support"
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

yum install -y ipset ipvsadm
echo "success 配置服务器支持开启ipvs"

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

echo "重启服务器"
reboot

Configure hostnames

Set the hostname on each server:

192.168.20.1: hostnamectl set-hostname master01
192.168.20.2: hostnamectl set-hostname master02
192.168.20.3: hostnamectl set-hostname master03
192.168.20.4: hostnamectl set-hostname worker01
192.168.20.5: hostnamectl set-hostname worker02

Configure the hosts file (/etc/hosts) on all servers with the following entries (a sketch for distributing them to every node follows the list):

192.168.20.1 master01
192.168.20.2 master02
192.168.20.3 master03
192.168.20.4 worker01
192.168.20.5 worker02
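
If you prefer not to edit /etc/hosts by hand on every machine, a minimal sketch for pushing the same entries from master01 is shown below. It assumes root SSH access to the other nodes (password or key) and the IPs from the table above; adjust to your environment.

# Run from master01 after adding the entries to its own /etc/hosts
for ip in 192.168.20.2 192.168.20.3 192.168.20.4 192.168.20.5; do
  ssh root@$ip "cat >> /etc/hosts" <<'EOF'
192.168.20.1 master01
192.168.20.2 master02
192.168.20.3 master03
192.168.20.4 worker01
192.168.20.5 worker02
EOF
done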

Set up passwordless SSH

Generate an SSH key pair on each server:

ssh-keygen

Press Enter through all the prompts, then copy each server's public key to every other server.
For example, on 192.168.20.1:

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.20.2
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.20.3
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.20.4
ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.20.5

The other machines are handled in the same way. Once this is done, the servers can SSH into each other without a password.
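
A quick way to confirm that key-based login works from the current machine (a sketch using the hostnames configured above; each command should print the remote hostname without prompting for a password):

for h in master01 master02 master03 worker01 worker02; do
  ssh -o BatchMode=yes root@$h hostname
done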

After completing these configurations, reboot the servers.

Deploy the etcd cluster

The etcd cluster is deployed on master01, master02 and master03.

Step 1: Generate the configuration files on master01

etcd1=192.168.20.1
etcd2=192.168.20.2
etcd3=192.168.20.3

TOKEN=abcd1234
ETCDHOSTS=($etcd1 $etcd2 $etcd3)
NAMES=("master01" "master02" "master03")
for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/$NAME.conf
# [member]
ETCD_NAME=$NAME
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$HOST:2380"
ETCD_LISTEN_CLIENT_URLS="http://$HOST:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$HOST:2380"
ETCD_INITIAL_CLUSTER="${NAMES[0]}=http://${ETCDHOSTS[0]}:2380,${NAMES[1]}=http://${ETCDHOSTS[1]}:2380,${NAMES[2]}=http://${ETCDHOSTS[2]}:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="$TOKEN"
ETCD_ADVERTISE_CLIENT_URLS="http://$HOST:2379"
EOF
done
ls /tmp/master*
# /etc/etcd is normally created by the etcd package (installed in step 2); make sure it exists before copying
ssh $etcd2 "mkdir -p /etc/etcd" && scp /tmp/master02.conf $etcd2:/etc/etcd/etcd.conf
ssh $etcd3 "mkdir -p /etc/etcd" && scp /tmp/master03.conf $etcd3:/etc/etcd/etcd.conf
mkdir -p /etc/etcd && cp /tmp/master01.conf /etc/etcd/etcd.conf
rm -f /tmp/master*.conf

Step 2: Start the etcd service on each server

echo "starting the etcd service"
yum install -y etcd
systemctl enable etcd --now

Step 3: Verify the etcd cluster

echo "verifying the etcd cluster"
etcdctl member list
etcdctl cluster-health

Output similar to the following indicates success:

member 35f923e23a443e3d is healthy: got healthy result from http://192.168.20.1:2379
member 83061a96c7f09e99 is healthy: got healthy result from http://192.168.20.2:2379
member f4d71112ff618b3a is healthy: got healthy result from http://192.168.20.3:2379
cluster is healthy

The etcd cluster is now up and running.
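
Note that etcdctl cluster-health uses the etcd v2 API. If your etcdctl defaults to the v3 API, a roughly equivalent check (a sketch using the cluster endpoints configured above) would be:

ETCDCTL_API=3 etcdctl \
  --endpoints=http://192.168.20.1:2379,http://192.168.20.2:2379,http://192.168.20.3:2379 \
  endpoint health
ETCDCTL_API=3 etcdctl \
  --endpoints=http://192.168.20.1:2379,http://192.168.20.2:2379,http://192.168.20.3:2379 \
  member list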

Deploy the load balancer (HAProxy + Keepalived)

HAProxy and Keepalived can be deployed either directly on the hosts or as static pods.

Here we deploy them directly on the hosts.

Step 1: Install HAProxy and Keepalived

yum install -y haproxy keepalived

Step 2: Write the Keepalived and HAProxy configuration files

Keepalived configuration file: keepalived.conf
The contents differ slightly between servers.
Reference template: keepalived.conf

! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state ${STATE}
    interface ${INTERFACE}
    virtual_router_id ${ROUTER_ID}
    priority ${PRIORITY}
    authentication {
        auth_type PASS
        auth_pass ${AUTH_PASS}
    }
    virtual_ipaddress {
        ${APISERVER_VIP}
    }
    track_script {
        check_apiserver
    }
}

Parameter notes:

  • ${STATE} is MASTER for one and BACKUP for all other hosts, hence the virtual IP will initially be assigned to the MASTER.
  • ${INTERFACE} is the network interface taking part in the negotiation of the virtual IP, e.g. eth0.
  • ${ROUTER_ID} should be the same for all keepalived cluster hosts while unique amongst all clusters in the same subnet. Many distros pre-configure its value to 51.
  • ${PRIORITY} should be higher on the control plane node than on the backups. Hence 101 and 100 respectively will suffice.
  • ${AUTH_PASS} should be the same for all keepalived cluster hosts, e.g. 42
  • ${APISERVER_VIP} is the virtual IP address negotiated between the keepalived cluster hosts.
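
For example, a plausible substitution for master01 in this guide's topology might look like the snippet below (keeping the global_defs and vrrp_script sections from the template unchanged). The interface name eth0, router ID 51, priorities and password are assumptions; the VIP 192.168.20.121 and port 16443 match the values used later for the controlPlaneEndpoint.

vrrp_instance VI_1 {
    state MASTER            # BACKUP on master02 and master03
    interface eth0          # assumption: replace with your actual NIC name
    virtual_router_id 51
    priority 101            # e.g. 100 on master02, 99 on master03
    authentication {
        auth_type PASS
        auth_pass 42
    }
    virtual_ipaddress {
        192.168.20.121
    }
    track_script {
        check_apiserver
    }
}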

The health-check script /etc/keepalived/check_apiserver.sh:

#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi

Parameter notes:

  • ${APISERVER_VIP} is the virtual IP address negotiated between the keepalived cluster hosts.
  • ${APISERVER_DEST_PORT} the port through which Kubernetes will talk to the API Server.
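
With the values used later in this guide (VIP 192.168.20.121 and load-balancer port 16443; adjust both if your environment differs), the substituted script would read:

#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

# Fail if the local apiserver port does not answer
curl --silent --max-time 2 --insecure https://localhost:16443/ -o /dev/null || errorExit "Error GET https://localhost:16443/"
# If this node currently holds the VIP, also check the VIP itself
if ip addr | grep -q 192.168.20.121; then
    curl --silent --max-time 2 --insecure https://192.168.20.121:16443/ -o /dev/null || errorExit "Error GET https://192.168.20.121:16443/"
fi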

HAProxy configuration file: haproxy.cfg
Reference template: haproxy.cfg

# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:${APISERVER_DEST_PORT}
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server ${HOST1_ID} ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check
        # [...]

Parameter notes:

  1. ${APISERVER_DEST_PORT} the port through which Kubernetes will talk to the API Server.
  2. ${APISERVER_SRC_PORT} the port used by the API Server instances
  3. ${HOST1_ID} a symbolic name for the first load-balanced API Server host
  4. ${HOST1_ADDRESS} a resolvable address (DNS name, IP address) for the first load-balanced API Server host
  5. additional server lines, one for each load-balanced API Server host
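
Substituting the hosts from this guide (frontend port 16443, API servers listening on 6443; 16443 matches the controlPlaneEndpoint used later, the rest mirrors the table above), the frontend and backend sections could look like:

frontend apiserver
    bind *:16443
    mode tcp
    option tcplog
    default_backend apiserver

backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server master01 192.168.20.1:6443 check
        server master02 192.168.20.2:6443 check
        server master03 192.168.20.3:6443 check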

Step 3: Start the HAProxy and Keepalived services

# on master01:
cp keepalived_1.conf /etc/keepalived/keepalived.conf
cp haproxy.cfg /etc/haproxy
# on master02:
cp keepalived_2.conf /etc/keepalived/keepalived.conf
cp haproxy.cfg /etc/haproxy
# on master03:
cp keepalived_3.conf /etc/keepalived/keepalived.conf
cp haproxy.cfg /etc/haproxy

Start the services:

systemctl enable haproxy --now
systemctl enable keepalived --now
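
To confirm the load balancer is working (a sketch using the VIP and port assumed above), check that the VIP is bound on exactly one master and that the port answers. Until the API server is up, the health check fails and connections are refused, which is expected at this stage:

systemctl status haproxy keepalived
ip addr show | grep 192.168.20.121        # the VIP should appear on exactly one master
curl -k https://192.168.20.121:16443/     # "connection refused" is expected before kubeadm init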

Deploy the Kubernetes cluster

Deployment steps:

  1. Install the Kubernetes packages
  2. Install the CRI (Docker + cri-dockerd)
  3. Generate the cluster initialization configuration YAML
  4. Bootstrap the first control-plane node
  5. Join the remaining control-plane (master) nodes
  6. Join the worker nodes

Docker Engine is used as the container runtime; following the official recommendation, cri-dockerd is used as the CRI adapter.

Step 1: Install the Kubernetes packages on all servers

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubectl and kubelet
version=1.24.2-0
yum install -y kubectl-$version kubeadm-$version kubelet-$version --disableexcludes=kubernetes
# Enable the kubelet service
systemctl enable kubelet
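
A quick sanity check that the expected versions were installed:

kubeadm version -o short      # should print v1.24.2
kubectl version --client
kubelet --version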

Step 2: Install Docker and cri-dockerd on all servers

Install Docker:

# Docker Engine is used as the container runtime
# Install the tools Docker depends on
yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion net-tools gcc
# Configure the Aliyun Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
echo "starting docker"
systemctl daemon-reload
systemctl enable docker && systemctl start docker && systemctl status docker
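
The kubelet configuration later in this guide sets cgroupDriver: systemd, while Docker on CentOS 7 typically defaults to the cgroupfs driver. Aligning the two avoids a kubelet/runtime mismatch; a minimal sketch of /etc/docker/daemon.json (an assumption — merge with the file if you already manage it):

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i "cgroup driver"   # should report: Cgroup Driver: systemd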

Install cri-dockerd (see https://github.com/Mirantis/cri-dockerd).
1 Build cri-dockerd

git clone https://github.com/Mirantis/cri-dockerd.git
# Building from source requires a Go toolchain
yum install -y golang
# Add the following to /etc/profile (vim /etc/profile), then run: source /etc/profile
export GOROOT=/usr/lib/golang
export GOPATH=/home/gopath/
export GO111MODULE=on
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
# Use a Go module proxy reachable from mainland China
go env -w GOPROXY=https://goproxy.io,direct
cd cri-dockerd
mkdir bin
go get && go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service

2 Configure cri-dockerd
Edit /etc/systemd/system/cri-docker.service and set the ExecStart line to the following:

vim /etc/systemd/system/cri-docker.service
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7

3 Start cri-dockerd

systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
systemctl status cri-docker.service
systemctl status cri-docker.socket
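
If everything is running, the CRI endpoint that kubeadm and the kubelet will use should now exist:

ls -l /var/run/cri-dockerd.sock
systemctl is-active cri-docker.service cri-docker.socket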

Step 3: Generate the cluster init configuration file

Print the default configuration for the different kinds:

kubeadm config print init-defaults --component-configs KubeletConfiguration
kubeadm config print init-defaults --component-configs InitConfiguration
kubeadm config print init-defaults --component-configs ClusterConfiguration

Example configuration file cluster_conf.yaml:

---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: $master01_ip # master01 is used as the first control-plane node, so its IP goes here
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock # use the cri-dockerd socket
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.24.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
imageRepository: registry.aliyuncs.com/google_containers # use the Aliyun image mirror
apiServer:
  certSANs:
  - 192.168.20.121 # the load balancer VIP
controlPlaneEndpoint: "192.168.20.121:16443" # the load balancer VIP and port
etcd:
  external:
    endpoints:
      - http://192.168.20.1:2379 # change ETCD_0_IP appropriately
      - http://192.168.20.2:2379 # change ETCD_1_IP appropriately
      - http://192.168.20.3:2379 # change ETCD_2_IP appropriately
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
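
Before bootstrapping, you can exercise the configuration without changing the host (a sketch; --dry-run only prints what would be done):

kubeadm init --config cluster_conf.yaml --dry-run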

Step 4: Bootstrap the master01 node

Note: if this is not the first attempt, make sure that:

  • the kubelet service is stopped: systemctl stop kubelet
  • everything under the Kubernetes directory is removed: rm -rf /etc/kubernetes/*

kubeadm init --config cluster_conf.yaml --upload-certs --v=9

If the output ends like the following, the bootstrap succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.20.121:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d48abff778cc0c2f6be87d07b182d1b10426aab393a149fd649c14220bcac53c \
        --control-plane --certificate-key 2a77eef911983d84b7671882fe7d60028a177687c421d74a487de793fbd6b2a5

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.121:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d48abff778cc0c2f6be87d07b182d1b10426aab393a149fd649c14220bcac53c

Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

Install the flannel CNI network plugin:

kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
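
After applying flannel, verify that the node becomes Ready and that the control-plane and flannel pods come up (a sketch of the checks, not exact output):

kubectl get nodes                  # master01 should eventually report Ready
kubectl get pods -A -o wide        # kube-system and flannel pods should reach Running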

Step 5: Join the master02/master03 control-plane nodes

1 Copy the cluster certificates from master01 to the two nodes
Run the following on master01:
USER=root
CONTROL_PLANE_IPS=("192.168.20.2" "192.168.20.3")
for host in "${CONTROL_PLANE_IPS[@]}"; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki
done

2 Join master02/master03
Log in to master02 and master03 and run:

kubeadm join 192.168.20.121:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d48abff778cc0c2f6be87d07b182d1b10426aab393a149fd649c14220bcac58c \
        --control-plane --certificate-key 2a77eef911983d84b7671882fe7d60028a177687c421d74a487de793fbd6b2a5 \
        --cri-socket unix:///var/run/cri-dockerd.sock

Output like the following indicates the node joined successfully:

[mark-control-plane] Marking the node master03 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.


To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

3 On master01, confirm that both nodes have joined:

kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   46m     v1.24.2
master02   Ready    control-plane   4m58s   v1.24.2
master03   Ready    control-plane   2m8s    v1.24.2

Step 6: Join the worker nodes

Log in to each worker node and run:

kubeadm join 192.168.20.121:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d48abff778cc0c2f6be87d07b182d1b10426aab393a149fd649c14220bcac58c \
        --cri-socket unix:///var/run/cri-dockerd.sock

At this point the whole cluster has been deployed successfully.
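
As a final smoke test (a sketch; the deployment name and image are arbitrary), schedule a small workload and confirm it lands on the worker nodes:

kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl get pods -o wide           # the pods should be scheduled on worker01/worker02
kubectl delete deployment nginx-test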

References

  • https://nieoding-dis-doc.readthedocs.io/zh/latest/k8s-ha/#haproxy
  • https://zhuanlan.zhihu.com/p/106531282
  • https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
  • https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
  • https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#options-for-software-load-balancing
