Installing Kubernetes v1.30.1 Online with kubeadm


Table of Contents

    • Introduction
    • Architecture
    • Prerequisites
      • Resource Planning
    • Basic Configuration
      • Configure the Network Interface
      • Configure hosts
      • Install Common Packages
      • Configure SSH Mutual Trust
      • Install Ansible
      • Configure hosts
      • Disable swap
      • SELinux
      • Firewall
      • File Descriptor and Process Limits
      • Configure Kernel Parameters
      • Logging
      • Configure a Proxy on the Hosts
    • Install containerd
      • Method 1: for rocky-8.9-x86_64-dvd1.iso
      • Method 2: for rocky-8.9-x86_64-minimal.iso
      • Configure a Proxy for containerd (optional)
    • Install Kubernetes
    • Network: Calico
    • Add Worker Nodes
    • Set Node Roles

Introduction

kubeadm is a tool provided by the Kubernetes project to simplify and speed up the installation and configuration of a Kubernetes cluster. As a capable and easy-to-use installer, kubeadm helps users quickly stand up a production-grade cluster that follows best practices and standard configuration.

Key features

  • Fast initialization:
    kubeadm provides the kubeadm init and kubeadm join commands for initializing control-plane nodes and joining worker nodes. kubeadm init sets up the cluster control plane, including the API server, controller manager, and scheduler, while kubeadm join adds new worker nodes to an existing cluster.

  • Standardized configuration:
    Clusters installed with kubeadm follow the Kubernetes community's recommended configuration and best practices, which simplifies installation and improves stability and maintainability.

  • Flexibility and extensibility:
    kubeadm lets users customize the initialization process through a configuration file or command-line flags, including the choice of network plugin, the Pod and Service CIDRs, and API server parameters, so the cluster can be tailored to different environments and scenarios (a configuration-file sketch follows this list).

  • High availability:
    kubeadm supports multiple control-plane nodes, making it possible to build highly available clusters that tolerate the failure of a single control-plane node while keeping critical services running.

  • Simplified cluster management:
    kubeadm ships with commands for rolling upgrades, version management, and maintenance, so cluster versions can be upgraded smoothly without interrupting services.

  • Broad community support:
    As the installer recommended by the Kubernetes project, kubeadm has extensive community support and rich documentation; help is available through the official docs, community forums, and other resources.

  • Multi-platform compatibility:
    kubeadm runs on physical machines, virtual machines, and the major cloud platforms, providing a consistent installation experience across different infrastructures.
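
A minimal sketch of the configuration-file approach mentioned above; the file name, CIDRs, and other values are illustrative and are not used later in this guide, which passes flags directly:

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.1
controlPlaneEndpoint: kube-master01:6443
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
kubeadm init --config kubeadm-config.yaml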

Architecture

                    +--------------------------------+
                    |  Control Plane Node (Master)   |
                    |--------------------------------|
                    |  API Server | Scheduler        |
                    |  Controller Manager | etcd     |
                    +--------------------------------+
                                   |
                                   |
    +---------------------------------------------------------+
    |          Network (Pod Network, Service Network)          |
    +---------------------------------------------------------+
           |                    |                    |
           |                    |                    |
+--------------------+   +--------------------+   +--------------------+
|   Worker Node 1    |   |   Worker Node 2    |   |   Worker Node 3    |
|--------------------|   |--------------------|   |--------------------|
|  kubelet           |   |  kubelet           |   |  kubelet           |
|  kube-proxy        |   |  kube-proxy        |   |  kube-proxy        |
|  Container Runtime |   |  Container Runtime |   |  Container Runtime |
+--------------------+   +--------------------+   +--------------------+

Prerequisites

Resource Planning

192.168.23.7-rocky-8.9-minial-templ

Role            IP              CPU   MEM   Disk        OS         Kernel
kube-master01   192.168.23.101  4     8G    100G        rhel 8.7   4.18+
kube-node01     192.168.23.102  16    32G   100G,200G   rhel 8.7   4.18+
kube-node02     192.168.23.103  16    32G   100G,200G   rhel 8.7   4.18+
kube-node03     192.168.23.104  16    32G   100G,200G   rhel 8.7   4.18+

Basic Configuration

Configure the Network Interface

$ cat /etc/sysconfig/network-scripts/ifcfg-ens192 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=eui64
NAME=ens192
DEVICE=ens192
ONBOOT=yes
IPADDR=192.168.23.101
PREFIX=20
GATEWAY=192.168.21.1
DNS1=8.8.8.8

nmcli connection reload && nmcli connection down ens192 && nmcli connection up ens192
systemctl stop NetworkManager && systemctl disable NetworkManager && systemctl status NetworkManager | grep Active 
yum -y install network-scripts
systemctl start network && systemctl enable network && systemctl status network | grep Active
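
Optionally, confirm that the static address and default route are active before moving on:

ip -4 addr show ens192
ip route | grep default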

Configure hosts

cat <<EOF>> /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.23.110 kube-master01
192.168.23.111 kube-node01
192.168.23.112 kube-node02
192.168.23.113 kube-node03
EOF

Install Common Packages

yum -y update
yum -y install iproute-tc wget vim socat bash-completion net-tools zip bzip2 bind-utils

Configure SSH Mutual Trust

for i in {101..104};do ssh-copy-id root@192.168.23.$i;done
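
ssh-copy-id assumes a key pair already exists on the machine pushing the keys; if one has not been generated yet, create it first (the empty passphrase here is a convenience for lab use, not a recommendation):

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa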

Install Ansible

dnf -y install epel-release
dnf -y install ansible
$ vim /etc/ansible/hosts 
[all]
kube-master01 ansible_host=192.168.23.110 
kube-node01 ansible_host=192.168.23.111
kube-node02 ansible_host=192.168.23.112
kube-node03 ansible_host=192.168.23.113

Test:

$ ansible all -m ping
kube-node02 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
kube-node03 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
kube-master01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
kube-node01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
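
With Ansible now responding on every node, the remaining per-node steps can also be pushed to all machines at once instead of being repeated host by host; for example, the swap step below could be run as an ad-hoc command (illustrative, not part of the original procedure):

ansible all -m shell -a "swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab" -b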

Configure hosts

 cat <<EOF> /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.23.110 kube-master01
192.168.23.111 kube-node01
192.168.23.112 kube-node02
192.168.23.113 kube-node03
EOF

Disable swap

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

SELinux

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Firewall

systemctl disable --now firewalld

File Descriptor and Process Limits

ansible all -m lineinfile -a "path=/etc/security/limits.conf line='* soft nofile 655360\n* hard nofile 131072\n* soft nproc 655350\n* hard nproc 655350\n* soft memlock unlimited\n* hard memlock unlimited'" -b
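
Note that lineinfile manages a single line, so the embedded \n escapes may land in limits.conf as literal text rather than separate entries, and a soft nofile of 655360 above a hard limit of 131072 would be rejected anyway (a soft limit may not exceed the hard limit). A small playbook using blockinfile is a safer way to write the whole block; the file name limits.yml and the raised hard nofile value are assumptions:

cat > limits.yml <<'EOF'
- hosts: all
  become: true
  tasks:
    - name: Raise file and process limits
      ansible.builtin.blockinfile:
        path: /etc/security/limits.conf
        block: |
          * soft nofile 655360
          * hard nofile 655360
          * soft nproc 655350
          * hard nproc 655350
          * soft memlock unlimited
          * hard memlock unlimited
EOF
ansible-playbook limits.yml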

Configure Kernel Parameters

modprobe bridge
modprobe br_netfilter
cat <<EOF>> /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p /etc/sysctl.conf
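
A quick check that the modules are loaded and the bridge sysctls took effect (optional):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward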

Logging

sed -i 's/#Storage=auto/Storage=auto/g' /etc/systemd/journald.conf && mkdir -p /var/log/journal && systemd-tmpfiles --create --prefix /var/log/journal
systemctl restart systemd-journald.service
ls -al /var/log/journal
yum -y install rsyslog
systemctl restart rsyslog && systemctl enable rsyslog

Configure a Proxy on the Hosts

ENV_PROXY="http://192.168.21.101:7890"

enable_proxy() {
    export HTTP_PROXY="$ENV_PROXY"
    export HTTPS_PROXY="$ENV_PROXY"
    export ALL_PROXY="$ENV_PROXY"
    export NO_PROXY="proxyhost,localhost,*.vsphere.local,*.vm.demo,*.tanzu.demo,192.168.21.101,127.0.0.1/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"

    export http_proxy="$ENV_PROXY"
    export https_proxy="$ENV_PROXY"
    export all_proxy="$ENV_PROXY"
    export no_proxy="proxyhost,localhost,*.vsphere.local,*.vm.demo,*.tanzu.demo,192.168.21.101,127.0.0.1/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"

    #git config --global http.proxy $ENV_PROXY
    #git config --global https.proxy $ENV_PROXY
}



disable_proxy() {
    unset HTTP_PROXY
    unset HTTPS_PROXY
    unset ALL_PROXY
    unset NO_PROXY

    unset http_proxy
    unset https_proxy
    unset all_proxy
    unset no_proxy

    #git config --global --unset http.proxy
    #git config --global --unset https.proxy
}

#source <(kubectl completion bash)

#enable_proxy
disable_proxy
source /root/.bashrc

Install containerd

Method 1: for rocky-8.9-x86_64-dvd1.iso

#!/bin/bash


name=`basename $0 .sh`
ENABLE_DOWNLOAD=${ENABLE_DOWNLOAD:-true}
BASE_DIR="$( dirname "$( readlink -f "${0}" )" )"

if [ ! -e files ]; then
    mkdir -p files
fi

FILES_DIR=./files
IMAGES_DIR=./images

# download files, if not found
download() {
    url=$1
    dir=$2

    filename=$(basename $1)
    mkdir -p ${FILES_DIR}/$dir

    if [ ! -e ${FILES_DIR}/$dir/$filename ]; then
        echo "==> download $url"
        (cd ${FILES_DIR}/$dir && curl -SLO $1)
    fi
}

download_files() {

if $ENABLE_DOWNLOAD; then
    # TODO: These versions must match the ones used by kubespray. Refer to `roles/downloads/defaults/main.yml` in kubespray.
    RUNC_VERSION=1.1.12
    CONTAINERD_VERSION=1.6.31
    NERDCTL_VERSION=1.7.6
    CRICTL_VERSION=1.30.0
    CNI_VERSION=1.5.0

    download https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.amd64 runc/v${RUNC_VERSION}
    download https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
    download https://github.com/containerd/nerdctl/releases/download/v${NERDCTL_VERSION}/nerdctl-${NERDCTL_VERSION}-linux-amd64.tar.gz
    download https://github.com/kubernetes-sigs/cri-tools/releases/download/v${CRICTL_VERSION}/crictl-v${CRICTL_VERSION}-linux-amd64.tar.gz
    download https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz kubernetes/cni

else
    FILES_DIR=./files
fi

}

select_latest() {
    local latest=$(ls $* | tail -1)
    if [ -z "$latest" ]; then
        echo "No such file: $*"
        exit 1
    fi
    echo $latest
}

install_runc() {

# Install runc
echo "==> Install runc"
sudo cp $(select_latest "${FILES_DIR}/runc/v*/runc.amd64") /usr/local/bin/runc
sudo chmod 755 /usr/local/bin/runc

}



install_nerdctl() {
# Install nerdctl
echo "==> Install nerdctl"
tar xvf $(select_latest "${FILES_DIR}/nerdctl-*-linux-amd64.tar.gz") -C /tmp
sudo cp /tmp/nerdctl /usr/local/bin

}

install_crictl () {
# Install crictl plugins
echo "==> Install crictl plugins"
sudo tar xvzf $(select_latest "${FILES_DIR}/crictl-v*-linux-amd64.tar.gz") -C /usr/local/bin

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

}

install_containerd() {
# Install containerd
echo "==> Install containerd"


echo ""
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
systemctl restart systemd-modules-load.service
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

sudo tar xvf $(select_latest "${FILES_DIR}/containerd-*-linux-amd64.tar.gz") --strip-components=1 -C /usr/local/bin

cat > /etc/systemd/system/containerd.service <<EOF
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment out TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

sudo mkdir -p \
     /etc/systemd/system/containerd.service.d \
     /etc/containerd \
     /var/lib/containerd \
     /run/containerd



containerd config default | tee /etc/containerd/config.toml
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup


echo "==> Start containerd"
sudo systemctl daemon-reload && sudo systemctl enable --now containerd && sudo systemctl restart containerd && sudo systemctl status containerd | grep Active
}

install_cni() {
# Install cni plugins
echo "==> Install CNI plugins"
sudo mkdir -p /opt/cni/bin
sudo tar xvzf $(select_latest "${FILES_DIR}/kubernetes/cni/cni-plugins-linux-amd64-v*.tgz") -C /opt/cni/bin

}

action=$1

case $action in
  d )
    download_files
    ;;
  i|install)
    install_nerdctl
    install_crictl
    install_runc
    install_containerd
    install_cni
    ;;
   *)
    echo "Usage: $name [d|i]"
    echo "sh $name d: it is download packages."
    echo "sh$name i: it is install packages."
    ;;
esac
exit 0
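
Assuming the script above is saved as install-containerd.sh (the file name is illustrative), it is run in two passes: the download pass needs internet access (or the proxy configured earlier), and the install pass can then run offline on each node:

sh install-containerd.sh d    # download runc, containerd, nerdctl, crictl and the CNI plugins into ./files
sh install-containerd.sh i    # install them on the local node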

Method 2: for rocky-8.9-x86_64-minimal.iso

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install containerd (version: containerd.io-1.6.31-3.1.el8.x86_64)
yum list containerd.io --showduplicates | sort -r
yum install -y containerd.io
containerd config default > /etc/containerd/config.toml
sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd
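
Optionally, verify that the runtime is up and that the systemd cgroup driver is enabled:

containerd --version
systemctl is-active containerd
grep SystemdCgroup /etc/containerd/config.toml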

Configure a Proxy for containerd (optional)

Intended for hosts in mainland China.

mkdir -p /etc/systemd/system/containerd.service.d/
cat <<EOF> /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.21.101:7890" "HTTPS_PROXY=http://192.168.21.101:7890" "NO_PROXY=localhost,127.0.0.0/8,169.0.0.0/8,10.0.0.0/8,192.168.0.0/16,*.coding.net,*.tencentyun.com,*.myqcloud.com"
EOF
systemctl daemon-reload && systemctl restart containerd
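
To confirm that containerd picked up the proxy environment:

systemctl show containerd --property=Environment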

Install Kubernetes

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
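
It can be worth confirming which versions were installed before initializing the cluster:

kubeadm version -o short
kubelet --version
kubectl version --client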

Run kubeadm init

kubeadm init --control-plane-endpoint=kube-master01

Output:

[root@kube-master01 ~]# kubeadm init --control-plane-endpoint=kube-master01
[init] Using Kubernetes version: v1.30.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0527 14:05:00.048378   19741 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.23.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube-master01 localhost] and IPs [192.168.23.110 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube-master01 localhost] and IPs [192.168.23.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.158517ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 10.000683426s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kube-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: kbai6s.9wvsc3ioui5p1lme
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join kube-master01:6443 --token kbai6s.9wvsc3ioui5p1lme \
        --discovery-token-ca-cert-hash sha256:16337a9cdca3216dbe056ca498056edd57fcf238a7e5b01c47e8b922ac2081b2 \
        --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kube-master01:6443 --token kbai6s.9wvsc3ioui5p1lme \
        --discovery-token-ca-cert-hash sha256:16337a9cdca3216dbe056ca498056edd57fcf238a7e5b01c47e8b922ac2081b2 

Result:

$ kubectl get node
NAME            STATUS     ROLES           AGE   VERSION
kube-master01   NotReady   control-plane   87s   v1.30.1

Network: Calico

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Output:

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

Result:

$ kubectl get pod -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5b9b456c66-bv576   1/1     Running   0          67s
calico-node-frz2z                          1/1     Running   0          67s
coredns-7db6d8ff4d-brdzp                   1/1     Running   0          5m5s
coredns-7db6d8ff4d-d4tv9                   1/1     Running   0          5m5s
etcd-kube-master01                         1/1     Running   0          5m10s
kube-apiserver-kube-master01               1/1     Running   0          5m10s
kube-controller-manager-kube-master01      1/1     Running   0          5m10s
kube-proxy-j8p64                           1/1     Running   0          5m5s
kube-scheduler-kube-master01               1/1     Running   0          5m10s

$  kubectl get node
NAME            STATUS   ROLES           AGE    VERSION
kube-master01   Ready    control-plane   6m3s   v1.30.1

Add Worker Nodes

Run the join command on kube-node01, kube-node02, and kube-node03:

kubeadm join kube-master01:6443 --token kbai6s.9wvsc3ioui5p1lme \
        --discovery-token-ca-cert-hash sha256:16337a9cdca3216dbe056ca498056edd57fcf238a7e5b01c47e8b922ac2081b2 

Output:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 500.787012ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Result:

$ kubectl get node
NAME            STATUS   ROLES           AGE     VERSION
kube-master01   Ready    control-plane   15m     v1.30.1
kube-node01     Ready    <none>          2m2s    v1.30.1
kube-node02     Ready    <none>          2m16s   v1.30.1
kube-node03     Ready    <none>          88s     v1.30.1

Set Node Roles

kubectl label nodes kube-node01 node-role.kubernetes.io/node=
kubectl label nodes kube-node02 node-role.kubernetes.io/node=
kubectl label nodes kube-node03 node-role.kubernetes.io/node=
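
Verify the labels with kubectl; the ROLES column for the three workers should now show node instead of <none>:

kubectl get nodes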

References:

  • https://kubernetes.io/docs/concepts/cluster-administration/addons/
  • https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/
