Kubernetes Node Onboarding and Offboarding; Building a Highly Available Kubernetes Cluster (Parts 1, 2 and 3)


I. Kubernetes Node Onboarding and Offboarding

1. Bringing a New Node Online

1) Preparation
Disable firewalld and SELinux
Set the hostname
Configure /etc/hosts
Disable swap

swapoff -a

To disable it permanently, edit /etc/fstab with vi and comment out the swap line.
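If you prefer not to edit the file by hand, the swap entries can also be commented out with sed; this is just a convenience sketch (it backs the file up first), so check /etc/fstab afterwards:

sed -r -i.bak '/\sswap\s/s/^/#/' /etc/fstab   # back up to /etc/fstab.bak, then comment out swap lines
swapoff -a
free -h                                       # the Swap line should now show 0B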

Pass bridged IPv4 traffic to the iptables chains

modprobe br_netfilter  # load the br_netfilter module so the bridge sysctl parameters become available
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings

Enable IP forwarding

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
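To confirm the kernel parameters took effect, the values can be read back (a quick optional check):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# all three should print "= 1"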

Time synchronization

yum install -y chrony;
systemctl start chronyd;
systemctl enable chronyd

2) Install containerd
First, install the yum-utils tool

yum install -y yum-utils

Configure Docker's official yum repository (skip this if you have already done it)

yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Install containerd

yum install containerd.io -y

Start and enable the service

systemctl enable containerd
systemctl start containerd

Generate the default configuration

containerd config default > /etc/containerd/config.toml 

Modify the configuration

vi /etc/containerd/config.toml
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"  # switch to the Aliyun mirror
SystemdCgroup = true  # use the systemd cgroup driver
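The same two edits can be applied with sed instead of opening vi; this is only a sketch that assumes the default layout produced by "containerd config default", so verify the result with grep:

sed -i 's#sandbox_image = .*#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml   # confirm both changes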

Restart the containerd service

systemctl restart containerd

3) Configure the Kubernetes repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Note: this uses the RHEL 7 Kubernetes repository, which also works on RHEL/Rocky 8.

4) Install kubeadm and kubelet

yum install -y kubelet-1.27.2 kubeadm-1.27.2 kubectl-1.27.2

Start the kubelet service

systemctl start kubelet.service
systemctl enable kubelet.service 

5) Point crictl at containerd

crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
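This writes /etc/crictl.yaml. If your crictl version lacks the --set flag, the file can be created directly instead (a minimal sketch), and crictl info is a handy connectivity check:

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl info > /dev/null && echo "crictl can reach containerd"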

6) On the master node, generate a join token

kubeadm token create --print-join-command

7) On the new node, join the cluster

kubeadm join 192.168.222.101:6443 --token uzopz7.ryng3lkdh2qwvy89 --discovery-token-ca-cert-hash sha256:1a1fca6d1ffccb4f48322d706ea43ea7b3ef2194483699952178950b52fe2601

8) On the master, check the node list

kubectl get node

2. Taking a Node Offline

1) Before taking the node offline, create a test Deployment

Create the Deployment from the command line with 7 Pod replicas:
kubectl create deployment testdp2 --image=nginx:1.23.2 --replicas=7

Check the Pods

kubectl get po -o wide

2) Evict the Pods on the node being taken offline and mark it unschedulable (run on aminglinux01)

kubectl drain aminglinux04 --ignore-daemonsets

3) Make the node schedulable again (run on aminglinux01)

kubectl uncordon aminglinux04

4) Remove the node

kubectl delete node aminglinux04
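After the node has been removed from the cluster it still holds its old kubelet and CNI state. If the machine will be reused or rejoined later, it is common to reset it; a hedged cleanup sketch, to be run on the removed node itself (aminglinux04):

kubeadm reset -f                  # tear down kubelet bootstrap and local cluster state
rm -rf /etc/cni/net.d ~/.kube     # remove leftover CNI config and kubeconfig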

II. Building a Highly Available Kubernetes Cluster (Stacked etcd)

 

A stacked etcd cluster means that etcd runs on the same hosts as the other Kubernetes control-plane components.

High-availability design:
1) Use keepalived + haproxy for high availability and load balancing.
2) Run apiserver, controller-manager and scheduler on each of the three master machines, giving three control-plane nodes.
3) Run etcd as a three-node cluster across the same three machines.
Machine preparation (OS: Rocky Linux 8.7):
 

Hostname       IP                Installed components
k8s-master01   192.168.100.11    etcd, apiserver, controller-manager, scheduler, keepalived, haproxy, kubelet, containerd, kubeadm
k8s-master02   192.168.100.12    etcd, apiserver, controller-manager, scheduler, keepalived, haproxy, kubelet, containerd, kubeadm
k8s-master03   192.168.100.13    etcd, apiserver, controller-manager, scheduler, keepalived, haproxy, kubelet, containerd, kubeadm
k8s-node01     192.168.100.14    kubelet, containerd, kubeadm
k8s-node02     192.168.100.15    kubelet, containerd, kubeadm
VIP            192.168.100.200   virtual IP managed by keepalived

1. Preparation

Note: perform these steps on all 5 machines.

1) Disable firewalld and SELinux

[root@bogon ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@bogon ~]# systemctl disable firewalld;  systemctl stop firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

2) Set the hostname

hostnamectl  set-hostname k8s-master01

3) Configure /etc/hosts

echo "192.168.100.11  k8s-master01" >> /etc/hosts
echo "192.168.100.12  k8s-master01" >> /etc/hosts
echo "192.168.100.13  k8s-master01" >> /etc/hosts
echo "192.168.100.14  k8s-node01" >> /etc/hosts
echo "192.168.100.15  k8s-node02" >> /etc/hosts

4) Disable swap
swapoff -a
To disable it permanently, comment out the swap line in /etc/fstab:

/dev/mapper/rl-root     /                       xfs     defaults        0 0
UUID=784fb296-c00c-4615-a4a6-583ae0156b04 /boot                   xfs     defaults        0 0
#/dev/mapper/rl-swap     none                    swap    defaults        0 0

5) Pass bridged IPv4 traffic to the iptables chains

modprobe br_netfilter  # load the br_netfilter module so the bridge sysctl parameters become available
[root@bogon ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
sysctl --system  # apply the settings

[root@bogon ~]# sysctl --system 
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
kernel.core_pipe_limit = 16
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...

6) Time synchronization

yum install -y chrony;
systemctl start chronyd;
systemctl enable chronyd

[root@bogon ~]# systemctl start chronyd
[root@bogon ~]# systemctl enable chronyd
Created symlink /etc/systemd/system/multi-user.target.wants/chronyd.service → /usr/lib/systemd/system/chronyd.service.
[root@bogon ~]# 
 

2. Install keepalived + haproxy

1) Install keepalived and haproxy with yum (on the three masters)

yum install -y keepalived haproxy

2) Configure keepalived

On master01:
vi /etc/keepalived/keepalived.conf   # edit it to contain the following

global_defs {
    router_id lvs-keepalived01          # machine identifier; used in mail notifications when a failure occurs
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    fall 2
}

vrrp_instance VI_1 {                    # VRRP instance definition
    state MASTER                        # MASTER or BACKUP, must be uppercase
    interface ens160                    # interface that serves traffic (use your actual NIC name; ens160 on these VMs)
    virtual_router_id 100               # virtual router ID; must be the same number throughout one VRRP instance
    priority 100                        # higher value = higher priority; within one vrrp_instance the MASTER must be higher than the BACKUPs
    advert_int 1                        # interval in seconds for sync checks between master and backups
    authentication {                    # authentication type and password
        auth_type PASS                  # PASS or AH
        auth_pass aminglinuX            # password; must be identical on MASTER and BACKUP within one vrrp_instance
    }
    virtual_ipaddress {                 # virtual IP address(es); one per line, multiple allowed
        192.168.100.200
    }
    mcast_src_ip 192.168.100.11         # master01's own IP
    track_script {
        chk_haproxy
    }
}

On master02:
vi /etc/keepalived/keepalived.conf   # edit it to contain the following

global_defs {
    router_id lvs-keepalived02          # machine identifier; used in mail notifications when a failure occurs
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    fall 2
}

vrrp_instance VI_1 {                    # VRRP instance definition
    state BACKUP                        # MASTER or BACKUP, must be uppercase
    interface ens160                    # interface that serves traffic (use your actual NIC name; ens160 on these VMs)
    virtual_router_id 100               # virtual router ID; must be the same number throughout one VRRP instance
    priority 90                         # higher value = higher priority; within one vrrp_instance the MASTER must be higher than the BACKUPs
    advert_int 1                        # interval in seconds for sync checks between master and backups
    authentication {                    # authentication type and password
        auth_type PASS                  # PASS or AH
        auth_pass aminglinuX            # password; must be identical on MASTER and BACKUP within one vrrp_instance
    }
    virtual_ipaddress {                 # virtual IP address(es); one per line, multiple allowed
        192.168.100.200
    }
    mcast_src_ip 192.168.100.12         # master02's own IP
    track_script {
        chk_haproxy
    }
}

On master03:
vi /etc/keepalived/keepalived.conf   # edit it to contain the following

global_defs {
    router_id lvs-keepalived03          # machine identifier; used in mail notifications when a failure occurs
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    fall 2
}

vrrp_instance VI_1 {                    # VRRP instance definition
    state BACKUP                        # MASTER or BACKUP, must be uppercase
    interface ens160                    # interface that serves traffic (use your actual NIC name; ens160 on these VMs)
    virtual_router_id 100               # virtual router ID; must be the same number throughout one VRRP instance
    priority 80                         # higher value = higher priority; within one vrrp_instance the MASTER must be higher than the BACKUPs
    advert_int 1                        # interval in seconds for sync checks between master and backups
    authentication {                    # authentication type and password
        auth_type PASS                  # PASS or AH
        auth_pass aminglinuX            # password; must be identical on MASTER and BACKUP within one vrrp_instance
    }
    virtual_ipaddress {                 # virtual IP address(es); one per line, multiple allowed
        192.168.100.200
    }
    mcast_src_ip 192.168.100.13         # master03's own IP
    track_script {
        chk_haproxy
    }
}

Create the health-check script (on all three masters)
vi /etc/keepalived/check_haproxy.sh   # with the following content

#!/bin/bash
ha_pid_num=$(ps -ef | grep ^haproxy | wc -l)
if [[ ${ha_pid_num} -ne 0 ]]; then
    exit 0
else
    exit 1
fi

After saving, make it executable

chmod a+x /etc/keepalived/check_haproxy.sh
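A quick sanity check of the script (optional; it will return 1 until haproxy is started in step 4 below):

bash /etc/keepalived/check_haproxy.sh; echo $?   # 0 when haproxy is running, 1 otherwise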

3) Configure haproxy (on the three masters)
vi /etc/haproxy/haproxy.cfg   # replace the contents with the following

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend k8s
    bind 0.0.0.0:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s

backend k8s
    mode tcp
    balance roundrobin
    server master01 192.168.100.11:6443 check
    server master02 192.168.100.12:6443 check
    server master03 192.168.100.13:6443 check
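Before starting the service, the configuration can be syntax-checked; this is optional but cheap:

haproxy -c -f /etc/haproxy/haproxy.cfg   # exits 0 and reports the config as valid when the syntax is OK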

4) Start haproxy and keepalived (on the three masters)
Start haproxy

systemctl start haproxy; systemctl enable haproxy

Start keepalived

systemctl start keepalived; systemctl enable keepalived

[root@bogon ~]# systemctl start haproxy; systemctl enable haproxy
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
[root@bogon ~]# systemctl start keepalived; systemctl enable keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@bogon ~]# 

5) Test
First check the IP addresses

ip addr   # on master01 the VIP 192.168.100.200 has been added automatically

[root@k8s-master01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:a2:8c:65 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.100.11/24 brd 192.168.100.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 192.168.100.200/32 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea2:8c65/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@k8s-master01 ~]# 

Stop the keepalived service on master01 and the VIP will move to master02; stop keepalived on master02 as well and the VIP will move to master03. Afterwards, start the keepalived services on master01 and master02 again.
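That failover test can also be scripted; a hedged sketch (run each part on the host named in the comment, and adjust the NIC name if yours differs):

# on master01: release the VIP
systemctl stop keepalived
ip addr show ens160 | grep 192.168.100.200 || echo "VIP has left master01"

# on master02: the VIP should now be here
ip addr show ens160 | grep 192.168.100.200 && echo "VIP is on master02"

# when finished, start keepalived again on the hosts where it was stopped
systemctl start keepalived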

3. Install containerd (on all 5 machines)

1) First install the yum-utils tool

yum install -y yum-utils   # if yum has trouble here, use: dnf install -y yum-utils

2) Configure Docker's official yum repository (skip this if already done)

yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

[root@k8s-master01 ~]# yum-config-manager \
>      --add-repo \
>  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Repository extras is listed more than once in the configuration
Adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]#  

3) Install containerd

yum install containerd.io -y

4) Start and enable the service

systemctl enable containerd
systemctl start containerd

5) Generate the default configuration

containerd config default > /etc/containerd/config.toml

[root@k8s-master01 ~]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@k8s-master01 ~]# systemctl start containerd
[root@k8s-master01 ~]# containerd config default > /etc/containerd/config.toml
[root@k8s-master01 ~]# 

6) Modify the configuration

vi /etc/containerd/config.toml
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"  # switch to the Aliyun mirror
SystemdCgroup = true  # use the systemd cgroup driver

7) Restart the containerd service

systemctl restart containerd

4. Configure the Kubernetes repository (on all 5 machines)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-master01 ~]# 

Note: this uses the RHEL 7 Kubernetes repository, which also works on RHEL/Rocky 8.

5. Install kubeadm and kubelet (on all 5 machines)

1) List all available versions

yum --showduplicates list kubeadm

2) Install version 1.28.2 (the package version must match the --kubernetes-version passed to kubeadm init in step 6 below)

yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2

3) Start the kubelet service

systemctl start kubelet.service
systemctl enable kubelet.service

[root@k8s-master01 ~]# systemctl start kubelet.service
[root@k8s-master01 ~]# systemctl enable kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-master01 ~]# 

4) Point crictl at containerd (on all 5 machines)

crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock

[root@k8s-master01 ~]# crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
[root@k8s-master01 ~]# 

6. Initialize the Cluster with kubeadm (on master01)

Reference: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/

kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version=v1.28.2 --service-cidr=10.15.0.0/16 --pod-network-cidr=10.18.0.0/16 --upload-certs --control-plane-endpoint "192.168.100.200:16443"

Output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.100.200:16443 --token a9pwzp.86c7ujzy3e92f2qc \
    --discovery-token-ca-cert-hash sha256:9dd17bece1e255a502ccba2f40de30297045cbe34af2dd742305219a01b4cd47 \
    --control-plane --certificate-key 73e862799c6241fc1335f0c8de1d9ef99926710129f8c8d6d4732f249009f9ad

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.200:16443 --token a9pwzp.86c7ujzy3e92f2qc \
    --discovery-token-ca-cert-hash sha256:9dd17bece1e255a502ccba2f40de30297045cbe34af2dd742305219a01b4cd47 
[root@k8s-master01 ~]# 
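For reference, the long kubeadm init command line above can also be captured in a configuration file, which is easier to review and keep in version control. This is only a sketch of an equivalent config (the file name and layout are assumptions, not part of the original walkthrough):

cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.100.200:16443"
networking:
  serviceSubnet: 10.15.0.0/16
  podSubnet: 10.18.0.0/16
EOF
# kubeadm init --config kubeadm-config.yaml --upload-certs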

Copy the kubeconfig so that kubectl can talk to the cluster

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# 

7) Join the other two masters to the cluster
Run on master02 and master03:
Note: the following join command adds a node as a control-plane (master) node.

  kubeadm join 192.168.100.200:16443 --token a9pwzp.86c7ujzy3e92f2qc \
    --discovery-token-ca-cert-hash sha256:9dd17bece1e255a502ccba2f40de30297045cbe34af2dd742305219a01b4cd47 \
    --control-plane --certificate-key 73e862799c6241fc1335f0c8de1d9ef99926710129f8c8d6d4732f249009f9ad

The token is valid for 24 hours. If it has expired, generate a new one as follows. (Note that the certificates uploaded with --upload-certs are deleted after two hours; if they have expired too, re-upload them first with "kubeadm init phase upload-certs --upload-certs" and use the new certificate key it prints.)

kubeadm token create --print-join-command --certificate-key 57bce3cb5a574f50350f17fa533095443fb1ff2df480b9fcd42f6203cc014e6b

Copy the kubeconfig on these masters as well, so kubectl works there too

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

8) Adjust the etcd manifest
On master01:

vi /etc/kubernetes/manifests/etcd.yaml
Change
--initial-cluster=k8s-master01=https://192.168.100.11:2380
to
--initial-cluster=k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380,k8s-master03=https://192.168.100.13:2380

On master02:

vi /etc/kubernetes/manifests/etcd.yaml
Change
--initial-cluster=k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380
to
--initial-cluster=k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380,k8s-master03=https://192.168.100.13:2380

master03 needs no change: it joined last, so its manifest already lists all three members.
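To confirm that all three etcd members registered correctly, the member list can be queried from inside one of the etcd Pods; a verification sketch (the Pod name follows the node name):

kubectl -n kube-system exec etcd-k8s-master01 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list
# all three masters should be listed as started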

9) Join the two worker nodes to the cluster
Run on node01 and node02.
Note: the following join command adds a node as a worker node.

kubeadm join 192.168.100.200:16443 --token y0xje6.ret5h4uv9ec2x62e \
    --discovery-token-ca-cert-hash sha256:2d12eeafb03e0c86da86cdc7144d1eb8adc82cba0d151230d99a77acec4d5a2e

The token is valid for 24 hours; if it has expired, generate a new one with:

kubeadm token create --print-join-command

Check node status (run on master01)

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   7m58s   v1.28.2
k8s-master02   NotReady   control-plane   5m28s   v1.28.2
k8s-master03   NotReady   control-plane   5m18s   v1.28.2
k8s-node01     NotReady   <none>          79s     v1.28.2
k8s-node02     NotReady   <none>          8s      v1.28.2
[root@k8s-master01 ~]# 

7. Install the Calico Network Plugin (on master01)

 curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml -O

If you run into the following problem:

 https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to raw.githubusercontent.com port 443: Connection refused

you can add these entries to the hosts file:

199.232.68.133 raw.githubusercontent.com
185.199.108.133 raw.githubusercontent.com
185.199.109.133 raw.githubusercontent.com
185.199.110.133 raw.githubusercontent.com
185.199.111.133 raw.githubusercontent.com
[root@k8s-master01 ~]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  238k  100  238k    0     0  56351      0  0:00:04  0:00:04 --:--:-- 56351

After downloading, you still need to edit the Pod network CIDR defined in the manifest (CALICO_IPV4POOL_CIDR) so that it matches the --pod-network-cidr passed to kubeadm init earlier:

vim calico.yaml
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
# change the two lines above to:
- name: CALICO_IPV4POOL_CIDR
  value: "10.18.0.0/16"

Deploy it

kubectl apply -f calico.yaml

[root@k8s-master01 ~]# vim calico.yaml 
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
[root@k8s-master01 ~]# 

Check the result

kubectl get pods -n kube-system

 

[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS      AGE
calico-kube-controllers-cf9cc4fb8-tjjx8   1/1     Running   0             74s
calico-node-4kzfj                         1/1     Running   0             74s
calico-node-9kz2c                         1/1     Running   0             74s
calico-node-d2zcx                         1/1     Running   0             74s
calico-node-njmv8                         1/1     Running   0             74s
calico-node-pf5t2                         1/1     Running   0             74s
coredns-6554b8b87f-7vv24                  1/1     Running   0             20m
coredns-6554b8b87f-r4x56                  1/1     Running   0             20m
etcd-k8s-master01                         1/1     Running   0             13m
etcd-k8s-master02                         1/1     Running   0             13m
etcd-k8s-master03                         1/1     Running   0             17m
kube-apiserver-k8s-master01               1/1     Running   0             20m
kube-apiserver-k8s-master02               1/1     Running   0             18m
kube-apiserver-k8s-master03               1/1     Running   1 (17m ago)   17m
kube-controller-manager-k8s-master01      1/1     Running   2 (13m ago)   20m
kube-controller-manager-k8s-master02      1/1     Running   1 (14m ago)   18m
kube-controller-manager-k8s-master03      1/1     Running   0             17m
kube-proxy-bv6bb                          1/1     Running   0             17m
kube-proxy-jhwgf                          1/1     Running   0             18m
kube-proxy-snsnh                          1/1     Running   0             20m
kube-proxy-svv6g                          1/1     Running   0             12m
kube-proxy-vsd8c                          1/1     Running   0             12m
kube-scheduler-k8s-master01               1/1     Running   1 (17m ago)   20m
kube-scheduler-k8s-master02               1/1     Running   1 (14m ago)   18m
kube-scheduler-k8s-master03               1/1     Running   0             17m
[root@k8s-master01 ~]# 

8. Quickly Deploy an Application in Kubernetes

1) Create a Deployment

kubectl create deployment testdp --image=nginx:1.25.5   # the Deployment is named testdp and uses the nginx:1.25.5 image

[root@k8s-master01 ~]# kubectl create deployment testdp --image=registry.cn-hangzhou.aliyuncs.com/*/nginx:1.25.2 
deployment.apps/testdp created
[root@k8s-master01 ~]# 

2) Check the Deployment

kubectl get deployment

[root@k8s-master01 ~]# kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
testdp   1/1     1            1           31s

3) Check the Pods

kubectl get pods

[root@k8s-master01 ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
testdp-5b77968464-lt46x   1/1     Running   0          <invalid>
[root@k8s-master01 ~]# 

4) View Pod details

kubectl describe pod testdp-5b77968464-lt46x

[root@k8s-master01 ~]# kubectl describe pod testdp-5b77968464-lt46x
Name:             testdp-5b77968464-lt46x
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-node01/192.168.100.14
Start Time:       Sat, 10 Aug 2024 05:14:00 +0800
Labels:           app=testdp
                  pod-template-hash=5b77968464
Annotations:      cni.projectcalico.org/containerID: ec056ed1ddb541cd057953b4fc89c3ea5638ed93d2048914a55d4cfc3fa8ac1b
                  cni.projectcalico.org/podIP: 10.18.85.195/32
                  cni.projectcalico.org/podIPs: 10.18.85.195/32
Status:           Running
IP:               10.18.85.195
IPs:
  IP:           10.18.85.195
Controlled By:  ReplicaSet/testdp-5b77968464
Containers:
  nginx:
    Container ID:   containerd://8f5e42c7c71da2bdfbb43d7b2e135a9eb7ecdcded00d671c41c1b111e2dedb12
    Image:          registry.cn-hangzhou.aliyuncs.com/daliyused/nginx:1.25.5
    Image ID:       registry.cn-hangzhou.aliyuncs.com/daliyused/nginx@sha256:0e1ac7f12d904a5ce077d1b5c763b5750c7985e524f6083e5eaa7e7313833440
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 10 Aug 2024 05:14:12 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7zggq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-7zggq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Pulling    102s  kubelet            Pulling image "registry.cn-hangzhou.aliyuncs.com/daliyused/nginx:1.25.5"
  Normal  Scheduled  101s  default-scheduler  Successfully assigned default/testdp-5b77968464-lt46x to k8s-node01
  Normal  Pulled     91s   kubelet            Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/*/nginx:1.25.5" in 10.66s (10.66s including waiting)
  Normal  Created    91s   kubelet            Created container nginx
  Normal  Started    91s   kubelet            Started container nginx
[root@k8s-master01 ~]# 

5) Create a Service to expose the Pod's port on the nodes

kubectl expose deployment testdp --port=80 --type=NodePort --target-port=80 --name=testsvc

[root@k8s-master01 ~]# kubectl expose deployment testdp --port=80 --type=NodePort --target-port=80 --name=testsvc
service/testsvc exposed

6) Check the Service

kubectl get svc
testsvc NodePort 10.15.232.70 <none> 80:31693/TCP

[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.15.0.1      <none>        443/TCP        29m
testsvc      NodePort    10.15.56.110   <none>        80:30545/TCP   <invalid>
[root@k8s-master01 ~]# 

As you can see, the exposed port is a random port above 30000; you can open 192.168.100.14:30545 in a browser (any node IP works).
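A quick command-line check of the NodePort (the port is the one shown by kubectl get svc above):

curl -I http://192.168.100.14:30545   # an HTTP/1.1 200 OK response means nginx is reachable through the NodePort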
