飞天使 - Simple k8s Setup


Table of Contents

    • k8s Concepts
    • Installation and Deployment - Version 1
        • Passwordless SSH, hosts, disabling swap, enabling IPv4 forwarding
          • Pre-installation script
            • Enable ip_vs
            • Installing a specific Docker version
        • Installing kubeadm, kubectl, and kubelet (base building block)
      • Deploying one master and one worker node
      • Deploying three masters (if keepalived load balancing is unavailable, you can experiment on a single node and skip the load-balancing steps)
        • Creating the virtual load-balancer IP
      • Reference link for version 1
    • Installing a Kubernetes v1.23.5 cluster
        • Pre-installation preparation
        • Installation on all nodes
      • Installing containerd
        • apiserver high availability
      • Kubeadm installation and configuration
      • kubectl installation
      • Master node configuration
      • Node (worker) configuration
      • Reference link for version 2

k8s Concepts

K8sMaster: manages the K8sNodes.

K8sNode: a worker node that runs container workloads; it carries a Docker environment and the k8s node components (kubelet, kube-proxy).

Controller-manager: the brain of k8s. Through the API Server it monitors and manages the state of the whole cluster and keeps the cluster in its desired state.

API Server: exposes the HTTP REST interfaces for creating, deleting, updating, querying, and watching all k8s resource objects (Pod, RC, Service, etc.); it is the data bus and data hub of the entire system.

etcd: a highly available, strongly consistent store for service discovery. In a Kubernetes cluster, etcd is mainly used for configuration sharing and service discovery.

Scheduler: finds the most suitable node in the cluster for each newly created Pod and schedules the Pod onto that K8sNode.

kubelet: the bridge between the Kubernetes master and each node. It carries out the tasks the master sends down to its node and manages Pods and the containers inside them.

kube-proxy: a network proxy component that runs on every worker node and maintains that node's network rules. These rules allow network sessions from inside or outside the cluster to communicate with Pods. It watches the API Server for changes to resource objects and configures load balancing across a Service's backends.

Pod: a packaged environment for a group of containers. In a Kubernetes cluster the Pod is the foundation of every workload type and the smallest unit k8s manages; it is a combination of one or more containers that share storage, network, namespaces, and a spec for how to run. (Analogy: k8s = the school, Pod = the class, container = the student.)
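To make the Pod concept concrete, here is a minimal sketch of creating and inspecting a single-container Pod (the name demo-pod and the nginx image are illustrative only; it assumes a working cluster with kubectl configured):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # hypothetical name, for illustration only
spec:
  containers:
  - name: nginx
    image: nginx:1.21     # any image reachable from the node works
EOF

kubectl get pod demo-pod -o wide    # shows which K8sNode the Scheduler picked
kubectl delete pod demo-pod         # clean up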

Installation and Deployment - Version 1

Passwordless SSH, hosts, disabling swap, enabling IPv4 forwarding

First download the files: cd /root && yum install -y git && git clone https://gitee.com/hanfeng_edu/mastering_kubernetes.git
...omitted...
Public key (id_dsa.pub), private key (id_dsa), authorized-keys file (authorized_keys)
Note these settings in /etc/ssh/sshd_config:
ChallengeResponseAuthentication no
PermitRootLogin yes # usually this one is enough
PasswordAuthentication yes
PubkeyAuthentication yes
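The key-distribution commands themselves are omitted above; a minimal sketch (RSA shown here, though the file names listed above are DSA; host names match the /etc/hosts below, and ssh-copy-id prompts for each root password once):

ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for host in k8sMaster-1 k8sNode-1 k8sNode-2; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
done
ssh root@k8sNode-1 hostname   # should return without asking for a password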

/etc/hosts
192.168.100.8 k8sMaster-1
192.168.100.9 k8sNode-1
192.168.100.10 k8sNode-2

Pre-installation script
#!/bin/bash

################# System environment configuration #####################

# Disable SELinux and firewalld
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
cp /etc/{fstab,fstab.bak}
cat /etc/fstab.bak | grep -v swap > /etc/fstab

# Kernel parameters: filter bridged traffic with iptables and enable IPv4 forwarding
cat > /etc/sysctl.conf <<EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
modprobe br_netfilter
sysctl -p

# Sync time (the NTP server below is an assumption; substitute your own)
yum install -y ntpdate
ntpdate -u ntp.aliyun.com
ln -nfsv /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Enable ip_vs
#!/bin/bash

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh  ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
Installing a specific Docker version
Reference: https://docs.docker.com/engine/install/centos/
Remove old versions
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
Install required dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Refresh the cache and install Docker CE
yum makecache fast
yum install docker-ce-18.06.3.ce-3.el7 docker-ce-cli-18.06.3.ce-3.el7 containerd.io -y

Configure the Docker registry mirror
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://xxxxxx.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
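To confirm the daemon picked up the mirror and the pinned version is installed (a quick check; output varies with your environment):

docker --version                           # expect 18.06.3-ce
docker info | grep -A1 "Registry Mirrors"  # should list the mirror configured above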

Installing kubeadm, kubectl, and kubelet (base building block)

#!/bin/bash

# Dependencies the packages may need
yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the Aliyun mirror repo and install the Kubernetes tools
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF


# Install kubeadm, kubelet, and kubectl
yum -y install kubeadm-1.17.0 kubectl-1.17.0 kubelet-1.17.0

# Have Docker leave the iptables FORWARD policy on ACCEPT
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service

# Create the directory if it does not exist
if [ ! -d "/etc/docker" ];then
    mkdir -p /etc/docker
fi

# Configure Docker daemon options
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF


# Enable services at boot
systemctl enable docker && systemctl enable kubelet
systemctl daemon-reload
systemctl restart docker

After the installation completes, verify the versions:
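A quick sanity check that the pinned v1.17.0 tools landed (a sketch; the versions should all agree):

kubeadm version -o short           # expect v1.17.0
kubelet --version                  # expect Kubernetes v1.17.0
kubectl version --client --short   # expect v1.17.0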

Deploying one master and one worker node

1. Create the k8s config file on the master node
cat /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: "192.168.100.8:6443"
apiServer:
  certSANs:
  - 192.168.100.8
networking:
   podSubnet: 10.244.0.0/16
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

In the config file above, 192.168.100.8 is the master node's address.
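Optionally, the config can be checked before touching the node; --dry-run only simulates the init without persisting anything (a sketch):

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --dry-run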

2. Initialize the cluster with the following commands
# kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
# mkdir -p $HOME/.kube
# cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config
# curl -fsSL https://docs.projectcalico.org/v3.9/manifests/calico.yaml| sed "s@192.168.0.0/16@10.244.0.0/16@g" | kubectl apply -f -
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

3. Join the worker node to the master's cluster
# kubeadm join 192.168.100.8:6443 --token hrz6jc.8oahzhyv74yrpem5 \
    --discovery-token-ca-cert-hash sha256:25f51d27d64c55ea9d89d5af839b97d37dfaaf0413d00d481f7f59bd6556ee43 

4. Check the cluster state
# kubectl get nodes

Deploying three masters (if keepalived load balancing is unavailable, you can experiment on a single node and skip the load-balancing steps)

Creating the virtual load-balancer IP


1. Install keepalived on all three master nodes

	# yum install -y socat keepalived ipvsadm conntrack

#!/bin/bash

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh  ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs


2. Create the following keepalived config file

# cat /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.100.199
    }
}

virtual_server 192.168.100.199 6443 {
    delay_loop 6
    lb_algo rr            # "loadbalance" is not a valid lb_algo; round-robin (rr) assumed here
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP

    real_server 192.168.100.13 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.100.12 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.100.14 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

###  Distribute this config file to the other two machines
# scp /etc/keepalived/keepalived.conf  root@192.168.100.12:/etc/keepalived/
keepalived.conf                                                                                                                                                                           100% 1257     1.7MB/s   00:00    
# scp /etc/keepalived/keepalived.conf  root@192.168.100.13:/etc/keepalived/
Then just give each machine a different priority value (e.g. 100, 90, 80).
 

Start keepalived on all three machines; if the virtual IP answers ping, it works.
# systemctl start keepalived
# ping 192.168.100.199
PING 192.168.100.199 (192.168.100.199) 56(84) bytes of data.
64 bytes from 192.168.100.199: icmp_seq=1 ttl=64 time=0.064 ms
--- 192.168.100.199 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
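To see which of the three machines currently holds the VIP (assuming the eth0 interface from the config above):

ip addr show eth0 | grep 192.168.100.199   # only the active MASTER prints a match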

3. Create the cluster init config file; run this on one master node only

$ cat /etc/kubernetes/kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: "192.168.100.199:6443"
apiServer:
  certSANs:
  - 192.168.100.12
  - 192.168.100.13
  - 192.168.100.14
  - 192.168.100.199
networking:
  podSubnet: 10.244.0.0/16
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

4. Initialize the k8s cluster

#  kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  # curl -fsSL https://docs.projectcalico.org/v3.9/manifests/calico.yaml| sed "s@192.168.0.0/16@10.244.0.0/16@g" | kubectl apply -f -

Check the pod status:
# kubectl get pods -n kube-system
NAME                                            READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7cc97544d-lx8zn         1/1     Running   0          67s
calico-node-v8zql                               1/1     Running   0          67s
coredns-9d85f5447-s5q64                         1/1     Running   0          77s
coredns-9d85f5447-wv4c5                         1/1     Running   0          77s
etcd-gcp-honkong-k8s-doc04                      1/1     Running   0          94s
kube-apiserver-gcp-honkong-k8s-doc04            1/1     Running   0          94s
kube-controller-manager-gcp-honkong-k8s-doc04   1/1     Running   0          94s
kube-proxy-mg6ns                                1/1     Running   0          77s
kube-scheduler-gcp-honkong-k8s-doc04            1/1     Running   0          94s

5. Set up passwordless SSH access between the masters, then run the following:

# cat k8s-cluster-other-init.sh
#!/bin/bash
IPS=(192.168.100.12 192.168.100.13)
JOIN_CMD=`kubeadm token create --print-join-command 2> /dev/null`

for index in 0 1; do
  ip=${IPS[${index}]}
  ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
  scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $ip:~/.kube/config

  ssh ${ip} "${JOIN_CMD} --control-plane"
done

Reference link for version 1

https://gitee.com/hanfeng_edu/mastering_kubernetes.git

Installing a Kubernetes v1.23.5 cluster

Pre-installation preparation

hostnamectl set-hostname k8s-01 # run on every machine with its own hostname
bash # reload the hostname

 cat >> /etc/hosts <<EOF
192.168.100.18  k8s-01
192.168.100.19  k8s-02
192.168.100.20  k8s-03
192.168.100.21  k8s-04
192.168.100.22  k8s-05
EOF
#Set k8s-01 as the distribution host (the following only needs to run on k8s-01)
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y expect

# Adjust these settings in /etc/ssh/sshd_config on every server
PermitRootLogin yes # usually this one is enough
PasswordAuthentication yes
sed -i "s@^#\?PermitRootLogin.*@PermitRootLogin yes@g" /etc/ssh/sshd_config
sed -i "s@^#\?PasswordAuthentication.*@PasswordAuthentication yes@g" /etc/ssh/sshd_config

#Distribute the public key
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for i in k8s-01 k8s-02 k8s-03 k8s-04 k8s-05;do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
        expect {
                \"*yes/no*\" {send \"yes\r\"; exp_continue}
                \"*password*\" {send \"123456\r\"; exp_continue}
                \"*Password*\" {send \"123456\r\";}
        } "
done 

The server password here is 123456; change it to your own.

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
yum -y install gcc gcc-c++ make autoconf libtool-ltdl-devel gd-devel freetype-devel libxml2-devel libjpeg-devel libpng-devel openssh-clients openssl-devel curl-devel bison patch libmcrypt-devel libmhash-devel ncurses-devel binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel libgcj libtiff pam-devel libicu libicu-devel gettext-devel libaio-devel libaio libgcc libstdc++ libstdc++-devel unixODBC unixODBC-devel numactl-devel glibc-headers sudo bzip2 mlocate flex lrzsz sysstat lsof setuptool system-config-network-tui system-config-firewall-tui ntsysv ntp pv lz4 dos2unix unix2dos rsync dstat iotop innotop mytop telnet iftop expect cmake nc gnuplot screen xorg-x11-utils xorg-x11-xinit rdate bc expat-devel compat-expat1 tcpdump sysstat man nmap curl lrzsz elinks finger bind-utils traceroute mtr ntpdate zip unzip vim wget net-tools

modprobe br_netfilter
modprobe ip_conntrack
cat >>/etc/rc.sysinit<<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules
echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
chmod 755 /etc/sysconfig/modules/ip_conntrack.modules

Kernel tuning
cat > kubernetes.conf <<EOF
# vm.swappiness=0        : never use swap unless the system is OOM
# vm.overcommit_memory=1 : do not check whether physical memory is sufficient
# vm.panic_on_oom=0      : do not panic on OOM
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
#Distribute to all nodes
for i in k8s-02 k8s-03 k8s-04 k8s-05
do
    scp kubernetes.conf root@$i:/etc/sysctl.d/
    ssh root@$i sysctl -p /etc/sysctl.d/kubernetes.conf
    ssh root@$i "echo 1 > /proc/sys/net/ipv4/ip_forward"   # quoted so the redirection runs remotely
done


bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets traversing a Linux bridge. For example, with net.bridge.bridge-nf-call-iptables=1, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. The common options are:

net.bridge.bridge-nf-call-arptables: whether to filter bridged ARP packets in the arptables FORWARD chain
net.bridge.bridge-nf-call-ip6tables: whether to filter IPv6 packets in the ip6tables chains
net.bridge.bridge-nf-call-iptables: whether to filter IPv4 packets in the iptables chains
net.bridge.bridge-nf-filter-vlan-tagged: whether to filter VLAN-tagged packets in iptables/arptables
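With br_netfilter loaded, the key settings can be spot-checked in one command (all three should print 1 given the kubernetes.conf above):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward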

Installation on all nodes

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
yum install ipset -y
yum install ipvsadm -y

timedatectl set-timezone Asia/Shanghai
#Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
#Restart services that depend on system time
systemctl restart rsyslog 
systemctl restart crond

Upgrade the kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
#installs the latest mainline kernel by default
yum --enablerepo=elrepo-kernel install kernel-ml
#make the new kernel the default boot entry
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
#confirm the default kernel now points at the newly installed one
grubby --default-kernel
#the output should show the upgraded kernel
reboot

yum update -y 
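After the reboot, confirm the running kernel matches what grubby reported (a sketch; the exact version depends on what elrepo shipped):

uname -r   # e.g. a 5.x kernel-ml version, not the stock 3.10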

Installing containerd

[root@k8s-03 ~]# rpm -qa | grep libseccomp
libseccomp-2.3.1-4.el7.x86_64
[root@k8s-03 ~]# rpm -e libseccomp-2.3.1-4.el7.x86_64 --nodeps

wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm 

# Create the containerd config directory
mkdir /etc/containerd -p
# Download containerd
wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

tar zxvf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /   # extract straight into / so the files land in their final directories
etc/
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/crictl
usr/local/bin/ctd-decoder
usr/local/bin/ctr
usr/local/bin/containerd-shim
usr/local/bin/containerd
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/critest
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/containerd-stress
opt/
opt/containerd/
opt/containerd/cluster/
opt/containerd/cluster/version
opt/containerd/cluster/gce/
opt/containerd/cluster/gce/cni.template
opt/containerd/cluster/gce/env
opt/containerd/cluster/gce/configure.sh
opt/containerd/cluster/gce/cloud-init/
opt/containerd/cluster/gce/cloud-init/node.yaml
opt/containerd/cluster/gce/cloud-init/master.yaml
opt/cni/
opt/cni/bin/
opt/cni/bin/firewall
opt/cni/bin/portmap
opt/cni/bin/host-local
opt/cni/bin/ipvlan
opt/cni/bin/host-device
opt/cni/bin/sbr
opt/cni/bin/vrf
opt/cni/bin/static
opt/cni/bin/tuning
opt/cni/bin/bridge
opt/cni/bin/macvlan
opt/cni/bin/bandwidth
opt/cni/bin/vlan
opt/cni/bin/dhcp
opt/cni/bin/loopback
opt/cni/bin/ptp

After installation, add /usr/local/bin to the PATH (see https://www.orchome.com/16586 for reference):
echo "export PATH=$PATH:/usr/local/bin" >> /etc/profile && . /etc/profile

containerd config default > /etc/containerd/config.toml

# Replace the default pause image address
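The replacement command itself is elided here; a common sketch (assuming containerd 1.6's generated config layout and the Aliyun mirror) is:

# Point the sandbox (pause) image at a reachable mirror
sed -i 's@sandbox_image = ".*"@sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"@' /etc/containerd/config.toml
# Use the systemd cgroup driver, matching the kubelet config later in this document
sed -i 's@SystemdCgroup = false@SystemdCgroup = true@' /etc/containerd/config.toml
# Start containerd so kubeadm can reach its CRI socket
systemctl daemon-reload && systemctl enable --now containerd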


apiserver high availability

#First add these hosts on top of the existing ones; this only needs to run on the master nodes
cat >>/etc/hosts<< EOF
192.168.100.18 k8s-master-01
192.168.100.19 k8s-master-02
192.168.100.20 k8s-master-03
192.168.100.199  apiserver.cc100.cn
EOF

#Compile and install nginx
#Install build dependencies
yum install pcre pcre-devel openssl openssl-devel gcc gcc-c++ automake autoconf libtool make wget vim lrzsz -y
wget https://nginx.org/download/nginx-1.20.2.tar.gz
tar xf nginx-1.20.2.tar.gz
cd nginx-1.20.2/
useradd nginx -s /sbin/nologin -M
./configure --prefix=/opt/nginx/ --with-pcre --with-http_ssl_module --with-http_stub_status_module --with-stream --with-http_gzip_static_module
make  &&  make install  

#Manage nginx with systemctl and enable it at boot
cat >/usr/lib/systemd/system/nginx.service<<EOF
# /usr/lib/systemd/system/nginx.service
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target sshd-keygen.service
[Service]
Type=forking
ExecStartPre=/opt/nginx/sbin/nginx -t -c /opt/nginx/conf/nginx.conf
ExecStart=/opt/nginx/sbin/nginx -c /opt/nginx/conf/nginx.conf
ExecReload=/opt/nginx/sbin/nginx -s reload
ExecStop=/opt/nginx/sbin/nginx -s stop
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
EOF
#Enable at boot
[root@k8s-01 nginx-1.20.2]# systemctl enable nginx --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.



vim /opt/nginx/conf/nginx.conf
user nginx nginx;
worker_processes auto;
events {
    worker_connections  20240;
    use epoll;
}
error_log /var/log/nginx_error.log info;
stream {
    upstream kube-servers {
        hash $remote_addr consistent;
        server k8s-master-01:6443 weight=5 max_fails=1 fail_timeout=3s;  # plain IPs work here too
        server k8s-master-02:6443 weight=5 max_fails=1 fail_timeout=3s;
        server k8s-master-03:6443 weight=5 max_fails=1 fail_timeout=3s;
    }
    server {
        listen 8443 reuseport;
        proxy_connect_timeout 3s;
        # increase the timeout
        proxy_timeout 3000s;
        proxy_pass kube-servers;
    }
}
#Distribute to the other master nodes
systemctl restart nginx    # pick up the new config locally first
for i in k8s-02 k8s-03
do
    scp /opt/nginx/conf/nginx.conf root@$i:/opt/nginx/conf/
    ssh root@$i systemctl restart nginx
done
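Each master should now answer on the load-balancer port (8443 as configured above):

ss -lntp | grep 8443   # expect an nginx listener on *:8443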


Modify the keepalived configuration

router_id: the node IP
mcast_src_ip: the node IP
virtual_ipaddress: the VIP
Adjust these to match your actual IPs.

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 192.168.100.20     # node IP; each master configures its own
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 192.168.100.20    # node IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        192.168.100.199   # the VIP
    }
}
EOF
#Write the health-check script
vim /etc/keepalived/check_port.sh
#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
        exit 1
fi
chmod +x /etc/keepalived/check_port.sh
Start keepalived

systemctl enable --now keepalived

Test that the VIP works

ping vip
ping apiserver.cc100.cn # our domain, as added to /etc/hosts above

Kubeadm installation and configuration

First we need to configure the kubeadm repo on k8s-01.

The kubeadm steps below only need to run on k8s-01.
Replace the repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5 --disableexcludes=kubernetes
systemctl enable --now kubelet

Print the default config
kubeadm config print init-defaults >kubeadm-init.yaml


Customize the following:
[root@k8s-01 ~]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.18               # k8s-01 IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    etcd-servers: https://192.168.100.18:2379,https://192.168.100.19:2379,https://192.168.100.20:2379
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
    local:
      dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.23.5
controlPlaneEndpoint: apiserver.cc100.cn:8443        # HA endpoint; the VIP domain here
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                                            # kube-proxy mode
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
cgroupDriver: systemd                   # cgroup driver
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s


Validate the syntax:
kubeadm init --config kubeadm-init.yaml --dry-run

Pre-pull the images:
kubeadm config images list --config kubeadm-init.yaml 
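If the nodes can reach the configured imageRepository directly, the images can also be pulled in place instead of importing the tar below (a sketch):

kubeadm config images pull --config kubeadm-init.yaml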

wget https://d.frps.cn/file/kubernetes/image/k8s_all_1.23.5.tar
ctr -n k8s.io i import k8s_all_1.23.5.tar

#Copy the tar to the other nodes
for i in k8s-02 k8s-03 k8s-04 k8s-05;do
    scp k8s_all_1.23.5.tar root@$i:/root/
    ssh root@$i ctr -n k8s.io i import k8s_all_1.23.5.tar
done

kubectl installation

kubeadm does not install or manage kubelet or kubectl, so you must ensure their versions match the Kubernetes version kubeadm deploys; otherwise there is a risk of version skew. One minor version of skew between the kubelet and the control plane is supported, but the kubelet version may never be newer than the API Server version.


#Download the 1.23.5 kubectl binary
[root@k8s-01 ~]# curl -LO https://dl.k8s.io/release/v1.23.5/bin/linux/amd64/kubectl
[root@k8s-01 ~]# chmod +x kubectl && mv kubectl /usr/local/bin/
#Check the kubectl version
[root@k8s-01 ~]# kubectl version --client --output=yaml
clientVersion:
  buildDate: "2022-03-16T15:58:47Z"
  compiler: gc
  gitCommit: c285e781331a3785a7f436042c65c5641ce8a9e9
  gitTreeState: clean
  gitVersion: v1.23.5
  goVersion: go1.17.8
  major: "1"
  minor: "23"
  platform: linux/amd64
#Copy kubectl to the other master nodes
for i in k8s-02 k8s-03;do
    scp /usr/local/bin/kubectl root@$i:/usr/local/bin/kubectl
    ssh root@$i chmod +x /usr/local/bin/kubectl
done

Initialization
 kubeadm init --config kubeadm-init.yaml  --upload-certs

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a \
        --control-plane --certificate-key b67f07fdb20c39157f9a052902f48a83e1624dc4682ea9cd6db752bf31028a8d

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a 

Save the token printed by init, and copy the kubeconfig for kubectl; the default kubeconfig path is ~/.kube/config
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
The init configuration is saved in a ConfigMap:
kubectl -n kube-system get cm kubeadm-config -o yaml


Check the node status
[root@k8s-01 kubernetes]# kubectl get node
NAME     STATUS   ROLES                  AGE    VERSION
k8s-01   Ready    control-plane,master   2m1s   v1.23.5


Master node configuration

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the components
yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5 --disableexcludes=kubernetes

systemctl enable --now kubelet

Join the other master nodes to the cluster
 kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a \
        --control-plane --certificate-key b67f07fdb20c39157f9a052902f48a83e1624dc4682ea9cd6db752bf31028a8d

After joining, the output shows:
...omitted...
To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-03 ~]#        mkdir -p $HOME/.kube
[root@k8s-03 ~]#         sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-03 ~]#         sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-03 ~]# 
[root@k8s-03 ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
k8s-01   Ready    control-plane,master   7m18s   v1.23.5
k8s-02   Ready    control-plane,master   65s     v1.23.5
k8s-03   Ready    control-plane,master   15s     v1.23.5

Node (worker) configuration

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubeadm-1.23.5  --disableexcludes=kubernetes
systemctl enable kubelet.service

kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a 
        
To get a fresh join command, generate it on the k8s-01 master:
kubeadm token create --print-join-command
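Optionally, give the joined workers a visible role in kubectl get nodes (the label key is the standard convention; the node names are from this setup):

kubectl label node k8s-04 node-role.kubernetes.io/worker=
kubectl label node k8s-05 node-role.kubernetes.io/worker=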

Reference link for version 2

https://i4t.com/5488.html
