Kubernetes | Cloud Native | kubeadm 1.25.7: single-master cluster + external etcd cluster + kubeadm init + cri-dockerd, quick file-based deployment


I. Preface and motivation

This post is a record of a Kubernetes cluster deployment. There are a lot of pieces to deploy and I was afraid of forgetting them, and a kubeadm deployment has many fiddly details, so I am keeping as detailed a record as possible here.

Three VMware virtual machines with IPs 192.168.123.11, 192.168.123.12, and 192.168.123.13. The .11 server is the master; .12 and .13 are the worker nodes, node1 and node2.

The operating system is CentOS 7 and the Kubernetes version is 1.25.7. Below is the output of a successful deployment (k is an alias for kubectl, set up in section VI):

[root@k8s-master ~]# k get no
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   3h55m   v1.25.7
k8s-node1    Ready    <none>          3h45m   v1.25.7
k8s-node2    Ready    <none>          3h45m   v1.25.7
[root@k8s-master ~]# k get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
[root@k8s-master ~]# k get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-kx5cd   3h45m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:abcdef   <none>              Approved,Issued
csr-p55mg   3h46m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:abcdef   <none>              Approved,Issued
csr-x2p5m   3h55m   kubernetes.io/kube-apiserver-client-kubelet   system:node:k8s-master    <none>              Approved,Issued
[root@k8s-master ~]# k get po -A 
NAMESPACE     NAME                                 READY   STATUS    RESTARTS      AGE
kube-system   coredns-c676cc86f-r5f89              1/1     Running   1 (46m ago)   52m
kube-system   coredns-c676cc86f-rmbt5              1/1     Running   1 (46m ago)   52m
kube-system   kube-apiserver-k8s-master            1/1     Running   4 (46m ago)   3h55m
kube-system   kube-controller-manager-k8s-master   1/1     Running   5 (46m ago)   3h55m
kube-system   kube-flannel-ds-hkbb4                1/1     Running   1 (46m ago)   48m
kube-system   kube-flannel-ds-p5whh                1/1     Running   1 (46m ago)   48m
kube-system   kube-flannel-ds-r2ltp                1/1     Running   1 (46m ago)   48m
kube-system   kube-proxy-6tcvm                     1/1     Running   3 (46m ago)   3h46m
kube-system   kube-proxy-bfjlz                     1/1     Running   3 (46m ago)   3h45m
kube-system   kube-proxy-m99t4                     1/1     Running   4 (46m ago)   3h55m
kube-system   kube-scheduler-k8s-master            1/1     Running   4 (46m ago)   3h55m

Note: this cluster uses the default one-year certificates, so it is not suitable for production as-is. Every step can be completed offline.

On extending certificate lifetimes, see the companion post 云原生|kubernetes|kubeadm部署的集群的100年证书 (CSDN blog).

Offline bundle download (Baidu Netdisk share, folder "kubeadm部署1.25.7"):
https://pan.baidu.com/s/1AG2qSw3zufs884afM38Ufg?pwd=7a97 (extraction code: 7a97)

II. Environment preparation before deploying the cluster

1. Server initialization

Most of the environment setup is handled by a script, shown below.

Note: the script hard-codes three IPs (in the custom hosts section at the end) that must be adjusted to your environment. It is run as: bash <script-name> <hostname>

For example, on the master node run bash os7init.sh k8s-master; on node1, i.e. 192.168.123.12, run bash os7init.sh k8s-node1.

Don't forget to drop kernel-lt-5.4.266-1.el7.elrepo.x86_64.rpm into /root first. After the script finishes, it is recommended to reboot the server.

#!/bin/bash
# init centos7:  ./centos7-init.sh <hostname>

# the script must be run as root
if [[ "$(whoami)" != "root" ]]; then
    echo "please run this script as root !" >&2
    exit 1
fi
echo -e "\033[31m the script only Support CentOS_7 x86_64 \033[0m"
echo -e "\033[31m system initialization script, Please Seriously. press ctrl+C to cancel \033[0m"

# only 64-bit systems are supported
platform=`uname -i`
if [ $platform != "x86_64" ];then
    echo "this script is only for 64bit Operating System !"
    exit 1
fi

if [ "$1" == "" ];then
    echo "The host name is empty."
    exit 1
else
	hostnamectl  --static set-hostname  $1
	hostnamectl  set-hostname  $1
fi

cat << EOF
+---------------------------------------+
|   your system is CentOS 7 x86_64      |
|           start optimizing            |
+---------------------------------------+
EOF
sleep 1

# install required tools and packages
yum_update(){
yum update -y
yum install -y nmap unzip wget vim lsof xz net-tools iptables-services ntpdate ntp-doc psmisc lrzsz expect conntrack telnet

}

# set timezone and time synchronization
zone_time(){
timedatectl set-timezone Asia/Shanghai
/usr/sbin/ntpdate 0.cn.pool.ntp.org > /dev/null 2>&1
/usr/sbin/hwclock --systohc
/usr/sbin/hwclock -w
cat > /var/spool/cron/root << EOF
10 0 * * * /usr/sbin/ntpdate 0.cn.pool.ntp.org > /dev/null 2>&1
* * * * */1 /usr/sbin/hwclock -w > /dev/null 2>&1
EOF
chmod 600 /var/spool/cron/root
/sbin/service crond restart
sleep 1
}

# raise the open-file limits
limits_config(){
cat > /etc/rc.d/rc.local << EOF
#!/bin/bash

touch /var/lock/subsys/local
ulimit -SHn 1024000
EOF

sed -i "/^ulimit -SHn.*/d" /etc/rc.d/rc.local
echo "ulimit -SHn 1024000" >> /etc/rc.d/rc.local

sed -i "/^ulimit -s.*/d" /etc/profile
sed -i "/^ulimit -c.*/d" /etc/profile
sed -i "/^ulimit -SHn.*/d" /etc/profile

cat >> /etc/profile << EOF
ulimit -c unlimited
ulimit -s unlimited
ulimit -SHn 1024000
EOF

source /etc/profile
ulimit -a
cat /etc/profile | grep ulimit

if [ ! -f "/etc/security/limits.conf.bak" ]; then
    cp /etc/security/limits.conf /etc/security/limits.conf.bak
fi

cat > /etc/security/limits.conf << EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

if [ ! -f "/etc/security/limits.d/20-nproc.conf.bak" ]; then
    cp /etc/security/limits.d/20-nproc.conf /etc/security/limits.d/20-nproc.conf.bak
fi

cat > /etc/security/limits.d/20-nproc.conf << EOF
*          soft    nproc     409600
root       soft    nproc     unlimited
EOF

sleep 1
}

# tune kernel parameters
sysctl_config(){
if [ ! -f "/etc/sysctl.conf.bak" ]; then
    cp /etc/sysctl.conf /etc/sysctl.conf.bak
fi

#add
cat > /etc/sysctl.conf << EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 5
net.ipv4.tcp_fin_timeout = 10
# net.ipv4.tcp_tw_recycle is omitted: it was removed in kernel 4.12+ and would make sysctl -p complain on the 5.4 kernel installed below
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_max_tw_buckets = 60000
net.ipv4.tcp_max_orphans = 32768
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_wmem = 4096 16384 13107200
net.ipv4.tcp_rmem = 4096 87380 17476000
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.ip_forward = 1
net.ipv4.route.gc_timeout = 100
net.core.somaxconn = 32768
net.core.netdev_max_backlog = 32768
net.nf_conntrack_max = 6553500
net.netfilter.nf_conntrack_max = 6553500
net.netfilter.nf_conntrack_tcp_timeout_established = 180
vm.overcommit_memory = 1
vm.swappiness = 1
fs.file-max = 1024000
EOF

#reload sysctl
/sbin/sysctl -p
sleep 1
}

# set the locale to en_US.UTF-8
LANG_config(){
echo "LANG=\"en_US.UTF-8\"">/etc/locale.conf
source  /etc/locale.conf
}


# disable SELinux
selinux_config(){
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
sleep 1
}

# logging: make sure systemd-journald is running
log_config(){
systemctl start systemd-journald
systemctl status systemd-journald
}


# stop and disable firewalld
firewalld_config(){
/usr/bin/systemctl stop  firewalld.service
/usr/bin/systemctl disable  firewalld.service
}


# tune sshd_config (note: the call is commented out in main below)
sshd_config(){
if [ ! -f "/etc/ssh/sshd_config.bak" ]; then
    cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
fi

cat >/etc/ssh/sshd_config<<EOF
Port 22
AddressFamily inet
ListenAddress 0.0.0.0
Protocol 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
SyslogFacility AUTHPRIV
PermitRootLogin yes
MaxAuthTries 6
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile	.ssh/authorized_keys
PasswordAuthentication yes
ChallengeResponseAuthentication no
UsePAM yes
UseDNS no
X11Forwarding yes
UsePrivilegeSeparation sandbox
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
Subsystem       sftp    /usr/libexec/openssh/sftp-server
EOF
/sbin/service sshd restart
}


# disable IPv6
ipv6_config(){
echo "NETWORKING_IPV6=no">/etc/sysconfig/network
echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
echo "127.0.0.1   localhost   localhost.localdomain">/etc/hosts
#sed -i 's/IPV6INIT=yes/IPV6INIT=no/g' /etc/sysconfig/network-scripts/ifcfg-enp0s8


for line in $(ls /etc/sysconfig/network-scripts/ifcfg-*)
do
    if [ -f $line ]; then
        sed -i 's/IPV6INIT=yes/IPV6INIT=no/g' $line
        echo $line
    fi
done
}


# history format: log every command with timestamp, IP, and user
history_config(){
export HISTFILESIZE=10000000
export HISTSIZE=1000000
export PROMPT_COMMAND="history -a"
export HISTTIMEFORMAT="%Y-%m-%d_%H:%M:%S "
##export HISTTIMEFORMAT="{\"TIME\":\"%F %T\",\"HOSTNAME\":\"\$HOSTNAME\",\"LI\":\"\$(who -u am i 2>/dev/null| awk '{print \$NF}'|sed -e 's/[()]//g')\",\"LU\":\"\$(who am i|awk '{print \$1}')\",\"NU\":\"\${USER}\",\"CMD\":\""
cat >>/etc/bashrc<<EOF
alias vi='vim'
HISTDIR='/var/log/command.log'
if [ ! -f \$HISTDIR ];then
touch \$HISTDIR
chmod 666 \$HISTDIR
fi
export HISTTIMEFORMAT="{\"TIME\":\"%F %T\",\"IP\":\"\$(ip a | grep -E '192.168|172' | head -1 | awk '{print \$2}' | cut -d/ -f1)\",\"LI\":\"\$(who -u am i 2>/dev/null| awk '{print \$NF}'|sed -e 's/[()]//g')\",\"LU\":\"\$(who am i|awk '{print \$1}')\",\"NU\":\"\${USER}\",\"CMD\":\""
export PROMPT_COMMAND='history 1|tail -1|sed "s/^[ ]\+[0-9]\+  //"|sed "s/$/\"}/">> /var/log/command.log'
EOF
source /etc/bashrc
}

# service tweaks: disable postfix, make rc.local executable
service_config(){
/usr/bin/systemctl stop postfix.service
/usr/bin/systemctl disable postfix.service
chmod +x /etc/rc.local
chmod +x /etc/rc.d/rc.local
#ls -l /etc/rc.d/rc.local
}

# vim settings (the commented lines below are optional extras for /root/.vimrc)
vim_config(){
cat > /root/.vimrc << EOF
set history=1000

EOF

#autocmd InsertLeave * se cul
#autocmd InsertLeave * se nocul
#set nu
#set bs=2
#syntax on
#set laststatus=2
#set tabstop=4
#set go=
#set ruler
#set showcmd
#set cmdheight=1
#hi CursorLine   cterm=NONE ctermbg=blue ctermfg=white guibg=blue guifg=white
#set hls
#set cursorline
#set ignorecase
#set hlsearch
#set incsearch
#set helplang=cn
}


# done
done_ok(){
touch /var/log/init-ok
cat << EOF
+-------------------------------------------------+
|               optimizer is done                 |
|   it's recommond to restart this server !       |
|             Please Reboot system                |
+-------------------------------------------------+
EOF
}

update_kernel(){
yum install /root/kernel-lt-5.4.266-1.el7.elrepo.x86_64.rpm -y
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
yum install ipvsadm ipset sysstat conntrack libseccomp -y
cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
overlay
br_netfilter
EOF

systemctl restart systemd-modules-load.service

}
# write the cluster's /etc/hosts entries
zidingyihosts(){
cat >/etc/hosts<<EOF
127.0.0.1   localhost   localhost.localdomain
192.168.123.11 k8s-master
192.168.123.12 k8s-node1
192.168.123.13 k8s-node2
EOF
}
# main
main(){
    yum_update
    zone_time
    limits_config
    sysctl_config
    LANG_config
    selinux_config
    log_config
    firewalld_config
    #sshd_config
    ipv6_config
    history_config
    service_config
    vim_config
    done_ok
    update_kernel
    zidingyihosts
}
main
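After the reboot, a quick sanity check that the new kernel and the IPVS modules took effect (plain commands, nothing specific to the bundle):

uname -r                                  # should print 5.4.266-1.el7.elrepo.x86_64
lsmod | grep -E 'ip_vs|nf_conntrack'      # the IPVS modules should be loaded
sysctl net.ipv4.ip_forward                # should be 1
getenforce                                # should be Permissive or Disabled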

2. Quick passwordless-SSH setup script

Just don't forget to put your real password into it.

#!/usr/bin/expect -f

# parse arguments
set ip_list [lrange $argv 0 end]
set user "root"
set password "your-root-password"

# generate the key pair (handling all interactive prompts)
spawn ssh-keygen -t rsa -b 4096 -N ""

expect {
    "Enter file in which to save the key" { 
        send "\r"; 
        exp_continue 
    }
    "Overwrite (y/n)" { 
        send "y\r"; 
        exp_continue 
    }
    "Are you sure" { 
        send "yes\r"; 
        exp_continue 
    }
    "Enter passphrase (empty for no passphrase)" { 
        send "\r"; 
        exp_continue 
    }
    "Enter same passphrase again" { 
        send "\r"; 
        exp_continue 
    }
#    eof {
#        exit 1
#    }
}

# push the public key to each host (with timeout handling)
foreach ip $ip_list {
    if {![regexp {^([0-9]{1,3}\.){3}[0-9]{1,3}$} $ip]} {
        puts "⚠️ 跳过无效IP: $ip"
        continue
    }

    puts "🔧 正在部署公钥到 $ip..."
    spawn ssh-copy-id  $user@$ip

    expect {
        "(yes/no)" { 
            send "yes\r"; 
            exp_continue 
        }
        "*assword:" { 
            send "$password\r"; 
            exp_continue 
        }
        "success" { 
            puts "✅ 公钥部署成功: $ip"; 
            close $spawn_id;
            continue 
        }
        eof { 
            puts "❌ 连接失败: $ip (密码错误或网络问题)";
            close $spawn_id;
            continue 
        }
        timeout { 
            puts "⏳ 连接超时: $ip (5秒未响应)";
            close $spawn_id;
            continue 
        }
    }
}
puts "\n  所有服务器免密登录部署完成!"

The script is run as: expect expect.sh 192.168.123.11 192.168.123.12 192.168.123.13

Typical output follows. Note that ssh-copy-id never prints the literal string "success" that the script matches on, so the eof branch fires and the ❌ line appears even when the key was actually installed; verify with ssh, and simply re-run the script for any host that genuinely failed:

❌ connection failed: 192.168.123.12 (wrong password or network problem)
🔧 deploying public key to 192.168.123.13...
spawn ssh-copy-id root@192.168.123.13
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.123.13's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.123.13'"
and check to make sure that only the key(s) you wanted were added.

❌ connection failed: 192.168.123.13 (wrong password or network problem)

  passwordless-login deployment finished for all servers!
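A quick way to confirm the keys really work (no assumptions beyond the three IPs used throughout this post):

for ip in 192.168.123.11 192.168.123.12 192.168.123.13; do
    ssh -o BatchMode=yes root@$ip hostname   # BatchMode makes ssh fail instead of prompting if the key is missing
done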

3. Installing Docker and the etcd cluster with Ansible

Don't forget to put cfssl.tar.gz and docker-27.5.0.tgz into /root on the master, and before anything else install Ansible on the master: yum install ansible -y

docker-27.5.0.tgz stays in /root; it does not need to be extracted.

cd into the ansible-deployment-docker directory and run ansible-playbook -i hosts multi-deploy-docker.yaml to install the Docker service on all nodes.

The tail of the output should look like this:

       " Total Memory: 3.806GiB", 
        " Name: k8s-node2", 
        " ID: 7d2a48be-8b73-4380-a8ca-faabd8877532", 
        " Docker Root Dir: /var/lib/docker", 
        " Debug Mode: false", 
        " Experimental: false", 
        " Insecure Registries:", 
        "  127.0.0.0/8", 
        " Registry Mirrors:", 
        "  http://bc437cce.m.daocloud.io/", 
        " Live Restore Enabled: false", 
        " Product License: Community Engine"
    ]
}

PLAY RECAP *********************************************************************************************************************************************************************************************************************************************************************************************************************************
192.168.123.11             : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.123.12             : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.123.13             : ok=9    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
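To spot-check the result across all three nodes, an ad-hoc Ansible command works (assuming the same hosts inventory the playbook used):

ansible -i hosts all -m command -a "docker --version"          # each host should report 27.5.0
ansible -i hosts all -m command -a "systemctl is-active docker"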

Reference: 利用ansible的角色快速批量一键部署基础docker环境 (CSDN blog).

For the etcd cluster:

cfssl.tar.gz and etcd-v3.4.9-linux-amd64.tar.gz need to be in /root.

Edit the hosts inventory file and the group_vars/all.yml file in the playbook directory so they match your three node IPs.

cd into the ansible-deployment-etcd directory and run ansible-playbook -i hosts deployment-etcd-cluster.yaml to bring up the etcd cluster.

The final output looks roughly like this:

TASK [etcd : debug] ************************************************************************************************************************************************************************************************************************************************************************************************************************
ok: [192.168.123.11] => {
    "status.stdout_lines": [
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+", 
        "|        ID        | STATUS  |  NAME  |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |", 
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+", 
        "| 8ef8187eebb59092 | started | etcd-2 | https://192.168.123.12:2380 | https://192.168.123.12:2379 |      false |", 
        "| badb2f4024bbdf87 | started | etcd-3 | https://192.168.123.13:2380 | https://192.168.123.13:2379 |      false |", 
        "| d9fb26556fe7b4a5 | started | etcd-1 | https://192.168.123.11:2380 | https://192.168.123.11:2379 |      false |", 
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+"
    ]
}
ok: [192.168.123.12] => {
    "status.stdout_lines": [
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+", 
        "|        ID        | STATUS  |  NAME  |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |", 
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+", 
        "| 8ef8187eebb59092 | started | etcd-2 | https://192.168.123.12:2380 | https://192.168.123.12:2379 |      false |", 
        "| badb2f4024bbdf87 | started | etcd-3 | https://192.168.123.13:2380 | https://192.168.123.13:2379 |      false |", 
        "| d9fb26556fe7b4a5 | started | etcd-1 | https://192.168.123.11:2380 | https://192.168.123.11:2379 |      false |", 
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+"
    ]
}
ok: [192.168.123.13] => {
    "status.stdout_lines": [
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+", 
        "|        ID        | STATUS  |  NAME  |         PEER ADDRS          |        CLIENT ADDRS         | IS LEARNER |", 
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+", 
        "| 8ef8187eebb59092 | started | etcd-2 | https://192.168.123.12:2380 | https://192.168.123.12:2379 |      false |", 
        "| badb2f4024bbdf87 | started | etcd-3 | https://192.168.123.13:2380 | https://192.168.123.13:2379 |      false |", 
        "| d9fb26556fe7b4a5 | started | etcd-1 | https://192.168.123.11:2380 | https://192.168.123.11:2379 |      false |", 
        "+------------------+---------+--------+-----------------------------+-----------------------------+------------+"
    ]
}

PLAY RECAP *********************************************************************************************************************************************************************************************************************************************************************************************************************************
192.168.123.11             : ok=11   changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.123.12             : ok=11   changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.123.13             : ok=11   changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
localhost                  : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
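Cluster health can also be queried directly with etcdctl. A sketch, assuming the playbook installed etcd under /opt/etcd (the certificate paths are the same ones copied in section III below):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --endpoints=https://192.168.123.11:2379,https://192.168.123.12:2379,https://192.168.123.13:2379 \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  endpoint health    # all three endpoints should report healthy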

Reference: centos7操作系统---ansible剧本离线快速部署etcd集群 (CSDN blog).

4. Install the CNI plugins (on all three nodes)

Place cni-plugins-linux-amd64-v1.6.2.tgz in /root, then:

mkdir -p /opt/cni/bin
tar xvf cni-plugins-linux-amd64-v1.6.2.tgz -C /opt/cni/bin
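A quick check that the plugins landed where kubelet and cri-dockerd will look for them:

ls /opt/cni/bin    # should list bridge, host-local, loopback, portmap, etc.; the flannel binary is copied in later by the flannel DaemonSet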

5. Deploying the cri-dockerd shim (on all three nodes)

Download from:

https://github.com/Mirantis/cri-dockerd/releases

or just use the copy in the offline bundle; either way works. Extract and install it with: tar zxvf cri-dockerd-0.3.16.amd64.tgz && mv cri-dockerd/cri-dockerd /usr/bin/ && chmod a+x /usr/bin/cri-dockerd

Create the systemd unit file:

cat >/usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
#Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --cri-dockerd-root-directory=/var/lib/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10 --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

Start cri-dockerd:

systemctl enable --now cri-docker

Then check the status. It should be active (running), and in particular the log must contain "Connecting to docker on the Endpoint unix:///var/run/docker.sock":

[root@k8s-node1 ~]# systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/cri-docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2025-03-22 23:16:27 CST; 911ms ago
     Docs: https://docs.mirantis.com
 Main PID: 62457 (cri-dockerd)
    Tasks: 9
   Memory: 14.6M
   CGroup: /system.slice/cri-docker.service
           └─62457 /usr/bin/cri-dockerd --cri-dockerd-root-directory=/var/lib/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10 --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d

Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Start docker client with request timeout 0s"
Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Hairpin mode is set to none"
Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Loaded network plugin cni"
Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Docker cri networking managed by network plugin cni"
Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Setting cgroupDriver systemd"
Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Mar 22 23:16:27 k8s-node1 cri-dockerd[62457]: time="2025-03-22T23:16:27+08:00" level=info msg="Start cri-dockerd grpc backend"
Mar 22 23:16:27 k8s-node1 systemd[1]: Started CRI Interface for Docker Application Container Engine.

III. Handling the etcd cluster certificates

(Create the directory on all three nodes; after copying the files on the master, scp them to the worker nodes .12 and .13.)


mkdir -p /etc/kubernetes/pki/etcd/ 
cp /opt/etcd/ssl/ca.pem /etc/kubernetes/pki/etcd/
cp /opt/etcd/ssl/server.pem  /etc/kubernetes/pki/etcd/apiserver-etcd-client.pem
cp /opt/etcd/ssl/server-key.pem  /etc/kubernetes/pki/etcd/apiserver-etcd-client-key.pem
scp /etc/kubernetes/pki/etcd/*  k8s-node1:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/*  k8s-node2:/etc/kubernetes/pki/etcd/
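To confirm the files landed on the workers (nothing here beyond the paths used above):

for n in k8s-node1 k8s-node2; do
    ssh $n 'ls -l /etc/kubernetes/pki/etcd/'    # expect ca.pem and the two apiserver-etcd-client files
done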

IV. Install the kubeadm, kubelet, and kubectl RPMs and initialize the cluster

Installing these three packages with yum is straightforward, so I won't belabor it; install them on all three nodes.

Before installing, configure a yum repo as follows:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
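For completeness, a version-pinned install matching this post (a sketch; -0 is the release suffix these RPMs carry in the el7 repo), plus enabling kubelet, which also avoids the warning that kubeadm join prints later:

yum install -y kubeadm-1.25.7-0 kubelet-1.25.7-0 kubectl-1.25.7-0
systemctl enable kubelet    # kubeadm itself starts kubelet during init/join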

All three nodes load the offline images:

for i in `ls /root/kubeadm的镜像/*.tar`; do docker load < $i; done

Once everything is installed, initialize the cluster using the config file below. It mainly contains four IPs that must be changed to match your environment; in addition, keep imagePullPolicy: IfNotPresent (so the preloaded offline images are used) and make sure name: k8s-master under nodeRegistration (and hostnameOverride in the kube-proxy section) is the master's exact hostname. Nothing else needs changing. The init command is: kubeadm init --config=kubeadm-init.yaml

cat >/root/kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: "0"
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.123.11
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  external:
    endpoints:     # external etcd cluster endpoints
    - https://192.168.123.11:2379
    - https://192.168.123.12:2379
    - https://192.168.123.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem
    certFile: /etc/kubernetes/pki/etcd/apiserver-etcd-client.pem
    keyFile: /etc/kubernetes/pki/etcd/apiserver-etcd-client-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.25.7
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: "10.244.0.0/16"
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: "k8s-master"
iptables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: ""
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
udpIdleTimeout: 0s
winkernel:
  enableDSR: false
  networkName: ""
  sourceVip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF
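If you want to exercise the file before committing, kubeadm has a dry-run mode that validates the config and prints the objects it would create without persisting anything:

kubeadm init --config=kubeadm-init.yaml --dry-run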

Initialization is very fast, usually ten-odd seconds; correct output ends like this:

[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.123.11:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:6de53fdadf43dd8197b311b5eacb99fd92094cf6e43cbcb67c549ea81060f0e0

Following the printed instructions, run on the master:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

And on the worker nodes:


kubeadm join 192.168.123.11:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:6de53fdadf43dd8197b311b5eacb99fd92094cf6e43cbcb67c549ea81060f0e0

Correct output on a worker node looks like:

[root@k8s-node1 ~]# kubeadm join 192.168.123.11:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:6de53fdadf43dd8197b311b5eacb99fd92094cf6e43cbcb67c549ea81060f0e0
[preflight] Running pre-flight checks
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster

V. Installing the Flannel network plugin

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The main change is the namespace, done with: sed -i "s@namespace: kube-flannel@namespace: kube-system@g" kube-flannel.yml

Also delete the snippet that creates the kube-flannel Namespace.

The modified file:

cat >kube-flannel.yml <<EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: ghcr.io/flannel-io/flannel:v0.26.4
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: ghcr.io/flannel-io/flannel:v0.26.4
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
EOF
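Apply it and watch the DaemonSet come up, one pod per node:

kubectl apply -f kube-flannel.yml
kubectl -n kube-system get po -l app=flannel -w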

VI. kubectl command completion

Install bash-completion first:

yum install bash-completion -y

Then run the following:

source /usr/share/bash-completion/bash_completion
echo "source <(kubectl completion bash)" >>/etc/profile
echo "source /usr/share/bash-completion/bash_completion" >>/etc/profile
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null

echo "alias k=kubectl">>/etc/profile
echo "complete -F __start_kubectl k">>/etc/profile
source /etc/profile

VII. Check that the cluster works

Deploy a throwaway nginx:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80


Note: in the capture below, etcd-0 (192.168.123.11) happened to be momentarily unreachable; if you see the same, check the etcd service on that node.

[root@k8s-master ~]# k get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                               ERROR
etcd-0               Unhealthy   Get "https://192.168.123.11:2379/health": dial tcp 192.168.123.11:2379: connect: connection refused   
scheduler            Healthy     ok                                                                                                    
controller-manager   Healthy     ok                                                                                                    
etcd-2               Healthy     {"health":"true"}                                                                                     
etcd-1               Healthy     {"health":"true"}                                                                                     
[root@k8s-master ~]# k get no 
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   6h29m   v1.25.7
k8s-node1    Ready    <none>          6h20m   v1.25.7
k8s-node2    Ready    <none>          6h19m   v1.25.7
[root@k8s-master ~]# k get po -A -owide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS      AGE     IP               NODE         NOMINATED NODE   READINESS GATES
default       nginx-deployment-7fb96c846b-dk476    1/1     Running   0             67s     10.244.1.7       k8s-node1    <none>           <none>
default       nginx-deployment-7fb96c846b-gtqcz    1/1     Running   0             67s     10.244.1.6       k8s-node1    <none>           <none>
default       nginx-deployment-7fb96c846b-rdmtl    1/1     Running   0             67s     10.244.2.7       k8s-node2    <none>           <none>
kube-system   coredns-c676cc86f-r5f89              1/1     Running   2 (16m ago)   3h26m   10.244.2.6       k8s-node2    <none>           <none>
kube-system   coredns-c676cc86f-rmbt5              1/1     Running   2 (16m ago)   3h26m   10.244.1.5       k8s-node1    <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   6 (16m ago)   6h29m   192.168.123.11   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   1 (16m ago)   6h29m   192.168.123.11   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-hkbb4                1/1     Running   2 (16m ago)   3h22m   192.168.123.13   k8s-node2    <none>           <none>
kube-system   kube-flannel-ds-p5whh                1/1     Running   2 (16m ago)   3h22m   192.168.123.12   k8s-node1    <none>           <none>
kube-system   kube-flannel-ds-r2ltp                1/1     Running   2 (16m ago)   3h22m   192.168.123.11   k8s-master   <none>           <none>
kube-system   kube-proxy-6tcvm                     1/1     Running   4 (16m ago)   6h20m   192.168.123.12   k8s-node1    <none>           <none>
kube-system   kube-proxy-bfjlz                     1/1     Running   4 (16m ago)   6h19m   192.168.123.13   k8s-node2    <none>           <none>
kube-system   kube-proxy-m99t4                     1/1     Running   5 (16m ago)   6h29m   192.168.123.11   k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   1 (16m ago)   6h29m   192.168.123.11   k8s-master   <none>           <none>
