Setting Up a K8s Cluster Environment

Modify the hosts file

[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.193.128  master.example.com  master
192.168.193.129  node1.example.com   node1
192.168.193.130  node2.example.com   node2
[root@master ~]#
[root@master ~]# scp /etc/hosts root@192.168.193.129:/etc/hosts
The authenticity of host '192.168.193.129 (192.168.193.129)' can't be established.
ECDSA key fingerprint is SHA256:tgf2yiFV2TrjOQEd9a9e9dFRgo/eHo0oKloKyIVulaI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.193.129' (ECDSA) to the list of known hosts.
root@192.168.193.129's password:
hosts                                                                100%  290     3.4KB/s   00:00
[root@master ~]# scp /etc/hosts root@192.168.193.130:/etc/hosts
The authenticity of host '192.168.193.130 (192.168.193.130)' can't be established.
ECDSA key fingerprint is SHA256:ejuoTwhMCCJB4Hbr6FqIQ7kvTKXjoenEigo/IZkdwy4.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.193.130' (ECDSA) to the list of known hosts.
root@192.168.193.130's password:
hosts                                                                100%  290    30.3KB/s   00:00
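The same three entries are appended on every host. As a small sketch (host names and IPs are the ones used in this guide's lab), the entries can be generated from a single name:IP list so the files on all nodes stay consistent:

```shell
# Generate the cluster's /etc/hosts entries from one list.
nodes="master:192.168.193.128 node1:192.168.193.129 node2:192.168.193.130"
for entry in $nodes; do
  name=${entry%%:*}   # part before the colon
  ip=${entry##*:}     # part after the colon
  printf '%s  %s.example.com  %s\n' "$ip" "$name" "$name"
done
```

Redirecting this output with `>> /etc/hosts` (or piping it through `ssh`/`scp` as above) avoids hand-editing each node.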

Passwordless SSH

[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:14tkp514KCDLuvpk3RoRf7Jhhc4GYyaaPslrST7ipYM root@master.example.com
The key's randomart image is:
+---[RSA 3072]----+
|                 |
|       .         |
|  . * . .        |
| o + B .   .     |
|o   o X S + o    |
|o..o B * + B o   |
|+=+.= o . = =    |
|EO=. o   . .     |
|=**..            |
+----[SHA256]-----+
[root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.193.129)' can't be established.
ECDSA key fingerprint is SHA256:tgf2yiFV2TrjOQEd9a9e9dFRgo/eHo0oKloKyIVulaI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node1'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.193.130)' can't be established.
ECDSA key fingerprint is SHA256:ejuoTwhMCCJB4Hbr6FqIQ7kvTKXjoenEigo/IZkdwy4.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node2'"
and check to make sure that only the key(s) you wanted were added.

Time synchronization

master:

[root@master ~]# vim /etc/chrony.conf
local stratum 10
[root@master ~]# systemctl restart chronyd
[root@master ~]# systemctl enable chronyd
Created symlink /etc/systemd/system/multi-user.target.wants/chronyd.service → /usr/lib/systemd/system/chronyd.service.
[root@master ~]# hwclock -w

node1:

[root@node1 ~]# vim /etc/chrony.conf
server master.example.com  iburst
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# systemctl enable chronyd
[root@node1 ~]# hwclock -w

node2:

[root@node2 ~]# vim /etc/chrony.conf
server master.example.com  iburst
[root@node2 ~]# systemctl restart chronyd
[root@node2 ~]# systemctl enable chronyd
[root@node2 ~]# hwclock  -w
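Pulled together, the vim edits above amount to the minimal fragments below. Note that the `allow` directive on master is an assumption not shown in the transcript; a stock chrony.conf refuses client queries, so master usually needs it before the nodes can sync from it:

```
# master: /etc/chrony.conf (relevant lines)
local stratum 10            # keep serving local time even with no upstream source
allow 192.168.193.0/24      # assumed: permit the node subnet to use master as a server

# node1/node2: /etc/chrony.conf (replaces the default pool/server entry)
server master.example.com iburst
```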

Disable firewalld, SELinux, and postfix

master:

[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# vim /etc/selinux/config
SELINUX=disabled
[root@master ~]# setenforce 0
[root@master ~]# systemctl stop postfix
Failed to stop postfix.service: Unit postfix.service not loaded.
[root@master ~]# systemctl disable postfix
Failed to disable unit: Unit file postfix.service does not exist.

node1:

[root@node1 ~]# systemctl stop firewalld
[root@node1 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node1 ~]# vim /etc/selinux/config
SELINUX=disabled
[root@node1 ~]# setenforce 0
[root@node1 ~]# systemctl stop postfix
Failed to stop postfix.service: Unit postfix.service not loaded.
[root@node1 ~]# systemctl disable postfix
Failed to disable unit: Unit file postfix.service does not exist.

node2:

[root@node2 ~]# systemctl stop firewalld
[root@node2 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node2 ~]# vim /etc/selinux/config
SELINUX=disabled
[root@node2 ~]# setenforce 0
[root@node2 ~]# systemctl stop postfix
Failed to stop postfix.service: Unit postfix.service not loaded.
[root@node2 ~]# systemctl disable postfix
Failed to disable unit: Unit file postfix.service does not exist.

Disable the swap partition

master :

[root@master ~]# vim /etc/fstab
[root@master ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Jun 30 06:34:44 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cs-root     /                       xfs     defaults        0 0
UUID=e18bd9c4-e065-46b2-ba15-a668954e3087 /boot                   xfs     defaults        0 0
/dev/mapper/cs-home     /home                   xfs     defaults        0 0
#/dev/mapper/cs-swap     none                    swap    defaults        0 0
[root@master ~]# swapoff  -a

node1 :

[root@node1 ~]# vim /etc/fstab
[root@node1 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Sep 27 03:59:33 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cs-root     /                       xfs     defaults        0 0
UUID=1ae9c603-27ba-433c-a37b-8d2d043c2746 /boot                   xfs     defaults        0 0
/dev/mapper/cs-home     /home                   xfs     defaults        0 0
#/dev/mapper/cs-swap     none                    swap    defaults        0 0
[root@node1 ~]# swapoff  -a

node2 :

[root@node2 ~]# vim /etc/fstab
[root@node2 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Sep 27 03:59:33 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cs-root     /                       xfs     defaults        0 0
UUID=1ae9c603-27ba-433c-a37b-8d2d043c2746 /boot                   xfs     defaults        0 0
/dev/mapper/cs-home     /home                   xfs     defaults        0 0
#/dev/mapper/cs-swap     none                    swap    defaults        0 0
[root@node2 ~]# swapoff  -a
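Instead of commenting out the swap line in vim on each node, a sed one-liner can make the same edit. The sketch below runs against a scratch copy so the real /etc/fstab is untouched (on a node you would target /etc/fstab directly, ideally after backing it up):

```shell
# Demo on a scratch file; the pattern comments out any active swap entry.
fstab=$(mktemp)
printf '%s\n' '/dev/mapper/cs-root / xfs defaults 0 0' \
              '/dev/mapper/cs-swap none swap defaults 0 0' > "$fstab"
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' "$fstab"
cat "$fstab"
rm -f "$fstab"
```

Pairing the sed edit with `swapoff -a`, as above, disables swap both now and after the next reboot.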

Enable IP forwarding and adjust kernel parameters

master :

[root@master ~]# vim /etc/sysctl.d/k8s.conf
[root@master ~]# cat /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

[root@master ~]# modprobe   br_netfilter
[root@master ~]# sysctl -p  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

node1 :

[root@node1 ~]# vim /etc/sysctl.d/k8s.conf
[root@node1 ~]# modprobe   br_netfilter
[root@node1 ~]#
[root@node1 ~]# sysctl -p  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

node2 :

[root@node2 ~]# vim /etc/sysctl.d/k8s.conf
[root@node2 ~]# modprobe   br_netfilter
[root@node2 ~]# sysctl -p  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
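The same three-line file can also be written non-interactively instead of via vim. The sketch below writes to a scratch path for demonstration; on the nodes the target would be /etc/sysctl.d/k8s.conf, followed by `modprobe br_netfilter` and `sysctl -p` as above:

```shell
# Write the k8s sysctl settings in one shot (scratch path for the demo).
conf=$(mktemp)
cat > "$conf" << 'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat "$conf"
rm -f "$conf"
```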

Configure IPVS support

master :

[root@master ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@master ~]# lsmod | grep -e ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          172032  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
[root@master ~]# reboot

node1 :

[root@node1 ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@node1 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@node1 ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@node1 ~]# lsmod | grep -e ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          172032  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
[root@node1 ~]# reboot

node2 :

[root@node2 ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@node2 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@node2 ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@node2 ~]# lsmod | grep -e ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          172032  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
[root@node2 ~]# reboot
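After the reboot, a quick sanity check is to count the ip_vs modules in `lsmod`. The logic is shown below against a captured sample so it runs anywhere; on a node you would pipe the live `lsmod` output instead:

```shell
# Count modules whose names start with ip_vs; 4 are expected
# (ip_vs plus the rr/wrr/sh schedulers loaded by ipvs.modules).
sample='ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr'
printf '%s\n' "$sample" | grep -c '^ip_vs'   # prints 4
```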

Install Docker

Switch to a domestic mirror source
master :

[root@master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
--2022-11-17 15:32:29--  https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 119.96.90.238, 119.96.90.236, 119.96.90.242, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|119.96.90.238|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2495 (2.4K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

/etc/yum.repos.d/CentOS-B 100%[==================================>]   2.44K  --.-KB/s    in 0.02s

2022-11-17 15:32:29 (104 KB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2495/2495]

[root@master yum.repos.d]# dnf -y install epel-release
[root@master yum.repos.d]#  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
--2022-11-17 15:36:12--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 119.96.90.242, 119.96.90.243, 119.96.90.241, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|119.96.90.242|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2081 (2.0K) [application/octet-stream]
Saving to: ‘docker-ce.repo’

docker-ce.repo            100%[==================================>]   2.03K  --.-KB/s    in 0.01s

2022-11-17 15:36:12 (173 KB/s) - ‘docker-ce.repo’ saved [2081/2081]

[root@master yum.repos.d]# yum list | grep docker
ansible-collection-community-docker.noarch                        2.6.0-1.el8                                                epel
containerd.io.x86_64                                              1.6.9-3.1.el8                                              docker-ce-stable
docker-ce.x86_64                                                  3:20.10.21-3.el8                                           docker-ce-stable
docker-ce-cli.x86_64                                              1:20.10.21-3.el8                                           docker-ce-stable
docker-ce-rootless-extras.x86_64                                  20.10.21-3.el8                                             docker-ce-stable
docker-compose-plugin.x86_64                                      2.12.2-3.el8                                               docker-ce-stable
docker-scan-plugin.x86_64                                         0.21.0-3.el8                                               docker-ce-stable
pcp-pmda-docker.x86_64                                            5.3.1-5.el8                                                AppStream
podman-docker.noarch                                              3.3.1-9.module_el8.5.0+988+b1f0b741                        AppStream
python-docker-tests.noarch                                        5.0.0-2.el8                                                epel
python2-dockerpty.noarch                                          0.4.1-18.el8                                               epel
python3-docker.noarch                                             5.0.0-2.el8                                                epel
python3-dockerpty.noarch                                          0.4.1-18.el8                                               epel
standard-test-roles-inventory-docker.noarch                       4.10-1.el8                                                 epel

node1 :

[root@node1 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
--2022-11-17 02:33:03--  https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 119.96.90.240, 119.96.90.237, 119.96.90.236, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|119.96.90.240|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2495 (2.4K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

/etc/yum.repos.d/CentOS-B 100%[==================================>]   2.44K  --.-KB/s    in 0.02s

2022-11-17 02:33:03 (139 KB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2495/2495]

[root@node1 yum.repos.d]# dnf -y install epel-release
[root@node1 yum.repos.d]#  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
--2022-11-17 02:36:15--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 119.96.90.243, 119.96.90.242, 119.96.90.239, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|119.96.90.243|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2081 (2.0K) [application/octet-stream]
Saving to: ‘docker-ce.repo’

docker-ce.repo            100%[==================================>]   2.03K  --.-KB/s    in 0.01s

2022-11-17 02:36:15 (140 KB/s) - ‘docker-ce.repo’ saved [2081/2081]

[root@node1 yum.repos.d]# yum list | grep docker
ansible-collection-community-docker.noarch                        2.6.0-1.el8                                                epel
containerd.io.x86_64                                              1.6.9-3.1.el8                                              docker-ce-stable
docker-ce.x86_64                                                  3:20.10.21-3.el8                                           docker-ce-stable
docker-ce-cli.x86_64                                              1:20.10.21-3.el8                                           docker-ce-stable
docker-ce-rootless-extras.x86_64                                  20.10.21-3.el8                                             docker-ce-stable
docker-compose-plugin.x86_64                                      2.12.2-3.el8                                               docker-ce-stable
docker-scan-plugin.x86_64                                         0.21.0-3.el8                                               docker-ce-stable
pcp-pmda-docker.x86_64                                            5.3.1-5.el8                                                AppStream
podman-docker.noarch                                              3.3.1-9.module_el8.5.0+988+b1f0b741                        AppStream
python-docker-tests.noarch                                        5.0.0-2.el8                                                epel
python2-dockerpty.noarch                                          0.4.1-18.el8                                               epel
python3-docker.noarch                                             5.0.0-2.el8                                                epel
python3-dockerpty.noarch                                          0.4.1-18.el8                                               epel
standard-test-roles-inventory-docker.noarch                       4.10-1.el8                                                 epel

node2 :

[root@node2 yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
--2022-11-17 02:46:13--  https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 119.96.90.239, 119.96.90.236, 119.96.90.237, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|119.96.90.239|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2495 (2.4K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

/etc/yum.repos.d/CentOS-B 100%[==================================>]   2.44K  --.-KB/s    in 0.03s

2022-11-17 02:46:14 (87.9 KB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2495/2495]

[root@node2 yum.repos.d]# dnf -y install epel-release
[root@node2 yum.repos.d]# yum list | grep docker
ansible-collection-community-docker.noarch                        2.6.0-1.el8                                                epel
containerd.io.x86_64                                              1.6.9-3.1.el8                                              docker-ce-stable
docker-ce.x86_64                                                  3:20.10.21-3.el8                                           docker-ce-stable
docker-ce-cli.x86_64                                              1:20.10.21-3.el8                                           docker-ce-stable
docker-ce-rootless-extras.x86_64                                  20.10.21-3.el8                                             docker-ce-stable
docker-compose-plugin.x86_64                                      2.12.2-3.el8                                               docker-ce-stable
docker-scan-plugin.x86_64                                         0.21.0-3.el8                                               docker-ce-stable
pcp-pmda-docker.x86_64                                            5.3.1-5.el8                                                AppStream
podman-docker.noarch                                              3.3.1-9.module_el8.5.0+988+b1f0b741                        AppStream
python-docker-tests.noarch                                        5.0.0-2.el8                                                epel
python2-dockerpty.noarch                                          0.4.1-18.el8                                               epel
python3-docker.noarch                                             5.0.0-2.el8                                                epel
python3-dockerpty.noarch                                          0.4.1-18.el8                                               epel
standard-test-roles-inventory-docker.noarch                       4.10-1.el8                                                 epel

Install docker-ce and add a configuration file to set up a Docker registry mirror (accelerator)

master :

[root@master yum.repos.d]#  dnf -y install docker-ce --allowerasing
[root@master yum.repos.d]# systemctl restart docker
[root@master yum.repos.d]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@master yum.repos.d]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://14lrk6zd.mirror.aliyuncs.com"],
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
[root@master yum.repos.d]# systemctl daemon-reload
[root@master yum.repos.d]# systemctl  restart docker

node1 :

[root@node1 yum.repos.d]#  dnf -y install docker-ce --allowerasing
[root@node1 yum.repos.d]# systemctl restart docker
[root@node1 yum.repos.d]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node1 yum.repos.d]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://14lrk6zd.mirror.aliyuncs.com"],
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
[root@node1 yum.repos.d]# systemctl daemon-reload
[root@node1 yum.repos.d]# systemctl  restart docker

node2 :

[root@node2 yum.repos.d]# dnf -y install docker-ce --allowerasing
[root@node2 yum.repos.d]#  systemctl restart docker
[root@node2 yum.repos.d]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node2 yum.repos.d]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://14lrk6zd.mirror.aliyuncs.com"],
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
[root@node2 yum.repos.d]# systemctl daemon-reload
[root@node2 yum.repos.d]# systemctl  restart docker
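A typo in /etc/docker/daemon.json will keep dockerd from starting, so it is worth validating the JSON before the restart. A sketch using python3's stdlib JSON checker, run here against a scratch copy of the same content (on a node you would point it at /etc/docker/daemon.json):

```shell
# Validate daemon.json content before `systemctl restart docker`.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "registry-mirrors": ["https://14lrk6zd.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json: valid JSON"
rm -f "$tmp"
```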

Install the Kubernetes components

The Kubernetes package repositories are hosted overseas and slow to reach, so switch to a domestic (Aliyun) mirror here.
master :

[root@master yum.repos.d]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@master yum.repos.d]# yum list | grep kube
cri-tools.x86_64                                                  1.25.0-0                                                   kubernetes
kubeadm.x86_64                                                    1.25.4-0                                                   kubernetes
kubectl.x86_64                                                    1.25.4-0                                                   kubernetes
kubelet.x86_64                                                    1.25.4-0                                                   kubernetes
kubernetes-cni.x86_64                                             1.1.1-0                                                    kubernetes
libguac-client-kubernetes.x86_64                                  1.4.0-5.el8                                                epel
python3-kubernetes.noarch                                         1:11.0.0-6.el8                                             epel
python3-kubernetes-tests.noarch                                   1:11.0.0-6.el8                                             epel
rkt.x86_64                                                        1.27.0-1                                                   kubernetes
rsyslog-mmkubernetes.x86_64                                       8.2102.0-5.el8                                             AppStream

node1 :

[root@node1 yum.repos.d]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@node1 yum.repos.d]# yum list | grep kube
cri-tools.x86_64                                                  1.25.0-0                                                   kubernetes
kubeadm.x86_64                                                    1.25.4-0                                                   kubernetes
kubectl.x86_64                                                    1.25.4-0                                                   kubernetes
kubelet.x86_64                                                    1.25.4-0                                                   kubernetes
kubernetes-cni.x86_64                                             1.1.1-0                                                    kubernetes
libguac-client-kubernetes.x86_64                                  1.4.0-5.el8                                                epel
python3-kubernetes.noarch                                         1:11.0.0-6.el8                                             epel
python3-kubernetes-tests.noarch                                   1:11.0.0-6.el8                                             epel
rkt.x86_64                                                        1.27.0-1                                                   kubernetes
rsyslog-mmkubernetes.x86_64                                       8.2102.0-5.el8                                             AppStream

node2 :

[root@node2 yum.repos.d]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@node2 yum.repos.d]# yum list | grep kube
cri-tools.x86_64                                                  1.25.0-0                                                   kubernetes
kubeadm.x86_64                                                    1.25.4-0                                                   kubernetes
kubectl.x86_64                                                    1.25.4-0                                                   kubernetes
kubelet.x86_64                                                    1.25.4-0                                                   kubernetes
kubernetes-cni.x86_64                                             1.1.1-0                                                    kubernetes
libguac-client-kubernetes.x86_64                                  1.4.0-5.el8                                                epel
python3-kubernetes.noarch                                         1:11.0.0-6.el8                                             epel
python3-kubernetes-tests.noarch                                   1:11.0.0-6.el8                                             epel
rkt.x86_64                                                        1.27.0-1                                                   kubernetes
rsyslog-mmkubernetes.x86_64                                       8.2102.0-5.el8                                             AppStream

Install the kubeadm, kubelet, and kubectl tools

master :

[root@master yum.repos.d]# dnf  -y  install kubeadm  kubelet  kubectl
[root@master yum.repos.d]# systemctl  restart  kubelet
[root@master yum.repos.d]# systemctl  enable  kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

node1 :

[root@node1 yum.repos.d]# dnf  -y  install kubeadm  kubelet  kubectl
[root@node1 yum.repos.d]# systemctl  restart  kubelet
[root@node1 yum.repos.d]# systemctl  enable  kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

node2 :

[root@node2 yum.repos.d]# dnf  -y  install kubeadm  kubelet  kubectl
[root@node2 yum.repos.d]# systemctl  restart  kubelet
[root@node2 yum.repos.d]# systemctl  enable  kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.


Configure containerd

To ensure that the later cluster initialization and node joins succeed, configure containerd via its configuration file /etc/containerd/config.toml; this must be done on all nodes.
In /etc/containerd/config.toml, change the k8s image repository to registry.aliyuncs.com/google_containers,
then restart and enable the containerd service.
master :

[root@master ~]# containerd config default > /etc/containerd/config.toml
[root@master ~]# vim /etc/containerd/config.toml
[root@master ~]# systemctl   restart  containerd
[root@master ~]# systemctl   enable  containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.

node1 :

[root@node1 ~]# containerd config default > /etc/containerd/config.toml
[root@node1 ~]# vim /etc/containerd/config.toml
[root@node1 ~]# systemctl   restart  containerd
[root@node1 ~]# systemctl   enable  containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.

node2 :

[root@node2 ~]# containerd config default > /etc/containerd/config.toml
[root@node2 ~]# vim /etc/containerd/config.toml
[root@node2 ~]# systemctl   restart  containerd
[root@node2 ~]# systemctl   enable  containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
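For reference, the edit made in the vim step typically looks like the fragment below. The pause tag varies with the containerd version, and the `SystemdCgroup = true` line is an assumption not shown in this transcript; it is a common companion change when the kubelet uses the systemd cgroup driver:

```
[plugins."io.containerd.grpc.v1.cri"]
  # default is registry.k8s.io/pause:3.x; point it at the Aliyun mirror
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true   # assumed: match the systemd cgroup driver
```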

Deploy the k8s master node (run on the master node)

[root@master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.193.128 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.25.4 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16

// It is recommended to save the kubeadm init output to a file

[root@master ~]# vim k8s
[root@master ~]# cat k8s
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.193.128:6443 --token r72qo2.hat90535pgenesgy \
        --discovery-token-ca-cert-hash sha256:5fca25770cc037e2f5f23b540a71657145e80b403c9809d93d48cfb0c9369e91

Configure environment variables

[root@master ~]# vim /etc/profile.d/k8s.sh
[root@master ~]# cat /etc/profile.d/k8s.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master ~]# source /etc/profile.d/k8s.sh

5. Install the pod network plugin (CNI/flannel)
First download https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml with wget. If the URL is unreachable, search for flannel on GitHub, open the kube-flannel.yml file in the repository, and copy its contents into a local file by hand.

[root@master ~]# vim  kube-flannel.yml
[root@master ~]#  kubectl apply  -f  kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
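Before joining the worker nodes it is worth confirming that the flannel DaemonSet pods come up (namespace and resource names as in the manifest output above):

```shell
# flannel runs one pod per node in the kube-flannel namespace
kubectl get pods -n kube-flannel -o wide
# The master should move from NotReady to Ready once the CNI is up
kubectl get nodes
```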

Join the worker nodes to the k8s cluster

[root@node1 ~]# kubeadm join 192.168.193.128:6443 --token r72qo2.hat90535pgenesgy \
>         --discovery-token-ca-cert-hash sha256:5fca25770cc037e2f5f23b540a71657145e80b403c9809d93d48cfb0c9369e91

[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node2 ~]# kubeadm join 192.168.193.128:6443 --token r72qo2.hat90535pgenesgy \
>         --discovery-token-ca-cert-hash sha256:5fca25770cc037e2f5f23b540a71657145e80b403c9809d93d48cfb0c9369e91

[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Run kubectl get nodes to check that all nodes are Ready

[root@master ~]# kubectl  get nodes
NAME                 STATUS   ROLES           AGE     VERSION
master.example.com   Ready    control-plane   41m     v1.25.4
node1.example.com    Ready    <none>          4m27s   v1.25.4
node2.example.com    Ready    <none>          4m25s   v1.25.4

Use the k8s cluster to create pods running nginx and httpd containers, then test them

[root@master ~]#  kubectl create  deployment  nginx  --image nginx
deployment.apps/nginx created
[root@master ~]# kubectl create  deployment  httpd  --image httpd
deployment.apps/httpd created
[root@master ~]# kubectl  expose  deployment  nginx  --port 80  --type NodePort
service/nginx exposed
[root@master ~]#
[root@master ~]#
[root@master ~]# kubectl  expose  deployment  httpd  --port 80  --type NodePort
service/httpd exposed
[root@master ~]# kubectl  get  pods  -o  wide    # check which node each pod runs on
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE                NOMINATED NODE   READINESS GATES
httpd-65bfffd87f-9dqm2   1/1     Running   0          96s    10.244.2.2   node2.example.com   <none>           <none>
nginx-76d6c9b8c-wblrh    1/1     Running   0          102s   10.244.1.2   node1.example.com   <none>           <none>
[root@master ~]#  kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
httpd        NodePort    10.97.237.88    <none>        80:30050/TCP   84s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        43m
nginx        NodePort    10.110.93.150   <none>        80:32407/TCP   95s

Modify the default web page

[root@master ~]# kubectl exec -it pod/nginx-76d6c9b8c-wblrh  -- /bin/bash
root@nginx-76d6c9b8c-wblrh:/# cd /usr/share/nginx/html/
root@nginx-76d6c9b8c-wblrh:/usr/share/nginx/html# ls
50x.html  index.html
root@nginx-76d6c9b8c-wblrh:/usr/share/nginx/html# echo "hi zhan nihao" > index.html
root@nginx-76d6c9b8c-wblrh:/usr/share/nginx/html#

Test access (here via the pod IPs from the master; from outside the cluster, use http://<node-ip>:<NodePort>)

[root@master ~]# curl  10.244.2.2
<html><body><h1>It works!</h1></body></html>
[root@master ~]# curl  10.244.1.2
hi zhan nihao
[root@master ~]#
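The curls above use the pod IPs, which are only reachable from inside the cluster. From outside, the NodePort mappings shown by `kubectl get services` earlier (32407 for nginx and 30050 for httpd in this run) work against any node's IP, for example:

```shell
# NodePort services are exposed on every node, regardless of where the pod runs
curl http://192.168.193.129:32407   # nginx: hi zhan nihao
curl http://192.168.193.130:30050   # httpd: It works!
```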

