Table of Contents
- I. Cluster Plan and Architecture
- II. System Initialization (run on all nodes)
- III. Install kubeadm (run on all nodes)
- IV. Install and Configure the High-Availability Components
- 1. Install and configure Nginx
- 2. Install and configure keepalived
- V. Initialize the Master Cluster
- VI. Scale Out the K8S Cluster
- 1. Add a master node
- 2. Add a worker node
- VII. Install the Calico Network Add-on
- VIII. Deploy Tomcat to Verify the Cluster
I. Cluster Plan and Architecture
Official documentation:
Binary download link:
Environment plan:
- Pod network: 10.244.0.0/16
- Service network: 10.10.0.0/16
- Note: the Pod and Service networks must not overlap; overlapping ranges will cause the K8S cluster installation to fail.
Hostname | IP Address | OS | Notes |
---|---|---|---|
master-1 | 16.32.15.200 | CentOS 7.8 | Runs keepalived and Nginx for high availability |
master-2 | 16.32.15.201 | CentOS 7.8 | Runs keepalived and Nginx for high availability |
node-1 | 16.32.15.202 | CentOS 7.8 | |
\ | 16.32.15.100 | \ | VIP address |
Architecture diagram for this lab:
II. System Initialization (run on all nodes)
1. Disable the firewall and SELinux
systemctl disable firewalld --now
setenforce 0
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /etc/selinux/config
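A quick verification sketch (my addition, not part of the original steps):
systemctl is-enabled firewalld              # expected: disabled
getenforce                                  # expected: Permissive now, Disabled after a reboot
grep '^SELINUX=' /etc/selinux/config        # expected: SELINUX=disabled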
2. Configure name resolution
cat >> /etc/hosts << EOF
16.32.15.200 master-1
16.32.15.201 master-2
16.32.15.202 node-1
EOF
Set the hostname on the corresponding host:
hostnamectl set-hostname master-1 && bash
hostnamectl set-hostname master-2 && bash
hostnamectl set-hostname node-1 && bash
3. Synchronize the server clocks
yum -y install ntpdate
ntpdate ntp1.aliyun.com
Add a cron job to sync the time automatically at 1:00 AM every day:
echo "0 1 * * * ntpdate ntp1.aliyun.com" >> /var/spool/cron/root
crontab -l
4. Disable the swap partition (Kubernetes requires swap to be off)
swapoff --all
Prevent the swap partition from being mounted at boot:
sed -i -r '/swap/ s/^/#/' /etc/fstab
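A quick check (my addition) that swap is fully off and will stay off:
free -h | grep -i swap                 # expected: 0B total / 0B used
grep -v '^#' /etc/fstab | grep swap    # expected: no output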
5. Adjust Linux kernel parameters to enable bridge filtering and IP forwarding
cat >> /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Load the bridge netfilter module first (the net.bridge.* keys only exist once it is loaded), then apply the settings:
modprobe br_netfilter
lsmod | grep br_netfilter # verify the module is loaded
sysctl -p /etc/sysctl.d/kubernetes.conf
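An optional sketch (my addition, not part of the original steps): the file in /etc/sysctl.d is re-applied at boot, but br_netfilter itself is not guaranteed to load after a reboot on CentOS 7, so it can be registered with systemd-modules-load:
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
# Confirm the values are active
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward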
6. Enable IPVS support
Kubernetes Services can be proxied either by iptables or by IPVS; IPVS performs better, especially with many Services. To use the IPVS mode, the IPVS kernel modules must be loaded manually.
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run the script
/etc/sysconfig/modules/ipvs.modules
# Verify the IPVS modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
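Once the cluster is up (section V onward), the proxy mode can be double-checked; this is my own verification sketch and assumes kube-proxy keeps its default k8s-app=kube-proxy label:
ipvsadm -Ln | head                                                            # IPVS virtual servers for the Service network should be listed
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i ipvs    # kube-proxy should mention the ipvs proxier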
7. Install the Docker container runtime
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
# yum-utils provides the yum-config-manager utility
yum install -y yum-utils
# Use yum-config-manager to add the Aliyun Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 -y
Docker's default cgroup driver is cgroupfs, while the kubelet defaults to systemd. Kubernetes requires the container runtime and the kubelet to use the same cgroup driver, so switch Docker to systemd and configure a domestic registry mirror at the same time.
mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
"registry-mirrors": ["https://aoewjvel.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Start Docker and enable it at boot
systemctl enable docker --now
systemctl status docker
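A quick check (my addition) that the cgroup driver change from daemon.json took effect:
docker info 2>/dev/null | grep -i 'cgroup driver'   # expected: Cgroup Driver: systemd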
8. Reboot the servers (optional)
reboot
III. Install kubeadm (run on all nodes)
Configure a domestic yum repository and install kubeadm, kubelet, and kubectl in one step:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum -y install --setopt=obsoletes=0 kubeadm-1.23.0 kubelet-1.23.0 kubectl-1.23.0
kubeadm uses the kubelet service to run the core Kubernetes components as containers, so enable the kubelet service first (it will keep restarting until kubeadm init writes its configuration; that is expected):
systemctl enable kubelet.service --now
IV. Install and Configure the High-Availability Components
Perform these steps on master-1 and master-2.
1. Install and configure Nginx
Run on both master-1 and master-2; the Nginx configuration is identical on both.
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum -y install nginx*
The Nginx configuration is shown below. If your hostnames match this lab, the file can be used as-is; otherwise change the servers under upstream k8s-apiserver to your own hostnames or IP addresses.
mv /etc/nginx/nginx.conf{,.bak}
cat > /etc/nginx/nginx.conf << 'EOF'
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server master-1:6443;
server master-2:6443;
}
server {
listen 16443;
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}
EOF
Verify the configuration, then start Nginx and enable it at boot:
nginx -t
systemctl start nginx
systemctl enable nginx
The apiserver is proxied on port 16443; verify that the port is listening:
netstat -anput |grep 16443
2. Install and configure keepalived
Run on both master-1 and master-2, but note that the configuration files differ between the two!
yum -y install keepalived
Add a health-check script: when Nginx goes down it stops keepalived, so the VIP floats to the other node and the apiserver stays highly available.
vim /etc/keepalived/checkNginx.sh
#!/bin/bash
# egrep -cv "grep|$$" filters out the grep process itself and the current shell's PID ($$)
count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
if [ $count -eq 0 ];then
systemctl stop keepalived
fi
Make the script executable:
chmod +x /etc/keepalived/checkNginx.sh
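A quick sanity check of the counting logic (my addition): with Nginx running, the pipeline below should print a non-zero number, so the script leaves keepalived alone.
ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$"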
keepalived configuration on master-1:
mv /etc/keepalived/keepalived.conf{,.default}
vim /etc/keepalived/keepalived.conf
vrrp_script checkNginx {
script "/etc/keepalived/checkNginx.sh" # 监控Nginx状态脚本
interval 2
}
vrrp_instance VI_1 {
state MASTER
interface ens33 # local NIC name
virtual_router_id 51 # VRRP router ID; must be identical on every node of this VRRP instance
priority 100 # priority; the higher value wins the MASTER election
advert_int 1 # VRRP advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
16.32.15.100/24
}
track_script {
checkNginx
}
}
keepalived configuration on master-2 (backup):
mv /etc/keepalived/keepalived.conf{,.default}
vim /etc/keepalived/keepalived.conf
vrrp_script checkNginx {
script "/etc/keepalived/checkNginx.sh" # 监控Nginx状态脚本
interval 2
}
vrrp_instance VI_1 {
state BACKUP # role of this node
interface ens33
virtual_router_id 51 # must match master-1; both nodes belong to the same VRRP instance
priority 90 # lower priority than master-1
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
16.32.15.100/24
}
track_script {
checkNginx
}
}
Restart keepalived (on both masters):
systemctl restart keepalived
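A verification and failover sketch (my addition; the interface name ens33 and the VIP come from this lab's plan):
ip addr show ens33 | grep 16.32.15.100                # on master-1: the VIP should be present; on master-2 it should not
systemctl stop nginx                                  # on master-1: checkNginx.sh stops keepalived within ~2 seconds
ip addr show ens33 | grep 16.32.15.100                # on master-2: the VIP should have moved here
systemctl start nginx && systemctl start keepalived   # restore master-1 afterwards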
V. Initialize the Master Cluster
Perform these steps on master-1.
1. Create the initialization config file
vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
controlPlaneEndpoint: 16.32.15.100:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 16.32.15.200
  - 16.32.15.201
  - 16.32.15.202
  - 16.32.15.100
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
- controlPlaneEndpoint: the load-balanced entry point for the control plane (use the VIP address)
- imageRepository: image registry to pull from; the Aliyun mirror is used here
- certSANs: extra Subject Alternative Names for the apiserver certificate; list every IP involved in the cluster, including the VIP
- podSubnet: the Pod network CIDR
- serviceSubnet: the Service network CIDR
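Optionally, the control-plane images can be pre-pulled with the same config file before running the init; this is a standard kubeadm subcommand, added here as a convenience rather than a step from the original article:
kubeadm config images pull --config kubeadm-config.yaml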
2. Run the initialization
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification
On success, the output looks like this:
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-1] and IPs [10.10.0.1 16.32.15.200 16.32.15.100 16.32.15.201 16.32.15.202]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.889660 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: giw3n1.8ys41tcqlvl9xhrk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
# join command for additional control-plane (master) nodes
kubeadm join 16.32.15.100:16443 --token giw3n1.8ys41tcqlvl9xhrk \
--discovery-token-ca-cert-hash sha256:2e97fe276dd9a52e91704fbd985f8c57c73c6ca750f07e9eeaf695f7639e0287 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
# join command for worker nodes
kubeadm join 16.32.15.100:16443 --token giw3n1.8ys41tcqlvl9xhrk \
--discovery-token-ca-cert-hash sha256:2e97fe276dd9a52e91704fbd985f8c57c73c6ca750f07e9eeaf695f7639e0287
Configure the kubectl config file; this authorizes kubectl with the admin credentials so that the kubectl command can manage the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify that the kubectl command works:
kubectl get nodes
VI. Scale Out the K8S Cluster
1. Add a master node
Copy the certificate files from the master-1 control-plane node to master-2.
Run on master-2:
cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
Run on master-1 to copy the certificates and keys to master-2:
scp /etc/kubernetes/pki/ca.key master-2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.crt master-2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master-2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master-2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master-2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master-2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master-2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master-2:/etc/kubernetes/pki/etcd/
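As an alternative to copying the certificates by hand, kubeadm can distribute them itself. This is a hedged sketch of the standard --upload-certs flow, not the method used in the original article; replace the placeholders with your own values:
# On master-1: upload the control-plane certificates to the cluster and print a certificate key
kubeadm init phase upload-certs --upload-certs
# On master-2: join using that key instead of the scp steps above
kubeadm join 16.32.15.100:16443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--control-plane --certificate-key <key-printed-above>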
Run on master-2 to join the cluster:
kubeadm join 16.32.15.100:16443 --token giw3n1.8ys41tcqlvl9xhrk \
--discovery-token-ca-cert-hash sha256:2e97fe276dd9a52e91704fbd985f8c57c73c6ca750f07e9eeaf695f7639e0287 \
--control-plane
If the join completes successfully, configure kubectl access on master-2:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run on master-1 to check the node status:
kubectl get node
2. Add a worker node
Run on node-1 to join the cluster:
kubeadm join 16.32.15.100:16443 --token giw3n1.8ys41tcqlvl9xhrk \
--discovery-token-ca-cert-hash sha256:2e97fe276dd9a52e91704fbd985f8c57c73c6ca750f07e9eeaf695f7639e0287
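The bootstrap token printed by kubeadm init expires after 24 hours by default. If it has expired, a fresh join command can be generated on master-1 (standard kubeadm behaviour, noted here as an aside):
kubeadm token create --print-join-command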
A success message confirms node-1 joined the cluster.
Run on master-1 to check the node status:
kubectl get node
Note that node-1's ROLES column is empty (<none>), which indicates it is a worker node.
To set node-1's ROLES to worker, label it as follows:
kubectl label node node-1 node-role.kubernetes.io/worker=worker
VII. Install the Calico Network Add-on
Calico documentation:
calico.yaml download link:
1. Check the status of the built-in Pods
kubectl get pods -n kube-system
At this point the coredns Pods are Pending because no network plugin has been installed yet; once the network plugin below is installed, coredns will change to Running.
2. Upload the calico.yaml file to the server.
Run on master-1 (the manifest only needs to be applied once; re-applying it on master-2 is harmless but unnecessary):
kubectl apply -f calico.yaml
3. Check the cluster status and the system Pod status
kubectl get nodes
kubectl get pods -n kube-system
VIII. Deploy Tomcat to Verify the Cluster
1. Create a Tomcat Pod
vim tomcat.yaml
apiVersion: v1                    # the Pod kind lives in the core v1 API group
kind: Pod                         # the resource being created is a Pod
metadata:                         # metadata
  name: demo-pod                  # Pod name
  namespace: default              # namespace the Pod belongs to
  labels:
    app: myapp                    # labels carried by the Pod
    env: dev
spec:
  containers:                     # list of containers; multiple entries may follow
  - name: tomcat-pod-java         # container name
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine # image used by the container
    imagePullPolicy: IfNotPresent
Apply the YAML file:
kubectl apply -f tomcat.yaml
Check the Pod status:
kubectl get pod
2. Create a Service for Tomcat
vim tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30080
  selector:
    app: myapp
    env: dev
Apply the YAML file:
kubectl apply -f tomcat-service.yaml
Check the Service status:
kubectl get svc
3. Test access from a browser (any node IP + port 30080)
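If no browser is handy, the NodePort can also be checked from the command line; the node IP below comes from this lab's plan, and any HTTP response (200, or 404 from a Tomcat image without a default ROOT app) shows that traffic reaches the Pod:
curl -I http://16.32.15.202:30080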
4. Test CoreDNS
kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.10.0.10
Address 1: 10.10.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.10.0.1 kubernetes.default.svc.cluster.local
/ # nslookup tomcat.default.svc.cluster.local
Server: 10.10.0.10
Address 1: 10.10.0.10 kube-dns.kube-system.svc.cluster.local
Name: tomcat.default.svc.cluster.local
Address 1: 10.10.164.81 tomcat.default.svc.cluster.local
- Note: use busybox 1.28 specifically rather than the latest tag; in recent busybox images nslookup fails to resolve the DNS names and IPs.