I. Preparation
1. Overview
Versions:
kubelet: v1.20.4
docker: 20.10.23
Resources:
CPU: 8
Memory: 16 GB
kernel: 3.10.0-1160.71.1.el7.x86_64
Image registry:
registry.cn-hangzhou.aliyuncs.com/google_containers/
2. Pre-installation preparation
2.1) Disable the firewall
~]# systemctl stop firewalld && systemctl disable firewalld && iptables -F
2.2) Disable SELinux
~]# sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0
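# (Optional) quick check that the change took effect: getenforce should print Permissive now and Disabled after the next reboot.
~]# getenforce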
2.3) Tune kernel parameters and load the required kernel modules
## Kernel parameters
~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Enable IP forwarding
~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
~]# sysctl --system
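# If sysctl --system complains that the net.bridge.* keys are unknown, the br_netfilter module
# is probably not loaded yet; loading it (and persisting it) fixes that:
~]# modprobe br_netfilter && echo br_netfilter > /etc/modules-load.d/br_netfilter.conf && sysctl --system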
# Load the ip_vs kernel modules (for IPVS load balancing)
~]# modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
# Load the modules automatically on boot
~]# cat > /etc/modules-load.d/ip_vs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
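# Verify the modules are loaded:
~]# lsmod | grep -E 'ip_vs|nf_conntrack'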
# Add host entries for easier identification (optional); note the append ( >> ) so the default localhost entries are kept
~]# cat >> /etc/hosts << EOF
172.17.0.62 master1
172.17.0.107 master2
172.17.0.110 master3
EOF
# Test name resolution
~]# ping -w4 master1 && ping -w4 master2 && ping -w4 master3
2.4) Prepare the yum repositories
# Add the Docker CE yum repository
~]# yum install wget -y
~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Add the Aliyun Kubernetes yum repository
~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
3. Install Kubernetes
3.1) Refresh the yum cache and install kubeadm, kubelet, kubectl, and docker-ce
~]# yum clean all && yum repolist && yum makecache
~]# yum install -y kubeadm-1.20.4 kubelet-1.20.4 kubectl-1.20.4 docker-ce
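# Confirm the installed versions match what was requested:
~]# kubeadm version -o short && kubelet --version && docker --version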
3.2) Set the systemd cgroup driver and a registry mirror for Docker
~]# vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://hub-mirror.c.163.com"]
}
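# After Docker has been started (step 3.4 below), the cgroup driver can be verified; it should report systemd:
~]# docker info 2>/dev/null | grep -i 'cgroup driver'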
3.3) Install the IPVS userspace tools
These are needed if kube-proxy is to run in IPVS mode for in-cluster load balancing.
~]# yum install -y ipvsadm ipset
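# Note: installing ipvsadm/ipset only provides the userspace tools; whether kube-proxy actually
# uses IPVS is decided by its own configuration (mode: "ipvs" in the kube-proxy ConfigMap),
# and with the default config it stays in iptables mode. One common way to switch it, as a
# sketch to run after the cluster is up:
~]# kubectl -n kube-system edit cm kube-proxy      # set mode: "ipvs" in config.conf
~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # restart kube-proxy pods to pick it up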
3.4) Start Docker and enable both services at boot
~]# systemctl daemon-reload && systemctl enable docker && systemctl start docker
~]# systemctl enable kubelet
# Note: do not start kubelet manually; it starts automatically after the node joins the cluster, and starting it before that only produces errors.
3.5) Shell (Tab) completion
~]# kubectl completion bash > /etc/bash_completion.d/kubectl
~]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
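# Note: this relies on the bash-completion package; for the current shell the completion can
# also be loaded directly without logging in again:
~]# yum install -y bash-completion && source <(kubectl completion bash)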
3.6) Pull the images
# First list the images kubeadm needs
~]# kubeadm config images list
# Pull the images from a domestic mirror; the tags below are what kubeadm config images list reports for stable-1.20 (v1.20.15)
~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.15
~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.15
~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.15
~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.15
~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
## Note: after installation kubelet stays stopped and systemctl start kubelet will not work on its own; it starts successfully once the node joins the cluster or is initialized as a master.
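## Note: the images above keep their Aliyun names, while kubeadm by default expects them under
## k8s.gcr.io. One option (a sketch; adjust the tags to whatever "kubeadm config images list" printed)
## is to retag them; the alternative is to pass --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers to kubeadm init.
~]# for img in kube-apiserver:v1.20.15 kube-controller-manager:v1.20.15 kube-scheduler:v1.20.15 kube-proxy:v1.20.15 pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    done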
II. Adding a node (worker node)
1. Get the join command (token and CA cert hash)
~]# kubeadm token create --print-join-command
kubeadm join 172.17.0.62:6443 --token i2lamg.s6c3n1h7bqo0txa2 --discovery-token-ca-cert-hash sha256:34535001c736a86e2a3dd6de79cbaafbbce5f23e42c2e9e7ac9562abef69ebab
2. Join the node to the cluster
~]# kubeadm join 172.17.0.62:6443 --token i2lamg.s6c3n1h7bqo0txa2 --discovery-token-ca-cert-hash sha256:34535001c736a86e2a3dd6de79cbaafbbce5f23e42c2e9e7ac9562abef69ebab
# (Optional) to run kubectl from this node as well, set up a kubeconfig. Note that kubeadm join
# does not create /etc/kubernetes/admin.conf on a worker, so it has to be copied over from a master first.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Check from the master
~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE    VERSION
vm-0-107-centos   Ready    <none>                 109m   v1.20.4
vm-0-62-centos    Ready    control-plane,master   8h     v1.20.4
III. Adding a master node
1. Join a new control-plane (master) node
~]# kubeadm init phase upload-certs --upload-certs
I0129 16:54:16.921885 8775 version.go:254] remote version is much newer: v1.26.1; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
6ac134a79072a83b107f5877b0b5ef43d5f45247c0d1e9a6c77db2e46b8789d3
~]# kubeadm token create --print-join-command
kubeadm join 172.17.0.62:6443 --token i2lamg.s6c3n1h7bqo0txa2 --discovery-token-ca-cert-hash sha256:34535001c736a86e2a3dd6de79cbaafbbce5f23e42c2e9e7ac9562abef69ebab
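# Note: the kubeadm-certs Secret (and with it the certificate key above) expires after about two
# hours. If the new master joins later than that, regenerate both pieces first:
~]# kubeadm init phase upload-certs --upload-certs && kubeadm token create --print-join-command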
# Do not use --experimental-control-plane; that is the old flag and now just produces an error.
# Add --control-plane --certificate-key, otherwise the node joins as a worker instead of a master.
# The node must not already have kubeadm state on it when joining; if it does, run kubeadm reset first and then join.
## Old style (deprecated)
~]# kubeadm join 172.17.0.62:6443 --token i2lamg.s6c3n1h7bqo0txa2 --discovery-token-ca-cert-hash sha256:34535001c736a86e2a3dd6de79cbaafbbce5f23e42c2e9e7ac9562abef69ebab \
--experimental-control-plane --certificate-key 6ac134a79072a83b107f5877b0b5ef43d5f45247c0d1e9a6c77db2e46b8789d3
## New style
~]# kubeadm join 172.17.0.62:6443 --token i2lamg.s6c3n1h7bqo0txa2 --discovery-token-ca-cert-hash sha256:34535001c736a86e2a3dd6de79cbaafbbce5f23e42c2e9e7ac9562abef69ebab --control-plane --certificate-key 6ac134a79072a83b107f5877b0b5ef43d5f45247c0d1e9a6c77db2e46b8789d3
2. Troubleshooting
## Note: the first join attempt may fail with the error below and need a config change
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.
unable to add a new control plane instance a cluster that doesn't have a stable controlPlaneEndpoint address
Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.
To see the stack trace of this error execute with --v=5 or higher
### Fix
## Inspect the kubeadm-config ConfigMap
~]# kubectl -n kube-system get cm kubeadm-config -oyaml | grep controlPlaneEndpoint
# controlPlaneEndpoint is missing, so add it
# It goes roughly in this position:
~]# kubectl -n kube-system edit cm kubeadm-config
...
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
controlPlaneEndpoint: 172.17.0.62:6443   # add this line
# Then run the kubeadm join command again on the node that is being added as a master.
# 1. If the node has to be removed and re-added, clean up its data first:
~]# kubectl cordon <node-name>
~]# kubectl drain <node-name> --delete-local-data --ignore-daemonsets --force
~]# kubectl delete node <node-name>
~]# rm -rf .kube/   # on the removed node; stale kubeconfig data left here causes errors when re-joining
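# On the node that was removed, also clear the old kubeadm state before it joins again
# (the paths below are the usual defaults; adjust if your CNI stores its config elsewhere):
~]# kubeadm reset -f && rm -rf /etc/cni/net.d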
3. Verify
~]# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-54d67798b7-mrdwg                  1/1     Running   0          8h
kube-system   coredns-54d67798b7-qwfns                  1/1     Running   0          8h
kube-system   etcd-vm-0-107-centos                      1/1     Running   0          128m
kube-system   etcd-vm-0-62-centos                       1/1     Running   0          8h
kube-system   kube-apiserver-vm-0-107-centos            1/1     Running   0          128m
kube-system   kube-apiserver-vm-0-62-centos             1/1     Running   0          8h
kube-system   kube-controller-manager-vm-0-107-centos   1/1     Running   0          128m
kube-system   kube-controller-manager-vm-0-62-centos    1/1     Running   2          8h
kube-system   kube-flannel-ds-cnrhr                     1/1     Running   1          128m
kube-system   kube-flannel-ds-nf2hv                     1/1     Running   0          8h
kube-system   kube-proxy-sh9n6                          1/1     Running   0          128m
kube-system   kube-proxy-wwblc                          1/1     Running   0          8h
kube-system   kube-scheduler-vm-0-107-centos            1/1     Running   0          128m
kube-system   kube-scheduler-vm-0-62-centos             1/1     Running   2          8h
~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE    VERSION
vm-0-107-centos   Ready    control-plane,master   128m   v1.20.4
vm-0-62-centos    Ready    control-plane,master   8h     v1.20.4