Deploying a Kubernetes Cluster from Binaries


1. Common Ways to Deploy Kubernetes

1.1 kubeadm

Kubeadm is a Kubernetes deployment tool. It provides kubeadm init and kubeadm join for standing up a cluster quickly.

Kubeadm lowers the barrier to entry, but it hides a lot of the details, which makes problems hard to troubleshoot. If you want something easier to control, deploying from binary packages is recommended.

Deploying a Kubernetes cluster by hand is more work, but along the way you learn a great deal about how the components fit together, which also helps with later maintenance.
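For comparison, a minimal kubeadm workflow looks roughly like the sketch below (illustrative only; the exact flags depend on your environment, and the rest of this guide does not use kubeadm):

# on the first master: initialize the control plane (the pod CIDR is an example value)
kubeadm init --pod-network-cidr=10.244.0.0/16
# on each worker: join using the token printed by kubeadm init
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>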

1.2 Binary Packages

Kubernetes consists of a set of executable programs. You can download pre-built binary packages from the Kubernetes project page on GitHub, or download the source code and compile it yourself.

With this approach you download the release binaries from GitHub and deploy each component by hand to assemble the Kubernetes cluster.

1.3 kubespray

kubespray is a project from the Kubernetes incubator whose goal is to provide a production-ready deployment solution; it drives the system and cluster deployment tasks through Ansible playbooks.

Kubernetes requires a container runtime (via the Container Runtime Interface, CRI). The officially supported runtimes include Docker, containerd, CRI-O and frakti; this guide uses Docker as the container runtime.

This guide deploys the Kubernetes cluster from binary files and explains the configuration of each component in detail.

2. Preparing the Environment for Binary Deployment

2.1 Software and Hardware Environment

Software environment:

Software            Version
Operating system    CentOS Linux release 7.9.2009 (Core)
Container engine    Docker version 20.10.21, build baeda1f
Kubernetes          Kubernetes v1.20.15

Server planning:

Role          IP                Components
k8s-master1   192.168.54.101    kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, etcd
k8s-master2   192.168.54.104    kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, etcd
k8s-node1     192.168.54.102    kubelet, kube-proxy, docker, etcd
k8s-node2     192.168.54.103    kubelet, kube-proxy, docker, etcd
Virtual IP    192.168.54.105

This highly available cluster is built in two phases: first a single-master architecture (3 machines) is deployed, then it is expanded to a multi-master architecture (4 machines), which is also a good chance to practice the master scale-out procedure.

Single-master server planning:

Role          IP                Components
k8s-master1   192.168.54.101    kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, docker, etcd
k8s-node1     192.168.54.102    kubelet, kube-proxy, docker, etcd
k8s-node2     192.168.54.103    kubelet, kube-proxy, docker, etcd

2.2 Operating System Initialization (all nodes)

# Disable the firewall
# temporarily
systemctl stop firewalld
# permanently
systemctl disable firewalld
# Disable SELinux
# permanently
sed -i 's/enforcing/disabled/' /etc/selinux/config
# temporarily
setenforce 0
# Disable swap
# temporarily
swapoff -a
# permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Set the hostname according to the plan (run the matching line on each node)
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
# Add hosts entries
cat >> /etc/hosts << EOF
192.168.54.101 k8s-master1
192.168.54.102 k8s-node1
192.168.54.103 k8s-node2
192.168.54.104 k8s-master2
EOF
# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system
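# Note (an addition to the original steps): on some CentOS 7 systems the
# net.bridge.* keys only exist after the br_netfilter kernel module is loaded.
# If sysctl --system complains that the keys are missing, load the module and persist it:
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf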
# Time synchronization
# one-off sync against the Aliyun NTP servers
yum install -y ntpdate
ntpdate ntp.aliyun.com

The steps above rarely cause any problems.

Next, install etcd, Docker, kube-apiserver, kube-controller-manager and kube-scheduler on the master.

3. Deploy the etcd Cluster

etcd serves as the primary datastore of the Kubernetes cluster, so it has to be installed and started before any of the Kubernetes services.

3.1 About etcd

etcd is a distributed key-value store and Kubernetes uses it for all of its cluster data, so an etcd database has to be prepared first. To avoid a single point of failure, etcd should be deployed as a cluster; a cluster of n members tolerates ⌊(n-1)/2⌋ failed members, so the 3-node cluster used here tolerates 1 failed machine, and a 5-node cluster would tolerate 2.

3.2 Server Planning

etcd node planning for this guide:

Node name   IP
etcd-1      192.168.54.101
etcd-2      192.168.54.102
etcd-3      192.168.54.103

Note: to save machines, etcd is co-located with the Kubernetes nodes here. It can also be deployed on machines outside the cluster, as long as the apiserver can reach it.

3.3 Preparing the cfssl Certificate Tool

About cfssl: cfssl is an open-source certificate management tool that generates certificates from JSON files; it is easier to work with than openssl. Run the following on any one server — here the k8s-master1 node is used.

# run on k8s-master1
# create a directory for the cfssl tools
mkdir /software-cfssl
# download the tools
# these are standalone executables
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -P /software-cfssl/
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -P /software-cfssl/
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -P /software-cfssl/
cd /software-cfssl/
chmod +x *
cp cfssl_linux-amd64 /usr/local/bin/cfssl
cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
cp cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

3.4 Self-Signed Certificate Authority (CA)

3.4.1 Create the Working Directories

# run on k8s-master1
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd/

3.4.2 Generate the Self-Signed CA Configuration

# run on k8s-master1
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "YuMingYu",
            "ST": "YuMingYu"
        }
    ]
}
EOF

3.4.3 Generate the Self-Signed CA Certificate

# run on k8s-master1
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Note: ca.pem and ca-key.pem are generated in the current directory, together with a ca.csr file.

Check the files:

# run on k8s-master1
[root@k8s-master1 etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

3.5 Issue the etcd HTTPS Certificate with the Self-Signed CA

3.5.1 Create the Certificate Signing Request File

# run on k8s-master1
cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.54.101",
    "192.168.54.102",
    "192.168.54.103",
    "192.168.54.104"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "YuMingYu",
            "ST": "YuMingYu"
        }
    ]
}
EOF

Note: the IPs in the hosts field above are the internal IPs of all etcd cluster nodes; none of them may be missing. To make future expansion easier you can also list a few spare IPs. (The sketch after the file listing in 3.5.2 shows one way to double-check the resulting SANs.)

3.5.2 Generate the Certificate

# run on k8s-master1
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

Note: server.pem and server-key.pem are generated in the current directory.

Check the files:

# run on k8s-master1
[root@k8s-master1 etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
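To double-check that every etcd IP actually ended up in the certificate's SANs, the cfssl-certinfo tool installed earlier can be used; a minimal check might look like this (illustrative only):

# run on k8s-master1
cfssl-certinfo -cert server.pem | grep -A 6 '"sans"'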

3.6 Download the etcd Binaries

Download URL:

https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

After downloading, upload the archive to any location on the server.

3.7 Deploy the etcd Cluster

The etcd binaries come from the GitHub releases page ( https://github.com/coreos/etcd/releases ); in this guide the etcd and etcdctl executables are placed under /opt/etcd/bin rather than /usr/bin.

The following is done on k8s-master1; to keep things simple, all of the files generated there will be copied to the other nodes afterwards.

3.7.1 Create the Working Directories and Unpack the Binaries

# run on k8s-master1
mkdir /opt/etcd/{bin,cfg,ssl} -p
# the archive is assumed to be in the ~ directory
cd ~
tar -xf etcd-v3.4.9-linux-amd64.tar.gz
# etcd and etcdctl are the executables
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

3.8 Create the etcd Configuration File

# run on k8s-master1
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.54.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.54.101:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.54.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.54.101:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.54.101:2380,etcd-2=https://192.168.54.102:2380,etcd-3=https://192.168.54.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Configuration notes:

  • ETCD_NAME: the node name, unique within the cluster

  • ETCD_DATA_DIR: the data directory

  • ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic

  • ETCD_LISTEN_CLIENT_URLS: listen address for client traffic

  • ETCD_INITIAL_CLUSTER: the addresses of all cluster members

  • ETCD_INITIAL_CLUSTER_TOKEN: the cluster token

  • ETCD_INITIAL_CLUSTER_STATE: the state used when joining the cluster; new means a brand-new cluster, existing means joining an existing one

3.9 Manage etcd with systemd

# run on k8s-master1
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

A working directory can optionally be added under [Service]:

[Service]
WorkingDirectory=/var/lib/etcd/

Here WorkingDirectory (/var/lib/etcd/) is the directory where etcd keeps its data; it has to exist before the etcd service is started.
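A trivial sketch for creating it (only needed if you add WorkingDirectory to the unit file):

# run on every etcd node before starting the service
mkdir -p /var/lib/etcd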

The configuration file /opt/etcd/cfg/etcd.conf generally needs no parameters beyond those shown above (see the official documentation for the full list); with this configuration etcd serves clients at https://192.168.54.101:2379.

3.10 Copy All Generated Files from master1 to Node 2 and Node 3

# run on k8s-master1
#!/bin/bash
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
for i in {2..3}
do
scp -r /opt/etcd/ root@192.168.54.10$i:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.54.10$i:/usr/lib/systemd/system/
done
# run on k8s-master1
[root@k8s-master1 ~]# tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── etcd.service
└── ......

# run on k8s-node1
[root@k8s-node1 ~]# tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

[root@k8s-node1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── etcd.service
└── ......

# run on k8s-node2
[root@k8s-node2 ~]# tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-node2 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── etcd.service
└── ......

3.11 Update the Node Name and Local IP in etcd.conf on Node 2 and Node 3

The two files below show the intended result; a sed sketch that automates the edits follows them.

# on k8s-node1
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.54.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.54.102:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.54.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.54.102:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.54.101:2380,etcd-2=https://192.168.54.102:2380,etcd-3=https://192.168.54.103:2380"  
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# on k8s-node2
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.54.103:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.54.103:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.54.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.54.103:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.54.101:2380,etcd-2=https://192.168.54.102:2380,etcd-3=https://192.168.54.103:2380"  
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
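A rough way to automate those per-node edits (a sketch, assuming etcd.conf was copied unmodified from master1; it rewrites ETCD_NAME and replaces the local IP everywhere except in the ETCD_INITIAL_CLUSTER line):

# run on k8s-node1 (on k8s-node2 use NAME=etcd-3 and IP=192.168.54.103)
NAME=etcd-2
IP=192.168.54.102
sed -i \
  -e "s|^ETCD_NAME=.*|ETCD_NAME=\"${NAME}\"|" \
  -e "/^ETCD_INITIAL_CLUSTER=/!s|192.168.54.101|${IP}|g" \
  /opt/etcd/cfg/etcd.conf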

3.12 Start etcd and Enable It at Boot

Once the configuration is in place, start the etcd service with systemctl start and add it to the boot sequence with systemctl enable.

Note: the etcd members need to be started on multiple nodes at roughly the same time; otherwise systemctl start etcd hangs in the foreground waiting to connect to the other members. It is easiest to start etcd with a batch-management tool or a small script — a sketch follows the commands below.

# run on k8s-master1, k8s-node1 and k8s-node2
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
systemctl status etcd
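If you want to kick off all three members at once from k8s-master1, here is a minimal sketch (assuming password-less root SSH to the other nodes is already set up, which is not covered in this guide):

# run on k8s-master1; starts the two remote members in the background, then the local one
for ip in 192.168.54.102 192.168.54.103; do
  ssh root@$ip "systemctl daemon-reload && systemctl start etcd && systemctl enable etcd" &
done
systemctl daemon-reload && systemctl start etcd && systemctl enable etcd
wait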

3.13 Check the etcd Cluster Status

Running etcdctl endpoint health against all members verifies that etcd started correctly:

# run on k8s-master1
[root@k8s-master1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379" endpoint health --write-out=table
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.54.103:2379 |   true | 19.533787ms |       |
| https://192.168.54.101:2379 |   true | 19.229071ms |       |
| https://192.168.54.102:2379 |   true | 23.769337ms |       |
+-----------------------------+--------+-------------+-------+
# run on k8s-node1
[root@k8s-node1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379" endpoint health --write-out=table
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.54.102:2379 |   true | 23.682349ms |       |
| https://192.168.54.103:2379 |   true | 23.718213ms |       |
| https://192.168.54.101:2379 |   true | 25.853315ms |       |
+-----------------------------+--------+-------------+-------+
# run on k8s-node2
[root@k8s-node2 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379" endpoint health --write-out=table
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.54.103:2379 |   true | 24.056756ms |       |
| https://192.168.54.102:2379 |   true | 24.108094ms |       |
| https://192.168.54.101:2379 |   true | 24.793733ms |       |
+-----------------------------+--------+-------------+-------+

If the output looks like the above, the etcd deployment is healthy.

3.14 etcd Troubleshooting (logs)

less /var/log/messages
journalctl -u etcd

4. Install Docker (all nodes)

Docker is used as the container engine here; other runtimes such as containerd work too. Note that Kubernetes deprecated the dockershim in v1.20 (and removed it in v1.24), so Docker support via dockershim is being phased out.

4.1 Download and Unpack the Binary Package

cd ~
# note: the environment table above lists Docker 20.10.21; adjust the version in this URL if you want them to match
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
tar -xf docker-19.03.9.tgz
mv docker/* /usr/bin/

4.2 Configure a Registry Mirror

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

4.3 The docker.service Unit File

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd --selinux-enabled=false --insecure-registry=127.0.0.1
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
#TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

4.4 Start Docker and Enable It at Boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
systemctl status docker

5. Deploy the Master Node Components

5.1 Generate the kube-apiserver Certificates

5.1.1 Self-Signed Certificate Authority (CA)

# run on k8s-master1
cd ~/TLS/k8s
# run on k8s-master1
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
# run on k8s-master1
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate the certificate:

# run on k8s-master1
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ca.pem and ca-key.pem are generated in the current directory, together with a ca.csr file.

# run on k8s-master1
[root@k8s-master1 k8s]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

5.1.2 Issue the kube-apiserver HTTPS Certificate with the Self-Signed CA

Create the certificate signing request file:

# run on k8s-master1
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.54.101",
      "192.168.54.102",
      "192.168.54.103",
      "192.168.54.104",
      "192.168.54.105",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Note: the hosts field above must contain the IPs of every master, load balancer and VIP; none of them may be missing. To make future expansion easier you can also list a few spare IPs.

Generate the certificate:

# run on k8s-master1
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Note: server.pem and server-key.pem are generated in the current directory, along with server.csr.

# run on k8s-master1
[root@k8s-master1 k8s]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem

5.2 Download the Kubernetes Binaries

Release notes and download links:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md

Server binary package:

https://storage.googleapis.com/kubernetes-release/release/v1.20.15/kubernetes-server-linux-amd64.tar.gz

5.3 Unpack the Binary Package

Upload the Kubernetes server package downloaded above to the server.

Copy kube-apiserver, kube-controller-manager and kube-scheduler into /opt/kubernetes/bin, and kubectl into /usr/bin.

# run on k8s-master1
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
# run on k8s-master1
[root@k8s-master1 ~]# tree /opt/kubernetes/bin
/opt/kubernetes/bin
├── kube-apiserver
├── kube-controller-manager
└── kube-scheduler

[root@k8s-master1 ~]# tree /usr/bin/
/usr/bin/
├── ......
├── kubectl
└── ......

5.4 Deploy kube-apiserver

5.4.1 Create the Configuration File

# run on k8s-master1
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.54.101:2379,https://192.168.54.102:2379,https://192.168.54.103:2379 \\
--bind-address=192.168.54.101 \\
--secure-port=6443 \\
--advertise-address=192.168.54.101 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

The configuration file /opt/kubernetes/cfg/kube-apiserver.conf contains all of the kube-apiserver startup parameters; the main options are set in the KUBE_APISERVER_OPTS variable.

Notes:

  • In the double backslashes above, the first backslash escapes the second so that the heredoc (EOF) writes a single literal backslash (the shell line-continuation character) into the file.

  • --logtostderr: set to false to write logs to files instead of stderr.

  • --v: log verbosity level.

  • --log-dir: log directory.

  • --etcd-servers: the etcd cluster addresses (URLs of the etcd service).

  • --bind-address: the secure IP address the API server binds to; 0.0.0.0 means bind on all addresses.

  • --secure-port: the HTTPS port the API server binds to; the default is 6443.

  • --advertise-address: the address advertised to the rest of the cluster.

  • --allow-privileged: allow privileged containers.

  • --service-cluster-ip-range: the virtual IP range for Services, in CIDR notation (for example 10.0.0.0/24); it must not overlap with the physical machines' IP range.

  • --enable-admission-plugins: the admission-control plugins, applied in order.

  • --authorization-mode: authorization modes; enables RBAC authorization and Node self-management.

  • --enable-bootstrap-token-auth: enables the TLS bootstrap mechanism.

  • --token-auth-file: the bootstrap token file.

  • --service-node-port-range: the port range available to NodePort Services; defaults to 30000-32767.

  • --kubelet-client-xxx: the client certificate the apiserver uses to reach the kubelet.

  • --tls-xxx-file: the apiserver HTTPS certificate.

  • --service-account-issuer and --service-account-signing-key-file: required from v1.20 onwards.

  • --etcd-xxxfile: certificates for connecting to the etcd cluster.

  • --audit-log-xxx: audit logging.

  • Aggregation-layer (API gateway) settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing.

  • --storage-backend: selects the etcd version; since Kubernetes 1.6 it defaults to etcd3. The flag did not exist before 1.6, where kube-apiserver used etcd2 by default; for clusters still running 1.5 or older, etcd provides a data upgrade path, see the etcd documentation:

    https://coreos.com/etcd/docs/latest/upgrades/upgrade_3_0.html

5.4.2 Copy the Generated Certificates

Copy the certificates generated above to the paths referenced in the configuration file:

# run on k8s-master1
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

5.4.3 Enable the TLS Bootstrapping Mechanism

TLS bootstrapping: once the apiserver has TLS authentication enabled, the kubelet and kube-proxy on every Node must present a valid certificate signed by the cluster CA when talking to kube-apiserver. Issuing those client certificates by hand becomes a lot of work when there are many Nodes, and it also complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping, which issues client certificates automatically: the kubelet connects to the apiserver as a low-privileged user and requests a certificate, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended for Nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate that we issue centrally.

Create the token file referenced in the configuration above:

# run on k8s-master1
cat > /opt/kubernetes/cfg/token.csv << EOF
4136692876ad4b01bb9dd0988480ebba,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token,user name,UID,user group

The token can also be regenerated and substituted:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
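If you do regenerate the token, the value in token.csv and the one used later for bootstrap.kubeconfig (section 6.2.3) must stay identical; a small sketch that keeps them in sync:

# run on k8s-master1: generate a fresh token and rewrite token.csv with it
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
echo "use ${TOKEN} again when generating bootstrap.kubeconfig"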

5.4.4 Manage kube-apiserver with systemd

# run on k8s-master1
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# k8s-master1节点执行
[root@k8s-master1 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── etcd.service
└── ......

5.4.5 Start kube-apiserver and Enable It at Boot

# run on k8s-master1
systemctl daemon-reload
systemctl start kube-apiserver 
systemctl enable kube-apiserver
systemctl status kube-apiserver
# k8s-master1节点执行
[root@k8s-master1 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 13:49:42 CST; 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 44755 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─44755 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --...

12月 05 13:49:42 k8s-master1 systemd[1]: Started Kubernetes API Server.
12月 05 13:49:42 k8s-master1 kube-apiserver[44755]: E1205 13:49:42.475307   44755 instance.go:392] Could not... api
12月 05 13:49:45 k8s-master1 kube-apiserver[44755]: E1205 13:49:45.062415   44755 controller.go:152] Unable ...Msg:
Hint: Some lines were ellipsized, use -l to show in full.
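As an extra sanity check, the apiserver's health endpoint can be probed directly (a sketch; with the default RBAC bindings /healthz should be readable even without client credentials, hence the -k):

# run on k8s-master1
curl -k https://192.168.54.101:6443/healthz
# expected output: ok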

5.5 Deploy kube-controller-manager

5.5.1 Create the Configuration File

# run on k8s-master1
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF

The configuration file /opt/kubernetes/cfg/kube-controller-manager.conf contains all of the kube-controller-manager startup parameters; the main options are set in the KUBE_CONTROLLER_MANAGER_OPTS variable:

  • --v: log verbosity level.

  • --log-dir: log directory.

  • --logtostderr: set to false to write logs to files instead of stderr.

  • --kubeconfig: the kubeconfig file used to connect to the apiserver.

  • --leader-elect: enables automatic leader election when several replicas of this component run (HA).

  • --cluster-signing-cert-file: the CA used to sign kubelet certificates automatically; must match the CA used by the apiserver.

  • --cluster-signing-key-file: the CA key used to sign kubelet certificates automatically; must match the CA used by the apiserver.

The kube-controller-manager.kubeconfig generated in 5.5.2 will end up looking like this:

[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t
    server: https://192.168.54.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-controller-manager
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-controller-manager
  user:
    client-certificate-data: LS0t
    client-key-data: LS0t

5.5.2 Generate the kubeconfig File

Generate the kube-controller-manager certificate:

# run on k8s-master1
# switch to the working directory
cd ~/TLS/k8s

# create the certificate signing request file
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# generate the certificate
# run on k8s-master1
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

This produces kube-controller-manager.csr, kube-controller-manager-key.pem and kube-controller-manager.pem.

# run on k8s-master1
[root@k8s-master1 k8s]# ls kube-controller-manager*
kube-controller-manager.csr       kube-controller-manager-key.pem
kube-controller-manager-csr.json  kube-controller-manager.pem

Generate the kubeconfig file (these are shell commands, run them directly in a shell):

# run on k8s-master1
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.54.101:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# this produces the kube-controller-manager.kubeconfig file

5.5.3 Manage kube-controller-manager with systemd

# run on k8s-master1
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

The kube-controller-manager service depends on the kube-apiserver service, so the systemd unit file /usr/lib/systemd/system/kube-controller-manager.service can optionally declare that dependency:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
# k8s-master1节点执行
[root@k8s-master1 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── kube-controller-manager.service
├── etcd.service
└── ......

5.5.4 Start kube-controller-manager and Enable It at Boot

# run on k8s-master1
systemctl daemon-reload
systemctl start kube-controller-manager 
systemctl enable kube-controller-manager
systemctl status kube-controller-manager
# k8s-master1节点执行
[root@k8s-master1 k8s]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 13:55:33 CST; 11s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 46929 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─46929 /opt/kubernetes/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernete...

12月 05 13:55:33 k8s-master1 systemd[1]: Started Kubernetes Controller Manager.
12月 05 13:55:34 k8s-master1 kube-controller-manager[46929]: E1205 13:55:34.773588   46929 core.go:232] faile...ded
Hint: Some lines were ellipsized, use -l to show in full.

5.6 Deploy kube-scheduler

5.6.1 Create the Configuration File

# run on k8s-master1
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF

The configuration file /opt/kubernetes/cfg/kube-scheduler.conf contains all of the kube-scheduler startup parameters; the main options are set in the KUBE_SCHEDULER_OPTS variable:

  • --logtostderr: set to false to write logs to files instead of stderr.

  • --log-dir: log directory.

  • --v: log verbosity level.

  • --kubeconfig: the kubeconfig file used to connect to the apiserver; it is built the same way as the one used by kube-controller-manager.

  • --leader-elect: enables automatic leader election when several replicas of this component run (HA).

The kube-scheduler.kubeconfig generated in 5.6.2 will end up looking like this:

[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/kube-scheduler.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t
    server: https://192.168.54.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-scheduler
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-scheduler
  user:
    client-certificate-data: LS0t
    client-key-data: LS0t

5.6.2 Generate the kubeconfig File

Generate the kube-scheduler certificate:

# run on k8s-master1
# switch to the working directory
cd ~/TLS/k8s

# create the certificate signing request file
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# run on k8s-master1
# generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

This produces kube-scheduler.csr, kube-scheduler-key.pem and kube-scheduler.pem.

# run on k8s-master1
[root@k8s-master1 k8s]# ls kube-scheduler*
kube-scheduler.csr  kube-scheduler-csr.json  kube-scheduler-key.pem  kube-scheduler.pem

Generate the kubeconfig file:

# run on k8s-master1
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.54.101:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# this produces the kube-scheduler.kubeconfig file

5.6.3 Manage kube-scheduler with systemd

# run on k8s-master1
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

The kube-scheduler service also depends on the kube-apiserver service, so the systemd unit file /usr/lib/systemd/system/kube-scheduler.service can optionally declare the dependency:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
# k8s-master1节点执行
[root@k8s-master1 k8s]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── kube-controller-manager.service
├── kube-scheduler.service
├── etcd.service
└── ......

5.6.4 Start kube-scheduler and Enable It at Boot

# run on k8s-master1
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler
# k8s-master1节点执行
[root@k8s-master1 k8s]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 14:03:18 CST; 6s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 49798 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─49798 /opt/kubernetes/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --...

12月 05 14:03:18 k8s-master1 systemd[1]: Started Kubernetes Scheduler.

Once kube-apiserver, kube-controller-manager and kube-scheduler are configured, start the three services in order with systemctl start and add them to the boot sequence with systemctl enable (skip this if you have already done it above):

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

Use systemctl status <service_name> to verify each service; "running" means it started successfully.

At this point all of the services required on the master are up.

5.6.5 Check Cluster Status

Generate the certificate kubectl uses to connect to the cluster:

# run on k8s-master1
# switch to the working directory
cd ~/TLS/k8s

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# k8s-master1节点执行
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

This produces admin.csr, admin-key.pem and admin.pem.

# run on k8s-master1
[root@k8s-master1 k8s]# ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

Generate the kubeconfig file:

# run on k8s-master1
mkdir /root/.kube

KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.54.101:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# this produces the /root/.kube/config file

Check the current status of the cluster components with kubectl:

# run on k8s-master1
[root@k8s-master1 k8s]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

Output like the above means the master components are running normally.

5.6.6 Authorize the kubelet-bootstrap User to Request Certificates

# run on k8s-master1
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
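A quick optional check that the binding exists:

# run on k8s-master1
kubectl get clusterrolebinding kubelet-bootstrap -o wide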

Next, install the kubelet and kube-proxy services on the Nodes.

6. Deploy the Worker Nodes

The Docker daemon must already be installed and running on every worker node; see the Docker documentation at http://www.docker.com for installation details.

The steps below are still performed on the master node, which acts both as a master and as a worker node.

The worker-node components are kubelet and kube-proxy.

6.1 Create the Working Directories and Copy the Binaries

Note: create the working directories on every worker node.

# run on k8s-master1, k8s-node1 and k8s-node2
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Copy kubelet and kube-proxy from the master's kubernetes-server package to all worker nodes:

# run on k8s-master1
# from inside the kubernetes-server package directory
#!/bin/bash 
cd ~/kubernetes/server/bin
for i in {1..3}
do
scp kubelet kube-proxy root@192.168.54.10$i:/opt/kubernetes/bin/
done

6.2 Deploy kubelet

6.2.1 Create the Configuration File

# run on k8s-master1
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
# kubelet.kubeconfig is generated automatically once the node's certificate request is approved

The configuration file /opt/kubernetes/cfg/kubelet.conf contains all of the kubelet startup parameters; the main options are set in the KUBELET_OPTS variable:

  • --logtostderr: set to false to write logs to files instead of stderr.

  • --log-dir: log directory.

  • --v: log verbosity level.

  • --hostname-override: this Node's display name, unique within the cluster (must not be repeated).

  • --network-plugin: enables CNI.

  • --kubeconfig: initially an empty path; the file is generated automatically and is then used to connect to the apiserver. It ends up looking like this:

# k8s-master1节点执行
[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t
    server: https://192.168.54.101:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /opt/kubernetes/ssl/kubelet-client-current.pem
    client-key: /opt/kubernetes/ssl/kubelet-client-current.pem

  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver.

  • --config: path to the kubelet configuration (parameters) file.

  • --cert-dir: directory for the kubelet certificates.

  • --pod-infra-container-image: the image for the pod infrastructure (pause) container that holds the Pod's network namespace.

6.2.2 The kubelet Parameters File

# run on k8s-master1
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

6.2.3 Generate the Bootstrap kubeconfig for the kubelet's First Join

# run on k8s-master1
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.54.101:6443" # apiserver IP:PORT
TOKEN="4136692876ad4b01bb9dd0988480ebba" # must match the token in /opt/kubernetes/cfg/token.csv

# generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# this produces the bootstrap.kubeconfig file

6.2.4 Manage the kubelet with systemd

# run on k8s-master1
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

The kubelet service depends on the Docker service, so the systemd unit file /usr/lib/systemd/system/kubelet.service can optionally declare the dependency and a working directory:

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet

Here WorkingDirectory is the directory where the kubelet stores its data; it must be created before the kubelet service is started.

# k8s-master1节点执行
[root@k8s-master1 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kubelet.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── kubelet-client-2022-12-05-14-20-15.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2022-12-05-14-20-15.pem
    ├── kubelet.crt
    ├── kubelet.key
    ├── server-key.pem
    └── server.pem
  
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── kube-controller-manager.service
├── kube-scheduler.service
├── etcd.service
├── kubelet.service
└── ......

6.2.5 Start the kubelet and Enable It at Boot

# run on k8s-master1
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
# k8s-master1节点执行
[root@k8s-master1 bin]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 14:18:27 CST; 4s ago
 Main PID: 55291 (kubelet)
    Tasks: 9
   Memory: 25.8M
   CGroup: /system.slice/kubelet.service
           └─55291 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostnam...

12月 05 14:18:27 k8s-master1 systemd[1]: Started Kubernetes Kubelet.

6.2.6 Approve the kubelet Certificate Request and Join the Cluster

# run on k8s-master1
# list pending kubelet certificate requests
[root@k8s-master1 k8s]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8   28s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# run on k8s-master1
# approve the kubelet node's request
# node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8 is the name listed above
[root@k8s-master1 k8s]# kubectl certificate approve node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8
certificatesigningrequest.certificates.k8s.io/node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8 approved
# run on k8s-master1
# check the request again
[root@k8s-master1 k8s]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8   2m31s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
# run on k8s-master1
# list the nodes
[root@k8s-master1 k8s]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   62s   v1.20.15

Note: the node shows NotReady because the network plugin has not been deployed yet.

6.3 Deploy kube-proxy

6.3.1 Create the Configuration File

# run on k8s-master1
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

The configuration file /opt/kubernetes/cfg/kube-proxy.conf contains all of the kube-proxy startup parameters; the main options are set in the KUBE_PROXY_OPTS variable:

  • --logtostderr: set to false to write logs to files instead of stderr.
  • --log-dir: log directory.
  • --v: log verbosity level.
  • --config: path to the kube-proxy parameters file.

6.3.2 The kube-proxy Parameters File

# run on k8s-master1
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.244.0.0/16
EOF

6.3.3 Generate the kube-proxy Certificate

# run on k8s-master1
# switch to the working directory
cd ~/TLS/k8s

# create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# generate the certificate
# run on k8s-master1
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

This produces kube-proxy.csr, kube-proxy-key.pem and kube-proxy.pem.

# run on k8s-master1
[root@k8s-master1 k8s]# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

6.3.4 Generate the kube-proxy.kubeconfig File

# run on k8s-master1
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.54.101:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
# this produces the kube-proxy.kubeconfig file

6.3.5 Manage kube-proxy with systemd

# run on k8s-master1
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

The kube-proxy service depends on the network target, so the systemd unit file /usr/lib/systemd/system/kube-proxy.service can optionally declare it explicitly:

[Unit]
Description=Kubernetes Proxy
After=network.target
Requires=network.service
# k8s-master1节点执行
[root@k8s-master1 k8s]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kubelet.kubeconfig
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── kubelet-client-2022-12-05-14-20-15.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2022-12-05-14-20-15.pem
    ├── kubelet.crt
    ├── kubelet.key
    ├── server-key.pem
    └── server.pem
    
    
[root@k8s-master1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── docker.service
├── kube-apiserver.service
├── kube-controller-manager.service
├── kube-scheduler.service
├── etcd.service
├── kubelet.service
├── kube-proxy.service
└── ......    

6.3.6 Start kube-proxy and Enable It at Boot

# run on k8s-master1
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
[root@k8s-master1 k8s]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 14:36:12 CST; 8s ago
 Main PID: 65578 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           └─65578 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --conf...

12月 05 14:36:12 k8s-master1 systemd[1]: Started Kubernetes Proxy.

6.4 Deploy the Network Component (Calico)

Calico is a pure layer-3 data center networking solution and is currently one of the mainstream network plugins for Kubernetes.
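The kubectl apply below assumes calico.yaml is already present on the node. One way to fetch a manifest is sketched here (the URL is illustrative; pick the Calico version you actually want from the project documentation):

# run on k8s-master1
wget https://docs.projectcalico.org/manifests/calico.yaml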

# run on k8s-master1
[root@k8s-master1 ~]# kubectl apply -f calico.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system

Adjust the Calico network ranges in calico.yaml (the pod CIDR must match the --cluster-cidr configured earlier):

"ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "true"
    },
    - name: IP
      value: "autodetect"

    - name: IP6
      value: "autodetect"

    # change this value so that it matches the pod CIDR configured earlier
    - name: CALICO_IPV4POOL_CIDR
      value: "172.16.0.0/16"

    - name: CALICO_IPV6POOL_CIDR
      value: "fc00::/48"

    - name: FELIX_IPV6SUPPORT
      value: "true"

Once the Calico Pods are all Running, the node becomes Ready.

# run on k8s-master1
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d55ffb795-ngzgz   1/1     Running   0          34m
calico-node-v9wtk                         1/1     Running   0          34m
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   62m   v1.20.15

6.5 Authorize the apiserver to Access the kubelet

# run on k8s-master1
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml

7. Add New Worker Nodes

7.1 Copy the Deployed Files to the New Nodes

On k8s-master1, copy the worker-node files to the new nodes 192.168.54.102 and 192.168.54.103:

# run on k8s-master1
#!/bin/bash 

for i in {2..3}; do scp -r /opt/kubernetes root@192.168.54.10$i:/opt/; done

for i in {2..3}; do scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.54.10$i:/usr/lib/systemd/system; done

for i in {2..3}; do scp -r /opt/kubernetes/ssl/ca.pem root@192.168.54.10$i:/opt/kubernetes/ssl/; done

7.2 Delete the Copied kubelet Certificate and kubeconfig Files

# run on k8s-node1 and k8s-node2
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files were generated automatically when the certificate request was approved and are unique to each Node, so the copied ones must be deleted.

# k8s-node1节点执行
[root@k8s-node1 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

[root@k8s-node1 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── kubelet.service
├── kube-proxy.service
└── ......      
# k8s-node2节点执行
[root@k8s-node2 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem
    
[root@k8s-node2 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── kubelet.service
├── kube-proxy.service
└── ......      

7.3 Change the Node Names

# run on k8s-node1 and k8s-node2 (a sed sketch that does the same edits follows below)
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
# or, on node2:
--hostname-override=k8s-node2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
# or, on node2:
hostnameOverride: k8s-node2
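A non-interactive equivalent of those edits (a sketch; the copied files still carry k8s-master1 as the override value):

# on k8s-node1
sed -i 's/--hostname-override=k8s-master1/--hostname-override=k8s-node1/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/hostnameOverride: k8s-master1/hostnameOverride: k8s-node1/' /opt/kubernetes/cfg/kube-proxy-config.yml
# on k8s-node2
sed -i 's/--hostname-override=k8s-master1/--hostname-override=k8s-node2/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/hostnameOverride: k8s-master1/hostnameOverride: k8s-node2/' /opt/kubernetes/cfg/kube-proxy-config.yml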

7.4 Start the Services and Enable Them at Boot

# run on k8s-node1 and k8s-node2
systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
systemctl status kubelet kube-proxy
# k8s-node1节点执行
[root@k8s-node1 ~]# systemctl status kubelet kube-proxy
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 15:36:13 CST; 7min ago
 Main PID: 18510 (kubelet)
    Tasks: 14
   Memory: 49.8M
   CGroup: /system.slice/kubelet.service
           └─18510 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-node1 --network-plugin=cni --ku...

12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.443830   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.443843   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.443951   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.443961   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444157   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444181   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444411   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444427   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444723   18510 driver-call.go:266] Failed to unmarshal output for command: init, output: ""...SON input
12月 05 15:43:25 k8s-node1 kubelet[18510]: E1205 15:43:25.444737   18510 plugins.go:747] Error dynamically probing plugins: Error creating Flexvolume...SON input

● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 15:36:13 CST; 7min ago
 Main PID: 18516 (kube-proxy)
    Tasks: 7
   Memory: 15.5M
   CGroup: /system.slice/kube-proxy.service
           └─18516 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-proxy-config.yml

12月 05 15:36:13 k8s-node1 systemd[1]: Started Kubernetes Proxy.
12月 05 15:36:13 k8s-node1 kube-proxy[18516]: E1205 15:36:13.717137   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:14 k8s-node1 kube-proxy[18516]: E1205 15:36:14.769341   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:16 k8s-node1 kube-proxy[18516]: E1205 15:36:16.907789   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:21 k8s-node1 kube-proxy[18516]: E1205 15:36:21.350694   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:30 k8s-node1 kube-proxy[18516]: E1205 15:36:30.701762   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
12月 05 15:36:47 k8s-node1 kube-proxy[18516]: E1205 15:36:47.323079   18516 node.go:161] Failed to retrieve node info: nodes "k8s-node1" not found
Hint: Some lines were ellipsized, use -l to show in full.
# k8s-node2节点执行
[root@k8s-node2 ~]# systemctl status kubelet kube-proxy
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 15:36:16 CST; 7min ago
 Main PID: 39153 (kubelet)
    Tasks: 14
   Memory: 50.8M
   CGroup: /system.slice/kubelet.service
           └─39153 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-node2 --network-...

12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.898329   39153 remote_image.go:113] PullImage "calico/node:v3.13.1" from image servic...
12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.898365   39153 kuberuntime_image.go:51] Pull image "calico/node:v3.13.1" failed: rpc ...
12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.898507   39153 kuberuntime_manager.go:829] container &Container{Name:calico-n...e:true,V
12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.898540   39153 pod_workers.go:191] Error syncing pod 00c4d7e7-b2de-4d71-85a1-8e021450...
12月 05 15:42:02 k8s-node2 kubelet[39153]: E1205 15:42:02.978288   39153 pod_workers.go:191] Error syncing pod 00c4d7e7-b2de-4d71-85a1-8e021450...
12月 05 15:43:04 k8s-node2 kubelet[39153]: E1205 15:43:04.378277   39153 remote_image.go:113] PullImage "calico/node:v3.13.1" from image servic...
12月 05 15:43:04 k8s-node2 kubelet[39153]: E1205 15:43:04.378299   39153 kuberuntime_image.go:51] Pull image "calico/node:v3.13.1" failed: rpc ...
12月 05 15:43:04 k8s-node2 kubelet[39153]: E1205 15:43:04.378414   39153 kuberuntime_manager.go:829] container &Container{Name:calico-n...e:true,V
12月 05 15:43:04 k8s-node2 kubelet[39153]: E1205 15:43:04.378437   39153 pod_workers.go:191] Error syncing pod 00c4d7e7-b2de-4d71-85a1-8e021450...
12月 05 15:43:18 k8s-node2 kubelet[39153]: E1205 15:43:18.399947   39153 pod_workers.go:191] Error syncing pod 00c4d7e7-b2de-4d71-85a1-8e021450...

● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 15:36:16 CST; 7min ago
 Main PID: 39163 (kube-proxy)
    Tasks: 7
   Memory: 18.2M
   CGroup: /system.slice/kube-proxy.service
           └─39163 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-pro...

12月 05 15:36:16 k8s-node2 systemd[1]: Started Kubernetes Proxy.
12月 05 15:36:17 k8s-node2 kube-proxy[39163]: E1205 15:36:17.031550   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:18 k8s-node2 kube-proxy[39163]: E1205 15:36:18.149111   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:20 k8s-node2 kube-proxy[39163]: E1205 15:36:20.398528   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:25 k8s-node2 kube-proxy[39163]: E1205 15:36:25.009895   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:33 k8s-node2 kube-proxy[39163]: E1205 15:36:33.635518   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
12月 05 15:36:50 k8s-node2 kube-proxy[39163]: E1205 15:36:50.280862   39163 node.go:161] Failed to retrieve node info: nodes "k8s-node2" not found
Hint: Some lines were ellipsized, use -l to show in full.

7.5 在master上同意新的Node kubelet证书申请

# k8s-master1节点执行
# 查看证书请求
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-VRB35635LOQVmbSE1f-dH4ZTQ1DBrFjhv5lU_PjSkgU   98s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-Ya9T7F0RFoaUI20J2SBOiQm7PyYY3BJ8Q46Pm4Vqld8   79m    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-dsFFKL7woGXhMoA6_VNw4BbE2R0XQqr3d4wXXrI_7jI   102s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# k8s-master1节点执行
# 同意node加入
[root@k8s-master1 ~]# kubectl certificate approve node-csr-VRB35635LOQVmbSE1f-dH4ZTQ1DBrFjhv5lU_PjSkgU
certificatesigningrequest.certificates.k8s.io/node-csr-VRB35635LOQVmbSE1f-dH4ZTQ1DBrFjhv5lU_PjSkgU approved

[root@k8s-master1 ~]# kubectl certificate approve node-csr-dsFFKL7woGXhMoA6_VNw4BbE2R0XQqr3d4wXXrI_7jI
certificatesigningrequest.certificates.k8s.io/node-csr-dsFFKL7woGXhMoA6_VNw4BbE2R0XQqr3d4wXXrI_7jI approved
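
If several nodes send their bootstrap CSRs at roughly the same time, approving each one by name gets tedious. A hedged one-liner for lab use (on a real cluster, review the pending list before approving anything):

# k8s-master1节点执行
# Approve every kubelet bootstrap CSR that is still Pending
kubectl get csr | awk '/Pending/{print $1}' | xargs -r kubectl certificate approve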

7.6 查看Node状态(要稍等会才会变成ready,会下载一些初始化镜像)

# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   81m    v1.20.15
k8s-node1     Ready    <none>   98s    v1.20.15
k8s-node2     Ready    <none>   118s   v1.20.15
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get pods  --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-d55ffb795-ngzgz   1/1     Running   0          62m
kube-system   calico-node-fr5fq                         1/1     Running   0          9m46s
kube-system   calico-node-v9wtk                         1/1     Running   0          62m
kube-system   calico-node-zp6cz                         1/1     Running   0          10m

说明:其他节点同上,至此,3 个节点的集群搭建完成。
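
The ROLES column above shows <none> because a binary deployment does not apply the role labels that kubeadm would. If you want kubectl get nodes to display roles, the labels below are a purely cosmetic, optional addition (the label keys are the conventional ones, not something this article configures):

# k8s-master1节点执行
# Optional: add role labels so the ROLES column is no longer <none>
kubectl label node k8s-master1 node-role.kubernetes.io/master=
kubectl label node k8s-node1 node-role.kubernetes.io/worker=
kubectl label node k8s-node2 node-role.kubernetes.io/worker=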

8、部署Dashboard和CoreDNS

8.1 部署Dashboard(k8s-master1)

# k8s-master1节点执行
# 部署
[root@k8s-master1 ~]# kubectl apply -f kubernetes-dashboard.yaml
# k8s-master1节点执行
# 查看部署情况
[root@k8s-master1 ~]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7b59f7d4df-fbkww   1/1     Running   0          7m53s
pod/kubernetes-dashboard-74d688b6bc-tzbjb        1/1     Running   0          7m53s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.75    <none>        8000/TCP        7m53s
service/kubernetes-dashboard        NodePort    10.0.0.3     <none>        443:31856/TCP   7m53s

创建 service account 并绑定默认 cluster-admin 管理员集群角色。

# k8s-master1节点执行
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master1 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-trzcf
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: f615afd7-310d-45b9-aadf-24c44591e613

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjloWmk2alpOY0JPMkFHOFEwcGVEdGdxQjJzMnYtbXU1Xy14ckJfd0FTbEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdHJ6Y2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjYxNWFmZDctMzEwZC00NWI5LWFhZGYtMjRjNDQ1OTFlNjEzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.KO1Lw8rtZxDqgA2NWcUU8yaCjtisuJ-xTGayyAHDM7CWy9rq3GSmRW397ExFTsazu572HvoDSUHcvCUCQFXBMuUUa0qxVqWzuAktUsVleIPl3ch32B9oudCDYcxAlZhc7C_qDa69Id9wEkicQTGPowWnTL0SJGhSvwt1Q_po5EjyUNTrXzAW96yPF6UQ0bb4379m1hKp8FIE05c9kPju9VipkWXmJxYfn9kzXfRpLnVO9Ly-QNuCt-umJGTs2aRfwy_h7bVwBtODlbZTxQrtDc21efXmVXEeXAB4yCgmAbWCXbPDNOpUpwSsVAVyl44JOD4Vnk8DqWt0Ltxa-9evIA

访问地址:https://NodeIP:31856

使用输出的 token 登陆 Dashboard (如访问提示 https 异常,可使用火狐浏览器)。
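
If the token is needed again later, it does not have to be copied out of the describe output each time; a hedged jsonpath lookup works as well (the secret name is auto-generated, hence the nested query):

# k8s-master1节点执行
# Print only the dashboard-admin token
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d && echo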

(Two screenshots omitted here, presumably the Dashboard token login page and the Dashboard overview after logging in; originals at …/…/images/Kubernetes/0169.png and …/…/images/Kubernetes/0170.png.)

8.2 部署CoreDNS(k8s-master1)

CoreDNS 主要用于集群内部 Service 名称解析。

# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl apply -f coredns.yaml 

[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-d55ffb795-ngzgz   1/1     Running   1          105m
calico-node-fr5fq                         1/1     Running   1          53m
calico-node-v9wtk                         1/1     Running   1          105m
calico-node-zp6cz                         1/1     Running   1          53m
coredns-6cc56c94bd-jjjq6                  1/1     Running   0          33s

测试解析是否正常:

# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # ns
nsenter   nslookup
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # exit

至此一个单 Master 的 k8s 节点就已经完成了。
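
Before moving on to the second master, one sanity check is worth doing: the clusterDNS address in the kubelet configuration must match the ClusterIP of the kube-dns Service created by coredns.yaml, otherwise Pods will fail name resolution even though CoreDNS itself shows Running. A minimal check, using this deployment's file layout:

# k8s-master1节点执行
# ClusterIP of the DNS Service (10.0.0.2 in this deployment)
kubectl get svc kube-dns -n kube-system
# DNS address the kubelet hands to every Pod; it must match the ClusterIP above
grep -A1 clusterDNS /opt/kubernetes/cfg/kubelet-config.yml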

9、增加master节点(k8s-master2)(高可用架构)

Note: as a container cluster system, Kubernetes already provides application-level high availability on its own: health checks plus restart policies give Pods self-healing, the scheduler keeps the desired number of replicas distributed across Nodes, and Pods are re-created on healthy Nodes when a Node fails.

For the cluster itself, high availability has two further aspects: the etcd database and the Kubernetes master components. etcd is already highly available because we built it as a 3-node cluster, so this section covers making the master highly available.

The master is the control centre of the cluster and keeps it in the desired state by constantly communicating with the kubelet and kube-proxy on every worker node. If the master fails, the cluster can no longer be managed through kubectl or the API. The master runs three services: kube-apiserver, kube-controller-manager and kube-scheduler. The controller-manager and scheduler already achieve high availability through their built-in leader election, so master HA mainly comes down to kube-apiserver. Since kube-apiserver serves an HTTP API, it can be made highly available in the same way as any web service: put a load balancer in front of it, and it can also be scaled horizontally (a minimal sketch of such a proxy follows at the end of this introduction).

Note: we now add one more server, 192.168.54.104, as k8s-master2. Everything on k8s-master2 is configured exactly like the already deployed k8s-master1, so we simply copy all the Kubernetes files from k8s-master1 over, change the server IP and hostname in the configuration, and start the services.
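
A load balancer in front of both apiservers is not part of the steps that follow, but since the paragraph above relies on the idea, here is one minimal, illustrative way to do it with nginx's stream module. Everything in this block is an assumption rather than part of the original article: the nginx host, the listen port 16443, and nginx being built with the stream module.

# Illustrative only, not executed anywhere in this article
# A minimal nginx.conf that forwards TCP 16443 to both kube-apiservers
cat > /etc/nginx/nginx.conf <<'EOF'
events { worker_connections 1024; }
stream {
    upstream k8s-apiserver {
        server 192.168.54.101:6443;   # k8s-master1
        server 192.168.54.104:6443;   # k8s-master2
    }
    server {
        listen 16443;                 # kubeconfigs would then point here instead of a single master
        proxy_pass k8s-apiserver;
    }
}
EOF
nginx -t && systemctl restart nginx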

9.1 安装Docker(k8s-master1)

# k8s-master1节点执行
#!/bin/bash
scp /usr/bin/docker* root@192.168.54.104:/usr/bin
scp /usr/bin/runc root@192.168.54.104:/usr/bin
scp /usr/bin/containerd* root@192.168.54.104:/usr/bin
scp /usr/lib/systemd/system/docker.service root@192.168.54.104:/usr/lib/systemd/system
scp -r /etc/docker root@192.168.54.104:/etc

9.2 启动Docker、设置开机自启(k8s-master2)

# k8s-master2节点执行
systemctl daemon-reload
systemctl start docker
systemctl enable docker
systemctl status docker
# k8s-master2节点执行
[root@k8s-master2 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:49:27 CST; 21s ago
     Docs: https://docs.docker.com
 Main PID: 17089 (dockerd)
   CGroup: /system.slice/docker.service
           ├─17089 /usr/bin/dockerd --selinux-enabled=false --insecure-registry=127.0.0.1
           └─17099 containerd --config /var/run/docker/containerd/containerd.toml --log-level info

12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.608063063+08:00" level=info msg="sche...grpc
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.608073300+08:00" level=info msg="ccRe...grpc
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.608080626+08:00" level=info msg="Clie...grpc
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.645248247+08:00" level=info msg="Load...rt."
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.792885392+08:00" level=info msg="Defa...ess"
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.848943865+08:00" level=info msg="Load...ne."
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.890760085+08:00" level=info msg="Dock...03.9
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.891223273+08:00" level=info msg="Daem...ion"
12月 05 16:49:27 k8s-master2 dockerd[17089]: time="2022-12-05T16:49:27.909746131+08:00" level=info msg="API ...ock"
12月 05 16:49:27 k8s-master2 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

9.3 创建etcd证书目录(k8s-master2)

# k8s-master2节点执行
mkdir -p /opt/etcd/ssl

9.4 拷贝文件(k8s-master1)

拷贝 k8s-master1 上所有 k8s 文件和 etcd 证书到 k8s-master2:

# k8s-master1节点执行
#!/bin/bash
scp -r /opt/kubernetes root@192.168.54.104:/opt
scp -r /opt/etcd/ssl root@192.168.54.104:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.54.104:/usr/lib/systemd/system
scp /usr/bin/kubectl  root@192.168.54.104:/usr/bin
scp -r ~/.kube root@192.168.54.104:~

9.5 删除证书(k8s-master2)

Delete the kubelet kubeconfig and certificate files that were just copied over: they were issued to k8s-master1 during its TLS bootstrap, so k8s-master2 has to request its own certificate through a new CSR:

# k8s-master2节点执行
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*
# k8s-master2节点执行
[root@k8s-master2 ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   ├── kube-proxy
│   └── kube-scheduler
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   ├── kube-proxy.kubeconfig
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

[root@k8s-master2 ~]# tree /opt/etcd/
/opt/etcd/
└── ssl
    ├── ca-config.json
    ├── ca.csr
    ├── ca-csr.json
    ├── ca-key.pem
    ├── ca.pem
    ├── server.csr
    ├── server-csr.json
    ├── server-key.pem
    └── server.pem
   
[root@k8s-master2 ~]# tree /usr/lib/systemd/system/
/usr/lib/systemd/system/
├── ......
├── kube-apiserver.service
├── kube-controller-manager.service
├── kube-scheduler.service
├── kubelet.service
├── kube-proxy.service
└── ......   

9.6 修改配置文件和主机名(k8s-master2)

修改 apiserver、kubelet 和 kube-proxy 配置文件为本地 IP:

# k8s-master2节点执行
vi /opt/kubernetes/cfg/kube-apiserver.conf 
...
--bind-address=192.168.54.104 \
--advertise-address=192.168.54.104 \
...

vi /opt/kubernetes/cfg/kube-controller-manager.kubeconfig
server: https://192.168.54.104:6443

vi /opt/kubernetes/cfg/kube-scheduler.kubeconfig
server: https://192.168.54.104:6443

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2

vi ~/.kube/config
...
server: https://192.168.54.104:6443
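
The edits above are mechanical, so they can also be scripted. The sketch below assumes the copied files still carry k8s-master1's address (192.168.54.101) and hostname, and it deliberately avoids a blanket IP replace because kube-apiserver.conf may also reference 192.168.54.101 in --etcd-servers, which must keep pointing at the etcd node. Double-check the files afterwards either way:

# k8s-master2节点执行
cd /opt/kubernetes/cfg
sed -i 's#--bind-address=192.168.54.101#--bind-address=192.168.54.104#' kube-apiserver.conf
sed -i 's#--advertise-address=192.168.54.101#--advertise-address=192.168.54.104#' kube-apiserver.conf
sed -i 's#https://192.168.54.101:6443#https://192.168.54.104:6443#' \
    kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ~/.kube/config
sed -i 's#hostname-override=k8s-master1#hostname-override=k8s-master2#' kubelet.conf
sed -i 's#hostnameOverride: k8s-master1#hostnameOverride: k8s-master2#' kube-proxy-config.yml
# Spot-check before starting the services
grep -n '192.168.54.104\|k8s-master2' kube-apiserver.conf kubelet.conf kube-proxy-config.yml ~/.kube/config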

9.7 启动并设置开机自启(k8s-master2)

# k8s-master2节点执行
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
[root@k8s-master2 ~]# systemctl status kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:44 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 20411 (kube-apiserver)
    Tasks: 10
   Memory: 317.4M
   CGroup: /system.slice/kube-apiserver.service
           └─20411 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.54...

12月 05 16:58:44 k8s-master2 systemd[1]: Started Kubernetes API Server.
12月 05 16:58:45 k8s-master2 kube-apiserver[20411]: E1205 16:58:45.980478   20411 instance.go:392] Could not construct pre-rendered resp...ot: api
12月 05 16:58:48 k8s-master2 kube-apiserver[20411]: E1205 16:58:48.994575   20411 controller.go:152] Unable to remove old endpoints from...rorMsg:

● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:44 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 20418 (kube-controller)
    Tasks: 7
   Memory: 25.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─20418 /opt/kubernetes/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect=true --ku...

12月 05 16:58:44 k8s-master2 systemd[1]: Started Kubernetes Controller Manager.

● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:44 CST; 28s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 20423 (kube-scheduler)
    Tasks: 9
   Memory: 18.9M
   CGroup: /system.slice/kube-scheduler.service
           └─20423 /opt/kubernetes/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --kubeconfig=/opt/...

12月 05 16:58:44 k8s-master2 systemd[1]: Started Kubernetes Scheduler.

● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:45 CST; 28s ago
 Main PID: 20429 (kubelet)
    Tasks: 8
   Memory: 28.2M
   CGroup: /system.slice/kubelet.service
           └─20429 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-master2 --networ...

12月 05 16:58:45 k8s-master2 systemd[1]: Started Kubernetes Kubelet.

● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2022-12-05 16:58:45 CST; 28s ago
 Main PID: 20433 (kube-proxy)
    Tasks: 7
   Memory: 14.7M
   CGroup: /system.slice/kube-proxy.service
           └─20433 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-pro...

12月 05 16:58:45 k8s-master2 systemd[1]: Started Kubernetes Proxy.
12月 05 16:58:45 k8s-master2 kube-proxy[20433]: E1205 16:58:45.262842   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
12月 05 16:58:46 k8s-master2 kube-proxy[20433]: E1205 16:58:46.362804   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
12月 05 16:58:48 k8s-master2 kube-proxy[20433]: E1205 16:58:48.490551   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
12月 05 16:58:52 k8s-master2 kube-proxy[20433]: E1205 16:58:52.634076   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
12月 05 16:59:01 k8s-master2 kube-proxy[20433]: E1205 16:59:01.559297   20433 node.go:161] Failed to retrieve node info: nodes "k8s-mast...t found
Hint: Some lines were ellipsized, use -l to show in full.

9.8 查看集群状态(k8s-master2)

# k8s-master2节点执行
[root@k8s-master2 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
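
kubectl get cs only reports the scheduler, controller-manager and etcd. To confirm that the apiserver on k8s-master2 itself (which ~/.kube/config now points at after section 9.6) is serving, its built-in health endpoint can be queried as well; a hedged extra check:

# k8s-master2节点执行
kubectl get --raw='/healthz?verbose' | tail -n 5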

9.9 审批kubelet证书申请(k8s-master1)

# 查看证书请求
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-CMPsAwf8hGyMEC205me9C5KXMkBthr8J1ihv67VLPMo   7m9s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# 同意请求
# k8s-master1节点执行
[root@k8s-master1 ~]# kubectl certificate approve node-csr-CMPsAwf8hGyMEC205me9C5KXMkBthr8J1ihv67VLPMo
certificatesigningrequest.certificates.k8s.io/node-csr-CMPsAwf8hGyMEC205me9C5KXMkBthr8J1ihv67VLPMo approved
# 查看Node
# k8s-master1节点执行
[root@k8s-master1 ~]#  kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   167m   v1.20.15
k8s-master2   Ready    <none>   78s    v1.20.15
k8s-node1     Ready    <none>   87m    v1.20.15
k8s-node2     Ready    <none>   88m    v1.20.15
# 查看Node
# k8s-master2节点执行
[root@k8s-master2 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   3h48m   v1.20.15
k8s-master2   Ready    <none>   62m     v1.20.15
k8s-node1     Ready    <none>   149m    v1.20.15
k8s-node2     Ready    <none>   149m    v1.20.15

At this point, a k8s cluster with two master nodes has been deployed.

By default the kubelet registers its Node with the master automatically. Checking the Nodes on the master, a status of Ready means the Node has registered successfully and is available.

Once every Node reports Ready, the Kubernetes cluster is up and running. From here you can create Pods, Deployments, Services and other resource objects to deploy containerised applications.
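
As a final smoke test, a throwaway Deployment plus NodePort Service will confirm that scheduling, image pulls and Service proxying all work end to end. A minimal sketch (the names and image tag are illustrative, not part of the original article):

# k8s-master1节点执行
kubectl create deployment web --image=nginx:1.21
kubectl scale deployment web --replicas=2
kubectl expose deployment web --port=80 --type=NodePort
kubectl get pods -o wide -l app=web        # which nodes the replicas landed on
kubectl get svc web                        # note the NodePort, then: curl http://<NodeIP>:<NodePort>
# Clean up when done
kubectl delete svc,deployment web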
