3. Binary High-Availability Installation of a Kubernetes 1.23 Cluster (Production-Grade)


Binary high-availability installation of a Kubernetes cluster (production-grade)

This document applies to Kubernetes 1.23.

Nodes

Etcd Cluster

etcd is the cluster datastore: every change Kubernetes makes is persisted in etcd.

If the cluster is large, it is recommended to run etcd on dedicated machines rather than on the master nodes.

Master nodes

A master node runs several key components:

All control-plane traffic goes through kube-apiserver.

kube-controller-manager is the cluster's controller.

kube-scheduler is the scheduler.

In production it is not recommended to also run node components on the master nodes.

Note: the Kubernetes project adopted inclusive naming and replaced the "master" label with "control-plane".

K8s Service network: 10.96.0.0/12

K8s Pod network: 172.16.0.0/12

Operations on all nodes

Hostnames, /etc/hosts entries, time synchronization, passwordless SSH, git, and Docker (a rough sketch of these prerequisites follows below).
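A minimal sketch of these prerequisites (not from the original session; the worker-node IPs are placeholders, and Docker is installed as described in the link further down):

# Run on every node, with the matching hostname:
hostnamectl set-hostname k8s-master01
cat >> /etc/hosts <<EOF
172.20.251.107 k8s-master01
172.20.251.108 k8s-master02
172.20.251.109 k8s-master03
<node01-ip>    k8s-node01
<node02-ip>    k8s-node02
EOF
systemctl enable --now chronyd                 # or any other time-sync daemon
# On master01 only: passwordless SSH to the other nodes
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh-copy-id $NODE; done
dnf -y install git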

Configure limits

[root@k8s-master01 ~]# ulimit -SHn 65535
[root@k8s-master01 ~]# cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655360
* hard nproc 655360
* soft memlock unlimited
* hard memlock unlimited
EOF
[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do 
> scp /etc/docker/daemon.json $NODE:/etc/docker/daemon.json;
> done
# Distribute the Docker daemon.json to all nodes

Basic Docker configuration

https://blog.csdn.net/llllyh812/article/details/124264385

Master nodes

[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin/ kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
# When an archive contains nested directories you don't want to keep, use the --strip-components option to drop the leading path components during extraction.
[root@k8s-master01 ~]# tar -zxvf etcd-v3.5.5-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin/ etcd-v3.5.5-linux-amd64/etcd{,ctl}
etcd-v3.5.5-linux-amd64/etcdctl
etcd-v3.5.5-linux-amd64/etcd
# A binary installation really just means putting the component binaries into the right directory; once they are in place, the "installation" is done.
[root@k8s-master01 ~]# MasterNodes='k8s-master02 k8s-master03'
[root@k8s-master01 ~]# WorkNodes='k8s-node01 k8s-node02'               
[root@k8s-master01 ~]# for NODE in $MasterNodes; do echo $NODE;scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
k8s-master02
kubelet                                                                                   100%  118MB  52.1MB/s   00:02    
kubectl                                                                                   100%   44MB  48.4MB/s   00:00    
kube-apiserver                                                                            100%  125MB  51.9MB/s   00:02    
kube-controller-manager                                                                   100%  116MB  45.8MB/s   00:02    
kube-scheduler                                                                            100%   47MB  45.2MB/s   00:01    
kube-proxy                                                                                100%   42MB  53.1MB/s   00:00    
etcd                                                                                      100%   23MB  48.6MB/s   00:00    
etcdctl                                                                                   100%   17MB  53.0MB/s   00:00    
k8s-master03
kubelet                                                                                   100%  118MB  51.6MB/s   00:02    
kubectl                                                                                   100%   44MB  46.8MB/s   00:00    
kube-apiserver                                                                            100%  125MB  46.8MB/s   00:02    
kube-controller-manager                                                                   100%  116MB  45.2MB/s   00:02    
kube-scheduler                                                                            100%   47MB  50.6MB/s   00:00    
kube-proxy                                                                                100%   42MB  52.3MB/s   00:00    
etcd                                                                                      100%   23MB  53.5MB/s   00:00    
etcdctl                                                                                   100%   17MB  47.9MB/s   00:00    
[root@k8s-master01 ~]# for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/; done
kubelet                                                                                   100%  118MB  42.3MB/s   00:02    
kube-proxy                                                                                100%   42MB  54.8MB/s   00:00    
kubelet                                                                                   100%  118MB  47.1MB/s   00:02    
kube-proxy                                                                                100%   42MB  53.1MB/s   00:00 
# Send the components to the other nodes
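As a quick sanity check (not part of the original session), the copied binaries can be verified on each node; the loops reuse the $MasterNodes and $WorkNodes variables defined above:

kube-apiserver --version && kubelet --version && etcdctl version
for NODE in $MasterNodes; do ssh $NODE "kube-apiserver --version; etcdctl version | head -1"; done
for NODE in $WorkNodes; do ssh $NODE "kubelet --version; kube-proxy --version"; done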

Fetch the installation files

Operations on all nodes
[root@k8s-master01 ~]# mkdir -p /opt/cni/bin
[root@k8s-master01 ~]# cd /opt/cni/bin/
# [root@k8s-master01 bin]# git clone https://github.com/forbearing/k8s-ha-install.git
# The repository is large; just copy the folders you actually need instead.
[root@k8s-master01 bin]# mv /root/pki.zip /opt/cni/bin/
[root@k8s-master01 bin]# unzip pki.zip 
... (not written up yet)

Generating certificates (important)

This is the most critical part of a binary installation; a single mistake ruins everything, so make sure every step is correct.

The CSR files used to generate certificates are certificate signing requests; they configure things such as domain names, company, and organizational unit.

https://github.com/cloudflare/cfssl/releases

[root@k8s-master01 ~]# mv cfssl_1.6.3_linux_amd64 /usr/local/bin/cfssl
[root@k8s-master01 ~]# mv cfssljson_1.6.3_linux_amd64 /usr/local/bin/cfssljson
[root@k8s-master01 ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson 
# Download the certificate-generation tools on master01
[root@k8s-master01 ~]# mkdir -p /etc/etcd/ssl
# Create the etcd certificate directory on all nodes
[root@k8s-master01 ~]# mkdir -p /etc/kubernetes/pki
# Create the Kubernetes directories on all nodes
[root@k8s-master01 ~]# cd /opt/cni/bin/pki/
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json |cfssljson -bare /etc/etcd/ssl/etcd-ca
2022/10/24 19:19:51 [INFO] generating a new CA key and certificate from CSR
2022/10/24 19:19:51 [INFO] generate received request
2022/10/24 19:19:51 [INFO] received CSR
2022/10/24 19:19:51 [INFO] generating key: rsa-2048
2022/10/24 19:19:52 [INFO] encoded CSR
2022/10/24 19:19:52 [INFO] signed certificate with serial number 141154029411162894054001938470301605874823228651
[root@k8s-master01 pki]# ls /etc/etcd/ssl
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem

[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,172.20.251.107,172.20.251.108,172.20.251.109 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

2022/10/24 19:24:45 [INFO] generate received request
2022/10/24 19:24:45 [INFO] received CSR
2022/10/24 19:24:45 [INFO] generating key: rsa-2048
2022/10/24 19:24:45 [INFO] encoded CSR
2022/10/24 19:24:45 [INFO] signed certificate with serial number 223282157701235776819254846371243906472259526809
[root@k8s-master01 pki]# ls /etc/etcd/ssl/
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem  etcd.csr  etcd-key.pem  etcd.pem
# The etcd CA certificate/key and the etcd server certificate signed by it have now been generated

[root@k8s-master01 pki]# for NODE in $MasterNodes; do
ssh $NODE "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
done
done
etcd-ca-key.pem                                                                           100% 1675   214.7KB/s   00:00    
etcd-ca.pem                                                                               100% 1318   128.3KB/s   00:00    
etcd-key.pem                                                                              100% 1679   139.5KB/s   00:00    
etcd.pem                                                                                  100% 1464    97.7KB/s   00:00    
etcd-ca-key.pem                                                                           100% 1675    87.8KB/s   00:00    
etcd-ca.pem                                                                               100% 1318   659.6KB/s   00:00    
etcd-key.pem                                                                              100% 1679   861.1KB/s   00:00    
etcd.pem                                                                                  100% 1464     1.1MB/s   00:00 
# Copy the certificates to the other master nodes
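Before moving on, the generated etcd certificate can be inspected with openssl (an optional check, not from the original session) to confirm its validity period and that all master hostnames/IPs appear in the SAN list:

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -dates
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"
# Expect the three master hostnames and their IPs plus 127.0.0.1 in the output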

Generate the Kubernetes certificates on Master01

[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
2022/10/24 20:08:22 [INFO] generating a new CA key and certificate from CSR
2022/10/24 20:08:22 [INFO] generate received request
2022/10/24 20:08:22 [INFO] received CSR
2022/10/24 20:08:22 [INFO] generating key: rsa-2048
2022/10/24 20:08:22 [INFO] encoded CSR
2022/10/24 20:08:22 [INFO] signed certificate with serial number 462801853285240841018202532125246688421494972307
# Generate the Kubernetes root CA certificate

[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,172.20.251.200,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,172.20.251.107,172.20.251.108,172.20.251.109 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
2022/10/24 20:12:12 [INFO] generate received request
2022/10/24 20:12:12 [INFO] received CSR
2022/10/24 20:12:12 [INFO] generating key: rsa-2048
2022/10/24 20:12:12 [INFO] encoded CSR
2022/10/24 20:12:12 [INFO] signed certificate with serial number 690736382497214625223151514350842894019030617359
# 10.96.0.1 is the first IP of the Kubernetes Service network (10.96.0.0/12); if you change the Service network, change this address to match your setting
# A few extra IP addresses or domain names can be reserved here
# Generate the apiserver certificate
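A similar optional check (not from the original session) confirms that the apiserver certificate chains to the new root CA and carries the expected SANs, including the first Service IP 10.96.0.1 and the VIP:

openssl verify -CAfile /etc/kubernetes/pki/ca.pem /etc/kubernetes/pki/apiserver.pem
openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"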

[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
2022/10/24 20:15:35 [INFO] generating a new CA key and certificate from CSR
2022/10/24 20:15:35 [INFO] generate received request
2022/10/24 20:15:35 [INFO] received CSR
2022/10/24 20:15:35 [INFO] generating key: rsa-2048
2022/10/24 20:15:35 [INFO] encoded CSR
2022/10/24 20:15:35 [INFO] signed certificate with serial number 168281383231764970339015684919770971594394785122

# Generate the apiserver aggregation-layer (front-proxy) CA, used by the requestheader-client-xxx settings
# The certificates configured here are used to verify whether requests to aggregated APIs are legitimate
# requestheader-allowed-xxx: the aggregator checks whether the request headers are allowed
# It is fine if this is unclear for now; it will make sense once you learn the aggregation concepts later

[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
2022/10/24 20:33:22 [INFO] generate received request
2022/10/24 20:33:22 [INFO] received CSR
2022/10/24 20:33:22 [INFO] generating key: rsa-2048
2022/10/24 20:33:23 [INFO] encoded CSR
2022/10/24 20:33:23 [INFO] signed certificate with serial number 323590969709332384506410320558260817204509478345
2022/10/24 20:33:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# The apiserver-related certificates are now complete
# Next, generate the controller-manager certificate in the same way

[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
2022/10/24 20:36:46 [INFO] generate received request
2022/10/24 20:36:46 [INFO] received CSR
2022/10/24 20:36:46 [INFO] generating key: rsa-2048
2022/10/24 20:36:46 [INFO] encoded CSR
2022/10/24 20:36:46 [INFO] signed certificate with serial number 108710809086222047459631230378556745824189906638
2022/10/24 20:36:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
# No separate CA is generated for controller-manager; the root CA created earlier is used directly

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://172.20.251.107:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Cluster "kubernetes" set.
# set-cluster: define a cluster entry
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig 
Context "system:kube-controller-manager@kubernetes" created.
# set-context: define a context entry (a context ties a cluster to a user)

[root@k8s-master01 ~]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig 
User "system:kube-controller-manager" set.
# set-credentials: define a user entry

[root@k8s-master01 ~]# kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig 
Switched to context "system:kube-controller-manager@kubernetes".
# use-context: make this context the default
# The controller-manager certificate and kubeconfig are complete
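To see what such a kubeconfig ends up containing (optional, not from the original session); because --embed-certs=true was used, the certificate data is embedded and shown as redacted:

kubectl config view --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
kubectl config get-contexts --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig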

[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
2022/10/25 07:33:39 [INFO] generate received request
2022/10/25 07:33:39 [INFO] received CSR
2022/10/25 07:33:39 [INFO] generating key: rsa-2048
2022/10/25 07:33:39 [INFO] encoded CSR
2022/10/25 07:33:39 [INFO] signed certificate with serial number 61172664806377914143954086942177408078631179977
2022/10/25 07:33:39 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://172.20.251.107:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Cluster "kubernetes" set.

[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig 
User "system:kube-scheduler" set.

[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig 
Context "system:kube-scheduler@kubernetes" created.

[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig 
Switched to context "system:kube-scheduler@kubernetes".
# Generate the scheduler certificate and kubeconfig, following the same procedure as for controller-manager

[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
2022/10/25 07:43:32 [INFO] generate received request
2022/10/25 07:43:32 [INFO] received CSR
2022/10/25 07:43:32 [INFO] generating key: rsa-2048
2022/10/25 07:43:32 [INFO] encoded CSR
2022/10/25 07:43:32 [INFO] signed certificate with serial number 77202050989431331897938308725922479340036299536
2022/10/25 07:43:32 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://172.20.251.107:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master01 pki]# kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
User "kubernetes-admin" set.

[root@k8s-master01 pki]# kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
Context "kubernetes-admin@kubernetes" created.

[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
Switched to context "kubernetes-admin@kubernetes".
# Generate the admin certificate and kubeconfig in the same way

Since the certificates are all generated from .json CSR files, how does Kubernetes tell scheduler and admin apart?

[root@k8s-master01 pki]# cat admin-csr.json 
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
# CN=admin, O=system:masters
# ClusterRole cluster-admin --> ClusterRoleBinding cluster-admin --> group system:masters
# A ClusterRole is a cluster-wide role: essentially a bundle of permissions for operating the cluster
# Every user in the system:masters group therefore has full permission to operate the cluster
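The mapping comes from the certificate subject: the apiserver treats the certificate's CN as the username and each O as a group. An optional check (not from the original session) makes this visible:

openssl x509 -in /etc/kubernetes/pki/admin.pem -noout -subject      # O = system:masters, CN = admin
openssl x509 -in /etc/kubernetes/pki/scheduler.pem -noout -subject  # O and CN are both system:kube-scheduler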

[root@k8s-master01 pki]# cat scheduler-csr.json 
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
# Same pattern as above

# The kubelet certificate does not need to be configured manually; it can be issued automatically via TLS bootstrapping

[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
.........................................................+++++
.......................+++++
e is 65537 (0x010001)
# Create the ServiceAccount key pair (sa.key/sa.pub) -> secret
# Used later for signing and verifying ServiceAccount tokens

[root@k8s-master01 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
writing RSA key
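An optional consistency check (not from the original session): the public key derived from sa.key should be identical to sa.pub:

openssl rsa -in /etc/kubernetes/pki/sa.key -pubout 2>/dev/null | diff - /etc/kubernetes/pki/sa.pub && echo "sa.key and sa.pub match"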

[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do
for FILE in `ls /etc/kubernetes/pki | grep -v etcd`; do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
...

[root@k8s-master01 pki]# cat ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
# The CA signing configuration; 876000h is roughly 100 years
# Certificate generation is now complete
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr          apiserver.pem           controller-manager-key.pem  front-proxy-client.csr      scheduler.csr
admin-key.pem      ca.csr                  controller-manager.pem      front-proxy-client-key.pem  scheduler-key.pem
admin.pem          ca-key.pem              front-proxy-ca.csr          front-proxy-client.pem      scheduler.pem
apiserver.csr      ca.pem                  front-proxy-ca-key.pem      sa.key
apiserver-key.pem  controller-manager.csr  front-proxy-ca.pem          sa.pub
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/|wc -l
23
# List the certificate files

Kubernetes system component configuration

etcd configuration

The etcd configuration is largely identical on each node; just remember to change the hostname and IP addresses in each master node's etcd config.

Always run an odd number of etcd members, never an even number: quorum requires floor(n/2)+1 members, so an even count adds no extra failure tolerance and makes split-brain more likely.

Master01 node
[root@k8s-master01 ~]# unzip conf.zip 
[root@k8s-master01 ~]# cp conf/etcd/etcd.config.yaml 1.txt
[root@k8s-master01 ~]# cat 1.txt    
name: k8s-master01
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.20.251.107:2380'
listen-client-urls: 'https://172.20.251.107:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.20.251.107:2380'
advertise-client-urls: 'https://172.20.251.107:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://172.20.251.107:2380,k8s-master02=https://172.20.251.108:2380,k8s-master03=https://172.20.251.109:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

[root@k8s-master01 ~]# cp 1.txt /etc/etcd/etcd.config.yaml
cp: overwrite '/etc/etcd/etcd.config.yaml'? y
[root@k8s-master01 ~]# scp /etc/etcd/etcd.config.yaml k8s-master02:/root/1.txt
etcd.config.yaml                                                                          100% 1457   183.6KB/s   00:00    
[root@k8s-master01 ~]# scp /etc/etcd/etcd.config.yaml k8s-master03:/root/1.txt
etcd.config.yaml                                                                          100% 1457    48.2KB/s   00:00 
# Copy the etcd configuration file to the other master nodes
[root@k8s-master01 ~]# cp conf/etcd/etcd.service /usr/lib/systemd/system/etcd.service
cp: overwrite '/usr/lib/systemd/system/etcd.service'? y
[root@k8s-master01 ~]# cat /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yaml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
[root@k8s-master01 ~]# scp conf/etcd/etcd.service k8s-master02:/usr/lib/systemd/system/etcd.service
etcd.service                                                                              100%  307    26.1KB/s   00:00    
[root@k8s-master01 ~]# scp conf/etcd/etcd.service k8s-master03:/usr/lib/systemd/system/etcd.service
etcd.service                                                                              100%  307    21.4KB/s   00:00
# Create the etcd service unit on all master nodes and start it
[root@k8s-master01 ~]# mkdir /etc/kubernetes/pki/etcd
[root@k8s-master01 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl start etcd && systemctl enable etcd
Created symlink /etc/systemd/system/etcd3.service → /usr/lib/systemd/system/etcd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
# All master nodes: create the etcd certificate directory, link the certificates, and start etcd

[root@k8s-master01 ~]# export ETCDCTL_API=3
[root@k8s-master01 ~]# etcdctl --endpoints="172.20.251.109:2379,172.20.251.108:2379,172.20.251.107:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.20.251.109:2379 | 8c97fe1a4121e196 |   3.5.5 |   20 kB |      true |      false |         2 |          8 |                  8 |        |
| 172.20.251.108:2379 | 5bbeaed610eda0eb |   3.5.5 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
| 172.20.251.107:2379 | 5389bd6821125460 |   3.5.5 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
# Check the etcd cluster status
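Besides endpoint status, a health check can be run the same way (optional, not from the original session); all three endpoints should report healthy:

etcdctl --endpoints="172.20.251.107:2379,172.20.251.108:2379,172.20.251.109:2379" \
--cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
--cert=/etc/kubernetes/pki/etcd/etcd.pem \
--key=/etc/kubernetes/pki/etcd/etcd-key.pem \
endpoint health --write-out=table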
Master02 node
[root@k8s-master02 ~]# cat 1.txt 
name: k8s-master02
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.20.251.108:2380'
listen-client-urls: 'https://172.20.251.108:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.20.251.108:2380'
advertise-client-urls: 'https://172.20.251.108:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://172.20.251.107:2380,k8s-master02=https://172.20.251.108:2380,k8s-master03=https://172.20.251.109:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
[root@k8s-master02 ~]# cp 1.txt /etc/etcd/etcd.config.yaml
[root@k8s-master02 ~]# mkdir /etc/kubernetes/pki/etcd
[root@k8s-master02 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master02 ~]# systemctl daemon-reload
[root@k8s-master02 ~]# systemctl start etcd && systemctl enable etcd
Created symlink /etc/systemd/system/etcd3.service → /usr/lib/systemd/system/etcd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
# All master nodes: create the etcd certificate directory, link the certificates, and start etcd
Master03 node
[root@k8s-master03 ~]# cat 1.txt 
name: k8s-master03
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.20.251.109:2380'
listen-client-urls: 'https://172.20.251.109:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.20.251.109:2380'
advertise-client-urls: 'https://172.20.251.109:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://172.20.251.107:2380,k8s-master02=https://172.20.251.108:2380,k8s-master03=https://172.20.251.109:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
[root@k8s-master03 ~]# cp 1.txt /etc/etcd/etcd.config.yaml
cp: overwrite '/etc/etcd/etcd.config.yaml'? y
[root@k8s-master03 ~]# mkdir /etc/kubernetes/pki/etcd
[root@k8s-master03 ~]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master03 ~]# systemctl daemon-reload
[root@k8s-master03 ~]# systemctl start etcd && systemctl enable etcd
Created symlink /etc/systemd/system/etcd3.service → /usr/lib/systemd/system/etcd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
# All master nodes: create the etcd certificate directory, link the certificates, and start etcd

The environment changed because the cloud platform was reinstalled.

New environment (screenshot omitted): from here on, the node IPs move from 172.20.251.x to 172.20.252.x.

High-availability configuration

All master nodes
[root@k8s-master01 ~]# dnf -y install keepalived haproxy
# Install haproxy and keepalived on all master nodes
Master01 node
[root@k8s-master01 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak 
[root@k8s-master01 ~]# cp conf/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg
[root@k8s-master01 ~]# cat /etc/haproxy/haproxy.cfg
global
    maxconn  2000
    ulimit-n  16384
    log  127.0.0.1 local0 err
    stats timeout 30s

defaults
    log global
    mode  http
    option  httplog
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    timeout http-request 15s
    timeout http-keep-alive 15s

frontend k8s-master
    bind 0.0.0.0:8443
    bind 127.0.0.1:8443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master01    172.20.252.107:6443  check
    server k8s-master02    172.20.252.108:6443  check
    server k8s-master03    172.20.252.109:6443  check
[root@k8s-master01 ~]# scp /etc/haproxy/haproxy.cfg k8s-master02:/etc/haproxy/haproxy.cfg 
haproxy.cfg                                                                               100%  820     1.9MB/s   00:00    
[root@k8s-master01 ~]# scp /etc/haproxy/haproxy.cfg k8s-master03:/etc/haproxy/haproxy.cfg  
haproxy.cfg                                                                               100%  820     1.6MB/s   00:00
# Distribute the haproxy configuration to all masters
[root@k8s-master01 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@k8s-master01 ~]# cp conf/keepalived/keepalived.conf /etc/keepalived/keepalived.conf
[root@k8s-master01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 172.20.252.107
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8S_PASS
    }
    virtual_ipaddress {
        172.20.252.200
    }
    track_script {
        chk_apiserver
} }
[root@k8s-master01 ~]# scp /etc/keepalived/keepalived.conf k8s-master02:/etc/keepalived/keepalived.conf 
keepalived.conf                                                                           100%  555   575.6KB/s   00:00    
[root@k8s-master01 ~]# scp /etc/keepalived/keepalived.conf k8s-master03:/etc/keepalived/keepalived.conf  
keepalived.conf                                                                           100%  555   763.8KB/s   00:00 
[root@k8s-master01 ~]# cp conf/keepalived/check_apiserver.sh /etc/keepalived/check_apiserver.sh
[root@k8s-master01 ~]# cat /etc/keepalived/check_apiserver.sh    
#!/usr/bin/env bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    systemctl stop keepalived
    exit 1
else
    exit 0
fi
[root@k8s-master01 ~]# scp /etc/keepalived/check_apiserver.sh k8s-master02:/etc/keepalived/
check_apiserver.sh                                                                        100%  355   199.5KB/s   00:00    
[root@k8s-master01 ~]# scp /etc/keepalived/check_apiserver.sh k8s-master03:/etc/keepalived/
check_apiserver.sh                                                                        100%  355    38.3KB/s   00:00 
[root@k8s-master01 ~]# chmod +x /etc/keepalived/check_apiserver.sh
# Configure keepalived and its health-check script
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now haproxy
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
[root@k8s-master01 ~]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
# Start the services and enable them at boot
[root@k8s-master01 ~]# tail /var/log/messages    
Oct 27 01:33:56 k8s-master01 Keepalived_vrrp[19149]: Sending gratuitous ARP on eth0 for 172.20.252.200
Oct 27 01:33:56 k8s-master01 Keepalived_vrrp[19149]: Sending gratuitous ARP on eth0 for 172.20.252.200
Oct 27 01:33:56 k8s-master01 Keepalived_vrrp[19149]: Sending gratuitous ARP on eth0 for 172.20.252.200
Oct 27 01:33:56 k8s-master01 NetworkManager[1542]: <info>  [1666834436.8400] policy: set-hostname: current hostname was changed outside NetworkManager: 'k8s-master01'
Oct 27 01:34:01 k8s-master01 Keepalived_vrrp[19149]: (VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.20.252.200
Oct 27 01:34:01 k8s-master01 Keepalived_vrrp[19149]: Sending gratuitous ARP on eth0 for 172.20.252.200
Oct 27 01:34:01 k8s-master01 Keepalived_vrrp[19149]: Sending gratuitous ARP on eth0 for 172.20.252.200
Oct 27 01:34:01 k8s-master01 Keepalived_vrrp[19149]: Sending gratuitous ARP on eth0 for 172.20.252.200
Oct 27 01:34:01 k8s-master01 Keepalived_vrrp[19149]: Sending gratuitous ARP on eth0 for 172.20.252.200
Oct 27 01:34:01 k8s-master01 Keepalived_vrrp[19149]: Sending gratuitous ARP on eth0 for 172.20.252.200
# Check the logs: keepalived is announcing the VIP 172.20.252.200 via gratuitous ARP
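An optional failover test (not from the original session; assumptions: interface eth0, check interval 5s with fall 2): stopping haproxy on the VIP holder should make check_apiserver.sh stop keepalived within roughly 10-15 seconds, and the VIP should move to another master:

ip addr show eth0 | grep 172.20.252.200    # on master01: the VIP should be present
systemctl stop haproxy                     # simulate a failure on master01
ip addr show eth0 | grep 172.20.252.200    # run on master02/master03: the VIP should appear on one of them
systemctl start haproxy && systemctl start keepalived   # restore master01 afterwards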
Master02 node
[root@k8s-master02 ~]# chmod +x /etc/keepalived/check_apiserver.sh
# Make the health-check script executable
[root@k8s-master02 ~]# cat /etc/keepalived/keepalived.conf           
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 172.20.252.108
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8S_PASS
    }
    virtual_ipaddress {
        172.20.252.200
    }
    track_script {
        chk_apiserver
} }
# Adjusted keepalived configuration: state BACKUP, its own mcast_src_ip, lower priority
[root@k8s-master02 ~]# systemctl daemon-reload
[root@k8s-master02 ~]# systemctl enable --now haproxy
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
[root@k8s-master02 ~]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
# Start the services and enable them at boot
[root@k8s-master02 ~]# dnf -y install telnet
[root@k8s-master02 ~]# telnet 172.20.252.200 8443 
Trying 172.20.252.200...
Connected to 172.20.252.200.
Escape character is '^]'.
# Seeing the escape-character prompt ('^]') means the connection through the VIP works
Connection closed by foreign host.
# Use telnet to verify that the VIP served by keepalived/haproxy is reachable
Master03 node
[root@k8s-master03 ~]# chmod +x /etc/keepalived/check_apiserver.sh
# Make the health-check script executable
[root@k8s-master03 ~]# cat /etc/keepalived/keepalived.conf    
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 172.20.252.109
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8S_PASS
    }
    virtual_ipaddress {
        172.20.252.200
    }
    track_script {
        chk_apiserver
} }
# Adjusted keepalived configuration
[root@k8s-master03 ~]# systemctl daemon-reload
[root@k8s-master03 ~]# systemctl enable --now haproxy
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
[root@k8s-master03 ~]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
# Start the services and enable them at boot

Environment change 2

(Screenshot omitted: from here on the master node IPs used in the configurations are 172.20.252.117, 172.20.252.118 and 172.20.252.119.)

Kubernetes component configuration

All master nodes
[root@k8s-master01 ~]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
# Create the required directories on all nodes
Master01 node
[root@k8s-master01 ~]# cp conf/k8s/v1.23/kube-apiserver.service kube-apiserver.txt
[root@k8s-master01 ~]# cat kube-apiserver.txt                                        
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=172.20.252.200 \
    --allow-privileged=true \
    --authorization-mode=Node,RBAC \
    --client-ca-file=/etc/kubernetes/pki/ca.pem \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
    --etcd-certfile=/etc/etcd/ssl/etcd.pem \
    --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
    --etcd-servers=https://172.20.252.117:2379,https://172.20.252.118:2379,https://172.20.252.119:2379 \
    --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
    --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
    --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
    --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
    --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
    --requestheader-allowed-names=front-proxy-client \
    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --secure-port=6443 \
    --service-account-issuer=https://kubernetes.default.svc.cluster.local \
    --service-account-key-file=/etc/kubernetes/pki/sa.pub \
    --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
    --service-cluster-ip-range=10.96.0.0/12 \
    --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
    --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
    --v=2
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
[root@k8s-master01 ~]# cp kube-apiserver.txt /usr/lib/systemd/system/kube-apiserver.service
# Create kube-apiserver.service on all master nodes
# If this is not a highly available cluster, replace 172.20.252.200 with master01's own address
[root@k8s-master01 ~]# scp kube-apiserver.txt k8s-master02:/usr/lib/systemd/system/kube-apiserver.service
kube-apiserver.txt                                                                        100% 1915    57.9KB/s   00:00    
[root@k8s-master01 ~]# scp kube-apiserver.txt k8s-master03:/usr/lib/systemd/system/kube-apiserver.service
kube-apiserver.txt                                                                        100% 1915   812.3KB/s   00:00
[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now kube-apiserver
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /usr/lib/systemd/system/kube-apiserver.service.
# Enable and start kube-apiserver on all master nodes
[root@k8s-master01 ~]# tail -f /var/log/messages 
...
[root@k8s-master01 ~]# systemctl status kube-apiserver.service 
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-10-28 09:32:16 CST; 1min 11s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 16024 (kube-apiserver)
    Tasks: 16 (limit: 101390)
   Memory: 198.1M
   ...
   I1028 09:32:34.623463   16024 controller.go:611] quota admission added>Oct 28 09:32:34 k8s-master01.novalocal kube-apiserver[16024]: 
   E1028 09:32:34.923027   16024 controller.go:228] unable to sync kubern>
   # This error appears because the hostname was not set; set it and restart the service
# Check the kube-apiserver status
# Log lines starting with I are informational; lines starting with E are errors
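A direct health probe is another quick check (optional, not from the original session); 127.0.0.1 is in the apiserver certificate's SAN list, so the CA can be verified:

curl -s --cacert /etc/kubernetes/pki/ca.pem \
--cert /etc/kubernetes/pki/admin.pem --key /etc/kubernetes/pki/admin-key.pem \
https://127.0.0.1:6443/healthz; echo
# expected output: ok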
[root@k8s-master01 ~]# cp conf/k8s/v1.23/kube-controller-manager.service kube-controller-manager.txt
[root@k8s-master01 ~]# cat kube-controller-manager.txt    
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
    --allocate-node-cidrs=true \
    --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
    --authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
    --bind-address=127.0.0.1 \
    --client-ca-file=/etc/kubernetes/pki/ca.pem \
    --cluster-cidr=172.16.0.0/12 \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
    --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
    --controllers=*,bootstrapsigner,tokencleaner \
    --cluster-signing-duration=876000h0m0s \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
    --leader-elect=true \
    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
    --root-ca-file=/etc/kubernetes/pki/ca.pem \
    --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
    --service-cluster-ip-range=10.96.0.0/12 \
    --use-service-account-credentials=true \
    --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
[root@k8s-master01 ~]# cp kube-controller-manager.txt /usr/lib/systemd/system/kube-controller-manager.service
[root@k8s-master01 ~]# scp kube-controller-manager.txt k8s-master02:/usr/lib/systemd/system/kube-controller-manager.service
kube-controller-manager.txt                                                                        100% 1200    51.3KB/s   00:00    
[root@k8s-master01 ~]# scp kube-controller-manager.txt k8s-master03:/usr/lib/systemd/system/kube-controller-manager.service
kube-controller-manager.txt                                                                        100% 1200     4.7KB/s   00:00 
# Configure kube-controller-manager.service on all master nodes
# Note: this document uses 172.16.0.0/12 as the Pod network; it must not overlap with the host network or the Kubernetes Service network; change it if it does
# --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig is the kubeconfig used to talk to the apiserver
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.
# Enable and start kube-controller-manager on all master nodes
[root@k8s-master01 ~]# systemctl status kube-controller-manager.service 
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-10-28 10:05:56 CST; 1min 17s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 19071 (kube-controller)
    Tasks: 7 (limit: 101390)
   Memory: 24.8M
# Check the kube-controller-manager status
[root@k8s-master01 ~]# cp conf/k8s/v1.23/kube-scheduler.service kube-scheduler.txt
[root@k8s-master01 ~]# cp kube-scheduler.txt /usr/lib/systemd/system/kube-scheduler.service
[root@k8s-master01 ~]# scp kube-scheduler.txt k8s-master02:/usr/lib/systemd/system/kube-scheduler.service
kube-scheduler.txt                                                                                 100%  501   195.0KB/s   00:00    
[root@k8s-master01 ~]# scp kube-scheduler.txt k8s-master03:/usr/lib/systemd/system/kube-scheduler.service
kube-scheduler.txt                                                                                 100%  501   326.2KB/s   00:00 
# Configure kube-scheduler.service on all master nodes
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
# Enable and start kube-scheduler on all master nodes
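Both components expose /healthz on their secure ports (10257 for kube-controller-manager, 10259 for kube-scheduler); a quick local probe (optional, not from the original session; -k is needed because they use self-signed serving certificates):

curl -sk https://127.0.0.1:10257/healthz; echo   # kube-controller-manager, expect: ok
curl -sk https://127.0.0.1:10259/healthz; echo   # kube-scheduler, expect: ok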
Master02 node
[root@k8s-master02 ~]# cat /usr/lib/systemd/system/kube-apiserver.service    
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=172.20.252.118 \
    --allow-privileged=true \
    --authorization-mode=Node,RBAC \
    --client-ca-file=/etc/kubernetes/pki/ca.pem \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
    --etcd-certfile=/etc/etcd/ssl/etcd.pem \
    --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
    --etcd-servers=https://172.20.252.117:2379,https://172.20.252.118:2379,https://172.20.252.119:2379 \
    --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
    --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
    --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
    --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
    --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
    --requestheader-allowed-names=front-proxy-client \
    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --secure-port=6443 \
    --service-account-issuer=https://kubernetes.default.svc.cluster.local \
    --service-account-key-file=/etc/kubernetes/pki/sa.pub \
    --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
    --service-cluster-ip-range=10.96.0.0/12 \
    --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
    --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
    --v=2
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
[root@k8s-master02 ~]# systemctl daemon-reload && systemctl enable --now kube-apiserver
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /usr/lib/systemd/system/kube-apiserver.service.
# Enable and start kube-apiserver on all master nodes
[root@k8s-master02 ~]# systemctl daemon-reload
[root@k8s-master02 ~]# systemctl enable --now kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.
# Enable and start kube-controller-manager on all master nodes
[root@k8s-master02 ~]# systemctl daemon-reload
[root@k8s-master02 ~]# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
# Enable and start kube-scheduler on all master nodes
Master03 node
[root@k8s-master03 ~]# cat /usr/lib/systemd/system/kube-apiserver.service    
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=172.20.252.119 \
    --allow-privileged=true \
    --authorization-mode=Node,RBAC \
    --client-ca-file=/etc/kubernetes/pki/ca.pem \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
    --etcd-certfile=/etc/etcd/ssl/etcd.pem \
    --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
    --etcd-servers=https://172.20.252.117:2379,https://172.20.252.118:2379,https://172.20.252.119:2379 \
    --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
    --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
    --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
    --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
    --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
    --requestheader-allowed-names=front-proxy-client \
    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --secure-port=6443 \
    --service-account-issuer=https://kubernetes.default.svc.cluster.local \
    --service-account-key-file=/etc/kubernetes/pki/sa.pub \
    --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
    --service-cluster-ip-range=10.96.0.0/12 \
    --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
    --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
    --v=2
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
[root@k8s-master03 ~]# systemctl daemon-reload && systemctl enable --now kube-apiserver
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /usr/lib/systemd/system/kube-apiserver.service.
# Enable and start kube-apiserver on all master nodes
[root@k8s-master03 ~]# systemctl daemon-reload
[root@k8s-master03 ~]# systemctl enable --now kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.
# Enable and start kube-controller-manager on all master nodes
[root@k8s-master03 ~]# systemctl daemon-reload
[root@k8s-master03 ~]# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
# Enable and start kube-scheduler on all master nodes

TLS Bootstrapping configuration

TLS bootstrapping is the mechanism that issues kubelet certificates automatically.

Why not generate the kubelet certificates manually?

Because the master nodes are fixed while worker nodes come and go frequently; managing kubelet certificates by hand would be very tedious.

Master01 node
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true --server=https://172.20.252.200:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Cluster "kubernetes" set.
# Note: if this is not a highly available cluster, replace 172.20.252.200:8443 with master01's address and change 8443 to the apiserver port (6443 by default)
[root@k8s-master01 ~]# kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
User "tls-bootstrap-token-user" set.
# If you change the token-id and token-secret in bootstrap.secret.yaml, keep the strings consistent everywhere (same values and same lengths), and make sure the --token value in the previous command matches them; see the sketch below

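Since the original screenshot is unavailable, here is a minimal sketch of the fields that must stay consistent; the values are the ones used by the commands above, but the exact layout of bootstrap.secret.yaml is an assumption:

# metadata.name:           bootstrap-token-c8ad9c   <- "bootstrap-token-" + token-id
# stringData.token-id:     c8ad9c                   <- 6 characters, must match the part of --token before the dot
# stringData.token-secret: 2e4d610cf3e7426e         <- 16 characters, must match the part of --token after the dot
grep -nE "bootstrap-token-|token-id|token-secret" bootstrap.secret.yaml   # verify they all line up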

[root@k8s-master01 ~]# kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes \
--user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Context "tls-bootstrap-token-user@kubernetes" created.
[root@k8s-master01 ~]# kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Switched to context "tls-bootstrap-token-user@kubernetes".
# bootstrap-kubelet.kubeconfig is the file kubelet uses to request its certificate from the apiserver
# TLS Bootstrapping set up on the Master01 node
[root@k8s-master01 bootstrap]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
# kubectl cannot query anything yet because there is no kubeconfig file
[root@k8s-master01 ~]# mkdir -p /root/.kube; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
# Create the kubeconfig directory and copy admin.kubeconfig into it; kubectl can now operate the cluster
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml 
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
# Apply the YAML with kubectl
[root@k8s-master01 bootstrap]# kubectl get nodes
No resources found
# Queries work now (no nodes have registered yet)
# kubectl only needs to be present on a single node
# It can even run on a machine outside the cluster, as long as that machine can reach the cluster
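As a further check (optional, not from the original session), the component statuses can be listed; kubectl get cs is deprecated but still works in v1.23, and the exact output format may vary:

kubectl get cs
# scheduler, controller-manager and etcd-0/1/2 should all report Healthy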

Node configuration

Master01 node
[root@k8s-master01 ~]# cd /etc/kubernetes/
[root@k8s-master01 kubernetes]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do 
ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl/ /etc/etcd/ssl
for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do 
scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
done
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do 
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
# Copy the certificates to the other nodes

Kubelet configuration

All nodes
[root@k8s-master01 ~]# mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
# Create the required directories on all nodes
[root@k8s-master01 ~]# cp conf/k8s/v1.23/kubelet.service /usr/lib/systemd/system/kubelet.service
[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do     
> scp /usr/lib/systemd/system/kubelet.service $NODE:/usr/lib/systemd/system/kubelet.service; done
# Distribute kubelet.service to all nodes
[root@k8s-master01 ~]# cp conf/k8s/v1.23/10-kubelet.conf /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
scp /etc/systemd/system/kubelet.service.d/10-kubelet.conf $NODE:/etc/systemd/system/kubelet.service.d/10-kubelet.conf;
done
# Distribute the kubelet.service drop-in configuration to all nodes
[root@k8s-master01 ~]# cp conf/k8s/v1.23/kubelet-conf.yaml kubelet-conf.yaml 
[root@k8s-master01 ~]# cat kubelet-conf.yaml    
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
# Note: change this if you changed the Service network
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: #resolvConf#
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
allowedUnsafeSysctls:
 - "net.core*"
 - "net.ipv4.*"
#kubeReserved:
#  cpu: "1"
#  memory: 1Gi
#  ephemeral-storage: 10Gi
#systemReserved:
#  cpu: "1"
#  memory: 1Gi
#  ephemeral-storage: 10Gi
[root@k8s-master01 ~]# cp kubelet-conf.yaml /etc/kubernetes/kubelet-conf.yaml
[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
> scp /etc/kubernetes/kubelet-conf.yaml $NODE:/etc/kubernetes/kubelet-conf.yaml;
> done
# Create the kubelet configuration file on all nodes
# Note: if you changed the Kubernetes Service network, change clusterDNS in kubelet-conf.yaml to the tenth address of that network, e.g. 10.96.0.10 for 10.96.0.0/12
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart docker
[root@k8s-master01 ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
# Reload systemd, restart docker and enable kubelet on all nodes
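The reload/enable step has to be repeated on every node; from Master01 the remaining nodes can be covered with a loop like this (a sketch that assumes the same SSH trust used throughout this document):

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh $NODE 'systemctl daemon-reload && systemctl restart docker && systemctl enable --now kubelet'
done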

[root@k8s-master01 ~]# kubectl get node      
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   <none>   7m17s   v1.23.13
k8s-master02   NotReady   <none>   7m18s   v1.23.13
k8s-master03   NotReady   <none>   7m18s   v1.23.13
k8s-node01     NotReady   <none>   7m18s   v1.23.13
k8s-node02     NotReady   <none>   7m17s   v1.23.13
[root@k8s-master01 ~]# tail /var/log/messages
...
k8s-master01 kubelet[140700]: I1030 15:32:24.237004  140700 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d
...
# Check the logs
# The error appears because Calico has not been installed yet

kube-proxy configuration

Master01 node
[root@k8s-master01 ~]# yum -y install conntrack-tools
[root@k8s-master01 ~]# kubectl -n kube-system create serviceaccount kube-proxy
serviceaccount/kube-proxy created
[root@k8s-master01 ~]# kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
clusterrolebinding.rbac.authorization.k8s.io/system:kube-proxy created
[root@k8s-master01 ~]# SECRET=$(kubectl -n kube-system get sa/kube-proxy \
--output=jsonpath='{.secrets[0].name}')
[root@k8s-master01 ~]# JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
> --output=jsonpath='{.data.token}'|base64 -d)
[root@k8s-master01 ~]# mkdir k8s-ha-install
[root@k8s-master01 ~]# cd k8s-ha-install/
[root@k8s-master01 k8s-ha-install]# PKI_DIR=/etc/kubernetes/pki
[root@k8s-master01 k8s-ha-install]# K8S_DIR=/etc/kubernetes
[root@k8s-master01 k8s-ha-install]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://172.20.252.200:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master01 k8s-ha-install]# kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
User "kubernetes" set.
[root@k8s-master01 k8s-ha-install]# kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Context "kubernetes" created.
[root@k8s-master01 k8s-ha-install]# kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Switched to context "kubernetes".
# Note: if this is not a highly available cluster, replace 172.20.252.200:8443 with Master01's address and the apiserver port (default 6443)
[root@k8s-master01 k8s-ha-install]# mkdir kube-proxy
[root@k8s-master01 k8s-ha-install]# cat > kube-proxy/kube-proxy.conf <<EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes/logs \\
--config=/etc/kubernetes/kube-proxy.yaml"
EOF
[root@k8s-master01 k8s-ha-install]# cp /root/conf/k8s/v1.23/kube-proxy.yaml kube-proxy/
[root@k8s-master01 k8s-ha-install]# sed -ri "s/#POD_NETWORK_CIDR#/172.16.0.0\/12/g" kube-proxy/kube-proxy.yaml 
[root@k8s-master01 k8s-ha-install]# sed -ri "s/#KUBE_PROXY_MODE#/ipvs/g" kube-proxy/kube-proxy.yaml
[root@k8s-master01 k8s-ha-install]# for NODE in k8s-master01 k8s-master02 k8s-master03; do
scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig 
scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
scp kube-proxy/kube-proxy.yaml $NODE:/etc/kubernetes/kube-proxy.yaml
scp /root/conf/k8s/v1.23/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done
[root@k8s-master01 k8s-ha-install]# for NODE in k8s-node01 k8s-node02; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig 
scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
scp kube-proxy/kube-proxy.yaml $NODE:/etc/kubernetes/kube-proxy.yaml
scp /root/conf/k8s/v1.23/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done
# From Master01, push the kube-proxy kubeconfig, config files and systemd unit to the other nodes
# If you changed the cluster's Pod CIDR, update clusterCIDR in kube-proxy.yaml (set above to 172.16.0.0/12) to match your Pod network
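Before starting kube-proxy it is worth confirming that the placeholders were actually substituted; a simple check (clusterCIDR and mode are standard KubeProxyConfiguration fields):

grep -E 'clusterCIDR|mode' /etc/kubernetes/kube-proxy.yaml
# expected with the values used in this document:
#   clusterCIDR: 172.16.0.0/12
#   mode: ipvs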
All nodes
[root@k8s-master01 ~]# systemctl enable --now kube-proxy
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-master01 k8s-ha-install]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-10-30 22:38:06 CST; 4s ago
   ...
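kube-proxy reports its active proxy mode on the metrics port (10249 by default), which makes it easy to confirm that IPVS is really in use; ipvsadm is optional and may need to be installed first:

curl 127.0.0.1:10249/proxyMode
# should print: ipvs
ipvsadm -Ln | head        # optional; lists the IPVS virtual servers kube-proxy created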

Calico installation

Master01 node
[root@k8s-master01 ~]# cd k8s-ha-install/
[root@k8s-master01 k8s-ha-install]# mkdir calico
[root@k8s-master01 k8s-ha-install]# cd calico/
[root@k8s-master01 calico]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.3/manifests/calico-etcd.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 21088  100 21088    0     0  77245      0 --:--:-- --:--:-- --:--:-- 76963
[root@k8s-master01 calico]# sed -ri 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.20.252.117:2379,https://172.20.252.118:2379,https://172.20.252.119:2379"#g' calico-etcd.yaml 
[root@k8s-master01 calico]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
[root@k8s-master01 calico]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`          
[root@k8s-master01 calico]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
[root@k8s-master01 calico]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml 
[root@k8s-master01 calico]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: ""#etcd_key: "/calico-secrets/etcd-key"#g' calico-etcd.yaml 
# The commands above patch the etcd endpoints and certificates into calico-etcd.yaml
[root@k8s-master01 calico]# POD_SUBNET="172.16.0.0/12"
[root@k8s-master01 calico]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml 
# Set CALICO_IPV4POOL_CIDR to your own Pod CIDR
[root@k8s-master01 calico]# kubectl apply -f calico-etcd.yaml 
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
# Apply calico-etcd.yaml once, on a single node only
# Note: by default the images are pulled from docker.io, which can be very slow on a poor network connection


If your network connection is poor, it is advisable to point the image: fields at a mirror registry such as Aliyun's.
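A sed along the following lines can rewrite all of the image references in one go; the mirror address here is only a placeholder, so substitute a registry that actually hosts the Calico images for your version:

# "registry.example.com/calico" is a hypothetical mirror path - replace it with a real one
sed -i 's#docker.io/calico/#registry.example.com/calico/#g' calico-etcd.yaml
grep 'image:' calico-etcd.yaml      # double-check the result before applying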


[root@k8s-master01 calico]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-6c86f69969-8mjqk   1/1     Running   0             13m
calico-node-5cr6v                          1/1     Running   2 (10m ago)   13m
calico-node-9ss4w                          1/1     Running   1 (10m ago)   13m
calico-node-d4jbp                          1/1     Running   1 (10m ago)   13m
calico-node-dqssq                          1/1     Running   0             13m
calico-node-tnhjj                          1/1     Running   0             13m
# If a container is not in the expected state, use kubectl describe or kubectl logs to inspect it
[root@k8s-master01 calico]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   18h   v1.23.13
k8s-master02   Ready    <none>   18h   v1.23.13
k8s-master03   Ready    <none>   18h   v1.23.13
k8s-node01     Ready    <none>   18h   v1.23.13
k8s-node02     Ready    <none>   18h   v1.23.13
# Verify that all nodes are in the Ready state
# Personally I recommend Calico over Flannel for the cluster network

Installing CoreDNS

Access between Kubernetes workloads goes through Services.

CoreDNS is what resolves those Service names into IP addresses.

Master01 node
[root@k8s-master01 ~]# cd /root/k8s-ha-install/
[root@k8s-master01 k8s-ha-install]# mkdir CoreDNS
[root@k8s-master01 k8s-ha-install]# cd CoreDNS/
[root@k8s-master01 CoreDNS]# curl https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed -o coredns.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4304  100  4304    0     0   6998      0 --:--:-- --:--:-- --:--:--  6987
# If the network is poor, the file can also be copied straight from GitHub
[root@k8s-master01 CoreDNS]# sed -i "s#clusterIP: CLUSTER_DNS_IP#clusterIP: 10.96.0.10#g" coredns.yaml
# If you changed the k8s service CIDR, set the CoreDNS service IP to the tenth address of that CIDR
[root@k8s-master01 CoreDNS]# kubectl create -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
# Install CoreDNS
Troubleshooting
The steps below document a troubleshooting attempt that turned out to be the wrong approach (kept for reference)
[root@k8s-master01 CoreDNS]# kubectl get pods -n kube-system -l k8s-app=kube-dns         
NAME                       READY   STATUS             RESTARTS   AGE
coredns-7875fc54bb-v6tsj   0/1     ImagePullBackOff   0          3m52s
# Check the status: the image pull is failing
[root@k8s-master01 CoreDNS]# docker import /root/coredns_1.9.4_linux_amd64.tgz coredns/coredns:1.9.4
sha256:98855711be7d3e4fe0ae337809191d2a4075702e2c00059f276e5c44da9b2e22
# Download the archive from the official site, upload it to Master01 and import it with docker import
[root@k8s-master01 CoreDNS]# for NODE in k8s-master02 k8s-master03; do
> scp /root/coredns_1.9.4_linux_amd64.tgz $NODE:/root/coredns_1.9.4_linux_amd64.tgz
> done
coredns_1.9.4_linux_amd64.tgz                                                             100%   14MB 108.1MB/s   00:00    
coredns_1.9.4_linux_amd64.tgz                                                             100%   14MB  61.2MB/s   00:00
# Copy the archive to the other master nodes
[root@k8s-master02 ~]# docker import /root/coredns_1.9.4_linux_amd64.tgz coredns/coredns:1.9.4
sha256:462c1fe56dcaca89d4b75bff0151c638da1aa59457ba49f8ac68552aa3a92203
[root@k8s-master03 ~]# docker import /root/coredns_1.9.4_linux_amd64.tgz coredns/coredns:1.9.4
sha256:c6f17bff4594222312b30b322a7e02e0a78adde6fa082352054bc27734698f69
# Import the image on the remaining master nodes
[root@k8s-master01 CoreDNS]# kubectl delete -f coredns.yaml 
serviceaccount "coredns" deleted
clusterrole.rbac.authorization.k8s.io "system:coredns" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" deleted
configmap "coredns" deleted
deployment.apps "coredns" deleted
service "kube-dns" deleted
[root@k8s-master01 CoreDNS]# kubectl create -f coredns.yaml       
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
# Reinstall CoreDNS
[root@k8s-master01 CoreDNS]# kubectl describe pod coredns-7875fc54bb-gtt7p -n kube-system
  Warning  Failed     2m32s (x4 over 3m27s)  kubelet            Error: failed to start container "coredns": Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-conf": executable file not found in $PATH: unknown
# Another error appears (docker import strips the image's entrypoint metadata, so the container cannot start)
[root@k8s-master01 CoreDNS]# kubectl delete -f coredns.yaml 
serviceaccount "coredns" deleted
clusterrole.rbac.authorization.k8s.io "system:coredns" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" deleted
configmap "coredns" deleted
deployment.apps "coredns" deleted
service "kube-dns" deleted
# Delete CoreDNS
[root@k8s-master01 CoreDNS]# docker rmi coredns/coredns:1.9.4 
Untagged: coredns/coredns:1.9.4
Deleted: sha256:802f9bb655d0cfcf0141de41c996d659450294b00e54d1eff2e44a90564071ca
Deleted: sha256:1b0937bab1d24d9e264d6adf4e61ffb576981150a9d8bf16c44eea3c79344f43
[root@k8s-master02 ~]# docker rmi coredns/coredns:1.9.4 
Untagged: coredns/coredns:1.9.4
Deleted: sha256:462c1fe56dcaca89d4b75bff0151c638da1aa59457ba49f8ac68552aa3a92203
Deleted: sha256:1b0937bab1d24d9e264d6adf4e61ffb576981150a9d8bf16c44eea3c79344f43
[root@k8s-master03 ~]# docker rmi coredns/coredns:1.9.4 
Untagged: coredns/coredns:1.9.4
Deleted: sha256:c6f17bff4594222312b30b322a7e02e0a78adde6fa082352054bc27734698f69
Deleted: sha256:1b0937bab1d24d9e264d6adf4e61ffb576981150a9d8bf16c44eea3c79344f43
The correct fix follows
[root@k8s-master01 ~]# docker pull coredns/coredns:1.9.4
1.9.4: Pulling from coredns/coredns
c6824c7a0594: Pull complete 
8f16f0bc6a9b: Pull complete 
Digest: sha256:b82e294de6be763f73ae71266c8f5466e7e03c69f3a1de96efd570284d35bb18
Status: Downloaded newer image for coredns/coredns:1.9.4
docker.io/coredns/coredns:1.9.4
# Pull the CoreDNS image in advance on all nodes
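The pull needs to happen on every node, not just Master01; a small loop covers the rest (same SSH assumptions as before):

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh $NODE 'docker pull coredns/coredns:1.9.4'
done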
Yet another error appears
[root@k8s-master01 ~]# cd k8s-ha-install/CoreDNS/
[root@k8s-master01 CoreDNS]# kubectl create -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
# Install CoreDNS again
[root@k8s-master01 CoreDNS]# kubectl -n kube-system logs -f -p coredns-7875fc54bb-mhpgx
/etc/coredns/Corefile:18 - Error during parsing: Unknown directive '}STUBDOMAINS'
[root@k8s-master01 CoreDNS]# sed -i "s/kubernetes CLUSTER_DOMAIN REVERSE_CIDRS/kubernetes cluster.local in-addr.arpa ip6.arpa/g" coredns.yaml
[root@k8s-master01 CoreDNS]# sed -i "s#forward . UPSTREAMNAMESERVER#forward . /etc/resolv.conf#g" coredns.yaml
[root@k8s-master01 CoreDNS]# sed -i "s/STUBDOMAINS//g" coredns.yaml 
[root@k8s-master01 CoreDNS]# kubectl delete -f coredns.yaml 
serviceaccount "coredns" deleted
clusterrole.rbac.authorization.k8s.io "system:coredns" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" deleted
configmap "coredns" deleted
deployment.apps "coredns" deleted
service "kube-dns" deleted
[root@k8s-master01 CoreDNS]# kubectl create -f coredns.yaml       
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
# Install CoreDNS again
[root@k8s-master01 CoreDNS]# kubectl get pods -n kube-system -l k8s-app=kube-dns          
NAME                       READY   STATUS    RESTARTS   AGE
coredns-7875fc54bb-l7cl9   1/1     Running   0          109s
# Now Running successfully
[root@k8s-master01 CoreDNS]# kubectl describe pod coredns-7875fc54bb-l7cl9 -n kube-system |tail -n1
  Warning  Unhealthy  108s (x6 over 2m15s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503
# There still seems to be a readiness warning; since it does not appear to affect anything, leave it for now

Installing Metrics Server

In recent Kubernetes versions, system resource metrics are collected by metrics-server, which reports memory, disk, CPU and network usage for nodes and Pods.

Master01 node
[root@k8s-master01 CoreDNS]# cd ..
[root@k8s-master01 k8s-ha-install]# mkdir metrics-server-0.6.1
[root@k8s-master01 k8s-ha-install]# cd metrics-server-0.6.1/
[root@k8s-master01 metrics-server-0.6.1]# cp /root/components.yaml ./
[root@k8s-master01 metrics-server-0.6.1]# kubectl create -f .
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
# Install metrics-server
An error appears, and once again it is an image pull failure
[root@k8s-master01 metrics-server-0.6.1]# kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
[root@k8s-master01 metrics-server-0.6.1]# kubectl describe pod metrics-server-847dcc659d-jpsbj -n kube-system |tail -n1
  Normal   BackOff  3m34s (x578 over 143m)  kubelet  Back-off pulling image "k8s.gcr.io/metrics-server/metrics-server:v0.6.1"
# The image pull failed
[root@k8s-master01 metrics-server-0.6.1]# kubectl delete -f .
serviceaccount "metrics-server" deleted
clusterrole.rbac.authorization.k8s.io "system:aggregated-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "system:metrics-server" deleted
rolebinding.rbac.authorization.k8s.io "metrics-server-auth-reader" deleted
clusterrolebinding.rbac.authorization.k8s.io "metrics-server:system:auth-delegator" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:metrics-server" deleted
service "metrics-server" deleted
deployment.apps "metrics-server" deleted
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted
# Delete metrics-server
[root@k8s-master01 metrics-server-0.6.1]# vim components.yaml 
...
        - --kubelet-insecure-tls
        image: docker.io/bitnami/metrics-server:0.6.1
...
# Add the --kubelet-insecure-tls flag and switch the image to a reachable registry
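Both edits can also be scripted. The sed below is only a sketch: the image substitution matches the upstream v0.6.1 manifest, but the anchor line used to insert the flag (--secure-port=4443) is an assumption about your copy of components.yaml, so verify the result with the final grep:

sed -i 's#k8s.gcr.io/metrics-server/metrics-server:v0.6.1#docker.io/bitnami/metrics-server:0.6.1#g' components.yaml
# insert the flag after the --secure-port argument (adjust the anchor if your manifest differs)
sed -i '/--secure-port=4443/a\        - --kubelet-insecure-tls' components.yaml
grep -E 'image:|kubelet-insecure-tls' components.yaml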
[root@k8s-master01 metrics-server-0.6.1]# docker pull bitnami/metrics-server:0.6.1
0.6.1: Pulling from bitnami/metrics-server
1d8866550bdd: Pull complete 
5dc6be563c2f: Pull complete 
Digest: sha256:660be90d36504f10867e5c1cc541dadca13f96c72e5c7d959fd66e3a05c44ff8
Status: Downloaded newer image for bitnami/metrics-server:0.6.1
docker.io/bitnami/metrics-server:0.6.1
# Pull the image in advance on all nodes
[root@k8s-master01 metrics-server-0.6.1]# kubectl create -f .
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
# Install metrics-server
[root@k8s-master01 metrics-server-0.6.1]# kubectl get pods -n kube-system|tail -n1 
metrics-server-7bcff67dcd-mxfwk            1/1     Running   1 (4h13m ago)   4h13m
# Running successfully
However, metrics for the other nodes cannot be read
[root@k8s-master01 metrics-server-0.6.1]# kubectl top nodes
NAME           CPU(cores)   CPU%        MEMORY(bytes)   MEMORY%     
k8s-node02     65m          1%          966Mi           6%          
k8s-master01   <unknown>    <unknown>   <unknown>       <unknown>   
k8s-master02   <unknown>    <unknown>   <unknown>       <unknown>   
k8s-master03   <unknown>    <unknown>   <unknown>       <unknown>   
k8s-node01     <unknown>    <unknown>   <unknown>       <unknown>   
[root@k8s-master01 metrics-server-0.6.1]# kubectl logs metrics-server-7bcff67dcd-mxfwk -n kube-system|tail -n2
Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E1031 12:19:40.567512       1 scraper.go:140] "Failed to scrape node" err="Get \"https://172.20.252.119:10250/metrics/resource\": context deadline exceeded" node="k8s-master03"
# Not yet resolved at the time of writing
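One way to narrow this kind of scrape timeout down is to test whether the node running the metrics-server Pod can reach the kubelet port (10250) on the failing nodes at all; the IP below is the one reported in the error above, adjust it to your environment:

# run these on the node that hosts the metrics-server pod
timeout 3 bash -c '</dev/tcp/172.20.252.119/10250' && echo "10250 reachable" || echo "10250 blocked"
# the endpoint requires authentication, so a 401/403 status code still proves connectivity
curl -k -o /dev/null -w '%{http_code}\n' https://172.20.252.119:10250/metrics/resource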

Installing Dashboard

Master01 node
[root@k8s-master01 metrics-server-0.6.1]# cd ..
[root@k8s-master01 k8s-ha-install]# mkdir dashboard
[root@k8s-master01 k8s-ha-install]# cd dashboard/
[root@k8s-master01 dashboard]# curl https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7621  100  7621    0     0  11945      0 --:--:-- --:--:-- --:--:-- 11945
# Download the YAML manifest
[root@k8s-master01 dashboard]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master01 dashboard]# kubectl get pods -n kubernetes-dashboard 
NAME                                         READY   STATUS    RESTARTS      AGE
dashboard-metrics-scraper-6f669b9c9b-dzwmt   1/1     Running   0             3m46s
kubernetes-dashboard-758765f476-hr49g        1/1     Running   2 (72s ago)   3m47s
# Check the Pod status


[root@k8s-master01 dashboard]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort
# Change the dashboard Service type to NodePort
[root@k8s-master01 dashboard]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.38.202   <none>        443:32212/TCP   6m56s
# Switching to NodePort exposes a port on every node


The dashboard can then be reached on this port via any node's IP.
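A quick reachability check against the NodePort shown above (32212 comes from this particular deployment and will differ per cluster; replace the IP with any node in your own cluster):

curl -k -o /dev/null -w '%{http_code}\n' https://172.20.252.117:32212/
# 200 means the dashboard is being served on that node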


[root@k8s-master01 dashboard]# cat > admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
[root@k8s-master01 dashboard]# kubectl apply -f admin.yaml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Create the admin user defined in admin.yaml
[root@k8s-master01 dashboard]# kubectl -n kubernetes-dashboard get secret |grep admin-user|awk '{print $1}'
admin-user-token-snlxv
[root@k8s-master01 dashboard]# kubectl -n kubernetes-dashboard describe secret admin-user-token-snlxv|grep token:
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkRaWmh5ZHpYc2dRdjBrcmhCS3llSzlOaW9YUmJaQ2VYa3JianpDNXktNGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXNubHh2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMGZiYzFhMS1kZDlmLTQwNjEtOGEzZi04MjBiMjk3NGRlMjEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.2SUAR9IWYT4N5_qRXKAaEoJKZEGjRat1bPlsVnPOj0kR1cBxhwmIOayaKG99FSw9VH3OUKlI92U_uBiI3NloPStS94Z788LpmpyqgLgzKZqKaU9UJuMjfQ9l7g4VbvwiejsMJerzqLEMTGE6bYfeS4ObPyQhAbm9obo6ccgFHWFN7LPAmYmmIMOlbR_lkU7i9XRYrP4bWfmKYx6udbiLQLTuCWEWpgJnVa9vnAdv1BXqaoxONXefGvh8PqS4ayEVe3jCxz86WtcuvB143WTysPr1sCEZ-g8f_96y3bAKeJZleozRZ7bX4zw45LdkK6ZSXUMW3soddp35ybQ9UGEqLA
# Retrieve the token
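The two lookups can also be collapsed into a single command with jsonpath (the secret name is generated, so it is looked up from the ServiceAccount rather than hard-coded):

kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d; echo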

Copy the token and use it to log in to the dashboard.


Cluster validation

Master01 node
[root@k8s-master01 k8s-ha-install]# cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
pod/busybox created
# Deploy a busybox test Pod
  1. Pods must be able to resolve Services

  2. Pods must be able to resolve Services in other namespaces

  3. Every node must be able to reach the kubernetes Service (443) and the kube-dns Service (53)

  4. Pods must be able to communicate with each other

    a) within the same namespace

    b) across namespaces

    c) across machines

[root@k8s-master01 k8s-ha-install]# kubectl exec busybox -n default -- nslookup kubernetes  
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[root@k8s-master01 k8s-ha-install]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
# Verify DNS resolution

[root@k8s-master01 ~]# dnf -y install telnet
# Install telnet on all nodes
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d12h
[root@k8s-master01 ~]# telnet 10.96.0.1 443
# On every node, telnet to 10.96.0.1 port 443; if the session stays open instead of exiting immediately, the check passes
[root@k8s-master01 ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   10h
metrics-server   ClusterIP   10.110.240.137   <none>        443/TCP                  75m
[root@k8s-master01 ~]# telnet 10.96.0.10 53
# On every node, telnet to 10.96.0.10 port 53
[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server
# curl works too; an 'Empty reply from server' means the port is reachable

[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
...
[root@k8s-master01 ~]# kubectl exec -it calico-kube-controllers-6c86f69969-8mjqk -n kube-system -- sh
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
# Images that do not ship a shell cannot be tested this way
[root@k8s-master01 ~]# kubectl exec -it busybox -- sh
/ # ping 172.18.195.5
PING 172.18.195.5 (172.18.195.5): 56 data bytes
64 bytes from 172.18.195.5: seq=0 ttl=62 time=0.342 ms
64 bytes from 172.18.195.5: seq=1 ttl=62 time=0.639 ms
...
# [root@k8s-master01 ~]# kubectl run nginx --image=nginx
# pod/nginx created
# # Quickly create a single test Pod (the standalone nginx Pod in the listing below came from this command)
[root@k8s-master01 ~]# kubectl create deploy nginx --image=nginx --replicas=3
deployment.apps/nginx created
# Create a Deployment with 3 replicas
[root@k8s-master01 ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   0/3     3            0           32s
[root@k8s-master01 ~]# kubectl get pods -owide
NAME                     READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE   READINESS GATES
busybox                  1/1     Running   0          25m    172.27.14.203    k8s-node02     <none>           <none>
nginx                    1/1     Running   0          100s   172.25.92.76     k8s-master02   <none>           <none>
nginx-85b98978db-9rql2   1/1     Running   0          49s    172.17.125.1     k8s-node01     <none>           <none>
nginx-85b98978db-j6wxk   1/1     Running   0          49s    172.18.195.6     k8s-master03   <none>           <none>
nginx-85b98978db-qs2lw   1/1     Running   0          49s    172.25.244.197   k8s-master01   <none>           <none>
[root@k8s-master01 ~]# kubectl delete deploy nginx
deployment.apps "nginx" deleted
[root@k8s-master01 ~]# kubectl delete pod busybox nginx
pod "busybox" deleted
pod "nginx" deleted
# After validation, delete the test Pod and Deployment

Key production configuration

[root@k8s-master01 ~]# cat > /etc/docker/daemon.json <<EOF
{
        "registry-mirrors": [
                "https://32yzbld0.mirror.aliyuncs.com",
                "https://registry.docker-cn.com",
                "https://docker.mirrors.ustc.edu.cn"
        ],
        "exec-opts": ["native.cgroupdriver=systemd"],
        "max-concurrent-downloads": 10,
        "max-concurrent-uploads": 5,
        "log-opts": {
                "max-size": "300m",
                "max-file": "2"
        },
        "live-restore": true
}
EOF
"max-concurrent-downloads": 10,
# 并发下载的线程数
"max-concurrent-uploads": 5,
# 并发上传的线程数
"log-opts": {
                "max-size": "300m",
                # 限制日志最大文件大小
                "max-file": "2"
                # 限制日志文件数
        },
        "live-restore": true
        # 打开之后重启docker不会让容器也一起重启
[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
> scp /etc/docker/daemon.json $NODE:/etc/docker/daemon.json 
> done
# Distribute daemon.json to all nodes
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart docker
# Reload systemd and restart docker on all nodes
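The restart also has to run everywhere; from Master01 the remaining nodes can be handled with a loop (a sketch using the same SSH access as earlier steps):

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh $NODE 'systemctl daemon-reload && systemctl restart docker'
done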

[root@k8s-master01 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
# Extend the signing duration of cluster-issued certificates; this cluster is internal, so a very long validity is acceptable
# Versions before 1.19 used the old flag:
#   --experimental-cluster-signing-duration=876000h0m0s \
# On 1.19 and later, add the following flag instead:
--cluster-signing-duration=876000h0m0s \

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do scp /usr/lib/systemd/system/kube-controller-manager.service $NODE:/usr/lib/systemd/system/kube-controller-manager.service ; done
# Distribute the unit file to all nodes
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kube-controller-manager
# Reload systemd and restart kube-controller-manager (only the master nodes actually run it)

[root@k8s-master01 ~]# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
# Append --tls-cipher-suites and --image-pull-progress-deadline to the existing KUBELET_EXTRA_ARGS line
# Restricting the cipher suites keeps vulnerability scanners from flagging weak TLS ciphers
# Image pulls can be slow, so give the pull deadline a generous value

[root@k8s-master01 ~]# vim /etc/kubernetes/kubelet-conf.yaml
# Append the following to the end of the file
allowedUnsafeSysctls:
 - "net.core*"
 - "net.ipv4.*"
 # Allow Pods to tune these kernel parameters
 # This can be a security concern, enable it only if you need it
kubeReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
  # Reserve resources for the kubelet and the OS so that containers cannot starve the host
  # Note: CPU reservations are expressed in cores or millicores (e.g. "1" or "500m"), not in Mi
  # In production, size these reservations generously and adjust them to your workloads
  
[root@k8s-master01 ~]# kubectl get nodes|head -n2
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   40h   v1.23.13
[root@k8s-master01 ~]# kubectl get node --show-labels |head -n2
NAME           STATUS     ROLES    AGE   VERSION    LABELS
k8s-master01   NotReady   <none>   40h   v1.23.13   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node.kubernetes.io/node=
[root@k8s-master01 ~]# kubectl label node k8s-master01 node-role.kubernetes.io/master=''
node/k8s-master01 labeled
# Set the node's role label
[root@k8s-master01 ~]# kubectl get nodes | head -n2
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   master   40h   v1.23.13
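The other control-plane nodes can be labeled the same way; as a small sketch:

for NODE in k8s-master02 k8s-master03; do
  kubectl label node $NODE node-role.kubernetes.io/master='' --overwrite
done
kubectl get nodes       # all three masters should now show the master role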

Installation summary

  1. kubeadm (components run as containers; restarts are more likely to fail)

  2. Binary installation (stable)

  3. Automated installation

    1. Ansible
      1. The master nodes do not need to be automated
      2. Adding worker nodes can be handled with a playbook
  4. Details to pay attention to

    1. The tuning items described above
    2. In production, etcd must be on a disk separate from the system disk, and it should be an SSD
    3. The Docker data disk should also be separate from the system disk, ideally an SSD as well
