Kubernetes binary deployment of a high-availability cluster: failed, with error analysis

2025/1/9 15:24:20

Overview

The deployment failed because of problems with the openssl certificates; I was unable to work out how to create the private keys correctly with openssl. The ansible parts can still serve as a reference.

Goal: deploy a Kubernetes binary high-availability cluster inside a private LAN.

ETCD

Openssl ==> CA certificates

Haproxy

Keepalived

Kubernetes

Host plan

No. | Host       | Role        | IP (VMNET1)    | Components
0   | origin     | entry point | 192.168.164.10 | haproxy, keepalived
1   | repository | repo host   | 192.168.164.16 | yum repo, registry, haproxy, keepalived
2   | master01   | H-K8S-1     | 192.168.164.11 | kube-apiserver, controller-manager, scheduler, etcd
3   | master02   | H-K8S-2     | 192.168.164.12 | kube-apiserver, controller-manager, scheduler, etcd
4   | master03   | H-K8S-3     | 192.168.164.13 | kube-apiserver, controller-manager, scheduler, etcd
5   | node04     | H-K8S-1     | 192.168.164.14 | kube-proxy, kubelet, docker
6   | node05     | H-K8S-2     | 192.168.164.15 | kube-proxy, kubelet, docker
7   | node07     | H-K8S-3     | 192.168.164.17 | kube-proxy, kubelet, docker

Diagram

Steps

0. Preliminary environment setup: firewalld + selinux + system tuning + ansible installation

ansible configuration

Configure the host inventory

ansible]# cat hostlist
[k8s:children]
k8sm
k8ss

[lb:children]
origin
repo

[k8sm]
192.168.164.[11:13]

[k8ss]
192.168.164.[14:15]
192.168.164.17

[origin]
192.168.164.10

[repo]
192.168.164.16

Configure ansible.cfg

hk8s]# cat ansible.cfg
[defaults]
inventory   = /root/ansible/hk8s/hostlist
roles_path  = /root/ansible/hk8s/roles
host_key_checking = False

firewalld + selinux

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Stop and disable the firewall
systemctl disable --now firewalld

System tuning

1. Create the CA root certificate

【k8s学习2】二进制文件方式安装 Kubernetes之etcd集群部署_etcd 二进制文件_温殿飞的博客-CSDN博客

The CA root certificate provides the security authentication and connectivity for both etcd and Kubernetes.

Create one CA root certificate with openssl and use the same one throughout: private key ca.key + certificate ca.crt.

If different CA root certificates exist, they can be used to separate authorization and management between clusters.

# Create the private key
openssl genrsa -out ca.key 2048

# Create the certificate from the private key
openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.164.11" -days 36500 -out ca.crt

# -subj "/CN=ip" specifies the master host
# -days sets the certificate validity period

# The certificates are stored under /etc/kubernetes/pki
# (assuming the commands above were run in ~/ca)
mkdir -p /etc/kubernetes/ && mv ~/ca /etc/kubernetes/pki
ls /etc/kubernetes/pki
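Before the CA is distributed, a quick sanity check can confirm the key and certificate belong together. The sketch below reproduces the two commands in a scratch directory (only the openssl CLI is assumed; the real files live in /etc/kubernetes/pki):

```shell
# Recreate the CA in a scratch directory, mirroring the commands above
pki=$(mktemp -d)
openssl genrsa -out "$pki/ca.key" 2048
openssl req -x509 -new -nodes -key "$pki/ca.key" \
  -subj "/CN=192.168.164.11" -days 36500 -out "$pki/ca.crt"

# Inspect the subject and validity period
openssl x509 -noout -subject -enddate -in "$pki/ca.crt"

# The private key and the certificate must carry the same public key
key_pub=$(openssl pkey -in "$pki/ca.key" -pubout)
crt_pub=$(openssl x509 -in "$pki/ca.crt" -pubkey -noout)
[ "$key_pub" = "$crt_pub" ] && echo "ca.key matches ca.crt"
```

The same two openssl inspection commands can be run against the real /etc/kubernetes/pki files on each node to verify the copy succeeded.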

2. Deploy the etcd high-availability cluster

Download from: Tags · etcd-io/etcd · GitHub

Release v3.4.26 · etcd-io/etcd · GitHub
https://storage.googleapis.com/etcd/v3.4.26/etcd-v3.4.26-linux-amd64.tar.gz

Download the tar package

ansible unarchive

Push the tar package to each master node

# Extract the tar package into ~
# ansible k8sm -m unarchive -a "src=/var/ftp/localrepo/etcd/etcd-3.4.26.tar.gz dest=~ copy=yes mode=0755"
ansible k8sm -m unarchive -a "src=/var/ftp/localrepo/etcd/etcd-v3.4.26-linux-amd64.tar.gz dest=~ copy=yes mode=0755"

# Check that the files exist
ansible k8sm -m shell -a "ls -l ~"
# If something went wrong, delete (note: the file module does not expand wildcards, so pass an exact path)
ansible k8sm -m file -a "state=absent path=~/etcd*"

# Install the etcd and etcdctl binaries into /usr/bin
ansible k8sm -m shell -a "cp ~/etcd-v3.4.26-linux-amd64/etcd /usr/bin/"
ansible k8sm -m shell -a "cp ~/etcd-v3.4.26-linux-amd64/etcdctl /usr/bin/"

ansible k8sm -m shell -a "ls -l /usr/bin/etcd"
ansible k8sm -m shell -a "ls -l /usr/bin/etcdctl"

 

Analysis of the official etcd install script == finding the correct download path for the package

#!/bin/bash

# Define the environment variables
ETCD_VER=v3.4.26
# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}


# Clean up: remove any previous etcd tarball and test directory under /tmp
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
rm -rf /tmp/etcd-download-test && mkdir -p /tmp/etcd-download-test

# Download the tarball for the chosen version
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

# Extract into the target directory
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download-test --strip-components=1
# Remove the tarball
# rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

# Verify the binaries
/tmp/etcd-download-test/etcd --version
/tmp/etcd-download-test/etcdctl version

Creating and configuring etcd.service

The official unit file can be found at etcd/etcd.service at v3.4.26 · etcd-io/etcd · GitHub

etcd.service is saved under /usr/lib/systemd/system/

The /etc/etcd/ configuration directory and /var/lib/etcd both need to be created

[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
Environment=ETCD_DATA_DIR=/var/lib/etcd
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=always
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Copy the file with ansible + check that it exists

Ansible 检查文件是否存在_harber_king的技术博客_51CTO博客

# Copy
ansible k8sm -m copy -a "src=/root/ansible/hk8s/etcd/etcd.service dest=/usr/lib/systemd/system/ mode=0644"
# Verify
ansible k8sm -m shell -a "ls -l /usr/lib/systemd/system/etcd.service"

# Create the directories
ansible k8sm -m shell -a "mkdir -p /etc/etcd"
ansible k8sm -m shell -a "mkdir -p /var/lib/etcd"

Creating the etcd CA certificates

【k8s学习2】二进制文件方式安装 Kubernetes之etcd集群部署_etcd 二进制文件_温殿飞的博客-CSDN博客

They must be created on a single master host (certificates generated on different hosts differ); after creation, copy them to the same directory on the other nodes.

Save etcd_server.key + etcd_server.crt + etcd_server.csr + etcd_client.key + etcd_client.crt + etcd_client.csr under /etc/etcd/pki.

I also keep etcd_ssl.cnf in /etc/etcd/pki, for a total of seven files.

 etcd_ssl.cnf

[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 192.168.164.11
IP.2 = 192.168.164.12
IP.3 = 192.168.164.13

The commands

# Enter the target directory
mkdir -p /etc/etcd/pki &&  cd /etc/etcd/pki

# Create the server key and certificate
openssl genrsa -out etcd_server.key 2048

openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr

openssl x509 -req -in etcd_server.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt

# Create the client key and certificate
openssl genrsa -out etcd_client.key 2048

openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr

openssl x509 -req -in etcd_client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
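Before copying these files to the other nodes, the chain and the SAN list can be verified. A self-contained sketch that repeats the server-certificate flow against a throwaway CA in a scratch directory (only openssl is assumed; the real CA sits in /etc/kubernetes/pki):

```shell
work=$(mktemp -d); cd "$work"
cat > etcd_ssl.cnf <<'EOF'
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.164.11
IP.2 = 192.168.164.12
IP.3 = 192.168.164.13
EOF

# Throwaway CA standing in for /etc/kubernetes/pki/ca.{key,crt}
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=ca" -days 1 -out ca.crt

# Same server-certificate flow as above
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf \
  -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 1 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt

# The certificate must chain to the CA and list all three node IPs as SANs
openssl verify -CAfile ca.crt etcd_server.crt
openssl x509 -noout -text -in etcd_server.crt | grep -A1 "Subject Alternative Name"
```

If a node IP is missing from the SAN output, TLS connections to that node will fail verification, which is one of the failure modes in the troubleshooting checklist below.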

etcd.conf.yml.sample parameter configuration

etcd/etcd.conf.yml.sample at v3.4.26 · etcd-io/etcd · GitHub

配置 - etcd官方文档中文版 (gitbook.io)

k8s-二进制安装v1.25.8 - du-z - 博客园 (cnblogs.com)

二进制安装Kubernetes(k8s) v1.25.0 IPv4/IPv6双栈-阿里云开发者社区 (aliyun.com)

Use ansible to update the configuration file on every node; use shell or ansible variables to substitute the IPs and names.

master01 is shown as the example; master02 and master03 need the corresponding changes.

Parameter | Section / env var | Default -> changed | Planned value
name: | ETCD_NAME | hostname | master01
data-dir: | ETCD_DATA_DIR | /var/lib/etcd | /var/lib/etcd
listen-peer-urls | ETCD_LISTEN_PEER_URLS | https://ip:2380 | https://192.168.164.11:2380
listen-client-urls | ETCD_LISTEN_CLIENT_URLS | http://ip:2379 -> https://ip:2379 | https://192.168.164.11:2379
initial-advertise-peer-urls | ETCD_INITIAL_ADVERTISE_PEER_URLS | https://ip:2380 | "https://192.168.164.11:2380"
advertise-client-urls | ETCD_ADVERTISE_CLIENT_URLS | https://ip:2379 | https://192.168.164.11:2379
initial-cluster | ETCD_INITIAL_CLUSTER | name=https://ip:2380 per node | 'master01=https://192.168.164.11:2380,master02=https://192.168.164.12:2380,master03=https://192.168.164.13:2380'
initial-cluster-state | ETCD_INITIAL_CLUSTER_STATE | new (create) / existing (join) | new
cert-file: | client-transport-security: | | /etc/etcd/pki/etcd_server.crt
key-file: | client-transport-security: | | /etc/etcd/pki/etcd_server.key
client-cert-auth: | client-transport-security: | false -> true | true
trusted-ca-file | client-transport-security: | | /etc/kubernetes/pki/ca.crt
auto-tls | client-transport-security: | false -> true | true
cert-file | peer-transport-security: | | /etc/etcd/pki/etcd_server.crt
key-file | peer-transport-security: | | /etc/etcd/pki/etcd_server.key
client-cert-auth | peer-transport-security: | false -> true | true
trusted-ca-file | peer-transport-security: | | /etc/kubernetes/pki/ca.crt
auto-tls | peer-transport-security: | false -> true | true

This configuration has NOT been tested

# This is the configuration file for the etcd server.
# Reference: https://doczhcn.gitbook.io/etcd/index/index-1/configuration

# Human-readable name for this member.
# Use the hostname; must be unique. Env var: ETCD_NAME
name: "master01"

# Path to the data directory.
# Data directory; must match etcd.service. Env var: ETCD_DATA_DIR
data-dir: /var/lib/etcd

# Path to the dedicated wal directory.
# Env var: ETCD_WAL_DIR
wal-dir:

# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000

# Time (in milliseconds) of a heartbeat interval.
# Env var: ETCD_HEARTBEAT_INTERVAL
heartbeat-interval: 100

# Time (in milliseconds) for an election to timeout.
# Env var: ETCD_ELECTION_TIMEOUT
election-timeout: 1000

# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 0

# List of comma separated URLs to listen on for peer traffic.
# Env var: ETCD_LISTEN_PEER_URLS
listen-peer-urls: "https://192.168.164.11:2380"

# List of comma separated URLs to listen on for client traffic.
# Env var: ETCD_LISTEN_CLIENT_URLS
listen-client-urls: "https://192.168.164.11:2379"

# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5

# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5

# Comma-separated white list of origins for CORS (cross-origin resource sharing).
cors:

# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
# Env var: ETCD_INITIAL_ADVERTISE_PEER_URLS
initial-advertise-peer-urls: "https://192.168.164.11:2380"

# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: https://192.168.164.11:2379

# Discovery URL used to bootstrap the cluster.
discovery:

# Valid values include 'exit', 'proxy'
discovery-fallback: "proxy"

# HTTP proxy to use for traffic to discovery service.
discovery-proxy:

# DNS domain used to bootstrap initial cluster.
discovery-srv:

# Initial cluster configuration for bootstrapping.
# Env var: ETCD_INITIAL_CLUSTER
initial-cluster: "master01=https://192.168.164.11:2380,master02=https://192.168.164.12:2380,master03=https://192.168.164.13:2380"

# Initial cluster token for the etcd cluster during bootstrap.
# Env var: ETCD_INITIAL_CLUSTER_TOKEN
initial-cluster-token: "etcd-cluster"

# Initial cluster state ('new' or 'existing').
# Env var: ETCD_INITIAL_CLUSTER_STATE. 'new' to create a cluster, 'existing' to join one.
initial-cluster-state: "new"

# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false

# Accept etcd V2 client requests
enable-v2: true

# Enable runtime profiling data via HTTP server
enable-pprof: true

# Valid values include 'on', 'readonly', 'off'
proxy: "off"

# Time (in milliseconds) an endpoint will be held in a failed state.
proxy-failure-wait: 5000

# Time (in milliseconds) of the endpoints refresh interval.
proxy-refresh-interval: 30000

# Time (in milliseconds) for a dial to timeout.
proxy-dial-timeout: 1000

# Time (in milliseconds) for a write to timeout.
proxy-write-timeout: 5000

# Time (in milliseconds) for a read to timeout.
proxy-read-timeout: 0

client-transport-security:
  # Reference: https://doczhcn.gitbook.io/etcd/index/index-1/security
  # Path to the client server TLS cert file.
  cert-file: /etc/etcd/pki/etcd_server.crt

  # Path to the client server TLS key file.
  key-file: /etc/etcd/pki/etcd_server.key

  # Enable client cert authentication.
  client-cert-auth: true

  # Path to the client server TLS trusted CA cert file.
  trusted-ca-file: /etc/kubernetes/pki/ca.crt

  # Client TLS using generated certificates
  auto-tls: true

peer-transport-security:
  # Path to the peer server TLS cert file.
  cert-file: /etc/etcd/pki/etcd_server.crt

  # Path to the peer server TLS key file.
  key-file: /etc/etcd/pki/etcd_server.key

  # Enable peer client cert authentication.
  client-cert-auth: true

  # Path to the peer server TLS trusted CA cert file.
  trusted-ca-file: /etc/kubernetes/pki/ca.crt

  # Peer TLS using generated certificates.
  auto-tls: true

# Enable debug-level logging for etcd.
debug: false

logger: zap

# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]

# Force to create a new one member cluster.
force-new-cluster: false

auto-compaction-mode: periodic
auto-compaction-retention: "1"

Tested configuration: /etc/etcd/etcd.conf

ETCD_NAME=master03
ETCD_DATA_DIR=/var/lib/etcd

# [Cluster Flags]
# ETCD_AUTO_COMPACTION_RETENTION=0


ETCD_LISTEN_PEER_URLS=https://192.168.164.13:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.164.13:2380

ETCD_LISTEN_CLIENT_URLS=https://192.168.164.13:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.164.13:2379

ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER="master01=https://192.168.164.11:2380,master02=https://192.168.164.12:2380,master03=https://192.168.164.13:2380"

# [Proxy Flags]
ETCD_PROXY=off

# [Security flags]
# etcd server certificate and private key
ETCD_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_CLIENT_CERT_AUTH=true

# certificate and private key for etcd peer communication
ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key
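Only the node name and IP differ between the three masters, so the tested etcd.conf above can be rendered per node from a small template rather than edited by hand three times. A sketch (node names and IPs are the ones planned above; the output directory here is illustrative):

```shell
# Shared cluster membership string, identical on every node
cluster="master01=https://192.168.164.11:2380,master02=https://192.168.164.12:2380,master03=https://192.168.164.13:2380"

# Render one /etc/etcd/etcd.conf from a node name and IP
render_etcd_conf() {
  local name=$1 ip=$2
  cat <<EOF
ETCD_NAME=$name
ETCD_DATA_DIR=/var/lib/etcd

ETCD_LISTEN_PEER_URLS=https://$ip:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://$ip:2380

ETCD_LISTEN_CLIENT_URLS=https://$ip:2379
ETCD_ADVERTISE_CLIENT_URLS=https://$ip:2379

ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER="$cluster"

ETCD_PROXY=off

ETCD_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_CLIENT_CERT_AUTH=true

ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key
EOF
}

out=$(mktemp -d)
render_etcd_conf master01 192.168.164.11 > "$out/etcd.conf.master01"
render_etcd_conf master02 192.168.164.12 > "$out/etcd.conf.master02"
render_etcd_conf master03 192.168.164.13 > "$out/etcd.conf.master03"
```

The rendered files can then be pushed to each master with the ansible copy module, one file per host.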

Sync the pki files to all nodes

Path | Purpose | Contents
/etc/kubernetes/pki | kubernetes CA root certificate | ca.crt, ca.key
/etc/etcd | etcd configuration and certificates | etcd.conf.yml, pki/
/etc/etcd/pki | etcd certificates | etcd_client.crt, etcd_client.csr, etcd_client.key, etcd_server.crt, etcd_server.csr, etcd_server.key, etcd_ssl.cnf
/var/lib/etcd | etcd data directory |
/usr/bin/etcd | etcd binary |
/usr/bin/etcdctl | etcdctl binary |
/usr/lib/systemd/system/etcd.service | systemd unit managing etcd |

Start the service

ansible k8sm -m systemd -a "name=etcd state=restarted enabled=yes"

Check the cluster health

etcdctl --cacert="/etc/kubernetes/pki/ca.crt" \
--cert="/etc/etcd/pki/etcd_client.crt" \
--key="/etc/etcd/pki/etcd_client.key" \
--endpoints=https://192.168.164.11:2379,https://192.168.164.12:2379,https://192.168.164.13:2379 endpoint health -w table

Error:

Troubleshooting checklist

Is firewalld stopped and disabled?

Is selinux permissive, and has /etc/selinux/config been updated?

Were the CA certificates created successfully, with no mistyped commands or wrong parameters?

Is etcd_ssl.cnf configured correctly, and are IP.1 + IP.2 + IP.3 all listed?

Is the CA certificate the same file everywhere? It is generated on one host and copied to the others, so check that the copy succeeded and the files are identical.

Has etcd.conf been edited on each node with that node's own values?

Is the verification command typed correctly: https, not http?

3. Building the Kubernetes high-availability cluster

kubernetes/CHANGELOG-1.20.md at v1.20.13 · kubernetes/kubernetes · GitHub

Download version 1.20.10

Software preparation and deployment

# Push the software to the nodes
ansible k8s -m unarchive -a "src=/var/ftp/localrepo/k8s/hk8s/kubernetes-server-linux-amd64.tar.gz dest=~ copy=yes mode=0755"

# Check that the package contents are complete
ansible k8sm -m shell -a "ls -l /root/kubernetes/server/bin"
File | Description
kube-apiserver | kube-apiserver binary
kube-apiserver.docker_tag | tag of the kube-apiserver docker image
kube-apiserver.tar | kube-apiserver docker image file
kube-controller-manager | kube-controller-manager binary
kube-controller-manager.docker_tag | tag of the kube-controller-manager docker image
kube-controller-manager.tar | kube-controller-manager docker image file
kube-scheduler | kube-scheduler binary
kube-scheduler.docker_tag | tag of the kube-scheduler docker image
kube-scheduler.tar | kube-scheduler docker image file
kubelet | kubelet binary
kube-proxy | kube-proxy binary
kube-proxy.docker_tag | tag of the kube-proxy docker image
kube-proxy.tar | kube-proxy docker image file
kubectl | command-line client tool
kubeadm | cluster installation tool
apiextensions-apiserver | extension API server implementing custom resources
kube-aggregator | aggregated API server

Deploy the relevant binaries to /usr/bin on the masters and the nodes

# Deploy the components on the masters
ansible k8sm -m shell -a "cp -r /root/kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} /usr/bin"

# Deploy the components on the nodes
ansible k8ss -m shell -a "cp -r /root/kubernetes/server/bin/kube{let,-proxy} /usr/bin"

# Verify on the masters
ansible k8sm -m shell -a "ls -l /usr/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} "

# Verify on the nodes
ansible k8ss -m shell -a "ls -l /usr/bin/kube{let,-proxy} "

3.1 kube-apiserver

Deploying the kube-apiserver service -- CA certificate configuration

Run the commands on master01

File path: /etc/kubernetes/pki

The commands:

openssl genrsa -out apiserver.key 2048

openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=192.168.164.11" -out apiserver.csr

openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt

master_ssl.cnf contents

[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 169.169.0.1
IP.2 = 192.168.164.12
IP.3 = 192.168.164.13
IP.4 = 192.168.164.11
IP.5 = 192.168.164.200
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = master01
DNS.6 = master02
DNS.7 = master03
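Given the certificate problems described later, it is worth confirming that every address a client might use (each master IP, the VIP, and the service DNS names) really ends up in the signed certificate. A self-contained sketch using a throwaway CA in a scratch directory (only openssl is assumed; the real files live in /etc/kubernetes/pki):

```shell
work=$(mktemp -d); cd "$work"
cat > master_ssl.cnf <<'EOF'
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 169.169.0.1
IP.2 = 192.168.164.12
IP.3 = 192.168.164.13
IP.4 = 192.168.164.11
IP.5 = 192.168.164.200
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = master01
DNS.6 = master02
DNS.7 = master03
EOF

# Throwaway CA standing in for /etc/kubernetes/pki/ca.{key,crt}
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=ca" -days 1 -out ca.crt

# Same apiserver-certificate commands as above
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf \
  -subj "/CN=192.168.164.11" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 1 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt

# Every IP and DNS name from alt_names must appear here
openssl x509 -noout -text -in apiserver.crt | grep -A1 "Subject Alternative Name"
```

A missing SAN entry (for example a master's own IP or the 192.168.164.200 VIP) shows up later as TLS verification failures when clients connect through that address.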

Configure kube-apiserver.service

cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=always
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

cat /etc/kubernetes/apiserver

KUBE_API_ARGS="--insecure-port=0  \
--secure-port=6443  \
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt  \
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key  \
--client-ca-file=/etc/kubernetes/pki/ca.crt  \
--apiserver-count=3 --endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.164.11:2379,https://192.168.164.12:2379,https://192.168.164.13:2379 \
--etcd-cafile=/etc/kubernetes/pki/ca.crt  \
--etcd-certfile=/etc/etcd/pki/etcd_client.crt  \
--etcd-keyfile=/etc/etcd/pki/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16  \
--service-node-port-range=30000-32767  \
--allow-privileged=true  \
--logtostderr=false  --log-dir=/var/log/kubernetes --v=0"

systemctl stop kube-apiserver  && systemctl daemon-reload && systemctl restart kube-apiserver && systemctl status kube-apiserver

ansible k8sm -m shell  -a " systemctl daemon-reload && systemctl restart kube-apiserver && systemctl status kube-apiserver "

3.1.1 Resolving the errors via /var/log/messages

cat /var/log/messages|grep kube-apiserver|grep -i error

error='no default routes found in "/proc/net/route" or "/proc/net/ipv6_route"'. Try to set the AdvertiseAddress directly or provide a valid BindAddress to fix this.

The virtual machine needs a default gateway configured.

Error: [--etcd-servers must be specified, service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags]

Possibly a version issue: using openssl for the certificate setup is discouraged, and upstream guides use the cfssl tool instead. Re-downloading Kubernetes 1.19 and repeating the exercise resolved the "service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags" error, but that is not the root cause.

The other cause:

The certificate configuration was wrong: each master needs its own server certificate configured; the certificate files cannot simply be shared between the masters.

3.2 Creating the controller + scheduler + kubelet + kube-proxy certificates

kube-controller-manager, kube-scheduler, kubelet and kube-proxy are all clients of the apiserver.

kube-controller-manager + kube-scheduler + kubelet + kube-proxy can each be given their own client certificate for connecting to kube-apiserver; below, a single shared certificate is created for all of them as an example.

Create the certificate with openssl, place it in /etc/kubernetes/pki/, and copy it to the other servers in the cluster.

-subj "/CN=admin" identifies the user name the client presents to kube-apiserver.

cd   /etc/kubernetes/pki

openssl genrsa -out client.key 2048

openssl req -new -key client.key -subj "/CN=admin" -out client.csr

openssl x509 -req -in client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out client.crt -days 36500

 

The kubeconfig file

Create the kubeconfig file that clients need in order to connect to the kube-apiserver service

kube-controller-manager, kube-scheduler, kubelet, kube-proxy and kubectl all use the same file to connect to kube-apiserver

The file is stored under /etc/kubernetes

# Copy the files to the jump host
scp -r ./client.* root@192.168.164.16:/root/ansible/hk8s/

Official docs: Organizing Cluster Access Using kubeconfig Files | Kubernetes

PKI certificates and requirements | Kubernetes

Configure Access to Multiple Clusters | Kubernetes

apiVersion: v1
kind: Config
clusters:
  - name: default
    cluster:
      server: https://192.168.164.200:9443 # virtual IP (haproxy address) + haproxy listen port
      certificate-authority: /etc/kubernetes/pki/ca.crt

users:
  - name: admin # user name used to connect to the apiserver
    user:
      client-certificate: /etc/kubernetes/pki/client.crt
      client-key: /etc/kubernetes/pki/client.key

contexts:
  - name: default
    context:
      cluster: default
      user: admin # user name used to connect to the apiserver
current-context: default
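The same file can be generated from shell variables, so the VIP or user name only needs changing in one place. A sketch (the 192.168.164.200:9443 endpoint and the /etc/kubernetes/pki paths are the ones assumed in this guide; the output path here is a temp file for illustration):

```shell
# Endpoint and identity for the generated kubeconfig
APISERVER=https://192.168.164.200:9443   # haproxy VIP + listen port
CLUSTER_USER=admin
out=$(mktemp)

cat > "$out" <<EOF
apiVersion: v1
kind: Config
clusters:
  - name: default
    cluster:
      server: $APISERVER
      certificate-authority: /etc/kubernetes/pki/ca.crt
users:
  - name: $CLUSTER_USER
    user:
      client-certificate: /etc/kubernetes/pki/client.crt
      client-key: /etc/kubernetes/pki/client.key
contexts:
  - name: default
    context:
      cluster: default
      user: $CLUSTER_USER
current-context: default
EOF
echo "wrote $out"
```

The generated file can then be distributed to /etc/kubernetes/kubeconfig on all nodes with the ansible copy module, exactly as in the commands below.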

Deploy the files with ansible

ansible k8s -m shell -a "ls -l /etc/kubernetes/kubeconfig"

ansible k8s -m copy -a "src=/root/ansible/hk8s/kubeconfig dest=/etc/kubernetes/"

ansible k8ss,192.168.164.12,192.168.164.13 -m copy -a "src=/root/ansible/hk8s/client.csr  dest=/etc/kubernetes/pki/" >> /dev/null

ansible k8ss,192.168.164.12,192.168.164.13 -m copy -a "src=/root/ansible/hk8s/client.crt  dest=/etc/kubernetes/pki/" >> /dev/null

ansible k8ss,192.168.164.12,192.168.164.13 -m copy -a "src=/root/ansible/hk8s/client.key  dest=/etc/kubernetes/pki/" >> /dev/null

3.3 kube-controller-manager

Deploying the kube-controller-manager service

Place the kube-controller-manager.service unit under /usr/lib/systemd/system/

kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
 
[Install]
WantedBy=multi-user.target

/etc/kubernetes/controller-manager (the EnvironmentFile referenced by the unit above)

KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--service-account-private-key-file=/etc/kubernetes/pki/apiserver.key \
--root-ca-file=/etc/kubernetes/pki/ca.crt \
--v=0"

Push with ansible

ansible k8sm -m copy -a "src=./kube-controller-manager/controller-manager  dest=/etc/kubernetes/ "

ansible k8sm -m copy -a "src=./kube-controller-manager/kube-controller-manager.service dest=/usr/lib/systemd/system/ "

Start the service

systemctl daemon-reload && systemctl start kube-controller-manager && systemctl status kube-controller-manager && systemctl enable --now kube-controller-manager

ansible k8sm -m shell -a "systemctl daemon-reload && systemctl enable --now kube-controller-manager && systemctl status kube-controller-manager"

error: KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

3.4 kube-scheduler

Configured in the same way

Place the kube-scheduler.service unit under /usr/lib/systemd/system/

Place scheduler under /etc/kubernetes/

kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=always
 
[Install]
WantedBy=multi-user.target

scheduler

KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--v=0"

ansible commands

ansible k8sm -m copy -a "src=./kube-scheduler/kube-scheduler.service  dest=/usr/lib/systemd/system/ "

ansible k8sm -m copy -a "src=./kube-scheduler/scheduler  dest=/etc/kubernetes/ "

ansible k8sm -m shell -a "systemctl daemon-reload && systemctl start kube-scheduler && systemctl status kube-scheduler"

Links

Tags · etcd-io/etcd · GitHub

v3.4 docs | etcd

Introduction - etcd官方文档中文版 (gitbook.io)

 二进制安装Kubernetes(k8s) v1.25.0 IPv4/IPv6双栈-阿里云开发者社区 (aliyun.com)

This article is a user contribution and represents only the author's views. If reproduced, please cite the source: http://www.coloradmin.cn/o/543195.html

