Deploying a Kubernetes Cluster with Kubespray


1. Introduction to Kubespray

Kubespray is an open-source project for deploying production-grade Kubernetes clusters; it uses Ansible as its deployment tool.

  • Can be deployed on AWS, GCE, Azure, OpenStack, vSphere, Packet (bare metal),

    Oracle Cloud Infrastructure (experimental), or bare metal

  • Highly available clusters

  • Composable components (for example, your choice of network plugin)

  • Supports the most popular Linux distributions

  • Continuous-integration tests

Official site: https://kubespray.io

Project repository: https://github.com/kubernetes-sigs/kubespray

2. Online Deployment

China's network environment makes using kubespray particularly difficult: some images have to be pulled from gcr.io and some binaries downloaded from GitHub, so you may want to download and import them in advance.

Note: a highly available etcd deployment requires 3 nodes, so an HA cluster needs at least 3 nodes.

kubespray needs a deployment node, which can be any node of the cluster. Here kubespray is installed on the first master node (192.168.54.211), and all subsequent steps are executed there.

2.1 Environment Preparation

1. Server planning

IP               hostname
192.168.54.211   master
192.168.54.212   slave1
192.168.54.213   slave2

2. Set the hostname

# Run on each of the three hosts, respectively
$ hostnamectl set-hostname master
$ hostnamectl set-hostname slave1
$ hostnamectl set-hostname slave2
# Check the current hostname
$ hostname

3. Map IP addresses to hostnames

# Run on all three hosts
$ cat >> /etc/hosts << EOF
192.168.54.211 master
192.168.54.212 slave1
192.168.54.213 slave2
EOF

2.2 Download Kubespray

# Run on the master node
# Download the official release
wget https://github.com/kubernetes-sigs/kubespray/archive/v2.16.0.tar.gz -O kubespray-2.16.0.tar.gz
tar -zxvf kubespray-2.16.0.tar.gz
# Or clone the repository directly
git clone https://github.com/kubernetes-sigs/kubespray.git -b v2.16.0 --depth=1

2.3 Install Dependencies

# Run on the master node
cd kubespray-2.16.0/
yum install -y epel-release python3-pip
pip3 install -r requirements.txt

If errors occur:

# Error 1
Complete output from command python setup.py egg_info:

	=============================DEBUG ASSISTANCE==========================
	If you are seeing an error here please try the following to
	successfully install cryptography:

	Upgrade to the latest pip and try again. This will fix errors for most
	users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
	=============================DEBUG ASSISTANCE==========================

Traceback (most recent call last):
	File "<string>", line 1, in <module>
    File "/tmp/pip-build-3w9d_1bk/cryptography/setup.py", line 17, in <module>
    from setuptools_rust import RustExtension
    ModuleNotFoundError: No module named 'setuptools_rust'

----------------------------------------

# Fix
pip3 install --upgrade cryptography==3.2
# Error 2
Exception: command 'gcc' failed with exit status 1

# Fix
# python2
yum install gcc libffi-devel python-devel openssl-devel -y
# python3
yum install gcc libffi-devel python3-devel openssl-devel -y
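
In many cases both errors can also be avoided by upgrading pip and setuptools before installing the requirements (an optional, minimal sketch; adjust to your environment):

# Run on the master node (optional alternative fix)
pip3 install --upgrade pip setuptools
pip3 install -r requirements.txt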

2.4 Update the Ansible Inventory

Update the Ansible inventory file; the IPS array holds the internal IPs of the 3 instances:

# Run on the master node
[root@master kubespray-2.16.0]# cp -rfp inventory/sample inventory/mycluster
[root@master kubespray-2.16.0]# declare -a IPS=( 192.168.54.211 192.168.54.212 192.168.54.213)
[root@master kubespray-2.16.0]# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
DEBUG: Adding group all
DEBUG: Adding group kube_control_plane
DEBUG: Adding group kube_node
DEBUG: Adding group etcd
DEBUG: Adding group k8s_cluster
DEBUG: Adding group calico_rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube_control_plane
DEBUG: adding host node2 to group kube_control_plane
DEBUG: adding host node1 to group kube_node
DEBUG: adding host node2 to group kube_node
DEBUG: adding host node3 to group kube_node

2.5 Modify Node Information

Review the auto-generated hosts.yaml. Kubespray plans the node roles automatically based on the number of nodes provided: here 2 nodes are deployed as masters, all 3 nodes also serve as worker nodes, and all 3 nodes host etcd.

# Run on the master node
[root@master kubespray-2.16.0]# cat inventory/mycluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 192.168.54.211
      ip: 192.168.54.211
      access_ip: 192.168.54.211
    node2:
      ansible_host: 192.168.54.212
      ip: 192.168.54.212
      access_ip: 192.168.54.212
    node3:
      ansible_host: 192.168.54.213
      ip: 192.168.54.213
      access_ip: 192.168.54.213
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

Modify the inventory/mycluster/hosts.yaml file:

# Run on the master node
[root@master kubespray-2.16.0]# vim inventory/mycluster/hosts.yaml
all:
  hosts:
    master:
      ansible_host: 192.168.54.211
      ip: 192.168.54.211
      access_ip: 192.168.54.211
    slave1:
      ansible_host: 192.168.54.212
      ip: 192.168.54.212
      access_ip: 192.168.54.212
    slave2:
      ansible_host: 192.168.54.213
      ip: 192.168.54.213
      access_ip: 192.168.54.213
  children:
    kube-master:
      hosts:
        master:
        slave1:
    kube-node:
      hosts:
        master:
        slave1:
        slave2:
    etcd:
      hosts:
        master:
        slave1:
        slave2:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

2.6 Global Environment Variables (defaults are fine)

[root@master kubespray-2.16.0]# cat inventory/mycluster/group_vars/all/all.yml
---
## Directory where etcd data stored
etcd_data_dir: /var/lib/etcd

## Experimental kubeadm etcd deployment mode. Available only for new deployment
etcd_kubeadm_enabled: false

## Directory where the binaries will be installed
bin_dir: /usr/local/bin

## The access_ip variable is used to define how other nodes should access
## the node.  This is used in flannel to allow other flannel nodes to see
## this node for example.  The access_ip is really useful AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but don't know about that address themselves.
# access_ip: 1.1.1.1


## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
# loadbalancer_apiserver:
#   address: 1.2.3.4
#   port: 1234

## Internal loadbalancers for apiservers
# loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
# loadbalancer_apiserver_type: nginx  # valid values "nginx" or "haproxy"

## If the cilium is going to be used in strict mode, we can use the
## localhost connection and not use the external LB. If this parameter is
## not specified, the first node to connect to kubeapi will be used.
# use_localhost_as_kubeapi_loadbalancer: true

## Local loadbalancer should use this port
## And must be set port 6443
loadbalancer_apiserver_port: 6443

## If loadbalancer_apiserver_healthcheck_port variable defined, enables proxy liveness check for nginx.
loadbalancer_apiserver_healthcheck_port: 8081

### OTHER OPTIONAL VARIABLES

## Upstream dns servers
# upstream_dns_servers:
#   - 8.8.8.8
#   - 8.8.4.4

## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', 'oci', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using openstack-client before starting the playbook.
# cloud_provider:

## When cloud_provider is set to 'external', you can set the cloud controller to deploy
## Supported cloud controllers are: 'openstack' and 'vsphere'
## When openstack or vsphere are used make sure to source in the required fields
# external_cloud_provider:

## Set these proxy values in order to update package manager and docker daemon to use proxies
# http_proxy: ""
# https_proxy: ""

## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
# no_proxy: ""

## Some problems may occur when downloading files over https proxy due to ansible bug
## https://github.com/ansible/ansible/issues/32750. Set this variable to False to disable
## SSL validation of get_url module. Note that kubespray will still be performing checksum validation.
# download_validate_certs: False

## If you need exclude all cluster nodes from proxy and other resources, add other resources here.
# additional_no_proxy: ""

## If you need to disable proxying of os package repositories but are still behind an http_proxy set
## skip_http_proxy_on_os_packages to true
## This will cause kubespray not to set proxy environment in /etc/yum.conf for centos and in /etc/apt/apt.conf for debian/ubuntu
## Special information for debian/ubuntu - you have to set the no_proxy variable, then apt package will install from your source of wish
# skip_http_proxy_on_os_packages: false

## Since workers are included in the no_proxy variable by default, docker engine will be restarted on all nodes (all
## pods will restart) when adding or removing workers.  To override this behaviour by only including master nodes in the
## no_proxy variable, set below to true:
no_proxy_exclude_workers: false

## Certificate Management
## This setting determines whether certs are generated via scripts.
## Chose 'none' if you provide your own certificates.
## Option is  "script", "none"
# cert_management: script

## Set to true to allow pre-checks to fail and continue deployment
# ignore_assert_errors: false

## The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
# kube_read_only_port: 10255

## Set true to download and cache container
# download_container: true

## Deploy container engine
# Set false if you want to deploy container engine manually.
# deploy_container_engine: true

## Red Hat Enterprise Linux subscription registration
## Add either RHEL subscription Username/Password or Organization ID/Activation Key combination
## Update RHEL subscription purpose usage, role and SLA if necessary
# rh_subscription_username: ""
# rh_subscription_password: ""
# rh_subscription_org_id: ""
# rh_subscription_activation_key: ""
# rh_subscription_usage: "Development"
# rh_subscription_role: "Red Hat Enterprise Server"
# rh_subscription_sla: "Self-Support"

## Check if access_ip responds to ping. Set false if your firewall blocks ICMP.
# ping_access_ip: true

2.7 Modify the Cluster Installation Configuration

The default Kubernetes version is rather old, so specify the version explicitly:

# Run on the master node
[root@master kubespray-2.16.0]# vim inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.20.7

If you have other requirements, adjust inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml as needed.
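
For reference, a few of the settings commonly reviewed in that file are sketched below; the values shown are this release's defaults as far as I recall, so verify them against your own copy:

# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml (illustrative excerpt)
kube_network_plugin: calico             # network plugin (calico, flannel, cilium, ...)
kube_service_addresses: 10.233.0.0/18   # Service network CIDR
kube_pods_subnet: 10.233.64.0/18        # Pod network CIDR
kube_proxy_mode: ipvs                   # kube-proxy mode (ipvs or iptables)
container_manager: docker               # container runtime (docker or containerd)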

2.8 Kubernetes Cluster Addons

Addons such as the Kubernetes Dashboard and ingress controllers are configured in the following file:

$ vim inventory/mycluster/group_vars/k8s_cluster/addons.yml

No changes are made to that file here.
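
If you later do want addons, the toggles in addons.yml look roughly like the excerpt below (a sketch based on the kubespray sample file; confirm the exact keys in your release):

# inventory/mycluster/group_vars/k8s_cluster/addons.yml (illustrative excerpt)
dashboard_enabled: true        # Kubernetes Dashboard
helm_enabled: true             # Helm
metrics_server_enabled: true   # metrics-server
ingress_nginx_enabled: true    # NGINX ingress controller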

2.9 Passwordless SSH

Configure passwordless SSH so that the kubespray/Ansible node can reach every node without a password.

# Run on the master node
ssh-keygen
ssh-copy-id 192.168.54.211
ssh-copy-id 192.168.54.212
ssh-copy-id 192.168.54.213
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2
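
Before running the playbook, you can optionally confirm that Ansible reaches every node using the inventory generated earlier:

# Run on the master node (optional connectivity check)
ansible -i inventory/mycluster/hosts.yaml all -m ping -b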

2.10 Change the Image and Download Sources

# Run on the master node
[root@master kubespray-2.16.0]# cat > inventory/mycluster/group_vars/k8s_cluster/vars.yml << EOF
gcr_image_repo: "registry.aliyuncs.com/google_containers"
kube_image_repo: "registry.aliyuncs.com/google_containers"
etcd_download_url: "https://ghproxy.com/https://github.com/coreos/etcd/releases/download/{{ etcd_version }}/etcd-{{ etcd_version }}-linux-{{ image_arch }}.tar.gz"
cni_download_url: "https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/{{ cni_version }}/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
calicoctl_download_url: "https://ghproxy.com/https://github.com/projectcalico/calicoctl/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}"
calico_crds_download_url: "https://ghproxy.com/https://github.com/projectcalico/calico/archive/{{ calico_version }}.tar.gz"
crictl_download_url: "https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/{{ crictl_version }}/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
nodelocaldns_image_repo: "cncamp/k8s-dns-node-cache"
dnsautoscaler_image_repo: "cncamp/cluster-proportional-autoscaler-amd64"
EOF

2.11 Install the Cluster

Run the kubespray playbook to install the cluster:

# Run on the master node
[root@master kubespray-2.16.0]# ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root cluster.yml

Many binaries and images are downloaded during installation.

Output like the following indicates a successful run:

PLAY RECAP *************************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
master                     : ok=584  changed=109  unreachable=0    failed=0    skipped=1160 rescued=0    ignored=1
slave1                     : ok=520  changed=97   unreachable=0    failed=0    skipped=1008 rescued=0    ignored=0
slave2                     : ok=438  changed=76   unreachable=0    failed=0    skipped=678  rescued=0    ignored=0

Saturday 31 December 2022  20:07:57 +0800 (0:00:00.060)       0:59:12.196 *****
===============================================================================
container-engine/docker : ensure docker packages are installed ----------------------------------------------- 2180.79s
kubernetes/preinstall : Install packages requirements --------------------------------------------------------- 487.24s
download_file | Download item ---------------------------------------------------------------------------------- 58.95s
download_file | Download item ---------------------------------------------------------------------------------- 50.40s
download_container | Download image if required ---------------------------------------------------------------- 44.25s
download_file | Download item ---------------------------------------------------------------------------------- 42.65s
download_container | Download image if required ---------------------------------------------------------------- 38.06s
download_container | Download image if required ---------------------------------------------------------------- 32.38s
kubernetes/kubeadm : Join to cluster --------------------------------------------------------------------------- 32.29s
download_container | Download image if required ---------------------------------------------------------------- 30.67s
download_file | Download item ---------------------------------------------------------------------------------- 25.82s
kubernetes/control-plane : Joining control plane node to the cluster. ------------------------------------------ 25.60s
download_container | Download image if required ---------------------------------------------------------------- 25.34s
download_container | Download image if required ---------------------------------------------------------------- 22.49s
kubernetes/control-plane : kubeadm | Initialize first master --------------------------------------------------- 20.90s
download_container | Download image if required ---------------------------------------------------------------- 20.14s
download_file | Download item ---------------------------------------------------------------------------------- 19.50s
download_container | Download image if required ---------------------------------------------------------------- 17.84s
download_container | Download image if required ---------------------------------------------------------------- 13.96s
download_container | Download image if required ---------------------------------------------------------------- 13.31s

2.12 View the Cluster

# Run on the master node
[root@master ~]# kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
master   Ready    control-plane,master   10m     v1.20.7   192.168.54.211   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.15
slave1   Ready    control-plane,master   9m38s   v1.20.7   192.168.54.212   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.15
slave2   Ready    <none>                 8m40s   v1.20.7   192.168.54.213   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://19.3.15
[root@master ~]# kubectl -n kube-system get pods
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7c5b64bf96-wtmxn   1/1     Running   0          8m41s
calico-node-c6rr6                          1/1     Running   0          9m6s
calico-node-l59fj                          1/1     Running   0          9m6s
calico-node-n9tg6                          1/1     Running   0          9m6s
coredns-f944c7f7c-n2wzp                    1/1     Running   0          8m26s
coredns-f944c7f7c-x2tfl                    1/1     Running   0          8m22s
dns-autoscaler-557bfb974d-6cbtk            1/1     Running   0          8m24s
kube-apiserver-master                      1/1     Running   0          10m
kube-apiserver-slave1                      1/1     Running   0          10m
kube-controller-manager-master             1/1     Running   0          10m
kube-controller-manager-slave1             1/1     Running   0          10m
kube-proxy-czk9s                           1/1     Running   0          9m17s
kube-proxy-gwfc8                           1/1     Running   0          9m17s
kube-proxy-tkxlf                           1/1     Running   0          9m17s
kube-scheduler-master                      1/1     Running   0          10m
kube-scheduler-slave1                      1/1     Running   0          10m
nginx-proxy-slave2                         1/1     Running   0          9m18s
nodelocaldns-4vd75                         1/1     Running   0          8m23s
nodelocaldns-cr5gg                         1/1     Running   0          8m23s
nodelocaldns-pmgqx                         1/1     Running   0          8m23s
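
As an optional smoke test, you can run a throwaway Deployment and check that its Pods are scheduled and become Ready (the names below are only examples):

# Run on the master node (optional smoke test)
kubectl create deployment nginx-test --image=nginx:1.19 --replicas=2
kubectl get pods -l app=nginx-test -o wide
kubectl delete deployment nginx-test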

2.13 View the Installed Images

# Run on the master node
[root@master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.7             ff54c88b8ecf        19 months ago       118MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.7             22d1a2072ec7        19 months ago       116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.7             034671b24f0f        19 months ago       122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.7             38f903b54010        19 months ago       47.3MB
nginx                                                             1.19                f0b8a9a54136        19 months ago       133MB
quay.io/calico/node                                               v3.17.4             4d9399da41dc        20 months ago       165MB
quay.io/calico/cni                                                v3.17.4             f3abd83bc819        20 months ago       128MB
quay.io/calico/kube-controllers                                   v3.17.4             c623a89d3672        20 months ago       52.2MB
cncamp/k8s-dns-node-cache                                         1.17.1              21fc69048bd5        22 months ago       123MB
quay.io/coreos/etcd                                               v3.4.13             d1985d404385        2 years ago         83.8MB
cncamp/cluster-proportional-autoscaler-amd64                      1.8.3               078b6f04135f        2 years ago         40.6MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0               bfe3a36ebd25        2 years ago         45.2MB
registry.aliyuncs.com/google_containers/pause                     3.3                 0184c1613d92        2 years ago         683kB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        2 years ago         683kB
# Run on slave1
[root@slave1 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.7             ff54c88b8ecf        19 months ago       118MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.7             034671b24f0f        19 months ago       122MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.7             22d1a2072ec7        19 months ago       116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.7             38f903b54010        19 months ago       47.3MB
nginx                                                             1.19                f0b8a9a54136        19 months ago       133MB
quay.io/calico/node                                               v3.17.4             4d9399da41dc        20 months ago       165MB
quay.io/calico/cni                                                v3.17.4             f3abd83bc819        20 months ago       128MB
quay.io/calico/kube-controllers                                   v3.17.4             c623a89d3672        20 months ago       52.2MB
cncamp/k8s-dns-node-cache                                         1.17.1              21fc69048bd5        22 months ago       123MB
quay.io/coreos/etcd                                               v3.4.13             d1985d404385        2 years ago         83.8MB
cncamp/cluster-proportional-autoscaler-amd64                      1.8.3               078b6f04135f        2 years ago         40.6MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0               bfe3a36ebd25        2 years ago         45.2MB
registry.aliyuncs.com/google_containers/pause                     3.3                 0184c1613d92        2 years ago         683kB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        2 years ago         683kB
# Run on slave2
[root@slave2 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.7             ff54c88b8ecf        19 months ago       118MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.7             034671b24f0f        19 months ago       122MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.7             22d1a2072ec7        19 months ago       116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.7             38f903b54010        19 months ago       47.3MB
nginx                                                             1.19                f0b8a9a54136        19 months ago       133MB
quay.io/calico/node                                               v3.17.4             4d9399da41dc        20 months ago       165MB
quay.io/calico/cni                                                v3.17.4             f3abd83bc819        20 months ago       128MB
quay.io/calico/kube-controllers                                   v3.17.4             c623a89d3672        20 months ago       52.2MB
cncamp/k8s-dns-node-cache                                         1.17.1              21fc69048bd5        22 months ago       123MB
quay.io/coreos/etcd                                               v3.4.13             d1985d404385        2 years ago         83.8MB
registry.aliyuncs.com/google_containers/pause                     3.3                 0184c1613d92        2 years ago         683kB

Export the images for offline use:

# Run on the master node
docker save -o kube-proxy.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.20.7
docker save -o kube-controller-manager.tar registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.7
docker save -o kube-apiserver.tar registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.7
docker save -o kube-scheduler.tar registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.7
docker save -o nginx.tar nginx:1.19
docker save -o node.tar quay.io/calico/node:v3.17.4
docker save -o cni.tar quay.io/calico/cni:v3.17.4
docker save -o kube-controllers.tar quay.io/calico/kube-controllers:v3.17.4
docker save -o k8s-dns-node-cache.tar cncamp/k8s-dns-node-cache:1.17.1
docker save -o etcd.tar quay.io/coreos/etcd:v3.4.13
docker save -o cluster-proportional-autoscaler-amd64.tar cncamp/cluster-proportional-autoscaler-amd64:1.8.3
docker save -o coredns.tar registry.aliyuncs.com/google_containers/coredns:1.7.0
docker save -o pause_3.3.tar registry.aliyuncs.com/google_containers/pause:3.3
docker save -o pause_3.2.tar registry.aliyuncs.com/google_containers/pause:3.2
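
On an offline machine, these archives can later be imported back into Docker; for example (the directory path is illustrative):

# Run on each offline node after copying the .tar files over
for f in /opt/images/*.tar; do docker load -i "$f"; done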

View the generated files:

# Run on the master node
[root@master ~]# tree Kubespray-2.16.0/
Kubespray-2.16.0/
├── calicoctl
├── cni-plugins-linux-amd64-v0.9.1.tgz
├── images
│   ├── cluster-proportional-autoscaler-amd64.tar
│   ├── cni.tar
│   ├── coredns.tar
│   ├── etcd.tar
│   ├── k8s-dns-node-cache.tar
│   ├── kube-apiserver.tar
│   ├── kube-controller-manager.tar
│   ├── kube-controllers.tar
│   ├── kube-proxy.tar
│   ├── kube-scheduler.tar
│   ├── nginx.tar
│   ├── node.tar
│   ├── pause_3.2.tar
│   └── pause_3.3.tar
├── kubeadm-v1.20.7-amd64
├── kubectl-v1.20.7-amd64
├── kubelet-v1.20.7-amd64
└── rpm
    ├── docker
    │   ├── audit-libs-python-2.8.5-4.el7.x86_64.rpm
    │   ├── b001-libsemanage-python-2.5-14.el7.x86_64.rpm
    │   ├── b002-setools-libs-3.3.8-4.el7.x86_64.rpm
    │   ├── b003-libcgroup-0.41-21.el7.x86_64.rpm
    │   ├── b0041-checkpolicy-2.5-8.el7.x86_64.rpm
    │   ├── b004-python-IPy-0.75-6.el7.noarch.rpm
    │   ├── b005-policycoreutils-python-2.5-34.el7.x86_64.rpm
    │   ├── b006-container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
    │   ├── b007-containerd.io-1.3.9-3.1.el7.x86_64.rpm
    │   ├── d001-docker-ce-cli-19.03.14-3.el7.x86_64.rpm
    │   ├── d002-docker-ce-19.03.14-3.el7.x86_64.rpm
    │   └── d003-libseccomp-2.3.1-4.el7.x86_64.rpm
    └── preinstall
        ├── a001-libseccomp-2.3.1-4.el7.x86_64.rpm
        ├── bash-completion-2.1-8.el7.noarch.rpm
        ├── chrony-3.4-1.el7.x86_64.rpm
        ├── e2fsprogs-1.42.9-19.el7.x86_64.rpm
        ├── ebtables-2.0.10-16.el7.x86_64.rpm
        ├── ipset-7.1-1.el7.x86_64.rpm
        ├── ipvsadm-1.27-8.el7.x86_64.rpm
        ├── rsync-3.1.2-10.el7.x86_64.rpm
        ├── socat-1.7.3.2-2.el7.x86_64.rpm
        ├── unzip-6.0-22.el7_9.x86_64.rpm
        ├── wget-1.14-18.el7_6.1.x86_64.rpm
        └── xfsprogs-4.5.0-22.el7.x86_64.rpm

4 directories, 43 files
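
To move this cache into an offline environment, it can be bundled into a tarball and copied to the target hosts (a sketch; the destination host and path are examples):

# Run on the master node
tar -zcvf kubespray-offline-cache.tar.gz Kubespray-2.16.0/
scp kubespray-offline-cache.tar.gz root@192.168.54.212:/opt/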

2.14 Uninstall the Cluster

Uninstall the cluster:

[root@master kubespray-2.16.0]# ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root reset.yml

2.15 Add Nodes

1. Add the new node's information to inventory/mycluster/hosts.yaml (a sketch of the additions follows the command below)

2. Run the following command:

[root@master kubespray-2.16.0]# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root scale.yml -v -b --private-key=~/.ssh/id_rsa
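
For reference, adding a hypothetical new worker slave3 (the IP address is only an example) means adding it to the existing hosts.yaml, roughly like this:

# Under all.hosts in inventory/mycluster/hosts.yaml
    slave3:
      ansible_host: 192.168.54.214
      ip: 192.168.54.214
      access_ip: 192.168.54.214

# And under all.children.kube-node.hosts
        slave3: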

2.16 Remove a Node

There is no need to modify the hosts.yaml file; just run the following command directly (the node to remove is passed via --extra-vars):

[root@master kubespray-2.16.0]# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root remove-node.yml -v -b --extra-vars "node=slave1"

2.17 Upgrade the Cluster

[root@master kubespray-2.16.0]# ansible-playbook upgrade-cluster.yml -b -i inventory/mycluster/hosts.yaml -e kube_version=v1.25.6

Note that each kubespray release only supports a limited range of Kubernetes versions and upgrades are expected to proceed one minor version at a time, so moving from v1.20.x all the way to v1.25.6 would in practice require upgrading kubespray itself and stepping through intermediate versions.

3. Offline Deployment

Online deployment can fail because of network problems, so the k8s cluster can also be deployed offline.

You can build the offline installation bundle yourself.

Below is an offline deployment example found online.

The kubespray GitHub repository is: https://github.com/kubernetes-sigs/kubespray

The release-2.15 branch is used here; the corresponding major components and OS versions are:

  • kubernetes v1.19.10

  • docker v19.03

  • calico v3.16.9

  • centos 7.9.2009

Download link for the kubespray offline bundle:

https://www.mediafire.com/file/nyifoimng9i6zp5/kubespray_offline.tar.gz/file

After the offline bundle has been downloaded, extract it into the /opt directory:

# Run on the master node
$ tar -zxvf /opt/kubespray_offline.tar.gz -C /opt/

List the files:

# Run on the master node
$ ll /opt/kubespray_offline
total 4
drwxr-xr-x.  4 root root   28 Jul 11 2021 ansible_install
drwxr-xr-x. 15 root root 4096 Jul  8 2021 kubespray
drwxr-xr-x.  4 root root  240 Jul  9 2021 kubespray_cache

The IP addresses of the three machines are 192.168.54.211, 192.168.54.212, and 192.168.54.213.

Set up the Ansible server:

# Run on the master node
yum install /opt/kubespray_offline/ansible_install/rpm/*
pip3 install /opt/kubespray_offline/ansible_install/pip/*

Configure passwordless SSH login to the hosts:

# Run on the master node
ssh-keygen
ssh-copy-id 192.168.54.211
ssh-copy-id 192.168.54.212
ssh-copy-id 192.168.54.213
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2

Configure the Ansible host groups:

# Run on the master node
[root@master ~]# cd /opt/kubespray_offline/kubespray
[root@master kubespray]# declare -a IPS=(192.168.54.211 192.168.54.212 192.168.54.213)
[root@master kubespray]# CONFIG_FILE=inventory/mycluster/hosts.yaml python3.6 contrib/inventory_builder/inventory.py ${IPS[@]}
DEBUG: Adding group all
DEBUG: Adding group kube-master
DEBUG: Adding group kube-node
DEBUG: Adding group etcd
DEBUG: Adding group k8s-cluster
DEBUG: Adding group calico-rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube-master
DEBUG: adding host node2 to group kube-master
DEBUG: adding host node1 to group kube-node
DEBUG: adding host node2 to group kube-node
DEBUG: adding host node3 to group kube-node

The inventory/mycluster/hosts.yaml file is generated automatically; review its contents:

# Run on the master node
[root@master kubespray]# cat inventory/mycluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_host: 192.168.54.211
      ip: 192.168.54.211
      access_ip: 192.168.54.211
    node2:
      ansible_host: 192.168.54.212
      ip: 192.168.54.212
      access_ip: 192.168.54.212
    node3:
      ansible_host: 192.168.54.213
      ip: 192.168.54.213
      access_ip: 192.168.54.213
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

Modify the inventory/mycluster/hosts.yaml file:

# Run on the master node
[root@master kubespray]# vim inventory/mycluster/hosts.yaml
all:
  hosts:
    master:
      ansible_host: 192.168.54.211
      ip: 192.168.54.211
      access_ip: 192.168.54.211
    slave1:
      ansible_host: 192.168.54.212
      ip: 192.168.54.212
      access_ip: 192.168.54.212
    slave2:
      ansible_host: 192.168.54.213
      ip: 192.168.54.213
      access_ip: 192.168.54.213
  children:
    kube-master:
      hosts:
        master:
        slave1:
    kube-node:
      hosts:
        master:
        slave1:
        slave2:
    etcd:
      hosts:
        master:
        slave1:
        slave2:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

Modify the configuration file so it uses the offline packages and images:

# Run on the master node
[root@master kubespray]# vim inventory/mycluster/group_vars/all/all.yml
---
## Directory where etcd data stored
etcd_data_dir: /var/lib/etcd

## Experimental kubeadm etcd deployment mode. Available only for new deployment
etcd_kubeadm_enabled: false

## Directory where the binaries will be installed
bin_dir: /usr/local/bin

## The access_ip variable is used to define how other nodes should access
## the node.  This is used in flannel to allow other flannel nodes to see
## this node for example.  The access_ip is really useful AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but don't know about that address themselves.
# access_ip: 1.1.1.1


## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
# loadbalancer_apiserver:
#   address: 1.2.3.4
#   port: 1234

## Internal loadbalancers for apiservers
# loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
# loadbalancer_apiserver_type: nginx  # valid values "nginx" or "haproxy"

## If the cilium is going to be used in strict mode, we can use the
## localhost connection and not use the external LB. If this parameter is
## not specified, the first node to connect to kubeapi will be used.
# use_localhost_as_kubeapi_loadbalancer: true

## Local loadbalancer should use this port
## And must be set port 6443
loadbalancer_apiserver_port: 6443

## If loadbalancer_apiserver_healthcheck_port variable defined, enables proxy liveness check for nginx.
loadbalancer_apiserver_healthcheck_port: 8081

### OTHER OPTIONAL VARIABLES

## Upstream dns servers
# upstream_dns_servers:
#   - 8.8.8.8
#   - 8.8.4.4

## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', 'oci', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using openstack-client before starting the playbook.
# cloud_provider:

## When cloud_provider is set to 'external', you can set the cloud controller to deploy
## Supported cloud controllers are: 'openstack' and 'vsphere'
## When openstack or vsphere are used make sure to source in the required fields
# external_cloud_provider:

## Set these proxy values in order to update package manager and docker daemon to use proxies
# http_proxy: ""
# https_proxy: ""
# 
## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
# no_proxy: ""

## Some problems may occur when downloading files over https proxy due to ansible bug
## https://github.com/ansible/ansible/issues/32750. Set this variable to False to disable
## SSL validation of get_url module. Note that kubespray will still be performing checksum validation.
# download_validate_certs: False

## If you need exclude all cluster nodes from proxy and other resources, add other resources here.
# additional_no_proxy: ""

## If you need to disable proxying of os package repositories but are still behind an http_proxy set
## skip_http_proxy_on_os_packages to true
## This will cause kubespray not to set proxy environment in /etc/yum.conf for centos and in /etc/apt/apt.conf for debian/ubuntu
## Special information for debian/ubuntu - you have to set the no_proxy variable, then apt package will install from your source of wish
# skip_http_proxy_on_os_packages: false

## Since workers are included in the no_proxy variable by default, docker engine will be restarted on all nodes (all
## pods will restart) when adding or removing workers.  To override this behaviour by only including master nodes in the
## no_proxy variable, set below to true:
no_proxy_exclude_workers: false

## Certificate Management
## This setting determines whether certs are generated via scripts.
## Chose 'none' if you provide your own certificates.
## Option is  "script", "none"
## note: vault is removed
# cert_management: script

## Set to true to allow pre-checks to fail and continue deployment
# ignore_assert_errors: false

## The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
# kube_read_only_port: 10255

## Set true to download and cache container
# download_container: true

## Deploy container engine
# Set false if you want to deploy container engine manually.
# deploy_container_engine: true

## Red Hat Enterprise Linux subscription registration
## Add either RHEL subscription Username/Password or Organization ID/Activation Key combination
## Update RHEL subscription purpose usage, role and SLA if necessary
# rh_subscription_username: ""
# rh_subscription_password: ""
# rh_subscription_org_id: ""
# rh_subscription_activation_key: ""
# rh_subscription_usage: "Development"
# rh_subscription_role: "Red Hat Enterprise Server"
# rh_subscription_sla: "Self-Support"

## Check if access_ip responds to ping. Set false if your firewall blocks ICMP.
# ping_access_ip: true
kube_apiserver_node_port_range: "1-65535"
kube_apiserver_node_port_range_sysctl: false


download_run_once: true
download_localhost: true
download_force_cache: true
download_cache_dir: /opt/kubespray_offline/kubespray_cache # modified

preinstall_cache_rpm: true
docker_cache_rpm: true
download_rpm_localhost: "{{ download_cache_dir }}/rpm" # modified
tmp_cache_dir: /tmp/k8s_cache # modified
tmp_preinstall_rpm: "{{ tmp_cache_dir }}/rpm/preinstall" # modified
tmp_docker_rpm: "{{ tmp_cache_dir }}/rpm/docker" # modified
image_is_cached: true
nodelocaldns_dire_coredns: true

Start deploying k8s:

# Run on the master node
[root@master kubespray]# ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root cluster.yml

The deployment takes roughly half an hour and needs no manual intervention. When it finishes, check the cluster and Pod status:

# Run on the master node
[root@master kubespray]# kubectl get nodes
[root@master kubespray]# kubectl get pods -n kube-system 

Uninstall the cluster:

# Run on the master node
[root@master kubespray]# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root reset.yml
