Cloud-Native Container Orchestration in Practice: Installing Kubernetes and KubeSphere Online on OpenEuler 23.09


Background

In earlier posts I walked through deploying the ruoyi-cloud project to a Kubernetes cluster, covering the gateway, authentication, and system services, with every service deployed from YAML files. That approach is great for understanding how K8S organizes and manages resources and what happens under the hood, but it is unfriendly to team members who are uncomfortable on the command line. So this time we bring in KubeSphere, the container platform open-sourced by QingCloud, to deploy services visually. KubeSphere is a distributed operating system for cloud-native applications built on top of Kubernetes; it is fully open source, supports multi-cloud and multi-cluster management, and provides full-stack IT automation and operations capabilities.

Next, we will use KubeKey to install Kubernetes and KubeSphere in one step. Also, since CentOS 7 reaches end of service in 2024, it is not recommended for real deployments; this walkthrough uses the OpenEuler community innovation release 23.09.

Note: for a production deployment, use a more stable LTS release of the operating system, e.g. OpenEuler 22.03 SP3.

Virtual Machine Resources

Three VMs are used in total: one as the Master node and two as Worker nodes.

Hostname   IP                Role
k1         192.168.44.162    Master node
k2         192.168.44.163    Worker node
k3         192.168.44.164    Worker node
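
KubeKey reaches the hosts by the addresses in its config file, so this step is optional, but mapping the names locally makes day-to-day SSH access and log reading easier. A minimal sketch using the addresses from the table above (run on each VM as needed):

[root@k1 ~]# cat >> /etc/hosts <<EOF
192.168.44.162 k1
192.168.44.163 k2
192.168.44.164 k3
EOF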

The KubeSphere and Kubernetes versions about to be installed are:

  • KubeSphere version: v3.3.2 (pinned explicitly: ./kk create config --with-kubesphere v3.3.2)
  • Kubernetes version: v1.23.10 (as reported by kubectl get node)
[root@k1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE    VERSION
k1     Ready    control-plane,master   3h2m   v1.23.10
k2     Ready    worker                 3h2m   v1.23.10
k3     Ready    worker                 3h2m   v1.23.10

System Environment

[root@k1 ~]# uname -a
Linux k1 6.4.0-10.1.0.20.oe2309.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Sep 25 19:01:14 CST 2023 x86_64 x86_64 x86_64 GNU/Linux
[root@k1 ~]# cat /proc/version
Linux version 6.4.0-10.1.0.20.oe2309.x86_64 (root@dc-64g.compass-ci) (gcc_old (GCC) 12.3.1 (openEuler 12.3.1-16.oe2309), GNU ld (GNU Binutils) 2.40) #1 SMP PREEMPT_DYNAMIC Mon Sep 25 19:01:14 CST 2023

Download the operating system image here: https://www.openeuler.org/zh/download/?version=openEuler%2023.09
I am using a minimal installation of OpenEuler, which ships without archive/compression tools, so install tar first (yum install -y tar); we will need it right away.
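
For reference, the same command in transcript form; if k2 and k3 were also installed from the minimal image, it does no harm to run it there as well:

[root@k1 ~]# yum install -y tar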

Download and Install KubeKey

KubeKey is an open-source, lightweight tool for deploying Kubernetes clusters. It offers a flexible, fast, and convenient way to install Kubernetes/K3s only, or Kubernetes/K3s and KubeSphere together, along with other cloud-native add-ons. It is also an effective tool for scaling and upgrading clusters.

# Download and install KubeKey
[root@euler ~]# curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -

Downloading kubekey v3.0.7 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.7/kubekey-v3.0.7-linux-amd64.tar.gz ...

Kubekey v3.0.7 Download Complete!

# View the help documentation
[root@euler ~]# ./kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add         Add nodes to kubernetes cluster
  alpha       Commands for features in alpha
  artifact    Manage a KubeKey offline installation package
  certs       cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete node or cluster
  help        Help about any command
  init        Initializes the installation environment
  plugin      Provides utilities for interacting with plugins
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     print the client version information

Flags:
  -h, --help   help for kk

Use "kk [command] --help" for more information about a command.

Configuration Preparation

# Set the hostnames of the three VMs
[root@k1 ~]# hostnamectl set-hostname k1
[root@k2 ~]# hostnamectl set-hostname k2
[root@k3 ~]# hostnamectl set-hostname k3

# Create the configuration file
[root@k1 ~]# ./kk create config --with-kubesphere v3.3.2
Generate KubeKey config file successfully

# Edit the configuration file to match your environment
[root@k1 ~]# vi config-sample.yaml 
# Modified: the host list, the control-plane/etcd node, and the worker nodes
spec:
  hosts:
  - {name: k1, address: 192.168.44.162, internalAddress: 192.168.44.162, user: root, password: "CloudNative"}
  - {name: k2, address: 192.168.44.163, internalAddress: 192.168.44.163, user: root, password: "CloudNative"}
  - {name: k3, address: 192.168.44.164, internalAddress: 192.168.44.164, user: root, password: "CloudNative"}
  roleGroups:
    etcd:
    - k1
    control-plane:
    - k1
    worker:
    - k2
    - k3
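
As an aside, the same create config command also accepts an explicit Kubernetes version. A sketch assuming KubeKey v3.0.7's --with-kubernetes flag; v1.23.10 is what this release defaults to anyway, as the install log below confirms:

# Pin both component versions when generating the config
[root@k1 ~]# ./kk create config --with-kubernetes v1.23.10 --with-kubesphere v3.3.2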

Install the K8S Cluster and KubeSphere

[root@k1 ~]# ./kk create cluster -f config-sample.yaml

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

10:51:08 CST [GreetingsModule] Greetings
10:51:09 CST message: [k3]
Greetings, KubeKey!
10:51:09 CST message: [k1]
Greetings, KubeKey!
10:51:09 CST message: [k2]
Greetings, KubeKey!
10:51:09 CST success: [k3]
10:51:09 CST success: [k1]
10:51:09 CST success: [k2]
10:51:09 CST [NodePreCheckModule] A pre-check on nodes
10:51:15 CST success: [k1]
10:51:15 CST success: [k3]
10:51:15 CST success: [k2]
10:51:15 CST [ConfirmModule] Display confirmation form
+------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| k1   | y    | y    | y       | y        |       | y     |         |           |        |        |            |            |             |                  | CST 10:51:15 |
| k2   | y    | y    | y       | y        |       | y     |         |           |        |        |            |            |             |                  | CST 10:51:14 |
| k3   | y    | y    | y       | y        |       | y     |         |           |        |        |            |            |             |                  | CST 10:51:15 |
+------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
10:51:15 CST [ERRO] k1: conntrack is required.
10:51:15 CST [ERRO] k1: socat is required.
10:51:15 CST [ERRO] k2: conntrack is required.
10:51:15 CST [ERRO] k2: socat is required.
10:51:15 CST [ERRO] k3: conntrack is required.
10:51:15 CST [ERRO] k3: socat is required.

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

The pre-check above failed: the operating system is missing the conntrack and socat dependencies, so let's install them.

# Install on all three VMs
[root@k1 ~]# yum install -y conntrack socat
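
# A minimal sketch, assuming root SSH access from k1 to the workers
# (KubeKey connects the same way); otherwise run yum on k2/k3 by hand:
[root@k1 ~]# for h in k2 k3; do ssh root@$h "yum install -y conntrack socat"; done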

# Re-run the installation
[root@k1 ~]# ./kk create cluster -f config-sample.yaml

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

11:17:17 CST [GreetingsModule] Greetings
11:17:17 CST message: [k3]
Greetings, KubeKey!
11:17:18 CST message: [k1]
Greetings, KubeKey!
11:17:18 CST message: [k2]
Greetings, KubeKey!
11:17:18 CST success: [k3]
11:17:18 CST success: [k1]
11:17:18 CST success: [k2]
11:17:18 CST [NodePreCheckModule] A pre-check on nodes
11:17:24 CST success: [k3]
11:17:24 CST success: [k2]
11:17:24 CST success: [k1]
11:17:24 CST [ConfirmModule] Display confirmation form
+------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| k1   | y    | y    | y       | y        | y     | y     |         | y         |        |        | v1.4.9     |            |             |                  | CST 11:17:24 |
| k2   | y    | y    | y       | y        | y     | y     |         | y         |        |        |            |            |             |                  | CST 11:17:24 |
| k3   | y    | y    | y       | y        | y     | y     |         | y         |        |        |            |            |             |                  | CST 11:17:24 |
+------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
11:17:39 CST success: [LocalHost]
11:17:39 CST [NodeBinariesModule] Download installation binaries
11:17:39 CST message: [localhost]
downloading amd64 kubeadm v1.23.10 ...
11:17:40 CST message: [localhost]
kubeadm is existed
11:17:40 CST message: [localhost]
downloading amd64 kubelet v1.23.10 ...
11:17:41 CST message: [localhost]
kubelet is existed
11:17:41 CST message: [localhost]
downloading amd64 kubectl v1.23.10 ...
11:17:41 CST message: [localhost]
kubectl is existed
11:17:41 CST message: [localhost]
downloading amd64 helm v3.9.0 ...
11:17:41 CST message: [localhost]
helm is existed
11:17:41 CST message: [localhost]
downloading amd64 kubecni v0.9.1 ...
11:17:42 CST message: [localhost]
kubecni is existed
11:17:42 CST message: [localhost]
downloading amd64 crictl v1.24.0 ...
11:17:42 CST message: [localhost]
crictl is existed
11:17:42 CST message: [localhost]
downloading amd64 etcd v3.4.13 ...
11:17:42 CST message: [localhost]
etcd is existed
11:17:42 CST message: [localhost]
downloading amd64 docker 20.10.8 ...
11:17:42 CST message: [localhost]
docker is existed
11:17:42 CST success: [LocalHost]
11:17:42 CST [ConfigureOSModule] Get OS release
11:17:43 CST success: [k3]
11:17:43 CST success: [k1]
11:17:43 CST success: [k2]
11:17:43 CST [ConfigureOSModule] Prepare to init OS
11:17:51 CST success: [k3]
11:17:51 CST success: [k2]
11:17:51 CST success: [k1]
11:17:51 CST [ConfigureOSModule] Generate init os script
11:17:54 CST success: [k1]
11:17:54 CST success: [k3]
11:17:54 CST success: [k2]
11:17:54 CST [ConfigureOSModule] Exec init os script
11:17:55 CST stdout: [k3]
Permissive
kernel.sysrq = 0
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
kernel.dmesg_restrict = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
11:17:55 CST stdout: [k2]
Permissive
kernel.sysrq = 0
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
kernel.dmesg_restrict = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
11:17:55 CST stdout: [k1]
Permissive
kernel.sysrq = 0
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
kernel.dmesg_restrict = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
11:17:55 CST success: [k3]
11:17:55 CST success: [k2]
11:17:55 CST success: [k1]
11:17:55 CST [ConfigureOSModule] configure the ntp server for each node
11:17:55 CST skipped: [k3]
11:17:55 CST skipped: [k2]
11:17:55 CST skipped: [k1]
11:17:55 CST [KubernetesStatusModule] Get kubernetes cluster status
11:17:56 CST success: [k1]
11:17:56 CST [InstallContainerModule] Sync docker binaries
11:18:06 CST success: [k1]
11:18:06 CST success: [k3]
11:18:06 CST success: [k2]
11:18:06 CST [InstallContainerModule] Generate docker service
11:18:09 CST success: [k1]
11:18:09 CST success: [k2]
11:18:09 CST success: [k3]
11:18:09 CST [InstallContainerModule] Generate docker config
11:18:11 CST success: [k1]
11:18:11 CST success: [k3]
11:18:11 CST success: [k2]
11:18:11 CST [InstallContainerModule] Enable docker
11:18:15 CST success: [k1]
11:18:15 CST success: [k2]
11:18:15 CST success: [k3]
11:18:15 CST [InstallContainerModule] Add auths to container runtime
11:18:15 CST skipped: [k1]
11:18:15 CST skipped: [k2]
11:18:15 CST skipped: [k3]
11:18:15 CST [PullModule] Start to pull images on all nodes
11:18:15 CST message: [k1]
downloading image: kubesphere/pause:3.6
11:18:15 CST message: [k3]
downloading image: kubesphere/pause:3.6
11:18:15 CST message: [k2]
downloading image: kubesphere/pause:3.6
11:18:25 CST message: [k3]
downloading image: kubesphere/kube-proxy:v1.23.10
11:18:25 CST message: [k2]
downloading image: kubesphere/kube-proxy:v1.23.10
11:18:26 CST message: [k1]
downloading image: kubesphere/kube-apiserver:v1.23.10
11:19:25 CST message: [k2]
downloading image: coredns/coredns:1.8.6
11:19:38 CST message: [k1]
downloading image: kubesphere/kube-controller-manager:v1.23.10
11:19:40 CST message: [k2]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
11:19:54 CST message: [k3]
downloading image: coredns/coredns:1.8.6
11:20:06 CST message: [k1]
downloading image: kubesphere/kube-scheduler:v1.23.10
11:20:07 CST message: [k2]
downloading image: calico/kube-controllers:v3.23.2
11:20:12 CST message: [k3]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
11:20:21 CST message: [k1]
downloading image: kubesphere/kube-proxy:v1.23.10
11:20:42 CST message: [k1]
downloading image: coredns/coredns:1.8.6
11:20:46 CST message: [k2]
downloading image: calico/cni:v3.23.2
11:20:49 CST message: [k3]
downloading image: calico/kube-controllers:v3.23.2
11:20:58 CST message: [k1]
downloading image: kubesphere/k8s-dns-node-cache:1.15.12
11:21:18 CST message: [k3]
downloading image: calico/cni:v3.23.2
11:21:25 CST message: [k1]
downloading image: calico/kube-controllers:v3.23.2
11:21:50 CST message: [k2]
downloading image: calico/node:v3.23.2
11:21:57 CST message: [k1]
downloading image: calico/cni:v3.23.2
11:22:36 CST message: [k3]
downloading image: calico/node:v3.23.2
11:23:01 CST message: [k1]
downloading image: calico/node:v3.23.2
11:23:04 CST message: [k2]
downloading image: calico/pod2daemon-flexvol:v3.23.2
11:23:45 CST message: [k3]
downloading image: calico/pod2daemon-flexvol:v3.23.2
11:24:21 CST message: [k1]
downloading image: calico/pod2daemon-flexvol:v3.23.2
11:24:40 CST success: [k2]
11:24:40 CST success: [k3]
11:24:40 CST success: [k1]
11:24:40 CST [ETCDPreCheckModule] Get etcd status
11:24:41 CST success: [k1]
11:24:41 CST [CertsModule] Fetch etcd certs
11:24:41 CST success: [k1]
11:24:41 CST [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-k1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local k1 k2 k3 lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.44.162 192.168.44.163 192.168.44.164]
[certs] member-k1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local k1 k2 k3 lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.44.162 192.168.44.163 192.168.44.164]
[certs] node-k1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local k1 k2 k3 lb.kubesphere.local localhost] and IPs [127.0.0.1 ::1 192.168.44.162 192.168.44.163 192.168.44.164]
11:24:41 CST success: [LocalHost]
11:24:41 CST [CertsModule] Synchronize certs file
11:24:51 CST success: [k1]
11:24:51 CST [CertsModule] Synchronize certs file to master
11:24:51 CST skipped: [k1]
11:24:51 CST [InstallETCDBinaryModule] Install etcd using binary
11:24:53 CST success: [k1]
11:24:53 CST [InstallETCDBinaryModule] Generate etcd service
11:24:54 CST success: [k1]
11:24:54 CST [InstallETCDBinaryModule] Generate access address
11:24:54 CST success: [k1]
11:24:54 CST [ETCDConfigureModule] Health check on exist etcd
11:24:54 CST skipped: [k1]
11:24:54 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
11:24:56 CST success: [k1]
11:24:56 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
11:24:57 CST success: [k1]
11:24:57 CST [ETCDConfigureModule] Restart etcd
11:24:58 CST stdout: [k1]
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
11:24:58 CST success: [k1]
11:24:58 CST [ETCDConfigureModule] Health check on all etcd
11:24:59 CST success: [k1]
11:24:59 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
11:25:00 CST success: [k1]
11:25:00 CST [ETCDConfigureModule] Health check on all etcd
11:25:00 CST success: [k1]
11:25:00 CST [ETCDBackupModule] Backup etcd data regularly
11:25:02 CST success: [k1]
11:25:02 CST [ETCDBackupModule] Generate backup ETCD service
11:25:03 CST success: [k1]
11:25:03 CST [ETCDBackupModule] Generate backup ETCD timer
11:25:04 CST success: [k1]
11:25:04 CST [ETCDBackupModule] Enable backup etcd service
11:25:05 CST success: [k1]
11:25:05 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
11:25:52 CST success: [k1]
11:25:52 CST success: [k3]
11:25:52 CST success: [k2]
11:25:52 CST [InstallKubeBinariesModule] Synchronize kubelet
11:25:52 CST success: [k1]
11:25:52 CST success: [k3]
11:25:52 CST success: [k2]
11:25:52 CST [InstallKubeBinariesModule] Generate kubelet service
11:25:54 CST success: [k2]
11:25:54 CST success: [k3]
11:25:54 CST success: [k1]
11:25:54 CST [InstallKubeBinariesModule] Enable kubelet service
11:25:56 CST success: [k1]
11:25:56 CST success: [k2]
11:25:56 CST success: [k3]
11:25:56 CST [InstallKubeBinariesModule] Generate kubelet env
11:25:58 CST success: [k1]
11:25:58 CST success: [k3]
11:25:58 CST success: [k2]
11:25:58 CST [InitKubernetesModule] Generate kubeadm config
11:26:00 CST success: [k1]
11:26:00 CST [InitKubernetesModule] Init cluster using kubeadm
11:26:13 CST stdout: [k1]
W0129 11:26:00.970957   33502 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.23.10
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k1 k1.cluster.local k2 k2.cluster.local k3 k3.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 192.168.44.162 127.0.0.1 192.168.44.163 192.168.44.164]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.004023 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k1 as control-plane by adding the taints [node-role.kubernetes.io/master: NoSchedule]
[bootstrap-token] Using token: 27cbyk.yln96f9a3mdrupaa
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token 27cbyk.yln96f9a3mdrupaa \
        --discovery-token-ca-cert-hash sha256:694e4c50f1efbea5b14425c4d2face12c19ded118cbfc7a930c44d713f740c4f \
        --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token 27cbyk.yln96f9a3mdrupaa \
        --discovery-token-ca-cert-hash sha256:694e4c50f1efbea5b14425c4d2face12c19ded118cbfc7a930c44d713f740c4f

11:26:13 CST success: [k1]
11:26:13 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
11:26:15 CST success: [k1]
11:26:15 CST [InitKubernetesModule] Remove master taint
11:26:15 CST skipped: [k1]
11:26:15 CST [InitKubernetesModule] Add worker label
11:26:15 CST skipped: [k1]
11:26:15 CST [ClusterDNSModule] Generate coredns service
11:26:17 CST success: [k1]
11:26:17 CST [ClusterDNSModule] Override coredns service
11:26:18 CST stdout: [k1]
service "kube-dns" deleted
11:26:21 CST stdout: [k1]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
11:26:21 CST success: [k1]
11:26:21 CST [ClusterDNSModule] Generate nodelocaldns
11:26:23 CST success: [k1]
11:26:23 CST [ClusterDNSModule] Deploy nodelocaldns
11:26:23 CST stdout: [k1]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
11:26:23 CST success: [k1]
11:26:23 CST [ClusterDNSModule] Generate nodelocaldns configmap
11:26:25 CST success: [k1]
11:26:25 CST [ClusterDNSModule] Apply nodelocaldns configmap
11:26:26 CST stdout: [k1]
configmap/nodelocaldns created
11:26:26 CST success: [k1]
11:26:26 CST [KubernetesStatusModule] Get kubernetes cluster status
11:26:27 CST stdout: [k1]
v1.23.10
11:26:27 CST stdout: [k1]
k1    v1.23.10   [map[address:192.168.44.162 type: InternalIP] map[address:k1 type: Hostname]]
11:26:32 CST stdout: [k1]
I0129 11:26:30.356155   42023 version.go:255] remote version is much newer: v1.29.1; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
1e0ba137d117b90238a6ac1c63d6da2483d5fecb6668f14ccd9d4995cdece40a
11:26:33 CST stdout: [k1]
secret/kubeadm-certs patched
11:26:33 CST stdout: [k1]
secret/kubeadm-certs patched
11:26:33 CST stdout: [k1]
secret/kubeadm-certs patched
11:26:34 CST stdout: [k1]
g49jkt.ajjqolknkk5sku1v
11:26:34 CST success: [k1]
11:26:34 CST [JoinNodesModule] Generate kubeadm config
11:26:39 CST skipped: [k1]
11:26:39 CST success: [k3]
11:26:39 CST success: [k2]
11:26:39 CST [JoinNodesModule] Join control-plane node
11:26:39 CST skipped: [k1]
11:26:39 CST [JoinNodesModule] Join worker node
11:26:47 CST stdout: [k3]
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0129 11:26:40.419569   25214 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
11:26:47 CST stdout: [k2]
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0129 11:26:40.228777   25530 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
11:26:47 CST success: [k3]
11:26:47 CST success: [k2]
11:26:47 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
11:26:47 CST skipped: [k1]
11:26:47 CST [JoinNodesModule] Remove master taint
11:26:47 CST skipped: [k1]
11:26:47 CST [JoinNodesModule] Add worker label to master
11:26:47 CST skipped: [k1]
11:26:47 CST [JoinNodesModule] Synchronize kube config to worker
11:26:50 CST success: [k2]
11:26:50 CST success: [k3]
11:26:50 CST [JoinNodesModule] Add worker label to worker
11:26:51 CST stdout: [k3]
node/k3 labeled
11:26:51 CST stdout: [k2]
node/k2 labeled
11:26:51 CST success: [k3]
11:26:51 CST success: [k2]
11:26:51 CST [DeployNetworkPluginModule] Generate calico
11:26:53 CST success: [k1]
11:26:53 CST [DeployNetworkPluginModule] Deploy calico
11:26:54 CST stdout: [k1]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
11:26:54 CST success: [k1]
11:26:54 CST [ConfigureKubernetesModule] Configure kubernetes
11:26:54 CST success: [k3]
11:26:54 CST success: [k1]
11:26:54 CST success: [k2]
11:26:54 CST [ChownModule] Chown user $HOME/.kube dir
11:26:56 CST success: [k2]
11:26:56 CST success: [k3]
11:26:56 CST success: [k1]
11:26:56 CST [AutoRenewCertsModule] Generate k8s certs renew script
11:27:00 CST success: [k1]
11:27:00 CST [AutoRenewCertsModule] Generate k8s certs renew service
11:27:03 CST success: [k1]
11:27:03 CST [AutoRenewCertsModule] Generate k8s certs renew timer
11:27:09 CST success: [k1]
11:27:09 CST [AutoRenewCertsModule] Enable k8s certs renew service
11:27:11 CST success: [k1]
11:27:11 CST [SaveKubeConfigModule] Save kube config as a configmap
11:27:11 CST success: [LocalHost]
11:27:11 CST [AddonsModule] Install addons
11:27:11 CST success: [LocalHost]
11:27:11 CST [DeployStorageClassModule] Generate OpenEBS manifest
11:27:16 CST success: [k1]
11:27:16 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
11:27:19 CST success: [k1]
11:27:19 CST [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
11:27:22 CST success: [k1]
11:27:22 CST [DeployKubeSphereModule] Apply ks-installer
11:27:22 CST stdout: [k1]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
11:27:22 CST success: [k1]
11:27:22 CST [DeployKubeSphereModule] Add config to ks-installer manifests
11:27:23 CST success: [k1]
11:27:23 CST [DeployKubeSphereModule] Create the kubesphere namespace
11:27:25 CST success: [k1]
11:27:25 CST [DeployKubeSphereModule] Setup ks-installer config
11:27:26 CST stdout: [k1]
secret/kube-etcd-client-certs created
11:27:28 CST success: [k1]
11:27:28 CST [DeployKubeSphereModule] Apply ks-installer
11:27:29 CST stdout: [k1]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
11:27:29 CST success: [k1]
#####################################################

###              Welcome to KubeSphere!           ###

#####################################################

Console: http://192.168.44.162:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.

  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-01-29 11:40:43
#####################################################
11:40:47 CST success: [k1]
11:40:47 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

How long this takes depends on your network and hardware; it took me a bit over ten minutes. When you see the following output, the K8S cluster and KubeSphere have been installed successfully.

#####################################################

###              Welcome to KubeSphere!           ###

#####################################################

Console: http://192.168.44.162:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.

  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-01-29 11:40:43
#####################################################
11:40:47 CST success: [k1]
11:40:47 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Verify the Cluster

# Check which pods are running
[root@k1 ~]# kubectl get pod -A
NAMESPACE                      NAME                                               READY   STATUS    RESTARTS   AGE
kube-system                    calico-kube-controllers-84897d7cdf-grnr9           1/1     Running   0          43m
kube-system                    calico-node-8b6c7                                  1/1     Running   0          43m
kube-system                    calico-node-llb8n                                  1/1     Running   0          43m
kube-system                    calico-node-pmz75                                  1/1     Running   0          43m
kube-system                    coredns-b7c47bcdc-2cz5g                            1/1     Running   0          43m
kube-system                    coredns-b7c47bcdc-v7lnx                            1/1     Running   0          43m
kube-system                    kube-apiserver-k1                                  1/1     Running   0          44m
kube-system                    kube-controller-manager-k1                         1/1     Running   0          44m
kube-system                    kube-proxy-n7p95                                   1/1     Running   0          43m
kube-system                    kube-proxy-n9dgz                                   1/1     Running   0          43m
kube-system                    kube-proxy-p2hkx                                   1/1     Running   0          43m
kube-system                    kube-scheduler-k1                                  1/1     Running   0          44m
kube-system                    nodelocaldns-7qpwq                                 1/1     Running   0          43m
kube-system                    nodelocaldns-qq8q5                                 1/1     Running   0          43m
kube-system                    nodelocaldns-sg52g                                 1/1     Running   0          43m
kube-system                    openebs-localpv-provisioner-858c4bc894-9hsgs       1/1     Running   0          42m
kube-system                    snapshot-controller-0                              1/1     Running   0          40m
kubesphere-controls-system     default-http-backend-696d6bf54f-2l6sf              1/1     Running   0          37m
kubesphere-controls-system     kubectl-admin-b49cf5585-zm5vh                      1/1     Running   0          30m
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running   0          33m
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running   0          33m
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running   0          33m
kubesphere-monitoring-system   kube-state-metrics-6c4bdb8d9c-jv9mr                3/3     Running   0          34m
kubesphere-monitoring-system   node-exporter-8zqk2                                2/2     Running   0          34m
kubesphere-monitoring-system   node-exporter-lhlgj                                2/2     Running   0          34m
kubesphere-monitoring-system   node-exporter-t65lm                                2/2     Running   0          34m
kubesphere-monitoring-system   notification-manager-deployment-7dd45b5b7d-llc8p   2/2     Running   0          30m
kubesphere-monitoring-system   notification-manager-deployment-7dd45b5b7d-mhfvl   2/2     Running   0          30m
kubesphere-monitoring-system   notification-manager-operator-8598775b-d68jj       2/2     Running   0          33m
kubesphere-monitoring-system   prometheus-k8s-0                                   2/2     Running   0          33m
kubesphere-monitoring-system   prometheus-k8s-1                                   2/2     Running   0          33m
kubesphere-monitoring-system   prometheus-operator-57c78bd7fb-kj2qg               2/2     Running   0          34m
kubesphere-system              ks-apiserver-b7ddc4f5c-mx7tk                       1/1     Running   0          37m
kubesphere-system              ks-console-7c48dd4c9f-ndhtl                        1/1     Running   0          37m
kubesphere-system              ks-controller-manager-854ff655d4-mjjld             1/1     Running   0          37m
kubesphere-system              ks-installer-6644975f87-5vxjx                      1/1     Running   0          42m

# Check the status of all nodes
[root@k1 ~]# kubectl get node
NAME   STATUS   ROLES                  AGE    VERSION
k1     Ready    control-plane,master   3h2m   v1.23.10
k2     Ready    worker                 3h2m   v1.23.10
k3     Ready    worker                 3h2m   v1.23.10

Because KubeSphere exposes the console on a node port, we can verify it directly in a browser:
Console: http://192.168.44.162:30880
Account: admin
Password: P@88w0rd
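
The console address and port come from a Kubernetes Service, so they can also be confirmed from the cluster itself. A quick check, assuming the default service name created by ks-installer; the PORT(S) column should show 30880 as the NodePort:

[root@k1 ~]# kubectl get svc -n kubesphere-system ks-console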

  • KubeSphere login page

2024-06-30-KubeSphereLogin.jpg

  • KubeSphere platform information

2024-06-30-PlatformInfo.jpg

  • KubeSphere resource overview

2024-06-30-Resource.jpg

  • Kubernetes cluster status

2024-06-30-ClusterStatus.jpg

  • Kubernetes cluster nodes ready

2024-06-30-ClusterNode.jpg

Possible Issues

Failed to connect to storage.googleapis.com port 443 after 2006 ms: Connection refused

When running ./kk create cluster -f config-sample.yaml, you may hit the error above; the message also hints that switching to the China mirror via export KKZONE=cn resolves the network issue.
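
A minimal sketch of the workaround, reusing the config file from above; note that the variable must be exported in the same shell session that runs kk:

# Switch KubeKey's downloads to the CN mirror, then retry
[root@k1 ~]# export KKZONE=cn
[root@k1 ~]# ./kk create cluster -f config-sample.yaml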

Summary

This post showed how to stand up a Kubernetes cluster quickly with KubeKey, the official tool from the KubeSphere project. KubeSphere provides an ops-friendly, wizard-style interface that helps teams quickly build a powerful, feature-rich container cloud platform.

KubeSphere shields users from the complex low-level details of the underlying infrastructure, helping enterprises seamlessly deploy, update, migrate, and manage their existing containerized applications on all kinds of infrastructure. In this way, KubeSphere lets developers focus on application development, while giving operations teams enterprise-grade observability and troubleshooting, unified monitoring and log querying, storage and network management, and easy-to-use CI/CD pipelines to accelerate DevOps automation and delivery.


If you have any questions or any bugs are found, please feel free to contact me.

Your comments and suggestions are welcome!
