From Scratch: A Deep Dive into Kubernetes Architecture and the Installation Process


Setting Up the K8s Environment

Table of Contents

  • Setting Up the K8s Environment
    • Cluster Types
    • Installation Methods
    • Environment Planning
    • Cloning the Three VMs
    • System Environment Configuration
    • Building the Cluster
      • Initializing the Cluster (master node only)
      • Configuring Environment Variables (master node only)
      • Joining Worker Nodes to the Cluster (knode1 and knode2)
      • Installing the Calico Network (master node only)

Cluster Types

  • Kubernetes clusters broadly fall into two categories: single-master and multi-master.
    • Single master: one Master node plus multiple Node machines. Simple to set up, but the master is a single point of failure, so it suits test environments.
    • Multi-master (high availability): multiple Master nodes plus multiple Node machines. More involved to set up, but far more resilient, so it suits production environments.


Installation Methods

  • Kubernetes can be deployed in several ways; the mainstream options today are kubeadm, minikube, and binary packages.
  • ① minikube: a tool for quickly standing up a single-node Kubernetes environment.
  • ② kubeadm: a tool for quickly bootstrapping a full Kubernetes cluster (usable in production).
  • ③ Binary packages: download each component's binaries from the official site and install them one by one (recommended for production).

Environment Planning

  • OS version: CentOS Stream 8
  • Installation sources can be downloaded from the Alibaba Cloud open-source mirror site or another mirror
  • The environment uses 3 virtual machines with outbound internet access; the NIC type can be NAT or bridged.
  • This article builds the k8s cluster quickly using a shell script.
Hostname   IP Address        Memory   Disk     CPUs
kmaster    192.168.129.200   8 GB     100 GB   2
knode1     192.168.129.201   8 GB     100 GB   2
knode2     192.168.129.202   8 GB     100 GB   2

You can also size memory and disk to match your own machine's capacity, but each node needs at least two processors.
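
A quick way to confirm each VM meets these requirements once it is up — a minimal sketch using standard CentOS tools, with expected values taken from the table above:

nproc     # expect 2 or more
free -h   # expect roughly 8 GB of total memory
lsblk     # expect a 100 GB disk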

Cloning the Three VMs

Prepare three folders.


Be sure to create a full clone, not a linked clone.


Boot the VMs, then change the hostname and IP address on all three.

Note: when changing the IP address, check what your NIC config file is called; mine is ifcfg-ens160.

After making the changes, shut down and take a snapshot.

# kmaster node
[root@tmp ~]# hostnamectl set-hostname kmaster
[root@kmaster ~]# cd /etc/sysconfig/network-scripts/
[root@kmaster network-scripts]# vi ifcfg-ens160
[root@kmaster network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=none
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.129.200
NETMASK=255.255.255.0
GATEWAY=192.168.129.2
DNS1=192.168.129.2

# knode1 node
[root@tmp ~]# hostnamectl set-hostname knode1
[root@knode1 ~]# cd /etc/sysconfig/network-scripts/
[root@knode1 network-scripts]# vi ifcfg-ens160
[root@knode1 network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=none
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.129.201
NETMASK=255.255.255.0
GATEWAY=192.168.129.2
DNS1=192.168.129.2

# knode2 node
[root@tmp ~]# hostnamectl set-hostname knode2
[root@knode2 ~]# cd /etc/sysconfig/network-scripts/
[root@knode2 network-scripts]# vi ifcfg-ens160
[root@knode2 network-scripts]# cat ifcfg-ens160
TYPE=Ethernet
BOOTPROTO=none
NAME=ens160
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.129.202
NETMASK=255.255.255.0
GATEWAY=192.168.129.2
DNS1=192.168.129.2
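
On CentOS Stream 8, NetworkManager does not automatically pick up edits to ifcfg files; the connection has to be reloaded (a reboot works too). A minimal sketch, assuming the connection is named ens160 as in the files above:

nmcli connection reload      # re-read the ifcfg files
nmcli connection up ens160   # bring the connection up with the new settings
ip addr show ens160          # confirm the static IP is active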

System Environment Configuration

Click here to get the script and its configuration files.

The script assumes the NIC is ens160; if yours is different, change the NIC name in the script.

K8s version: 1.27.0

[root@kmaster ~]# vim Stream8-k8s-v1.27.0.sh
#!/bin/bash
# Install Kubernetes 1.27.0 on CentOS Stream 8
# the number of available CPUs 1 is less than the required 2
# k8s requires at least 2 virtual CPUs per node
# Usage: run this script on all nodes; once every node is configured, copy the command from step 11 and run it on the master node alone to initialize the cluster.
#1 rpm
echo '###00 Checking RPM###'
yum install -y yum-utils vim bash-completion net-tools wget
echo "00 configuration successful ^_^"
#Basic Information
echo '###01 Basic Information###'
hostname=`hostname`
# the NIC is ens160
hostip=$(ifconfig ens160 |grep -w "inet" |awk '{print $2}')
echo 'The Hostname is:'$hostname
echo 'The IPAddress is:'$hostip

#2 /etc/hosts
echo '###02 Checking File:/etc/hosts###'
hosts=$(cat /etc/hosts)
result01=$(echo $hosts |grep -w "${hostname}")
if [[ "$result01" != "" ]]
then
	echo "Configuration passed ^_^"
else
	echo "hostname and ip not set,configuring......"
	echo "$hostip $hostname" >> /etc/hosts
	echo "configuration successful ^_^"
fi
echo "02 configuration successful ^_^"

#3 firewall & selinux
echo '###03 Checking Firewall and SELinux###'
systemctl stop firewalld
systemctl disable firewalld
se01="SELINUX=disabled"
se02=$(cat /etc/selinux/config |grep -w "^SELINUX")
if [[ "$se01" == "$se02" ]]
then
	echo "Configuration passed ^_^"
else
	echo "SELinux Not Closed,configuring......"
	sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
	echo "configuration successful ^_^"
fi
echo "03 configuration successful ^_^"

#4 swap
echo '###04 Checking swap###'
swapoff -a
sed -i "s/^.*swap/#&/g" /etc/fstab
echo "04 configuration successful ^_^"

#5 docker-ce
echo '###05 Checking docker###'
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
echo 'list docker-ce versions'
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce
systemctl start docker 
systemctl enable docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://cc2d8woc.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
echo "05 configuration successful ^_^"

#6 iptables
echo '###06 Checking iptables###'
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
echo "06 configuration successful ^_^"

#7 cgroup(systemd/cgroupfs)
echo '###07 Checking cgroup###'
containerd config default > /etc/containerd/config.toml
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl restart containerd
echo "07 configuration successful ^_^"

#8 kubernetes.repo
echo '###08 Checking repo###'
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
echo "08 configuration successful ^_^"

#9 crictl
echo "Checking crictl"
cat <<EOF > /etc/crictl.yaml 
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 5
debug: false
EOF
echo "09 configuration successful ^_^"

#10 kube1.27.0
echo "Checking kube"
yum install -y kubelet-1.27.0 kubeadm-1.27.0 kubectl-1.27.0 --disableexcludes=kubernetes
systemctl enable --now kubelet
echo "10 configuration successful ^_^"
echo "Congratulations ! The basic configuration has been completed"

#11 Initialize the cluster
# Run cluster initialization on the master node only
# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.0 --pod-network-cidr=10.244.0.0/16

Run the script on each of the three nodes:

[root@kmaster ~]# sh Stream8-k8s-v1.27.0.sh
[root@knode1 ~]# sh Stream8-k8s-v1.27.0.sh
[root@knode2 ~]# sh Stream8-k8s-v1.27.0.sh

# *** kmaster output ***
###00 Checking RPM###
CentOS Stream 8 - AppStream                           
CentOS Stream 8 - BaseOS                             
CentOS Stream 8 - Extras                             
CentOS Stream 8 - Extras common packages             
Dependencies resolved.
=============================================================================================================================================================
 Package                                                                                    Architecture                                                     
=============================================================================================================================================================
Installing:
 bash-completion                  noarch                                                   
 net-tools                        x86_64
............

Installed:
  conntrack-tools-1.4.4-11.el8.x86_64              cri-tools-1.26.0-0.x86_64             kubeadm-1.27.0-0.x86_64            kubectl-1.27.0-0.x86_64          
  libnetfilter_queue-1.0.4-3.el8.x86_64            socat-1.7.4.1-1.el8.x86_64           

Complete!
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
10 configuration successful ^_^
Congratulations ! The basic configuration has been completed

# *** Output on knode1 and knode2 is the same as on kmaster ***
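
Before initializing the cluster, it is worth spot-checking on every node that the script's key changes took effect; a quick sketch (note that the SELinux change in the config file only fully applies after a reboot):

getenforce                                      # Disabled after a reboot; may still show Enforcing before one
free -h | grep -i swap                          # expect Swap to show 0B in use
systemctl is-active docker containerd kubelet   # kubelet may show "activating" until the cluster is initialized
kubeadm version -o short                        # expect v1.27.0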

Building the Cluster

Initializing the Cluster (master node only)

Copy the command from the final section of the script and run it to initialize the cluster.

[root@kmaster ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.27.0
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0719 10:48:35.823181   13745 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd vers
W0719 10:48:51.007564   13745 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is 
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] an
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.100.150 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.100.150 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0719 10:49:09.467378   13745 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd vers
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.059875 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-e
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ddct8j.i2dloykyc0wpwdg3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.129.200:6443 --token ddct8j.i2dloykyc0wpwdg3 \
	--discovery-token-ca-cert-hash sha256:3bdd47846f02bcc9858d2946714341f22b37aaa07dbaa61594f2a0ecce80f4fb
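
The bootstrap token embedded in the join command expires after 24 hours by default; if a node joins later than that, a fresh join command can be printed on the master:

[root@kmaster ~]# kubeadm token create --print-join-command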

Configuring Environment Variables (master node only)

# Run the commands shown in the installation output
[root@kmaster ~]#   mkdir -p $HOME/.kube
[root@kmaster ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kmaster ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kmaster ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
[root@kmaster ~]# source /etc/profile
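
Since the script already installs bash-completion, kubectl tab completion can optionally be enabled on the master as a convenience:

[root@kmaster ~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc
[root@kmaster ~]# source ~/.bashrc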

[root@kmaster ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   2d23h   v1.27.0

Joining Worker Nodes to the Cluster (knode1 and knode2)

Copy the kubeadm join command generated during cluster initialization and run it on each of the two worker nodes.

# knode1 node
[root@knode1 ~]# kubeadm join 192.168.129.200:6443 --token ddct8j.i2dloykyc0wpwdg3 \
	--discovery-token-ca-cert-hash sha256:3bdd47846f02bcc9858d2946714341f22b37aaa07dbaa61594f2a0ecce80f4fb
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# knode2 node
[root@knode2 ~]# kubeadm join 192.168.129.200:6443 --token ddct8j.i2dloykyc0wpwdg3 \
	--discovery-token-ca-cert-hash sha256:3bdd47846f02bcc9858d2946714341f22b37aaa07dbaa61594f2a0ecce80f4fb
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@kmaster ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   2d23h   v1.27.0
knode1    NotReady   <none>          2d23h   v1.27.0
knode2    NotReady   <none>          2d23h   v1.27.0

Installing the Calico Network (master node only)

Before the network add-on is installed, the nodes report NotReady; shortly after installation, the cluster becomes Ready.

Check the cluster status:

[root@kmaster ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   2d23h   v1.27.0
knode1    NotReady   <none>          2d23h   v1.27.0
knode2    NotReady   <none>          2d23h   v1.27.0

Install the Tigera Calico operator, version 3.26.

# The required file can be downloaded from the Baidu Netdisk share
[root@kmaster ~]# kubectl create -f tigera-operator-3-26-1.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
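
Both manifest files used in this section can also be fetched from the Calico GitHub repository instead of the netdisk share (assuming the master has internet access; the local filenames below simply mirror the ones used in this article):

[root@kmaster ~]# curl -Lo tigera-operator-3-26-1.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
[root@kmaster ~]# curl -Lo custom-resources-3-26-1.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml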

Configure custom-resources.yaml.

[root@kmaster ~]# vim custom-resources-3-26-1.yaml
# Change the CIDR of the IP address pool to match the --pod-network-cidr passed to kubeadm init (the provided file has already been edited)
# cidr: 10.244.0.0/16
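
If you fetched the upstream manifest instead, its default pool is 192.168.0.0/16 and needs the same edit; a minimal sketch:

[root@kmaster ~]# sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources-3-26-1.yaml
[root@kmaster ~]# grep cidr custom-resources-3-26-1.yaml   # expect: cidr: 10.244.0.0/16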

# The required file can be downloaded from the Baidu Netdisk share
[root@kmaster ~]# kubectl create -f custom-resources-3-26-1.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

# Watch the calico pod status; once all pods are Running, the cluster state returns to normal
[root@kmaster ~]# watch kubectl get pods -n calico-system 
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-5d6c98ff78-gcj2n   1/1     Running   3 (103m ago)   2d23h
calico-node-cc9ct                          1/1     Running   3 (103m ago)   2d23h
calico-node-v8459                          1/1     Running   3 (103m ago)   2d23h
calico-node-w524w                          1/1     Running   3 (103m ago)   2d23h
calico-typha-bbb96d56-46w2v                1/1     Running   3 (103m ago)   2d23h
calico-typha-bbb96d56-nrxkf                1/1     Running   3 (103m ago)   2d23h
csi-node-driver-4wm4h                      2/2     Running   6 (103m ago)   2d23h
csi-node-driver-dr7hq                      2/2     Running   6 (103m ago)   2d23h
csi-node-driver-fjr77                      2/2     Running   6 (103m ago)   2d23h
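
Instead of watching by hand, you can also block until every node reports Ready (this sketch times out after 5 minutes):

[root@kmaster ~]# kubectl wait --for=condition=Ready node --all --timeout=300s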

Check the cluster status again:

[root@kmaster ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
kmaster   Ready    control-plane   2d23h   v1.27.0
knode1    Ready    <none>          2d23h   v1.27.0
knode2    Ready    <none>          2d23h   v1.27.0

