K8s 1.20.15 Deployment (3 masters + 2 nodes)


1. Preparation

Prepare five CentOS 7 virtual machines (2 CPU cores and 2 GB RAM each, with a 20 GB disk; kubeadm requires at least 2 cores or it will report an error). The machines are allocated as shown in the table below:

IP             Hostname         Role
10.10.10.11    k8s-master11     master
10.10.10.12    k8s-master12     master
10.10.10.13    k8s-master13     master
10.10.10.21    k8s-node01       node
10.10.10.22    k8s-node02       node
10.10.10.200   k8s-master-lb    load balancer (VIP)

1.1 Basic configuration

Configure /etc/hosts on all five machines:

vim /etc/hosts

10.10.10.11 jx-nginx-11 k8s-master11
10.10.10.12 jx-nginx-12 k8s-master12
10.10.10.13 jx-nginx-13 k8s-master13
10.10.10.21 jx-nginx-21 k8s-node01
10.10.10.22 jx-nginx-22 k8s-node02
10.10.10.200 k8s-master-lb


1.2 Yum repository configuration

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

1.3 Install required tools

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

1.4 Disable the firewall, SELinux, dnsmasq, and swap on all nodes

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.5 Disable the swap partition


swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab


1.6 Install ntpdate

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

Synchronize time on all nodes. The time sync configuration is as follows:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

## Add to crontab
crontab -e
*/5 * * * * ntpdate time2.aliyun.com

Configure resource limits on all nodes:

ulimit -SHn 65535

vim /etc/security/limits.conf
# Append the following at the end
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Configure passwordless SSH login from the Master11 node to the other nodes:

ssh-keygen -t rsa
for i in k8s-master11 k8s-master12 k8s-master13 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
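A quick optional check that key-based login works to every node (each command should print the remote hostname without prompting for a password):

for i in k8s-master11 k8s-master12 k8s-master13 k8s-node01 k8s-node02;do ssh $i hostname;done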

Upgrade the kernel on all machines

cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm


cd /root && yum localinstall -y kernel-ml*
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

## Reboot for the new kernel to take effect
yum update -y  && reboot
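After the machines come back up, it is worth confirming that the new kernel is actually the one running:

uname -r
## should report 4.19.12-1.el7.elrepo.x86_64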

Install ipvsadm on all nodes:

yum install ipvsadm ipset sysstat conntrack libseccomp -y
vim /etc/modules-load.d/ipvs.conf 
## Add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

## Load the kernel modules now and at boot
systemctl enable --now systemd-modules-load.service

Enable the kernel parameters required by a Kubernetes cluster; configure these on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system
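One caveat: the net.bridge.* keys above only exist while the br_netfilter module is loaded, and that module is not in the ipvs.conf list above. If sysctl --system complains that net.bridge.bridge-nf-call-iptables cannot be found, load the module, make it persistent (the file name below is just a convention), and re-run sysctl:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system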

Component installation

Install Docker on all nodes

yum install docker-ce-19.03.* -y
## Start now and enable at boot
systemctl daemon-reload && systemctl enable --now docker

Install kubeadm, kubelet, and kubectl on all nodes

yum install kubeadm-1.20* kubelet-1.20* kubectl-1.20* -y

The default pause image is pulled from gcr.io, which may be unreachable from inside China, so configure kubelet to use Aliyun's pause image instead:

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
systemctl daemon-reload
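kubelet can also be enabled at this point (an assumption based on common kubeadm practice rather than something shown above); it will restart in a loop until kubeadm init or kubeadm join generates its configuration later, which is expected:

systemctl enable --now kubelet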

High-availability component installation

## Install HAProxy and KeepAlived on all master nodes via yum:
yum install keepalived haproxy -y


vim /etc/haproxy/haproxy.cfg 
## Remove the existing content, add the following, and replace the IPs with your own
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master11	10.10.10.11:6443  check
  server k8s-master12	10.10.10.12:6443  check
  server k8s-master13	10.10.10.13:6443  check

Configure KeepAlived on all master nodes.

The configuration is not identical on every node, so pay attention to each node's own IP and to the network interface name (the interface parameter).
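The per-node configuration below goes into the KeepAlived configuration file (on CentOS the keepalived package's default path is /etc/keepalived/keepalived.conf):

vim /etc/keepalived/keepalived.conf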

Configuration on the Master11 node:

! Configuration File for keepalived
global_defs {
    router_id k8s-master11
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 10.10.10.11
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.10.10.200
    }
#    track_script {
#       chk_apiserver
#    }
}

Configuration on the Master12 node:

! Configuration File for keepalived
global_defs {
    router_id k8s-master12
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 10.10.10.12
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.10.10.200
    }
#    track_script {
#       chk_apiserver
#    }
}

Configuration on the Master13 node:

! Configuration File for keepalived
global_defs {
    router_id k8s-master13
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 10.10.10.13
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.10.10.200
    }
#    track_script {
#       chk_apiserver
#    }
}

Configure the KeepAlived health-check script:

vim /etc/keepalived/check_apiserver.sh
## Add the following
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
## Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
## Start haproxy and keepalived
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
### Test the VIP
ping 10.10.10.200
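Beyond pinging the VIP, the HAProxy listeners can be checked as well. Before kubeadm init the 16443 backend has no healthy servers to forward to, so only verify that the ports are listening and that the monitor URI answers (the curl target assumes the VIP currently sits on a node running haproxy, as configured above):

ss -lntp | grep -E '16443|33305'
curl -s -o /dev/null -w '%{http_code}\n' http://10.10.10.200:33305/monitor
## 200 means haproxy is serving the monitor frontend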

Cluster initialization

Create the following YAML file on all three master nodes:

vim /root/kubeadm-config.yaml
## Add the following
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.10.11
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master11
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.10.10.200
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.10.10.200:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.15
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Pre-pull the images on all master nodes to save time during initialization:

kubeadm config images pull --config /root/kubeadm-config.yaml 
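Optionally confirm that the images are now present locally; they should all come from the Aliyun mirror configured as imageRepository above:

docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers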

Initialize on the Master11 node. Initialization generates the certificates and configuration files under /etc/kubernetes; afterwards the other master nodes simply join Master11:

kubeadm init --config /root/kubeadm-config.yaml  --upload-certs

After a successful initialization, record the token values from the output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by running the following command on each as root:

  kubeadm join 10.10.10.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:2eb41cbafe1464ba85f617cfa60424540c0b2d5ff121e30946b10ec9e518a5a8 \
    --control-plane --certificate-key 505ca58be261429e1495681c3aaafc0396a06e522d131afa83390bd5bfc1629b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:2eb41cbafe1464ba85f617cfa60424540c0b2d5ff121e30946b10ec9e518a5a8


Configure the environment variable on the Master11 node so kubectl can access the Kubernetes cluster:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc

Check the node status

kubectl get nodes


NAME           STATUS     ROLES                  AGE     VERSION
k8s-master11   NotReady   control-plane,master   5m23s   v1.20.15

Join the other master nodes to the cluster

kubeadm join 10.10.10.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:2eb41cbafe1464ba85f617cfa60424540c0b2d5ff121e30946b10ec9e518a5a8 \
    --control-plane --certificate-key 505ca58be261429e1495681c3aaafc0396a06e522d131afa83390bd5bfc1629b

Add the worker nodes to the cluster

kubeadm join 10.10.10.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:2eb41cbafe1464ba85f617cfa60424540c0b2d5ff121e30946b10ec9e518a5a8
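The bootstrap token above has a 24-hour TTL, so a node added later may need a fresh join command. It can be regenerated on any existing master, and the control-plane certificate key can be re-uploaded as the init output already notes:

kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs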

Check the cluster status:

[root@jx-nginx-11 ~]# kubectl  get node
NAME           STATUS   ROLES                  AGE    VERSION
jx-apache-21   Ready    <none>                 95m    v1.20.15
jx-apache-22   Ready    <none>                 95m    v1.20.15
jx-nginx-12    Ready    control-plane,master   98m    v1.20.15
jx-nginx-13    Ready    control-plane,master   97m    v1.20.15
k8s-master11   Ready    control-plane,master   101m   v1.20.15

Install Calico. The following steps are executed only on master11:

cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git

cd /root/k8s-ha-install && git checkout manual-installation-v1.20.x && cd calico/
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://10.10.10.11:2379,https://10.10.10.12:2379,https://10.10.10.13:2379"#g' calico-etcd.yaml


ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml


sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml



Create Calico:

kubectl apply -f calico-etcd.yaml
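Once the manifest is applied, the Calico pods should come up in kube-system and the nodes should move from NotReady to Ready:

kubectl get pods -n kube-system -o wide | grep calico
kubectl get node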

Metrics Server deployment

Copy the front-proxy CA certificate from a master node to the two worker nodes; metrics-server uses it to verify requests coming through the API server aggregation layer:

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt

Install metrics server:

cd /root/k8s-ha-install/metrics-server-0.4.x-kubeadm/

kubectl  create -f comp.yaml 

kubectl  top node
[root@jx-nginx-11 ~]# kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
jx-apache-21   161m         8%     792Mi           41%       
jx-apache-22   149m         7%     756Mi           39%       
jx-nginx-12    385m         19%    1074Mi          56%       
jx-nginx-13    400m         20%    1179Mi          62%       
k8s-master11   407m         20%    1129Mi          59%  

Dashboard deployment

cd /root/k8s-ha-install/dashboard/
kubectl  create -f .

Change the dashboard Service type to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
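The edit command opens the Service in an editor; set spec.type to NodePort and save. If a non-interactive command is preferred, a patch with the same effect (assuming the Service name and namespace shown here) would be:

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'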

Check the assigned NodePort:

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

[root@jx-nginx-11 ~]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.111.102.154   <none>        443:32747/TCP   94m


Open https://10.10.10.200:32747 in a browser. Chrome currently has trouble with the self-signed certificate; try Firefox or Edge instead, or launch the browser with the flags --test-type --ignore-certificate-errors.

Get the login token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

[root@jx-nginx-11 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-qqnd4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 4b41595e-437d-4ca6-ab80-9af359bbb6b5

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVtMjlQLUJJOUlkWDRrR0QxU3dOWkFjeTVyWjRDQTRlOGR5UkFncndITlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFxbmQ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0YjQxNTk1ZS00MzdkLTRjYTYtYWI4MC05YWYzNTliYmI2YjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.kjHq8lYDADfOkhYN5eUVGAsbVoxte2BZJXq9f-TZp9dqB75g-W9K_hWbdcSW5vBdNuEMJZFrC5VSXwjTt-tpMODMUcic9GzdL-Hc4ZEuyVRTmpAX0YwEZ9l39CW8fahsWHMz4-AuJG3jRzBED3R5pXcM_f6kPth_hsWthXAQeFEqXZZSBQDz1xIN2_Tz3-v3eHXPDBUBYnNOhyQyYQ416MRDC1Lo3vCmk07aCKQ9pczsc4plnV62qxt8fGwFaxSwCTek1Cg1ms7_d6ctBgpL5REQTqooAsKQjMCgpmNYHun4nI8SPTnj0mtS7aFOvtqRMl43mHWktrvS1iGg0wQOow

Log in with the token.
