Deploying a highly available Kubernetes environment (HAProxy + Keepalived) and connecting it to the office network

  1. HAProxy + Keepalived as a highly available entry point:
    • Run HAProxy as the load balancer on two or more nodes.
    • Use Keepalived to provide a VIP (virtual IP) for HAProxy. Keepalived monitors HAProxy's state; if the primary node fails, the VIP automatically floats to the standby node, keeping the entry point available.
  2. HAProxy configuration:
    • HAProxy acts as the entry point to Kubernetes and forwards traffic to the cluster's apiserver.
    • Configure load-balancing rules to distribute traffic across the API servers on the Kubernetes master nodes.
  3. Keepalived configuration:
    • Keepalived monitors HAProxy and automatically fails over to the standby server when a node goes down, keeping the service continuous.
    • Configure a high-priority master node and a lower-priority backup node; when the master becomes unavailable, the VIP moves to the backup.
  4. Office network connectivity:
    • Connect the office network to the Kubernetes cluster network through a VPN or router configuration.
    • Make sure the office network can reach the cluster through the VIP, including the API server and other in-cluster services.

Environment preparation

Environment

OS version: CentOS 7.5.1804
Kubernetes version: v1.15.0
Calico version: v3.8

Machines

| Hostname | IP | Role | Software |
|---|---|---|---|
| k8s-test-lvs-1.fjf | 192.168.83.15 (VIP 192.168.83.3) | HA & load balancing, master | keepalived, haproxy |
| k8s-test-lvs-2.fjf | 192.168.83.26 (VIP 192.168.83.3) | HA & load balancing, slave | keepalived, haproxy |
| k8s-test-master-1.fjf | 192.168.83.36 | Kubernetes master node | kubernetes |
| k8s-test-master-2.fjf | 192.168.83.54 | Kubernetes master node | kubernetes |
| k8s-test-master-3.fjf | 192.168.83.49 | Kubernetes master node | kubernetes |
| k8s-test-node-1.fjf | 192.168.83.52 | Kubernetes worker node | kubernetes |
| k8s-test-node-2.fjf | 192.168.83.22 | Kubernetes worker node | kubernetes |
| k8s-test-node-3.fjf | 192.168.83.37 | Kubernetes worker node | kubernetes |

Network plan

| Name | Network | Notes |
|---|---|---|
| Node network | 192.168.83.0/24 | Network used by the container hosts |
| Pod network | 172.15.0.0/16 | Container network |
| SVC network | 172.16.0.0/16 | Service network |

System configuration

# Disable SELinux on all nodes
sed -i "/SELINUX/ s/enforcing/disabled/g" /etc/selinux/config


# Disable the firewall on all nodes
systemctl disable firewalld


# Configure kernel parameters on all nodes
cat > /etc/sysctl.conf <<EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120

# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2

# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2

net.ipv4.tcp_fin_timeout = 30

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
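
These parameters take effect only after they are loaded. A minimal sketch of the remaining steps kubeadm expects on every node: loading br_netfilter so the net.bridge.* keys exist, applying the sysctl file, and turning swap off. The swap step is not shown in the original and is added here as an assumption.

# Load the bridge module so the net.bridge.* keys are available, and make it persistent
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Apply the parameters from /etc/sysctl.conf
sysctl -p

# kubeadm expects swap to be disabled
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab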

If you are deploying on OpenStack, port security must be disabled on every host's port; otherwise the Pod and SVC networks will not be able to communicate properly.
Run the following on the OpenStack control node.

# This is required for every node; it can be batched with a for loop (a loop sketch follows the port-show output below). The steps below cover a single host.


# Look up the port ID
neutron port-list| grep 192.168.83.15 
| ea065e6c-65d9-457b-a68e-282653c890e5 |      | 9f83fb35aed1422588096b578cc01341 | fa:16:3e:bd:41:81 | {"subnet_id": "710ffde5-d820-4a30-afe2-1dfd6f40e288", "ip_address": "192.168.83.15"} |


# Remove the security groups
neutron port-update --no-security-groups ea065e6c-65d9-457b-a68e-282653c890e5
Updated port: ea065e6c-65d9-457b-a68e-282653c890e5


# Disable port security on the port
neutron port-update ea065e6c-65d9-457b-a68e-282653c890e5 --port-security-enabled=False
Updated port: ea065e6c-65d9-457b-a68e-282653c890e5


# Check the status
neutron port-show fcf46a43-5a72-4a28-8e57-1eb04ae24c42
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:host_id       | compute6.openstack.fjf                                                               |
| binding:profile       | {}                                                                                   |
| binding:vif_details   | {"port_filter": true}                                                                |
| binding:vif_type      | bridge                                                                               |
| binding:vnic_type     | normal                                                                               |
| created_at            | 2021-07-01T10:13:32Z                                                                 |
| description           |                                                                                      |
| device_id             | 0a10a372-95fa-4b95-a036-ee43675f1ff4                                                 |
| device_owner          | compute:nova                                                                         |
| extra_dhcp_opts       |                                                                                      |
| fixed_ips             | {"subnet_id": "710ffde5-d820-4a30-afe2-1dfd6f40e288", "ip_address": "192.168.83.22"} |
| id                    | fcf46a43-5a72-4a28-8e57-1eb04ae24c42                                                 |
| mac_address           | fa:16:3e:3f:1d:a8                                                                    |
| name                  |                                                                                      |
| network_id            | 02a8d505-af1e-4da5-af08-ed5ea7600293                                                 |
| port_security_enabled | False                                                                                | # False means port security is disabled
| project_id            | 9f83fb35aed1422588096b578cc01341                                                     |
| revision_number       | 10                                                                                   |
| security_groups       |                                                                                      | # this field must be empty
| status                | ACTIVE                                                                               |
| tags                  |                                                                                      |
| tenant_id             | 9f83fb35aed1422588096b578cc01341                                                     |
| updated_at            | 2021-07-01T10:13:42Z                                                                 |
+-----------------------+--------------------------------------------------------------------------------------+
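
To batch this across all hosts, the single-host steps above can be wrapped in a loop. A hedged sketch, assuming the neutron CLI output format shown above and the node IPs from the machine table (adjust the list to your environment):

for ip in 192.168.83.15 192.168.83.26 192.168.83.36 192.168.83.54 192.168.83.49 192.168.83.52 192.168.83.22 192.168.83.37; do
  # Look up the port ID for this fixed IP
  port_id=$(neutron port-list | grep "\"$ip\"" | awk '{print $2}')
  # Remove the security groups, then disable port security
  neutron port-update --no-security-groups "$port_id"
  neutron port-update "$port_id" --port-security-enabled=False
done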

Deploying HAProxy + Keepalived

[192.168.83.15, 192.168.83.26] Install HAProxy and Keepalived

yum -y install haproxy keepalived

[192.168.83.15, 192.168.83.26] Configure HAProxy

cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
 
    stats socket /var/lib/haproxy/stats
 
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 5000
 
defaults
    mode                    tcp
    option                  redispatch
    option                  abortonclose
    timeout connect         5000s
    timeout client          50000s
    timeout server          50000s
    log 127.0.0.1           local0
    balance                 roundrobin
    maxconn                 50000
 
listen  admin_stats 0.0.0.0:50101
        mode        http
        stats uri   /
        stats realm     Global\ statistics
        stats auth  haproxy:password
        stats hide-version
        stats admin if TRUE
 
listen KubeApi
    bind 0.0.0.0:6443
    mode tcp
    server KubeMaster1 192.168.83.36:6443 weight 1 check port 6443 inter 12000 rise 1 fall 3
    server KubeMaster2 192.168.83.54:6443 weight 1 check port 6443 inter 12000 rise 1 fall 3
    server KubeMaster3 192.168.83.49:6443 weight 1 check port 6443 inter 12000 rise 1 fall 3
EOF
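
Before starting the service, the configuration can be syntax-checked (a quick sanity step, not part of the original procedure):

haproxy -c -f /etc/haproxy/haproxy.cfg
# Expected output: Configuration file is valid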

[192.168.83.15] Configure Keepalived (MASTER)

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
 
global_defs {
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
 
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test-pass
    }
    virtual_ipaddress {
        192.168.83.3
    }
}
EOF

[192.168.83.26] Configure Keepalived (BACKUP)

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
 
global_defs {
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
 
vrrp_instance VI_1 {
    state BACKUP # differs from the MASTER node
    interface eth0
    virtual_router_id 52
    priority 99 # differs from the MASTER node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass test-pass
    }
    virtual_ipaddress {
        192.168.83.3
    }
}
EOF

[192.168.83.15, 192.168.83.26] Start the services

systemctl start haproxy keepalived
systemctl enable haproxy keepalived     
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
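
To confirm the VIP landed on the MASTER node and HAProxy is listening, a quick check (interface name and addresses taken from the configuration above):

# On 192.168.83.15 (MASTER) the VIP should appear; on 192.168.83.26 it should not
ip addr show eth0 | grep 192.168.83.3

# Both nodes should be listening on the API and stats ports
ss -lntp | grep -E ':6443|:50101'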

Open the stats dashboard at http://192.168.83.3:50101/ ; the username and password are defined by the stats auth field in the HAProxy configuration.
At this point it is normal for all backends to show as DOWN, because the Kubernetes master nodes have not been deployed yet.

Deploying the Kubernetes masters

[192.168.83.36, 192.168.83.54, 192.168.83.49] Add the Yum repositories

# Remove any components that may already be installed
yum remove docker docker-common docker-selinux docker-engine
# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Docker CE repository
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
 
 
# Kubernetes
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
 
 
# Rebuild the Yum cache
yum makecache fast

[192.168.83.36, 192.168.83.54, 192.168.83.49] Install the packages

yum -y install docker-ce kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

[192.168.83.36, 192.168.83.54, 192.168.83.49] Add the Docker daemon configuration

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
        "max-size": "100m",
        "max-file": "10"
  },
  "oom-score-adjust": -1000,
  "live-restore": true
}
EOF

[192.168.83.36, 192.168.83.54, 192.168.83.49] Start the services

systemctl start docker kubelet
systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[192.168.83.36] Create the cluster initialization configuration

cat > /etc/kubernetes/kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "192.168.83.3:6443" # the VIP (floating IP); keep the port fixed at 6443
networking:
  podSubnet: 172.15.0.0/16
  serviceSubnet: 172.16.0.0/16
  dnsDomain: k8s-test.fjf # default is cluster.local; this suffix works on the SVC network, but DNS queries must be forwarded to the in-cluster DNS to resolve
EOF
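
Optionally, the images this configuration will require can be listed up front; the exact output depends on the kubeadm version installed:

kubeadm config images list --config /etc/kubernetes/kubeadm-config.yaml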

[192.168.83.36, 192.168.83.54, 192.168.83.49] Pre-pull the images manually to speed up cluster initialization

KUBE_VERSION=v1.15.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1
 
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
 
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})
 
 
for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
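
A quick check that the retagged images are in place:

docker images | grep k8s.gcr.io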

[192.168.83.36] Initialize the cluster

# --upload-certs uploads the certificates to the cluster so the other nodes do not need them copied manually
# tee kubeadm-init.log saves the console output; the join commands in it are needed later
kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml  --upload-certs | tee kubeadm-init.log
# Output:
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
        [WARNING Hostname]: hostname "k8s-test-master-1.fjf" could not be reached
        [WARNING Hostname]: hostname "k8s-test-master-1.fjf": lookup k8s-test-master-1.fjf on 192.168.81.10:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-test-master-1.fjf localhost] and IPs [192.168.83.36 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-test-master-1.fjf localhost] and IPs [192.168.83.36 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-test-master-1.fjf kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.k8s-test.fjf] and IPs [172.16.0.1 192.168.83.36 192.168.83.3]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.005364 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
d3123a4998a8715f1c12830d7d54a63aa790fcf3da0cf4b5e7514cd7ffd6e200
[mark-control-plane] Marking the node k8s-test-master-1.fjf as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-test-master-1.fjf as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mpzs9m.oec6ixeesemzbxle
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of the control-plane node running the following command on each as root:
# Join as a control-plane (master) node
  kubeadm join 192.168.83.3:6443 --token mpzs9m.oec6ixeesemzbxle \
    --discovery-token-ca-cert-hash sha256:7c05c8001693061902d6f20947fbc60c1b6a12e9ded449e6c59a71e6448fac5d \
    --experimental-control-plane --certificate-key d3123a4998a8715f1c12830d7d54a63aa790fcf3da0cf4b5e7514cd7ffd6e200
 
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
 
Then you can join any number of worker nodes by running the following on each as root:
# Join as a worker node
kubeadm join 192.168.83.3:6443 --token mpzs9m.oec6ixeesemzbxle \
    --discovery-token-ca-cert-hash sha256:7c05c8001693061902d6f20947fbc60c1b6a12e9ded449e6c59a71e6448fac5d

[192.168.83.54, 192.168.83.49] Join the other master nodes to the cluster

# Copy this command from the output of the previous step; note that the master and worker join commands are different
kubeadm join 192.168.83.3:6443 --token mpzs9m.oec6ixeesemzbxle \
    --discovery-token-ca-cert-hash sha256:7c05c8001693061902d6f20947fbc60c1b6a12e9ded449e6c59a71e6448fac5d \
    --experimental-control-plane --certificate-key d3123a4998a8715f1c12830d7d54a63aa790fcf3da0cf4b5e7514cd7ffd6e200
# Output:
Flag --experimental-control-plane has been deprecated, use --control-plane instead
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
        [WARNING Hostname]: hostname "k8s-test-master-2.fjf" could not be reached
        [WARNING Hostname]: hostname "k8s-test-master-2.fjf": lookup k8s-test-master-2.fjf on 192.168.81.10:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-test-master-2.fjf localhost] and IPs [192.168.83.54 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-test-master-2.fjf localhost] and IPs [192.168.83.54 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-test-master-2.fjf kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.k8s-test.fjf] and IPs [172.16.0.1 192.168.83.54 192.168.83.3]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-test-master-2.fjf as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-test-master-2.fjf as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
 
This node has joined the cluster and a new control plane instance was created:
 
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
 
To start administering your cluster from this node, you need to run the following as a regular user:
 
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Run 'kubectl get nodes' to see this node join the cluster.

[192.168.83.36, 192.168.83.54, 192.168.83.49] Set up the kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status

kubectl get node
NAME                    STATUS     ROLES    AGE     VERSION
k8s-test-master-1.fjf   NotReady   master   7m19s   v1.15.0
k8s-test-master-2.fjf   NotReady   master   112s    v1.15.0
k8s-test-master-3.fjf   NotReady   master   62s     v1.15.0
 
 
kubectl get pod -A
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-6kxg6                        0/1     Pending   0          7m30s  # Pending is expected; it recovers once the network add-on is installed
kube-system   coredns-5c98db65d4-x6x4b                        0/1     Pending   0          7m30s  # Pending is expected; it recovers once the network add-on is installed
kube-system   etcd-k8s-test-master-1.fjf                      1/1     Running   0          6m37s
kube-system   etcd-k8s-test-master-2.fjf                      1/1     Running   0          2m22s
kube-system   etcd-k8s-test-master-3.fjf                      1/1     Running   0          93s
kube-system   kube-apiserver-k8s-test-master-1.fjf            1/1     Running   0          6m31s
kube-system   kube-apiserver-k8s-test-master-2.fjf            1/1     Running   0          2m22s
kube-system   kube-apiserver-k8s-test-master-3.fjf            1/1     Running   1          91s
kube-system   kube-controller-manager-k8s-test-master-1.fjf   1/1     Running   1          6m47s
kube-system   kube-controller-manager-k8s-test-master-2.fjf   1/1     Running   0          2m22s
kube-system   kube-controller-manager-k8s-test-master-3.fjf   1/1     Running   0          22s
kube-system   kube-proxy-hrfgq                                1/1     Running   0          2m23s
kube-system   kube-proxy-nlm68                                1/1     Running   0          93s
kube-system   kube-proxy-tt8dg                                1/1     Running   0          7m30s
kube-system   kube-scheduler-k8s-test-master-1.fjf            1/1     Running   1          6m28s
kube-system   kube-scheduler-k8s-test-master-2.fjf            1/1     Running   0          2m22s
kube-system   kube-scheduler-k8s-test-master-3.fjf            1/1     Running   0          21s

Open the stats dashboard at http://192.168.83.3:50101/ ; the username and password are defined by the stats auth field in the HAProxy configuration.
All master backends should now show as UP. If not, go back and verify that every previous step was completed and that your output matches.

Deploying the Kubernetes network add-on

  • The main CNI plugins currently supported:
    Flannel, Calico, Weave, and Canal (technically a combination of several plugins)

This deployment uses Calico; its BGP support lets us connect the Pod and SVC networks to the internal network.

  • Deploy Calico into the Kubernetes cluster; running the following on any one master node is sufficient.
# Download the Calico manifest
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# Edit the CIDR in calico.yaml (a scripted alternative follows the output below)
- name: CALICO_IPV4POOL_CIDR
  value: "172.15.0.0/16"  # change this to the Pod network
# Deploy Calico
kubectl apply -f calico.yaml
# Output:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
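
If you prefer to script the CIDR change above rather than edit calico.yaml by hand, a hedged sed one-liner; it assumes the manifest ships with the default value 192.168.0.0/16, so check the file before running it:

sed -i 's#192.168.0.0/16#172.15.0.0/16#g' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml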

Wait for the CNI to take effect

watch kubectl get pods --all-namespaces
 
 
# Exit once all pods are in the Running state
Every 2.0s: kubectl get pods --all-namespaces                                                                                                                        Mon Jul  5 18:49:49 2021
 
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE         
kube-system   calico-kube-controllers-7fc57b95d4-wsvpm        1/1     Running   0          85s
kube-system   calico-node-7f99k                               0/1     Running   0          85s
kube-system   calico-node-jssb6                               0/1     Running   0          85s
kube-system   calico-node-qgvp9                               1/1     Running   0          85s
kube-system   coredns-5c98db65d4-6kxg6                        1/1     Running   0          44m         
kube-system   coredns-5c98db65d4-x6x4b                        1/1     Running   0          44m         
kube-system   etcd-k8s-test-master-1.fjf                      1/1     Running   0          43m         
kube-system   etcd-k8s-test-master-2.fjf                      1/1     Running   0          39m         
kube-system   etcd-k8s-test-master-3.fjf                      1/1     Running   0          38m         
kube-system   kube-apiserver-k8s-test-master-1.fjf            1/1     Running   0          43m         
kube-system   kube-apiserver-k8s-test-master-2.fjf            1/1     Running   0          39m         
kube-system   kube-apiserver-k8s-test-master-3.fjf            1/1     Running   1          38m         
kube-system   kube-controller-manager-k8s-test-master-1.fjf   1/1     Running   1          44m         
kube-system   kube-controller-manager-k8s-test-master-2.fjf   1/1     Running   0          39m         
kube-system   kube-controller-manager-k8s-test-master-3.fjf   1/1     Running   0          37m         
kube-system   kube-proxy-hrfgq                                1/1     Running   0          39m         
kube-system   kube-proxy-nlm68                                1/1     Running   0          38m         
kube-system   kube-proxy-tt8dg                                1/1     Running   0          44m         
kube-system   kube-scheduler-k8s-test-master-1.fjf            1/1     Running   1          43m         
kube-system   kube-scheduler-k8s-test-master-2.fjf            1/1     Running   0          39m         
kube-system   kube-scheduler-k8s-test-master-3.fjf            1/1     Running   0          37m

Check whether CoreDNS has recovered

kubectl get pod -A -owide
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE   IP               NODE                    NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-7fc57b95d4-zxc4d        1/1     Running   0          88s   172.15.139.1     k8s-test-master-1.fjf   <none>           <none>
kube-system   calico-node-24j88                               1/1     Running   0          89s   192.168.83.36    k8s-test-master-1.fjf   <none>           <none>
kube-system   calico-node-49vnk                               1/1     Running   0          89s   192.168.83.54    k8s-test-master-2.fjf   <none>           <none>
kube-system   calico-node-vzjk8                               1/1     Running   0          89s   192.168.83.49    k8s-test-master-3.fjf   <none>           <none>
kube-system   coredns-5c98db65d4-frfx7                        1/1     Running   0          16s   172.15.159.129   k8s-test-master-2.fjf   <none>           <none> # now Running
kube-system   coredns-5c98db65d4-tt9w5                        1/1     Running   0          27s   172.15.221.1     k8s-test-master-3.fjf   <none>           <none> # now Running
kube-system   etcd-k8s-test-master-1.fjf                      1/1     Running   0          48m   192.168.83.36    k8s-test-master-1.fjf   <none>           <none>
kube-system   etcd-k8s-test-master-2.fjf                      1/1     Running   0          44m   192.168.83.54    k8s-test-master-2.fjf   <none>           <none>
kube-system   etcd-k8s-test-master-3.fjf                      1/1     Running   0          43m   192.168.83.49    k8s-test-master-3.fjf   <none>           <none>
kube-system   kube-apiserver-k8s-test-master-1.fjf            1/1     Running   0          48m   192.168.83.36    k8s-test-master-1.fjf   <none>           <none>
kube-system   kube-apiserver-k8s-test-master-2.fjf            1/1     Running   0          44m   192.168.83.54    k8s-test-master-2.fjf   <none>           <none>
kube-system   kube-apiserver-k8s-test-master-3.fjf            1/1     Running   1          43m   192.168.83.49    k8s-test-master-3.fjf   <none>           <none>
kube-system   kube-controller-manager-k8s-test-master-1.fjf   1/1     Running   1          48m   192.168.83.36    k8s-test-master-1.fjf   <none>           <none>
kube-system   kube-controller-manager-k8s-test-master-2.fjf   1/1     Running   0          44m   192.168.83.54    k8s-test-master-2.fjf   <none>           <none>
kube-system   kube-controller-manager-k8s-test-master-3.fjf   1/1     Running   0          42m   192.168.83.49    k8s-test-master-3.fjf   <none>           <none>
kube-system   kube-proxy-hrfgq                                1/1     Running   0          44m   192.168.83.54    k8s-test-master-2.fjf   <none>           <none>
kube-system   kube-proxy-nlm68                                1/1     Running   0          43m   192.168.83.49    k8s-test-master-3.fjf   <none>           <none>
kube-system   kube-proxy-tt8dg                                1/1     Running   0          49m   192.168.83.36    k8s-test-master-1.fjf   <none>           <none>
kube-system   kube-scheduler-k8s-test-master-1.fjf            1/1     Running   1          48m   192.168.83.36    k8s-test-master-1.fjf   <none>           <none>
kube-system   kube-scheduler-k8s-test-master-2.fjf            1/1     Running   0          44m   192.168.83.54    k8s-test-master-2.fjf   <none>           <none>
kube-system   kube-scheduler-k8s-test-master-3.fjf            1/1     Running   0          42m   192.168.83.49    k8s-test-master-3.fjf   <none>           <none>

Deploying the Kubernetes worker nodes

[192.168.83.52, 192.168.83.22, 192.168.83.37] Add the Yum repositories

# Remove any components that may already be installed
yum remove docker docker-common docker-selinux docker-engine
# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Docker CE repository
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
 
 
# Kubernetes
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
 
 
# Rebuild the Yum cache
yum makecache fast

[192.168.83.52, 192.168.83.22, 192.168.83.37] Install the packages


yum -y install kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io

[192.168.83.52, 192.168.83.22, 192.168.83.37] Add the Docker daemon configuration

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://h3klxkkx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
        "max-size": "100m",
        "max-file": "10"
  },
  "oom-score-adjust": -1000,
  "live-restore": true
}
EOF

[192.168.83.52, 192.168.83.22, 192.168.83.37] Start the services

systemctl start docker kubelet
systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[192.168.83.52, 192.168.83.22, 192.168.83.37] Pre-pull the images manually to speed up joining the cluster

KUBE_VERSION=v1.15.0
KUBE_PAUSE_VERSION=3.1
  
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
  
images=(kube-proxy:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION})
  
  
for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done

[192.168.83.52, 192.168.83.22, 192.168.83.37] Join the cluster

# This command was recorded when the master cluster was initialized; if you cannot find it, see http://wiki.fjf.com/pages/viewpage.action?pageId=14682096
kubeadm join 192.168.83.3:6443 --token wiqtis.k4g3jm9z94qykiyl     --discovery-token-ca-cert-hash sha256:2771e3f29b2628edac9c5e1433dff5c3275ab870ad14ca7c9e6adaf1eed3179e

Check the worker node status from a master node

kubectl get node         
NAME                    STATUS   ROLES    AGE   VERSION
k8s-test-master-1.fjf   Ready    master   73m   v1.15.0
k8s-test-master-2.fjf   Ready    master   68m   v1.15.0
k8s-test-master-3.fjf   Ready    master   67m   v1.15.0
k8s-test-node-1.fjf     Ready    <none>   75s   v1.15.0 # Ready means the node joined the cluster successfully
k8s-test-node-2.fjf     Ready    <none>   63s   v1.15.0 # Ready means the node joined the cluster successfully
k8s-test-node-3.fjf     Ready    <none>   59s   v1.15.0 # Ready means the node joined the cluster successfully

Connecting the Pod network with Calico

Current state
Inside the cluster, pods and nodes can reach each other directly via pod IPs, and containers can reach the VMs, but the VMs cannot reach the containers. Services registered through Consul in particular can only call each other once the two networks are connected.

Goal
Connect the pod and VM networks so that VMs can reach pod IPs.
Official documentation: https://docs.projectcalico.org/archive/v3.8/networking/bgp

[Kubernetes master node] Install the calicoctl command-line tool

curl -O -L  https://github.com/projectcalico/calicoctl/releases/download/v3.8.9/calicoctl
chmod +x calicoctl
mv calicoctl /usr/bin/calicoctl

[node where calicoctl is installed] Add the calicoctl configuration

mkdir /etc/calico
cat > /etc/calico/calicoctl.cfg <<EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
EOF
 
 
# Test
calicoctl version
Client Version:    v3.8.9
Git commit:        0991d2fb
Cluster Version:   v3.8.9        # this line confirms the configuration is correct
Cluster Type:      k8s,bgp,kdd   # this line confirms the configuration is correct

[node where calicoctl is installed] Configure the cluster route reflectors: worker nodes peer with the masters, and the masters peer with each other

# In this environment the Kubernetes master nodes act as the route reflectors
# List the nodes
kubectl get node
NAME                    STATUS   ROLES    AGE     VERSION
k8s-test-master-1.fjf   Ready    master   3d1h    v1.15.0
k8s-test-master-2.fjf   Ready    master   3d1h    v1.15.0
k8s-test-master-3.fjf   Ready    master   3d1h    v1.15.0
k8s-test-node-1.fjf     Ready    <none>   2d23h   v1.15.0
k8s-test-node-2.fjf     Ready    <none>   2d23h   v1.15.0
k8s-test-node-3.fjf     Ready    <none>   2d23h   v1.15.0
 
 
# Export the master node configurations
calicoctl get node k8s-test-master-1.fjf --export -o yaml > k8s-test-master-1.yml
calicoctl get node k8s-test-master-2.fjf --export -o yaml > k8s-test-master-2.yml
calicoctl get node k8s-test-master-3.fjf --export -o yaml > k8s-test-master-3.yml
 
 
# Add the following to each of the three master node files to mark the node as a route reflector (a full example of an edited file follows this block)
metadata:
  ......
  labels:
    ......
    i-am-a-route-reflector: true
  ......
spec:
  bgp:
    ......
    routeReflectorClusterID: 224.0.0.1
 
 
# Apply the updated node configurations
calicoctl apply -f k8s-test-master-1.yml
calicoctl apply -f k8s-test-master-2.yml
calicoctl apply -f k8s-test-master-3.yml
 
 
# Peer all other nodes with the route reflectors
calicoctl apply -f - <<EOF
kind: BGPPeer
apiVersion: projectcalico.org/v3
metadata:
  name: peer-to-rrs
spec:
  nodeSelector: "!has(i-am-a-route-reflector)"
  peerSelector: has(i-am-a-route-reflector)
EOF
 
 
# Peer the route reflectors with each other
calicoctl apply -f - <<EOF
kind: BGPPeer
apiVersion: projectcalico.org/v3
metadata:
  name: rr-mesh
spec:
  nodeSelector: has(i-am-a-route-reflector)
  peerSelector: has(i-am-a-route-reflector)
EOF
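
For reference, a hedged sketch of what one exported master manifest might look like after the edit; only the added label and routeReflectorClusterID matter, the remaining fields are illustrative and will differ in your own export:

# k8s-test-master-1.yml (illustrative)
apiVersion: projectcalico.org/v3
kind: Node
metadata:
  name: k8s-test-master-1.fjf
  labels:
    i-am-a-route-reflector: true
spec:
  bgp:
    ipv4Address: 192.168.83.36/24
    routeReflectorClusterID: 224.0.0.1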

[node where calicoctl is installed] Peer the master (route reflector) nodes with the core switch

calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rr-border
spec:
  nodeSelector: has(i-am-a-route-reflector)
  peerIP: 192.168.83.1
  asNumber: 64512
EOF
# peerIP: the core switch IP
# asNumber: the AS number used to peer with the core switch

[192.168.83.1, device: Cisco 3650] Configure the core switch to peer with the master (route reflector) nodes. This step is done on the remote BGP device, which in this setup is the core switch.

router bgp 64512
bgp router-id 192.168.83.1
neighbor 192.168.83.36 remote-as 64512
neighbor 192.168.83.49 remote-as 64512
neighbor 192.168.83.54 remote-as 64512

Check the BGP peering status

calicoctl node status
# All entries in the INFO column should be Established
Calico process is running.
 
IPv4 BGP status
+---------------+---------------+-------+----------+-------------+
| PEER ADDRESS  |   PEER TYPE   | STATE |  SINCE   |    INFO     |
+---------------+---------------+-------+----------+-------------+
| 192.168.83.1  | node specific | up    | 06:38:55 | Established |
| 192.168.83.54 | node specific | up    | 06:38:55 | Established |
| 192.168.83.22 | node specific | up    | 06:38:55 | Established |
| 192.168.83.37 | node specific | up    | 06:38:55 | Established |
| 192.168.83.49 | node specific | up    | 06:38:55 | Established |
| 192.168.83.52 | node specific | up    | 06:38:55 | Established |
+---------------+---------------+-------+----------+-------------+
 
IPv6 BGP status
No IPv6 peers found.

Test: from a VM on another subnet, such as 192.168.82.0/24, ping a pod IP; if the pod replies, the Pod network is reachable.

[dev][root@spring-boot-demo1-192.168.82.85 ~]# ping -c 3 172.15.190.2
PING 172.15.190.2 (172.15.190.2) 56(84) bytes of data.
64 bytes from 172.15.190.2: icmp_seq=1 ttl=62 time=0.677 ms
64 bytes from 172.15.190.2: icmp_seq=2 ttl=62 time=0.543 ms
64 bytes from 172.15.190.2: icmp_seq=3 ttl=62 time=0.549 ms
 
--- 172.15.190.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.543/0.589/0.677/0.067 ms
[dev][root@spring-boot-demo1-192.168.82.85 ~]#

Connecting the SVC network with Calico

Current state
Normally a Kubernetes cluster exposes services via Ingress, NodePort, or hostNetwork. These are fine for production, with good security and stability, but in an internal development environment they are inconvenient: developers want to reach their services directly, yet pod IPs change constantly. In that case we can access services by SVC IP or SVC name instead.

Goal
Connect the SVC network so that developers can reach cluster services from their own machines via SVC IP or SVC name.
Official documentation: https://docs.projectcalico.org/archive/v3.8/networking/service-advertisement
Note: this only works if the Pod network has already been connected over BGP, i.e. the BGP peering above is in place.

[Kubernetes master] Determine the SVC network range

kubectl cluster-info dump|grep -i  "service-cluster-ip-range"
# Output:
                            "--service-cluster-ip-range=172.16.0.0/16",
                            "--service-cluster-ip-range=172.16.0.0/16",
                            "--service-cluster-ip-range=172.16.0.0/16",

[Kubernetes master] Enable SVC network advertisement

kubectl patch ds -n kube-system calico-node --patch \
    '{"spec": {"template": {"spec": {"containers": [{"name": "calico-node", "env": [{"name": "CALICO_ADVERTISE_CLUSTER_IPS", "value": "172.16.0.0/16"}]}]}}}}'
# Output:
daemonset.extensions/calico-node patched
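
To confirm that the environment variable was actually added to the DaemonSet (a quick check, not in the original):

kubectl get ds calico-node -n kube-system -o yaml | grep -A1 CALICO_ADVERTISE_CLUSTER_IPS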

Test: under normal conditions the core switch receives the advertised routes within about three minutes of enabling the advertisement.

# Use the cluster DNS service for the test
kubectl get svc kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   172.16.0.10   <none>        53/UDP,53/TCP,9153/TCP   3d21h
 
 
# From outside the cluster, reverse-resolve a pod IP against the cluster DNS; getting an answer means the SVC network is reachable
[dev][root@spring-boot-demo1-192.168.82.85 ~]# dig -x 172.15.190.2 @172.16.0.10     
 
; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7_5.1 <<>> -x 172.15.190.2 @172.16.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23212
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;2.190.15.172.in-addr.arpa.     IN      PTR
 
;; ANSWER SECTION:
2.190.15.172.in-addr.arpa. 30   IN      PTR     172-15-190-2.ingress-nginx.ingress-nginx.svc.k8s-test.fjf. # the host record resolves correctly
 
;; Query time: 3 msec
;; SERVER: 172.16.0.10#53(172.16.0.10)
;; WHEN: Fri Jul 09 15:26:55 CST 2021
;; MSG SIZE  rcvd: 150
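
With the SVC network routable, the office DNS can forward the cluster domain to the in-cluster DNS so developers can use service names directly (the dnsDomain comment earlier pointed at this). A hedged sketch for an office DNS server running dnsmasq; the software and file path are assumptions, so adapt them to whatever your office DNS actually runs:

# /etc/dnsmasq.d/k8s-test.conf : forward the cluster domain to the cluster DNS
server=/k8s-test.fjf/172.16.0.10

# Reload the office DNS server
systemctl restart dnsmasq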
