(4) Kubernetes - Manual Deployment (Binary Installation)

Kubernetes - Manual Deployment [3]

    • 1 Deploy worker nodes
      • 1.1 Create working directories and copy binaries
      • 1.2 Deploy kubelet
        • 1.2.1 Create the configuration file
        • 1.2.2 Configuration parameters file
        • 1.2.3 Generate the bootstrap kubeconfig for kubelet's first join
        • 1.2.4 Manage kubelet with systemd
        • 1.2.5 Start and enable on boot
        • 1.2.6 Approve the kubelet certificate request and join the cluster
      • 1.3 Deploy kube-proxy
        • 1.3.1 Create the configuration file
        • 1.3.2 Configuration parameters file
        • 1.3.3 Generate the kube-proxy certificate files
        • 1.3.4 Generate the kube-proxy.kubeconfig file
        • 1.3.5 Manage kube-proxy with systemd
        • 1.3.6 Start and enable on boot
      • 1.4 Deploy the network component (Calico)
      • 1.5 Authorize apiserver access to kubelet
    • 2 Add a new worker node
      • 2.1 Copy the deployed files to the new node
      • 2.2 Delete the kubelet certificate and kubeconfig file
      • 2.3 Change the hostname
      • 2.4 Start and enable on boot
      • 2.5 Approve the new node's kubelet certificate request on the master
      • 2.6 Check node status
    • 3 Deploy Dashboard
      • 3.1 Deploy Dashboard
      • 3.2 Access the dashboard
    • 4 Deploy CoreDNS

<Continued from the previous article…>

Environment

Hostname   Operating System   IP Address       Required Components
node-251   CentOS 7.9         192.168.71.251   all components (to make full use of resources)
node-252   CentOS 7.9         192.168.71.252   all components
node-253   CentOS 7.9         192.168.71.253   docker, kubelet, kube-proxy

We have already deployed an etcd cluster on node-251 and node-252, installed Docker on all machines, and deployed kube-apiserver, kube-controller-manager, and kube-scheduler on node-251 (the master node).

Next we will deploy kubelet, kube-proxy, and the other components on node-252 and node-253.

1 Deploy worker nodes

We configure everything on the master node first and then sync it to the other nodes; the master node also serves as one of the worker nodes.

1.1 Create working directories and copy binaries

Note: create the working directory on all worker nodes.

Copy the binaries from the master node's k8s server package to all worker nodes:

cd /opt/kubernetes/server/bin/
for node_id in {1..3}
  do
    echo ">>> node-25${node_id}"
    ssh root@node-25${node_id} "mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}"
    scp kubelet kube-proxy root@node-25${node_id}:/opt/kubernetes/bin/
  done
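
A quick sanity check that the copy reached every node (a minimal sketch reusing the loop above):

for node_id in {1..3}; do ssh root@node-25${node_id} 'ls -l /opt/kubernetes/bin/'; done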

1.2 Deploy kubelet

1.2.1 Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=node-251 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
  • --hostname-override: the node's display name; must be unique within the cluster.
  • --network-plugin: enables CNI.
  • --kubeconfig: an empty path for now; the file is generated automatically and later used to connect to the apiserver.
  • --bootstrap-kubeconfig: used on first startup to request a certificate from the apiserver.
  • --config: the configuration parameters file.
  • --cert-dir: directory for kubelet certificates.
  • --pod-infra-container-image: image of the infrastructure (pause) container that holds each Pod's network namespace.

1.2.2 Configuration parameters file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
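
One common pitfall: cgroupDriver above must match the cgroup driver Docker actually uses, otherwise kubelet exits shortly after startup. A quick check with the standard Docker CLI:

# Prints "cgroupfs" or "systemd"; it must agree with cgroupDriver in kubelet-config.yml
docker info --format '{{.CgroupDriver}}'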

1.2.3 Generate the bootstrap kubeconfig for kubelet's first join

KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.71.251:6443" # apiserver IP:PORT
TOKEN="4136692876ad4b01bb9dd0988480ebba" # 与token.csv里保持一致  /opt/kubernetes/cfg/token.csv 

# Generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
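
Before starting kubelet, it is worth confirming that TOKEN above really matches the token created during the apiserver setup (assuming the comma-separated token.csv format used earlier in this series, where the token is the first field):

# The first field is the bootstrap token; it must equal $TOKEN
awk -F',' '{print $1}' /opt/kubernetes/cfg/token.csv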

1.2.4 Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

1.2.5 Start and enable on boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
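
If kubelet does not stay running, inspect the unit and the klog files under the --log-dir set earlier (the kubelet.INFO file name is klog's default convention, so treat the exact name as an assumption):

systemctl status kubelet
tail -n 50 /opt/kubernetes/logs/kubelet.INFO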

1.2.6 Approve the kubelet certificate request and join the cluster

# View pending kubelet certificate requests
[root@k8s-master1 bin]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4   107s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the kubelet certificate request
[root@k8s-master1 bin]# kubectl certificate approve  node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4
certificatesigningrequest.certificates.k8s.io/node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4 approved

# Check the request again
[root@k8s-master1 bin]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4   2m35s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# List nodes
[root@k8s-master1 bin]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   <none>   2m11s   v1.20.10

Note:
The node shows NotReady because the network plugin has not been deployed yet.

1.3 Deploy kube-proxy

1.3.1 Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

1.3.2 Configuration parameters file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node-251
clusterCIDR: 10.244.0.0/16
EOF
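
Note that clusterCIDR here must match the --cluster-cidr passed to kube-controller-manager in the previous part, or kube-proxy will misjudge which traffic is pod-to-pod. A quick check, assuming the controller-manager config lives in the same cfg directory as the rest of this series:

grep cluster-cidr /opt/kubernetes/cfg/kube-controller-manager.conf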

1.3.3 Generate the kube-proxy certificate files

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenyang",
      "ST": "Shenyang",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
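
cfssljson writes the key pair next to the CSR file; verify both files exist before continuing:

ls kube-proxy*.pem
# expected: kube-proxy-key.pem  kube-proxy.pem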

1.3.4 Generate the kube-proxy.kubeconfig file

KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.71.251:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
  
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

1.3.5 Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

1.3.6 Start and enable on boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
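
To confirm kube-proxy is serving, query the metrics endpoint bound above (/proxyMode is a long-standing kube-proxy endpoint, but treat the exact path as an assumption for your version):

curl -s http://127.0.0.1:10249/proxyMode
# expected: iptables (the default when no mode is set in the config)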

1.4 Deploy the network component (Calico)

Calico is a pure layer-3 data center networking solution and is currently a mainstream network choice for Kubernetes.

How it works
The core of Calico networking is IP routing: each container or virtual machine is assigned a workload-endpoint (wl).

Consider container A on nodeA accessing container B on nodeB:
The key question is how nodeA learns the next-hop address. The answer: nodes exchange routing information with one another over the BGP protocol.
Each node runs the software router bird as a BGP speaker, exchanging routes with the other nodes via BGP.
Put simply, every node announces to the others:

I am X.X.X.X; a certain IP or subnet lives on me, and its next hop is me.

In this way, every node learns the next-hop address of every workload-endpoint.
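
Once Calico is running, these BGP-learned routes are visible in each node's kernel routing table. An illustrative check (subnets and interface names will differ per cluster):

# Calico/bird-installed routes are tagged "proto bird"
ip route | grep bird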

Deployment

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl get pods -n kube-system

Applying calico.yaml may fail with an error like:

error: unable to recognize "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"

Check which Calico version supports your Kubernetes version: https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements
Once all the Calico Pods are Running, the nodes become Ready. (The author's VMs are low-spec, so this took a while.)

[root@node-251 kubernetes]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-577f77cb5c-xcgn5   1/1     Running   0          8m26s
calico-node-7dfjw                          1/1     Running   0          8m26s
[root@node-251 kubernetes]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
node-251   Ready    <none>   61m   v1.20.15

1.5 Authorize apiserver access to kubelet

Use case: commands such as kubectl logs.

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
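
After applying, apiserver-to-kubelet calls should succeed; for example, reading logs from one of the Calico pods listed earlier:

kubectl -n kube-system logs calico-node-7dfjw --tail=5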

2 Add a new worker node

2.1 Copy the deployed files to the new node

On the master node, copy the worker-node files to the new nodes (192.168.71.252 / 192.168.71.253):

for i in {2..3}; do scp -r /opt/kubernetes root@192.168.71.25$i:/opt/; done

for i in {2..3}; do scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.71.25$i:/usr/lib/systemd/system; done

for i in {2..3}; do scp -r /opt/kubernetes/ssl/ca.pem root@192.168.71.25$i:/opt/kubernetes/ssl/; done

2.2 Delete the kubelet certificate and kubeconfig file

Delete the generated files on the new worker nodes:

for i in {2..3}; do
ssh root@node-25$i "rm -f /opt/kubernetes/cfg/kubelet.kubeconfig && rm -f /opt/kubernetes/ssl/kubelet*"
done

Note:
These files were generated automatically when the certificate request was approved. They differ on every node, so they must be deleted.

2.3 Change the hostname

On each new worker node, change the hostname in the configuration files:

[root@node-251 kubernetes]# grep 'node-251' /opt/kubernetes/cfg/kubelet.conf
--hostname-override=node-251 \
[root@node-251 kubernetes]# grep 'node-251' /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node-251
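
For example, on node-252 both files can be fixed in one pass (a sketch; adjust the target hostname per node):

sed -i 's/node-251/node-252/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml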

2.4 Start and enable on boot

Run on each worker node:

systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy

2.5 Approve the new nodes' kubelet certificate requests on the master

# View certificate requests
[root@node-251 kubernetes]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-0nA37A70PmadfExLLiUFejGF0vggS-3O-zMHma5AMnc   85m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34   99s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE   2m10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve
[root@node-251 kubernetes]# kubectl certificate approve node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34
certificatesigningrequest.certificates.k8s.io/node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34 approved
[root@node-251 kubernetes]# kubectl certificate approve node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE
certificatesigningrequest.certificates.k8s.io/node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE approved

# Check again
[root@node-251 kubernetes]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-0nA37A70PmadfExLLiUFejGF0vggS-3O-zMHma5AMnc   85m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34   2m24s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE   2m55s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

2.6 Check node status

It takes a little while for the new nodes to become Ready, since some initial images have to be pulled first.

[root@node-251 kubernetes]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
node-251   Ready      <none>   86m   v1.20.15
node-252   NotReady   <none>   84s   v1.20.15
node-253   NotReady   <none>   72s   v1.20.15

3 Deploy Dashboard

Deploy on the master node.

3.1 Deploy Dashboard

Official guide: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

[root@node-251 kubernetes]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

Expose the dashboard through a NodePort service (reference: https://www.cnblogs.com/wucaiyun1/p/11692204.html).

The modified recommended.yaml:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Apply the manifest:

[root@node-251 kubernetes]# kubectl apply -f recommended.yaml

Check that the pod and service are up:

[root@node-251 kubernetes]# kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.0.0.117   <none>        443:30001/TCP   27s
[root@node-251 kubernetes]# kubectl get pods --namespace=kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7b9b99d599-bgdsq   1/1     Running   0          41s   172.16.29.195   node-252   <none>           <none>
kubernetes-dashboard-6d4799d74-d86zt         1/1     Running   0          41s   172.16.101.69   node-253   <none>           <none>

3.2 Access the dashboard

Create a service account and bind it to the built-in cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@node-251 kubernetes]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-nkxqf
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7ddf0af3-423d-4bc2-b9b0-0dde859b2e44

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1363 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImJ6R0VRc2tXRURINE5uQmVBMDNhdl9IX3FRRl9HRVh3RFpKWDZMcmhMX2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbmt4cWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2RkZjBhZjMtNDIzZC00YmMyLWI5YjAtMGRkZTg1OWIyZTQ0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.qNxZy8eARwZRh1wObRwn3i8OeOcnnD8ubrgOu2ZVmwmERJ3sHrNMXG5UDJph_SzaNtEo43o22zigxUdct8QJ9c-p9A5oYghuKBnDY1rR6h34mH4QQUpET2E8scNW3vaYxZmqZi1qpOzC72KL39m_cpbhMdfdyNweUY3vUDHrfIXfvDCS82v2jiCa4sjn5aajwwlZhywOPJXN7d1JGZKgg1tzwcMVkhtIYOP8RB3z-SfA1biAy8Xf7bTCPlaFGGuNlgWhgOxTv8M7r6U_KuFfV7S-cQqtEEp1qeBdI70Bk95euH3UJAx55_OkkjLx2dwFrgZiKFXoTNLSUFIVdsQVpQ

Access https://NodeIP:30001 and log in to the Dashboard with the token printed above. (If the browser reports an HTTPS certificate error, try Firefox, which lets you add an exception.)

4 Deploy CoreDNS

CoreDNS provides name resolution for Service names inside the cluster.
Reference: https://blog.csdn.net/weixin_46476452/article/details/127884162

coredns.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
    app.kubernetes.io/name: coredns
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
      app.kubernetes.io/name: coredns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        app.kubernetes.io/name: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
           - labelSelector:
               matchExpressions:
               - key: k8s-app
                 operator: In
                 values: ["kube-dns"]
             topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.9.4
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
    app.kubernetes.io/name: coredns
spec:
  selector:
    k8s-app: kube-dns
    app.kubernetes.io/name: coredns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Apply the manifest:

kubectl apply -f coredns.yaml

[root@node-251 kubernetes]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-577f77cb5c-xcgn5   1/1     Running   3          157m
calico-node-48b86                          1/1     Running   0          125m
calico-node-7dfjw                          1/1     Running   2          157m
calico-node-9d66z                          1/1     Running   0          125m
coredns-6b9bb479b9-gz8zd                   1/1     Running   0          3m20s

Test that name resolution works:

[root@node-251 kubernetes]# kubectl run -it --rm dns-test --image=docker.io/library/busybox:latest sh
If you don't see a command prompt, try pressing enter.
/ #
/ #
/ # ls
bin    dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # pwd
/
/ #
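
The session above only proves the test pod starts; to actually exercise DNS, run a lookup inside the pod (busybox ships a simple nslookup; per the clusterDNS setting earlier, the reported server should be 10.0.0.2):

/ # nslookup kubernetes.default.svc.cluster.local
# a successful lookup reports server 10.0.0.2 and resolves the service name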

At this point a single-master Kubernetes cluster is complete. A follow-up article will cover deploying a highly available master.
