Kubernetes - Manual Deployment [3]
- 1 Deploy Worker Node
- 1.1 Create working directories and copy binaries
- 1.2 Deploy kubelet
- 1.2.1 Create the configuration file
- 1.2.2 Parameter configuration file
- 1.2.3 Generate the bootstrap kubeconfig for kubelet's first join to the cluster
- 1.2.4 Manage kubelet with systemd
- 1.2.5 Start kubelet and enable it at boot
- 1.2.6 Approve the kubelet certificate request and join the cluster
- 1.3 Deploy kube-proxy
- 1.3.1 Create the configuration file
- 1.3.2 Configure the parameter file
- 1.3.3 Generate the kube-proxy certificate files
- 1.3.4 Generate the kube-proxy.kubeconfig file
- 1.3.5 Manage kube-proxy with systemd
- 1.3.6 Start kube-proxy and enable it at boot
- 1.4 Deploy the network component (Calico)
- 1.5 Authorize apiserver access to kubelet
- 2 Add a New Worker Node
- 2.1 Copy the deployed files to the new nodes
- 2.2 Delete the kubelet certificate and kubeconfig files
- 2.3 Change the hostname
- 2.4 Start the services and enable them at boot
- 2.5 Approve the new nodes' kubelet certificate requests on the master
- 2.6 Check node status
- 3 Deploy the Dashboard
- 3.1 Deploy the Dashboard
- 3.2 Access the Dashboard
- 4 Deploy CoreDNS
<Continued from the previous article…>
Environment preparation
Hostname | Operating system | IP address | Required components
---|---|---|---
node-251 | CentOS 7.9 | 192.168.71.251 | All components (to make full use of resources)
node-252 | CentOS 7.9 | 192.168.71.252 | All components
node-253 | CentOS 7.9 | 192.168.71.253 | docker, kubelet, kube-proxy
We have already deployed an etcd cluster on node-251 and node-252, installed Docker on all machines, and deployed kube-apiserver, kube-controller-manager, and kube-scheduler on node-251 (the master node).
Next we will deploy kubelet, kube-proxy, and the remaining components on node-252 and node-253.
1 Deploy Worker Node
We do the configuration on the master node first and then sync it to the other nodes; the master node also doubles as one of the worker nodes.
1.1 Create working directories and copy binaries
Note: create the working directories on all worker nodes.
Copy the binaries from the kubernetes server package on the master node to all worker nodes:
cd /opt/kubernetes/server/bin/
for master_ip in {1..3}
do
echo ">>> node-25${master_ip}"
ssh root@node-25${master_ip} "mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} "
scp kubelet kube-proxy root@node-25${master_ip}:/opt/kubernetes/bin/
done
1.2 Deploy kubelet
1.2.1 Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=node-251 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
- --hostname-override: the node's display name; must be unique within the cluster.
- --network-plugin: enable CNI.
- --kubeconfig: an empty path; the file is generated automatically and is later used to connect to the apiserver.
- --bootstrap-kubeconfig: the kubeconfig used on first start to request a certificate from the apiserver.
- --config: the configuration parameter file.
- --cert-dir: the kubelet certificate directory.
- --pod-infra-container-image: the image for the infra (pause) container that holds the Pod's network namespace.
1.2.2 Parameter configuration file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
1.2.3 Generate the bootstrap kubeconfig for kubelet's first join to the cluster
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.71.251:6443" # apiserver IP:PORT
TOKEN="4136692876ad4b01bb9dd0988480ebba" # must match the token in /opt/kubernetes/cfg/token.csv
# Generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
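As an optional sanity check, you can confirm that the TOKEN above really matches token.csv and look at the generated file. This is a sketch; it assumes token.csv uses the usual comma-separated layout with the token in the first column, as set up for the apiserver earlier.
# Print the token column of token.csv; it should equal the TOKEN used above.
awk -F',' '{print $1}' /opt/kubernetes/cfg/token.csv
# Review the generated bootstrap kubeconfig (certificate data is shown redacted).
kubectl config view --kubeconfig=${KUBE_CONFIG}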
1.2.4 Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
1.2.5 Start kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
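If kubelet does not come up cleanly, the systemd status and logs are the first things to check (illustrative commands):
systemctl status kubelet
journalctl -u kubelet -f
# kubelet also writes log files under the directory given by --log-dir:
ls /opt/kubernetes/logs/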
1.2.6 Approve the kubelet certificate request and join the cluster
# View pending kubelet certificate requests
[root@k8s-master1 bin]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4 107s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the kubelet node's request
[root@k8s-master1 bin]# kubectl certificate approve node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4
certificatesigningrequest.certificates.k8s.io/node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4 approved
# Check the request again
[root@k8s-master1 bin]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-KbHieprZUMOvTFMHGQ1RNTZEhsSlT5X6wsh2lzfUry4 2m35s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
# View the nodes
[root@k8s-master1 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady <none> 2m11s v1.20.10
Note:
Because the network plugin has not been deployed yet, the node shows NotReady.
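To confirm that the missing CNI plugin is the cause, the node's Ready condition can be inspected (a sketch; the message should say the network plugin is not ready):
# Print the message attached to the node's Ready condition.
kubectl get node node-251 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'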
1.3 Deploy kube-proxy
1.3.1 Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
1.3.2 Configure the parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node-251
clusterCIDR: 10.244.0.0/16
EOF
1.3.3 Generate the kube-proxy certificate files
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenyang",
      "ST": "Shenyang",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
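To confirm the certificate was issued with the expected subject, it can be inspected with openssl (optional; assumes openssl is installed on the host):
# Expect CN=system:kube-proxy in the subject line.
openssl x509 -in kube-proxy.pem -noout -subject -dates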
1.3.4 Generate the kube-proxy.kubeconfig file
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.71.251:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
1.3.5 Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
1.3.6 Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
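A quick check that kube-proxy is running (illustrative; 10249 is the metricsBindAddress port from kube-proxy-config.yml above):
systemctl status kube-proxy
# The metrics endpoint should answer locally once kube-proxy is up.
curl -s http://127.0.0.1:10249/metrics | head -n 5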
1.4 Deploy the network component (Calico)
Calico is a pure Layer 3 data center networking solution and is currently one of the mainstream network options for Kubernetes.
How the network is built
The core of Calico networking is IP routing: each container or virtual machine is assigned a workload endpoint (wl).
When container A on nodeA accesses container B on nodeB:
the key question is how nodeA learns the next-hop address. The answer is that the nodes exchange routing information with each other over BGP.
Each node runs the software router BIRD, configured as a BGP speaker, and exchanges routes with the other nodes via the BGP protocol.
Put simply, every node announces to the other nodes:
"I am X.X.X.X; this IP or subnet lives on me, and I am its next hop."
In this way every node learns the next-hop address of every workload endpoint.
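As an illustrative check once Calico is running (assuming the default BIRD setup installed by the Calico manifest), the routes learned over BGP show up in each node's routing table tagged with proto bird:
# Routes installed by BIRD (BGP); each remote pod subnet points at the owning node as its next hop.
ip route show proto bird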
Deployment
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl get pods -n kube-system
Applying calico.yaml may fail with an error:
error: unable to recognize "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"
Check which Calico release matches your Kubernetes version: https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements
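The error above means the manifest uses an API version the cluster does not serve: PodDisruptionBudget only graduated to policy/v1 in Kubernetes 1.21, so a 1.20 cluster needs a manifest that still uses policy/v1beta1. A quick way to see what the cluster serves (illustrative):
# On v1.20 this prints policy/v1beta1 only.
kubectl api-versions | grep policy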
Once all the Calico Pods are Running, the node becomes Ready. My VMs are under-provisioned, so this takes a while.
[root@node-251 kubernetes]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-577f77cb5c-xcgn5 1/1 Running 0 8m26s
calico-node-7dfjw 1/1 Running 0 8m26s
[root@node-251 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-251 Ready <none> 61m v1.20.15
1.5 Authorize apiserver access to kubelet
Use case: commands such as kubectl logs.
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
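A quick way to confirm the binding works is to fetch container logs through the apiserver (a sketch; the calico-node pods carry the k8s-app=calico-node label from the Calico manifest):
kubectl -n kube-system logs -l k8s-app=calico-node --tail=5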
2 Add a New Worker Node
2.1 Copy the deployed files to the new nodes
On the master node, copy the worker-node files to the new nodes 71.252/71.253:
for i in {2..3}; do scp -r /opt/kubernetes root@192.168.71.25$i:/opt/; done
for i in {2..3}; do scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.71.25$i:/usr/lib/systemd/system; done
for i in {2..3}; do scp -r /opt/kubernetes/ssl/ca.pem root@192.168.71.25$i:/opt/kubernetes/ssl/; done
2.2 Delete the kubelet certificate and kubeconfig files
Delete the generated kubelet files on the new worker nodes:
for i in {2..3}; do
ssh root@node-25$i "rm -f /opt/kubernetes/cfg/kubelet.kubeconfig && rm -f /opt/kubernetes/ssl/kubelet*"
done
Note:
These files are generated automatically after the certificate request is approved and are unique to each node, so they must be deleted.
2.3 Change the hostname
On each new worker node, update the hostname in the configuration files; the entries to change are the ones shown by the grep below (see the sed sketch after the output).
[root@node-251 kubernetes]# grep 'node-251' /opt/kubernetes/cfg/kubelet.conf
--hostname-override=node-251 \
[root@node-251 kubernetes]# grep 'node-251' /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node-251
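For example, on node-252 the two entries might be rewritten with sed (a sketch; use the matching hostname on node-253):
sed -i 's/--hostname-override=node-251/--hostname-override=node-252/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/hostnameOverride: node-251/hostnameOverride: node-252/' /opt/kubernetes/cfg/kube-proxy-config.yml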
2.4 Start the services and enable them at boot
Run on the worker nodes:
systemctl daemon-reload
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
2.5 Approve the new nodes' kubelet certificate requests on the master
# View certificate requests
[root@node-251 kubernetes]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-0nA37A70PmadfExLLiUFejGF0vggS-3O-zMHma5AMnc 85m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34 99s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE 2m10s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve them
[root@node-251 kubernetes]# kubectl certificate approve node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34
certificatesigningrequest.certificates.k8s.io/node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34 approved
[root@node-251 kubernetes]# kubectl certificate approve node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE
certificatesigningrequest.certificates.k8s.io/node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE approved
# Check again
[root@node-251 kubernetes]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-0nA37A70PmadfExLLiUFejGF0vggS-3O-zMHma5AMnc 85m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
node-csr-7T9xXWh8imtC1tfHVpwV6Y6V02UhqIqG5sDRG_PlL34 2m24s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
node-csr-XHl-UgI7kFXewHESTcwWdnCV1L9AKDgDM2RlE3ErGOE 2m55s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
2.6 Check node status
It takes a little while for the new nodes to become Ready because a few initialization images have to be pulled first; a watch command follows the output below.
[root@node-251 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-251 Ready <none> 86m v1.20.15
node-252 NotReady <none> 84s v1.20.15
node-253 NotReady <none> 72s v1.20.15
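To follow the progress, the kube-system pods being scheduled onto the new nodes can be watched (illustrative):
# Watch calico and other kube-system pods come up on node-252 and node-253.
kubectl get pods -n kube-system -o wide -w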
3 Deploy the Dashboard
Deploy it on the master node.
3.1 Deploy the Dashboard
Official guide: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
[root@node-251 kubernetes]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
Expose the Dashboard through a NodePort Service so it can be reached from outside the cluster (reference: https://www.cnblogs.com/wucaiyun1/p/11692204.html).
The modified recommended.yaml:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
Apply the manifest to start the Dashboard:
[root@node-251 kubernetes]# kubectl apply -f recommended.yaml
Check that the Pod and Service are up:
[root@node-251 kubernetes]# kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.0.0.117 <none> 443:30001/TCP 27s
[root@node-251 kubernetes]# kubectl get pods --namespace=kubernetes-dashboard -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dashboard-metrics-scraper-7b9b99d599-bgdsq 1/1 Running 0 41s 172.16.29.195 node-252 <none> <none>
kubernetes-dashboard-6d4799d74-d86zt 1/1 Running 0 41s 172.16.101.69 node-253 <none> <none>
3.2 Access the Dashboard
Create a service account and bind it to the default cluster-admin cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@node-251 kubernetes]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-nkxqf
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 7ddf0af3-423d-4bc2-b9b0-0dde859b2e44
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1363 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImJ6R0VRc2tXRURINE5uQmVBMDNhdl9IX3FRRl9HRVh3RFpKWDZMcmhMX2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbmt4cWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2RkZjBhZjMtNDIzZC00YmMyLWI5YjAtMGRkZTg1OWIyZTQ0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.qNxZy8eARwZRh1wObRwn3i8OeOcnnD8ubrgOu2ZVmwmERJ3sHrNMXG5UDJph_SzaNtEo43o22zigxUdct8QJ9c-p9A5oYghuKBnDY1rR6h34mH4QQUpET2E8scNW3vaYxZmqZi1qpOzC72KL39m_cpbhMdfdyNweUY3vUDHrfIXfvDCS82v2jiCa4sjn5aajwwlZhywOPJXN7d1JGZKgg1tzwcMVkhtIYOP8RB3z-SfA1biAy8Xf7bTCPlaFGGuNlgWhgOxTv8M7r6U_KuFfV7S-cQqtEEp1qeBdI70Bk95euH3UJAx55_OkkjLx2dwFrgZiKFXoTNLSUFIVdsQVpQ
Access URL: https://NodeIP:30001. Log in to the Dashboard with the token printed above (if the browser complains about the HTTPS certificate, Firefox can be used to proceed).
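A quick reachability check before opening the browser (illustrative; 30001 is the nodePort set in the Service above, and -k skips verification of the self-signed certificate):
curl -k -I https://192.168.71.251:30001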
4 Deploy CoreDNS
CoreDNS is mainly used for Service name resolution inside the cluster.
Reference: https://blog.csdn.net/weixin_46476452/article/details/127884162
coredns.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
  - kind: ServiceAccount
    name: coredns
    namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
    app.kubernetes.io/name: coredns
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
      app.kubernetes.io/name: coredns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        app.kubernetes.io/name: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
        - name: coredns
          image: coredns/coredns:1.9.4
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          args: [ "-conf", "/etc/coredns/Corefile" ]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
              - key: Corefile
                path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
    app.kubernetes.io/name: coredns
spec:
  selector:
    k8s-app: kube-dns
    app.kubernetes.io/name: coredns
  clusterIP: 10.0.0.2
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: metrics
      port: 9153
      protocol: TCP
kubectl apply -f coredns.yaml
[root@node-251 kubernetes]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-577f77cb5c-xcgn5 1/1 Running 3 157m
calico-node-48b86 1/1 Running 0 125m
calico-node-7dfjw 1/1 Running 2 157m
calico-node-9d66z 1/1 Running 0 125m
coredns-6b9bb479b9-gz8zd 1/1 Running 0 3m20s
Test that name resolution works (see the nslookup sketch after the transcript):
[root@node-251 kubernetes]# kubectl run -it --rm dns-test --image=docker.io/library/busybox:latest sh
If you don't see a command prompt, try pressing enter.
/ #
/ #
/ # ls
bin dev etc home lib lib64 proc root sys tmp usr var
/ # pwd
/
/ #
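The prompt above only confirms that the test pod started; an actual resolution check inside the pod might look like this (a sketch, assuming the busybox image in use ships a working nslookup; 10.0.0.2 is the kube-dns Service IP configured earlier):
# Resolve the apiserver Service and the DNS Service itself from inside the pod.
/ # nslookup kubernetes.default.svc.cluster.local
/ # nslookup kube-dns.kube-system.svc.cluster.local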
At this point a single-master Kubernetes cluster is complete; in a follow-up article we will deploy a highly available master.