Table of Contents
I. Helm Package Manager
1. What Is Helm
2. Installing Helm
3. Common Helm Commands
4. Chart Directory Structure
5. Deploying Redis Master-Replica with Helm
II. Cluster Monitoring with Prometheus
1. Monitoring Options
2. Monitoring Kubernetes with Prometheus
III. Log Collection with ELK
1. ELK Pipeline
2. Configuring ELK
(1) Configuring Elasticsearch
(2) Configuring Logstash
(3) Configuring Filebeat and Kibana
3. Using Kibana and Searching Logs
IV. Kubernetes Visual Management
1. Installing Dashboard
2. Installing KubeSphere
V. Thanks for Your Support
I. Helm Package Manager
1. What Is Helm
Helm is the package manager for Kubernetes: the best way to find, share, and use software built for Kubernetes.
Helm manages Kubernetes packages called charts. With Helm you can:
- Create new charts from scratch
- Package charts into archive (tgz) files
- Interact with repositories where charts are stored
- Install and uninstall charts in an existing Kubernetes cluster
- Manage the release cycle of charts installed with Helm
Helm has three important concepts:
- chart: the set of information needed to create a Kubernetes application; it bundles resources such as Pods, Services, and Deployments into a single package.
- config: configuration information that can be merged into a packaged chart to produce a releasable object.
- release: a running instance of a chart combined with a specific config.
2. Installing Helm
We download 3.10.2 here; versions that are too old have pitfalls.
# Download and extract the binary
cd /opt/k8s/
mkdir helm
cd helm/
wget https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz
tar -zxvf helm-v3.10.2-linux-amd64.tar.gz
# Copy the binary to a directory on the PATH
cd linux-amd64/
chmod +x helm
cp helm /usr/local/bin/
# Verify helm
cd ~
helm version
# Add helm repositories (done in section 5 below)
Note: downloads through helm may be blocked by the firewall. If a chart will not download, you can also fetch it directly from the official site. We downloaded ingress that way earlier; see: 3. k8s: service publishing: Service, Ingress; configuration management: ConfigMap, Secret, hot reload; persistent storage: volumes, NFS, PV, PVC - CSDN blog
3. Common Helm Commands
# List, add, update, or remove chart repositories
helm repo
# Search for charts by keyword
helm search
# Pull a chart from a remote repository to the local machine
helm pull
# Create a new chart locally
helm create
# Manage chart dependencies
helm dependency
# Install a chart
helm install
# List all releases
helm list
helm list -n ingress-nginx
# Check a chart for configuration errors
helm lint
# Package a local chart
helm package
# Roll back a release to a previous revision
helm rollback
# Uninstall a release
helm uninstall
# Upgrade a release
helm upgrade
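To see how these commands fit together, here is a minimal hedged walk-through with a throwaway chart (the chart name mychart and the default namespace are placeholders, not something used elsewhere in this post):
helm create mychart
helm lint ./mychart
# Packaging produces mychart-0.1.0.tgz (the version comes from Chart.yaml)
helm package ./mychart
helm install mychart ./mychart -n default
helm list -n default
helm uninstall mychart -n default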
4. Chart Directory Structure
mychart
├── Chart.yaml
├── charts                      # Charts this chart depends on (subcharts)
├── templates                   # Chart templates, rendered into the final Kubernetes YAML
│   ├── NOTES.txt               # Message shown to the user after helm install
│   ├── _helpers.tpl            # Template helpers shared by the other templates
│   ├── deployment.yaml         # Kubernetes Deployment manifest
│   ├── ingress.yaml            # Kubernetes Ingress manifest
│   ├── service.yaml            # Kubernetes Service manifest
│   ├── serviceaccount.yaml     # Kubernetes ServiceAccount manifest
│   └── tests
│       └── test-connection.yaml
└── values.yaml                 # Default values for the chart templates; can be overridden at helm install or helm upgrade time
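To see how values.yaml feeds the templates, you can render a scaffold chart locally without installing anything (a sketch; helm create generates a deployment template that reads .Values.replicaCount, and mychart is again a placeholder name):
helm create mychart
# replicas comes from .Values.replicaCount, which defaults to 1 in values.yaml
helm template mychart ./mychart | grep 'replicas:'
# Override the default on the command line and render again
helm template mychart ./mychart --set replicaCount=3 | grep 'replicas:'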
5. Deploying Redis Master-Replica with Helm
# List chart repositories
helm repo list
# Add repositories
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add azure http://mirror.azure.cn/kubernetes/charts
# Search for the redis chart
helm search repo redis
# View the chart's installation notes
helm show readme bitnami/redis
# Pull the chart to local first
cd /opt/k8s/
helm pull bitnami/redis
# Extract
tar -xvf redis-17.4.3.tgz
cd redis/
# Edit the configuration
vim values.yaml
##################################################
24 storageClass: "managed-nfs-storage"
25 redis:
26 password: "123456"
504 size: 1Gi
##################################################
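If you prefer not to edit values.yaml, the same overrides can be passed with --set when installing in the next step. This is only a sketch: the key paths below are my reading of the bitnami/redis 17.x values layout shown above, so double-check them against the chart's values.yaml.
helm install redis ./redis/ -n redis \
  --set global.storageClass=managed-nfs-storage \
  --set global.redis.password=123456 \
  --set master.persistence.size=1Gi \
  --set replica.persistence.size=1Gi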
# Installation
# Create the namespace
kubectl create namespace redis
# Install redis
cd /opt/k8s/
helm install redis ./redis/ -n redis
# Check
kubectl get all -n redis
# Upgrade
helm upgrade redis ./redis/ -n redis
# View the release history
helm history redis -n redis
# Roll back to the previous revision
helm rollback redis -n redis
# Roll back to a specific revision
helm rollback redis 3 -n redis
# Uninstall
helm delete redis -n redis
Redis is up and running:
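Before moving on, a quick hedged check that the master/replica pods are running and replication is healthy (the redis-master-0 pod name follows the bitnami chart's default naming for a release called redis, and 123456 is the password set in values.yaml):
kubectl get pods -n redis
# Ask the master for its replication status; connected_slaves should be at least 1
kubectl exec -it redis-master-0 -n redis -- redis-cli -a 123456 info replication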
II. Cluster Monitoring with Prometheus
1. Monitoring Options
Common options include Heapster, Weave Scope, and Prometheus.
We choose Prometheus. Prometheus is an open-source monitoring and alerting toolkit built around a time-series database. It was originally developed at SoundCloud and, as more and more companies started using it, became an independent open-source project. Since then many companies and organizations have adopted Prometheus as their monitoring and alerting tool.
2. Monitoring Kubernetes with Prometheus
Prometheus can be set up in two ways: writing your own manifests, or using kube-prometheus. We use the second.
Because our Kubernetes cluster is version 1.23, we have to use kube-prometheus release 0.10 or 0.11; other versions will not work. GitHub - prometheus-operator/kube-prometheus: Use Prometheus to monitor Kubernetes and applications running on Kubernetes
We use version 0.10: https://github.com/prometheus-operator/kube-prometheus/tree/v0.10.0
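The manifests used below come from that tag. A hedged way to fetch them into the working directory this post uses elsewhere:
cd /opt/k8s/
git clone -b v0.10.0 --depth 1 https://github.com/prometheus-operator/kube-prometheus.git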
Replace the images with mirror registries:
cd /opt/k8s/kube-prometheus/manifests
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheusOperator-deployment.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheus-prometheus.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' alertmanager-alertmanager.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' kubeStateMetrics-deployment.yaml
sed -i 's/k8s.gcr.io/lank8s.cn/g' kubeStateMetrics-deployment.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' nodeExporter-daemonset.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheusAdapter-deployment.yaml
sed -i 's/k8s.gcr.io/lank8s.cn/g' prometheusAdapter-deployment.yaml
# Create the resources and pull the images
cd /opt/k8s/kube-prometheus/
kubectl create -f manifests/setup/
kubectl apply -f manifests/
kubectl get all -n monitoring
kubectl get po -n monitoring
kubectl get svc -n monitoring
# Configure host name mappings on your local machine
# The file path is C:\Windows\System32\drivers\etc\hosts
192.168.200.140 grafana.wolfcode.cn
192.168.200.140 prometheus.wolfcode.cn
192.168.200.140 alertmanager.wolfcode.cn
# Add an ingress
cd manifests/
vim prometheus-ingress.yaml
####################################################################
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: monitoring
  name: prometheus-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.wolfcode.cn # Grafana host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
  - host: prometheus.wolfcode.cn # Prometheus host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              number: 9090
  - host: alertmanager.wolfcode.cn # Alertmanager host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              number: 9093
####################################################################
# Apply the ingress
kubectl apply -f prometheus-ingress.yaml
# Just keep watching until all the pods come up
kubectl get po -n monitoring
## Uninstall
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
Note: if you need to delete the monitoring namespace and it will not go away, see: "Handling a namespace stuck in the Terminating state" (51CTO blog)
Note: if the pods never get pulled or scheduled, it may be caused by the master taint; remove it:
kubectl taint nodes kubernete140 node-role.kubernetes.io/master-
http://grafana.wolfcode.cn/
http://prometheus.wolfcode.cn/
http://alertmanager.wolfcode.cn/
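Once the hosts entries resolve, a quick hedged sanity check against the Prometheus HTTP API (assumes the ingress above is working):
# A JSON body with "status":"success" and a list of up series means targets are being scraped
curl -s 'http://prometheus.wolfcode.cn/api/v1/query?query=up'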
III. Log Collection with ELK
1. ELK Pipeline
Filebeat runs as a DaemonSet on every node and collects container logs, ships them to Logstash for parsing and cleanup, Logstash writes them into Elasticsearch, and Kibana is used to search and visualize the logs.
2. Configuring ELK
(1) Configuring Elasticsearch
# First label the node that Elasticsearch will run on
kubectl label node kubernete140 es=data
cd /opt/k8s/elk
# Create the namespace
vim namespace.yaml
############################
apiVersion: v1
kind: Namespace
metadata:
  name: kube-logging
############################
# Create the Elasticsearch manifest
vim es.yaml
##################################################################
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-logging
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
apiVersion: apps/v1
kind: StatefulSet # Use a StatefulSet to create the Pod
metadata:
  name: elasticsearch-logging # Pod name; pods created by a StatefulSet are numbered and ordered
  namespace: kube-logging # Namespace
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-elasticsearch
spec:
  serviceName: elasticsearch-logging # Associates the StatefulSet with the Service so each pod gets a stable DNS name (es-cluster-[0,1,2].elasticsearch.elk.svc.cluster.local)
  replicas: 1 # Number of replicas; single node here
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging # Must match the labels configured in the pod template
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: docker.io/library/elasticsearch:7.9.3
        name: elasticsearch-logging
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 500Mi
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data/ # Mount point
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "discovery.type" # Run as a single-node cluster
          value: "single-node"
        - name: ES_JAVA_OPTS # JVM memory settings; increase as needed
          value: "-Xms512m -Xmx2g"
      volumes:
      - name: elasticsearch-logging
        hostPath:
          path: /data/es/
      nodeSelector: # Add a nodeSelector if the data has to land on a specific node
        es: data
      tolerations:
      - effect: NoSchedule
        operator: Exists
      initContainers: # Init containers run before the main container starts
      - name: elasticsearch-logging-init
        image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"] # Raise the mmap count limit; too low a value can cause out-of-memory errors
        securityContext: # Applies only to this container and does not affect the volumes
          privileged: true # Run as a privileged container
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"] # Raise the maximum number of file descriptors
        securityContext:
          privileged: true
      - name: elasticsearch-volume-init # Initialize the ES data directory on disk with 777 permissions
        image: alpine:3.6
        command:
        - chmod
        - -R
        - "777"
        - /usr/share/elasticsearch/data/
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /usr/share/elasticsearch/data/
##################################################################
# Apply
kubectl apply -f namespace.yaml
kubectl apply -f es.yaml
kubectl get po -n kube-logging
kubectl get svc -n kube-logging
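Before wiring up the rest of the stack, it is worth confirming Elasticsearch answers on 9200. A hedged check using a temporary port-forward:
kubectl port-forward -n kube-logging svc/elasticsearch-logging 9200:9200 &
# Expect "status" : "green" or "yellow" for this single-node setup
curl -s 'http://localhost:9200/_cluster/health?pretty'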
(2) Configuring Logstash
vim logstash.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: kube-logging
spec:
  ports:
  - port: 5044
    targetPort: beats
  selector:
    type: logstash
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      type: logstash
  template:
    metadata:
      labels:
        type: logstash
        srv: srv-logstash
    spec:
      containers:
      - image: docker.io/kubeimages/logstash:7.9.3 # This image supports both arm64 and amd64
        name: logstash
        ports:
        - containerPort: 5044
          name: beats
        command:
        - logstash
        - '-f'
        - '/etc/logstash_c/logstash.conf'
        env:
        - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
          value: "http://elasticsearch-logging:9200"
        volumeMounts:
        - name: config-volume
          mountPath: /etc/logstash_c/
        - name: config-yml-volume
          mountPath: /usr/share/logstash/config/
        - name: timezone
          mountPath: /etc/localtime
        resources: # Always set resource limits on logstash so it cannot starve other workloads
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 512m
            memory: 512Mi
      volumes:
      - name: config-volume
        configMap:
          name: logstash-conf
          items:
          - key: logstash.conf
            path: logstash.conf
      - name: timezone
        hostPath:
          path: /etc/localtime
      - name: config-yml-volume
        configMap:
          name: logstash-yml
          items:
          - key: logstash.yml
            path: logstash.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-conf
  namespace: kube-logging
  labels:
    type: logstash
data:
  logstash.conf: |-
    input {
      beats {
        port => 5044
      }
    }
    filter { # Process ingress logs
      if [kubernetes][container][name] == "nginx-ingress-controller" {
        json {
          source => "message"
          target => "ingress_log"
        }
        if [ingress_log][requesttime] {
          mutate {
            convert => ["[ingress_log][requesttime]", "float"]
          }
        }
        if [ingress_log][upstremtime] {
          mutate {
            convert => ["[ingress_log][upstremtime]", "float"]
          }
        }
        if [ingress_log][status] {
          mutate {
            convert => ["[ingress_log][status]", "float"]
          }
        }
        if [ingress_log][httphost] and [ingress_log][uri] {
          mutate {
            add_field => {"[ingress_log][entry]" => "%{[ingress_log][httphost]}%{[ingress_log][uri]}"}
          }
          mutate {
            split => ["[ingress_log][entry]","/"]
          }
          if [ingress_log][entry][1] {
            mutate {
              add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/%{[ingress_log][entry][1]}"}
              remove_field => "[ingress_log][entry]"
            }
          } else {
            mutate {
              add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/"}
              remove_field => "[ingress_log][entry]"
            }
          }
        }
      }
      if [kubernetes][container][name] =~ /^srv*/ { # Process logs of business services whose container names start with srv
        json {
          source => "message"
          target => "tmp"
        }
        if [kubernetes][namespace] == "kube-logging" {
          drop{}
        }
        if [tmp][level] {
          mutate{
            add_field => {"[applog][level]" => "%{[tmp][level]}"}
          }
          if [applog][level] == "debug"{
            drop{}
          }
        }
        if [tmp][msg] {
          mutate {
            add_field => {"[applog][msg]" => "%{[tmp][msg]}"}
          }
        }
        if [tmp][func] {
          mutate {
            add_field => {"[applog][func]" => "%{[tmp][func]}"}
          }
        }
        if [tmp][cost]{
          if "ms" in [tmp][cost] {
            mutate {
              split => ["[tmp][cost]","m"]
              add_field => {"[applog][cost]" => "%{[tmp][cost][0]}"}
              convert => ["[applog][cost]", "float"]
            }
          } else {
            mutate {
              add_field => {"[applog][cost]" => "%{[tmp][cost]}"}
            }
          }
        }
        if [tmp][method] {
          mutate {
            add_field => {"[applog][method]" => "%{[tmp][method]}"}
          }
        }
        if [tmp][request_url] {
          mutate {
            add_field => {"[applog][request_url]" => "%{[tmp][request_url]}"}
          }
        }
        if [tmp][meta._id] {
          mutate {
            add_field => {"[applog][traceId]" => "%{[tmp][meta._id]}"}
          }
        }
        if [tmp][project] {
          mutate {
            add_field => {"[applog][project]" => "%{[tmp][project]}"}
          }
        }
        if [tmp][time] {
          mutate {
            add_field => {"[applog][time]" => "%{[tmp][time]}"}
          }
        }
        if [tmp][status] {
          mutate {
            add_field => {"[applog][status]" => "%{[tmp][status]}"}
            convert => ["[applog][status]", "float"]
          }
        }
      }
      mutate {
        rename => ["kubernetes", "k8s"]
        remove_field => "beat"
        remove_field => "tmp"
        remove_field => "[k8s][labels][app]"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-logging:9200"]
        codec => json
        index => "logstash-%{+YYYY.MM.dd}" # A new index named logstash-<date> is created every day
      }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-yml
  namespace: kube-logging
  labels:
    type: logstash
data:
  logstash.yml: |-
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
# Apply
kubectl apply -f logstash.yaml
kubectl get po -n kube-logging
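A hedged way to confirm Logstash started its pipeline and is listening on the beats port 5044:
kubectl logs -n kube-logging deploy/logstash --tail=20
kubectl get svc logstash -n kube-logging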
(3) Configuring Filebeat and Kibana
vim filebeat.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enable: true
      paths:
      - /var/log/containers/*.log # The container log directory filebeat collects from, mounted into the pod
      processors:
      - add_kubernetes_metadata: # Add Kubernetes fields for the later data cleanup
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
    output.logstash: # Logstash is deployed for data cleanup, so filebeat pushes data to logstash instead of Elasticsearch
      hosts: ["logstash:5044"]
      enabled: true
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-logging
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.io/kubeimages/filebeat:7.9.3 # This image supports both arm64 and amd64
        args: [
          "-c", "/etc/filebeat.yml",
          "-e","-httpprof","0.0.0.0:6060"
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-logging
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 1000Mi
            cpu: 1000m
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: config # The filebeat configuration file
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data # Persist filebeat data to the host
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers # Mounts the host's raw log directory into the filebeat container; if the docker/containerd runtime has not been changed from the standard log path, /var/lib is fine as the mountPath
          mountPath: /var/lib
          readOnly: true
        - name: varlog # Mounts the host's /var/log/pods and /var/log/containers symlinks into the filebeat container
          mountPath: /var/log/
          readOnly: true
        - name: timezone
          mountPath: /etc/localtime
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath: # If the docker/containerd runtime has not been changed from the standard log path, /var/lib is fine as the path
          path: /var/lib
      - name: varlog
        hostPath:
          path: /var/log/
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      - name: data
        hostPath:
          path: /data/filebeat-data
          type: DirectoryOrCreate
      - name: timezone
        hostPath:
          path: /etc/localtime
      tolerations: # Tolerations so the DaemonSet can be scheduled onto every node
      - effect: NoExecute
        key: dedicated
        operator: Equal
        value: gpu
      - effect: NoSchedule
        operator: Exists
vim kibana.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-logging
  name: kibana-config
  labels:
    k8s-app: kibana
data:
  kibana.yml: |-
    server.name: kibana
    server.host: "0"
    i18n.locale: zh-CN # Set the default UI language to Chinese
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS} # Elasticsearch connection address; since everything runs in Kubernetes in the same namespace, the service name can be used directly
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    k8s-app: kibana
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
    srv: srv-kibana
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    k8s-app: kibana
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.io/kubeimages/kibana:7.9.3 # This image supports both arm64 and amd64
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch-logging:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging
spec:
  ingressClassName: nginx
  rules:
  - host: kibana.wolfcode.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601
# Apply
kubectl apply -f filebeat.yaml -f kibana.yaml
kubectl get po -n kube-logging
kubectl get svc -n kube-logging
# The NodePort is shown in the service; open Kibana on that port directly
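For example, the NodePort can be read directly from the service (a sketch using jsonpath):
kubectl get svc kibana -n kube-logging -o jsonpath='{.spec.ports[0].nodePort}'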
3. Using Kibana and Searching Logs
First open Stack Management and create an index pattern matching logstash-* so the logs show up in Discover:
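Before building visualizations, a hedged check that documents are actually landing in the daily logstash-* indices defined in the Logstash output (reusing the port-forward approach from earlier):
kubectl port-forward -n kube-logging svc/elasticsearch-logging 9200:9200 &
# A non-zero count means the Filebeat -> Logstash -> Elasticsearch flow is working
curl -s 'http://localhost:9200/logstash-*/_count?pretty'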
IV. Kubernetes Visual Management
The most common options in China are Kubernetes Dashboard, KubeSphere, Rancher, and Kuboard.
1. Installing Dashboard
# Download recommended.yaml
cd /opt/k8s/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Edit the configuration file
#########################################
# Add the following at line 40 (inside the kubernetes-dashboard Service spec)
type: NodePort
#########################################
# Apply
kubectl apply -f recommended.yaml
kubectl get po -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard
# The service exposes a NodePort; open the page on that port (it must be accessed over https)
Note: if you apply this yaml as-is, the images most likely will not pull because they are hosted abroad, so change the image addresses:
# Change the kubernetesui/dashboard:v2.7.0 image at line 194 to
image: registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.7.0
# Change the kubernetesui/metrics-scraper:v1.0.8 image at line 280 to
image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.8
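The same two edits can be scripted instead of done by hand in vim (a sketch with sed; line numbers can drift between dashboard releases, so matching on the image name is safer):
cd /opt/k8s/dashboard
sed -i 's#kubernetesui/dashboard:v2.7.0#registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.7.0#' recommended.yaml
sed -i 's#kubernetesui/metrics-scraper:v1.0.8#registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.8#' recommended.yaml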
We use the token login method.
Get the token
# Create an account with full permissions (cluster-admin)
cd /opt/k8s/dashboard
vim dashboard-admin.yaml
#################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-cluster-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
#################################################
# Apply
kubectl apply -f dashboard-admin.yaml
kubectl get sa -n kubernetes-dashboard
kubectl describe sa dashboard-admin -n kubernetes-dashboard
# The service account details include a field called Mountable secrets; that secret contains the token
kubectl describe secrets dashboard-admin-token-248cr -n kubernetes-dashboard
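To print only the token value (a sketch that assumes Kubernetes 1.23, where the token secret is still auto-created for the service account; the secret name suffix is random, so look it up first):
SECRET=$(kubectl get sa dashboard-admin -n kubernetes-dashboard -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -n kubernetes-dashboard -o jsonpath='{.data.token}' | base64 -d; echo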
Copy the token into the login page and you can sign in:
Switch the UI to Simplified Chinese:
Resources can be browsed from the menu on the left, and the plus icon in the top right adds new ones:
2. Installing KubeSphere
Official site: KubeSphere | a container PaaS platform for cloud-native applications with multi-cluster Kubernetes management
# Delete the dashboard first
cd /opt/k8s/
kubectl delete -f dashboard/
# One-command install
helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.2.tgz --debug --wait --set global.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks --set extension.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks
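Installation takes a few minutes; a simple way to watch progress until the console on port 30880 is ready:
kubectl get pods -n kubesphere-system -w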
# Log in
http://192.168.200.140:30880/
Username: admin
Password: P@88w0rd
After changing the password on first login it looks like this:
V. Thanks for Your Support
Thanks to everyone for the support. If you found this helpful, would you consider buying me a coffee: