Kubernetes in Action: Cluster Monitoring and Visual Management


Contents

I. Kube-Prometheus

1. Version Compatibility

2. Installing kube-prometheus

3. Installing an Ingress for Access

II. Installing ELK Log Collection on K8s

1. Installing Elasticsearch

2. Installing Logstash

3. Installing Filebeat

4. Installing Kibana

III. Dashboard Installation and Usage

1. Installation

2. Creating a Token

3. Usage

IV. KubeSphere Installation and Usage

1. Official Documentation

2. Preparation

2.1 Installing a Default StorageClass

3. Installation

4. Uninstallation

5. Usage

5.1 Main UI

5.2 Pluggable Components

5.3 Platform Management

5.4 Access Control

5.5 Notification Settings

V. Appendix

1. kube-prometheus Source

2. Images Required by kube-prometheus

3. Dashboard Manifest

4. openebs-operator.yaml

5. default-storage-class.yaml

6. Renaming Images

VI. References


I. Kube-Prometheus

1. Version Compatibility

Each kube-prometheus release branch tracks specific Kubernetes versions (see the compatibility matrix in the upstream README):

| kube-prometheus stack | K8s 1.23 | 1.24 | 1.25 | 1.26 | 1.27 | 1.28 | 1.29 | 1.30 | 1.31 |
|-----------------------|----------|------|------|------|------|------|------|------|------|
| release-0.11          | ✔        | ✔    |      |      |      |      |      |      |      |
| release-0.12          |          | ✔    | ✔    |      |      |      |      |      |      |
| release-0.13          |          |      |      | ✔    | ✔    | ✔    |      |      |      |
| release-0.14          |          |      |      |      |      |      | ✔    | ✔    | ✔    |
| main                  |          |      |      |      |      |      |      | ✔    | ✔    |

This guide installs release-0.11, which matches Kubernetes 1.23/1.24.

2. Installing kube-prometheus

# Download the release (see the appendix if the download fails)
wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.11.0.tar.gz

# Extract the archive
tar -xf v0.11.0.tar.gz

# Rename the directory
mv kube-prometheus-0.11.0 kube-prometheus

# Enter the manifests directory
cd kube-prometheus/manifests


# Create the setup resources (CRDs and the monitoring namespace)
kubectl create -f setup/

# Create the kube-prometheus resources
kubectl apply -f ../manifests/

# Check the resources
kubectl get all -n monitoring


Notes:

  • Some images may not be pullable from within China; they are packaged in the appendix — just import them directly.
  • The installation needs enough resources, so clean up unused Pods or increase the VM's memory.
#Import an image
docker load -i <archive-name>
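
Before setting up the Ingress, you can sanity-check the stack locally with a port-forward (a quick test; the Grafana shipped with kube-prometheus defaults to the admin/admin login):

#Forward Grafana to localhost:3000, then open http://localhost:3000 in a browser
kubectl port-forward -n monitoring svc/grafana 3000:3000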

3. Installing an Ingress for Access

apiVersion: networking.k8s.io/v1
kind: Ingress  # resource kind is Ingress
metadata:
  name: prometheus-ingress # Ingress name
  namespace: monitoring  # namespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: grafana.wssnail.com  # domain for Grafana
      http:
         paths:
          - pathType: Prefix # path match type
            path: /
            backend:
              service:
                name: grafana  # must match the service name in grafana-service.yaml
                port:
                  number: 3000 # must match the port in grafana-service.yaml
    - host: prometheus.wssnail.com  # domain for Prometheus
      http:
         paths:
            - pathType: Prefix
              path: /
              backend:
                service:
                  name: prometheus-k8s # must match the service name in prometheus-service.yaml
                  port:
                    number: 9090  # must match the port in prometheus-service.yaml
    - host: alertmanager.wssnail.com  # domain for Alertmanager
      http:
         paths:
            - pathType: Prefix
              path: /
              backend:
                service:
                  name: alertmanager-main  # must match the service name in alertmanager-service.yaml
                  port:
                    number: 9093  # must match the port in alertmanager-service.yaml


#Create the resource, then configure hosts
kubectl create -f prometheus-ingress.yaml

#Configure hosts; the IP is that of the node where the ingress-nginx Pod runs
192.168.139.207  grafana.wssnail.com
192.168.139.207  prometheus.wssnail.com
192.168.139.207  alertmanager.wssnail.com
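
With the hosts entries in place, each Ingress rule can be verified from the command line (assuming ingress-nginx listens on port 80 on that node):

#Each should return an HTTP response (Grafana redirects to its login page)
curl -I http://grafana.wssnail.com
curl -I http://prometheus.wssnail.com
curl -I http://alertmanager.wssnail.com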

II. Installing ELK Log Collection on K8s

1. Installing Elasticsearch

apiVersion: v1
kind: Namespace
metadata:
  name: kube-logging
--- 
apiVersion: v1 
kind: Service 
metadata: 
  name: elasticsearch-logging 
  namespace: kube-logging 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
    kubernetes.io/name: "Elasticsearch" 
spec: 
  ports: 
  - port: 9200 
    protocol: TCP 
    targetPort: db 
  selector: 
    k8s-app: elasticsearch-logging 
--- 
# RBAC authn and authz 
apiVersion: v1 
kind: ServiceAccount 
metadata: 
  name: elasticsearch-logging 
  namespace: kube-logging 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
--- 
kind: ClusterRole 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: 
  name: elasticsearch-logging 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
rules: 
- apiGroups: 
  - "" 
  resources: 
  - "services" 
  - "namespaces" 
  - "endpoints" 
  verbs: 
  - "get" 
--- 
kind: ClusterRoleBinding 
apiVersion: rbac.authorization.k8s.io/v1 
metadata: 
  namespace: kube-logging 
  name: elasticsearch-logging 
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
subjects: 
- kind: ServiceAccount 
  name: elasticsearch-logging 
  namespace: kube-logging 
  apiGroup: "" 
roleRef: 
  kind: ClusterRole 
  name: elasticsearch-logging 
  apiGroup: "" 
--- 
# Elasticsearch deployment itself 
apiVersion: apps/v1 
kind: StatefulSet # create the Pods with a StatefulSet
metadata: 
  name: elasticsearch-logging # Pod name; Pods created by a StatefulSet are numbered and ordered
  namespace: kube-logging  # namespace
  labels: 
    k8s-app: elasticsearch-logging 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
    srv: srv-elasticsearch 
spec: 
  serviceName: elasticsearch-logging # ties to the Service so each Pod in the StatefulSet gets a stable DNS name (es-cluster-[0,1,2].elasticsearch.elk.svc.cluster.local)
  replicas: 1 # replica count; single node
  selector: 
    matchLabels: 
      k8s-app: elasticsearch-logging # matches the labels in the Pod template
  template: 
    metadata: 
      labels: 
        k8s-app: elasticsearch-logging 
        kubernetes.io/cluster-service: "true" 
    spec: 
      serviceAccountName: elasticsearch-logging 
      containers: 
      - image: docker.io/library/elasticsearch:7.9.3 
        name: elasticsearch-logging 
        resources: 
          # need more cpu upon initialization, therefore burstable class 
          limits: 
            cpu: 1000m 
            memory: 2Gi 
          requests: 
            cpu: 100m 
            memory: 500Mi 
        ports: 
        - containerPort: 9200 
          name: db 
          protocol: TCP 
        - containerPort: 9300 
          name: transport 
          protocol: TCP 
        volumeMounts: 
        - name: elasticsearch-logging 
          mountPath: /usr/share/elasticsearch/data/   # mount point
        env: 
        - name: "NAMESPACE" 
          valueFrom: 
            fieldRef: 
              fieldPath: metadata.namespace 
        - name: "discovery.type"  #定义单节点类型 
          value: "single-node" 
        - name: ES_JAVA_OPTS # JVM memory settings; increase as appropriate
          value: "-Xms512m -Xmx2g"  
      volumes: 
      - name: elasticsearch-logging 
        hostPath: 
          path: /data/es/
      nodeSelector: # add a nodeSelector if the data must land on specific nodes
        es: data 
      tolerations: 
      - effect: NoSchedule 
        operator: Exists 
      # Elasticsearch requires vm.max_map_count to be at least 262144. 
      # If your OS already sets up this number to a higher value, feel free 
      # to remove this init container. 
      initContainers: # steps that run before the main container starts
      - name: elasticsearch-logging-init 
        image: alpine:3.6 
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"] #添加mmap计数限制,太低可能造成内存不足的错误 
        securityContext:  # applies only to this container and does not affect the volumes
          privileged: true # run a privileged container
      - name: increase-fd-ulimit 
        image: busybox 
        imagePullPolicy: IfNotPresent 
        command: ["sh", "-c", "ulimit -n 65536"] #修改文件描述符最大数量 
        securityContext: 
          privileged: true 
      - name: elasticsearch-volume-init # initialize the ES data directory on disk with 777 permissions
        image: alpine:3.6 
        command: 
          - chmod 
          - -R 
          - "777" 
          - /usr/share/elasticsearch/data/ 
        volumeMounts: 
        - name: elasticsearch-logging 
          mountPath: /usr/share/elasticsearch/data/
#Label node1 so it matches the nodeSelector (es: data)
kubectl label no node1 es=data

#Create the resources
kubectl create -f es.yaml

#Check the Pods
kubectl get po -n kube-logging
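
To confirm Elasticsearch came up healthy, port-forward the Service and query the cluster health endpoint (a quick check; on a single node a yellow status is normal):

#Forward the ES service locally
kubectl port-forward -n kube-logging svc/elasticsearch-logging 9200:9200 &

#Query cluster health
curl "http://localhost:9200/_cluster/health?pretty"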

2. Installing Logstash

--- 
apiVersion: v1 
kind: Service 
metadata: 
  name: logstash 
  namespace: kube-logging 
spec: 
  ports: 
  - port: 5044 
    targetPort: beats 
  selector: 
    type: logstash 
  clusterIP: None 
--- 
apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: logstash 
  namespace: kube-logging 
spec: 
  selector: 
    matchLabels: 
      type: logstash 
  template: 
    metadata: 
      labels: 
        type: logstash 
        srv: srv-logstash 
    spec: 
      containers: 
      - image: docker.io/kubeimages/logstash:7.9.3 # this image supports both arm64 and amd64
        name: logstash 
        ports: 
        - containerPort: 5044 
          name: beats 
        command: 
        - logstash 
        - '-f' 
        - '/etc/logstash_c/logstash.conf' 
        env: 
        - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS" 
          value: "http://elasticsearch-logging:9200"  #这里配置的是es的服务名称,如果是夸命名空间需要加上命名空间
        volumeMounts: 
        - name: config-volume 
          mountPath: /etc/logstash_c/ 
        - name: config-yml-volume 
          mountPath: /usr/share/logstash/config/ 
        - name: timezone 
          mountPath: /etc/localtime 
        resources: # always set resource limits on Logstash so it cannot starve other workloads
          limits: 
            cpu: 1000m 
            memory: 2048Mi 
          requests: 
            cpu: 512m 
            memory: 512Mi 
      volumes: 
      - name: config-volume 
        configMap: 
          name: logstash-conf 
          items: 
          - key: logstash.conf 
            path: logstash.conf 
      - name: timezone 
        hostPath: 
          path: /etc/localtime 
      - name: config-yml-volume 
        configMap: 
          name: logstash-yml 
          items: 
          - key: logstash.yml 
            path: logstash.yml 
 
--- 
apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: logstash-conf 
  namespace: kube-logging 
  labels: 
    type: logstash 
data: 
  logstash.conf: |- 
    input {
      beats { 
        port => 5044 
      } 
    } 
    filter {
      # process ingress logs
      if [kubernetes][container][name] == "nginx-ingress-controller" {
        json {
          source => "message" 
          target => "ingress_log" 
        }
        if [ingress_log][requesttime] { 
          mutate { 
            convert => ["[ingress_log][requesttime]", "float"] 
          }
        }
        if [ingress_log][upstremtime] { 
          mutate { 
            convert => ["[ingress_log][upstremtime]", "float"] 
          }
        } 
        if [ingress_log][status] { 
          mutate { 
            convert => ["[ingress_log][status]", "float"] 
          }
        }
        if  [ingress_log][httphost] and [ingress_log][uri] {
          mutate { 
            add_field => {"[ingress_log][entry]" => "%{[ingress_log][httphost]}%{[ingress_log][uri]}"} 
          } 
          mutate { 
            split => ["[ingress_log][entry]","/"] 
          } 
          if [ingress_log][entry][1] { 
            mutate { 
              add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/%{[ingress_log][entry][1]}"} 
              remove_field => "[ingress_log][entry]" 
            }
          } else { 
            mutate { 
              add_field => {"[ingress_log][entrypoint]" => "%{[ingress_log][entry][0]}/"} 
              remove_field => "[ingress_log][entry]" 
            }
          }
        }
      }
      # process logs of business services whose names start with srv
      if [kubernetes][container][name] =~ /^srv*/ { 
        json { 
          source => "message" 
          target => "tmp" 
        } 
        if [kubernetes][namespace] == "kube-logging" { 
          drop{} 
        } 
        if [tmp][level] { 
          mutate{ 
            add_field => {"[applog][level]" => "%{[tmp][level]}"} 
          } 
          if [applog][level] == "debug"{ 
            drop{} 
          } 
        } 
        if [tmp][msg] { 
          mutate { 
            add_field => {"[applog][msg]" => "%{[tmp][msg]}"} 
          } 
        } 
        if [tmp][func] { 
          mutate { 
            add_field => {"[applog][func]" => "%{[tmp][func]}"} 
          } 
        } 
        if [tmp][cost]{ 
          if "ms" in [tmp][cost] { 
            mutate { 
              split => ["[tmp][cost]","m"] 
              add_field => {"[applog][cost]" => "%{[tmp][cost][0]}"} 
              convert => ["[applog][cost]", "float"] 
            } 
          } else { 
            mutate { 
              add_field => {"[applog][cost]" => "%{[tmp][cost]}"} 
            }
          }
        }
        if [tmp][method] { 
          mutate { 
            add_field => {"[applog][method]" => "%{[tmp][method]}"} 
          }
        }
        if [tmp][request_url] { 
          mutate { 
            add_field => {"[applog][request_url]" => "%{[tmp][request_url]}"} 
          } 
        }
        if [tmp][meta._id] { 
          mutate { 
            add_field => {"[applog][traceId]" => "%{[tmp][meta._id]}"} 
          } 
        } 
        if [tmp][project] { 
          mutate { 
            add_field => {"[applog][project]" => "%{[tmp][project]}"} 
          }
        }
        if [tmp][time] { 
          mutate { 
            add_field => {"[applog][time]" => "%{[tmp][time]}"} 
          }
        }
        if [tmp][status] { 
          mutate { 
            add_field => {"[applog][status]" => "%{[tmp][status]}"} 
            convert => ["[applog][status]", "float"] 
          }
        }
      }
      mutate { 
        rename => ["kubernetes", "k8s"] 
        remove_field => "beat" 
        remove_field => "tmp" 
        remove_field => "[k8s][labels][app]" 
      }
    }
    output { 
      elasticsearch { 
        hosts => ["http://elasticsearch-logging:9200"] 
        codec => json 
        index => "logstash-%{+YYYY.MM.dd}" #索引名称以logstash+日志进行每日新建 
      }
    } 
---
 
apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: logstash-yml 
  namespace: kube-logging 
  labels: 
    type: logstash 
data: 
  logstash.yml: |- 
    http.host: "0.0.0.0" 
    xpack.monitoring.elasticsearch.hosts: http://elasticsearch-logging:9200
#Create the resources
kubectl create -f logstash.yaml

#Check the Pods
kubectl get po -n kube-logging
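
To verify that Logstash loaded the pipeline from the ConfigMap, tail its logs and look for the pipeline start message:

#Watch the Logstash logs; a "Pipeline started" line with no config errors means it is healthy
kubectl logs -f deployment/logstash -n kube-logging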

3. Installing Filebeat

--- 
apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: filebeat-config 
  namespace: kube-logging 
  labels: 
    k8s-app: filebeat 
data: 
  filebeat.yml: |- 
    filebeat.inputs: 
    - type: container 
      enabled: true
      paths:
        - /var/log/containers/*.log # the log directory, mounted into the pod, that Filebeat harvests
      processors: 
        - add_kubernetes_metadata: # add Kubernetes metadata fields for later data cleansing
            host: ${NODE_NAME}
            matchers: 
            - logs_path: 
                logs_path: "/var/log/containers/" 
    #output.kafka:  # if log volume is high and ES lags behind, Kafka can be inserted between Filebeat and Logstash
    #  hosts: ["kafka-log-01:9092", "kafka-log-02:9092", "kafka-log-03:9092"]
    #  topic: 'topic-test-log'
    #  version: 2.0.0
    output.logstash: # Logstash does the data cleansing downstream, so Filebeat pushes the data to Logstash
       hosts: ["logstash:5044"] 
       enabled: true 
--- 
apiVersion: v1 
kind: ServiceAccount 
metadata: 
  name: filebeat 
  namespace: kube-logging 
  labels: 
    k8s-app: filebeat
--- 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole 
metadata: 
  name: filebeat 
  labels: 
    k8s-app: filebeat 
rules: 
- apiGroups: [""] # "" indicates the core API group 
  resources: 
  - namespaces 
  - pods 
  verbs: ["get", "watch", "list"] 
--- 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: filebeat 
subjects: 
- kind: ServiceAccount 
  name: filebeat 
  namespace: kube-logging 
roleRef: 
  kind: ClusterRole 
  name: filebeat 
  apiGroup: rbac.authorization.k8s.io 
--- 
apiVersion: apps/v1 
kind: DaemonSet 
metadata: 
  name: filebeat 
  namespace: kube-logging 
  labels: 
    k8s-app: filebeat 
spec: 
  selector: 
    matchLabels: 
      k8s-app: filebeat 
  template: 
    metadata: 
      labels: 
        k8s-app: filebeat 
    spec: 
      serviceAccountName: filebeat 
      terminationGracePeriodSeconds: 30 
      containers: 
      - name: filebeat 
        image: docker.io/kubeimages/filebeat:7.9.3 # this image supports both arm64 and amd64
        args: [ 
          "-c", "/etc/filebeat.yml", 
          "-e","-httpprof","0.0.0.0:6060" 
        ] 
        #ports: 
        #  - containerPort: 6060 
        #    hostPort: 6068 
        env: 
        - name: NODE_NAME 
          valueFrom: 
            fieldRef: 
              fieldPath: spec.nodeName 
        - name: ELASTICSEARCH_HOST 
          value: elasticsearch-logging 
        - name: ELASTICSEARCH_PORT 
          value: "9200" 
        securityContext: 
          runAsUser: 0 
          # If using Red Hat OpenShift uncomment this: 
          #privileged: true 
        resources: 
          limits: 
            memory: 1000Mi 
            cpu: 1000m 
          requests: 
            memory: 100Mi 
            cpu: 100m 
        volumeMounts: 
        - name: config # the Filebeat config file
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data # persist Filebeat data on the host
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers # mounts the host's source log directory into the Filebeat container; if docker/containerd writes logs to the standard on-disk path, /var/lib works as the mountPath
          mountPath: /var/lib
          readOnly: true
        - name: varlog # mounts /var/log from the host so the /var/log/pods and /var/log/containers symlinks resolve inside the container
          mountPath: /var/log/
          readOnly: true
        - name: timezone
          mountPath: /etc/localtime
      volumes: 
      - name: config 
        configMap: 
          defaultMode: 0600 
          name: filebeat-config 
      - name: varlibdockercontainers 
        hostPath: # if docker/containerd writes logs to the standard on-disk path, /var/lib works as the path
          path: /var/lib
      - name: varlog 
        hostPath: 
          path: /var/log/ 
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart 
      - name: inputs 
        configMap: 
          defaultMode: 0600 
          name: filebeat-inputs 
      - name: data 
        hostPath: 
          path: /data/filebeat-data 
          type: DirectoryOrCreate 
      - name: timezone 
        hostPath: 
          path: /etc/localtime 
      tolerations: # tolerations so the DaemonSet can be scheduled onto every node
      - effect: NoExecute 
        key: dedicated 
        operator: Equal 
        value: gpu 
      - effect: NoSchedule 
        operator: Exists 
#Create the resources
kubectl create -f filebeat.yaml

#Check the Pods
kubectl get po -n kube-logging
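
Once Filebeat ships data through Logstash, daily logstash-YYYY.MM.dd indices should appear in Elasticsearch (reusing the port-forward from the Elasticsearch section):

#List the indices and look for logstash-* entries
curl "http://localhost:9200/_cat/indices?v" | grep logstash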

4. Installing Kibana

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-logging
  name: kibana-config
  labels:
    k8s-app: kibana
data:
  kibana.yml: |-
    server.name: kibana
    server.host: "0"
    i18n.locale: zh-CN                      #set the default UI language to Chinese
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}         #ES connection address; everything here is deployed in K8s in the same namespace, so the service name can be used directly
--- 
apiVersion: v1 
kind: Service 
metadata: 
  name: kibana 
  namespace: kube-logging 
  labels: 
    k8s-app: kibana 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
    kubernetes.io/name: "Kibana" 
    srv: srv-kibana 
spec: 
  type: NodePort
  ports: 
  - port: 5601 
    protocol: TCP 
    targetPort: ui 
  selector: 
    k8s-app: kibana 
--- 
apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: kibana 
  namespace: kube-logging 
  labels: 
    k8s-app: kibana 
    kubernetes.io/cluster-service: "true" 
    addonmanager.kubernetes.io/mode: Reconcile 
    srv: srv-kibana 
spec: 
  replicas: 1 
  selector: 
    matchLabels: 
      k8s-app: kibana 
  template: 
    metadata: 
      labels: 
        k8s-app: kibana 
    spec: 
      containers: 
      - name: kibana 
        image: docker.io/kubeimages/kibana:7.9.3 # this image supports both arm64 and amd64
        resources: 
          # need more cpu upon initialization, therefore burstable class 
          limits: 
            cpu: 1000m 
          requests: 
            cpu: 100m 
        env: 
          - name: ELASTICSEARCH_HOSTS 
            value: http://elasticsearch-logging:9200 
        ports: 
        - containerPort: 5601 
          name: ui 
          protocol: TCP 
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config

#Also configure an Ingress, in case the service cannot be reached from outside via the NodePort
--- 
apiVersion: networking.k8s.io/v1
kind: Ingress 
metadata: 
  name: kibana 
  namespace: kube-logging 
spec: 
  ingressClassName: nginx
  rules: 
  - host: kibana.wssnail.com
    http: 
      paths: 
      - path: / 
        pathType: Prefix
        backend: 
          service:
            name: kibana 
            port:
              number: 5601 

#Create the resources
kubectl create -f kibana.yaml

#Check the resources
kubectl get all -n kube-logging
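
As with the Prometheus Ingress, add a hosts entry for the Kibana domain pointing at the node running the ingress-nginx Pod (the same example IP as above), then open Kibana and create a logstash-* index pattern to browse the logs:

#Configure hosts; the IP is that of the node where the ingress-nginx Pod runs
192.168.139.207  kibana.wssnail.com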

 

 

III. Dashboard Installation and Usage

1. Installation

#Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
#Change the Service type to "type: NodePort" (the full modified file is in the appendix)

#Create the resources
kubectl apply -f recommended.yaml

#Check the resources
kubectl get all -n kubernetes-dashboard

2. Creating a Token

apiVersion: v1 
kind: ServiceAccount 
metadata: 
  labels: 
    k8s-app: kubernetes-dashboard 
  name: dashboard-admin 
  namespace: kubernetes-dashboard 
--- 
apiVersion: rbac.authorization.k8s.io/v1 
kind: ClusterRoleBinding 
metadata: 
  name: dashboard-admin-cluster-role 
roleRef: 
  apiGroup: rbac.authorization.k8s.io 
  kind: ClusterRole 
  name: cluster-admin 
subjects: 
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
#Create the resources
kubectl create -f dashboard-admin.yaml

#Inspect the service account
kubectl describe serviceaccount dashboard-admin -n kubernetes-dashboard

#View the token (the secret name suffix is random; take it from the output above)
kubectl describe secrets dashboard-admin-token-krndx -n kubernetes-dashboard

Note: the Dashboard must be accessed over HTTPS.
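
On Kubernetes 1.24 and later, token Secrets are no longer created automatically for ServiceAccounts, so a dashboard-admin-token-xxxxx Secret may not exist. In that case, request a short-lived token directly via the TokenRequest API:

#Create a token for the service account (K8s >= 1.24)
kubectl -n kubernetes-dashboard create token dashboard-admin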

3. Usage

IV. KubeSphere Installation and Usage

1. Official Documentation

https://kubesphere.io/zh/docs/v3.4

2. Preparation

  • Your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, or * v1.26.x. On the starred versions some edge-node features may be unavailable, so if you need edge nodes, installing v1.23.x is recommended.
  • Make sure your machines meet the minimum hardware requirements: CPU > 1 core, memory > 2 GB.
  • Before installation, a default storage class must be configured in the Kubernetes cluster.

2.1 Installing a Default StorageClass

# Install the iSCSI client (OpenEBS relies on it for storage); run on all nodes
yum install iscsi-initiator-utils -y

# Enable at boot
systemctl enable --now iscsid

# Start the service
systemctl start iscsid

# Check the service status
systemctl status iscsid


# Install OpenEBS
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml


# Check the status (pulling the images may take a while)
kubectl get all -n openebs


# Create the local storage class on the master node
kubectl apply -f default-storage-class.yaml
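
To confirm that the class exists and is the default (marked via the storageclass.kubernetes.io/is-default-class annotation), list the storage classes:

# Check the storage classes; the default one is shown with "(default)" after its name
kubectl get sc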

See the appendix for openebs-operator.yaml

See the appendix for default-storage-class.yaml

3. Installation

kubesphere-installer.yaml

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
      - cc

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - edgeruntime.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - application.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - alerting.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-installer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-installer
  template:
    metadata:
      labels:
        app: ks-installer
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubesphere/ks-installer:v3.4.1
        imagePullPolicy: "IfNotPresent"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
          readOnly: true
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time

cluster-configuration.yaml 

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    # adminPassword: ""     # Custom password of the admin user. If the parameter exists but the value is empty, a random password is generated. If the parameter does not exist, P@88w0rd is used.
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
#  dev_tag: ""               # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version.
  etcd:
    monitoring: false       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: localhost  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort

    # apiserver:            # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi # Redis PVC size.
    openldap:
      enabled: false
      volumeSize: 2Gi   # openldap PVC size.
    minio:
      volumeSize: 20Gi # Minio PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
      GPUMonitoring:     # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.
        enabled: false
    gpu:                 # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:   # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi  # The volume size of Elasticsearch master nodes.
      #   replicas: 1      # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Elasticsearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      enabled: false
      logMaxAge: 7             # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash      # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:   # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi  # The volume size of Opensearch master nodes.
      #   replicas: 1      # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi  # The volume size of Opensearch data nodes.
      #   replicas: 1       # The total number of data nodes.
      #   resources: {}
      enabled: true
      logMaxAge: 7             # Log retention time in built-in Opensearch. It is 7 days by default.
      opensearchPrefix: whizard      # The string making up index names. The index name will be formatted as ks-<opensearchPrefix>-logging.
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: false         # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: false         # Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: false         # Enable or disable the KubeSphere DevOps System.
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi  # Recommend keep same as requests.memory.
    jenkinsVolumeSize: 16Gi
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: false         # Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 2
    #   resources: {}
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: false         # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1  # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    #   volumeSize: 20Gi  # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1          # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                           # GPU monitoring-related plug-in installation.
      nvidia_dcgm_exporter:        # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.
        enabled: false             # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.
        # resources: {}
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: false # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: false # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: false     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
    istio:  # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false
    kubeedge:        # kubeedge configurations
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
            - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true 
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:        # Provide admission policy and rule management, A validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent.
    enabled: false   # Enable or disable Gatekeeper.
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    # image: 'alpine:3.15' # There must be an nsenter program in the image
    timeout: 600         # Container timeout, if set to 0, no timeout will be used. The unit is seconds
#Download the manifests
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml

#Create the resources
kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml

Watch the installer logs; when the "Welcome to KubeSphere" banner appears, the installation is complete.

#Follow the installer logs
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

#Check the resources
kubectl get all -n kubesphere-system

Note: the installation takes quite a while; be patient.
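
Per the official docs, the console is exposed as a NodePort (30880, as set in cluster-configuration.yaml) with the default account admin / P@88w0rd; you will be asked to change the password on first login:

#Confirm the console NodePort
kubectl get svc ks-console -n kubesphere-system

#Then open http://<node-ip>:30880 in a browser and log in as admin / P@88w0rd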

4. Uninstallation

If the installation fails or you need to uninstall, use the delete script below.

#!/usr/bin/env bash

function delete_sure(){
  cat << eof
$(echo -e "\033[1;36mNote:\033[0m")

Delete the KubeSphere cluster, including the module kubesphere-system kubesphere-devops-system kubesphere-monitoring-system kubesphere-logging-system openpitrix-system.
eof

read -p "Please reconfirm that you want to delete the KubeSphere cluster.  (yes/no) " ans
while [[ "x"$ans != "xyes" && "x"$ans != "xno" ]]; do
    read -p "Please reconfirm that you want to delete the KubeSphere cluster.  (yes/no) " ans
done

if [[ "x"$ans == "xno" ]]; then
    exit
fi
}


delete_sure

# delete ks-install
kubectl delete deploy ks-installer -n kubesphere-system 2>/dev/null

# delete helm
for namespaces in kubesphere-system kubesphere-devops-system kubesphere-monitoring-system kubesphere-logging-system openpitrix-system kubesphere-monitoring-federated
do
  helm list -n $namespaces | grep -v NAME | awk '{print $1}' | sort -u | xargs -r -L1 helm uninstall -n $namespaces 2>/dev/null
done

# delete kubefed
kubectl get cc -n kubesphere-system ks-installer -o jsonpath="{.status.multicluster}" | grep enable
if [[ $? -eq 0 ]]; then
  helm uninstall -n kube-federation-system kubefed 2>/dev/null
  #kubectl delete ns kube-federation-system 2>/dev/null
fi


helm uninstall -n kube-system snapshot-controller 2>/dev/null

# delete kubesphere deployment
kubectl delete deployment -n kubesphere-system `kubectl get deployment -n kubesphere-system -o jsonpath="{.items[*].metadata.name}"` 2>/dev/null

# delete monitor statefulset
kubectl delete prometheus -n kubesphere-monitoring-system k8s 2>/dev/null
kubectl delete statefulset -n kubesphere-monitoring-system `kubectl get statefulset -n kubesphere-monitoring-system -o jsonpath="{.items[*].metadata.name}"` 2>/dev/null
# delete grafana
kubectl delete deployment -n kubesphere-monitoring-system grafana 2>/dev/null
kubectl --no-headers=true get pvc -n kubesphere-monitoring-system -o custom-columns=:metadata.namespace,:metadata.name | grep -E kubesphere-monitoring-system | xargs -n2 kubectl delete pvc -n 2>/dev/null

# delete pvc
pvcs="kubesphere-system|openpitrix-system|kubesphere-devops-system|kubesphere-logging-system"
kubectl --no-headers=true get pvc --all-namespaces -o custom-columns=:metadata.namespace,:metadata.name | grep -E $pvcs | xargs -n2 kubectl delete pvc -n 2>/dev/null


# delete rolebindings
delete_role_bindings() {
  for rolebinding in `kubectl -n $1 get rolebindings -l iam.kubesphere.io/user-ref -o jsonpath="{.items[*].metadata.name}"`
  do
    kubectl -n $1 delete rolebinding $rolebinding 2>/dev/null
  done
}

# delete roles
delete_roles() {
  kubectl -n $1 delete role admin 2>/dev/null
  kubectl -n $1 delete role operator 2>/dev/null
  kubectl -n $1 delete role viewer 2>/dev/null
  for role in `kubectl -n $1 get roles -l iam.kubesphere.io/role-template -o jsonpath="{.items[*].metadata.name}"`
  do
    kubectl -n $1 delete role $role 2>/dev/null
  done
}

# remove useless labels and finalizers
for ns in `kubectl get ns -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl label ns $ns kubesphere.io/workspace-
  kubectl label ns $ns kubesphere.io/namespace-
  kubectl patch ns $ns -p '{"metadata":{"finalizers":null,"ownerReferences":null}}'
  delete_role_bindings $ns
  delete_roles $ns
done

# delete clusters
for cluster in `kubectl get clusters -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch cluster $cluster -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete clusters --all 2>/dev/null

# delete workspaces
for ws in `kubectl get workspaces -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch workspace $ws -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete workspaces --all 2>/dev/null

# delete devopsprojects
for devopsproject in `kubectl get devopsprojects -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch devopsprojects $devopsproject -p '{"metadata":{"finalizers":null}}' --type=merge
done

for pip in `kubectl get pipeline -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch pipeline $pip -n `kubectl get pipeline -A | grep $pip | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibinaries in `kubectl get s2ibinaries -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibinaries $s2ibinaries -n `kubectl get s2ibinaries -A | grep $s2ibinaries | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibuilders in `kubectl get s2ibuilders -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibuilders $s2ibuilders -n `kubectl get s2ibuilders -A | grep $s2ibuilders | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2ibuildertemplates in `kubectl get s2ibuildertemplates -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2ibuildertemplates $s2ibuildertemplates -n `kubectl get s2ibuildertemplates -A | grep $s2ibuildertemplates | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

for s2iruns in `kubectl get s2iruns -A -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch s2iruns $s2iruns -n `kubectl get s2iruns -A | grep $s2iruns | awk '{print $1}'` -p '{"metadata":{"finalizers":null}}' --type=merge
done

kubectl delete devopsprojects --all 2>/dev/null


# delete validatingwebhookconfigurations
for webhook in ks-events-admission-validate users.iam.kubesphere.io network.kubesphere.io validating-webhook-configuration
do
  kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io $webhook 2>/dev/null
done

# delete mutatingwebhookconfigurations
for webhook in ks-events-admission-mutate logsidecar-injector-admission-mutate mutating-webhook-configuration
do
  kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io $webhook 2>/dev/null
done

# delete users
for user in `kubectl get users -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch user $user -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete users --all 2>/dev/null


# delete helm resources
for resource_type in `echo helmcategories helmapplications helmapplicationversions helmrepos helmreleases`; do
  for resource_name in `kubectl get ${resource_type}.application.kubesphere.io -o jsonpath="{.items[*].metadata.name}"`; do
    kubectl patch ${resource_type}.application.kubesphere.io ${resource_name} -p '{"metadata":{"finalizers":null}}' --type=merge
  done
  kubectl delete ${resource_type}.application.kubesphere.io --all 2>/dev/null
done

# delete workspacetemplates
for workspacetemplate in `kubectl get workspacetemplates.tenant.kubesphere.io -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch workspacetemplates.tenant.kubesphere.io $workspacetemplate -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete workspacetemplates.tenant.kubesphere.io --all 2>/dev/null

# delete federatednamespaces in namespace kubesphere-monitoring-federated
for resource in $(kubectl get federatednamespaces.types.kubefed.io -n kubesphere-monitoring-federated -oname); do
  kubectl patch "${resource}" -p '{"metadata":{"finalizers":null}}' --type=merge -n kubesphere-monitoring-federated
done

# delete crds
for crd in `kubectl get crds -o jsonpath="{.items[*].metadata.name}"`
do
  if [[ $crd == *kubesphere.io ]]; then kubectl delete crd $crd 2>/dev/null; fi
done

# delete relevance ns
for ns in kubesphere-alerting-system kubesphere-controls-system kubesphere-devops-system kubesphere-logging-system kubesphere-monitoring-system kubesphere-monitoring-federated openpitrix-system kubesphere-system
do
  kubectl delete ns $ns 2>/dev/null
done

#Make the script executable
chmod 777 kubesphere-delete.sh
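
Then run it (assuming the script above was saved as kubesphere-delete.sh):

#Run the delete script
./kubesphere-delete.sh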

5. Usage

5.1 Main UI

5.2 Pluggable Components

5.3 Platform Management

5.4 Access Control

5.5 Notification Settings

V. Appendix

1. kube-prometheus Source

Link: https://pan.baidu.com/s/1fhWsAlQG5M3qjnhS1IMU3g?pwd=qxgr Extraction code: qxgr

2. Images Required by kube-prometheus

Link: https://pan.baidu.com/s/1TuGr5EW3U5Tj2egT1xEwtg?pwd=fj84 Extraction code: fj84

3. Dashboard Manifest

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

4. openebs-operator.yaml

# This manifest deploys the OpenEBS control plane components, 
# with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context
#
# NOTE: The Jiva and cStor components previously included in the Operator File  
#  are now removed and it is recommended for users to use cStor and Jiva CSI operators. 
#  To upgrade your Jiva and cStor volumes to CSI, please checkout the documentation at:
#  https://github.com/openebs/upgrade
#
# To deploy the legacy Jiva and cStor:
# kubectl apply -f https://openebs.github.io/charts/legacy-openebs-operator.yaml
# 
# To deploy cStor CSI:
# kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml
#
# To deploy Jiva CSI:
# kubectl apply -f https://openebs.github.io/charts/jiva-operator.yaml
#

# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
  name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: openebs-maya-operator
  namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openebs-maya-operator
rules:
- apiGroups: ["*"]
  resources: ["nodes", "nodes/proxy"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["statefulsets", "daemonsets"]
  verbs: ["*"]
- apiGroups: ["*"]
  resources: ["resourcequotas", "limitranges"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "certificatesigningrequests"]
  verbs: ["list", "watch"]
- apiGroups: ["*"]
  resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
  verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["openebs.io"]
  resources: [ "*"]
  verbs: ["*" ]
- apiGroups: ["cstor.openebs.io"]
  resources: [ "*"]
  verbs: ["*" ]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
- apiGroups: ["*"]
  resources: ["poddisruptionbudgets"]
  verbs: ["get", "list", "create", "delete", "watch"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "create", "update"]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openebs-maya-operator
subjects:
- kind: ServiceAccount
  name: openebs-maya-operator
  namespace: openebs
roleRef:
  kind: ClusterRole
  name: openebs-maya-operator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.5.0
  creationTimestamp: null
  name: blockdevices.openebs.io
spec:
  group: openebs.io
  names:
    kind: BlockDevice
    listKind: BlockDeviceList
    plural: blockdevices
    shortNames:
    - bd
    singular: blockdevice
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - jsonPath: .spec.nodeAttributes.nodeName
      name: NodeName
      type: string
    - jsonPath: .spec.path
      name: Path
      priority: 1
      type: string
    - jsonPath: .spec.filesystem.fsType
      name: FSType
      priority: 1
      type: string
    - jsonPath: .spec.capacity.storage
      name: Size
      type: string
    - jsonPath: .status.claimState
      name: ClaimState
      type: string
    - jsonPath: .status.state
      name: Status
      type: string
    - jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: BlockDevice is the Schema for the blockdevices API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: DeviceSpec defines the properties and runtime status of a BlockDevice
            properties:
              aggregateDevice:
                description: AggregateDevice was intended to store the hierarchical information in cases of LVM. However this is currently not implemented and may need to be re-looked into for better design. To be deprecated
                type: string
              capacity:
                description: Capacity
                properties:
                  logicalSectorSize:
                    description: LogicalSectorSize is blockdevice logical-sector size in bytes
                    format: int32
                    type: integer
                  physicalSectorSize:
                    description: PhysicalSectorSize is blockdevice physical-Sector size in bytes
                    format: int32
                    type: integer
                  storage:
                    description: Storage is the blockdevice capacity in bytes
                    format: int64
                    type: integer
                required:
                - storage
                type: object
              claimRef:
                description: ClaimRef is the reference to the BDC which has claimed this BD
                properties:
                  apiVersion:
                    description: API version of the referent.
                    type: string
                  fieldPath:
                    description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.'
                    type: string
                  kind:
                    description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
                    type: string
                  name:
                    description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
                    type: string
                  namespace:
                    description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
                    type: string
                  resourceVersion:
                    description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
                    type: string
                  uid:
                    description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
                    type: string
                type: object
              details:
                description: Details contain static attributes of BD like model,serial, and so forth
                properties:
                  compliance:
                    description: Compliance is standards/specifications version implemented by device firmware  such as SPC-1, SPC-2, etc
                    type: string
                  deviceType:
                    description: DeviceType represents the type of device like sparse, disk, partition, lvm, crypt
                    enum:
                    - disk
                    - partition
                    - sparse
                    - loop
                    - lvm
                    - crypt
                    - dm
                    - mpath
                    type: string
                  driveType:
                    description: DriveType is the type of backing drive, HDD/SSD
                    enum:
                    - HDD
                    - SSD
                    - Unknown
                    - ""
                    type: string
                  firmwareRevision:
                    description: FirmwareRevision is the disk firmware revision
                    type: string
                  hardwareSectorSize:
                    description: HardwareSectorSize is the hardware sector size in bytes
                    format: int32
                    type: integer
                  logicalBlockSize:
                    description: LogicalBlockSize is the logical block size in bytes reported by /sys/class/block/sda/queue/logical_block_size
                    format: int32
                    type: integer
                  model:
                    description: Model is model of disk
                    type: string
                  physicalBlockSize:
                    description: PhysicalBlockSize is the physical block size in bytes reported by /sys/class/block/sda/queue/physical_block_size
                    format: int32
                    type: integer
                  serial:
                    description: Serial is serial number of disk
                    type: string
                  vendor:
                    description: Vendor is vendor of disk
                    type: string
                type: object
              devlinks:
                description: DevLinks contains soft links of a block device like /dev/by-id/... /dev/by-uuid/...
                items:
                  description: DeviceDevLink holds the mapping between type and links like by-id type or by-path type link
                  properties:
                    kind:
                      description: Kind is the type of link like by-id or by-path.
                      enum:
                      - by-id
                      - by-path
                      type: string
                    links:
                      description: Links are the soft links
                      items:
                        type: string
                      type: array
                  type: object
                type: array
              filesystem:
                description: FileSystem contains mountpoint and filesystem type
                properties:
                  fsType:
                    description: Type represents the FileSystem type of the block device
                    type: string
                  mountPoint:
                    description: MountPoint represents the mountpoint of the block device.
                    type: string
                type: object
              nodeAttributes:
                description: NodeAttributes has the details of the node on which BD is attached
                properties:
                  nodeName:
                    description: NodeName is the name of the Kubernetes node resource on which the device is attached
                    type: string
                type: object
              parentDevice:
                description: "ParentDevice was intended to store the UUID of the parent Block Device as is the case for partitioned block devices. \n For example: /dev/sda is the parent for /dev/sda1 To be deprecated"
                type: string
              partitioned:
                description: Partitioned represents if BlockDevice has partitions or not (Yes/No) Currently always default to No. To be deprecated
                enum:
                - "Yes"
                - "No"
                type: string
              path:
                description: Path contain devpath (e.g. /dev/sdb)
                type: string
            required:
            - capacity
            - devlinks
            - nodeAttributes
            - path
            type: object
          status:
            description: DeviceStatus defines the observed state of BlockDevice
            properties:
              claimState:
                description: ClaimState represents the claim state of the block device
                enum:
                - Claimed
                - Unclaimed
                - Released
                type: string
              state:
                description: State is the current state of the blockdevice (Active/Inactive/Unknown)
                enum:
                - Active
                - Inactive
                - Unknown
                type: string
            required:
            - claimState
            - state
            type: object
        type: object
    served: true
    storage: true
    subresources: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.5.0
  creationTimestamp: null
  name: blockdeviceclaims.openebs.io
spec:
  group: openebs.io
  names:
    kind: BlockDeviceClaim
    listKind: BlockDeviceClaimList
    plural: blockdeviceclaims
    shortNames:
    - bdc
    singular: blockdeviceclaim
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - jsonPath: .spec.blockDeviceName
      name: BlockDeviceName
      type: string
    - jsonPath: .status.phase
      name: Phase
      type: string
    - jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1alpha1
    schema:
      openAPIV3Schema:
        description: BlockDeviceClaim is the Schema for the blockdeviceclaims API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: DeviceClaimSpec defines the request details for a BlockDevice
            properties:
              blockDeviceName:
                description: BlockDeviceName is the reference to the block-device backing this claim
                type: string
              blockDeviceNodeAttributes:
                description: BlockDeviceNodeAttributes is the attributes on the node from which a BD should be selected for this claim. It can include nodename, failure domain etc.
                properties:
                  hostName:
                    description: HostName represents the hostname of the Kubernetes node resource where the BD should be present
                    type: string
                  nodeName:
                    description: NodeName represents the name of the Kubernetes node resource where the BD should be present
                    type: string
                type: object
              deviceClaimDetails:
                description: Details of the device to be claimed
                properties:
                  allowPartition:
                    description: AllowPartition represents whether to claim a full block device or a device that is a partition
                    type: boolean
                  blockVolumeMode:
                    description: 'BlockVolumeMode represents whether to claim a device in Block mode or Filesystem mode. These are use cases of BlockVolumeMode: 1) Not specified: VolumeMode check will not be effective 2) VolumeModeBlock: BD should not have any filesystem or mountpoint 3) VolumeModeFileSystem: BD should have a filesystem and mountpoint. If DeviceFormat is    specified then the format should match with the FSType in BD'
                    type: string
                  formatType:
                    description: Format of the device required, eg:ext4, xfs
                    type: string
                type: object
              deviceType:
                description: DeviceType represents the type of drive like SSD, HDD etc.,
                nullable: true
                type: string
              hostName:
                description: Node name from where blockdevice has to be claimed. To be deprecated. Use NodeAttributes.HostName instead
                type: string
              resources:
                description: Resources will help with placing claims on Capacity, IOPS
                properties:
                  requests:
                    additionalProperties:
                      anyOf:
                      - type: integer
                      - type: string
                      pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
                      x-kubernetes-int-or-string: true
                    description: 'Requests describes the minimum resources required. eg: if storage resource of 10G is requested minimum capacity of 10G should be available TODO for validating'
                    type: object
                required:
                - requests
                type: object
              selector:
                description: Selector is used to find block devices to be considered for claiming
                properties:
                  matchExpressions:
                    description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
                    items:
                      description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
                      properties:
                        key:
                          description: key is the label key that the selector applies to.
                          type: string
                        operator:
                          description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
                          type: string
                        values:
                          description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
                          items:
                            type: string
                          type: array
                      required:
                      - key
                      - operator
                      type: object
                    type: array
                  matchLabels:
                    additionalProperties:
                      type: string
                    description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
                    type: object
                type: object
            type: object
          status:
            description: DeviceClaimStatus defines the observed state of BlockDeviceClaim
            properties:
              phase:
                description: Phase represents the current phase of the claim
                type: string
            required:
            - phase
            type: object
        type: object
    served: true
    storage: true
    subresources: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
  labels:
    openebs.io/component-name: ndm-config
data:
  # udev-probe is the default/primary probe; it must be enabled for ndm to run
  # filterconfigs contains the configs of the filters. To provide a group of include
  # and exclude values, add them as a comma-separated string
  node-disk-manager.config: |
    probeconfigs:
      - key: udev-probe
        name: udev probe
        state: true
      - key: seachest-probe
        name: seachest probe
        state: false
      - key: smart-probe
        name: smart probe
        state: true
    filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: true
        exclude: "/,/etc/hosts,/boot"
      - key: vendor-filter
        name: vendor filter
        state: true
        include: ""
        exclude: "CLOUDBYT,OpenEBS"
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/md,/dev/dm-,/dev/rbd,/dev/zd"
    # metaconfig can be used to decorate the block device with different types of labels
    # that are available on the node or come in as device properties.
    # node labels - labels of the node where the bd is discovered, matching a whitelisted prefix
    # attribute labels - a property of the BD can be added as an ndm label as ndm.io/<property>=<property-value>
    metaconfigs:
      - key: node-labels
        name: node labels
        pattern: ""
      - key: device-labels
        name: device labels
        type: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-ndm
  namespace: openebs
  labels:
    name: openebs-ndm
    openebs.io/component-name: ndm
    openebs.io/version: 3.5.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm
      openebs.io/component-name: ndm
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: openebs-ndm
        openebs.io/component-name: ndm
        openebs.io/version: 3.5.0
    spec:
      # By default the node-disk-manager will be run on all kubernetes nodes
      # If you would like to limit this to only some nodes, say the nodes
      # that have storage attached, you could label those node and use
      # nodeSelector.
      #
      # e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
      # kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"
      #nodeSelector:
      #  "openebs.io/nodegroup": "storage-node"
      serviceAccountName: openebs-maya-operator
      hostNetwork: true
      # host PID is used to check status of iSCSI Service when the NDM
      # API service is enabled
      #hostPID: true
      containers:
      - name: node-disk-manager
        image: openebs/node-disk-manager:2.1.0
        args:
          - -v=4
        # The feature-gate is used to enable the new UUID algorithm.
          - --feature-gates="GPTBasedUUID"
        # Use the partition table UUID instead of creating a single partition to get
        # a partition UUID. Requires `GPTBasedUUID` to also be enabled.
        # - --feature-gates="PartitionTableUUID"
        # Detect changes to device size, filesystem and mount-points without restart.
        # - --feature-gates="ChangeDetection"
        # The feature gate is used to start the gRPC API service. The gRPC server
        # starts at 9115 port by default. This feature is currently in Alpha state
        # - --feature-gates="APIService"
        # The feature gate is used to enable NDM, to create blockdevice resources
        # for unused partitions on the OS disk
        # - --feature-gates="UseOSDisk"
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        volumeMounts:
        - name: config
          mountPath: /host/node-disk-manager.config
          subPath: node-disk-manager.config
          readOnly: true
          # make udev database available inside container
        - name: udev
          mountPath: /run/udev
        - name: procmount
          mountPath: /host/proc
          readOnly: true
        - name: devmount
          mountPath: /dev
        - name: basepath
          mountPath: /var/openebs/ndm
        - name: sparsepath
          mountPath: /var/openebs/sparse
        env:
        # namespace in which NDM is installed will be passed to NDM Daemonset
        # as environment variable
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # pass hostname as env variable using downward API to the NDM container
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        # specify the directory where the sparse files need to be created.
        # if not specified, then sparse files will not be created.
        - name: SPARSE_FILE_DIR
          value: "/var/openebs/sparse"
        # Size(bytes) of the sparse file to be created.
        - name: SPARSE_FILE_SIZE
          value: "10737418240"
        # Specify the number of sparse files to be created
        - name: SPARSE_FILE_COUNT
          value: "0"
        livenessProbe:
          exec:
            command:
            - pgrep
            - "ndm"
          initialDelaySeconds: 30
          periodSeconds: 60
      volumes:
      - name: config
        configMap:
          name: openebs-ndm-config
      - name: udev
        hostPath:
          path: /run/udev
          type: Directory
      # mount /proc (to access mount file of process 1 of host) inside container
      # to read mount-point of disks and partitions
      - name: procmount
        hostPath:
          path: /proc
          type: Directory
      - name: devmount
      # the /dev directory is mounted so that we have access to the devices that
      # are connected at runtime of the pod.
        hostPath:
          path: /dev
          type: Directory
      - name: basepath
        hostPath:
          path: /var/openebs/ndm
          type: DirectoryOrCreate
      - name: sparsepath
        hostPath:
          path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-ndm-operator
  namespace: openebs
  labels:
    name: openebs-ndm-operator
    openebs.io/component-name: ndm-operator
    openebs.io/version: 3.5.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm-operator
      openebs.io/component-name: ndm-operator
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-ndm-operator
        openebs.io/component-name: ndm-operator
        openebs.io/version: 3.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: node-disk-operator
          image: openebs/node-disk-operator:2.1.0
          imagePullPolicy: IfNotPresent
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # the service account of the ndm-operator pod
            - name: SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
            - name: OPERATOR_NAME
              value: "node-disk-operator"
            - name: CLEANUP_JOB_IMAGE
              value: "openebs/linux-utils:3.5.0"
            # OPENEBS_IO_IMAGE_PULL_SECRETS environment variable is used to pass the image pull secrets
            # to the cleanup pod launched by NDM operator
            #- name: OPENEBS_IO_IMAGE_PULL_SECRETS
            #  value: ""
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8585
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8585
            initialDelaySeconds: 5
            periodSeconds: 10
---
# Create NDM cluster exporter deployment.
# This is an optional component and is not required for the basic
# functioning of NDM
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-ndm-cluster-exporter
  namespace: openebs
  labels:
    name: openebs-ndm-cluster-exporter
    openebs.io/component-name: ndm-cluster-exporter
    openebs.io/version: 3.5.0
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: openebs-ndm-cluster-exporter
      openebs.io/component-name: ndm-cluster-exporter
  template:
    metadata:
      labels:
        name: openebs-ndm-cluster-exporter
        openebs.io/component-name: ndm-cluster-exporter
        openebs.io/version: 3.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: ndm-cluster-exporter
          image: openebs/node-disk-exporter:2.1.0
          command:
            - /usr/local/bin/exporter
          args:
            - "start"
            - "--mode=cluster"
            - "--port=$(METRICS_LISTEN_PORT)"
            - "--metrics=/metrics"
          ports:
            - containerPort: 9100
              protocol: TCP
              name: metrics
          imagePullPolicy: IfNotPresent
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: METRICS_LISTEN_PORT
              value: :9100
---
# Create NDM cluster exporter service
# This is optional and required only when
# ndm-cluster-exporter deployment is used
apiVersion: v1
kind: Service
metadata:
  name: openebs-ndm-cluster-exporter-service
  namespace: openebs
  labels:
    name: openebs-ndm-cluster-exporter-service
    openebs.io/component-name: ndm-cluster-exporter
    app: openebs-ndm-exporter
spec:
  clusterIP: None
  ports:
    - name: metrics
      port: 9100
      targetPort: 9100
  selector:
    name: openebs-ndm-cluster-exporter
---
# Create NDM node exporter daemonset.
# This is an optional component used for getting disk level
# metrics from each of the storage nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openebs-ndm-node-exporter
  namespace: openebs
  labels:
    name: openebs-ndm-node-exporter
    openebs.io/component-name: ndm-node-exporter
    openebs.io/version: 3.5.0
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: openebs-ndm-node-exporter
      openebs.io/component-name: ndm-node-exporter
  template:
    metadata:
      labels:
        name: openebs-ndm-node-exporter
        openebs.io/component-name: ndm-node-exporter
        openebs.io/version: 3.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: node-disk-exporter
          image: openebs/node-disk-exporter:2.1.0
          command:
            - /usr/local/bin/exporter
          args:
            - "start"
            - "--mode=node"
            - "--port=$(METRICS_LISTEN_PORT)"
            - "--metrics=/metrics"
          ports:
            - containerPort: 9101
              protocol: TCP
              name: metrics
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: METRICS_LISTEN_PORT
              value: :9101
---
# Create NDM node exporter service
# This is optional and required only when
# ndm-node-exporter daemonset is used
apiVersion: v1
kind: Service
metadata:
  name: openebs-ndm-node-exporter-service
  namespace: openebs
  labels:
    name: openebs-ndm-node-exporter
    openebs.io/component: openebs-ndm-node-exporter
    app: openebs-ndm-exporter
spec:
  clusterIP: None
  ports:
    - name: metrics
      port: 9101
      targetPort: 9101
  selector:
    name: openebs-ndm-node-exporter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-localpv-provisioner
  namespace: openebs
  labels:
    name: openebs-localpv-provisioner
    openebs.io/component-name: openebs-localpv-provisioner
    openebs.io/version: 3.5.0
spec:
  selector:
    matchLabels:
      name: openebs-localpv-provisioner
      openebs.io/component-name: openebs-localpv-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-localpv-provisioner
        openebs.io/component-name: openebs-localpv-provisioner
        openebs.io/version: 3.5.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
      - name: openebs-provisioner-hostpath
        imagePullPolicy: IfNotPresent
        image: openebs/provisioner-localpv:3.4.0
        args:
          - "--bd-time-out=$(BDC_BD_BIND_RETRIES)"
        env:
        # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
        # based on this address. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://10.128.0.12:8080"
        # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
        # based on this config. This is ignored if empty.
        # This is supported for openebs provisioner version 0.5.2 onwards
        #- name: OPENEBS_IO_KUBE_CONFIG
        #  value: "/home/ubuntu/.kube/config"
        # This sets the number of times the provisioner should try 
        # with a polling interval of 5 seconds, to get the Blockdevice
        # Name from a BlockDeviceClaim, before the BlockDeviceClaim
        # is deleted. E.g. 12 * 5 seconds = 60 seconds timeout
        - name: BDC_BD_BIND_RETRIES
          value: "12"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OPENEBS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
        # environment variable
        - name: OPENEBS_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: OPENEBS_IO_ENABLE_ANALYTICS
          value: "true"
        - name: OPENEBS_IO_INSTALLER_TYPE
          value: "openebs-operator"
        - name: OPENEBS_IO_HELPER_IMAGE
          value: "openebs/linux-utils:3.5.0"
        - name: OPENEBS_IO_BASE_PATH
          value: "/var/openebs/local"
        # LEADER_ELECTION_ENABLED is used to enable/disable leader election. By default
        # leader election is enabled.
        #- name: LEADER_ELECTION_ENABLED
        #  value: "true"
        # OPENEBS_IO_IMAGE_PULL_SECRETS environment variable is used to pass the image pull secrets
        # to the helper pod launched by local-pv hostpath provisioner
        #- name: OPENEBS_IO_IMAGE_PULL_SECRETS
        #  value: ""
        # Process name used for matching is limited to the 15 characters
        # present in the pgrep output.
        # So the full name can't be used here with pgrep (>15 chars). A regular expression
        # that matches the entire command name has to be specified.
        # Anchor `^` : matches any string that starts with `provisioner-loc`
        # `.*` : matches any string that has `provisioner-loc` followed by zero or more chars
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - test `pgrep -c "^provisioner-loc.*"` = 1
          initialDelaySeconds: 30
          periodSeconds: 60
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      #hostpath type will create a PV by 
      # creating a sub-directory under the
      # BASEPATH provided below.
      - name: StorageType
        value: "hostpath"
      #Specify the location (directory) where
      # PV(volume) data will be saved.
      # A sub-directory with pv-name will be 
      # created. When the volume is deleted, 
      # the PV sub-directory will be deleted.
      #Default value is /var/openebs/local
      - name: BasePath
        value: "/var/openebs/local/"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      #device type will create a PV by
      # issuing a BDC and will extract the path
      # values from the associated BD.
      - name: StorageType
        value: "device"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
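
Apply the file and wait for the OpenEBS control-plane Pods to come up; the StorageClasses defined at the end of the manifest should then appear:

#create the OpenEBS resources
kubectl apply -f openebs-operator.yaml

#the ndm, ndm-operator and localpv-provisioner Pods should reach Running
kubectl get pods -n openebs

#openebs-hostpath and openebs-device should be listed
kubectl get sc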

5、default-storage-class.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local/"
    openebs.io/cas-type: local
    storageclass.beta.kubernetes.io/is-default-class: 'true'
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce"]'
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
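
Apply it and verify that the class is marked as the cluster default (KubeSphere's installer looks for a default StorageClass):

#create the default StorageClass
kubectl apply -f default-storage-class.yaml

#"local" should be shown with "(default)" after its name
kubectl get sc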

6、Renaming images

Some of the kube-prometheus images are hosted on k8s.gcr.io and may not be pullable from inside China. Pull the equivalents from a mirror repository, then retag them with the names the manifests expect:

docker pull v5cn/prometheus-adapter:v0.9.1
docker pull easzlab/kube-state-metrics:v2.5.0

docker tag v5cn/prometheus-adapter:v0.9.1 k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1
docker tag easzlab/kube-state-metrics:v2.5.0 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0
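
The same pull-from-a-mirror-then-retag pattern applies to any other image that fails to pull; a sketch, where <mirror-repo> is a placeholder for a registry you can reach:

#pull from a reachable mirror, then retag to the name the manifest expects
docker pull <mirror-repo>/<image>:<tag>
docker tag <mirror-repo>/<image>:<tag> k8s.gcr.io/<original-path>/<image>:<tag>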

六、References

1、https://blog.csdn.net/qiliang1033/article/details/132826339

2、http://www.bilibili.com/video/BV1MT411x7GH/
