Install Prometheus Monitoring On Kubernetes Cluster

Table of Contents

Node & Software & Docker Images Lists

Prometheus introduction

Download Kubernetes Prometheus Manifest Files

Install Prometheus Monitoring on Kubernetes

Create a Namespace 

Create a Cluster Role and Bind It

Create a Config Map

Create a Prometheus Deployment

Connecting To Prometheus Dashboard

Method 1: Using Kubectl port forwarding

Method 2: Exposing Prometheus as a Service By NodePort

Method 3: Exposing Prometheus Using Ingress

Install Kube State Metrics

Install

Install Alertmanager

Install

Install Grafana

Install

Create Kubernetes Dashboards on Grafana

Install Node Exporter

Install

Querying Node-exporter Metrics in Prometheus 

Visualizing Prometheus Node Exporter Metrics as Grafana Dashboards


Node & Software & Docker Images Lists

HOSTNAME   IP              NODE TYPE   CONFIG
master1    192.168.1.100   master      2 vCPU / 4G
node1      192.168.1.110   worker      2 vCPU / 2G
node2      192.168.1.120   worker      2 vCPU / 2G

Image Type    Name/Version
k8s           registry.aliyuncs.com/google_containers/coredns:v1.9.3
              registry.aliyuncs.com/google_containers/etcd:3.5.6-0
              registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
              registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
              registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
              registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
              registry.aliyuncs.com/google_containers/pause:3.9
calico        docker.io/calico/apiserver:v3.24.5
              docker.io/calico/cni:v3.24.5
              docker.io/calico/kube-controllers:v3.24.5
              docker.io/calico/node:v3.24.5
              docker.io/calico/pod2daemon-flexvol:v3.24.5
              docker.io/calico/typha:v3.24.5
              quay.io/tigera/operator:v1.28.5
dashboard     docker.io/kubernetesui/dashboard:v2.7.1
prometheus    docker.io/bitnami/kube-state-metrics:2.8.2
              docker.io/grafana/grafana:latest
              docker.io/kubernetesui/metrics-scraper:v1.0.8
              docker.io/prom/alertmanager:latest
              docker.io/prom/node-exporter:latest
              docker.io/prom/prometheus:latest

Service                Service Port:Container Port                     NodePort
prometheus-service     8080:9090                                       30000
node-exporter          9100:9100                                       none
grafana-service        3000:3000                                       32000
alertmanager-service   9093:9093                                       31000
kube-state-metrics     8080:http-metrics, 8081:telemetry (headless)    none

Prometheus introduction

Prometheus is a highly scalable open-source monitoring framework. It provides out-of-the-box monitoring capabilities for the Kubernetes container orchestration platform, and it is gaining huge popularity in the observability space because of its metrics and alerting support.

There are a few key points I would like to list for your reference.

  1. Metric Collection: Prometheus uses the pull model to retrieve metrics over HTTP. There is an option to push metrics to Prometheus using Pushgateway for use cases where Prometheus cannot scrape the metrics, for example collecting custom metrics from short-lived Kubernetes Jobs and CronJobs.
  2. Metric Endpoint: The systems that you want to monitor with Prometheus should expose their metrics on a /metrics endpoint. Prometheus pulls the metrics from this endpoint at regular intervals (see the sketch after this list).
  3. PromQL: PromQL is a very flexible query language that can be used to query the metrics in the Prometheus dashboard. PromQL queries are also what the Prometheus UI and Grafana use to visualize metrics.
  4. Prometheus Exporters: Exporters are libraries that convert existing metrics from third-party apps to the Prometheus metrics format. There are many official and community Prometheus exporters, like the Prometheus node exporter, which exposes all Linux system-level metrics in Prometheus format.
  5. TSDB (time-series database): Prometheus uses a TSDB to store all the data efficiently. By default, all the data is stored locally. However, to avoid a single point of failure, there are options to integrate remote storage for the Prometheus TSDB.
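
To make the pull model concrete, here is what a scrape looks like from the client side. This is a minimal sketch assuming a node exporter is already listening locally on port 9100 (we install one on the cluster later in this guide); the first few lines of the text exposition format look something like this:

curl -s http://localhost:9100/metrics | head -n 4
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67
node_cpu_seconds_total{cpu="0",mode="iowait"} 12.34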

To learn more about the Prometheus architecture, you can refer to Prometheus Architecture :: AWS Workshop.

Download Kubernetes Prometheus Manifest Files

In this guide, I want to explore all the details of installing Prometheus Monitoring on a Kubernetes cluster. Of course, you can also install it with the prometheus-operator Helm chart, which is easier to set up.

First, you need to download the Kubernetes Prometheus manifest files. I put all the files in a Git repository, which includes the Prometheus server, Alertmanager, Grafana, and Node Exporter files.

git clone https://github.com/ck784101777/kubernetes-prometheus-configs.git
tree kubernetes-prometheus-configs-main
kubernetes-prometheus-configs-main
├── kubernetes-alert-manager
│   ├── alertManagerConfigMap.yaml
│   ├── alertTemplateConfigMap.yaml
│   ├── deployment.yaml
│   └── service.yaml
├── kubernetes-grafana
│   ├── deployment.yaml
│   ├── grafana-datasources-config
│   └── service.yaml
├── kubernetes-node-exporter
│   ├── daemonset.yaml
│   └── service.yaml
├── kubernetes-prometheus
│   ├── clusterRole.yaml
│   ├── config-map.yaml
│   ├── prometheus-deployment.yaml
│   ├── prometheus-ingress.yaml
│   └── prometheus-service.yaml
├── kubernetes-state-metrics
│   ├── cluster-role-binding.yaml
│   ├── cluster-role.yaml
│   ├── deployment.yaml
│   ├── service-account.yaml
│   └── service.yaml
└── README.md

Install Prometheus Monitoring on Kubernetes

Before starting, make sure you have a Kubernetes cluster. If you don't have one up yet, follow this guide to set it up: Install Kubernetes 1.26 on CentOS 7.9.

Create a Namespace 

First, we will create a Kubernetes namespace for all our monitoring components. If you don't create a dedicated namespace, all the Prometheus Kubernetes deployment objects get deployed in the default namespace.

Execute the following command to create a new namespace named monitoring.

kubectl create namespace monitoring

Create a Cluster Role and Bind It

We create a cluster role called prometheus and give it permission to get, list, and watch nodes, services, endpoints, pods, ingresses, and deployments. If you want to monitor more resources, you can extend it.

We then bind the prometheus cluster role to the default service account in the monitoring namespace, so Prometheus, running under that service account, gets these cluster-wide read permissions.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - apps
  resources:
  - deployments
  verbs: ["get", "list", "watch"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
kubectl apply -f clusterRole.yaml
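
Before moving on, you can sanity-check the RBAC objects with kubectl's built-in authorization check; a quick sketch (the service account name comes from the binding above):

kubectl get clusterrole prometheus
kubectl get clusterrolebinding prometheus
# Should print "yes" if the binding grants the expected read access
kubectl auth can-i list pods --as=system:serviceaccount:monitoring:default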

Create a Config Map

We need two files to set up the Prometheus configuration: the prometheus.yml file and the prometheus.rules file.

  1. prometheus.yml: This is the main Prometheus configuration, which includes all the scrape configs (service discovery details, storage locations, data retention settings, etc.).
  2. prometheus.rules: This file contains all the Prometheus alerting rules.

The prometheus.yml file contains all the configuration needed to dynamically discover pods and services running in the Kubernetes cluster. We have the following scrape jobs in our Prometheus scrape configuration.

  1. kubernetes-apiservers: Gets all the metrics from the API servers.
  2. kubernetes-nodes: Collects all the Kubernetes node metrics.
  3. kubernetes-pods: All the pod metrics, for pods annotated with prometheus.io/scrape.
  4. kubernetes-cadvisor: Collects all cAdvisor container metrics.
  5. kubernetes-service-endpoints: All the service endpoints annotated with prometheus.io/scrape.

The config also defines node-exporter and kube-state-metrics jobs, which are covered later in this guide. Both files are stored in the container under the path /etc/prometheus. After the deployment is created, you can run `kubectl exec -it -n monitoring <podName> -- ls /etc/prometheus` to verify.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.rules: |-
    groups:
    - name: devopscube-demo-alert
      rules:
      - alert: HighPodMemory
        expr: sum(container_memory_usage_bytes) > 1
        for: 1m
        labels:
          severity: slack
        annotations:
          summary: High Memory Usage
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    rule_files:
      - /etc/prometheus/prometheus.rules
    alerting:
      alertmanagers:
      - scheme: http
        static_configs:
        - targets:
          - "alertmanager.monitoring.svc:9093"
    scrape_configs:
      - job_name: 'node-exporter'
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
        - source_labels: [__meta_kubernetes_endpoints_name]
          regex: 'node-exporter'
          action: keep
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https
      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name
      - job_name: 'kube-state-metrics'
        static_configs:
          - targets: ['kube-state-metrics.kube-system.svc.cluster.local:8080']
      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name
kubectl create -f config-map.yaml
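
A quick sanity check, plus a note on updates: the deployment in the next section does not pass --web.enable-lifecycle, so after editing this ConfigMap, the simplest way to pick up changes is a pod restart rather than a /-/reload call. A sketch:

# Confirm both files landed in the ConfigMap
kubectl describe configmap prometheus-server-conf -n monitoring
# After any ConfigMap edit (once the deployment below exists), restart the pod
kubectl rollout restart deployment prometheus-deployment -n monitoring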

Create a Prometheus Deployment

Create a file named prometheus-deployment.yaml and copy the following contents into the file. In this configuration, we mount the Prometheus config map as files inside /etc/prometheus, as explained in the previous section.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--storage.tsdb.retention.time=12h"
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          resources:
            requests:
              cpu: 500m
              memory: 500M
            limits:
              cpu: 1
              memory: 1Gi
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
  
        - name: prometheus-storage-volume
          emptyDir: {}
kubectl apply -f prometheus-deployment.yaml 
-> kubectl get pod -n monitoring
NAME                                     READY   STATUS    RESTARTS      AGE
alertmanager-6d77489f7b-mf4wl            1/1     Running   1 (42m ago)   24h
grafana-68b7b49968-2629q                 1/1     Running   1 (42m ago)   24h
node-exporter-gkfnl                      1/1     Running   1 (42m ago)   23h
prometheus-deployment-67cf879cc4-gbns8   1/1     Running   1 (42m ago)   25h

-> kubectl exec -it prometheus-deployment-67cf879cc4-gbns8 -n monitoring -- ls /etc/prometheus
prometheus.rules  prometheus.yml

Connecting To Prometheus Dashboard

You can view the deployed Prometheus dashboard in three different ways.

  1. Using Kubectl port forwarding
  2. Exposing the Prometheus deployment as a service with NodePort or a Load Balancer.
  3. Using Ingress.

Method 1: Using Kubectl port forwarding

Port forwarding maps port 8080 on your local machine to port 9090 in the container. Then you can open http://localhost:8080 in your browser to get the Prometheus home page.

kubectl get pods --namespace=monitoring
NAME                                     READY   STATUS    RESTARTS        AGE
prometheus-deployment-67cf879cc4-gbns8   1/1     Running   2 (5m20s ago)   29h
kubectl port-forward prometheus-deployment-67cf879cc4-gbns8 8080:9090 -n monitoring
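
With the forward in place, you can verify from another terminal that Prometheus answers before opening the browser; a small sketch using Prometheus's built-in health endpoint:

curl http://localhost:8080/-/healthy
# Expect an HTTP 200 response with a short health message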

Method 2: Exposing Prometheus as a Service By NodePort

To access the Prometheus dashboard over an IP or a DNS name, you need to expose it as a Kubernetes service. Create a file named prometheus-service.yaml and copy in the following contents. We will expose Prometheus on all Kubernetes node IPs on port 30000.

Once created, you can access the Prometheus dashboard using any Kubernetes node's IP on port 30000. If you are on the cloud, make sure you have the right firewall rules to allow access to port 30000 from your workstation.

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port:   '9090'
  
spec:
  selector: 
    app: prometheus-server
  type: NodePort  
  ports:
    - port: 8080
      targetPort: 9090 
      nodePort: 30000
kubectl create -f prometheus-service.yaml --namespace=monitoring 

Then you can open http://<nodeIP>:30000 in your browser (for me, http://192.168.1.100:30000) to get the Prometheus home page.

Method 3: Exposing Prometheus Using Ingress

If you have an existing ingress controller set up, you can create an ingress object to route the Prometheus DNS name to the Prometheus backend service.

Also, you can terminate SSL for Prometheus at the ingress layer. You can use the `openssl` command to create the tls.crt and tls.key files and replace the ones in the secret below, as shown next.
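
For example, a minimal sketch that generates a self-signed certificate for the ingress host used below and loads it into the secret (the CN and secret name are placeholders matching the manifest; replace them with your own):

# Generate a self-signed certificate and key for the ingress host
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt -subj "/CN=prometheus.example.com"
# Create or update the TLS secret that the ingress references
kubectl create secret tls prometheus-secret --cert=tls.crt --key=tls.key \
  -n monitoring --dry-run=client -o yaml | kubectl apply -f -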

Then you can open https://prometheus.example.com in your browser to get the Prometheus home page.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ui
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  # Use the host you used in your kubernetes Ingress Configurations
  - host: prometheus.example.com
    http:
      paths:
      - backend:
          service:
            name: prometheus-service
            port:
              number: 8080
        path: /
        pathType: Prefix
  tls:
  - hosts: 
    - prometheus.example.com
    secretName: prometheus-secret
---
apiVersion: v1 
kind: Secret 
metadata:
  name: prometheus-secret 
  namespace: monitoring
data:
# Use the base64-encoded certificate and key
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZpVENDQkhHZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBRENCd0RFak1DRUdBMVVFQXhNYWFXNTAKWlhKdFpXUnBZWFJsTG5Ob1lXdGxjakkwTWk1c1lXSXhDekFKQmdOVkJBWVRBbFZUTVJFd0R3WURWUVFJRXdoVwphWEpuYVc1cFlURVFNQTRHQTFVRUJ4TUhRbkpwYzNSdmR6RXNNQ29HQTFVRUNoTWpVMGhCUzBWU01qUXlJRXhoCllpQkRaWEowYVdacFkyRjBaU0JCZFhSb2IzSnBkSGt4T1RBM0JnTlZCQXNUTUZOSVFVdEZVakkwTWlCTVlXSWcKU1c1MFpYSnRaV1JwWVhSbElFTmxjblJwWm1sallYUmxJRUYxZEdodmNtbDBlVEFlRncweE9URXdNVGN4TmpFMgpNekZhRncweU1URXdNVFl4TmpFMk16RmFNSUdBTVIwd0d3WURWUVFERkJRcUxtRndjSE11YzJoaGEyVnlNalF5CkxteGhZakVMTUFrR0ExVUVCaE1DVlZNeEVUQVBCZ05WQkFnVENGWnBjbWRwYm1saE1SQXdEZ1lEVlFRSEV3ZEMKY21semRHOTNNUll3RkFZRFZRUUtFdzFUU0VGTFJWSXlORElnVEdGaU1SVXdFd1lEVlFRTEV3eE1ZV0lnVjJWaQpjMmwwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURsRm16QVd0U09JcXZNCkpCV3Vuc0VIUmxraXozUmpSK0p1NTV0K0hCUG95YnZwVkJJeXMxZ3prby9INlkxa2Zxa1JCUzZZYVFHM2lYRFcKaDgzNlNWc3pNVUNVS3BtNXlZQXJRNzB4YlpPTXRJcjc1VEcrejFaRGJaeFUzbnh6RXdHdDN3U3c5OVJ0bjhWbgo5dEpTVXI0MHBHUytNemMzcnZOUFZRMjJoYTlhQTdGL2NVcGxtZUpkUnZEVnJ3Q012UklEcndXVEZjZkU3bUtxCjFSUkRxVDhETnlydlJmeUlubytmSkUxTmRuVEVMY0dTYVZlajhZVVFONHY0WFRnLzJncmxIN1pFT1VXNy9oYm8KUXh6NVllejVSam1wOWVPVUpvdVdmWk5FNEJBbGRZeVYxd2NPRXhRTmswck5BOU45ZXBjNWtUVVZQR3pOTWRucgovVXQxOWMweEFnTUJBQUdqZ2dIS01JSUJ4akFKQmdOVkhSTUVBakFBTUJFR0NXQ0dTQUdHK0VJQkFRUUVBd0lHClFEQUxCZ05WSFE4RUJBTUNCYUF3TXdZSllJWklBWWI0UWdFTkJDWVdKRTl3Wlc1VFUwd2dSMlZ1WlhKaGRHVmsKSUZObGNuWmxjaUJEWlhKMGFXWnBZMkYwWlRBZEJnTlZIUTRFRmdRVWRhYy94MTR6dXl3RVZPSi9vTjdQeU82bApDZ2N3Z2RzR0ExVWRJd1NCMHpDQjBJQVVzZFM1WWxuWEpWTk5mRVpkTEQvL2RyNE5mV3FoZ2JTa2diRXdnYTR4CkdUQVhCZ05WQkFNVEVHTmhMbk5vWVd0bGNqSTBNaTVzWVdJeEN6QUpCZ05WQkFZVEFsVlRNUkV3RHdZRFZRUUkKRXdoV2FYSm5hVzVwWVRFUU1BNEdBMVVFQnhNSFFuSnBjM1J2ZHpFc01Db0dBMVVFQ2hNalUwaEJTMFZTTWpReQpJRXhoWWlCRFpYSjBhV1pwWTJGMFpTQkJkWFJvYjNKcGRIa3hNVEF2QmdOVkJBc1RLRk5JUVV0RlVqSTBNaUJNCllXSWdVbTl2ZENCRFpYSjBhV1pwWTJGMFpTQkJkWFJvYjNKcGRIbUNBUUV3SFFZRFZSMGxCQll3RkFZSUt3WUIKQlFVSEF3RUdDQ3NHQVFVRkNBSUNNRWdHQTFVZEVRUkJNRCtDRFhOb1lXdGxjakkwTWk1c1lXS0NFbUZ3Y0hNdQpjMmhoYTJWeU1qUXlMbXhoWW9JVUtpNWhjSEJ6TG5Ob1lXdGxjakkwTWk1c1lXS0hCTUNvQ3hBd0RRWUpLb1pJCmh2Y05BUUVMQlFBRGdnRUJBRzA3ZHFNdFZYdVQrckduQlN4SkVTNjNSa2pHaWd0c3ZtNTk4NSsrbjZjRW5kSDIKb2hjaGdmRUo5V0UxYUFWSDR4QlJSdVRIUFVJOFcvd3N1OFBxQ1o4NHpRQ2U2elAyeThEcmEwbjFzK2lIeHFwRAorS3BwZS91NkNLVTFEL0VWRU9MakpZd3pRYlFLSUlPL2Y1Q0JVbUpGWjBuZ1VIUEtvUDNyTXordTlBOWFvRkVrCnF3dDBadHFHcWpjMkh3Q09UOTlOVmFsZ29ISXljOElxQXJXdjNSWklraUlyaW9kSUdDMS94MVQ2dHhKcEUyRisKQzZ0Tzk0U0FVSUJwc2VORjNFbGNLNUsyTW44YVAzR3NnNFRHeElPN2Q1eUIvb3YwNGhOV2Q1S2QwWGorL1BvQgpLOU43cFQ1SVU2citLekNoeGlSdmRvZlAzV0VYN1ZkNEtLWG94K0U9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2d0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktrd2dnU2xBZ0VBQW9JQkFRRGxGbXpBV3RTT0lxdk0KSkJXdW5zRUhSbGtpejNSalIrSnU1NXQrSEJQb3lidnBWQkl5czFnemtvL0g2WTFrZnFrUkJTNllhUUczaVhEVwpoODM2U1Zzek1VQ1VLcG01eVlBclE3MHhiWk9NdElyNzVURyt6MVpEYlp4VTNueHpFd0d0M3dTdzk5UnRuOFZuCjl0SlNVcjQwcEdTK016YzNydk5QVlEyMmhhOWFBN0YvY1VwbG1lSmRSdkRWcndDTXZSSURyd1dURmNmRTdtS3EKMVJSRHFUOEROeXJ2UmZ5SW5vK2ZKRTFOZG5URUxjR1NhVmVqOFlVUU40djRYVGcvMmdybEg3WkVPVVc3L2hibwpReHo1WWV6NVJqbXA5ZU9VSm91V2ZaTkU0QkFsZFl5VjF3Y09FeFFOazByTkE5TjllcGM1a1RVVlBHek5NZG5yCi9VdDE5YzB4QWdNQkFBRUNnZ0VCQU5zOHRjRDBiQnpHZzRFdk8yek0wMUJoKzZYN3daZk4wSjV3bW5kNjZYYkwKc1VEZ1N6WW9PbzNJZ2o5QWZTY2lyQ3YwdUozMVNFWmNpeGRVQ2tTdjlVNnRvTzdyUWdqeUZPM1N1dm5Wc3ZKaQpTZXc5Y0hqNk5jVDczak8rWkgxQVFFZ2tlWG5mQTNZU0JEcTFsSnhpUVZOaHpHUFY0Yzh4Wi9xUkhEbUVBTWR6CmwyaTB6dHJtcWRqSng4aTQxOXpGL1pVektoa2JtcVZVb3JjZ1lNdEt5QVloSENMYms2RFZtQ1FhbDlndEUrNjUKTmFTOEwxUW9yVWNVS0FoSTNKT2Q2TTRwbWRPaExITjZpZ0VwWFdVWGxBZjRITUZicHd5M1oxejNqZzVqTE9ragp6SWNDSVRaai9CYVZvSVc4QzJUb0pieUJKWkN6UDVjUVJTdkJOOGV4aUFFQ2dZRUEvV0Nxb2xVUWtOQkQrSnlPCklXOUJIRVlPS3oxRFZxNWxHRFhoNFMyTStpOU1pck5nUlcvL0NFRGhRUVVMZmtBTDgxMERPQmxsMXRRRUpGK3cKb1V6dWt6U1lkK1hTSnhicTM5YTF1ZGJldTNZU1ljeC8wTEEweGFQOW1sN1l1NXUraUZ4NGhwcnYyL2UrVklZQQpzTWV4WkZSODA3Q3M5YXN5MkdFT1l2aEdKb0VDZ1lFQTUzVm1weFlQbDFOYTVTMElJbEpuYm40dTl0RHpwYm5TCnpsMjBVQ3Q0d0N4STR6YjY1S1o4V1VaYlFzVTVaZ0VqTmxJWURXUisrd3kwVXh2SmNxUG5nS0xuOEdoSzhvOVEKeVJuR2dSYXAxWmNuUEdsbGdCeHQzM0s5TDNWMmJzMXBPcGJKMGlpOVdySWM4MU1wUVFpQjZ1RDRSZ216M0ZWSQpnUk5Ec2ZHS0xyRUNnWUVBbWY5ZXRqc3RUbGJHZVJ2dDVyUlB4bmR0dFNvTysyZ1RXWnVtSmM0aG1RMldYOWFWCjlKNFZTMWJqa1RrWHV5d0NGMis0dlNmeWxaZFd6U1M3bmMyOFV3dnNmekxYZjVxV05tV3hIYnBTdFcwVnp3c1QKeENyVWFDczd2ODlWdXZEMTVMc1BKZ0NWT0FSalVjd0FMM0d2aDJNeVd4ZE9pQ0g5VFRYd0lJYjFYQUVDZ1lBMwp4ZUptZ0xwaERJVHFsRjlSWmVubWhpRnErQTY5OEhrTG9TakI2TGZBRnV1NVZKWkFZcDIwSlcvNE51NE4xbGhWCnpwSmRKOG94Vkc1ZldHTENiUnhyc3RXUTZKQ213a0lGTTJEUjJsUXlVNm53dExUd21la2YzdFlYaVlad1RLNysKbnpjaW5RNkR2RWVkbW54bVgxWnU4cWJndVpYTmtmOVdtdjNFOHg4SkFRS0JnUUNNeDFWNHJIcUpwVXJMdkRVVQo4RzhXVGNrT2VFM2o2anhlcHMwcnExdEd1cE9XWW5saFlNYyt5VkMzMDZUc2dXUmJ5R1Y4YWNaRkF4WS9Ub2N5CmxpcXlUS1NGNUloYXhZQVpRTzVkOU1oTmN0bTRReDNaOUtTekZ5ZG01QlZVL0grMFFmUnRwM29TeFVneXRZNXkKV3ZDTFZ5bmNGZlZpL0VkaTdaZHM2aW82QVE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==
kubectl apply -f prometheus-ingress.yaml

In this guide, I recommend method 2 for accessing the dashboard home page; it is the easiest to set up.

Now if you browse to Status --> Targets, you will see all the Kubernetes endpoints connected to Prometheus automatically through service discovery, along with the health status of each endpoint.

Or browse to Graph and enter `container_memory_rss` to see the memory usage of your nodes.
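
A couple more example queries to try on the Graph page; these are sketches built on metrics the scrape jobs above expose:

# Scrape health of every discovered target, 1 = up
up
# RSS memory summed per node, from the cAdvisor job
sum(container_memory_rss) by (instance)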

Install Kube State Metrics

Kube state metrics is a service that talks to the Kubernetes API server to get the details of API objects like deployments, pods, daemonsets, statefulsets, etc. It exposes all these metrics on the /metrics URI, and Prometheus can scrape everything it exposes.

Following are some of the important metrics you can get from Kube state metrics.

  1. Node status, node capacity (CPU and memory)
  2. ReplicaSet status (desired, available, unavailable, updated, etc.)
  3. Pod status (waiting, running, ready, etc.)
  4. Ingress metrics
  5. PV and PVC metrics
  6. DaemonSet and StatefulSet metrics
  7. Resource requests and limits
  8. Job and CronJob metrics
  9. And many more

Install

You will have to deploy the following Kubernetes objects for Kube state metrics to work.

  1. Service Account
  2. Cluster Role – For kube state metrics to access all the Kubernetes API objects.
  3. Cluster Role Binding – Binds the service account with the cluster role.
  4. Kube State Metrics Deployment
  5. Service – To expose the metrics

All the above Kube state metrics objects will be deployed in the kube-system namespace.

Let's deploy the components. It's easy: you don't need to change any files, just run `kubectl apply`.

tree kubernetes-state-metrics/
kubernetes-state-metrics/
├── cluster-role-binding.yaml
├── cluster-role.yaml
├── deployment.yaml
├── service-account.yaml
└── service.yaml
kubectl apply -f kubernetes-state-metrics/
kubectl get deployments kube-state-metrics -n kube-system
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kube-state-metrics   1/1     1            1           29h

Kube state metrics can be added as a Prometheus scrape target. You need to add a job configuration to your Prometheus config for Prometheus to scrape all the Kube state metrics. If you have followed this guide, you don't have to add this scrape config; it is already set in the Prometheus config file (as below).

  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['kube-state-metrics.kube-system.svc.cluster.local:8080']

You can see the target status "up" after deploying kube state metrics.

Or you can check all the metrics directly:

kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm
-> curl http://kube-state-metrics.kube-system.svc.cluster.local:8080/metrics
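
Back on the Prometheus Graph page, a few example queries over kube-state-metrics data (a sketch; these are standard kube-state-metrics series):

# Pods stuck in Pending, per namespace
sum(kube_pod_status_phase{phase="Pending"}) by (namespace)
# Desired vs. currently available replicas per deployment
kube_deployment_spec_replicas
kube_deployment_status_replicas_available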

Install Alertmanager

AlertManager is an open-source alerting system that works with the Prometheus monitoring system. It handles all the alerting for Prometheus metrics, and there are many integrations available to receive alerts from Alertmanager (Slack, email, API endpoints, etc.).

Install

Alert Manager setup has the following key configurations.

  1. A config map for the Alertmanager configuration
  2. A config map for the Alertmanager alert templates
  3. An Alertmanager Kubernetes Deployment
  4. An Alertmanager service to access the web UI

tree kubernetes-alert-manager/
kubernetes-alert-manager/
├── alertManagerConfigMap.yaml
├── alertTemplateConfigMap.yaml
├── deployment.yaml
└── service.yaml

Prometheus should have the correct Alertmanager service endpoint configured in `kubernetes-prometheus-configs-main/kubernetes-prometheus/config-map.yaml`, as shown below, so it can send alerts to Alertmanager.

alerting:
   alertmanagers:
      - scheme: http
        static_configs:
        - targets:
          - "alertmanager.monitoring.svc:9093"

All the alerting rules have to be set in the Prometheus config based on your needs. They are created as part of the Prometheus config map, in a file named prometheus.rules, and referenced from prometheus.yml in the following way.

rule_files:
      - /etc/prometheus/prometheus.rules

To receive email alerts, you need a valid SMTP host in the Alertmanager config file `alertManagerConfigMap.yaml`. You can customize the email template as per your needs in the alert template config map; this guide ships a generic template.

receivers:
    - name: alert-emailer
      email_configs:
      - to: demo@devopscube.com
        send_resolved: false
        from: from-email@email.com
        smarthost: smtp.example.com:25
        require_tls: false
    - name: slack_demo
      slack_configs:
      - api_url: https://hooks.slack.com/services/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sz4Qk
        channel: '#devopscube-demo'

We expose the Alertmanager deployment as a NodePort service on port 31000 (service port 9093).

apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port:   '9093'
spec:
  selector:
    app: alertmanager
  type: NodePort
  ports:
    - port: 9093
      targetPort: 9093
      nodePort: 31000

Let's get started with the setup. Just like with Kube state metrics, we use `kubectl apply` to apply all the files.

kubectl apply -f kubernetes-alert-manager/
kubectl get deployments -n monitoring alertmanager
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager   1/1     1            1           30h

Click the Alerts button to show the alert messages. My alert delivery failed because I hadn't set a valid email or Slack configuration in the Alertmanager config map.

Then open http://<nodeIP>:31000 (mine is http://192.168.1.100:31000) in your browser to get the Alertmanager home page. We only set one alert rule, for memory; if you want to add more, customize prometheus.rules.
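
To verify a receiver end to end without waiting for the memory rule to fire, you can post a synthetic alert straight to the Alertmanager v2 API; a sketch (the alert name is arbitrary, and the severity label mirrors the demo rule's label):

curl -XPOST http://192.168.1.100:31000/api/v2/alerts \
  -H "Content-Type: application/json" \
  -d '[{"labels":{"alertname":"TestAlert","severity":"slack"}}]'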

Install Grafana

Grafana is an open-source, lightweight dashboard tool. It can be integrated with data sources like Prometheus. Using Grafana, you can create dashboards from Prometheus metrics to monitor the Kubernetes cluster.

The best part is that you don't have to write all the PromQL queries for the dashboards yourself. There are many community dashboard templates available for Kubernetes; you can import them and modify them as per your needs.

Install

  1. A data source config map for Prometheus
  2. A Grafana server Deployment
  3. A Grafana service to access the web UI

tree kubernetes-grafana/
kubernetes-grafana/
├── deployment.yaml
├── grafana-datasources-config
└── service.yaml

The following data source configuration is for Prometheus. We can create it with a YAML file or through the Grafana web UI.

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  prometheus.yaml: |-
    {
        "apiVersion": 1,
        "datasources": [
            {
               "access":"proxy",
                "editable": true,
                "name": "prometheus",
                "orgId": 1,
                "type": "prometheus",
                "url": "http://prometheus-service.monitoring.svc:8080",
                "version": 1
            }
        ]
    }

In the deployment.yaml file, note the volume configuration: this Grafana deployment does not use a persistent volume, so if you restart the pod, all changes will be lost. You can persist the data with NFS, local volumes, etc.

  volumes:
        - name: grafana-storage
          emptyDir: {}
        - name: grafana-datasources
          configMap:
              defaultMode: 420
              name: grafana-datasources

We expose the Grafana deployment as a NodePort service on port 32000.

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port:   '3000'
spec:
  selector: 
    app: grafana
  type: NodePort  
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000
kubectl apply -f kubernetes-grafana/
kubectl get deployments -n monitoring grafana
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
grafana   1/1     1            1           40h
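
Before logging in, you can confirm Grafana is up and that the data source was provisioned; a sketch (/api/health needs no auth, while /api/datasources uses the default admin credentials):

curl -s http://192.168.1.100:32000/api/health
curl -s -u admin:admin http://192.168.1.100:32000/api/datasources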

Create Kubernetes Dashboards on Grafana

Open http://<NodeIP>:32000 (for me, http://192.168.1.100:32000) in your browser to log in.

Use the following default username and password to log in. Once you log in with the default credentials, Grafana will prompt you to change the password.

Username: admin
Password: admin

Creating a Kubernetes dashboard from the Grafana template is pretty easy. There are many prebuilt Grafana templates available for Kubernetes. You can easily have prebuilt dashboards for ingress controllers, volumes, API servers, Prometheus metrics, and much more.

You can get the template ID from Dashboards | Grafana Labs. That page has plenty of standard dashboard templates.

After logging in, use Grafana's dashboard Import page to import a standard template. Here we import the template with ID 8588.

Install Node Exporter

Node exporter is an official Prometheus exporter for capturing all Linux system-related metrics. It collects all the hardware and operating-system-level metrics exposed by the kernel. By default, most Kubernetes clusters expose the metrics-server metrics (cluster-level metrics from the summary API) and cAdvisor (container-level metrics), but these do not provide detailed node-level metrics.

To get all the Kubernetes node-level system metrics, you need a node-exporter running on all the Kubernetes nodes. It collects all the Linux system metrics and exposes them via the /metrics endpoint on port 9100.

Similarly, you need to install Kube state metrics to get all the metrics related to Kubernetes objects.

Here is what we are going to do.

  1. Deploy node exporter on all the Kubernetes nodes as a daemonset. The daemonset makes sure one instance of node-exporter runs on every node and exposes all the node metrics on the /metrics endpoint on port 9100.
  2. Create a service that listens on port 9100 and points to all the daemonset node exporter pods. We will monitor the service endpoints (the node exporter pods) from Prometheus using the endpoints job config, as explained in the Prometheus config part.

Let's get started with the setup.

Install

Just two files:

  1. A daemonset: Deploys node exporter on all the Kubernetes nodes. The daemonset makes sure one instance of node-exporter runs on every node and exposes the node metrics on port 9100.
  2. A service: Listens on port 9100 and points to all the daemonset node exporter pods as its endpoints.

kubernetes-node-exporter/
├── daemonset.yaml
└── service.yaml
kubectl apply -f kubernetes-node-exporter/
kubectl get endpoints node-exporter -n monitoring 
NAME            ENDPOINTS                                                      AGE
node-exporter   10.244.137.120:9100,10.244.136.155:9100,10.244.131.22:9100,   40h
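
To confirm the service is reachable inside the cluster, the same curl pod trick used for kube-state-metrics works here; a sketch (the service name comes from the endpoints output above):

kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm
curl -s http://node-exporter.monitoring.svc.cluster.local:9100/metrics | head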

The scrape config for node-exporter is part of the Prometheus config map. Once you deploy the node-exporter, you should see node-exporter targets and metrics in Prometheus.

Querying Node-exporter Metrics in Prometheus 

Once you verify the node-exporter target state in Prometheus, you can explore the available node-exporter metrics from the Prometheus dashboard.

All the metrics from node-exporter are prefixed with node_. You can query them with different PromQL expressions.
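
For example, a few sketches over standard node-exporter series:

# CPU utilization per node, in percent, averaged over 5 minutes
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
# Available memory and root filesystem free space, in bytes
node_memory_MemAvailable_bytes
node_filesystem_avail_bytes{mountpoint="/"}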

Visualizing Prometheus Node Exporter Metrics as Grafana Dashboards

Visualizing the node exporter metrics in Grafana is not as difficult as you might think. Here is how the node-exporter Grafana dashboard looks for CPU, memory, and disk statistics.

The template ID is 1860.

