Deploying and testing k8s and deepflow

Deploying k8s and deepflow on Ubuntu 22 LTS

Environment details:
Static hostname: k8smaster.example.net
Icon name: computer-vm
Chassis: vm
Machine ID: 22349ac6f9ba406293d0541bcba7c05d
Boot ID: 605a74a509724a88940bbbb69cde77f2
Virtualization: vmware
Operating System: Ubuntu 22.04.4 LTS
Kernel: Linux 5.15.0-106-generic
Architecture: x86-64
Hardware Vendor: VMware, Inc.
Hardware Model: VMware Virtual Platform

To install a Kubernetes cluster on Ubuntu 22.04, follow these steps:

  1. Set the hostnames and add entries to the hosts file

    • Log in to the master node and set its hostname with hostnamectl:

      hostnamectl set-hostname "k8smaster.example.net"
      
    • On the worker nodes, run the following to set their hostnames (first and second worker respectively):

      hostnamectl set-hostname "k8sworker1.example.net"  # 第一个工作节点
      hostnamectl set-hostname "k8sworker2.example.net"  # 第二个工作节点
      
    • Add the following entries to the /etc/hosts file on every node:

      10.1.1.70 k8smaster.example.net k8smaster
      10.1.1.71 k8sworker1.example.net k8sworker1
      
  2. Disable swap and add kernel settings

    • Run the following on all nodes to disable swap:

      swapoff -a
      sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
      
    • Load the following kernel modules:

      tee /etc/modules-load.d/containerd.conf <<EOF
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      
    • Set the following kernel parameters for Kubernetes:

      tee /etc/sysctl.d/kubernetes.conf <<EOF
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
      EOF
      sysctl --system
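
    • Optionally verify that the modules are loaded and the sysctl values took effect (a quick sanity check, not part of the original steps):

      lsmod | grep -E 'overlay|br_netfilter'
      sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward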
      
  3. Install the containerd runtime

    • First install containerd's dependencies:

      apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
      
    • Enable the Docker repository:

      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
      add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      
    • Install containerd:

      apt update
      apt install -y containerd.io
      
    • Configure containerd to use systemd as the cgroup driver:

      containerd config default | tee /etc/containerd/config.toml > /dev/null 2>&1
      sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
      

      Some settings in the generated config.toml are then adjusted by hand (full file below for reference):

      disabled_plugins = []
      imports = []
      oom_score = 0
      plugin_dir = ""
      required_plugins = []
      root = "/var/lib/containerd"
      state = "/run/containerd"
      temp = ""
      version = 2
      
      [cgroup]
      path = ""
      
      [debug]
      address = ""
      format = ""
      gid = 0
      level = ""
      uid = 0
      
      [grpc]
      address = "/run/containerd/containerd.sock"
      gid = 0
      max_recv_message_size = 16777216
      max_send_message_size = 16777216
      tcp_address = ""
      tcp_tls_ca = ""
      tcp_tls_cert = ""
      tcp_tls_key = ""
      uid = 0
      
      [metrics]
      address = ""
      grpc_histogram = false
      
      [plugins]
      
      [plugins."io.containerd.gc.v1.scheduler"]
          deletion_threshold = 0
          mutation_threshold = 100
          pause_threshold = 0.02
          schedule_delay = "0s"
          startup_delay = "100ms"
      
      [plugins."io.containerd.grpc.v1.cri"]
          device_ownership_from_security_context = false
          disable_apparmor = false
          disable_cgroup = false
          disable_hugetlb_controller = true
          disable_proc_mount = false
          disable_tcp_service = true
          drain_exec_sync_io_timeout = "0s"
          enable_selinux = false
          enable_tls_streaming = false
          enable_unprivileged_icmp = false
          enable_unprivileged_ports = false
          ignore_deprecation_warnings = []
          ignore_image_defined_volumes = false
          max_concurrent_downloads = 3
          max_container_log_line_size = 16384
          netns_mounts_under_state_dir = false
          restrict_oom_score_adj = false
          # change the following line
          sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
          selinux_category_range = 1024
          stats_collect_period = 10
          stream_idle_timeout = "4h0m0s"
          stream_server_address = "127.0.0.1"
          stream_server_port = "0"
          systemd_cgroup = false
          tolerate_missing_hugetlb_controller = true
          unset_seccomp_profile = ""
      
          [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/cni/bin"
          conf_dir = "/etc/cni/net.d"
          conf_template = ""
          ip_pref = ""
          max_conf_num = 1
      
          [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "runc"
          disable_snapshot_annotations = true
          discard_unpacked_layers = false
          ignore_rdt_not_enabled_errors = false
          no_pivot = false
          snapshotter = "overlayfs"
      
          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
      
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = "io.containerd.runc.v2"
      
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                  BinaryName = ""
                  CriuImagePath = ""
                  CriuPath = ""
                  CriuWorkPath = ""
                  IoGid = 0
                  IoUid = 0
                  NoNewKeyring = false
                  NoPivotRoot = false
                  Root = ""
                  ShimCgroup = ""
                  SystemdCgroup = true
      
          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".image_decryption]
          key_model = "node"
      
          [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = ""
      
          [plugins."io.containerd.grpc.v1.cri".registry.auths]
      
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
      
          [plugins."io.containerd.grpc.v1.cri".registry.headers]
      
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
              # add the following 4 lines
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
              endpoint = ["https://docker.mirrors.ustc.edu.cn"]
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
              endpoint = ["https://registry.aliyuncs.com/google_containers"]
      
          [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
          tls_cert_file = ""
          tls_key_file = ""
      
      [plugins."io.containerd.internal.v1.opt"]
          path = "/opt/containerd"
      
      [plugins."io.containerd.internal.v1.restart"]
          interval = "10s"
      
      [plugins."io.containerd.internal.v1.tracing"]
          sampling_ratio = 1.0
          service_name = "containerd"
      
      [plugins."io.containerd.metadata.v1.bolt"]
          content_sharing_policy = "shared"
      
      [plugins."io.containerd.monitor.v1.cgroups"]
          no_prometheus = false
      
      [plugins."io.containerd.runtime.v1.linux"]
          no_shim = false
          runtime = "runc"
          runtime_root = ""
          shim = "containerd-shim"
          shim_debug = false
      
      [plugins."io.containerd.runtime.v2.task"]
          platforms = ["linux/amd64"]
          sched_core = false
      
      [plugins."io.containerd.service.v1.diff-service"]
          default = ["walking"]
      
      [plugins."io.containerd.service.v1.tasks-service"]
          rdt_config_file = ""
      
      [plugins."io.containerd.snapshotter.v1.aufs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.btrfs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.devmapper"]
          async_remove = false
          base_image_size = ""
          discard_blocks = false
          fs_options = ""
          fs_type = ""
          pool_name = ""
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.native"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.overlayfs"]
          mount_options = []
          root_path = ""
          sync_remove = false
          upperdir_label = false
      
      [plugins."io.containerd.snapshotter.v1.zfs"]
          root_path = ""
      
      [plugins."io.containerd.tracing.processor.v1.otlp"]
          endpoint = ""
          insecure = false
          protocol = ""
      
      [proxy_plugins]
      
      [stream_processors]
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar"
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar+gzip"
      
      [timeouts]
      "io.containerd.timeout.bolt.open" = "0s"
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"
      
      [ttrpc]
      address = ""
      gid = 0
      uid = 0
      
    • Restart and enable the containerd service:

      systemctl restart containerd
      systemctl enable containerd
      
    • Configure crictl:

      cat > /etc/crictl.yaml <<EOF
      runtime-endpoint: unix:///var/run/containerd/containerd.sock
      image-endpoint: unix:///var/run/containerd/containerd.sock
      timeout: 10
      debug: false
      pull-image-on-create: false
      EOF
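
    • As a quick check (not in the original), confirm containerd is active, that the systemd cgroup driver is enabled, and that crictl can talk to it:

      systemctl is-active containerd
      containerd config dump | grep SystemdCgroup
      crictl version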
      
  4. Add the Aliyun Kubernetes apt repository

    • First, import the Aliyun GPG key:

      curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
      
    • Then add the Aliyun Kubernetes repository:

      tee /etc/apt/sources.list.d/kubernetes.list <<EOF
      deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
      EOF
      
  5. Install the Kubernetes components

    • Update the package index and install kubelet, kubeadm, and kubectl:

      apt-get update
      apt-get install -y kubelet kubeadm kubectl
      
    • Setting the kubelet cgroup driver (optional; kubeadm already defaults to systemd, so the commands below can be skipped):

      # can be skipped
      # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /var/lib/kubelet/kubeadm-flags.env
      # systemctl daemon-reload
      # systemctl restart kubelet
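
    • Optionally pin the versions so a routine apt upgrade does not move them (a common practice, not part of the original notes):

      apt-mark hold kubelet kubeadm kubectl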
      
  6. Initialize the Kubernetes cluster

    • Initialize the cluster with kubeadm, pointing it at the Aliyun image repository:

      # kubeadm init --image-repository registry.aliyuncs.com/google_containers
      I0513 14:16:59.740096   17563 version.go:256] remote version is much newer: v1.30.0; falling back to: stable-1.28
      [init] Using Kubernetes version: v1.28.9
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      W0513 14:17:01.440936   17563 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      [certs] Generating "ca" certificate and key
      [certs] Generating "apiserver" certificate and key
      [certs] apiserver serving cert is signed for DNS names [k8smaster.example.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.70]
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [apiclient] All control plane components are healthy after 4.002079 seconds
      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
      [upload-certs] Skipping phase. Please see --upload-certs
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
      [bootstrap-token] Using token: m9z4yq.dok89ro6yt23wykr
      [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
      [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
      [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
      [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
      [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
      [addons] Applied essential addon: CoreDNS
      [addons] Applied essential addon: kube-proxy
      
      Your Kubernetes control-plane has initialized successfully!
      
      To start using your cluster, you need to run the following as a regular user:
      
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
      Alternatively, if you are the root user, you can run:
      
        export KUBECONFIG=/etc/kubernetes/admin.conf
      
      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/
      
      Then you can join any number of worker nodes by running the following on each as root:
      
      kubeadm join 10.1.1.70:6443 --token m9z4yq.dok89ro6yt23wykr \
              --discovery-token-ca-cert-hash sha256:17c3f29bd276592e668e9e6a7a187140a887254b4555cf7d293c3313d7c8a178 
      
  7. Configure kubectl

    • Set up kubectl access for the current user:

      mkdir -p $HOME/.kube
      cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      chown $(id -u):$(id -g) $HOME/.kube/config
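
    • Confirm API access works (a quick check, not in the original; the node stays NotReady until a network plugin is installed):

      kubectl cluster-info
      kubectl get nodes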
      
  8. Install a network plugin

    • Install a Pod network plugin such as Calico or Flannel. For example, with Calico:

      kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      # once the network plugin has initialized, the coredns pods become healthy
      kubectl logs -n kube-system -l k8s-app=kube-dns
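
    • To verify (not in the original): watch the Calico and coredns pods come up, then confirm the node turns Ready:

      kubectl get pods -n kube-system -w
      kubectl get nodes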
      
  9. Verify the cluster

    • Start an nginx pod:

      # vim nginx_pod.yml
      apiVersion: v1
      kind: Pod
      metadata:
        name: test-nginx-pod
        namespace: test
        labels:
          app: nginx
      spec:
        containers:
        - name: test-nginx-container
          image: nginx:latest
          ports:
          - containerPort: 80
        tolerations:
          - key: "node-role.kubernetes.io/control-plane"
            operator: "Exists"
            effect: "NoSchedule"
      ---
      
      apiVersion: v1
      kind: Service
      # the Service and the Pod must be in the same namespace
      metadata:
        name: nginx-service
        namespace: test
      spec:
        type: NodePort
        # the selector must match the Pod's labels
        selector:
          app: nginx
        ports:
        - protocol: TCP
          port: 80
          nodePort: 30007
          targetPort: 80
      

      Apply the manifests (the test namespace must exist before the Pod and Service can be created):

      kubectl create namespace test
      kubectl apply -f nginx_pod.yml
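
      As a quick check (not part of the original), confirm the Pod is Running and the NodePort answers; <node-ip> below stands for any node's address:

      kubectl -n test get pods,svc -o wide
      curl http://<node-ip>:30007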
      

Deploying opentelemetry-collector for testing

otel-collector and otel-agent require applications to integrate the OpenTelemetry API/SDK: telemetry is sent to the otel-agent, which runs as a DaemonSet on every node; the otel-agent forwards it to the otel-collector for aggregation, and the collector then exports it to a backend that can handle OTLP trace data, such as Zipkin or Jaeger.
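
For reference, a minimal sketch (not from the original) of the application side: a service instrumented with an OpenTelemetry SDK usually only needs the standard OTLP environment variables to point it at the node-local agent. <agent-address> is a placeholder for the otel-agent's reachable address.

export OTEL_SERVICE_NAME=my-app
export OTEL_EXPORTER_OTLP_ENDPOINT=http://<agent-address>:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf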

Custom test YAML file

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  namespace: default
data:
  # your configuration data
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: debug
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]

---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  type: NodePort
  ports:
    - port: 4317
      targetPort: 4317
      nodePort: 30080
      name: otlp-grpc
    - port: 8888
      targetPort: 8888
      name: metrics
  selector:
    component: otel-collector

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      component: otel-collector
  template:
    metadata:
      labels:
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector:latest
        # run with the config from the mounted ConfigMap instead of the image's built-in default
        command: ["/otelcol", "--config=/conf/config.yaml"]
        ports:
        - containerPort: 4317
        - containerPort: 8888
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
      volumes:
      - configMap:
          name: otel-collector-conf
        name: otel-collector-config-vol

Apply:

# mkdir /conf   # not needed: the ConfigMap is mounted inside the container, not on the host
kubectl apply -f otel-collector.yaml
kubectl get -f otel-collector.yaml
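
A hedged smoke test (not in the original): forward the collector's OTLP/HTTP port, post a minimal trace, and check that it shows up in the collector logs. The trace and span IDs below are arbitrary hex values.

kubectl port-forward deploy/otel-collector 4318:4318 &
curl -s -X POST http://127.0.0.1:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"curl-test"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","name":"test-span","kind":1,"startTimeUnixNano":"1700000000000000000","endTimeUnixNano":"1700000001000000000"}]}]}]}'
kubectl logs deploy/otel-collector --tail=50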

Delete:

kubectl delete -f otel-collector.yaml

Using the official example

kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-collector/main/examples/k8s/otel-config.yaml

Modify the file as needed:

otel-config.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-conf
  labels:
    app: opentelemetry
    component: otel-agent-conf
data:
  otel-agent-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    exporters:
      otlp:
        endpoint: "otel-collector.default:4317"
        tls:
          insecure: true
        sending_queue:
          num_consumers: 4
          queue_size: 100
        retry_on_failure:
          enabled: true
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 400
        # 25% of limit up to 2G
        spike_limit_mib: 100
        check_interval: 5s
    extensions:
      zpages: {}
    service:
      extensions: [zpages]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlp]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-agent
  labels:
    app: opentelemetry
    component: otel-agent
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-agent
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-agent-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-agent
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 55679 # ZPages endpoint.
        - containerPort: 4317 # Default OpenTelemetry receiver port.
        - containerPort: 8888  # Metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 400MiB
        volumeMounts:
        - name: otel-agent-config-vol
          mountPath: /conf
      volumes:
        - configMap:
            name: otel-agent-conf
            items:
              - key: otel-agent-config
                path: otel-agent-config.yaml
          name: otel-agent-config-vol
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 1500
        # 25% of limit up to 2G
        spike_limit_mib: 512
        check_interval: 5s
    extensions:
      zpages: {}
    exporters:
      otlp:
        endpoint: "http://someotlp.target.com:4317" # Replace with a real endpoint.
        tls:
          insecure: true
      zipkin:
        endpoint: "http://10.1.1.10:9411/api/v2/spans"
        format: "proto"
    service:
      extensions: [zpages]
      pipelines:
        traces/1:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [zipkin]
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
  - name: otlp-grpc # Default endpoint for OpenTelemetry gRPC receiver.
    port: 4317
    protocol: TCP
    targetPort: 4317
  - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver.
    port: 4318
    protocol: TCP
    targetPort: 4318
  - name: metrics # Default endpoint for querying metrics.
    port: 8888
  selector:
    component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  minReadySeconds: 5
  progressDeadlineSeconds: 120
  replicas: 1 #TODO - adjust this to your own requirements
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-collector-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-collector
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 200m
            memory: 400Mi
        ports:
        - containerPort: 55679 # Default endpoint for ZPages.
        - containerPort: 4317 # Default endpoint for OpenTelemetry receiver.
        - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver.
        - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
        - containerPort: 9411 # Default endpoint for Zipkin receiver.
        - containerPort: 8888  # Default endpoint for querying metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 1600MiB
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
#        - name: otel-collector-secrets
#          mountPath: /secrets
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
#        - secret:
#            name: otel-collector-secrets
#            items:
#              - key: cert.pem
#                path: cert.pem
#              - key: key.pem
#                path: key.pem

Deploying deepflow to monitor a single k8s cluster

Official documentation
Official demo

Install helm

snap install helm --classic

Set up the PV (default StorageClass)

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
## config default storage class
kubectl patch storageclass openebs-hostpath  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
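
A quick check (not in the original): confirm the openebs pods are running and that openebs-hostpath is the default StorageClass.

kubectl get pods -n openebs
kubectl get storageclass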

Deploy deepflow

helm repo add deepflow https://deepflowio.github.io/deepflow
helm repo update deepflow # use `helm repo update` when helm < 3.7.0
helm install deepflow -n deepflow deepflow/deepflow --create-namespace
# the output looks like this
NAME: deepflow
LAST DEPLOYED: Tue May 14 14:13:50 2024
NAMESPACE: deepflow
STATUS: deployed
REVISION: 1
NOTES:
██████╗ ███████╗███████╗██████╗ ███████╗██╗      ██████╗ ██╗    ██╗
██╔══██╗██╔════╝██╔════╝██╔══██╗██╔════╝██║     ██╔═══██╗██║    ██║
██║  ██║█████╗  █████╗  ██████╔╝█████╗  ██║     ██║   ██║██║ █╗ ██║
██║  ██║██╔══╝  ██╔══╝  ██╔═══╝ ██╔══╝  ██║     ██║   ██║██║███╗██║
██████╔╝███████╗███████╗██║     ██║     ███████╗╚██████╔╝╚███╔███╔╝
╚═════╝ ╚══════╝╚══════╝╚═╝     ╚═╝     ╚══════╝ ╚═════╝  ╚══╝╚══╝ 

An automated observability platform for cloud-native developers.

# deepflow-agent Port for receiving trace, metrics, and log

deepflow-agent service: deepflow-agent.deepflow
deepflow-agent Host listening port: 38086

# Get the Grafana URL to visit by running these commands in the same shell

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

Install deepflow-ctl on the node

curl -o /usr/bin/deepflow-ctl https://deepflow-ce.oss-cn-beijing.aliyuncs.com/bin/ctl/stable/linux/$(arch | sed 's|x86_64|amd64|' | sed 's|aarch64|arm64|')/deepflow-ctl
chmod a+x /usr/bin/deepflow-ctl
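
A quick check (not in the original): confirm the deepflow pods are running and that the agent has registered (assuming the installed CLI supports `deepflow-ctl agent list`).

kubectl get pods -n deepflow
deepflow-ctl agent list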

Access the Grafana page

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

Ubuntu-22-LTS部署k8s和deepflow

环境详情:
Static hostname: k8smaster.example.net
Icon name: computer-vm
Chassis: vm
Machine ID: 22349ac6f9ba406293d0541bcba7c05d
Boot ID: 605a74a509724a88940bbbb69cde77f2
Virtualization: vmware
Operating System: Ubuntu 22.04.4 LTS
Kernel: Linux 5.15.0-106-generic
Architecture: x86-64
Hardware Vendor: VMware, Inc.
Hardware Model: VMware Virtual Platform

当您在 Ubuntu 22.04 上安装 Kubernetes 集群时,您可以遵循以下步骤:

  1. 设置主机名并在 hosts 文件中添加条目

    • 登录到主节点并使用 hostnamectl 命令设置主机名:

      hostnamectl set-hostname "k8smaster.example.net"
      
    • 在工作节点上,运行以下命令设置主机名(分别对应第一个和第二个工作节点):

      hostnamectl set-hostname "k8sworker1.example.net"  # 第一个工作节点
      hostnamectl set-hostname "k8sworker2.example.net"  # 第二个工作节点
      
    • 在每个节点的 /etc/hosts 文件中添加以下条目:

      10.1.1.70 k8smaster.example.net k8smaster
      10.1.1.71 k8sworker1.example.net k8sworker1
      
  2. 禁用 swap 并添加内核设置

    • 在所有节点上执行以下命令以禁用交换功能:

      swapoff -a
      sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
      
    • 加载以下内核模块:

      tee /etc/modules-load.d/containerd.conf <<EOF
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      
    • 为 Kubernetes 设置以下内核参数:

      tee /etc/sysctl.d/kubernetes.conf <<EOF
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
      EOF
      sysctl --system
      
  3. 安装 containerd 运行时

    • 首先安装 containerd 的依赖项:

      apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
      
    • 启用 Docker 存储库:

      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
      add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      
    • 安装 containerd:

      apt update
      apt install -y containerd.io
      
    • 配置 containerd 使用 systemd 作为 cgroup:

      containerd config default | tee /etc/containerd/config.toml > /dev/null 2>&1
      sed -i 's/SystemdCgroup\\=false/SystemdCgroup\\=true/g' /etc/containerd/config.toml
      

      部分配置手动修改

      disabled_plugins = []
      imports = []
      oom_score = 0
      plugin_dir = ""
      required_plugins = []
      root = "/var/lib/containerd"
      state = "/run/containerd"
      temp = ""
      version = 2
      
      [cgroup]
      path = ""
      
      [debug]
      address = ""
      format = ""
      gid = 0
      level = ""
      uid = 0
      
      [grpc]
      address = "/run/containerd/containerd.sock"
      gid = 0
      max_recv_message_size = 16777216
      max_send_message_size = 16777216
      tcp_address = ""
      tcp_tls_ca = ""
      tcp_tls_cert = ""
      tcp_tls_key = ""
      uid = 0
      
      [metrics]
      address = ""
      grpc_histogram = false
      
      [plugins]
      
      [plugins."io.containerd.gc.v1.scheduler"]
          deletion_threshold = 0
          mutation_threshold = 100
          pause_threshold = 0.02
          schedule_delay = "0s"
          startup_delay = "100ms"
      
      [plugins."io.containerd.grpc.v1.cri"]
          device_ownership_from_security_context = false
          disable_apparmor = false
          disable_cgroup = false
          disable_hugetlb_controller = true
          disable_proc_mount = false
          disable_tcp_service = true
          drain_exec_sync_io_timeout = "0s"
          enable_selinux = false
          enable_tls_streaming = false
          enable_unprivileged_icmp = false
          enable_unprivileged_ports = false
          ignore_deprecation_warnings = []
          ignore_image_defined_volumes = false
          max_concurrent_downloads = 3
          max_container_log_line_size = 16384
          netns_mounts_under_state_dir = false
          restrict_oom_score_adj = false
          # 修改以下这行
          sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
          selinux_category_range = 1024
          stats_collect_period = 10
          stream_idle_timeout = "4h0m0s"
          stream_server_address = "127.0.0.1"
          stream_server_port = "0"
          systemd_cgroup = false
          tolerate_missing_hugetlb_controller = true
          unset_seccomp_profile = ""
      
          [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/cni/bin"
          conf_dir = "/etc/cni/net.d"
          conf_template = ""
          ip_pref = ""
          max_conf_num = 1
      
          [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "runc"
          disable_snapshot_annotations = true
          discard_unpacked_layers = false
          ignore_rdt_not_enabled_errors = false
          no_pivot = false
          snapshotter = "overlayfs"
      
          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
      
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = "io.containerd.runc.v2"
      
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                  BinaryName = ""
                  CriuImagePath = ""
                  CriuPath = ""
                  CriuWorkPath = ""
                  IoGid = 0
                  IoUid = 0
                  NoNewKeyring = false
                  NoPivotRoot = false
                  Root = ""
                  ShimCgroup = ""
                  SystemdCgroup = true
      
          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".image_decryption]
          key_model = "node"
      
          [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = ""
      
          [plugins."io.containerd.grpc.v1.cri".registry.auths]
      
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
      
          [plugins."io.containerd.grpc.v1.cri".registry.headers]
      
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
              # 添加如下4行
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
              endpoint = ["https://docker.mirrors.ustc.edu.cn"]
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
              endpoint = ["https://registry.aliyuncs.com/google_containers"]
      
          [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
          tls_cert_file = ""
          tls_key_file = ""
      
      [plugins."io.containerd.internal.v1.opt"]
          path = "/opt/containerd"
      
      [plugins."io.containerd.internal.v1.restart"]
          interval = "10s"
      
      [plugins."io.containerd.internal.v1.tracing"]
          sampling_ratio = 1.0
          service_name = "containerd"
      
      [plugins."io.containerd.metadata.v1.bolt"]
          content_sharing_policy = "shared"
      
      [plugins."io.containerd.monitor.v1.cgroups"]
          no_prometheus = false
      
      [plugins."io.containerd.runtime.v1.linux"]
          no_shim = false
          runtime = "runc"
          runtime_root = ""
          shim = "containerd-shim"
          shim_debug = false
      
      [plugins."io.containerd.runtime.v2.task"]
          platforms = ["linux/amd64"]
          sched_core = false
      
      [plugins."io.containerd.service.v1.diff-service"]
          default = ["walking"]
      
      [plugins."io.containerd.service.v1.tasks-service"]
          rdt_config_file = ""
      
      [plugins."io.containerd.snapshotter.v1.aufs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.btrfs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.devmapper"]
          async_remove = false
          base_image_size = ""
          discard_blocks = false
          fs_options = ""
          fs_type = ""
          pool_name = ""
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.native"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.overlayfs"]
          mount_options = []
          root_path = ""
          sync_remove = false
          upperdir_label = false
      
      [plugins."io.containerd.snapshotter.v1.zfs"]
          root_path = ""
      
      [plugins."io.containerd.tracing.processor.v1.otlp"]
          endpoint = ""
          insecure = false
          protocol = ""
      
      [proxy_plugins]
      
      [stream_processors]
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar"
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar+gzip"
      
      [timeouts]
      "io.containerd.timeout.bolt.open" = "0s"
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"
      
      [ttrpc]
      address = ""
      gid = 0
      uid = 0
      
    • 重启并启用容器服务:

      systemctl restart containerd
      systemctl enable containerd
      
    • 设置crictl

      cat > /etc/crictl.yaml <<EOF
      runtime-endpoint: unix:///var/run/containerd/containerd.sock
      image-endpoint: unix:///var/run/containerd/containerd.sock
      timeout: 10
      debug: false
      pull-image-on-create: false
      EOF
      
  4. 添加阿里云的 Kubernetes 源

    • 首先,导入阿里云的 GPG 密钥:

      curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
      
    • 然后,添加阿里云的 Kubernetes 源:

      tee /etc/apt/sources.list.d/kubernetes.list <<EOF
      deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
      EOF
      
  5. 安装 Kubernetes 组件

    • 更新软件包索引并安装 kubelet、kubeadm 和 kubectl:

      apt-get update
      apt-get install -y kubelet kubeadm kubectl
      
    • 设置 kubelet 使用 systemd 作为 cgroup 驱动:

      # 可忽略
      # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /var/lib/kubelet/kubeadm-flags.env
      # systemctl daemon-reload
      # systemctl restart kubelet
      
  6. 初始化 Kubernetes 集群

    • 使用 kubeadm 初始化集群,并指定阿里云的镜像仓库:

      # kubeadm init --image-repository registry.aliyuncs.com/google_containers
      I0513 14:16:59.740096   17563 version.go:256] remote version is much newer: v1.30.0; falling back to: stable-1.28
      [init] Using Kubernetes version: v1.28.9
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      W0513 14:17:01.440936   17563 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.8" of the container runtime         is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      [certs] Generating "ca" certificate and key
      [certs] Generating "apiserver" certificate and key
      [certs] apiserver serving cert is signed for DNS names [k8smaster.example.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.        cluster.local] and IPs [10.96.0.1 10.1.1.70]
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to         4m0s
      [apiclient] All control plane components are healthy after 4.002079 seconds
      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
      [upload-certs] Skipping phase. Please see --upload-certs
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.        io/exclude-from-external-load-balancers]
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
      [bootstrap-token] Using token: m9z4yq.dok89ro6yt23wykr
      [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
      [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
      [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
      [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
      [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
      [addons] Applied essential addon: CoreDNS
      [addons] Applied essential addon: kube-proxy
      
      Your Kubernetes control-plane has initialized successfully!
      
      To start using your cluster, you need to run the following as a regular user:
      
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
      Alternatively, if you are the root user, you can run:
      
        export KUBECONFIG=/etc/kubernetes/admin.conf
      
      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/
      
      Then you can join any number of worker nodes by running the following on each as root:
      
      kubeadm join 10.1.1.70:6443 --token m9z4yq.dok89ro6yt23wykr \
              --discovery-token-ca-cert-hash sha256:17c3f29bd276592e668e9e6a7a187140a887254b4555cf7d293c3313d7c8a178 
      
  7. 配置 kubectl

    • 为当前用户设置 kubectl 访问:

      mkdir -p $HOME/.kube
      cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      chown $(id -u):$(id -g) $HOME/.kube/config
      
  8. 安装网络插件

    • 安装一个 Pod 网络插件,例如 Calico 或 Flannel。例如,使用 Calico:

      kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      # 网络插件初始化完毕之后,coredns容器就正常了
      kubectl logs -n kube-system -l k8s-app=kube-dns
      
  9. 验证集群

    • 启动一个nginx pod:

      # vim nginx_pod.yml
      apiVersion: v1
      kind: Pod
      metadata:
        name: test-nginx-pod
        namespace: test
        labels:
          app: nginx
      spec:
        containers:
        - name: test-nginx-container
          image: nginx:latest
          ports:
          - containerPort: 80
        tolerations:
          - key: "node-role.kubernetes.io/control-plane"
            operator: "Exists"
            effect: "NoSchedule"
      ---
      
      apiVersion: v1
      kind: Service
      # service和pod必须位于同一个namespace
      metadata:
        name: nginx-service
        namespace: test
      spec:
        type: NodePort
        # selector应该匹配pod的labels
        selector:
          app: nginx
        ports:
        - protocol: TCP
          port: 80
          nodePort: 30007
          targetPort: 80
      

      启动

      kubectl apply -f nginx_pod.yml
      

部署opentelemetry-collector测试

otel-collector和otel-agent需要程序集成API,发送到以DaemonSet运行在每个节点的otel-agent,otel-agent再将数据发送给otel-collector汇总,然后发往可以处理otlp trace数据的后端,如zipkin、jaeger等。

自定义测试yaml文件

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  namespace: default
data:
  # 你的配置数据
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: debug
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]

---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  type: NodePort
  ports:
    - port: 4317
      targetPort: 4317
      nodePort: 30080
      name: otlp-grpc
    - port: 8888
      targetPort: 8888
      name: metrics
  selector:
    component: otel-collector

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      component: otel-collector
  template:
    metadata:
      labels:
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector:latest
        ports:
        - containerPort: 4317
        - containerPort: 8888
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
      volumes:
      - configMap:
          name: otel-collector-conf
        name: otel-collector-config-vol

启动

mkdir /conf
kubectl apply -f otel-collector.yaml
kubectl get -f otel-collector.yaml

删除

kubectl delete -f otel-collector.yaml

使用官方提供示例

kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-collector/main/examples/k8s/otel-config.yaml

根据需要修改文件

otel-config.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-conf
  labels:
    app: opentelemetry
    component: otel-agent-conf
data:
  otel-agent-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    exporters:
      otlp:
        endpoint: "otel-collector.default:4317"
        tls:
          insecure: true
        sending_queue:
          num_consumers: 4
          queue_size: 100
        retry_on_failure:
          enabled: true
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 400
        # 25% of limit up to 2G
        spike_limit_mib: 100
        check_interval: 5s
    extensions:
      zpages: {}
    service:
      extensions: [zpages]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlp]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-agent
  labels:
    app: opentelemetry
    component: otel-agent
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-agent
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-agent-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-agent
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 55679 # ZPages endpoint.
        - containerPort: 4317 # Default OpenTelemetry receiver port.
        - containerPort: 8888  # Metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 400MiB
        volumeMounts:
        - name: otel-agent-config-vol
          mountPath: /conf
      volumes:
        - configMap:
            name: otel-agent-conf
            items:
              - key: otel-agent-config
                path: otel-agent-config.yaml
          name: otel-agent-config-vol
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 1500
        # 25% of limit up to 2G
        spike_limit_mib: 512
        check_interval: 5s
    extensions:
      zpages: {}
    exporters:
      otlp:
        endpoint: "http://someotlp.target.com:4317" # Replace with a real endpoint.
        tls:
          insecure: true
      zipkin:
        endpoint: "http://10.1.1.10:9411/api/v2/spans"
        format: "proto"
    service:
      extensions: [zpages]
      pipelines:
        traces/1:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [zipkin]
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
  - name: otlp-grpc # Default endpoint for OpenTelemetry gRPC receiver.
    port: 4317
    protocol: TCP
    targetPort: 4317
  - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver.
    port: 4318
    protocol: TCP
    targetPort: 4318
  - name: metrics # Default endpoint for querying metrics.
    port: 8888
  selector:
    component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  minReadySeconds: 5
  progressDeadlineSeconds: 120
  replicas: 1 #TODO - adjust this to your own requirements
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-collector-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-collector
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 200m
            memory: 400Mi
        ports:
        - containerPort: 55679 # Default endpoint for ZPages.
        - containerPort: 4317 # Default endpoint for OpenTelemetry receiver.
        - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver.
        - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
        - containerPort: 9411 # Default endpoint for Zipkin receiver.
        - containerPort: 8888  # Default endpoint for querying metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 1600MiB
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
#        - name: otel-collector-secrets
#          mountPath: /secrets
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
#        - secret:
#            name: otel-collector-secrets
#            items:
#              - key: cert.pem
#                path: cert.pem
#              - key: key.pem
#                path: key.pem
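
以上即为按需修改后的完整示例。可以按如下方式应用该文件，并确认 otel-agent DaemonSet 与 otel-collector Deployment 均已就绪（示意命令，假设文件保存为 otel-config.yaml、资源创建在 default 命名空间）：

kubectl apply -f otel-config.yaml
kubectl get daemonset otel-agent
kubectl get deployment,service otel-collector
# 查看 collector 日志，确认 exporter 正常工作
kubectl logs deployment/otel-collector --tail=20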

部署deepflow监控单个k8s集群

官方文档
官方demo

安装helm

snap install helm --classic

设置pv

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
## config default storage class
kubectl patch storageclass openebs-hostpath  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
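
openebs 组件启动后，可确认默认 StorageClass 已生效（示意命令，openebs-operator 默认部署在 openebs 命名空间）：

kubectl -n openebs get pods
kubectl get storageclass
# 预期 openebs-hostpath 一项带有 (default) 标记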

部署deepflow

helm repo add deepflow https://deepflowio.github.io/deepflow
helm repo update deepflow # use `helm repo update` when helm < 3.7.0
helm install deepflow -n deepflow deepflow/deepflow --create-namespace
# 显示如下
NAME: deepflow
LAST DEPLOYED: Tue May 14 14:13:50 2024
NAMESPACE: deepflow
STATUS: deployed
REVISION: 1
NOTES:
██████╗ ███████╗███████╗██████╗ ███████╗██╗      ██████╗ ██╗    ██╗
██╔══██╗██╔════╝██╔════╝██╔══██╗██╔════╝██║     ██╔═══██╗██║    ██║
██║  ██║█████╗  █████╗  ██████╔╝█████╗  ██║     ██║   ██║██║ █╗ ██║
██║  ██║██╔══╝  ██╔══╝  ██╔═══╝ ██╔══╝  ██║     ██║   ██║██║███╗██║
██████╔╝███████╗███████╗██║     ██║     ███████╗╚██████╔╝╚███╔███╔╝
╚═════╝ ╚══════╝╚══════╝╚═╝     ╚═╝     ╚══════╝ ╚═════╝  ╚══╝╚══╝ 

An automated observability platform for cloud-native developers.

# deepflow-agent Port for receiving trace, metrics, and log

deepflow-agent service: deepflow-agent.deepflow
deepflow-agent Host listening port: 38086

# Get the Grafana URL to visit by running these commands in the same shell

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

节点安装deepflow-ctl

curl -o /usr/bin/deepflow-ctl https://deepflow-ce.oss-cn-beijing.aliyuncs.com/bin/ctl/stable/linux/$(arch | sed 's|x86_64|amd64|' | sed 's|aarch64|arm64|')/deepflow-ctl
chmod a+x /usr/bin/deepflow-ctl
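
安装完成后可用 deepflow-ctl 做一次简单验证，例如列出已接入的云平台和采集器（示意命令，需在可访问 deepflow-server 的节点上执行）：

deepflow-ctl domain list
deepflow-ctl agent list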

访问grafana页面

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

（此处为 4 张 Grafana 页面截图）

FAQ

如何将pod的端口暴露出来

在 Kubernetes 中,如果你想要将 Pod 的端口暴露给集群外部的用户或服务访问,你可以通过创建 Service 来实现。Service 提供了几种不同的类型来支持不同的用例和网络需求。以下是一些常见的方法:

  1. NodePort:这种类型的 Service 会在集群的所有节点上开放一个指定的端口(通常在 30000-32767 范围内),任何发送到这个端口的流量都会被转发到对应的 Pod。这是最简单的方式,但它会占用节点上的端口。

    示例 YAML 配置:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: NodePort
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
          nodePort: 30007
    
  2. LoadBalancer:这种类型的 Service 通常由云提供商支持,它会为 Service 分配一个外部 IP 地址,所有发送到这个 IP 地址的流量都会被转发到 Pod。这种方式适用于需要直接从互联网访问服务的场景。

    示例 YAML 配置:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: LoadBalancer
    
  3. Ingress:Ingress 是一个 API 对象,它定义了外部访问到集群服务的规则。它可以提供负载均衡、SSL 终止和基于名称的虚拟托管。这是一个更高级的暴露方式,可以让你使用一个单一的入口点来管理多个服务的访问。

    示例 YAML 配置:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
      - host: www.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
    
  4. Port Forwarding:如果你只是临时需要从本地机器访问 Pod,可以使用 kubectl port-forward 命令将本地端口转发到 Pod 中的端口。

    示例命令:

    kubectl port-forward pods/my-pod 8080:80
    

这些方法可以根据你的具体需求和环境来选择使用。
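
例如，按第 1 种方式创建 NodePort Service 之后，可以在集群外用任一节点 IP 做快速验证（示意命令，30007 为上文示例中的 nodePort，请按实际配置替换）：

curl http://<节点IP>:30007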

deepflow是一个可观测平台吗

是的,DeepFlow 是一个可观测性平台。它是由云杉网络开发的,旨在为复杂的云基础设施及云原生应用提供深度可观测性。DeepFlow 基于 eBPF 技术实现了应用性能指标、分布式追踪、持续性能剖析等观测信号的零侵扰采集,并结合智能标签技术实现了所有观测信号的全栈关联和高效存取¹。

DeepFlow 提供了以下核心特性:

  • 全景图:利用 eBPF 技术零侵扰绘制生产环境的服务全景图。
  • 分布式追踪:基于 eBPF 和 Wasm 技术零侵扰实现分布式追踪。
  • 持续性能剖析:利用 eBPF 技术以低开销零侵扰采集生产环境进程的性能剖析数据。
  • 存储性能优化:通过 SmartEncoding 机制大幅降低存储成本。
  • 无缝集成:可以作为 Prometheus、OpenTelemetry、SkyWalking、Pyroscope 的存储后端,也提供 SQL、PromQL、OTLP 等数据接口¹。

DeepFlow 旨在解决传统 APM 方案中的侵扰性和观测盲点问题,提供一种更高效、自动化的可观测性解决方案。

deepflow会自动监控k8s中的pod应用吗

是的,DeepFlow 可以自动监控 Kubernetes 中的 Pod 应用。DeepFlow 是一个可观测性平台,它基于 eBPF 技术实现了应用性能指标、分布式追踪、持续性能剖析等观测信号的零侵扰采集,并结合智能标签技术实现了所有观测信号的全栈关联和高效存取。

以下是 DeepFlow 在 Kubernetes 中自动监控 Pod 应用的一些特性:

  1. 零侵扰采集:
    DeepFlow 使用 eBPF 技术,无需修改应用程序代码,即可自动收集应用性能指标、分布式追踪数据和持续性能剖析信息。

  2. 全栈关联:
    DeepFlow 通过智能标签技术,将所有观测信号(指标、日志、追踪)关联到全栈中的不同组件,帮助您更好地理解应用程序的整体性能。

  3. 自动标记 K8s 资源:
    DeepFlow 自动标记 Kubernetes 资源(如 Pod、Service、Deployment 等),使您可以轻松地按照资源维度进行监控。

  4. 持续性能剖析:
    DeepFlow 可以低开销地采集生产环境进程的性能剖析数据,帮助您深入分析应用程序的性能瓶颈。

总之,DeepFlow 是一个强大的可观测性工具,可以自动监控 Kubernetes 中的应用程序,帮助您更好地理解和优化应用性能。
