Level 12: Mastering the Ingress-Nginx Controller on K8s: A Hands-On Production Configuration Guide


------> The course videos are also published on Toutiao and Bilibili

Hi everyone, I'm Boge (博哥爱运维). This lesson covers the k8s traffic entry point, Ingress. As the public-facing entrance to your services, its importance speaks for itself, so read carefully and follow along with the hands-on steps to build a real understanding.

Ingress Basics

In a Kubernetes cluster, Ingress is the access point through which in-cluster services are exposed to the outside, and it carries almost all of the traffic destined for those services. Ingress is a Kubernetes resource object that manages how external clients reach services inside the cluster. Through Ingress resources you can configure different forwarding rules, so that requests matching different rules reach the backend Pods of different Services.

An Ingress resource only describes rules for HTTP traffic; advanced features such as load-balancing algorithms or session affinity cannot be expressed there and must be configured on the Ingress Controller instead.
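
For example, with the ingress-nginx controller, session affinity is switched on through annotations on the Ingress object rather than through any field of the Ingress spec itself. A minimal sketch (the host demo.example.com and Service demo are placeholders):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    # cookie-based sticky sessions: a controller feature, not part of the Ingress spec
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80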

How the Ingress Controller Works

For Ingress resources to take effect, the cluster must run an Ingress Controller that interprets the Ingress forwarding rules. When the controller receives a request, it matches it against the Ingress rules and forwards it to the backend Service; the Service forwards to a Pod, and the Pod ultimately handles the request. In Kubernetes, Service, Ingress, and Ingress Controller relate to each other as follows:

  • A Service is an abstraction over the real backends; one Service can represent multiple identical backend instances.
  • An Ingress is a set of reverse-proxy rules specifying which Service an HTTP/HTTPS request should be forwarded to, for example routing requests to different Services based on the Host header and URL path.
  • An Ingress Controller is the reverse-proxy program that interprets those rules. Whenever an Ingress is created, updated, or deleted, the controller promptly refreshes its forwarding rules; when a request arrives, it forwards it to the matching Service according to those rules.

The Ingress Controller watches the API server for Ingress changes, dynamically generates the configuration file the load balancer needs (e.g. nginx.conf for Nginx), and then reloads the load balancer (e.g. runs nginx -s reload) so the new routing rules take effect.
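
Once the controller is running you can inspect the rendered configuration yourself; a quick sanity check (the pod name is illustrative, pick one from kubectl -n kube-system get pod):

# dump the nginx.conf that the controller generated from the Ingress rules
kubectl -n kube-system exec nginx-ingress-controller-x87rp -- \
  cat /etc/nginx/nginx.conf | grep -E 'server_name|location' | head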

This lesson uses quite a lot of YAML, and I've found that Toutiao mangles the formatting, so I keep a copy of the notes on my GitHub. Copy the YAML from there when creating the services:

https://github.com/bogeit/LearnK8s/blob/main/2023/%E7%AC%AC12%E5%85%B3%20%20%E7%94%9F%E4%BA%A7%E7%8E%AF%E5%A2%83%E4%B8%8B%E7%9A%84%E6%B5%81%E9%87%8F%E5%85%A5%E5%8F%A3%E6%8E%A7%E5%88%B6%E5%99%A8Ingress-nginx-controller.md

Earlier we learned to reach Pod resources through a Service, and to provide access from outside the K8s cluster by changing the Service type to NodePort and then mapping a public IP's ports to it, but that is not a particularly elegant approach.
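
As a quick reminder of what that looks like, a NodePort Service simply opens the same high port on every node. A minimal sketch (name and ports are placeholders):

---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # must fall within the default 30000-32767 range

Every service exposed this way consumes a node port and still needs external port mapping or a load balancer in front, which is exactly the sprawl that Ingress consolidates behind a single entry point.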

Typically, Services and Pods are reachable only from within the cluster network; any traffic arriving at the edge router is dropped or forwarded elsewhere.
Conceptually, that looks like this:

    internet
        |
  ------------
  [ Services ]

Alternatively, we can admit external traffic with a LoadBalancer Service, but in real production this mode is inconvenient, and it means every service needs its own load balancer and its own public IP.

This is why we use Ingress: a single public IP can serve every service in the K8s cluster. Ingress operates at layer 7 (HTTP) and decides which service a request goes to based on its hostname and path, as shown below:

(figure: Ingress routing external requests to different Services by hostname and path)

Ingress is a set of rules that allow inbound connections to reach cluster services; in effect, it is a set of forwarding rules sitting between the physical network and the cluster's Services.
In essence it implements L4/L7 load balancing.
Note: Ingress does not forward external traffic to the Pods through the Service itself; it only uses the Service to look up the corresponding Endpoints, and forwards to the discovered Pods directly.

   
    internet
        |
   [ Ingress ]   ---> [ Services ] ---> [ Endpoint ]
   --|-----|--                                 |
   [ Pod,pod,...... ]<-------------------------|
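
You can verify this endpoint-based forwarding yourself: the controller resolves a Service to its Endpoints and proxies straight to the Pod IPs (shown here with the new-nginx Service we create later in this lesson; output abbreviated):

# the controller proxies to these Pod IPs directly rather than through the Service's cluster IP
kubectl get endpoints new-nginx
# NAME        ENDPOINTS                 AGE
# new-nginx   172.20.247.18:80,...      2m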

To use Ingress on K8s, we must deploy an ingress controller into the cluster; only with one running can Ingress work at all. There are many controllers to choose from, such as Traefik, but here we use ingress-nginx, whose underpinnings are OpenResty: Nginx fused with a set of Lua rules.

Now for the key point. As I keep stressing in this course, everything here comes from production experience, so the ingress-nginx we use is not the plain community build. It is a fork of the community version by Alibaba Cloud, China's largest cloud platform, battle-tested under very large production traffic and heavily customized; it is more production-ready and essentially works out of the box. Here is an introduction to aliyun-ingress-controller:

The excerpt below covers only the latest portion; for more, see the official docs: https://help.aliyun.com/zh/ack/product-overview/nginx-ingress-controller#title-ek8-hx4-hlm

Component overview
Ingress basics
In a Kubernetes cluster, Ingress is the access point through which in-cluster services are exposed externally, carrying almost all traffic destined for those services. Ingress is a Kubernetes resource object used to manage how external access reaches services inside the cluster. You can configure different forwarding rules on Ingress resources so that different rules route requests to the backend Pods behind different Services.

How Nginx Ingress Controller works
For Nginx Ingress resources to take effect, the cluster must run an Nginx Ingress Controller that interprets the Nginx Ingress forwarding rules. The controller receives a request, matches it against the rules, and forwards it to the Pods behind the target Service, which then handle it. In Kubernetes, Service, Nginx Ingress, and Nginx Ingress Controller relate as follows:

A Service is an abstraction over the real backends; one Service can represent multiple identical backend instances.

An Nginx Ingress is a set of reverse-proxy rules specifying which Service's Pods an HTTP/HTTPS request should be forwarded to, for example routing requests to different Services' Pods based on the Host header and URL path.

The Nginx Ingress Controller is the cluster component that interprets those rules. Whenever an Nginx Ingress is created, updated, or deleted, the controller promptly refreshes its forwarding rules, and incoming requests are then forwarded to the matching Service's Pods.



Changelog
October 2023

Version: v1.9.3-aliyun.1

Image: registry-cn-hangzhou.ack.aliyuncs.com/acs/aliyun-ingress-controller:v1.9.3-aliyun.1

Released: October 24, 2023

Changes
Important:
For security reasons, starting from this version the component disables all snippet annotations (such as nginx.ingress.kubernetes.io/configuration-snippet) by default.

Given the security and stability risks, enabling snippet annotations is not recommended. If you must use them, evaluate the risks first, then enable them manually by adding allow-snippet-annotations: "true" to the ConfigMap kube-system/nginx-configuration.

Adding snippets via annotations is now off by default.

Added the --enable-annotation-validation flag; annotation content validation is enabled by default, mitigating CVE-2023-5044.

Fixed CVE-2023-44487.

Impact

Upgrade during off-peak hours; established connections may experience brief interruptions during the change.

aliyun-ingress-controller carries one very important modification: it supports dynamic updates of the routing configuration. Anyone who has used Nginx knows that after changing its configuration you must run nginx -s reload for it to take effect, and on K8s the behavior is the same. But a K8s cluster runs a great many services, so its configuration changes very frequently; without dynamic updates, the constant reloads in such high-churn scenarios cause noticeable problems for request traffic:

  1. QPS jitter and a share of failed requests
  2. Long-lived connections being dropped repeatedly
  3. Large numbers of Nginx worker processes stuck in the shutting-down state, which in turn bloats memory usage

A detailed analysis of the underlying mechanism is in this article: https://developer.aliyun.com/article/692732
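
A simple way to observe the difference: change only a backend's Endpoints (not the Ingress or ConfigMap) and watch the controller logs. With dynamic upstream updates, endpoint churn should not trigger a full reload (the deployment name is illustrative; any backend works):

# scale a backend so that only its endpoints change
kubectl scale deployment new-nginx --replicas=5
# a full reload logs "Backend successfully reloaded"; endpoint-only changes should not produce it
kubectl -n kube-system logs ds/nginx-ingress-controller | grep -i reload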

Let's get ready to deploy aliyun-ingress-controller. Below is the exact YAML configuration we use in production; save it as aliyun-ingress-nginx.yaml and prepare to deploy:

We'll walk through every part of the YAML below in detail.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader-nginx
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-controller-leader-nginx
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: kube-system


# https://www.cnblogs.com/dudu/p/12197646.html
#---
#apiVersion: monitoring.coreos.com/v1
#kind: ServiceMonitor
#metadata:
#  labels:
#    app: ingress-nginx
#  name: nginx-ingress-scraping
#  namespace: kube-system
#spec:
#  endpoints:
#  - interval: 30s
#    path: /metrics
#    port: metrics
#  jobLabel: app
#  namespaceSelector:
#    matchNames:
#    - ingress-nginx
#  selector:
#    matchLabels:
#      app: ingress-nginx

---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: ingress-nginx
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  # when running as a DaemonSet (hostNetwork):
  # ----------------
  type: ClusterIP
  # ----------------
  # when running as a Deployment:
  # ----------------
#  type: NodePort
  # ----------------
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: metrics
    port: 10254
    protocol: TCP
    targetPort: 10254
  selector:
    app: ingress-nginx

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-admission
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: 8443
  selector:
    app: ingress-nginx

---
# all configmaps means:
# https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: kube-system
  labels:
    app: ingress-nginx
data:
  allow-snippet-annotations: "true"
  allow-backend-server-header: "true"
  disable-access-log: "false"
  enable-underscores-in-headers: "true"
  generate-request-id: "true"
  ignore-invalid-headers: "true"
  keep-alive: "900"
  keep-alive-requests: "10000"
  large-client-header-buffers: 5 20k
  log-format-upstream: '{"@timestamp": "$time_iso8601","remote_addr": "$remote_addr","x-forward-for": "$proxy_add_x_forwarded_for","request_id": "$req_id","remote_user": "$remote_user","bytes_sent": $bytes_sent,"request_time": $request_time,"status": $status,"vhost": "$host","request_proto": "$server_protocol","path": "$uri","request_query": "$args","request_length": $request_length,"duration": $request_time,"method": "$request_method","http_referrer": "$http_referer","http_user_agent":  "$http_user_agent","upstream-sever":"$proxy_upstream_name","proxy_alternative_upstream_name":"$proxy_alternative_upstream_name","upstream_addr":"$upstream_addr","upstream_response_length":$upstream_response_length,"upstream_response_time":$upstream_response_time,"upstream_status":$upstream_status}'
  max-worker-connections: "65536"
  proxy-body-size: 20m
  proxy-connect-timeout: "10"
  proxy-read-timeout: "60"
  proxy-send-timeout: "60"
  reuse-port: "true"
  server-tokens: "false"
  ssl-redirect: "false"
  upstream-keepalive-connections: "300"
  upstream-keepalive-requests: "1000"
  upstream-keepalive-timeout: "900"
  worker-cpu-affinity: ""
  worker-processes: "1"
  http-redirect-code: "301"
  proxy-next-upstream: error timeout http_502
  ssl-ciphers: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
  ssl-protocols: TLSv1 TLSv1.1 TLSv1.2


---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: kube-system
  labels:
    app: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: kube-system
  labels:
    app: ingress-nginx

---
apiVersion: apps/v1
kind: DaemonSet
#kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
  labels:
    app: ingress-nginx
  annotations:
    component.revision: "2"
    component.version: 1.9.3
spec:
  # when running as a Deployment:
  # ----------------
#  replicas: 1
  # ----------------
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # when running as a DaemonSet:
      # ----------------
      hostNetwork: true
      # ----------------
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - ingress-nginx
              topologyKey: kubernetes.io/hostname
            weight: 100
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
              - key: k8s.aliyun.com
                operator: NotIn
                values:
                - "true"
      containers:
      - args:
          - /nginx-ingress-controller
          - --election-id=ingress-controller-leader-nginx
          - --ingress-class=nginx
          - --watch-ingress-without-class
          - --controller-class=k8s.io/ingress-nginx
          - --configmap=$(POD_NAMESPACE)/nginx-configuration
          - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
          - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          - --annotations-prefix=nginx.ingress.kubernetes.io
          - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
          - --validating-webhook=:8443
          - --validating-webhook-certificate=/usr/local/certificates/cert
          - --validating-webhook-key=/usr/local/certificates/key
          - --enable-metrics=false
          - --v=2
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: LD_PRELOAD
            value: /usr/local/lib/libmimalloc.so
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/aliyun-ingress-controller:v1.9.3-aliyun.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
                - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
        name: nginx-ingress-controller
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
          - name: https
            containerPort: 443
            protocol: TCP
          - name: webhook
            containerPort: 8443
            protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
#        resources:
#          limits:
#            cpu: 1
#            memory: 2G
#          requests:
#            cpu: 1
#            memory: 2G
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            drop:
              - ALL
            add:
              - NET_BIND_SERVICE
          runAsUser: 101
          # if you get 'mount: mounting rw on /proc/sys failed: Permission denied', use:
#          privileged: true
#          procMount: Default
#          runAsUser: 0
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          if [ "$POD_IP" != "$HOST_IP" ]; then
          mount -o remount rw /proc/sys
          sysctl -w net.core.somaxconn=65535
          sysctl -w net.ipv4.ip_local_port_range="1024 65535"
          sysctl -w kernel.core_uses_pid=0
          fi
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        image: registry.cn-shanghai.aliyuncs.com/acs/busybox:v1.29.2
        imagePullPolicy: IfNotPresent
        name: init-sysctl
        resources:
          limits:
            cpu: 100m
            memory: 70Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            drop:
            - ALL
          # if you get 'mount: mounting rw on /proc/sys failed: Permission denied', use:
          privileged: true
          procMount: Default
          runAsUser: 0
      # schedule only onto nodes carrying this label:
      # kubectl label node xx.xx.xx.xx boge/ingress-controller-ready=true
      # kubectl get node --show-labels
      # kubectl label node xx.xx.xx.xx boge/ingress-controller-ready-
      nodeSelector:
        boge/ingress-controller-ready: "true"
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ingress-nginx
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      # kubectl taint nodes xx.xx.xx.xx boge/ingress-controller-ready="true":NoExecute
      # kubectl taint nodes xx.xx.xx.xx boge/ingress-controller-ready:NoExecute-
      tolerations:
      - operator: Exists
#      tolerations:
#      - effect: NoExecute
#        key: boge/ingress-controller-ready
#        operator: Equal
#        value: "true"
      volumes:
      - name: webhook-cert
        secret:
          defaultMode: 420
          secretName: ingress-nginx-admission
      - hostPath:
          path: /etc/localtime
          type: File
        name: localtime

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-master
  namespace: kube-system
  annotations:
   ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx


---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    name: ingress-nginx
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: kube-system
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: kube-system
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  labels:
    name: ingress-nginx
  namespace: kube-system
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        name: ingress-nginx
    spec:
      containers:
        - name: create
#          image: registry-vpc.cn-hangzhou.aliyuncs.com/acs/kube-webhook-certgen:v1.1.1
          image: registry.cn-beijing.aliyuncs.com/acs/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  labels:
    name: ingress-nginx
  namespace: kube-system
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        name: ingress-nginx
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/acs/kube-webhook-certgen:v1.1.1  # if error use this image
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

---
## Service needed when running as a Deployment on Alibaba Cloud K8s:
#apiVersion: v1
#kind: Service
#metadata:
#  annotations:
#    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxxxxxxxxxxx"
#    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
#  labels:
#    app: nginx-ingress-lb
#  name: nginx-ingress-lb-boge
#  namespace: kube-system
#spec:
#  externalTrafficPolicy: Local
#  ports:
#  - name: http
#    port: 80
#    protocol: TCP
#    targetPort: 80
#  - name: https
#    port: 443
#    protocol: TCP
#    targetPort: 443
#  selector:
#    app: ingress-nginx
#  type: LoadBalancer




Now deploy it:

# kubectl  apply -f aliyun-ingress-nginx.yaml 
namespace/ingress-nginx created
serviceaccount/nginx-ingress-controller created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-controller created
service/nginx-ingress-lb created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
daemonset.apps/nginx-ingress-controller created

# Here the controller is installed as a DaemonSet
# A DaemonSet's YAML looks much like a Deployment's, but a Deployment may run multiple pod replicas on each node, whereas a DaemonSet runs exactly one pod per node
# Deploying ingress-nginx is a good opportunity to explain the DaemonSet resource

# If we list the pods now, we'll find nothing at all. Why is that?
# kubectl -n kube-system get pod
Look back at the YAML above: I used a node selector, so pods may only be scheduled onto nodes carrying the label I specified:
      nodeSelector:
        boge/ingress-controller-ready: "true"

# So let's label the nodes now
# kubectl label node 10.0.1.201 boge/ingress-controller-ready=true
node/10.0.1.201 labeled
# kubectl label node 10.0.1.202 boge/ingress-controller-ready=true
node/10.0.1.202 labeled

# The pods are then scheduled onto these two nodes and start up
# kubectl -n kube-system get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP           NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-controller-lchgr   1/1     Running   0          9m1s   10.0.1.202   10.0.1.202   <none>           <none>
nginx-ingress-controller-x87rp   1/1     Running   0          9m6s   10.0.1.201   10.0.1.201   <none>           <none>
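
Because the controller runs with hostNetwork, you can sanity-check it right on the node even before creating any Ingress (ports as defined in the manifest above):

# the controller binds 80/443 on the host, plus 10254 for health checks
ss -lntp | grep -E ':80 |:443 |:10254 '
# the liveness endpoint should answer with HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.1.201:10254/healthz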

Building on the Deployment and Service material from earlier, let's create the corresponding resources for an nginx service and save them as nginx.yaml:

Note: remember to delete the resources from the earlier tests first, to avoid conflicts.

---
kind: Service
apiVersion: v1
metadata:
  name: new-nginx
spec:
  selector:
    app: new-nginx
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 80

---
# Ingress configuration for the current k8s API (networking.k8s.io/v1)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host != 'www.boge.com' ) {
        rewrite ^ http://www.boge.com$request_uri permanent;
      }
spec:
  rules:
    - host: boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
    - host: m.boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
    - host: www.boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
#  tls:
#      - hosts:
#          - boge.com
#          - m.boge.com
#          - www.boge.com
#        secretName: boge-com-tls

# tls secret create command:
#   kubectl -n <namespace> create secret tls boge-com-tls --key boge-com.key --cert boge-com.csr

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
  labels:
    app: new-nginx
spec:
  replicas: 3  # can be sized according to the number of nodes
  selector:
    matchLabels:
      app: new-nginx
  template:
    metadata:
      labels:
        app: new-nginx
    spec:
      containers:
#--------------------------------------------------
      - name: new-nginx
        image: nginx:1.21.6
        env:
          - name: TZ
            value: Asia/Shanghai
        ports:
        - containerPort: 80
        volumeMounts:
          - name: html-files
            mountPath: "/usr/share/nginx/html"
#--------------------------------------------------
      - name: busybox
        image: registry.cn-hangzhou.aliyuncs.com/acs/busybox:v1.29.2
        args:
        - /bin/sh
        - -c
        - >
           while :; do
             if [ -f /html/index.html ];then
               echo "[$(date +%F\ %T)] ${MY_POD_NAMESPACE}-${MY_POD_NAME}-${MY_POD_IP}" > /html/index.html
               sleep 1
             else
               touch /html/index.html
             fi
           done
        env:
          - name: TZ
            value: Asia/Shanghai
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
        volumeMounts:
          - name: html-files
            mountPath: "/html"
          - mountPath: /etc/localtime
            name: tz-config

#--------------------------------------------------
      volumes:
        - name: html-files
          emptyDir:
            medium: Memory
            sizeLimit: 10Mi
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

---


Apply it:

# kubectl apply -f nginx.yaml 

# check the Ingress resource we just created
# kubectl get ingress
NAME        CLASS          HOSTS                              ADDRESS   PORTS   AGE
new-nginx   nginx-master   boge.com,m.boge.com,www.boge.com             80      8s


# On another node, add local hosts entries to test the effect
10.0.1.201 boge.com m.boge.com www.boge.com

# the request succeeds
[root@node-2 ~]# curl www.boge.com


# back on node 201, check the ingress-nginx logs
# kubectl -n kube-system  logs --tail=1 nginx-ingress-controller-nblb9
Defaulted container "nginx-ingress-controller" out of: nginx-ingress-controller, init-sysctl (init)
{"@timestamp": "2023-11-22T22:13:14+08:00","remote_addr": "10.0.1.1","x-forward-for": "10.0.1.1","request_id": "f21a1e569751fb55299ef5f1b039852d","remote_user": "-","bytes_sent": 250,"request_time": 0.003,"status": 200,"vhost": "www.boge.com","request_proto": "HTTP/2.0","path": "/","request_query": "-","request_length": 439,"duration": 0.003,"method": "GET","http_referrer": "-","http_user_agent":  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36","upstream-sever":"default-new-nginx-80","proxy_alternative_upstream_name":"","upstream_addr":"172.20.247.18:80","upstream_response_length":71,"upstream_response_time":0.003,"upstream_status":200}

HTTP transmits data in plaintext and is therefore insecure; in production we normally configure encrypted HTTPS, so let's now practice Ingress TLS configuration.

# First, let's self-sign an HTTPS certificate

#1. Generate the private key
# openssl genrsa -out boge.key 2048
Generating RSA private key, 2048 bit long modulus
..............................................................................................+++
.....+++
e is 65537 (0x10001)

#2. Then generate the TLS certificate from the key (note: I used *.boge.com, a wildcard certificate, so any new third-level domain added later can reuse it)
# openssl req -new -x509 -key boge.key -out boge.csr -days 360 -subj /CN=*.boge.com

# check the result
# ll
total 8
-rw-r--r-- 1 root root 1099 Nov 27 11:44 boge.csr
-rw-r--r-- 1 root root 1679 Nov 27 11:43 boge.key
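
With the key and certificate in hand, we can create the TLS secret referenced by the commented tls: section in nginx.yaml above, then test HTTPS. A sketch (note that despite its .csr extension, the file produced by -x509 above is actually a certificate):

# kubectl create secret tls boge-com-tls --key boge.key --cert boge.csr
# then uncomment the tls: block in the Ingress, re-apply, and verify (-k because it is self-signed):
# curl -k --resolve www.boge.com:443:10.0.1.201 https://www.boge.com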

In production, if you run your own datacenter, the ingress-nginx pods usually live on at least two nodes, so it is worth deploying load-balancing software in front of those nodes for high availability; here we use haproxy+keepalived. If your production environment is on a cloud, say Alibaba Cloud, just buy an SLB load balancer, add the nodes running the ingress-nginx pods to its backend pool, and resolve your request domains to the SLB's public IP. The communication architecture of the K8s cluster we deployed from binaries currently looks like this:

(figure: communication architecture of the binary-deployed K8s cluster, with haproxy+keepalived in front of the ingress-nginx nodes)

Note that every node already runs kube-lb, a slimmed-down nginx doing L4 load balancing to forward apiserver requests. So we only need to pick two nodes, deploy keepalived, and reconfigure kube-lb to bring up a VIP for HA. Follow this document for the concrete steps:

https://github.com/easzlab/kubeasz/blob/master/docs/setup/ex-lb.md
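
As a rough idea of what the keepalived side looks like, here is a minimal sketch for the two ingress nodes; the interface name and virtual_router_id are assumptions to adapt, and the VIP 10.0.1.222 matches the one referenced below:

# /etc/keepalived/keepalived.conf on 10.0.1.201 (on 10.0.1.202 use state BACKUP and a lower priority)
vrrp_instance VI-ingress {
    state MASTER
    interface eth0              # adjust to the node's actual NIC
    virtual_router_id 51        # must be identical on both nodes
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.1.222              # the VIP that client DNS will point at
    }
}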

If you've come this far, you should feel a small sense of achievement. Now that we know what Ingress can do for us, let's circle back and review how it works; that makes your grasp of Ingress much firmer, and it's also how I usually study.

As shown below, the client performs a DNS lookup for nginx.boge.com, and the DNS server (here, our local hosts file) returns the Ingress controller's IP (our VIP, 10.0.1.222). The client then sends an HTTP request to the Ingress controller, specifying nginx.boge.com in the Host header. From that header the controller determines which service the client wants, looks up the concrete Pod IPs through the Endpoint object associated with that service, and forwards the client's request to one of the Pods.

(figure: request flow from the client's DNS lookup through the Ingress controller to a backend Pod)
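
You can watch this resolution with kubectl too; describe shows the backends already expanded to Pod IPs (output abbreviated):

# kubectl describe ingress new-nginx
# Rules:
#   Host          Path  Backends
#   ----          ----  --------
#   www.boge.com  /     new-nginx:80 (172.20.247.18:80,...)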

Under normal production circumstances one Ingress mostly maps to one Service, but in some special cases you need to reuse a single Ingress to reach multiple services. Let's practice that next.

Create another nginx Deployment and Service, being careful to change the names so they don't clash.

# You also need to execute these commands to create the old version's resources:
# kubectl create deployment old-nginx --image=nginx:1.21.6 --replicas=1
# kubectl expose deployment old-nginx --port=80 --target-port=80

# once everything is up, test it:
#  curl -H "Host: www.boge.com" -H "foo: bar" http://10.0.1.201
#  curl -H "Host: www.boge.com"  http://10.0.1.201

# 1 pod with 2 containers, ingress-nginx L7 routing, and 2 services
---
# namespace
apiVersion: v1
kind: Namespace
metadata:
  name: test-nginx

---
# SVC
kind: Service
apiVersion: v1
metadata:
  name: new-nginx
  namespace: test-nginx
spec:
  selector:
    app: new-nginx
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 80
#      nodePort: 30088
#  type: NodePort

---
# ingress-nginx L7
#  https://yq.aliyun.com/articles/594019
#  https://help.aliyun.com/document_detail/200941.html?spm=a2c4g.11186623.6.787.254168fapBIi0A
#  KAE syntax for matching multiple query-parameter values: query("boge_id", /^aaa$|^bbb$/)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx
  namespace: test-nginx
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # match on a query parameter, e.g.  curl www.boge.com?boge_id=ID1
    nginx.ingress.kubernetes.io/service-match: |
      new-nginx: query("boge_id", /^aaa$|^bbb$/)
    # only requests whose foo header matches the regex bar are routed to the new service new-nginx
    #nginx.ingress.kubernetes.io/service-match: |
    #  new-nginx: header("foo", /^bar$/)
    # on top of the match rule above, route only 50% of the matching traffic to new-nginx
    #nginx.ingress.kubernetes.io/service-weight: |
    #    new-nginx: 50, old-nginx: 50
spec:
  rules:
    - host: www.boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx  # new version of the service
                port:
                  number: 80
            path: /
            pathType: Prefix
          - backend:
              service:
                name: old-nginx  # old version of the service
                port:
                  number: 80
            path: /
            pathType: Prefix

---
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
  namespace: test-nginx
  labels:
    app: new-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-nginx
  template:
    metadata:
      labels:
        app: new-nginx
    spec:
      containers:
#--------------------------------------------------
      - name: new-nginx
        image: nginx:1.21.6
        ports:
        - containerPort: 80
        volumeMounts:
          - name: html-files
            mountPath: "/usr/share/nginx/html"
#--------------------------------------------------
      - name: busybox
        image: registry.cn-hangzhou.aliyuncs.com/acs/busybox:v1.29.2
        args:
        - /bin/sh
        - -c
        - >
           while :; do
             if [ -f /html/index.html ];then
               echo "[$(date +%F\ %T)] hello" > /html/index.html
               sleep 1
             else
               touch /html/index.html
             fi
           done
        volumeMounts:
          - name: html-files
            mountPath: "/html"

#--------------------------------------------------
      volumes:
        - name: html-files
          # disk of the node running the pod:
          #emptyDir: {}
          # memory of the node running the pod:
          emptyDir:
            medium: Memory
            # or use temporary disk space instead:
            #medium: ""
            # if the emptyDir grows beyond 1Gi the pod is evicted, which effectively restarts it
            sizeLimit: 1Gi
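
Once everything above is applied, you can check that the query-parameter rule really splits the traffic (node IP 10.0.1.201 assumed, matching the earlier hosts entry):

# matched by query("boge_id", /^aaa$|^bbb$/)  ->  routed to new-nginx
curl -H "Host: www.boge.com" "http://10.0.1.201/?boge_id=aaa"
# no match  ->  falls through to old-nginx
curl -H "Host: www.boge.com" http://10.0.1.201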


