Kubernetes Made Simple: A Quick Hands-On with Ingress



What NodePort can do, and where it falls short:
Function: it opens the same port on every node, so the application can be reached through any node via node IP plus port.
Drawback: each port can be used only once, so one port maps to exactly one application.
Ingress, simply put, can route requests for different domain names to different backend services. If that is still unclear, read on.

Ingress acts as the single entry point for traffic; it locates the specified Service, and each Service in turn fronts its own group of Pods.
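As a concrete illustration of the idea, here is a conceptual sketch of host-based routing (the Services web1/web2 and the domains are hypothetical and not part of this tutorial's cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing-demo
spec:
  rules:
  - host: app1.example.com            # requests for this domain...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web1                # ...are routed to Service web1
            port:
              number: 80
  - host: app2.example.com            # a different domain...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web2                # ...is routed to a different Service
            port:
              number: 80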

I am running Kubernetes 1.25, and ingress is picky about versions: I grabbed a yaml file off the internet and the deployment kept failing, because Kubernetes 1.25 needs nginx-ingress 1.4.0.

The flow of how Ingress and Pods relate to each other:


| Hostname | OS | Role | IP | Notes |
| --- | --- | --- | --- | --- |
| localhost | CentOS 7.5 | machine for pulling and packaging images | 192.168.3.129 | must be internet-connected (your own VM is fine); docker pull the images online, package them, and import them into the k8s cluster |
| k8s-master1 | CentOS 7.5 | k8s-master1 | 10.245.4.1 | k8s master node |
| k8s-node1 | CentOS 7.5 | k8s-node1 | 10.245.4.3 | k8s node 1 |
| k8s-node2 | CentOS 7.5 | k8s-node2 | 10.245.4.4 | k8s node 2 |

My environment is an isolated intranet with no image registry, so the only option is to pull the images on an internet-connected machine, package them, and import them for installation.

Download link for the materials used in this post:
Link: https://pan.baidu.com/s/1zjq1gr87oU7z2qWtDfuKlg
Extraction code: fidg

Deploy an nginx application first

We deploy this nginx because, at the end, we will use Ingress to reach a web application by domain name; this nginx is that web application.

Pull the nginx image on the internet-connected server

[root@localhost ~]# ping www.baidu.com
PING www.a.shifen.com (220.181.38.149) 56(84) bytes of data.
64 bytes from 220.181.38.149 (220.181.38.149): icmp_seq=1 ttl=128 time=7.47 ms
64 bytes from 220.181.38.149 (220.181.38.149): icmp_seq=2 ttl=128 time=8.84 ms
[root@localhost ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[root@localhost ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
a2abf6c4d29d: Pull complete 
a9edb18cadd1: Pull complete 
589b7251471a: Pull complete 
186b1aaa4aa6: Pull complete 
b4df32aa5a72: Pull complete 
a0bcbecc962e: Pull complete 
Digest: sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
[root@localhost ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              605c77e624dd        11 months ago       141MB
[root@localhost ~]# docker save nginx > nginx.tar
[root@localhost ~]# ls nginx.tar 
nginx.tar     ### this generated tar file is the nginx image we will import into the intranet

#### Import this nginx.tar onto the intranet servers and load the image on every node.
### This cluster has no image registry and no internet access, so importing tarballs is the only way.



Download the package

Then upload it to the intranet servers. There are plenty of ways to transfer files, so I won't demonstrate them here.
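For example, one way to push the tarball out with scp (a sketch only, assuming root SSH access to the node IPs from the table above):

for node in 10.245.4.3 10.245.4.4; do
  scp nginx.tar root@$node:~/docker/    # copy the image tarball to each node
done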

Load the nginx image onto each node with docker load

k8s-node2 is shown below as the example; the other node(s) need the same operation.

[root@k8s-node2 ~]# cd docker
[root@k8s-node2 docker]# ls
nginx.tar
[root@k8s-node2 docker]# docker load < nginx.tar 
2edcec3590a4: Loading layer [==================================================>]  83.86MB/83.86MB
e379e8aedd4d: Loading layer [==================================================>]     62MB/62MB
b8d6e692a25e: Loading layer [==================================================>]  3.072kB/3.072kB
f1db227348d0: Loading layer [==================================================>]  4.096kB/4.096kB
32ce5f6a5106: Loading layer [==================================================>]  3.584kB/3.584kB
d874fd2bc83b: Loading layer [==================================================>]  7.168kB/7.168kB
Loaded image: nginx:latest
[root@k8s-node2 docker]# docker images | grep nginx
nginx                       latest              605c77e624dd        11 months ago       141MB    


Generate the YAML file

[root@k8s-master1 ~]# kubectl create deployment web --image=nginx --dry-run -o yaml > nginx.yaml
W1203 09:24:14.117673   15390 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-master1 ~]# ls nginx.yaml 
nginx.yaml    ### the generated yaml file
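As the warning in the output says, --dry-run by itself is deprecated; on newer kubectl the equivalent is:

kubectl create deployment web --image=nginx --dry-run=client -o yaml > nginx.yaml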

Edit the YAML file

The main change is the image pull policy: this cluster has no internet access and no registry, so we configure it to prefer the locally loaded image.

[root@k8s-master1 ~]# vi nginx.yaml 
[root@k8s-master1 ~]# cat nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources: {}
status: {}


Create the Pod

[root@k8s-master1 docker]# cd
[root@k8s-master1 ~]# kubectl apply -f nginx.yaml 
deployment.apps/web created
[root@k8s-master1 ~]# kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
web-59b9bb7664-ppl5h   1/1     Running   0          7s


Expose the port

[root@k8s-master1 ~]# kubectl expose deployment web --port=80 --target-port=80 --type=NodePort
service/web exposed
[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        4d2h
nginx-dep1   NodePort    10.0.0.50    <none>        80:30306/TCP   11h
web          NodePort    10.0.0.177   <none>        80:30037/TCP   5s


Test access


The page is reachable, as expected.
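You can also check from the command line (using the NodePort 30037 shown in the kubectl get svc output above; any node IP works):

curl http://10.245.4.3:30037    # should return the default "Welcome to nginx!" page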

Deploy the ingress controller

Here we use the officially maintained NGINX controller for the deployment.

First, let's look at the official yaml file.

The official address is not easy to pull from, so an already-modified yaml file is provided; the upstream source is:
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml

Fetch the yaml file on the internet-connected server
[root@localhost ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
--2022-12-03 04:18:00--  https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15543 (15K) [text/plain]
Saving to: ‘deploy.yaml’

100%[==============================================================================================================================>] 15,543      --.-K/s   in 0.02s   

2022-12-03 04:18:19 (936 KB/s) - ‘deploy.yaml’ saved [15543/15543]

[root@localhost ~]# ls
anaconda-ks.cfg  deploy.yaml  docker  docker-19.03.9.tgz  ingress-nginx.yaml  nginx.tar
[root@localhost ~]# rm -rf ingress-nginx.yaml 
[root@localhost ~]# ls
anaconda-ks.cfg  deploy.yaml  docker  docker-19.03.9.tgz  nginx.tar   ### deploy.yaml is the file we need

Export it and import it onto the intranet server


##Upload the deploy.yaml from the previous step onto the master machine
[root@k8s-master1 ~]# mkdir -p yaml/ingress
[root@k8s-master1 ~]# cd yaml/ingress/
[root@k8s-master1 ingress]# ls
[root@k8s-master1 ingress]# ls
deploy.yaml

Now make a few changes to deploy.yaml.

What to change:
hostNetwork
This bridges the controller and the node's network: the controller pod shares the node's network namespace and listens directly on the node's ports, typically 80 and 443.
DaemonSet
A Deployment might schedule several pods onto the same node, which defeats the point of high availability. A DaemonSet runs exactly one pod per node, and if you have the means, you can add DNS load balancing across the nodes to take pressure off any single server.
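Abridged, the two edits sit here in the file (the complete modified deploy.yaml is reproduced further below):

kind: DaemonSet            ## changed from: Deployment
...
  template:
    spec:
      hostNetwork: true    ## added line
      containers:
      ...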
Edit deploy.yaml


##################### The final yaml file
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-controller-leader
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: DaemonSet   ## originally Deployment; change it to DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true   ## added: the pod shares the node's network namespace
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None

[root@k8s-master1 ingress]# grep image  deploy.yaml 
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
        imagePullPolicy: IfNotPresent

## Same routine as before: pull these images on the internet-connected server, save them as tar files, upload them to the intranet k8s cluster, and docker load them on each node. The process is straightforward
## and identical to how we handled the nginx image above, so we'll move through it quickly; refer back to the nginx pull steps if anything is unclear


Pull the required images on the internet-connected server

[root@localhost ~]# systemctl start docker
[root@localhost ~]# docker pull  registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1
v1.3.1: Pulling from google_containers/nginx-ingress-controller
213ec9aee27d: Already exists 
2e0679428050: Already exists 
3bb10086d473: Already exists 
a9e78a589ab3: Already exists 
a101ab4f42d5: Already exists 
4f4fb700ef54: Already exists 
dc27c0ef0bf2: Pull complete 
3d5f4bb7af2f: Pull complete 
8fb78251a937: Pull complete 
4de1b89edb9f: Pull complete 
d0338d42c78f: Pull complete 
7a576f04e9a2: Pull complete 
6362753a0f00: Pull complete 
4c7b5440ced1: Pull complete 
Digest: sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1
[root@localhost ~]# docker pull  registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
v1.3.0: Pulling from google_containers/kube-webhook-certgen
b9f88661235d: Pull complete 
23691ac0df28: Pull complete 
Digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
[root@localhost ~]# docker images
REPOSITORY                                                                     TAG                    IMAGE ID            CREATED             SIZE
anjia0532/google-containers.ingress-nginx.controller                           v1.4.0                 d681a4ce3c50        2 months ago        264MB
anjia0532/google-containers.ingress-nginx.kube-webhook-certgen                 v20220916-gd32f8c343   520347519a8c        2 months ago        46.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.3.1                 b7c8e5e285c0        3 months ago        263MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.3.0                 69547dffc18f        4 months ago        45.7MB
nginx                                                                          latest                 605c77e624dd        11 months ago       141MB
lizhenliang/nginx-ingress-controller                                           0.30.0                 89ccad40ce8e        2 years ago         323MB
[root@localhost ~]# docker save registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0 > registry.cn-hangzhou.aliyuncs.com-google_containers-kube-webhook-certgen:v1.3.0.tar
[root@localhost ~]# docker save registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1 > registry.cn-hangzhou.aliyuncs.com-google_containers-nginx-ingress-controller:v1.3.1.tar
[root@localhost ~]# ls
anaconda-ks.cfg     nginx.tar
deploy.yaml         registry.cn-hangzhou.aliyuncs.com-google_containers-kube-webhook-certgen:v1.3.0.tar
docker              registry.cn-hangzhou.aliyuncs.com-google_containers-nginx-ingress-controller:v1.3.1.tar
docker-19.03.9.tgz



Export and import them into the intranet servers; again, use whatever transfer method suits you.

On the intranet, k8s-node2 is shown as the example; every node needs these steps.

[root@k8s-node2 docker]# cd ~/docker
[root@k8s-node2 docker]# ls
nginx.tar
registry.cn-hangzhou.aliyuncs.com-google_containers-kube-webhook-certgen_v1.3.0.tar
registry.cn-hangzhou.aliyuncs.com-google_containers-nginx-ingress-controller_v1.3.1.tar
[root@k8s-node2 docker]# docker load < registry.cn-hangzhou.aliyuncs.com-google_containers-kube-webhook-certgen_v1.3.0.tar
8d7366c22fd8: Loading layer [==================================================>]  3.697MB/3.697MB
2e26ea8c648c: Loading layer [==================================================>]  43.34MB/43.34MB
Loaded image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
[root@k8s-node2 docker]# docker load < registry.cn-hangzhou.aliyuncs.com-google_containers-nginx-ingress-controller_v1.3.1.tar
994393dc58e7: Loading layer [==================================================>]  5.827MB/5.827MB
2473fa5e408b: Loading layer [==================================================>]    105MB/105MB
1dc17a6abdd2: Loading layer [==================================================>]  4.096kB/4.096kB
0c3404339187: Loading layer [==================================================>]  38.28MB/38.28MB
474efda35be6: Loading layer [==================================================>]  14.75MB/14.75MB
5f70bf18a086: Loading layer [==================================================>]  1.024kB/1.024kB
885795c45fbd: Loading layer [==================================================>]  2.422MB/2.422MB
57d838194e4e: Loading layer [==================================================>]    319kB/319kB
1cffe3884e6c: Loading layer [==================================================>]  6.342MB/6.342MB
8fc4cd084759: Loading layer [==================================================>]  38.79MB/38.79MB
24e4c85db7fd: Loading layer [==================================================>]  2.873MB/2.873MB
ed2df01a5b3b: Loading layer [==================================================>]  6.144kB/6.144kB
47fd9231a272: Loading layer [==================================================>]  50.75MB/50.75MB
053a1d65c92e: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1
[root@k8s-node2 docker]# docker images | grep hangzhou
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.3.1              b7c8e5e285c0        3 months ago        263MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.3.0              69547dffc18f        4 months ago        45.7MB


Create the ingress controller

[root@k8s-master1 ingress]# kubectl apply -f deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
daemonset.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created



[root@k8s-master1 ingress]# kubectl get pods -n ingress-nginx
NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create--1-k22tx   0/1     Completed   0          2m49s
ingress-nginx-admission-patch--1-qh76b    0/1     Completed   2          2m49s
ingress-nginx-controller-7tbjf            1/1     Running     0          2m49s
ingress-nginx-controller-mmrk8            1/1     Running     0          2m49s
ingress-nginx-controller-r2vpw            1/1     Running     0          2m49s
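Since the controller runs as a DaemonSet, a quick way to confirm that exactly one pod landed on each node:

kubectl get daemonset -n ingress-nginx    # DESIRED/READY should equal the node count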


Write the Ingress rule YAML

# The domain here is made up for the demo; configure your own.
[root@k8s-master1 ingress]# vi ingress.yaml
[root@k8s-master1 ingress]# cat ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-test
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: k8s.ingress.com
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 80
        path: /
        pathType: Prefix
        
**Note that we added an annotation to the Ingress object, kubernetes.io/ingress.class: "nginx". This is what tells nginx-ingress (and not some other controller) to handle this Ingress; I lost three hours to this one line.**
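As a side note, on recent Kubernetes versions this annotation is deprecated in favor of the spec.ingressClassName field; an equivalent spec (referencing the IngressClass named nginx that deploy.yaml creates) would look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-test
spec:
  ingressClassName: nginx    # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: k8s.ingress.com
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 80
        path: /
        pathType: Prefix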

Create the rule

[root@k8s-master1 ingress]# kubectl apply -f ingress.yaml 
ingress.networking.k8s.io/ingress-nginx-test created

[root@k8s-master1 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                      READY   STATUS      RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create--1-k22tx   0/1     Completed   0          17m   10.244.36.69     k8s-node1     <none>           <none>
ingress-nginx-admission-patch--1-qh76b    0/1     Completed   2          17m   10.244.159.139   k8s-master1   <none>           <none>
ingress-nginx-controller-7tbjf            1/1     Running     0          17m   10.245.4.1       k8s-master1   <none>           <none>
ingress-nginx-controller-mmrk8            1/1     Running     0          17m   10.245.4.3       k8s-node1     <none>           <none>
ingress-nginx-controller-r2vpw            1/1     Running     0          17m   10.245.4.4       k8s-node2     <none>           <none>
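Even before any DNS is configured, you can test the rule from a shell by overriding the Host header (any of the three controller IPs works, since hostNetwork binds port 80 on every node):

curl -H "Host: k8s.ingress.com" http://10.245.4.3    # should return the nginx welcome page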

Check the listening ports

[root@k8s-master1 ingress]# ss -antup | grep 80   #### port 80
tcp    LISTEN     0      128    10.245.4.1:2380                  *:*                   users:(("etcd",pid=942,fd=5))
tcp    LISTEN     0      128       *:80                    *:*                   users:(("nginx",pid=89482,fd=18),("nginx",pid=89476,fd=18))
tcp    LISTEN     0      128       *:80                    *:*                   users:(("nginx",pid=89481,fd=10),("nginx",pid=89476,fd=10))
tcp    ESTAB      0      0      10.245.4.1:2379               10.245.4.1:52800               users:(("etcd",pid=942,fd=22))
tcp    ESTAB      0      0      10.245.4.1:2380               10.245.4.4:52256               users:(("etcd",pid=942,fd=9))
tcp    ESTAB      0      0      10.245.4.1:39916              10.245.4.4:2380                users:(("etcd",pid=942,fd=31))
tcp    ESTAB      0      0      10.245.4.1:2380               10.245.4.4:51942               users:(("etcd",pid=942,fd=16))
tcp    ESTAB      0      0      10.245.4.1:58802              10.245.4.4:2379                users:(("kube-apiserver",pid=652,fd=130))
tcp    ESTAB      0      0      10.245.4.1:39918              10.245.4.4:2380                users:(("etcd",pid=942,fd=32))
tcp    ESTAB      0      0      10.245.4.1:2380               10.245.4.3:32886               users:(("etcd",pid=942,fd=116))
tcp    ESTAB      0      0      10.245.4.1:2380               10.245.4.4:48934               users:(("etcd",pid=942,fd=20))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:51680               users:(("kube-apiserver",pid=652,fd=321))
tcp    ESTAB      0      0      10.245.4.1:59880              10.245.4.3:2379                users:(("kube-apiserver",pid=652,fd=139))
tcp    ESTAB      0      0      10.245.4.1:2380               10.245.4.3:32882               users:(("etcd",pid=942,fd=106))
tcp    ESTAB      0      0      10.245.4.1:58580              10.245.4.4:2379                users:(("kube-apiserver",pid=652,fd=17))
tcp    ESTAB      0      0      10.245.4.1:59180              10.245.4.4:2379                users:(("kube-apiserver",pid=652,fd=51))
tcp    ESTAB      0      0      10.245.4.1:2379               10.245.4.1:53080               users:(("etcd",pid=942,fd=57))
tcp    ESTAB      0      0      10.245.4.1:41346              10.245.4.3:2380                users:(("etcd",pid=942,fd=12))
tcp    ESTAB      0      0      10.245.4.1:53232              10.245.4.1:2379                users:(("kube-apiserver",pid=652,fd=180))
tcp    ESTAB      0      0      10.245.4.1:39930              10.245.4.4:2380                users:(("etcd",pid=942,fd=11))
tcp    ESTAB      0      0      10.245.4.1:53032              10.245.4.1:2379                users:(("kube-apiserver",pid=652,fd=80))
tcp    ESTAB      0      0      10.245.4.1:59980              10.245.4.3:2379                users:(("kube-apiserver",pid=652,fd=261))
tcp    ESTAB      0      0      10.245.4.1:53080              10.245.4.1:2379                users:(("kube-apiserver",pid=652,fd=104))
tcp    ESTAB      0      0      10.245.4.1:59380              10.245.4.4:2379                users:(("kube-apiserver",pid=652,fd=285))
tcp    ESTAB      0      0      10.245.4.1:2380               10.245.4.3:32906               users:(("etcd",pid=942,fd=117))
tcp    ESTAB      0      0      10.245.4.1:2379               10.245.4.1:53280               users:(("etcd",pid=942,fd=90))
tcp    ESTAB      0      0      10.245.4.1:41348              10.245.4.3:2380                users:(("etcd",pid=942,fd=10))
tcp    ESTAB      0      0      10.245.4.1:53696              10.245.4.1:2379                users:(("kube-apiserver",pid=652,fd=280))
tcp    TIME-WAIT  0      0      127.0.0.1:10246              127.0.0.1:37080              
tcp    ESTAB      0      0      10.245.4.1:2380               10.245.4.4:52254               users:(("etcd",pid=942,fd=8))
tcp    ESTAB      0      0      10.245.4.1:52800              10.245.4.1:2379                users:(("kube-apiserver",pid=652,fd=13))
tcp    ESTAB      0      0      10.245.4.1:58808              10.245.4.4:2379                users:(("kube-apiserver",pid=652,fd=133))
tcp    ESTAB      0      0      10.245.4.1:59808              10.245.4.3:2379                users:(("kube-apiserver",pid=652,fd=27))
tcp    ESTAB      0      0      10.245.4.1:2379               10.245.4.1:53220               users:(("etcd",pid=942,fd=80))
tcp    ESTAB      0      0      10.245.4.1:41132              10.245.4.3:2380                users:(("etcd",pid=942,fd=114))
tcp    ESTAB      0      0      10.245.4.1:53280              10.245.4.1:2379                users:(("kube-apiserver",pid=652,fd=204))
tcp    LISTEN     0      128      :::80                   :::*                   users:(("nginx",pid=89481,fd=11),("nginx",pid=89476,fd=11))
tcp    LISTEN     0      128      :::80                   :::*                   users:(("nginx",pid=89482,fd=19),("nginx",pid=89476,fd=19))
[root@k8s-master1 ingress]# ss -antup | grep 443    #### port 443
tcp    LISTEN     0      128    10.245.4.1:6443                  *:*                   users:(("kube-apiserver",pid=652,fd=8))
tcp    LISTEN     0      128       *:443                   *:*                   users:(("nginx",pid=89482,fd=20),("nginx",pid=89476,fd=20))
tcp    LISTEN     0      128       *:443                   *:*                   users:(("nginx",pid=89481,fd=12),("nginx",pid=89476,fd=12))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:37414               users:(("kube-apiserver",pid=652,fd=270))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:56688               users:(("kube-apiserver",pid=652,fd=315))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:37412               users:(("kube-apiserver",pid=652,fd=275))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:56686               users:(("kube-apiserver",pid=652,fd=18))
tcp    ESTAB      0      0      10.245.4.1:42216              10.0.0.1:443                 users:(("calico-node",pid=2499,fd=5))
tcp    ESTAB      0      0      10.245.4.1:51114              10.0.0.1:443                 users:(("nginx-ingress-c",pid=89456,fd=3))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:51114               users:(("kube-apiserver",pid=652,fd=322))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:42948               users:(("kube-apiserver",pid=652,fd=273))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:47234               users:(("kube-apiserver",pid=652,fd=16))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:56692               users:(("kube-apiserver",pid=652,fd=317))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:54970               users:(("kube-apiserver",pid=652,fd=311))
tcp    ESTAB      0      0      10.245.4.1:42952              10.245.4.1:6443                users:(("kubelet",pid=66488,fd=12))
tcp    ESTAB      0      0      10.245.4.1:47234              10.245.4.1:6443                users:(("kube-proxy",pid=81162,fd=12))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:42944               users:(("kube-apiserver",pid=652,fd=310))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:42942               users:(("kube-apiserver",pid=652,fd=271))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:51218               users:(("kube-apiserver",pid=652,fd=320))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:37418               users:(("kube-apiserver",pid=652,fd=252))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:51680               users:(("kube-apiserver",pid=652,fd=321))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:42218               users:(("kube-apiserver",pid=652,fd=268))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:33624               users:(("kube-apiserver",pid=652,fd=304))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:59506               users:(("kube-apiserver",pid=652,fd=253))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:33620               users:(("kube-apiserver",pid=652,fd=290))
tcp    ESTAB      0      0      10.245.4.1:59506              10.245.4.1:6443                users:(("kube-scheduler",pid=662,fd=11))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:42212               users:(("kube-apiserver",pid=652,fd=267))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:58624               users:(("kube-apiserver",pid=652,fd=309))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:57754               users:(("kube-apiserver",pid=652,fd=254))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:58622               users:(("kube-apiserver",pid=652,fd=301))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:33622               users:(("kube-apiserver",pid=652,fd=293))
tcp    ESTAB      0      0      10.245.4.1:42942              10.245.4.1:6443                users:(("kubelet",pid=66488,fd=24))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:58626               users:(("kube-apiserver",pid=652,fd=312))
tcp    TIME-WAIT  0      0      127.0.0.1:44438              127.0.0.1:9099               
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:37426               users:(("kube-apiserver",pid=652,fd=323))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:34836               users:(("kube-apiserver",pid=652,fd=313))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:58614               users:(("kube-apiserver",pid=652,fd=259))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:42216               users:(("kube-apiserver",pid=652,fd=272))
tcp    ESTAB      0      0      10.245.4.1:42950              10.245.4.1:6443                users:(("kubelet",pid=66488,fd=40))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:60060               users:(("kube-apiserver",pid=652,fd=247))
tcp    ESTAB      0      0      10.245.4.1:60060              10.245.4.1:6443                users:(("kube-apiserver",pid=652,fd=278))
tcp    ESTAB      0      0      10.245.4.1:42212              10.0.0.1:443                 users:(("calico-node",pid=2498,fd=5))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:42950               users:(("kube-apiserver",pid=652,fd=324))
tcp    ESTAB      0      0      10.245.4.1:42944              10.245.4.1:6443                users:(("kubelet",pid=66488,fd=28))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:42952               users:(("kube-apiserver",pid=652,fd=325))
tcp    ESTAB      0      0      10.245.4.1:51184              10.245.4.1:6443                users:(("kube-controller",pid=105553,fd=10))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:43624               users:(("kube-apiserver",pid=652,fd=249))
tcp    ESTAB      0      0      10.245.4.1:42948              10.245.4.1:6443                users:(("kubelet",pid=66488,fd=39))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.3:58618               users:(("kube-apiserver",pid=652,fd=307))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.4:37424               users:(("kube-apiserver",pid=652,fd=318))
tcp    ESTAB      0      0      10.245.4.1:6443               10.245.4.1:51184               users:(("kube-apiserver",pid=652,fd=251))
tcp    ESTAB      0      0      10.245.4.1:42218              10.0.0.1:443                 users:(("calico-node",pid=2497,fd=5))
tcp    LISTEN     0      128      :::8443                 :::*                   users:(("nginx-ingress-c",pid=89456,fd=40))
tcp    LISTEN     0      128      :::443                  :::*                   users:(("nginx",pid=89481,fd=13),("nginx",pid=89476,fd=13))
tcp    LISTEN     0      128      :::443                  :::*                   users:(("nginx",pid=89482,fd=21),("nginx",pid=89476,fd=21))

[root@k8s-master1 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                      READY   STATUS      RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create--1-k22tx   0/1     Completed   0          17m   10.244.36.69     k8s-node1     <none>           <none>
ingress-nginx-admission-patch--1-qh76b    0/1     Completed   2          17m   10.244.159.139   k8s-master1   <none>           <none>
ingress-nginx-controller-7tbjf            1/1     Running     0          17m   10.245.4.1       k8s-master1   <none>           <none>
ingress-nginx-controller-mmrk8            1/1     Running     0          17m   10.245.4.3       k8s-node1     <none>           <none>
ingress-nginx-controller-r2vpw            1/1     Running     0          17m   10.245.4.4       k8s-node2     <none>           <none>

My DNS would resolve the name to a single IP address, but there are three candidates here because the cluster has three nodes: the yaml uses a DaemonSet controller, so there is one controller pod, and therefore one usable IP, per node. If you have a real DNS server, resolving the name across all of them both splits the traffic and eases the load on any single machine.
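Round-robin can be as simple as multiple A records for the same name, one per controller IP (BIND zone-file syntax, shown purely as an illustration):

k8s.ingress.com.    IN  A  10.245.4.1
k8s.ingress.com.    IN  A  10.245.4.3
k8s.ingress.com.    IN  A  10.245.4.4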

Test domain access

Add a DNS record

Since I don't have a DNS server, I'll simulate one by adding an entry to the hosts file on a Windows machine.

One caution: it's best not to point the record at a master node. I'm using the master here only because I know exactly what my own master can handle; otherwise this is an easy way to step on a mine.
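For example, a hosts entry pointing the test domain at k8s-node1 (on Windows the file lives at C:\Windows\System32\drivers\etc\hosts):

10.245.4.3    k8s.ingress.com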


The hosts file is permission-protected, so copy it to the desktop first, make the change there, then copy it back over the original.

Access it from a browser


Closing words

Hard work is bitter and not always joyful. Keep going anyway, kid.
