Microservices in Kubernetes


Contents

1 What is a microservice

2 Types of microservices

3 ipvs mode

3.1 Configuring ipvs mode

4 Microservice types in detail

4.1 clusterip

4.2 The special ClusterIP mode: headless

4.3 nodeport

4.4 loadbalancer

4.5 metalLB

4.6 externalname

5 Ingress-nginx

5.1 ingress-nginx features

5.2 Deploying ingress

5.2.1 Download the deployment file (file already provided)

5.2.2 Install ingress

5.2.3 Test ingress

5.3 Advanced ingress usage

5.3.1 Path-based routing

5.3.2 Host-based routing

5.3.3 TLS encryption

5.3.4 Basic auth

5.3.5 rewrite redirection

6 Canary release

6.1 What is a canary release

6.2 Canary release methods

6.2.1 Header-based (HTTP header) canary

6.2.2 Weight-based canary


1 What is a microservice

Controllers run the cluster's workloads, but how does an application get exposed? It has to be exposed through a microservice (Service) before it can be accessed.

  • A Service is the access point that a group of Pods providing the same service opens to the outside.

  • With a Service, an application gets service discovery and load balancing.

  • By default a Service only provides layer-4 load balancing; it has no layer-7 capability (that can be added with Ingress).

 

2 Types of microservices

Type            Description
ClusterIP       Default. A virtual IP automatically assigned to the Service by Kubernetes; reachable only from inside the cluster.
NodePort        Exposes the Service on a port of each Node; a request to any NodeIP:nodePort is routed to the ClusterIP.
LoadBalancer    Builds on NodePort: a cloud provider creates an external load balancer that forwards requests to NodeIP:NodePort. Only usable on cloud platforms.
ExternalName    Forwards the Service to a given domain name via a DNS CNAME record (set with spec.externalName).

Example:

#Generate the deployment manifest and create the deployment

[root@k8s-master ~]# kubectl create deployment timinglee --image reg.timinglee.org/library/myapp:v1 --replicas 2 --dry-run=client -o yaml > timinglee.yaml

[root@k8s-master ~]# kubectl apply -f timinglee.yaml 
deployment.apps/timinglee created
[root@k8s-master ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-56f99b7f4b-4c9kc   1/1     Running   0          6s
timinglee-56f99b7f4b-9wlxl   1/1     Running   0          6s


#Generate the Service yaml and append it to the existing yaml
[root@k8s-master ~]# kubectl expose deployment timinglee --port 80 --target-port 80 --dry-run=client -o yaml >> timinglee.yaml 

[root@k8s-master ~]# vim timinglee.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: timinglee
  name: timinglee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: timinglee
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: timinglee
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp
        resources: {}
status: {}

---                                        #separate different resources with ---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
status:
  loadBalancer: {}

[root@k8s-master ~]# kubectl delete deployments.apps timinglee 
deployment.apps "timinglee" deleted
[root@k8s-master ~]# kubectl get pods 
No resources found in default namespace.

[root@k8s-master ~]# kubectl apply -f timinglee.yaml 
deployment.apps/timinglee created
service/timinglee created


[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   31d
timinglee    ClusterIP   10.99.121.99   <none>        80/TCP  

By default the Service uses iptables for scheduling (load balancing)

[root@k8s-master ~]# kubectl get service -o wide 
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   31d   <none>
timinglee    ClusterIP   10.99.121.99   <none>        80/TCP    89s   app=timinglee   #cluster-internal IP: 10.99.121.99

  #the corresponding rules can be seen in the firewall (iptables)

[root@k8s-master ~]# iptables -t nat -nL

Chain KUBE-SVC-I7WXYK76FWYNTTGM (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.99.121.99         /* default/timinglee cluster IP */ tcp dpt:80

3 ipvs mode


  • A Service is implemented jointly by the kube-proxy component and iptables
  • When kube-proxy handles Services with iptables it has to install a large number of iptables rules on the host; with a large number of Pods, constantly refreshing those rules consumes a lot of CPU
  • Services in IPVS mode allow a Kubernetes cluster to support a much larger number of Pods

3.1 Configuring ipvs mode


1 Install ipvsadm on all nodes

[root@k8s-master/node/node2 ~]# yum install ipvsadm.x86_64 -y

2 Modify the kube-proxy configuration (on the master)

[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy 
configmap/kube-proxy edited
 

 metricsBindAddress: ""
    mode: "ipvs"                #set kube-proxy to use ipvs mode
    nftables:

3 Restart the kube-proxy pods. A pod keeps the configuration it started with, so pods that are already running are not affected by the ConfigMap change; they have to be recreated

[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-22hr6" deleted
pod "kube-proxy-r4jj7" deleted
pod "kube-proxy-vwfgr" deleted

[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.10.100:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0         
  -> 10.244.0.3:9153              Masq    1      0          0         
TCP  10.99.121.99:80 rr
  -> 10.244.1.3:80                Masq    1      0          0         
  -> 10.244.2.3:80                Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0         
[root@k8s-master ~]# 

Note:

After switching to ipvs mode, kube-proxy adds a virtual interface, kube-ipvs0, on each host and assigns all Service IPs to it

[root@k8s-master ~]# ip a | tail
    inet6 fe80::ac84:aaff:fe44:17f3/64 scope link 
       valid_lft forever preferred_lft forever
8: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 9e:10:d2:0c:25:33 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.99.121.99/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
[root@k8s-master ~]#
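A quick way to double-check which mode kube-proxy is actually running in is its metrics endpoint; it should print "ipvs" after the switch. This is a sketch and assumes the default metricsBindAddress (127.0.0.1:10249), run on a cluster node:

[root@k8s-master ~]# curl 127.0.0.1:10249/proxyMode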

4 Microservice types in detail

4.1 clusterip


Characteristics:

ClusterIP mode is only reachable from inside the cluster; it provides health checking and automatic discovery for the pods behind the Service

Example:

[root@k8s-master ~]# vim myapp.yml 
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee 
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP

[root@k8s-master ~]# kubectl apply -f myapp.yml 
service/timinglee created

[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   31d
timinglee    ClusterIP   10.110.19.199   <none>        80/TCP    16s
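The discovery part is visible in the Endpoints object the Service keeps in sync: only pods that match the selector and pass their readiness checks are listed. A quick check against the timinglee Service created above (output omitted):

[root@k8s-master ~]# kubectl get endpoints timinglee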
 

#once the Service is created, the cluster DNS provides name resolution for it

[root@k8s-master ~]# kubectl -n kube-system get svc
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   31d
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   31d
timinglee    ClusterIP   10.110.19.199   <none>        80/TCP    12m

[root@k8s-master ~]# dig timinglee.dedault.svc.cluster.local@10.96.0.10

; <<>> DiG 9.16.23-RH <<>> timinglee.dedault.svc.cluster.local@10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 48678
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;timinglee.dedault.svc.cluster.local\@10.96.0.10. IN A

;; AUTHORITY SECTION:
.            3600    IN    SOA    a.root-servers.net. nstld.verisign-grs.com. 2024101500 1800 900 604800 86400

;; Query time: 1066 msec
;; SERVER: 114.114.114.114#53(114.114.114.114)
;; WHEN: Tue Oct 15 15:48:32 CST 2024
;; MSG SIZE  rcvd: 139
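Note that this query got NXDOMAIN from 114.114.114.114: "dedault" is a typo for "default", and because "@10.96.0.10" is glued to the name, dig treated it as part of the query instead of as the server, so the host's default resolver answered. The intended query looks like this (a corrected sketch, output omitted; the same applies to the dig query in the headless example below):

[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10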

4.2 The special ClusterIP mode: headless


headless (headless Service)

A headless Service is not assigned a Cluster IP, kube-proxy does not handle it, and the platform does no load balancing or routing for it. Access inside the cluster goes through DNS, which resolves directly to the IPs of the backing pods; all of the scheduling is done by DNS alone

[root@k8s-master ~]# kubectl delete -f myapp.yml 
service "timinglee" deleted
[root@k8s-master ~]# vim myapp.yml 
[root@k8s-master ~]# cat myapp.yml 
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee 
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP
  clusterIP: None


[root@k8s-master ~]# kubectl apply -f myapp.yml 
service/timinglee created


[root@k8s-master ~]# kubectl get service timinglee 
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
timinglee   ClusterIP   None         <none>        80/TCP    51s

[root@k8s-master ~]# dig timinglee.dedault.svc.cluster.local@10.96.0.10

; <<>> DiG 9.16.23-RH <<>> timinglee.dedault.svc.cluster.local@10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57288
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;timinglee.dedault.svc.cluster.local\@10.96.0.10. IN A

;; AUTHORITY SECTION:
.            3233    IN    SOA    a.root-servers.net. nstld.verisign-grs.com. 2024101500 1800 900 604800 86400

;; Query time: 27 msec
;; SERVER: 114.114.114.114#53(114.114.114.114)
;; WHEN: Tue Oct 15 15:54:39 CST 2024
;; MSG SIZE  rcvd: 150


[root@k8s-master ~]# kubectl run test --image reg.timinglee.org/library/busyboxplus:latest -it
If you don't see a command prompt, try pressing enter.

/ # nslookup timinglee.default.svc.cluster.local.
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      timinglee.default.svc.cluster.local.
Address 1: 10.96.132.41 timinglee.default.svc.cluster.local
/ # curl timinglee.default.svc.cluster.local.
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl timinglee
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl timinglee/hostname.html
timinglee-56f99b7f4b-fnqrp          

4.3 nodeport


NodePort exposes a port on the nodes so that external hosts can reach the pod workload via any node's external IP:<port>

The access path is roughly: client → NodeIP:nodePort → Service (ClusterIP) → Pod

Example:

[root@k8s-master ~]# vim timinglee.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: timinglee
  name: timinglee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: timinglee
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: timinglee
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: NodePort
status:
  loadBalancer: {}

[root@k8s-master ~]# kubectl apply -f timinglee.yaml 
deployment.apps/timinglee created
service/timinglee created
[root@k8s-master ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-56f99b7f4b-blxbj   1/1     Running   0          5s
timinglee-56f99b7f4b-sbl2r   1/1     Running   0          5s
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        31d
timinglee    NodePort    10.103.125.62   <none>        80:32494/TCP   15s
[root@k8s-master ~]# curl 192.168.10.100:32494
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 192.168.10.100:32494/hostname.html
timinglee-56f99b7f4b-sbl2r
[root@k8s-master ~]# curl 192.168.10.100:32494/hostname.html
timinglee-56f99b7f4b-blxbj

Note:

Default nodePort range

The default nodePort range is 30000-32767; asking for a port outside it is rejected with an error

[root@k8s-master ~]# vim timinglee.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: timinglee
  name: timinglee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: timinglee
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: timinglee
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33333
  selector:
    app: timinglee
  type: NodePort
status:
  loadBalancer: {}

[root@k8s-master ~]# kubectl apply -f timinglee.yaml 
deployment.apps/timinglee created
The Service "timinglee-service" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767

Using a port outside this range requires an extra apiserver setting

[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml 

- --service-node-port-range=30000-40000

Note:

With the "--service-node-port-range=" parameter added, the port range can be customized

After the change the apiserver restarts automatically; wait until it is back up before operating on the cluster

The restart completes on its own; after editing the parameter no manual intervention is needed
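Once the apiserver is back up, re-applying the same file should now succeed. A sketch of the verification (output omitted):

[root@k8s-master ~]# kubectl apply -f timinglee.yaml
[root@k8s-master ~]# kubectl get svc timinglee-service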

4.4 loadbalancer

On a cloud platform the provider allocates the VIP and handles access; on bare-metal hosts MetalLB is needed to allocate the IP

[root@k8s-master ~]# vim timinglee.yaml 
......

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: LoadBalancer
status:
  loadBalancer: {}
[root@k8s-master ~]# kubectl delete -f timinglee.yaml 
deployment.apps "timinglee" deleted
service "timinglee-service" deleted
[root@k8s-master ~]# kubectl apply -f timinglee.yaml 
deployment.apps/timinglee created
service/timinglee-service created
[root@k8s-master ~]# kubectl get service
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes          ClusterIP      10.96.0.1       <none>        443/TCP        31d
timinglee-service   LoadBalancer   10.111.37.137   <pending>     80:37927/TCP   12s

LoadBalancer mode is meant for cloud platforms; in a bare-metal environment MetalLB must be installed to provide it, which is why EXTERNAL-IP stays <pending> above

4.5 metalLB

Official site: Installation :: MetalLB, bare metal load-balancer for Kubernetes


What MetalLB does:

Allocates VIPs for LoadBalancer Services

Deployment

1. Set ipvs mode (with strictARP)

[root@k8s-master ~]# kubectl edit cm -n kube-system kube-proxy 
configmap/kube-proxy edited
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

[root@k8s-master ~]# kubectl -n kube-system get  pods   | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-6785p" deleted
pod "kube-proxy-vmk8g" deleted
pod "kube-proxy-w4qgl" deleted

2. Download the deployment file (file already provided)
[root@k8s2 metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml 

  

3. Change the image references in the file to match the harbor registry paths
[root@k8s-master ~]# vim metallb-native.yaml
...
image: metallb/controller:v0.14.8
image: metallb/speaker:v0.14.8

4. Push the images to harbor
[root@k8s-master ~]# docker pull quay.io/metallb/controller:v0.14.8
[root@k8s-master ~]# docker pull quay.io/metallb/speaker:v0.14.8

[root@k8s-master metallb]# docker load -i metalLB.tag.gz 
f144bb4c7c7f: Loading layer  327.7kB/327.7kB
49626df344c9: Loading layer  40.96kB/40.96kB
945d17be9a3e: Loading layer  2.396MB/2.396MB
4d049f83d9cf: Loading layer  1.536kB/1.536kB
af5aa97ebe6c: Loading layer   2.56kB/2.56kB
ac805962e479: Loading layer   2.56kB/2.56kB
bbb6cacb8c82: Loading layer   2.56kB/2.56kB
2a92d6ac9e4f: Loading layer  1.536kB/1.536kB
1a73b54f556b: Loading layer  10.24kB/10.24kB
f4aee9e53c42: Loading layer  3.072kB/3.072kB
b336e209998f: Loading layer  238.6kB/238.6kB
371134a463a4: Loading layer  61.38MB/61.38MB
6e64357636e3: Loading layer  13.31kB/13.31kB
Loaded image: quay.io/metallb/controller:v0.14.8
0b8392a2e3be: Loading layer  2.137MB/2.137MB
3d5a6e3a17d1: Loading layer  65.46MB/65.46MB
8311c2bd52ed: Loading layer  49.76MB/49.76MB
4f4d43efeed6: Loading layer  3.584kB/3.584kB
881ed6f5069a: Loading layer  13.31kB/13.31kB
Loaded image: quay.io/metallb/speaker:v0.14.8
 

[root@k8s-master ~]# docker tag quay.io/metallb/speaker:v0.14.8 reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker tag quay.io/metallb/controller:v0.14.8 reg.timinglee.org/metallb/controller:v0.14.8

[root@k8s-master ~]# docker push reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker push reg.timinglee.org/metallb/controller:v0.14.8

5. Deploy the service

[root@k8s-master metallb]# kubectl apply -f metallb-native.yaml

[root@k8s-master metallb]# kubectl -n metallb-system get pods
NAME                          READY   STATUS    RESTARTS   AGE
controller-584575df59-wblql   1/1     Running   0          29s
speaker-8xwvh                 1/1     Running   0          29s
speaker-m845b                 1/1     Running   0          29s
speaker-wrvh7                 1/1     Running   0          29s
 

6. Configure the address pool

[root@k8s-master metallb]# vim configmap.yml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool                #name of the address pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.10-192.168.10.200        #change to your own local address range

---                       #two different kinds must be separated by ---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool        #the address pool to use
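Apply the pool and the L2 advertisement so MetalLB can start handing out addresses (using the file created above):

[root@k8s-master metallb]# kubectl apply -f configmap.yml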

[root@k8s-master ~]# kubectl get service
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
kubernetes          ClusterIP      10.96.0.1        <none>          443/TCP        31d
timinglee-service   LoadBalancer   10.105.122.155   192.168.10.50   80:36677/TCP   11s

#access the service from outside the cluster via the allocated address

[root@k8s-master ~]# curl 192.168.10.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

4.6 externalname

  • The Service is not assigned an IP; instead DNS resolves a CNAME to a fixed domain name, which solves the problem of changing IPs

  • Typically used when pods need to talk to an external service, or when an external service is being migrated into the cluster

  • While an application is being migrated into the cluster, ExternalName is useful during the transition phase.

  • When resources outside the cluster are migrated in, their IPs may change along the way, but a domain name plus DNS resolution handles this cleanly

Example:

[root@k8s-master ~]# vim timinglee.yaml 
[root@k8s-master ~]# cat timinglee.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: timinglee
  name: timinglee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: timinglee
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: timinglee
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ExternalName
  externalName: www.timinglee.org
status:
  loadBalancer: {}

[root@k8s-master ~]# kubectl get service
NAME                TYPE           CLUSTER-IP   EXTERNAL-IP         PORT(S)   AGE
kubernetes          ClusterIP      10.96.0.1    <none>              443/TCP   31d
timinglee-service   ExternalName   <none>       www.timinglee.org   80/TCP    8s
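From inside the cluster the Service name now resolves as a CNAME to www.timinglee.org. A quick check from a test pod (a sketch reusing the busyboxplus pod from earlier; whether the final name resolves depends on your environment):

/ # nslookup timinglee-service.default.svc.cluster.local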

5 Ingress-nginx


Official docs:

Installation Guide - Ingress-Nginx Controller

5.1 ingress-nginx features

  • A global load-balancing service set up to proxy different backend Services, with layer-7 support
  • Ingress consists of two parts: the Ingress controller and the Ingress objects
  • The Ingress controller provides the proxying described by the Ingress objects you define.
  • The common reverse-proxy projects (Nginx, HAProxy, Envoy, Traefik, etc.) all maintain dedicated Ingress controllers for Kubernetes.

5.2 Deploying ingress


#Preparation before deployment

[root@k8s-master ~]# kubectl create deployment myappv1 --image reg.timinglee.org/library/myapp:v1 --dry-run=client -o yaml > myapp-v1.yml
[root@k8s-master ~]# cp myapp-v1.yml myapp-v2.yml
[root@k8s-master ~]# vim myapp-v2.yml 
[root@k8s-master ~]# cat myapp-v1.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myappv1
  name: myappv1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myappv1
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp
        resources: {}
status: {}
[root@k8s-master ~]# cat myapp-v2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myappv2
  name: myappv2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv2
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myappv2
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v2
        name: myapp2
        resources: {}
status: {}
[root@k8s-master ~]# kubectl apply -f myapp-v1.yml 
deployment.apps/myappv1 created
[root@k8s-master ~]# kubectl apply -f myapp-v2.yml 
deployment.apps/myappv2 created
[root@k8s-master ~]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
myappv1-78ff74589d-mqm6k   1/1     Running   0          11s
myappv2-68578565d8-swgzv   1/1     Running   0          6s

[root@k8s-master ~]# kubectl expose deployment myappv1 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v1.yml 
[root@k8s-master ~]# kubectl expose deployment myappv2 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v2.yml

[root@k8s-master ~]# vim myapp-v1.yml 
[root@k8s-master ~]# vim myapp-v2.yml 
[root@k8s-master ~]# cat myapp-v1.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myappv1
  name: myappv1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myappv1
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myappv1
  name: myappv1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myappv1
status:
  loadBalancer: {}
[root@k8s-master ~]# cat myapp-v2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myappv2
  name: myappv2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv2
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myappv2
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v2
        name: myapp2
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myappv2
  name: myappv2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myappv2
status:
  loadBalancer: {}
[root@k8s-master ~]# kubectl apply -f myapp-v1.yml 
deployment.apps/myappv1 configured
service/myappv1 created
[root@k8s-master ~]# kubectl apply -f myapp-v2.yml 
deployment.apps/myappv2 configured
service/myappv2 created
[root@k8s-master ~]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
myappv1-78ff74589d-mqm6k   1/1     Running   0          4m59s
myappv2-68578565d8-swgzv   1/1     Running   0          4m54s


#Test

[root@k8s-master ~]# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   31d
myappv1      ClusterIP   10.100.212.4   <none>        80/TCP    45s
myappv2      ClusterIP   10.99.186.84   <none>        80/TCP    40s
[root@k8s-master ~]# curl 10.100.212.4
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.99.186.84
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

5.2.1 Download the deployment file (file already provided)

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml

Push the images ingress needs to harbor

[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce reg.timinglee.org/ingress-nginx/controller:v1.11.2

[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3

5.2.2 Install ingress

[root@k8s-master ~]# vim deploy.yaml
445         image: ingress-nginx/controller:v1.11.2
546         image: ingress-nginx/kube-webhook-certgen:v1.4.3
599         image: ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ingress]# kubectl apply -f deploy.yaml 

[root@k8s-master ingress]# kubectl -n ingress-nginx get pods
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-xql2j        0/1     Completed   0          38s
ingress-nginx-admission-patch-46zhq         0/1     Completed   2          38s
ingress-nginx-controller-67bd6649b6-whdjw   1/1     Running     0          38s
[root@k8s-master ingress]# 


[root@k8s-master ingress]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.34.154    <none>        80:38991/TCP,443:36893/TCP   63s
ingress-nginx-controller-admission   ClusterIP   10.111.70.191   <none>        443/TCP                      63s


#Change the controller Service to LoadBalancer

[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49   type: LoadBalancer

[root@k8s-master ingress]# kubectl -n ingress-nginx get services
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.96.34.154    <pending>     80:38991/TCP,443:36893/TCP   4m13s
ingress-nginx-controller-admission   ClusterIP      10.111.70.191   <none>        443/TCP                      4m13s

[root@k8s-master ingress]# kubectl -n ingress-nginx get all

NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-xql2j        0/1     Completed   0          28m
pod/ingress-nginx-admission-patch-46zhq         0/1     Completed   2          28m
pod/ingress-nginx-controller-67bd6649b6-whdjw   1/1     Running     0          28m

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.96.34.154    192.168.10.50   80:38991/TCP,443:36893/TCP   28m
service/ingress-nginx-controller-admission   ClusterIP      10.111.70.191   <none>          443/TCP                      28m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           28m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-67bd6649b6   1         1         1       28m

NAME                                       STATUS     COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   Complete   1/1           7s         28m
job.batch/ingress-nginx-admission-patch    Complete   1/1           20s        28m
[root@k8s-master ingress]# 

Note:

The EXTERNAL-IP shown for ingress-nginx-controller is the IP that ingress ultimately exposes to the outside

5.2.3 Test ingress

#Generate the yaml file

[root@k8s-master ingress]# kubectl create ingress webcluster --rule '*/=timinglee-svc:80' --dry-run=client -o yaml > timinglee-ingress.yml

[root@k8s-master ingress]# vim timinglee-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  -  http:
      paths:
      - backend:
          service:
            name: timinglee-svc
            port:
              number: 80
        path: /
        pathType: Prefix

#pathType values: Exact (exact match), ImplementationSpecific (implementation specific), Prefix (prefix match), Regular expression (regex match)
 

#Create the ingress resource

[root@k8s-master ingress]# kubectl apply -f timinglee-ingress.yml 
ingress.networking.k8s.io/test-ingress created


[root@k8s-master ingress]# kubectl get ingress
NAME      CLASS   HOSTS   ADDRESS         PORTS   AGE
myappv1   nginx   *       192.168.10.10   80      34s
[root@k8s-master ingress]# curl 192.168.10.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Note: the ingress must be in the same namespace as the Service it exposes

5.3 Advanced ingress usage


5.3.1 Path-based routing

1. Create the test deployments and Services myappv1/myappv2 (this is exactly the preparation step from 5.2; if you already applied myapp-v1.yml and myapp-v2.yml there, skip this step).

2. Create the ingress yaml

[root@k8s-master ingress]# vim ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /        #anything appended after the matched path is rewritten to /
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
  - host: www.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /v1
        pathType: Prefix

      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /v2
        pathType: Prefix

#Test:

[root@k8s-master ingress]# kubectl apply -f ingress.yml 
ingress.networking.k8s.io/ingress1 created

[root@k8s-master ingress]# echo 192.168.10.50 www.timinglee.org >> /etc/hosts
[root@k8s-master ingress]# curl www.timinglee.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ingress]# curl www.timinglee.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

#the effect of nginx.ingress.kubernetes.io/rewrite-target: /

[root@k8s-master ingress]# curl www.timinglee.org/v2/aaa
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

5.3.2 Host-based routing

 #Set up name resolution on the test host

[root@reg ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.130    reg.timinglee.org
192.168.10.50 www.timinglee.org myappv1.timinglee.org myappv2.timinglee.org

# Create the host-based yml file

[root@k8s-master ingress]# vim ingress2.yml 
[root@k8s-master ingress]# cat ingress2.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress2
spec:
  ingressClassName: nginx
  rules:
  - host: myappv1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix

  - host: myappv2.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /
        pathType: Prefix

#Create the ingress from the file

[root@k8s-master ingress]# kubectl apply -f ingress2.yml 
ingress.networking.k8s.io/ingress2 created


[root@k8s-master ingress]# kubectl describe ingress ingress2 
Name:             ingress2
Labels:           <none>
Namespace:        default
Address:          192.168.10.10
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                   Path  Backends
  ----                   ----  --------
  myappv1.timinglee.org  
                         /   myappv1:80 (10.244.1.23:80)
  myappv2.timinglee.org  
                         /   myappv2:80 (10.244.2.20:80)
Annotations:             nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    30s (x2 over 66s)  nginx-ingress-controller  Scheduled for sync

#Test from the test host

[root@reg ~]# curl myappv1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl myappv2.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

5.3.3 TLS encryption

 #Create the certificate

[root@k8s-master tls]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt
(key/certificate generation progress output omitted)
 

#Create the tls secret

[root@k8s-master tls]# ls
tls.crt  tls.key
[root@k8s-master tls]# kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt 
secret/web-tls-secret created
[root@k8s-master tls]# kubectl get secrets 
NAME             TYPE                DATA   AGE
web-tls-secret   kubernetes.io/tls   2      12s

Note:

Secrets are the usual place Kubernetes stores sensitive data; a Secret is not itself a form of encryption. They are covered in detail later in the course

#Create ingress3, the TLS-enabled yml file

[root@k8s-master tls]# vim ingress3.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress3
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix


[root@k8s-master tls]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.10    k8s-node
192.168.10.20    k8s-node2
192.168.10.100    k8s-master
192.168.10.130    reg.timinglee.org
192.168.10.50 www.timinglee.org myapp-tls.timinglee.org 

[root@k8s-master tls]# kubectl apply -f ingress3.yml 
ingress.networking.k8s.io/ingress3 created
 

#Test

[root@k8s-master tls]# curl -k https://myapp-tls.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
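To confirm the self-signed certificate is actually being served, the TLS handshake can be inspected; -v prints the certificate subject, which should show the CN=nginxsvc set above (a sketch):

[root@k8s-master tls]# curl -kv https://myapp-tls.timinglee.org 2>&1 | grep subject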

5.3.4 Basic auth

#Create the auth file (ingress-nginx expects the key inside the secret to be named "auth", hence the file name)

[root@k8s-master tls]# yum install httpd-tools.x86_64 -y
[root@k8s-master tls]# htpasswd -cm auth lee
New password:                 #the password used here is 123
Re-type new password: 
Adding password for user lee
[root@k8s-master tls]# cat auth 
lee:$apr1$BgZiZC5c$UZ559xczgGxU0ejRWypgs0

#Create the auth secret

[root@k8s-master tls]# kubectl create secret generic auth-web --from-file auth 
secret/auth-web created
[root@k8s-master tls]# kubectl describe secrets auth-web 
Name:         auth-web
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
auth:  42 bytes

#Create ingress4, the yaml file with basic-auth enabled

[root@k8s-master tls]# vim ingress4.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password" 
  name: ingress4
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix


#Create ingress4

[root@k8s-master tls]# kubectl apply -f ingress4.yml 
ingress.networking.k8s.io/ingress4 created
[root@k8s-master tls]# kubectl describe ingress ingress4
Name:             ingress4
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp-tls.timinglee.org
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  myapp-tls.timinglee.org  
                           /   myappv1:80 (10.244.1.23:80)
Annotations:               nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                           nginx.ingress.kubernetes.io/auth-secret: auth-web
                           nginx.ingress.kubernetes.io/auth-type: basic
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    30s   nginx-ingress-controller  Scheduled for sync


#Test:

[root@k8s-master tls]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.10    k8s-node
192.168.10.20    k8s-node2
192.168.10.100    k8s-master
192.168.10.130    reg.timinglee.org
192.168.10.50 www.timinglee.org myapp-tls.timinglee.org 

[root@k8s-master tls]# curl -k https://myapp-tls.timinglee.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>

[root@k8s-master tls]# curl -k https://myapp-tls.timinglee.org -ulee:123
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

5.3.5 rewrite redirection


#Redirect the default page to hostname.html via the app-root annotation

[root@k8s-master tls]# vim ingress5.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /hostname.html
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password" 
  name: ingress5
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master tls]# kubectl apply -f ingress5.yml 
ingress.networking.k8s.io/ingress5 created


[root@k8s-master tls]# kubectl describe ingress ingress5 
Name:             ingress5
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp-tls.timinglee.org
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  myapp-tls.timinglee.org  
                           /   myappv1:80 (10.244.1.23:80)
Annotations:               nginx.ingress.kubernetes.io/app-root: /hostname.html
                           nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                           nginx.ingress.kubernetes.io/auth-secret: auth-web
                           nginx.ingress.kubernetes.io/auth-type: basic
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    57s   nginx-ingress-controller  Scheduled for sync


#Test:

[root@k8s-master tls]# curl -Lk https://myapp-tls.timinglee.org -ulee:123
myappv1-78ff74589d-mqm6k

[root@k8s-master tls]# curl -Lk https://myapp-tls.timinglee.org/hostname.html -ulee:123
myappv1-78ff74589d-mqm6k

[root@k8s-master tls]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:123
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>

#Fix the sub-path problem: /lee(/|$)(.*) captures whatever follows /lee, and rewrite-target /$2 serves it from the root, so /lee/hostname.html is rewritten to /hostname.html

[root@k8s-master tls]# vim ingress6.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password" 
  name: ingress6
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix

      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /lee(/|$)(.*)
        pathType: ImplementationSpecific


[root@k8s-master tls]# kubectl apply -f ingress6.yml 
ingress.networking.k8s.io/ingress6 created
[root@k8s-master tls]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:123
myappv1-78ff74589d-mqm6k

6 Canary release

6.1 What is a canary release

A canary release (also called a gray release) is a software release strategy.

Its main purpose is to test and validate a new version on a small subset of users or servers before rolling it out to all of production, reducing the impact on the whole system should the new version introduce a serious problem.

As a Pod rollout strategy, a canary release adds new Pods first and removes old ones afterwards, so the total number of Pods never drops below the desired count. After part of the Pods have been updated, the rollout pauses; only once the new Pods are confirmed to run correctly are the remaining Pods updated.

6.2 Canary release methods

Header-based and weight-based canaries are the most commonly used

6.2.1 Header-based (HTTP header) canary

  • Implemented through annotations
  • Create a canary ingress and configure the canary header key and value
  • Once the canary traffic has been validated, switch the main ingress to the new version
  • Previously we did upgrades with a controller rolling update (25% at a time by default). A header-based canary makes the upgrade smoother: the key/value header lets you test whether the new version behaves correctly before shifting real traffic.

Example:

#Create the ingress for version 1

[root@k8s-master tls]# vim ingress7.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
  name: myapp-v1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master tls]# kubectl apply -f ingress7.yml 
ingress.networking.k8s.io/myapp-v1-ingress created
[root@k8s-master tls]# kubectl describe ingress myapp-v1-ingress 
Name:             myapp-v1-ingress
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  myapp.timinglee.org  
                       /   myappv1:80 (10.244.1.23:80)
Annotations:           <none>
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    15s   nginx-ingress-controller  Scheduled for sync

#Test:

[root@k8s-master tls]# curl myapp.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

#Create the header-based canary ingress

[root@k8s-master tls]# vim ingress8.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: version
    nginx.ingress.kubernetes.io/canary-by-header-value: "2"  
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master tls]# kubectl apply -f ingress8.yml 
ingress.networking.k8s.io/myapp-v2-ingress created
[root@k8s-master tls]# kubectl describe ingress myapp-v2-ingress 
Name:             myapp-v2-ingress
Labels:           <none>
Namespace:        default
Address:          192.168.10.10
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  myapp.timinglee.org  
                       /   myappv2:80 (10.244.2.20:80)
Annotations:           nginx.ingress.kubernetes.io/canary: true
                       nginx.ingress.kubernetes.io/canary-by-header: version
                       nginx.ingress.kubernetes.io/canary-by-header-value: 2
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    24s (x2 over 53s)  nginx-ingress-controller  Scheduled for sync


#Test:

[root@k8s-master tls]# curl -H "version: 2"  myapp.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
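Requests without the version header, or with any other value, should still be served by v1, since only an exact match on "version: 2" is routed to the canary. A quick check (expected to return the v1 page; output omitted):

[root@k8s-master tls]# curl myapp.timinglee.org
[root@k8s-master tls]# curl -H "version: 1" myapp.timinglee.org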

6.2.2 Weight-based canary

  • Implemented through annotations

  • Create a canary ingress and configure the canary weight and the total weight

  • Once the canary traffic has been validated, switch the main ingress to the new version

Example

#Weight-based canary release

[root@k8s-master tls]# vim ingress9.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"        #the weight value to adjust
    nginx.ingress.kubernetes.io/canary-weight-total: "100"  
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master tls]# kubectl apply -f ingress9.yml 
ingress.networking.k8s.io/myapp-v2-ingress created


#Test:

[root@k8s-master tls]# vim check_ingress.sh 
#!/bin/bash
# send 100 requests and count how many are answered by v1 vs v2
v1=0
v2=0

for (( i=0; i<100; i++))
do
    response=`curl -s myapp.timinglee.org |grep -c v1`

    v1=`expr $v1 + $response`
    v2=`expr $v2 + 1 - $response`

done
echo "v1:$v1, v2:$v2"
[root@k8s-master tls]# kubectl apply -f ingress7.yml 
ingress.networking.k8s.io/myapp-v1-ingress created
[root@k8s-master tls]# kubectl apply -f ingress8.yml 
ingress.networking.k8s.io/myapp-v2-ingress configured
[root@k8s-master tls]# kubectl apply -f ingress9.yml 
ingress.networking.k8s.io/myapp-v2-ingress configured

[root@k8s-master tls]# kubectl get ingress
NAME               CLASS   HOSTS                 ADDRESS         PORTS   AGE
myapp-v1-ingress   nginx   myapp.timinglee.org   192.168.10.10   80      56s
myapp-v2-ingress   nginx   myapp.timinglee.org   192.168.10.10   80      8m7s

[root@k8s-master tls]# sh check_ingress.sh 
v1:93, v2:7
[root@k8s-master tls]# sh check_ingress.sh 
v1:88, v2:12
[root@k8s-master tls]# sh check_ingress.sh 
v1:92, v2:8


#After changing the weight, run the test again to observe the shift

#Here canary-weight was changed to 30
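The change itself is just the annotation value in ingress9.yml, re-applied (a sketch):

    nginx.ingress.kubernetes.io/canary-weight: "30"

[root@k8s-master tls]# kubectl apply -f ingress9.yml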
[root@k8s-master tls]# sh check_ingress.sh 
v1:69, v2:31
[root@k8s-master tls]# sh check_ingress.sh 
v1:68, v2:32
[root@k8s-master tls]# sh check_ingress.sh 
v1:74, v2:26
