[k8s] Migrating the RuoYi microservices to Kubernetes

2024/11/18 1:45:00

Picking up from the previous post, [Deploying the RuoYi Microservices the Traditional Way], this time we migrate the stack to Kubernetes.

Environment

31 master, 32 node1, 33 node2

Migration plan

Delivery approach:
It is essentially the same as delivering to plain Linux hosts, except every microservice is packaged as a Docker image.

1. Microservice data layer: MySQL, Redis;

2. Microservice governance layer: Nacos, Sentinel, SkyWalking...

3. Microservice components
3.1 Compile each microservice into a jar;
3.2 Build it into a Docker image;
3.3 Pick the workloads that fit each service: Deployment, Service, Ingress:
    system:  Deployment;
    auth:    Deployment;
    gateway: Deployment, Service;
    monitor: Deployment, Service, Ingress;
    ui:      Deployment, Service, Ingress;  nginx/haproxy
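Steps 3.1-3.3 reduce to the same build-and-push loop for every service. A dry-run sketch that only renders the commands (the registry path matches the Harbor project used later in this post; the tag and build-context paths are assumptions):

```shell
# Render (not execute) the build/push commands for each service.
REGISTRY="harbor.oldxu.net/springcloud"
TAG="v1.0"   # assumed version tag
plan=""
for svc in ruoyi-system ruoyi-auth ruoyi-gateway ruoyi-monitor; do
  plan="${plan}docker build -t ${REGISTRY}/${svc}:${TAG} ./${svc}
docker push ${REGISTRY}/${svc}:${TAG}
"
done
printf '%s' "$plan"
```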

01-mysql (Service, StatefulSet)

kubectl create ns dev

01-mysql-ruoyi-sts-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql-ruoyi-svc
  namespace: dev
spec:
  clusterIP: None
  selector:
    app: mysql
    role: ruoyi
  ports:
  - port: 3306
    targetPort: 3306
    
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-ruoyi
  namespace: dev
spec:
  serviceName: "mysql-ruoyi-svc"
  replicas: 1
  selector:
    matchLabels:
      app: mysql
      role: ruoyi
  template:
    metadata:
      labels:
        app: mysql
        role: ruoyi
    spec:
      containers:
      - name: db
        image: mysql:5.7
        args:
        - "--character-set-server=utf8"
        
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: oldxu
        - name: MYSQL_DATABASE
          value: ry-cloud
        ports:
        - containerPort: 3306
        
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql/
          
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 6Gi

Resolve the MySQL Pod's IP

${statefulSetName}-${ordinal}.${headlessServiceName}.${namespace}.svc.cluster.local

[root@master01 01-mysql]#  dig @10.96.0.10 mysql-ruoyi-0.mysql-ruoyi-svc.dev.svc.cluster.local +short
10.244.2.129
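The record resolved above follows the StatefulSet naming rule: the Pod name is the StatefulSet name plus an ordinal, and the headless Service supplies the per-Pod DNS entry. Composed in shell:

```shell
# <statefulSetName>-<ordinal>.<headlessServiceName>.<namespace>.svc.cluster.local
sts="mysql-ruoyi"
ordinal=0
headless="mysql-ruoyi-svc"
ns="dev"
fqdn="${sts}-${ordinal}.${headless}.${ns}.svc.cluster.local"
echo "$fqdn"   # the name queried with dig above
```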


Connect to MySQL and import the SQL dump

yum install -y mysql
mysql -uroot -poldxu -h10.244.2.129
mysql -uroot -poldxu -h10.244.2.129 -B ry-cloud < ry_20220814.sql

02-redis/

(Redis is deployed stateless here, as a cache. Depending on your needs, you can follow the MySQL approach above and run Redis as a StatefulSet instead.)

01-redis-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-server
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: cache
        image: redis
        ports:
        - containerPort: 6379

02-redis-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  namespace: dev
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379

Verify Redis

[root@master01 02-redis]# dig @10.96.0.10 redis-svc.dev.svc.cluster.local +short
10.111.240.148

kubectl describe svc -n dev redis-svc


sudo yum install epel-release
sudo yum install redis
[root@master01 02-redis]# redis-cli -h 10.111.240.148

03-nacos/

Official Nacos-on-Kubernetes reference: https://github.com/nacos-group/nacos-k8s/blob/master/README-CN.md

Migration plan:

Deploy the MySQL database for Nacos

01-mysql-nacos-sts-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql-nacos-svc
  namespace: dev
spec:
  clusterIP: None
  selector:
    app: mysql
    role: nacos
  ports:
  - port: 3306
    targetPort: 3306
    
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-nacos-sts
  namespace: dev
spec:
  serviceName: "mysql-nacos-svc"
  replicas: 1
  selector:
    matchLabels:
      app: mysql
      role: nacos
  template:
    metadata:
      labels:
        app: mysql
        role: nacos
    spec:
      containers:
      - name: db
        image: mysql:5.7
        args:
        - "--character-set-server=utf8"
        
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: oldxu
        - name: MYSQL_DATABASE
          value: ry-config
        ports:
        - containerPort: 3306
        
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql/
        
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 6Gi

[root@master01 03-nacos]# dig @10.96.0.10 mysql-nacos-sts-0.mysql-nacos-svc.dev.svc.cluster.local +short
10.244.2.130


Import the config SQL dump

mysql -uroot -poldxu -h10.244.2.130 -B ry-config < ry_config_20220510.sql 

02-nacos-configmap.yaml

ConfigMap (fill in the database host, name, port, username and password)

apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: dev
data:
  mysql.host: "mysql-nacos-sts-0.mysql-nacos-svc.dev.svc.cluster.local"
  mysql.db.name: "ry-config"
  mysql.port: "3306"
  mysql.user: "root"
  mysql.password: "oldxu"

03-nacos-sts-deploy-svc.yaml

# Pull these ahead of time; the server image is over 1 GB
docker pull nacos/nacos-peer-finder-plugin:1.1
docker pull nacos/nacos-server:v2.1.1

# Dynamic PVs via volumeClaimTemplates; pod anti-affinity keeps one Pod per node; the initContainer installs the peer-finder plugin Nacos uses to discover its cluster peers
apiVersion: v1
kind: Service
metadata:
  name: nacos-svc
  namespace: dev
spec:
  clusterIP: None
  selector:
    app: nacos
  ports:
  - name: server
    port: 8848
    targetPort: 8848
  - name: client-rpc
    port: 9848
    targetPort: 9848
  - name: raft-rpc
    port: 9849
    targetPort: 9849
  - name: old-raft-rpc
    port: 7848
    targetPort: 7848

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: dev
spec:
  serviceName: "nacos-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nacos
  template:
    metadata:
      labels:
        app: nacos
    spec:
      affinity:                                                 # keep Pods off the same node
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values: ["nacos"]
              topologyKey: "kubernetes.io/hostname"  
      initContainers:
      - name: peer-finder-plugin-install
        image: nacos/nacos-peer-finder-plugin:1.1
        imagePullPolicy: Always
        volumeMounts:
          - name: datan
            mountPath: /home/nacos/plugins/peer-finder
            subPath: peer-finder
      containers:
      - name: nacos
        image: nacos/nacos-server:v2.1.1
        resources:
          requests:
            memory: "800Mi"
            cpu: "500m"
        ports:
        - name: client-port
          containerPort: 8848
        - name: client-rpc
          containerPort: 9848
        - name: raft-rpc
          containerPort: 9849
        - name: old-raft-rpc
          containerPort: 7848
        env:
        - name: MODE  
          value: "cluster"
        - name: NACOS_VERSION
          value: 2.1.1
        - name: NACOS_REPLICAS
          value: "3"
        - name: SERVICE_NAME 
          value: "nacos-svc"
        - name: DOMAIN_NAME 
          value: "cluster.local"
        - name: NACOS_SERVER_PORT   
          value: "8848"
        - name: NACOS_APPLICATION_PORT
          value: "8848"
        - name: PREFER_HOST_MODE
          value: "hostname"
        - name: POD_NAMESPACE      
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MYSQL_SERVICE_HOST
          valueFrom:
            configMapKeyRef:
              name: nacos-cm
              key: mysql.host
        - name: MYSQL_SERVICE_DB_NAME
          valueFrom:
            configMapKeyRef:
              name: nacos-cm
              key: mysql.db.name
        - name: MYSQL_SERVICE_PORT
          valueFrom:
            configMapKeyRef:
              name: nacos-cm
              key: mysql.port
        - name: MYSQL_SERVICE_USER
          valueFrom:
            configMapKeyRef:
              name: nacos-cm
              key: mysql.user
        - name: MYSQL_SERVICE_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: nacos-cm
              key: mysql.password
        volumeMounts:
        - name: datan
          mountPath: /home/nacos/plugins/peer-finder
          subPath: peer-finder
        - name: datan
          mountPath: /home/nacos/data
          subPath: data
        - name: datan
          mountPath: /home/nacos/logs
          subPath: logs

  volumeClaimTemplates:
    - metadata:
        name: datan
      spec:
        storageClassName: "nfs"
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 20Gi

Verify access:
http://nacos.oldxu.net:30080/nacos/

04-nacos-ingress.yaml

The lines commented with # show the newer networking.k8s.io/v1 syntax.

apiVersion: extensions/v1beta1    
kind: Ingress
metadata:
  name: nacos-ingress
  namespace: dev
spec:
  ingressClassName: "nginx"
  rules:
  - host: nacos.oldxu.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: nacos-svc
          servicePort: 8848

#          service:
#            name: nacos-svc
#            port:
#              name: server

04-sentinel/

Sentinel migration plan

Sentinel:
1. Write the Dockerfile and entrypoint.sh;
2. Push the image to the Harbor registry;
3. Run the image with a Deployment;
4. Expose it externally with a Service and an Ingress.

Write the Sentinel Dockerfile

# Download the dashboard jar
wget https://linux.oldxu.net/sentinel-dashboard-1.8.5.jar
docker login harbor.oldxu.net

Dockerfile and entrypoint.sh

Dockerfile

FROM openjdk:8-jre-alpine
COPY ./sentinel-dashboard-1.8.5.jar /sentinel-dashboard.jar
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 8718 8719
CMD ["/bin/sh","-c","/entrypoint.sh"]

entrypoint.sh

JAVA_OPTS="-Dserver.port=8718 \
-Dcsp.sentinel.dashboard.server=localhost:8718 \
-Dproject.name=sentinel-dashboard \
-Dcsp.sentinel.api.port=8719 \
-Xms${XMS_OPTS:-150m} \
-Xmx${XMX_OPTS:-150m}"

java ${JAVA_OPTS} -jar /sentinel-dashboard.jar
[root@master01 04-sentinel]# ls
Dockerfile  entrypoint.sh  sentinel-dashboard-1.8.5.jar

docker build -t harbor.oldxu.net/springcloud/sentinel-dashboard:v1.0 .
docker push harbor.oldxu.net/springcloud/sentinel-dashboard:v1.0
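The `${VAR:-default}` expansions in entrypoint.sh are what make one image reusable: a Deployment can override the heap via environment variables, and the script falls back to its built-in defaults otherwise. A quick illustration:

```shell
# ${VAR:-default} substitutes the default only when VAR is unset or empty.
unset XMS_OPTS
heap_default="-Xms${XMS_OPTS:-150m}"   # no env set -> built-in default
XMS_OPTS=512m
heap_override="-Xms${XMS_OPTS:-150m}"  # env set -> override wins
echo "$heap_default $heap_override"
```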

01-sentinel-deploy.yaml

kubectl create secret docker-registry harbor-admin \
 --docker-username=admin \
 --docker-password=Harbor12345 \
 --docker-server=harbor.oldxu.net \
 -n dev

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentinel-server
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sentinel
  template:
    metadata:
      labels:
        app: sentinel
    spec:
      imagePullSecrets:
      - name: harbor-admin
      containers:
      - name: sentinel
        image: harbor.oldxu.net/springcloud/sentinel-dashboard:v1.0
        ports:
        - name: server
          containerPort: 8718
        - name: api
          containerPort: 8719

02-sentinel-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: sentinel-svc
  namespace: dev
spec:
  selector:
    app: sentinel
  ports:
  - name: server
    port: 8718
    targetPort: 8718
  - name: api
    port: 8719
    targetPort: 8719

03-sentinel-ingress.yaml

The lines commented with # show the newer networking.k8s.io/v1 syntax.

#apiVersion: networking.k8s.io/v1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sentinel-ingress
  namespace: dev
spec:
  ingressClassName: "nginx"
  rules:
  - host: sentinel.oldxu.net
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: sentinel-svc
          servicePort: 8718
          #service:
          #  name: sentinel-svc
          #  port:
          #    name: server

Visit http://sentinel.oldxu.net:30080/#/dashboard/metric/sentinel-dashboard

05-skywalking/

Migration plan

SkyWalking uses its built-in H2 store here; Elasticsearch is also an option for data storage.

01-skywalking-oap-deploy.yaml

(The OAP Deployment/Service manifest appeared only as an image in the original post. It runs the SkyWalking OAP server with H2 storage, exposed as skywalking-oap-svc on ports 11800 (gRPC) and 12800 (HTTP), which the UI and agents below point at.)

02-skywalking-ui-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: skywalking-ui
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sky-ui
  template:
    metadata:
      labels:
        app: sky-ui
    spec:
      containers:
      - name: ui
        image: apache/skywalking-ui:8.9.1
        ports:
        - containerPort: 8080
        env:
        - name: SW_OAP_ADDRESS
          value: "http://skywalking-oap-svc:12800"
---
apiVersion: v1
kind: Service
metadata:
  name: skywalking-ui-svc
  namespace: dev
spec:
  selector:
    app: sky-ui
  ports:
  - name: ui
    port: 8080
    targetPort: 8080

[root@master01 05-skywalking]# dig @10.96.0.10 skywalking-oap-svc.dev.svc.cluster.local +short
10.111.30.115

03-skywalking-ingress.yaml

The lines commented with # show the newer networking.k8s.io/v1 syntax.

apiVersion: extensions/v1beta1
#apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: skywalking-ingress    
  namespace: dev
spec:
  ingressClassName: "nginx"
  rules:
  - host: sky.oldxu.net
    http:
      paths: 
      - path: /
        pathType: Prefix
        backend:
          serviceName: skywalking-ui-svc
          servicePort: 8080
          #service:
          #  name: skywalking-ui-svc
          #  port:
          #    name: ui

Visit sky.oldxu.net:30080

04-skywalking-agent-demo.yaml (client demo)

Package the SkyWalking agent as a Docker image; business containers will then pick up the agent through a sidecar-style mount.

Download the agent, write the Dockerfile, push the image

wget https://linux.oldxu.net/apache-skywalking-java-agent-8.8.0.tgz

[root@master01 04-skywalking-agent-demo]# cat Dockerfile 
FROM alpine
ADD ./apache-skywalking-java-agent-8.8.0.tgz /

[root@master01 04-skywalking-agent-demo]# ls
apache-skywalking-java-agent-8.8.0.tgz  Dockerfile
docker build -t harbor.oldxu.net/springcloud/skywalking-java-agent:8.8 .
docker push harbor.oldxu.net/springcloud/skywalking-java-agent:8.8

# Sidecar-style approach (ELK-style Pod log collection works the same way)
The business container mounts the prepared skywalking-agent image's files through a shared volume.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: skywalking-agent-demo
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      imagePullSecrets:
      - name: harbor-admin
      
      volumes:   # define the shared volume
      - name: skywalking-agent
        emptyDir: {}
        
      initContainers: # init container: copy the agent files from its image into the shared volume
      - name: init-skywalking-agent
        image: harbor.oldxu.net/springcloud/skywalking-java-agent:8.8
        command:
        - 'sh'
        - '-c'
        - 'mkdir -p /agent; cp -r /skywalking-agent/* /agent;'
        
        volumeMounts:
        - name: skywalking-agent
          mountPath: /agent
      
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: skywalking-agent
          mountPath: /skywalking-agent/
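The hand-off above can be simulated locally: one temp directory stands in for the agent files baked into the image, another for the shared emptyDir (the paths and file names here are stand-ins, not the real image contents):

```shell
# Simulate the initContainer copying the agent into the shared volume.
img=$(mktemp -d)   # stands in for the agent image's filesystem
vol=$(mktemp -d)   # stands in for the emptyDir volume
mkdir -p "$img/skywalking-agent"
: > "$img/skywalking-agent/skywalking-agent.jar"
# the initContainer's command, with /agent replaced by the temp volume:
mkdir -p "$vol/agent"
cp -r "$img/skywalking-agent/"* "$vol/agent/"
ls "$vol/agent"
```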

06-service-all/ (RuoYi business services: system, auth, gateway, monitor, ui)

Migration plan

6.1 Migrating ruoyi-system

1. Build the system module with Maven

Paths and layout:

cd /root/k8sFile/project/danji-ruoyi/guanWang
[root@node4 guanWang]# ls
logs  note.txt  RuoYi-Cloud  skywalking-agent  startServer.sh

[root@node4 RuoYi-Cloud]# pwd
/root/k8sFile/project/danji-ruoyi/guanWang/RuoYi-Cloud
[root@node4 RuoYi-Cloud]# ls
bin  docker  LICENSE  pom.xml  README.md  ruoyi-api  ruoyi-auth  ruoyi-common  ruoyi-gateway  ruoyi-modules  ruoyi-ui  ruoyi-visual  sql

[root@node4 RuoYi-Cloud]# mvn package -Dmaven.test.skip=true -pl ruoyi-modules/ruoyi-system/ -am


2. Write the Dockerfile

vim ruoyi-modules/ruoyi-system/Dockerfile

FROM openjdk:8-jre-alpine
COPY ./target/*.jar /ruoyi-modules-system.jar
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 8080
CMD ["/bin/sh","-c","/entrypoint.sh"]

3. Write entrypoint.sh

For reference, the command used when running system the traditional way:

# start ruoyi-system
nohup java -javaagent:./skywalking-agent/skywalking-agent.jar \
-Dskywalking.agent.service_name=ruoyi-system \
-Dskywalking.collector.backend_service=192.168.79.35:11800 \
-Dspring.profiles.active=dev \
-Dspring.cloud.nacos.config.file-extension=yml \
-Dspring.cloud.nacos.discovery.server-addr=192.168.79.35:8848 \
-Dspring.cloud.nacos.config.server-addr=192.168.79.35:8848 \
-jar RuoYi-Cloud/ruoyi-modules/ruoyi-system/target/ruoyi-modules-system.jar &>/var/log/system.log &

# entrypoint.sh
[root@node4 ruoyi-system]# cat entrypoint.sh
# set the port
PARAMS="--server.port=${Server_Port:-8080}"

# JVM heap settings
JAVA_OPTS="-Xms${XMS_OPTS:-150m} -Xmx${XMX_OPTS:-150m}"

# Nacos options
NACOS_OPTS=" \
-Djava.security.egd=file:/dev/./urandom \
-Dfile.encoding=utf8 \
-Dspring.profiles.active=${Nacos_Active:-dev} \
-Dspring.cloud.nacos.config.file-extension=yml \
-Dspring.cloud.nacos.discovery.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848} \
-Dspring.cloud.nacos.config.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848} 
"

# SkyWalking options:
# the sidecar-style initContainer has already placed the agent jar inside the Pod.
SKY_OPTS="-javaagent:/skywalking-agent/skywalking-agent.jar \
-Dskywalking.agent.service_name=ruoyi-system \
-Dskywalking.collector.backend_service=${Sky_Server_Addr:-localhost:11800}
"

# launch command (sky options, Nacos options, JVM heap options, the jar, then the params)
java ${SKY_OPTS} ${NACOS_OPTS} ${JAVA_OPTS} -jar /ruoyi-modules-system.jar ${PARAMS}

# files in place
[root@node4 ruoyi-system]# ls
Dockerfile  entrypoint.sh  pom.xml  src  target
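With every variable unset, the key parts of the script above render to the launch line below (a dry run that only builds the command string; the SKY/JAVA options are elided for brevity):

```shell
# Render the launch line with all defaults in place.
unset Server_Port Nacos_Active Nacos_Server_Addr
PARAMS="--server.port=${Server_Port:-8080}"
ACTIVE="-Dspring.profiles.active=${Nacos_Active:-dev}"
NACOS="-Dspring.cloud.nacos.discovery.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848}"
cmd="java ${ACTIVE} ${NACOS} -jar /ruoyi-modules-system.jar ${PARAMS}"
echo "$cmd"
```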

4. Build and push the image

docker build -t harbor.oldxu.net/springcloud/ruoyi-system:v1.0 .
docker push harbor.oldxu.net/springcloud/ruoyi-system:v1.0

5. Update the system module configuration

Before running system on Kubernetes, log in to Nacos and update ruoyi-system-dev.yml:
change the Redis host and MySQL host, and add the sentinel block.

# Spring configuration
spring:
  cloud:
    sentinel:
      eager: true
      transport:
        dashboard: sentinel-svc.dev.svc.cluster.local:8718 
  redis:
    host: redis-svc.dev.svc.cluster.local
    port: 6379
    password: 
  datasource:
    druid:
      stat-view-servlet:
        enabled: true
        loginUsername: admin
        loginPassword: 123456
    dynamic:
      druid:
        initial-size: 5
        min-idle: 5
        maxActive: 20
        maxWait: 60000
        timeBetweenEvictionRunsMillis: 60000
        minEvictableIdleTimeMillis: 300000
        validationQuery: SELECT 1 FROM DUAL
        testWhileIdle: true
        testOnBorrow: false
        testOnReturn: false
        poolPreparedStatements: true
        maxPoolPreparedStatementPerConnectionSize: 20
        filters: stat,slf4j
        connectionProperties: druid.stat.mergeSql\=true;druid.stat.slowSqlMillis\=5000
      datasource:
          # primary datasource
          master:
            driver-class-name: com.mysql.cj.jdbc.Driver
            url: jdbc:mysql://mysql-ruoyi-svc.dev.svc.cluster.local:3306/ry-cloud?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
            username: root
            password: oldxu
          # replica datasource
          # slave:
            # username: 
            # password: 
            # url: 
            # driver-class-name: 
      # seata: true    # enable the Seata proxy; every datasource is proxied by default, individual ones can opt out

# Seata configuration
seata:
  # off by default; if enabled, spring.datasource.dynamic.seata must be enabled as well
  enabled: false
  # Seata application id, defaults to ${spring.application.name}
  application-id: ${spring.application.name}
  # Seata transaction group, used as the TC cluster name
  tx-service-group: ${spring.application.name}-group
  # disable automatic datasource proxying
  enable-auto-data-source-proxy: false
  # service settings
  service:
    # virtual group to cluster mapping
    vgroup-mapping:
      ruoyi-system-group: default
  config:
    type: nacos
    nacos:
      serverAddr: 127.0.0.1:8848
      group: SEATA_GROUP
      namespace:
  registry:
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      namespace:

# MyBatis configuration
mybatis:
    # package to scan for type aliases
    typeAliasesPackage: com.ruoyi.system
    # mapper scanning: find every mapper.xml
    mapperLocations: classpath:mapper/**/*.xml

# Swagger configuration
swagger:
  title: 系统模块接口文档
  license: Powered By ruoyi
  licenseUrl: https://ruoyi.vip

# Verify that redis, mysql and sentinel-svc resolve
[root@master01 bin]# dig @10.96.0.10 redis-svc.dev.svc.cluster.local +short
10.111.240.148

[root@master01 bin]# dig @10.96.0.10 sentinel-svc.dev.svc.cluster.local +short
10.111.31.36

[root@master01 bin]# dig @10.96.0.10 mysql-ruoyi-svc.dev.svc.cluster.local +short
10.244.1.130

01-system-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-system
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      imagePullSecrets:
      - name: harbor-admin
      
      volumes:
      - name: skywalking-agent
        emptyDir: {}
      
      initContainers:
      - name: init-sky-java-agent
        image: harbor.oldxu.net/springcloud/skywalking-java-agent:8.8
        command:
        - 'sh'
        - '-c'
        - 'mkdir -p /agent; cp -r /skywalking-agent/* /agent/;'
        
        volumeMounts:
        - name: skywalking-agent
          mountPath: /agent
      
      containers:
      - name: system
        image: harbor.oldxu.net/springcloud/ruoyi-system:v1.0
        env:
        - name: Nacos_Active
          value: dev
        - name: Nacos_Server_Addr
          value: "nacos-svc.dev.svc.cluster.local:8848"
        - name: Sky_Server_Addr
          value: "skywalking-oap-svc.dev.svc.cluster.local:11800"
        - name: XMS_OPTS
          value: 200m
        - name: XMX_OPTS
          value: 200m
        
        ports:
        - containerPort: 8080
        livenessProbe:        
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 10
        volumeMounts:
        - name: skywalking-agent
          mountPath: /skywalking-agent/        


# Verify nacos and skywalking-oap resolve
[root@master01 bin]# dig @10.96.0.10 nacos-svc.dev.svc.cluster.local +short
10.244.1.129
10.244.0.143
10.244.2.154

[root@master01 bin]# dig @10.96.0.10 skywalking-oap-svc.dev.svc.cluster.local +short
10.111.30.115

Log in to Nacos, Sentinel and SkyWalking to confirm the service is registered and reporting.

6.2 Migrating ruoyi-auth

1. Build the auth module

[root@node4 RuoYi-Cloud]# pwd
/root/k8sFile/project/danji-ruoyi/guanWang/RuoYi-Cloud

mvn package -Dmaven.test.skip=true -pl ruoyi-auth/ -am


2. Write the Dockerfile and entrypoint.sh

[root@node4 ruoyi-auth]# pwd
/root/k8sFile/project/danji-ruoyi/guanWang/RuoYi-Cloud/ruoyi-auth
[root@node4 ruoyi-auth]# ls
Dockerfile  entrypoint.sh  pom.xml  src  target

Dockerfile

FROM openjdk:8-jre-alpine
COPY ./target/*.jar /ruoyi-auth.jar
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

EXPOSE 8080
CMD ["/bin/sh","-c","/entrypoint.sh"]

entrypoint.sh

[root@node4 ruoyi-auth]# cat entrypoint.sh 
# set the port; defaults to 8080 when not passed
PARAMS="--server.port=${Server_Port:-8080}"

# JVM heap settings
JAVA_OPTS="-Xms${XMS_OPTS:-100m} -Xmx${XMX_OPTS:-100m}"

# Nacos options
NACOS_OPTS="
-Djava.security.egd=file:/dev/./urandom \
-Dfile.encoding=utf8 \
-Dspring.profiles.active=${Nacos_Active:-dev} \
-Dspring.cloud.nacos.config.file-extension=yml \
-Dspring.cloud.nacos.discovery.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848} \
-Dspring.cloud.nacos.config.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848}
"

# SkyWalking options
SKY_OPTS="
-javaagent:/skywalking-agent/skywalking-agent.jar \
-Dskywalking.agent.service_name=ruoyi-auth \
-Dskywalking.collector.backend_service=${Sky_Server_Addr:-localhost:11800}
"

# launch command (sky options, Nacos options, JVM heap options, the jar, then the params)
java ${SKY_OPTS} ${NACOS_OPTS} ${JAVA_OPTS} -jar /ruoyi-auth.jar ${PARAMS}

3. Build and push the image

docker build -t harbor.oldxu.net/springcloud/ruoyi-auth:v1.0 .
docker push harbor.oldxu.net/springcloud/ruoyi-auth:v1.0

4. Update ruoyi-auth-dev.yml in Nacos

Before running auth on Kubernetes, update the ruoyi-auth-dev.yml configuration through Nacos:

spring:
  cloud:
    sentinel:
      eager: true
      transport:
        dashboard: sentinel-svc.dev.svc.cluster.local:8718 
  redis:
    host: redis-svc.dev.svc.cluster.local
    port: 6379
    password: 

5. 02-auth-deploy.yaml

Run the auth application:

[root@master01 06-all-service]# cat 02-auth-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-auth
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      imagePullSecrets:
      - name: harbor-admin
 
      volumes:
      - name: skywalking-agent
        emptyDir: {}
              
      initContainers:
      - name: init-sky-java-agent
        image: harbor.oldxu.net/springcloud/skywalking-java-agent:8.8
        command:
        - 'sh'
        - '-c'
        - 'mkdir -p /agent; cp -r /skywalking-agent/* /agent/;'
        volumeMounts:
        - name: skywalking-agent
          mountPath: /agent
      
      containers:
      - name: auth
        image: harbor.oldxu.net/springcloud/ruoyi-auth:v1.0
        env:
        - name: Nacos_Active
          value: dev
        - name: Nacos_Server_Addr
          value: "nacos-svc.dev.svc.cluster.local:8848"          
        - name: Sky_Server_Addr
          value: "skywalking-oap-svc.dev.svc.cluster.local:11800"
        - name: XMS_OPTS
          value: 200m
        - name: XMX_OPTS
          value: 200m
        ports:
        - containerPort: 8080
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 10
        volumeMounts:
        - name: skywalking-agent
          mountPath: /skywalking-agent/


6.3 Migrating ruoyi-gateway

1. Build the gateway module

[root@node4 RuoYi-Cloud]# pwd
/root/k8sFile/project/danji-ruoyi/guanWang/RuoYi-Cloud
[root@node4 RuoYi-Cloud]# ls
bin  docker  LICENSE  pom.xml  README.md  ruoyi-api  ruoyi-auth  ruoyi-common  ruoyi-gateway  ruoyi-modules  ruoyi-ui  ruoyi-visual  sql
mvn package -Dmaven.test.skip=true -pl ruoyi-gateway/ -am


2. Write the Dockerfile and entrypoint.sh

[root@node4 ruoyi-gateway]# ls
Dockerfile  entrypoint.sh  pom.xml  src  target

Dockerfile

# With the alpine image (openjdk:8-jre-alpine), requests to /code fail with a [gateway exception]; use the full openjdk:8-jre image instead
FROM openjdk:8-jre
COPY ./target/*.jar /ruoyi-gateway.jar
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 8080
CMD ["/bin/sh","-c","/entrypoint.sh"]

entrypoint.sh

# set the port; defaults to 8080 when not passed
PARAMS="--server.port=${Server_Port:-8080}"

# JVM heap settings
JAVA_OPTS="-Xms${XMS_OPTS:-100m} -Xmx${XMX_OPTS:-100m}"

# Nacos options
NACOS_OPTS="
-Djava.security.egd=file:/dev/./urandom \
-Dfile.encoding=utf8 \
-Dspring.profiles.active=${Nacos_Active:-dev} \
-Dspring.cloud.nacos.config.file-extension=yml \
-Dspring.cloud.nacos.discovery.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848} \
-Dspring.cloud.nacos.config.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848}
"

# SkyWalking options
SKY_OPTS="
-javaagent:/skywalking-agent/skywalking-agent.jar \
-Dskywalking.agent.service_name=ruoyi-gateway \
-Dskywalking.collector.backend_service=${Sky_Server_Addr:-localhost:11800}
"

# launch command (sky options, Nacos options, JVM heap options, the jar, then the params)
java ${SKY_OPTS} ${NACOS_OPTS} ${JAVA_OPTS} -jar /ruoyi-gateway.jar ${PARAMS}

3. Build and push the image

docker build -t harbor.oldxu.net/springcloud/ruoyi-gateway:v1.0 .
docker push harbor.oldxu.net/springcloud/ruoyi-gateway:v1.0

4. Update the gateway configuration (ruoyi-gateway-dev.yml)

Before running gateway on Kubernetes, update ruoyi-gateway-dev.yml through Nacos:

spring:
  redis:
    host: redis-svc.dev.svc.cluster.local
    port: 6379
    password: 
  sentinel:
    eager: true
    transport:
      dashboard: sentinel-svc.dev.svc.cluster.local:8718
    
    datasource:
      ds1:
        nacos:
          server-addr: nacos-svc.dev.svc.cluster.local:8848
          dataId: sentinel-ruoyi-gateway
          groupId: DEFAULT_GROUP
          data-type: json
          rule-type: flow          
  cloud:
    nacos:
      discovery:
        server-addr: nacos-svc.dev.svc.cluster.local:8848
      config:
        server-addr: nacos-svc.dev.svc.cluster.local:8848
    gateway:
      discovery:
        locator:
          lowerCaseServiceId: true
          enabled: true
      routes:
        # auth center
        - id: ruoyi-auth
          uri: lb://ruoyi-auth
          predicates:
            - Path=/auth/**
          filters:
            # captcha handling
            - CacheRequestFilter
            - ValidateCodeFilter
            - StripPrefix=1
        # code generation
        - id: ruoyi-gen
          uri: lb://ruoyi-gen
          predicates:
            - Path=/code/**
          filters:
            - StripPrefix=1
        # scheduled jobs
        - id: ruoyi-job
          uri: lb://ruoyi-job
          predicates:
            - Path=/schedule/**
          filters:
            - StripPrefix=1
        # system module
        - id: ruoyi-system
          uri: lb://ruoyi-system
          predicates:
            - Path=/system/**
          filters:
            - StripPrefix=1
        # file service
        - id: ruoyi-file
          uri: lb://ruoyi-file
          predicates:
            - Path=/file/**
          filters:
            - StripPrefix=1

# security configuration
security:
  # captcha
  captcha:
    enabled: true
    type: math
  # XSS protection
  xss:
    enabled: true
    excludeUrls:
      - /system/notice
  # whitelist: paths exempt from auth checks
  ignore:
    whites:
      - /auth/logout
      - /auth/login
      - /auth/register
      - /*/v2/api-docs
      - /csrf
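The `StripPrefix=1` filter in each route drops the first path segment before the request reaches the target service, so `/system/user/list` arrives at ruoyi-system as `/user/list`. A rough shell model of that rewrite (an illustration, not the gateway's actual code):

```shell
# Drop the first path segment, as StripPrefix=1 does.
strip_prefix() {
  # ${1#/*/} removes the shortest leading match of "/<segment>/"
  echo "/${1#/*/}"
}
strip_prefix /system/user/list   # -> /user/list
strip_prefix /auth/login         # -> /login
```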

5. 03-gateway-deploy-svc.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-gateway
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      imagePullSecrets:
      - name: harbor-admin
      
      volumes:
      - name: skywalking-agent
        emptyDir: {}
      
      initContainers:
      - name: init-sky-java-agent
        image: harbor.oldxu.net/springcloud/skywalking-java-agent:8.8
        command:
        - 'sh'
        - '-c'
        - 'mkdir -p /agent; cp -r /skywalking-agent/* /agent/;'
        volumeMounts:
        - name: skywalking-agent
          mountPath: /agent
      containers:
      - name: gateway
        image: harbor.oldxu.net/springcloud/ruoyi-gateway:v1.0
        env:
        - name: Nacos_Active
          value: dev
        - name: Nacos_Server_Addr
          value: "nacos-svc.dev.svc.cluster.local:8848"
        - name: Sky_Server_Addr
          value: "skywalking-oap-svc.dev.svc.cluster.local:11800"
        - name: XMS_OPTS
          value: 500m
        - name: XMX_OPTS
          value: 500m
        ports:
        - containerPort: 8080
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 10
        volumeMounts:
        - name: skywalking-agent
          mountPath: /skywalking-agent/   

---
apiVersion: v1
kind: Service
metadata:
  name: gateway-svc
  namespace: dev
spec:
  selector:
    app: gateway
  ports:
  - port: 8080
    targetPort: 8080  


6.4 Migrating ruoyi-monitor

1. Build the monitor module

[root@node4 RuoYi-Cloud]# pwd
/root/k8sFile/project/danji-ruoyi/guanWang/RuoYi-Cloud

[root@node4 RuoYi-Cloud]# ls
bin  docker  LICENSE  pom.xml  README.md  ruoyi-api  ruoyi-auth  ruoyi-common  ruoyi-gateway  ruoyi-modules  ruoyi-ui  ruoyi-visual  sql

[root@node4 RuoYi-Cloud]# mvn package -Dmaven.test.skip=true -pl ruoyi-visual/ruoyi-monitor/ -am


2. Write the Dockerfile and entrypoint.sh

cd /root/k8sFile/project/danji-ruoyi/guanWang/RuoYi-Cloud/ruoyi-visual/ruoyi-monitor

[root@node4 ruoyi-monitor]# ls
Dockerfile  entrypoint.sh  pom.xml  src  target

Dockerfile

FROM openjdk:8-jre-alpine
COPY ./target/*.jar  /ruoyi-monitor.jar
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 8080
CMD ["/bin/sh", "-c", "/entrypoint.sh"]

entrypoint.sh

# set the port; defaults to 8080 when not passed
PARAMS="--server.port=${Server_Port:-8080}"

# JVM heap settings
JAVA_OPTS="-Xms${XMS_OPTS:-100m} -Xmx${XMX_OPTS:-100m}"

# Nacos options
NACOS_OPTS="
-Djava.security.egd=file:/dev/./urandom \
-Dfile.encoding=utf8 \
-Dspring.profiles.active=${Nacos_Active:-dev} \
-Dspring.cloud.nacos.config.file-extension=yml \
-Dspring.cloud.nacos.discovery.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848} \
-Dspring.cloud.nacos.config.server-addr=${Nacos_Server_Addr:-127.0.0.1:8848}
"

# SkyWalking options
SKY_OPTS="
-javaagent:/skywalking-agent/skywalking-agent.jar \
-Dskywalking.agent.service_name=ruoyi-monitor \
-Dskywalking.collector.backend_service=${Sky_Server_Addr:-localhost:11800}
"

# launch command (sky options, Nacos options, JVM heap options, the jar, then the params)
java ${SKY_OPTS} ${NACOS_OPTS} ${JAVA_OPTS} -jar /ruoyi-monitor.jar ${PARAMS}
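
The `${VAR:-default}` expansions above are what make every option overridable via container env vars in the Deployment. A quick standalone check of the pattern:

```shell
# ${VAR:-default} falls back to the default only when VAR is unset or empty.
unset Server_Port
PARAMS="--server.port=${Server_Port:-8080}"
echo "$PARAMS"            # --server.port=8080

Server_Port=9090
PARAMS="--server.port=${Server_Port:-8080}"
echo "$PARAMS"            # --server.port=9090
```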

3 Build the image and push it to the registry

docker build -t  harbor.oldxu.net/springcloud/ruoyi-monitor:v1.0 .
docker push harbor.oldxu.net/springcloud/ruoyi-monitor:v1.0

4 Update the monitor configuration (ruoyi-monitor-dev.yml)

Before running monitor on Kubernetes, first update the relevant settings in ruoyi-monitor-dev.yml via Nacos:

# spring
spring:
  cloud:
    sentinel:
      eager: true
      transport:
        dashboard: sentinel-svc.dev.svc.cluster.local:8718 
  security:
    user:
      name: ruoyi
      password: 123456
  boot:
    admin:
      ui:
        title: 若依服务状态监控

5 04-monitor-deploy-svc-ingress.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-monitor
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      imagePullSecrets:
      - name: harbor-admin

      volumes:
      - name: skywalking-agent
        emptyDir: {}

      initContainers:
      - name: init-sky-java-agent
        image: harbor.oldxu.net/springcloud/skywalking-java-agent:8.8
        command:
        - 'sh'
        - '-c'
        - 'mkdir -p /agent; cp -r /skywalking-agent/* /agent/;'
        volumeMounts:
        - name: skywalking-agent
          mountPath: /agent
      containers:
      - name: monitor
        image: harbor.oldxu.net/springcloud/ruoyi-monitor:v1.0
        env:
        - name: Nacos_Active
          value: dev
        - name: Nacos_Server_Addr
          value: "nacos-svc.dev.svc.cluster.local:8848"
        - name: Sky_Server_Addr
          value: "skywalking-oap-svc.dev.svc.cluster.local:11800"
        - name: XMS_OPTS
          value: 200m
        - name: XMX_OPTS
          value: 200m
        ports:
        - containerPort: 8080
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 10
        volumeMounts:
        - name: skywalking-agent
          mountPath: /skywalking-agent/

---
apiVersion: v1
kind: Service
metadata:
  name: monitor-svc
  namespace: dev
spec:
  selector:
    app: monitor
  ports:
  - port: 8080
    targetPort: 8080

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monitor-ingress
  namespace: dev
spec:
  ingressClassName: "nginx"
  rules:
  - host: "monitor.oldxu.net"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: monitor-svc
            port:
              number: 8080

A quirky issue: monitor.oldxu.net:30080 gets redirected to monitor.oldxu.net/login, dropping the NodePort. Where does the monitor application trigger this redirect? Also, static js and css files are requested on port 80 instead of 30080. The redirect most likely comes from the application building absolute URLs from the Host header, which does not carry the NodePort; exposing it through the Ingress on port 80/443, or forwarding X-Forwarded-Host/X-Forwarded-Port headers, usually avoids this.

6.5 Migrate the ruoyi-ui frontend

1 Update the frontend config ruoyi-ui/vue.config.js

[root@node4 ruoyi-ui]# pwd
/root/k8sFile/project/danji-ruoyi/guanWang/RuoYi-Cloud/ruoyi-ui

Change the gateway address:

  devServer: {
    host: '0.0.0.0',
    port: port,
    open: true,
    proxy: {
      // detail: https://cli.vuejs.org/config/#devserver-proxy
      [process.env.VUE_APP_BASE_API]: {
        target: `http://gateway-svc.dev.svc.cluster.local:8080`,
        changeOrigin: true,
        pathRewrite: {
          ['^' + process.env.VUE_APP_BASE_API]: ''
        }
      }
    },
    disableHostCheck: true
  },
  css: {

[root@master01 06-all-service]# dig @10.96.0.10 gateway-svc.dev.svc.cluster.local +short
10.97.133.31
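
The devServer proxy strips the `VUE_APP_BASE_API` prefix before forwarding a request to the gateway. The `pathRewrite` rule above behaves like this shell prefix strip (`/dev-api` is a hypothetical value of `VUE_APP_BASE_API`):

```shell
BASE_API="/dev-api"                # hypothetical VUE_APP_BASE_API value
path="/dev-api/system/user/list"
rewritten="${path#$BASE_API}"      # drop the leading prefix, like pathRewrite '^/dev-api' -> ''
echo "$rewritten"                  # /system/user/list
```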

2 Build the frontend project

npm install --registry=https://registry.npmmirror.com
npm run build:prod


3 Write the Dockerfile

[root@node4 ruoyi-ui]# ls
babel.config.js  bin  build  dist  Dockerfile  node_modules  package.json  package-lock.json  public  README.md  src  vue.config.js  vue.config.js.bak  vue.config.js-danji
[root@node4 ruoyi-ui]# 
[root@node4 ruoyi-ui]# cat Dockerfile 
FROM nginx
COPY ./dist /code/

4 Build the image and push it to the registry

docker build -t harbor.oldxu.net/springcloud/ruoyi-ui:v1.0 .
docker push harbor.oldxu.net/springcloud/ruoyi-ui:v1.0

5 Create the ConfigMap (ruoyi.oldxu.net.conf)

ruoyi.oldxu.net.conf

server {
	listen 80;
	server_name ruoyi.oldxu.net;
	charset utf-8;
	root /code;
	location / {
		try_files $uri $uri/ /index.html;
		index index.html index.htm;
	}

	location /prod-api/ {
		proxy_set_header Host $http_host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header REMOTE-HOST $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_pass http://gateway-svc.dev.svc.cluster.local:8080/;
	}
}

Create the ConfigMap from this file:

kubectl create configmap ruoyi-ui-conf --from-file=ruoyi.oldxu.net.conf -n dev
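
The `try_files $uri $uri/ /index.html` line is what makes Vue Router history-mode URLs work: any path that is not a real file falls back to index.html. A rough simulation of that lookup (the paths under /tmp are purely illustrative):

```shell
# Simulate nginx try_files: serve the file if it exists, else fall back to /index.html.
root="/tmp/ruoyi-code-demo"
mkdir -p "$root/static"
echo "console.log('app')" > "$root/static/app.js"

resolve() {
  if [ -f "$root$1" ]; then echo "$1"; else echo "/index.html"; fi
}

resolve /static/app.js    # existing asset -> served as-is
resolve /user/profile     # SPA route -> falls back to /index.html
```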

6 05-ui-dp-svc-ingress.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-ui
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ui
  template:
    metadata:
      labels:
        app: ui
    spec:
      imagePullSecrets:
      - name: harbor-admin
      
      containers:
      - name: ui
        image: harbor.oldxu.net/springcloud/ruoyi-ui:v1.0
        
        ports:
        - containerPort: 80
        
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 10
        
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 10
          
        volumeMounts:
        - name: ngxconfs
          mountPath: /etc/nginx/conf.d/        
      volumes:
      - name: ngxconfs
        configMap:
          name: ruoyi-ui-conf
            
---
apiVersion: v1
kind: Service
metadata:
  name: ui-svc
  namespace: dev
spec:
  selector:
    app: ui
  ports:
  - port: 80
    targetPort: 80


---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ui-ingress
  namespace: dev
spec:
  ingressClassName: "nginx"
  rules:
  - host: "ruoyi.oldxu.net"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui-svc
            port:
              number: 80
                  

Access the site:

END

Other notes / migration summary


# Troubleshooting: list pods sorted by memory usage
kubectl top pod --sort-by=memory --all-namespaces
