Configuring a Hadoop Cluster on Kubernetes with NFS Storage


Contents

1. Introduction

2. NFS service & nfs-provisioner configuration

2.1 Install the NFS client on every Kubernetes node

2.2 Install and configure the NFS server

2.3 Dynamically provision PVs with nfs-provisioner (file modified)

3. Hadoop manifests

3.1 hadoop.yaml

3.2 hadoop-datanode.yaml

3.3 yarn-node.yaml

3.4 Apply the manifests and verify

3.5 Connectivity check

4. Errors & fixes

4.1 NFS error: selfLink was empty

4.2 NFS error: waiting for a volume to be created


1. Introduction

The base environment is a Kubernetes cluster deployed with kubeasz (https://github.com/easzlab/kubeasz).
A sample kubeasz-based setup: https://blog.csdn.net/zhangxueleishamo/article/details/108670578
NFS serves as the backing storage.

2. NFS service & nfs-provisioner configuration

2.1 Install the NFS client on every Kubernetes node

yum -y install nfs-utils

2.2 Install and configure the NFS server

yum -y install nfs-utils rpcbind nfs-server

# cat /etc/exports
/data/hadoop	*(rw,no_root_squash,no_all_squash,sync)
### Export path and permission options; adjust these to your environment

systemctl start rpcbind 
systemctl enable rpcbind

systemctl start nfs
systemctl enable nfs
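
Before pointing the provisioner at the share, it is worth confirming the export is reachable from a Kubernetes node (assuming the NFS server address 10.2.1.190 used throughout this post):

```
# showmount -e 10.2.1.190
Export list for 10.2.1.190:
/data/hadoop *

# mount -t nfs 10.2.1.190:/data/hadoop /mnt
# touch /mnt/.write-test && rm /mnt/.write-test
# umount /mnt
```

If the mount or the write fails here, fix the export first; nothing on the Kubernetes side will work until it does.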

2.3 Dynamically provision PVs with nfs-provisioner (file modified)

# cat nfs-provisioner.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: dev
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: dev 
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: dev
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: quay.io/external_storage/nfs-client-provisioner:latest
          image: jmgao1983/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # This provisioner name is referenced by the StorageClass below
              value: nfs-storage
            - name: NFS_SERVER
              value: 10.2.1.190
            - name: NFS_PATH
              value: /data/hadoop
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.2.1.190
            path: /data/hadoop

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-storage
volumeBindingMode: Immediate
reclaimPolicy: Delete


### Apply, then check the ServiceAccount and StorageClass ###

# kubectl apply -f nfs-provisioner.yaml 

# kubectl get sa,sc  -n dev 
NAME                                    SECRETS   AGE
serviceaccount/nfs-client-provisioner   1         47m

NAME                                                PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/nfs-storage (default)   nfs-storage   Delete          Immediate           false                  45m
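
Before deploying Hadoop, a throwaway claim can confirm that dynamic provisioning actually works. The following is a minimal sketch (the name test-claim is arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: dev
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs-storage
```

Apply it, check that kubectl get pvc -n dev shows the claim as Bound, then delete it; with reclaimPolicy Delete the backing directory on the NFS export is cleaned up as well.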

3. Hadoop manifests

3.1 # cat hadoop.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-hadoop-conf
  namespace: dev
data:
  HDFS_MASTER_SERVICE: hadoop-hdfs-master
  HDOOP_YARN_MASTER: hadoop-yarn-master
---
apiVersion: v1
kind: Service
metadata:
  name: hadoop-hdfs-master
  namespace: dev
spec:
  type: NodePort
  selector:
    name: hdfs-master
  ports:
    - name: rpc
      port: 9000
      targetPort: 9000
    - name: http
      port: 50070
      targetPort: 50070
      nodePort: 32007
---
apiVersion: v1
kind: Service
metadata:
  name: hadoop-yarn-master
  namespace: dev
spec:
  type: NodePort
  selector:
    name: yarn-master
  ports:
     - name: "8030"
       port: 8030
     - name: "8031"
       port: 8031
     - name: "8032"
       port: 8032
     - name: http
       port: 8088
       targetPort: 8088
       nodePort: 32088
---
apiVersion: v1
kind: Service
metadata:
  name: yarn-node
  namespace: dev
spec:
  clusterIP: None
  selector:
    name: yarn-node
  ports:
     - port: 8040
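
The yarn-node Service above is headless (clusterIP: None): it does no load balancing and exists only to give the yarn-node StatefulSet pods stable per-pod DNS records. With the default cluster domain, each node manager becomes resolvable as, for example:

```
yarn-node-0.yarn-node.dev.svc.cluster.local
yarn-node-1.yarn-node.dev.svc.cluster.local
yarn-node-2.yarn-node.dev.svc.cluster.local
```

(cluster.local is the default DNS suffix; adjust if your cluster uses a different domain.)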

3.2 # cat hadoop-datanode.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hdfs-master
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      name: hdfs-master
  template:
    metadata:
      labels:
        name: hdfs-master
    spec:
      containers:
        - name: hdfs-master
          image: kubeguide/hadoop:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
            - containerPort: 50070
          env:
            - name: HADOOP_NODE_TYPE
              value: namenode
            - name: HDFS_MASTER_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDFS_MASTER_SERVICE
            - name: HDOOP_YARN_MASTER
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDOOP_YARN_MASTER
      restartPolicy: Always
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hadoop-datanode
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      name: hadoop-datanode
  serviceName: hadoop-datanode
  template:
    metadata:
      labels:
        name: hadoop-datanode
    spec:
      containers:
        - name: hadoop-datanode
          image: kubeguide/hadoop:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
            - containerPort: 50070
          volumeMounts:
            - name: data
              mountPath: /root/hdfs/
              subPath: hdfs
            - name: data
              mountPath: /usr/local/hadoop/logs/
              subPath: logs
          env:
            - name: HADOOP_NODE_TYPE
              value: datanode
            - name: HDFS_MASTER_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDFS_MASTER_SERVICE
            - name: HDOOP_YARN_MASTER
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDOOP_YARN_MASTER
      restartPolicy: Always
  volumeClaimTemplates:
    - metadata:
        name: data
        namespace: dev
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 2Gi
        storageClassName: "nfs-storage"

3.3 # cat yarn-node.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: yarn-master
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      name: yarn-master
  template:
    metadata:
      labels:
        name: yarn-master
    spec:
      containers:
        - name: yarn-master
          image: kubeguide/hadoop:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
            - containerPort: 50070
          env:
            - name: HADOOP_NODE_TYPE
              value: resourceman
            - name: HDFS_MASTER_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDFS_MASTER_SERVICE
            - name: HDOOP_YARN_MASTER
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDOOP_YARN_MASTER
      restartPolicy: Always
---
 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: yarn-node
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      name: yarn-node
  serviceName: yarn-node
  template:
    metadata:
      labels:
        name: yarn-node
    spec:
      containers:
        - name: yarn-node
          image: kubeguide/hadoop:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8040
            - containerPort: 8041
            - containerPort: 8042
          volumeMounts:
            - name: yarn-data
              mountPath: /root/hdfs/
              subPath: hdfs
            - name: yarn-data
              mountPath: /usr/local/hadoop/logs/
              subPath: logs
          env:
            - name: HADOOP_NODE_TYPE
              value: yarnnode
            - name: HDFS_MASTER_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDFS_MASTER_SERVICE
            - name: HDOOP_YARN_MASTER
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDOOP_YARN_MASTER
      restartPolicy: Always
  volumeClaimTemplates:
    - metadata:
        name: yarn-data
        namespace: dev
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 2Gi
        storageClassName: "nfs-storage"

3.4 Apply the manifests and verify

kubectl apply -f hadoop.yaml
kubectl apply -f hadoop-datanode.yaml
kubectl apply -f yarn-node.yaml


# kubectl get pv,pvc -n dev 
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/pvc-2bf83ccf-85eb-43d7-8d49-10a2617c1bde   2Gi        RWX            Delete           Bound    dev/data-hadoop-datanode-0   nfs-storage             34m
persistentvolume/pvc-5ecff2b2-ea9d-4d6f-851b-0ab2cecbbe54   2Gi        RWX            Delete           Bound    dev/yarn-data-yarn-node-1    nfs-storage             32m
persistentvolume/pvc-91132f6d-a3e1-4938-b8d7-674d6b0656a8   2Gi        RWX            Delete           Bound    dev/data-hadoop-datanode-2   nfs-storage             34m
persistentvolume/pvc-a44adf12-2505-4133-ab57-99a61c4d4476   2Gi        RWX            Delete           Bound    dev/data-hadoop-datanode-1   nfs-storage             34m
persistentvolume/pvc-c4bf1e26-936f-46f6-8529-98d2699a916e   2Gi        RWX            Delete           Bound    dev/yarn-data-yarn-node-2    nfs-storage             32m
persistentvolume/pvc-e6d360be-2f72-4c47-a99b-fee79ca5e03b   2Gi        RWX            Delete           Bound    dev/yarn-data-yarn-node-0    nfs-storage             32m

NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-hadoop-datanode-0   Bound    pvc-2bf83ccf-85eb-43d7-8d49-10a2617c1bde   2Gi        RWX            nfs-storage    39m
persistentvolumeclaim/data-hadoop-datanode-1   Bound    pvc-a44adf12-2505-4133-ab57-99a61c4d4476   2Gi        RWX            nfs-storage    34m
persistentvolumeclaim/data-hadoop-datanode-2   Bound    pvc-91132f6d-a3e1-4938-b8d7-674d6b0656a8   2Gi        RWX            nfs-storage    34m
persistentvolumeclaim/yarn-data-yarn-node-0    Bound    pvc-e6d360be-2f72-4c47-a99b-fee79ca5e03b   2Gi        RWX            nfs-storage    32m
persistentvolumeclaim/yarn-data-yarn-node-1    Bound    pvc-5ecff2b2-ea9d-4d6f-851b-0ab2cecbbe54   2Gi        RWX            nfs-storage    32m
persistentvolumeclaim/yarn-data-yarn-node-2    Bound    pvc-c4bf1e26-936f-46f6-8529-98d2699a916e   2Gi        RWX            nfs-storage    32m


# kubectl get all -n dev  -o wide 
NAME                                         READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
pod/hadoop-datanode-0                        1/1     Running   0          40m   172.20.4.65   10.2.1.194   <none>           <none>
pod/hadoop-datanode-1                        1/1     Running   0          35m   172.20.4.66   10.2.1.194   <none>           <none>
pod/hadoop-datanode-2                        1/1     Running   0          35m   172.20.4.67   10.2.1.194   <none>           <none>
pod/hdfs-master-5946bb8ff4-lt5mp             1/1     Running   0          40m   172.20.4.64   10.2.1.194   <none>           <none>
pod/nfs-client-provisioner-8ccc8b867-ndssr   1/1     Running   0          52m   172.20.4.63   10.2.1.194   <none>           <none>
pod/yarn-master-559c766d4c-jzz4s             1/1     Running   0          33m   172.20.4.68   10.2.1.194   <none>           <none>
pod/yarn-node-0                              1/1     Running   0          33m   172.20.4.69   10.2.1.194   <none>           <none>
pod/yarn-node-1                              1/1     Running   0          33m   172.20.4.70   10.2.1.194   <none>           <none>
pod/yarn-node-2                              1/1     Running   0          33m   172.20.4.71   10.2.1.194   <none>           <none>

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                       AGE   SELECTOR
service/hadoop-hdfs-master   NodePort    10.68.193.79    <none>        9000:26007/TCP,50070:32007/TCP                                40m   name=hdfs-master
service/hadoop-yarn-master   NodePort    10.68.243.133   <none>        8030:34657/TCP,8031:35352/TCP,8032:33633/TCP,8088:32088/TCP   40m   name=yarn-master
service/yarn-node            ClusterIP   None            <none>        8040/TCP                                                      40m   name=yarn-node

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS               IMAGES                                    SELECTOR
deployment.apps/hdfs-master              1/1     1            1           40m   hdfs-master              kubeguide/hadoop:latest                   name=hdfs-master
deployment.apps/nfs-client-provisioner   1/1     1            1           52m   nfs-client-provisioner   jmgao1983/nfs-client-provisioner:latest   app=nfs-client-provisioner
deployment.apps/yarn-master              1/1     1            1           33m   yarn-master              kubeguide/hadoop:latest                   name=yarn-master

NAME                                               DESIRED   CURRENT   READY   AGE   CONTAINERS               IMAGES                                    SELECTOR
replicaset.apps/hdfs-master-5946bb8ff4             1         1         1       40m   hdfs-master              kubeguide/hadoop:latest                   name=hdfs-master,pod-template-hash=5946bb8ff4
replicaset.apps/nfs-client-provisioner-8ccc8b867   1         1         1       52m   nfs-client-provisioner   jmgao1983/nfs-client-provisioner:latest   app=nfs-client-provisioner,pod-template-hash=8ccc8b867
replicaset.apps/yarn-master-559c766d4c             1         1         1       33m   yarn-master              kubeguide/hadoop:latest                   name=yarn-master,pod-template-hash=559c766d4c

NAME                               READY   AGE   CONTAINERS        IMAGES
statefulset.apps/hadoop-datanode   3/3     40m   hadoop-datanode   kubeguide/hadoop:latest
statefulset.apps/yarn-node         3/3     33m   yarn-node         kubeguide/hadoop:latest

Visit http://<node-ip>:32007 and http://<node-ip>:32088 to reach the HDFS and YARN web UIs.

3.5 Connectivity check

Create a directory on HDFS:

# kubectl exec -it hdfs-master-5946bb8ff4-lt5mp -n dev -- /bin/bash

# hdfs dfs -mkdir /BigData
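
A couple of follow-up commands inside the same pod confirm that the directory exists and that all three datanodes registered (a quick sketch; the exact report format varies with the Hadoop version in the image):

```
# hdfs dfs -ls /
# hdfs dfsadmin -report | grep -i 'live datanodes'
```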

Check the newly created directory in the Hadoop web UI.

4. Errors & fixes

4.1 NFS error: selfLink was empty

Unexpected error getting claim reference to claim "dev/data-hadoop-datanode-0": selfLink was empty, can't make reference

Cause: the provisioner lacked the permissions needed to create the volume.

Fix:

1) chmod 777 /data/hadoop  # open up permissions on the NFS export; 777 is for testing convenience only

2) Update the rules section of nfs-provisioner.yaml:

rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

4.2 NFS error: waiting for a volume to be created

persistentvolume-controller: waiting for a volume to be created, either by external provisioner "nfs-provisioner" or manually created by system administrator

Cause: Kubernetes 1.20 disabled the selfLink field, which this provisioner relies on:

https://github.com/kubernetes/kubernetes/pull/94397

Fix:

1) In the three Hadoop manifests, keep the namespace consistent with the one used by the NFS provisioner.

2) Edit the kube-apiserver.yaml manifest and add the following:

apiVersion: v1
-----
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=RemoveSelfLink=false # add this flag

This cluster was deployed with kubeasz, which has no such static manifest file, so the kube-apiserver systemd unit on every master has to be edited directly to add the same --feature-gates=RemoveSelfLink=false flag. (Do not leave an inline comment after the trailing backslash in the unit file; it would be passed to the process as extra arguments.)

# cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kube/bin/kube-apiserver \
  --advertise-address=10.2.1.190 \
  --allow-privileged=true \
  --anonymous-auth=false \
  --api-audiences=api,istio-ca \
  --authorization-mode=Node,RBAC \
  --token-auth-file=/etc/kubernetes/ssl/basic-auth.csv \
  --bind-address=10.2.1.190 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --endpoint-reconciler-type=lease \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.2.1.190:2379,https://10.2.1.191:2379,https://10.2.1.192:2379 \
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/admin.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/admin-key.pem \
  --service-account-issuer=kubernetes.default.svc \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca.pem \
  --service-cluster-ip-range=10.68.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --feature-gates=RemoveSelfLink=false \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/ssl/aggregator-proxy.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/aggregator-proxy-key.pem \
  --enable-aggregator-routing=true \
  --v=2
Restart=always
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Reload and restart the service: systemctl daemon-reload && systemctl restart kube-apiserver
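
After restarting, a quick sanity check confirms the flag reached the running process:

```
# ps -ef | grep kube-apiserver | grep -o 'RemoveSelfLink=false'
RemoveSelfLink=false
```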

If problems persist, a full reboot of the affected servers is the safest option.

Source: http://www.coloradmin.cn/o/639256.html (user-contributed content; the views are the author's own).
