Interview Questions 12: Common Pod Creation Errors in K8s

2024/12/27 21:37:26


  • 1. Image Pull Failures
    • 1.1 ErrImagePull (image pull error)
    • 1.2 ImagePullBackOff (image pull back-off)
    • 1.3 Reproducing the failure
    • 1.4 Solution
    • 1.5 Verifying recovery
  • 2. Pending
    • 2.1 Image pull failure
    • 2.2 Insufficient resources (CPU, memory)
      • 2.2.1 Reproducing the failure
      • 2.2.2 Fixing the failure
    • 2.3 Insufficient resources (storage)
      • 2.3.1 Reproducing the failure
      • 2.3.2 Fixing the failure
    • 2.4 Label selectors and affinity
      • 2.4.1 Reproducing the failure
      • 2.4.2 Fixing the failure
  • 3. Supplement: Common Pod states and their causes
    • 3.1 ContainerCreating
    • 3.2 ErrImagePull (image pull error)
    • 3.3 ImagePullBackOff (image pull back-off)
    • 3.4 CrashLoopBackOff (crash loop back-off)
    • 3.5 Running - Ready
    • 3.6 Terminating
    • 3.7 Pending - ImagePullBackOff

In Kubernetes, the Pod is the core resource object, and its stable operation is critical. However, Pods can run into a variety of error states that prevent them from working properly. Below are some common errors and how to resolve them.

1. Image Pull Failures

This failure usually shows up as an ErrImagePull or ImagePullBackOff error.

1.1 ErrImagePull (image pull error)

Kubernetes cannot pull the container image from the image registry.
Possible causes include a wrong image name, a non-existent image, authentication failure, and network problems.

1.2 ImagePullBackOff (image pull back-off)

Similar to ErrImagePull, but after several failed attempts Kubernetes enters a back-off state and waits for a while before retrying.
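When the root cause is registry authentication rather than connectivity, the usual fix is to create a docker-registry Secret and reference it from the Pod via imagePullSecrets. A minimal sketch for illustration only; the secret name regcred, the registry host, and the image are all assumptions, not values from this article:

```yaml
# Assumes a secret was created beforehand, e.g.:
#   kubectl create secret docker-registry regcred \
#     --docker-server=harbor.example.com \
#     --docker-username=<user> --docker-password=<password>
apiVersion: v1
kind: Pod
metadata:
  name: private-image-example
spec:
  imagePullSecrets:
    - name: regcred                                # hypothetical secret name
  containers:
    - name: app
      image: harbor.example.com/library/app:1.0    # hypothetical private image
```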

1.3 Reproducing the failure

root@k8s-master01:~# kubectl get pods
NAME                               READY   STATUS             RESTARTS   AGE
nginx-deployment-d556bf558-9swpd   0/1     ImagePullBackOff   0          46m
nginx-deployment-d556bf558-d2482   0/1     ErrImagePull       0          46m
nginx-deployment-d556bf558-r4v4z   0/1     ErrImagePull       0          46m
root@k8s-master01:~# kubectl describe pods nginx-deployment-d556bf558-r4v4z |tail -10
  Normal   Scheduled  47m                   default-scheduler  Successfully assigned default/nginx-deployment-d556bf558-r4v4z to k8s-node03
  Warning  Failed     46m                   kubelet            Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 162.125.32.13:443: connect: connection refused
  Warning  Failed     46m                   kubelet            Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 69.171.229.11:443: connect: connection refused
  Warning  Failed     45m                   kubelet            Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 157.240.11.40:443: connect: connection refused
  Normal   Pulling    44m (x4 over 47m)     kubelet            Pulling image "nginx:1.14.2"
  Warning  Failed     44m (x4 over 46m)     kubelet            Error: ErrImagePull
  Warning  Failed     44m                   kubelet            Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 108.160.165.48:443: connect: connection refused
  Warning  Failed     44m (x6 over 46m)     kubelet            Error: ImagePullBackOff
  Warning  Failed     12m (x4 over 28m)     kubelet            (combined from similar events): Failed to pull image "nginx:1.14.2": failed to pull and unpack image "docker.io/library/nginx:1.14.2": failed to resolve reference "docker.io/library/nginx:1.14.2": failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/1.14.2": dial tcp 108.160.172.1:443: connect: connection refused
  Normal   BackOff    2m8s (x178 over 46m)  kubelet            Back-off pulling image "nginx:1.14.2"

1.4 Solution

  1. Pull the image.
  2. Tag and push it to the local harbor registry (or load it onto every node).
  3. Change the image in the Deployment to the internal harbor image.
# Pull the nginx image
root@k8s-master01:~/yaml# nerdctl pull nginx:1.14.2
docker.io/library/nginx:1.14.2:                                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:8ca774778e858d3f97d9ec1bec1de879ac5e10096856dc22ed325a3ad944f78a:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:27833a3ba0a545deda33bb01eaf95a14d05d43bf30bce9267d92d17f069fe897:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:0f23e58bd0b7c74311703e20c21c690a6847e62240ed456f8821f4c067d3659b:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 826.6s                                                                   total:  42.6 M (52.8 KiB/s)                                      
root@k8s-master01:~/yaml# nerdctl tag nginx:1.14.2 harbor.panasonic.cn/nginx/nginx:1.14.2
# Push the image to the harbor registry
root@k8s-master01:~/yaml# nerdctl push harbor.panasonic.cn/nginx/nginx:1.14.2
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.list.v2+json, sha256:3d206f335adbabfc33b20c0190ef88cb47d627d21546d48e72e051e5fc27451a) 
index-sha256:3d206f335adbabfc33b20c0190ef88cb47d627d21546d48e72e051e5fc27451a:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:706446e9c6667c0880d5da3f39c09a6c7d2114f5a5d6b74a2fafd24ae30d2078: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:295c7be079025306c4f1d65997fcf7adb411c88f139ad1d34b537164aa060369:   done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 0.6 s                                                                    total:  7.1 Ki (11.8 KiB/s)                                      
root@k8s-master01:~/yaml# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        #image: nginx:1.14.2  # original image commented out
        # use the harbor image instead
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
deployment.apps "nginx-deployment" deleted
root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml
deployment.apps/nginx-deployment created

1.5 Verifying recovery

root@k8s-master01:~/yaml# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-8677887b4f-2h2rd   1/1     Running   0          36s
nginx-deployment-8677887b4f-j7kwj   1/1     Running   0          36s
nginx-deployment-8677887b4f-vfmfq   1/1     Running   0          36s
root@k8s-master01:~/yaml# kubectl describe pods nginx-deployment-8677887b4f-vfmfq |tail
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  49s   default-scheduler  Successfully assigned default/nginx-deployment-8677887b4f-vfmfq to k8s-node01
  Normal  Pulling    49s   kubelet            Pulling image "harbor.intra.com/nginx/nginx:1.14.2"
  Normal  Pulled     46s   kubelet            Successfully pulled image "harbor.intra.com/nginx/nginx:1.14.2" in 3.069s (3.069s including waiting). Image size: 44708492 bytes.
  Normal  Created    46s   kubelet            Created container nginx
  Normal  Started    46s   kubelet            Started container nginx
root@k8s-master01:~/yaml# 

2. Pending

Pending is the most common error state in K8s. Its main causes are:

  1. Image pull failure
  2. Insufficient resources
  3. Scheduling constraints
  4. Missing dependencies

2.1 Image pull failure

This was covered in detail in section 1; it is usually accompanied by ErrImagePull or ImagePullBackOff errors, so it is not repeated here.

2.2 Insufficient resources (CPU, memory)

This failure occurs when a Pod declares resource requests, or is pinned to certain nodes by affinity or a node selector, and no node has enough CPU or memory to satisfy it. The Pod is created but cannot be scheduled, so it stays in the Pending state.

2.2.1 Reproducing the failure

As kubectl top nodes shows, each node has only around 1 GiB of memory in play. If we set a 6 GiB memory request, then after the Pod is submitted the scheduler can never find a node with more than 6 GiB of allocatable memory, so the Pod stays in the Pending state indefinitely.

root@k8s-master01:~/yaml# kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   78m          0%     1203Mi          15%       
k8s-node01     26m          0%     1091Mi          28%       
k8s-node02     25m          0%     739Mi           19%       
k8s-node03     24m          0%     701Mi           18%  
root@k8s-master01:~/yaml# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        #image: nginx:1.14.2
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "6Gi"
            cpu: "1"
          limits:
            memory: "6Gi"
            cpu: "1"

root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml 
deployment.apps/nginx-deployment created
root@k8s-master01:~/yaml# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-554d6d7fd9-62dn7   0/1     Pending   0          6s
nginx-deployment-554d6d7fd9-bcwvt   0/1     Pending   0          6s
nginx-deployment-554d6d7fd9-n9dnp   0/1     Pending   0          6s
root@k8s-master01:~/yaml# kubectl describe pod nginx-deployment-554d6d7fd9-n9dnp | tail -4
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  5m1s  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 Insufficient memory. preemption: 0/4 nodes are available: 1 Preemption is not helpful for scheduling, 3 No preemption victims found for incoming pod.

Note the Insufficient memory warning in the events: the nodes do not have enough memory.

2.2.2 Fixing the failure

After testing the application, lower requests.memory to a value the nodes can actually satisfy, then re-apply the Deployment so the new configuration takes effect.

root@k8s-master01:~/yaml# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        #image: nginx:1.14.2
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "200Mi"
            cpu: "1"
          limits:
            memory: "400Mi"
            cpu: "1"

root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml
deployment.apps/nginx-deployment configured
root@k8s-master01:~/yaml# kubectl get po
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5b696b7fc8-2gkgc   1/1     Running   0          66s
nginx-deployment-5b696b7fc8-8kt6p   1/1     Running   0          64s
nginx-deployment-5b696b7fc8-dm8jt   1/1     Running   0          67s

All Pods are now in the Running state.

2.3 Insufficient resources (storage)

This case is also very common: a ConfigMap, Secret, or PVC is declared in the Pod but was not created before the Pod started. The Pod cannot resolve the reference, so it stays in the Pending state.
You will typically see an error like persistentvolumeclaim "xxxx--xxx" not found.

2.3.1 Reproducing the failure

root@k8s-master01:~/yaml# cat nginx-nfs.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-example
  namespace: default
spec:
  containers:
    - image: harbor.panasonic.cn/nginx/nginx:1.14.2
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/www
          name: pvc-nginx
          readOnly: false
  volumes:
    - name: pvc-nginx
      persistentVolumeClaim:
        claimName: nfs-pvc-default
root@k8s-master01:~/yaml# kubectl apply -f  nginx-nfs.yaml 
pod/nginx-nfs-example created
root@k8s-master01:~/yaml# kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
nginx-nfs-example   0/1     Pending   0          5s
root@k8s-master01:~/yaml# kubectl describe pod nginx-nfs-example |tail -5
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16s   default-scheduler  0/4 nodes are available: persistentvolumeclaim "nfs-pvc-default" not found. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.

2.3.2 Fixing the failure

Create the PV and PVC resources for the Pod to mount:

root@k8s-master01:~/yaml# cat nginx-nfs.yaml 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfs
    server: 192.168.31.104
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs
  namespace: default
spec:
  containers:
    - image: harbor.panasonic.cn/nginx/nginx:1.14.2
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/www
          name: nfs-pvc
          readOnly: false
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs-pvc

After applying the configuration, the failure is gone:

root@k8s-master01:~/yaml# kubectl apply -f nginx-nfs.yaml
persistentvolume/nfs-pv created
persistentvolumeclaim/nfs-pvc created
pod/nginx-nfs created
root@k8s-master01:~/yaml# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                     STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
nfs-pv                                     200Mi      RWX            Retain           Bound         default/nfs-pvc                          <unset>                          3s
pvc-0748bb20-1e4a-4741-845c-0bae59160ef6   10Gi       RWX            Delete           Bound         default/pvc-nfs-dynamic   nfs-csi        <unset>                          32d
pvc-7a0bba72-8d63-4393-861d-c4a409d48933   2Gi        RWO            Delete           Terminating   test/nfs-pvc              nfs-storage    <unset>                          32d
root@k8s-master01:~/yaml# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
nfs-pvc           Bound    nfs-pv                                     200Mi      RWX                           <unset>                 6s
pvc-nfs-dynamic   Bound    pvc-0748bb20-1e4a-4741-845c-0bae59160ef6   10Gi       RWX            nfs-csi        <unset>                 32d
root@k8s-master01:~/yaml# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
nginx-nfs   1/1     Running   0          9s

ConfigMaps and Secrets behave similarly, although a missing ConfigMap or Secret typically leaves the Pod stuck in ContainerCreating or CreateContainerConfigError rather than Pending.
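As a sketch of the ConfigMap case (the names here are made up for illustration, not taken from the cluster above), a Pod that mounts a ConfigMap that does not exist yet stays stuck until the ConfigMap is created:

```yaml
# If the ConfigMap "app-config" does not exist, this Pod stays stuck
# until it is created, e.g.:
#   kubectl create configmap app-config --from-literal=key=value
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cm-example
  namespace: default
spec:
  containers:
    - name: nginx
      image: harbor.intra.com/nginx/nginx:1.14.2
      volumeMounts:
        - mountPath: /etc/app
          name: app-config
  volumes:
    - name: app-config
      configMap:
        name: app-config   # must exist before the kubelet can mount it
```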

2.4 Label selectors and affinity

These failures are usually caused by a label selector or a required (hard) affinity that points at the wrong nodes, or at nodes that do not have enough resources.

2.4.1 Reproducing the failure

The worker nodes are labeled worker=true, but when configuring the Deployment we set the nodeSelector to a wrong value, so the Pods go into the Pending state.

root@k8s-master01:~/yaml# kubectl get nodes --label-columns worker=true
NAME           STATUS   ROLES           AGE   VERSION   WORKER=TRUE
k8s-master01   Ready    control-plane   94d   v1.31.0   
k8s-node01     Ready    <none>          94d   v1.31.0   
k8s-node02     Ready    <none>          94d   v1.31.0   
k8s-node03     Ready    <none>          94d   v1.31.0   
root@k8s-master01:~/yaml# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        worker: node8
      containers:
      - name: nginx
        #image: nginx:1.14.2
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "6Gi"
            cpu: "1"
          limits:
            memory: "6Gi"
            cpu: "1"

root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml
deployment.apps/nginx-deployment created
root@k8s-master01:~/yaml# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-86895b4d79-dm6z4   0/1     Pending   0          84s
nginx-deployment-86895b4d79-tptlw   0/1     Pending   0          84s
nginx-deployment-86895b4d79-v6bfh   0/1     Pending   0          84s
root@k8s-master01:~/yaml# kubectl describe pods nginx-deployment-86895b4d79-v6bfh | tail -5
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  104s  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.

2.4.2 Fixing the failure

There are generally two approaches:

  1. Correct the nodeSelector in the Deployment.
  2. If you cannot restart the application in production, label the target nodes instead, e.g. kubectl label nodes k8s-node01 worker=true.
    Here we fix the nodeSelector in the YAML and redeploy:
root@k8s-master01:~/yaml# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        worker: 'true'
      containers:
      - name: nginx
        #image: nginx:1.14.2
        image: harbor.intra.com/nginx/nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "200Mi"
            cpu: "0.1"
          limits:
            memory: "500Mi"
            cpu: "1"

root@k8s-master01:~/yaml# kubectl apply -f deployment.yaml 
deployment.apps/nginx-deployment configured
root@k8s-master01:~/yaml# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-55cdb49d65-2jkxl   1/1     Running   0          2s
nginx-deployment-55cdb49d65-bdltk   1/1     Running   0          3s
nginx-deployment-55cdb49d65-cb44w   1/1     Running   0          5s

These are the most common cases, and they almost always come down to an unmet dependency. Running kubectl describe pods <POD_NAME> will usually reveal the problem, after which you can troubleshoot based on the error message.

3. Supplement: Common Pod States and Their Causes

Common concrete states and events:

3.1 ContainerCreating

  • Kubernetes is creating the Pod's containers but has not finished yet.
  • Possible causes include waiting for volume mounts, network setup, and so on.

3.2 ErrImagePull (image pull error)

  • Kubernetes cannot pull the container image from the image registry.
  • Possible causes include a wrong image name, a non-existent image, authentication failure, and network problems.

3.3 ImagePullBackOff (image pull back-off)

Similar to ErrImagePull, but after several failed attempts Kubernetes enters a back-off state and waits for a while before retrying.

3.4 CrashLoopBackOff (crash loop back-off)

The container crashes immediately after starting; Kubernetes keeps restarting it, but after repeated failures enters a back-off state.
Possible causes include application bugs, configuration errors, and insufficient resources.
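A minimal way to reproduce this state, as a hypothetical sketch using a busybox container that exits immediately:

```yaml
# The container prints a message and exits non-zero; the kubelet keeps
# restarting it, and after repeated failures the Pod shows CrashLoopBackOff.
apiVersion: v1
kind: Pod
metadata:
  name: crashloop-example
spec:
  restartPolicy: Always
  containers:
    - name: fail-fast
      image: busybox:1.36
      command: ["sh", "-c", "echo boom; exit 1"]
```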

3.5 Running - Ready

All containers in the Pod are running and have passed their health checks; the Pod can receive traffic.

3.6 Terminating

Kubernetes is terminating the Pod, for example because it was deleted or the node is being drained for maintenance.

3.7 Pending - ImagePullBackOff

The Pod is in the Pending state and has entered back-off because the image pull failed.
