Kubernetes Cluster Deployment (Part 2)


Installing Flannel

        The master node shows NotReady because no network plugin has been installed yet, so the nodes and the master cannot communicate properly over the Pod network. The most popular Kubernetes network plugins are Flannel, Calico, Canal and Weave; Flannel is used here.

Flannel download link (Baidu Netdisk): https://pan.baidu.com/s/1fLJKhBtcONFCln6_u3xMPQ?pwd=1mf2
Extraction code: 1mf2

On all hosts:

Upload flannel_v0.12.0-amd64.tar to every host (upload kube-flannel.yml to the master only), then load the image on each host:

 

[root@k8s-master ~]# docker load < flannel_v0.12.0-amd64.tar

256a7af3acb1: Loading layer  5.844MB/5.844MB
d572e5d9d39b: Loading layer  10.37MB/10.37MB
57c10be5852f: Loading layer  2.249MB/2.249MB
7412f8eefb77: Loading layer  35.26MB/35.26MB
05116c9ff7bf: Loading layer   5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64

Upload and extract cni-plugins-linux-amd64-v0.8.6.tgz on all hosts:

[root@k8s-master ~]# tar xf cni-plugins-linux-amd64-v0.8.6.tgz 
[root@k8s-master ~]# cp flannel /opt/cni/bin/
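
As an optional sanity check (assuming the archive was extracted into the current working directory), confirm that the flannel CNI binary is now in place on each host; the kubernetes-cni package installed alongside kubeadm normally provides the other standard plugins (bridge, portmap, host-local) in the same directory:

[root@k8s-master ~]# ls -l /opt/cni/bin/flannel
[root@k8s-master ~]# ls /opt/cni/bin/          # bridge, flannel, host-local, portmap, ...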

On the master host:

[root@k8s-master ~]#  kubectl apply -f kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

[root@k8s-master ~]#  kubectl get nodes

NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   19h   v1.20.0
k8s-node1    Ready    <none>                 19h   v1.20.0
k8s-node2    Ready    <none>                 19h   v1.20.0

[root@k8s-master ~]# kubectl get pods -n kube-system

NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bq88g             1/1     Running   0          19h
coredns-7f89b7bc75-nmmsf             1/1     Running   0          19h
etcd-k8s-master                      1/1     Running   0          19h
kube-apiserver-k8s-master            1/1     Running   0          19h
kube-controller-manager-k8s-master   1/1     Running   0          19h
kube-flannel-ds-amd64-6vh6f          1/1     Running   4          36m
kube-flannel-ds-amd64-jxgjb          1/1     Running   0          36m
kube-flannel-ds-amd64-mxrz5          1/1     Running   4          36m
kube-proxy-94cl4                     1/1     Running   0          19h
kube-proxy-ncwg6                     1/1     Running   0          19h
kube-proxy-nvnr6                     1/1     Running   0          19h
kube-scheduler-k8s-master            1/1     Running   0          19h

All nodes are now in the Ready state, and the flannel DaemonSet Pods are Running.
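
Optionally, the overlay network can be verified on any node. Assuming the default VXLAN backend and the 10.244.0.0/16 Pod network used by kube-flannel.yml (the Pod IPs shown later are all in that range), each node should have a flannel.1 interface and a per-node subnet recorded by flanneld:

[root@k8s-master ~]# ip -d link show flannel.1
[root@k8s-master ~]# cat /run/flannel/subnet.env   # e.g. FLANNEL_SUBNET=10.244.0.1/24 on the master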

3. Installing the Dashboard UI

3.1 Deploying the Dashboard

Dashboard GitHub repository: https://github.com/kubernetes/dashboard

The repository provides a sample deployment manifest that can be downloaded and applied directly:

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml

--2023-08-10 15:46:20--  https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
Unable to establish SSL connection.

        By default this manifest creates a dedicated namespace named kubernetes-dashboard and deploys the Dashboard into it. The Dashboard images come from the official Docker Hub repositories, so the image references do not need to be changed. (If the direct download fails, as above, use the Baidu Netdisk link in section 3.2.)

3.2 Opening the Access Port

recommended.yaml download link (Baidu Netdisk): https://pan.baidu.com/s/1OYeC84YNY2_HZ1699n8Eug?pwd=7fhp
Extraction code: 7fhp

        By default the Dashboard is not exposed outside the cluster. To keep things simple, expose it with a NodePort by modifying the Service definition in recommended.yaml. First pull the images, then edit the file:

[root@k8s-master ~]# docker pull kubernetesui/dashboard:v2.0.0

v2.0.0: Pulling from kubernetesui/dashboard
2a43ce254c7f: Pull complete 
Digest: sha256:06868692fb9a7f2ede1a06de1b7b32afabc40ec739c1181d83b5ed3eb147ec6e
Status: Downloaded newer image for kubernetesui/dashboard:v2.0.0
docker.io/kubernetesui/dashboard:v2.0.0

[root@k8s-master ~]# docker pull kubernetesui/metrics-scraper:v1.0.4

v1.0.4: Pulling from kubernetesui/metrics-scraper
07008dc53a3e: Pull complete 
1f8ea7f93b39: Pull complete 
04d0e0aeff30: Pull complete 
Digest: sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf
Status: Downloaded newer image for kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/metrics-scraper:v1.0.4

[root@k8s-master ~]# vim recommended.yaml
 
 30 ---
 31 
 32 kind: Service
 33 apiVersion: v1
 34 metadata:
 35   labels:
 36     k8s-app: kubernetes-dashboard
 37   name: kubernetes-dashboard
 38   namespace: kubernetes-dashboard
 39 spec:
 40   type: NodePort
 41   ports:
 42     - port: 443
 43       targetPort: 8443
 44       nodePort: 32443
 45   selector:
 46     k8s-app: kubernetes-dashboard
 47 
 48 ---

192           image: kubernetesui/dashboard:v2.0.0
276           image: kubernetesui/metrics-scraper:v1.0.4

3.3 Permission Configuration

Grant the Dashboard super-administrator privileges by pointing the ClusterRoleBinding in recommended.yaml at the built-in cluster-admin ClusterRole:

[root@k8s-master ~]# vim recommended.yaml 

155 ---
156 
157 apiVersion: rbac.authorization.k8s.io/v1
158 kind: ClusterRoleBinding
159 metadata:
160   name: kubernetes-dashboard
161 roleRef:
162   apiGroup: rbac.authorization.k8s.io
163   kind: ClusterRole
164   name: cluster-admin
165 subjects:
166   - kind: ServiceAccount
167     name: kubernetes-dashboard
168     namespace: kubernetes-dashboard
169 
170 ---

[root@k8s-master ~]# kubectl apply -f recommended.yaml

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

[root@k8s-master ~]#  kubectl get pods -n kubernetes-dashboard

NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-fp8qx   1/1     Running   0          67s
kubernetes-dashboard-74d688b6bc-qkwrn        1/1     Running   0          67s

[root@k8s-master ~]#  kubectl get pods -A  -o wide

NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
kube-system            coredns-7f89b7bc75-bq88g                     1/1     Running   0          20h   10.244.0.2      k8s-master   <none>           <none>
kube-system            coredns-7f89b7bc75-nmmsf                     1/1     Running   0          20h   10.244.0.3      k8s-master   <none>           <none>
kube-system            etcd-k8s-master                              1/1     Running   0          20h   192.168.2.116   k8s-master   <none>           <none>
kube-system            kube-apiserver-k8s-master                    1/1     Running   0          20h   192.168.2.116   k8s-master   <none>           <none>
kube-system            kube-controller-manager-k8s-master           1/1     Running   0          20h   192.168.2.116   k8s-master   <none>           <none>
kube-system            kube-flannel-ds-amd64-6vh6f                  1/1     Running   4          58m   192.168.2.117   k8s-node1    <none>           <none>
kube-system            kube-flannel-ds-amd64-jxgjb                  1/1     Running   0          58m   192.168.2.116   k8s-master   <none>           <none>
kube-system            kube-flannel-ds-amd64-mxrz5                  1/1     Running   4          58m   192.168.2.118   k8s-node2    <none>           <none>
kube-system            kube-proxy-94cl4                             1/1     Running   0          19h   192.168.2.117   k8s-node1    <none>           <none>
kube-system            kube-proxy-ncwg6                             1/1     Running   0          20h   192.168.2.116   k8s-master   <none>           <none>
kube-system            kube-proxy-nvnr6                             1/1     Running   0          19h   192.168.2.118   k8s-node2    <none>           <none>
kube-system            kube-scheduler-k8s-master                    1/1     Running   0          20h   192.168.2.116   k8s-master   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-fp8qx   1/1     Running   0          77s   10.244.2.2      k8s-node2    <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-74d688b6bc-qkwrn        1/1     Running   0          77s   10.244.1.2      k8s-node1    <none>           <none>
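
Optionally confirm that the Service was created with the NodePort configured in section 3.2 (the ClusterIP shown will differ in every cluster):

[root@k8s-master ~]# kubectl get svc -n kubernetes-dashboard kubernetes-dashboard
# expected: TYPE NodePort, PORT(S) 443:32443/TCP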

3.4 Access Token Configuration

Test access with Google Chrome: https://192.168.2.116:32443

         The login page asks for either a kubeconfig file or a token. When the Dashboard was installed, a ServiceAccount named kubernetes-dashboard was created by default and a token was generated for it; that token can be retrieved with the following command:

[root@k8s-master ~]# kubectl describe secret -n kubernetes-dashboard $(kubectl get secret -n kubernetes-dashboard |grep kubernetes-dashboard-token | awk '{print $1}') |grep token | awk '{print $2}'

kubernetes-dashboard-token-tnv6k
kubernetes.io/service-account-token
eyJhbGciOiJSUzI1NiIsImtpZCI6InB0RVQwT3JvN3AxTXNhZnFmRTduNng3NGgzcEgtN0dpU1Y3QkNnVmlET2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi10bnY2ayIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjEzOTEzOWZlLTZiNDktNDQ3ZC05MjdmLWIwOGI1MjVmZDVjMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.PQ_0QNs19ei_WlGK73NWJwG1YamGEQz3Gez6BlwAX4uiH_ZrVVS0G4ReWRQCLfqwFQmWlaNhuFIggfJAldxDzQ9aUtKrxbGcgdl4PQ28sT-0FpZSAS8qYtZOcM_Dv96Gp39Erm-PcPMk3vZD_TBn118l9-eAkEfgohUMBVD4cUMgL1olwgZHnlFbtrvpf_gwzF-oeGuRp8VeMXOYtB7J5_D8RSaoLpGQ-rfRq9mG0TqJ5laF_8Sb6WFsnjSb8i0_WXZ1pnXEh3_UiEgd78KlSH6wNoi4jB5B1k61KZlQ0ol9owDUDAAS-qznG8ER41KvMGs6CWt5queOjyQCK3XwQA
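
An equivalent one-liner that prints only the decoded token; this assumes the ServiceAccount's first secret is its auto-generated token secret, which is the case on v1.20:

[root@k8s-master ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d

Paste the token on the login page to enter the Dashboard.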


4. Installing metrics-server

4.1 Download the Image on the Node Hosts

Heapster has been replaced by metrics-server.

[root@k8s-node1 ~]#  docker pull bluersw/metrics-server-amd64:v0.3.6

v0.3.6: Pulling from bluersw/metrics-server-amd64
e8d8785a314f: Pull complete 
b2f4b24bed0d: Pull complete 
Digest: sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b
Status: Downloaded newer image for bluersw/metrics-server-amd64:v0.3.6
docker.io/bluersw/metrics-server-amd64:v0.3.6

[root@k8s-node1 ~]# docker tag bluersw/metrics-server-amd64:v0.3.6 k8s.gcr.io/metrics-server-amd64:v0.3.6 
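
The re-tag makes the k8s.gcr.io/metrics-server-amd64:v0.3.6 reference in components.yaml resolve from the local image cache, since k8s.gcr.io itself is often unreachable. Repeat the pull and the tag on every worker node (the single metrics-server replica may be scheduled onto any of them), then verify that both tags point at the same image ID:

[root@k8s-node1 ~]# docker images | grep metrics-server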

4.2 Modify the kube-apiserver Startup Parameters

Add the following option to the kube-apiserver manifest. Because this is a static Pod manifest watched by kubelet, the apiserver restarts automatically after the change is saved.

[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml	
 44     - --enable-aggregator-routing=true
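
A quick, optional check that the flag took effect after the apiserver Pod was recreated (the Pod name is taken from the earlier kube-system listing):

[root@k8s-master ~]# kubectl -n kube-system get pod kube-apiserver-k8s-master -o yaml | grep enable-aggregator-routing
#           - --enable-aggregator-routing=true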

4.3 Deploy on the Master

[root@k8s-master ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

--2023-08-10 16:43:31--  https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
Unable to establish SSL connection.

The direct download fails here as well, so fetch components.yaml from the link below and modify the installation manifest. The important additions are the --kubelet-preferred-address-types=InternalIP and --kubelet-insecure-tls arguments (skipping kubelet certificate verification is acceptable in a lab environment):

components.yaml download link (Baidu Netdisk): https://pan.baidu.com/s/1REsXCZIoy4rak3mFQDOgBg?pwd=m6xz
Extraction code: m6xz

[root@k8s-master ~]# vim components.yaml

 73   template:
 74     metadata:
 75       name: metrics-server
 76       labels:
 77         k8s-app: metrics-server
 78     spec:
 79       serviceAccountName: metrics-server
 80       volumes:
 81       # mount in tmp so we can safely use from-scratch images and/or read-only containers
 82       - name: tmp-dir
 83         emptyDir: {}
 84       containers:
 85       - name: metrics-server
 86         image: k8s.gcr.io/metrics-server-amd64:v0.3.6
 87         imagePullPolicy: IfNotPresent
 88         args:
 89           - --cert-dir=/tmp
 90           - --secure-port=4443
 91           - --kubelet-preferred-address-types=InternalIP
 92           - --kubelet-insecure-tls

[root@k8s-master ~]#  kubectl create -f components.yaml

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
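
Before querying metrics, you can optionally confirm that the aggregated API registered above has become available (it can take a minute or two to turn True):

[root@k8s-master ~]# kubectl get apiservice v1beta1.metrics.k8s.io
# AVAILABLE should change to True once metrics-server is serving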

Wait 1-2 minutes, then check the results:

[root@k8s-master ~]# kubectl top nodes

NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   102m         5%     1432Mi          39%       
k8s-node1    34m          1%     978Mi           26%       
k8s-node2    32m          1%     900Mi           24%       
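
Pod-level metrics are available the same way, for example for the kube-system namespace:

[root@k8s-master ~]# kubectl top pods -n kube-system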

Back in the Dashboard UI, CPU and memory usage are now displayed.

At this point the Kubernetes cluster installation is complete.

5. Application Deployment Test

Next, deploy a simple Nginx web service; the container listens on port 80 at runtime.

Kubernetes supports two ways of creating resources:

(1) Create resources directly with the kubectl command, specifying the attributes as command-line arguments. This is simple and intuitive and well suited to quick tests and experiments.

[root@k8s-master ~]# kubectl run web --image=nginx

Note: in recent kubectl versions (including the v1.20 used here), kubectl run creates a single Pod and the --replicas flag has been removed, so the command above creates one Pod named web; use a Deployment when multiple replicas are needed.

(2) Create resources from a configuration file with kubectl create/apply. The file describes the application and the desired state it should reach.

nginx-deployment.yaml download link (Baidu Netdisk): https://pan.baidu.com/s/1ZRnXausXPPr84k1sbvp2ww?pwd=4e8o
Extraction code: 4e8o

[root@k8s-master ~]# kubectl create -f nginx-deployment.yaml

Creating the Nginx service from a Deployment YAML

The Deployment manifest:

[root@k8s-master ~]# cat nginx-deployment.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector: 
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
        ports:
        - containerPort: 80

Explanation of the Deployment manifest:

apiVersion: apps/v1	#apiVersion is the version of the configuration format
kind: Deployment	#kind is the type of resource to create, here a Deployment
metadata:			#metadata is the resource's metadata; name is a required item
  name: nginx-deployment
  labels:
    app: nginx
spec:				#the spec section describes this Deployment
  replicas: 3		#replicas sets the number of Pod replicas, default 1
  selector: 
    matchLabels:
      app: nginx
  template:			#template defines the Pod template, the key part of the configuration
    metadata:		#metadata here is the Pod's metadata; at least one label must be defined, with any key and value
      labels:
        app: nginx
    spec:			#this spec describes the Pod, defining the attributes of each container; name and image are required
      containers:
      - name: nginx
        image: nginx:1.19.4
        ports:
        - containerPort: 80
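
A manifest like this does not have to be written entirely by hand. Assuming kubectl v1.19 or later (where kubectl create deployment supports --replicas), a similar starting point can be generated and then edited; the output file name below is only an example, and the generated labels default to the Deployment name rather than app: nginx:

[root@k8s-master ~]# kubectl create deployment nginx-deployment --image=nginx:1.19.4 --replicas=3 --dry-run=client -o yaml > nginx-deployment-generated.yaml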

Create the nginx-deployment application:

[root@k8s-master ~]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

Check the Deployment:

[root@k8s-master ~]# kubectl get deployment

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           5m57s

Check the Pod status. While a Pod is being created it pulls the image; once the pull finishes the status becomes Running:

[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-7947dc656-ccts9   1/1     Running   0          6m32s
nginx-deployment-7947dc656-gcwvq   1/1     Running   0          6m32s
nginx-deployment-7947dc656-kvv6t   1/1     Running   0          6m32s
web                                1/1     Running   0          73m

Inspect the detailed status of a specific Pod:

[root@k8s-master ~]# kubectl describe pod nginx-deployment-7947dc656-ccts9
Name:         nginx-deployment-7947dc656-ccts9
Namespace:    default
Priority:     0
Node:         k8s-node2/192.168.2.118
Start Time:   Thu, 10 Aug 2023 18:02:46 +0800
Labels:       app=nginx
              pod-template-hash=7947dc656
Annotations:  <none>
Status:       Running
IP:           10.244.2.4
IPs:
  IP:           10.244.2.4
Controlled By:  ReplicaSet/nginx-deployment-7947dc656
Containers:
  nginx:
    Container ID:   docker://7043ecf3c52d43fe4bc11f27b0c614aadd10534c565bd320be87fd8feac90647
    Image:          nginx:1.19.4
    Image ID:       docker-pullable://nginx@sha256:c3a1592d2b6d275bef4087573355827b200b00ffc2d9849890a4f3aa2128c4ae
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 Aug 2023 18:07:49 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7c9z (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-s7c9z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s7c9z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  7m43s  default-scheduler  Successfully assigned default/nginx-deployment-7947dc656-ccts9 to k8s-node2
  Normal  Pulling    7m42s  kubelet            Pulling image "nginx:1.19.4"
  Normal  Pulled     2m41s  kubelet            Successfully pulled image "nginx:1.19.4" in 5m1.644567511s
  Normal  Created    2m41s  kubelet            Created container nginx
  Normal  Started    2m40s  kubelet            Started container nginx

[root@k8s-master ~]# kubectl get pod -o wide        # created successfully, all Pods Running

NAME                               READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
nginx-deployment-7947dc656-ccts9   1/1     Running   0          9m3s   10.244.2.4   k8s-node2   <none>           <none>
nginx-deployment-7947dc656-gcwvq   1/1     Running   0          9m3s   10.244.1.4   k8s-node1   <none>           <none>
nginx-deployment-7947dc656-kvv6t   1/1     Running   0          9m3s   10.244.2.5   k8s-node2   <none>           <none>
web                                1/1     Running   0          75m    10.244.2.3   k8s-node2   <none>           <none>

        To delete these resources, run kubectl delete deployment nginx-deployment or kubectl delete -f nginx-deployment.yaml.

Test Pod access:

[root@k8s-master ~]# curl --head http://10.244.2.3

HTTP/1.1 200 OK
Server: nginx/1.21.5
Date: Thu, 10 Aug 2023 10:12:48 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 Dec 2021 15:28:38 GMT
Connection: keep-alive
ETag: "61cb2d26-267"
Accept-Ranges: bytes


[root@k8s-master ~]# elinks --dump http://10.244.2.3
                               Welcome to nginx!

   If you see this page, the nginx web server is successfully installed and
   working. Further configuration is required.

   For online documentation and support please refer to [1]nginx.org.
   Commercial support is available at [2]nginx.com.

   Thank you for using nginx.

References

   Visible links
   1. http://nginx.org/
   2. http://nginx.com/
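
Note that 10.244.2.3 is the standalone web Pod created earlier with kubectl run, which pulled the latest nginx image (hence Server: nginx/1.21.5). To test one of the Deployment's Pods instead, curl an IP from the wide listing above; it should report nginx/1.19.4, matching the image in the manifest:

[root@k8s-master ~]# curl --head http://10.244.2.4
# Server: nginx/1.19.4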

 
