Hands-On with the Kubernetes Cluster API

  • What is the Kubernetes Cluster API
  • Installing Kind
  • Increasing ulimit and inotify limits
  • Creating a Kind cluster
  • Installing the clusterctl CLI tool
  • Initializing the management cluster
  • Creating a workload cluster
  • Accessing the workload cluster
  • Deploying a CNI solution
  • Installing MetalLB
  • Deploying an nginx example
  • Cleanup
  • (Reference) Contents of capi-quickstart.yaml
  • References

What is the Kubernetes Cluster API

Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.

Cluster API uses Kubernetes-style APIs and patterns to automate cluster lifecycle management for platform operators. The supporting infrastructure (such as virtual machines, networks, load balancers, and VPCs), as well as the Kubernetes cluster itself, is defined the same way operators deploy and manage their application workloads. This enables consistent, repeatable cluster deployments across a wide variety of infrastructure environments.
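
To give a flavor of this declarative style, a cluster is itself just a Kubernetes object. Below is a minimal, illustrative sketch of a Cluster resource (names and references here are hypothetical; the full, generated version used in this walkthrough appears near the end of the article):

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: my-cluster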

Installing Kind

Visit https://github.com/kubernetes-sigs/kind/releases to find the latest Kind release and the node images that match it.

cd /tmp
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.18.0/kind-linux-amd64
chmod +x ./kind
sudo mv kind /usr/local/bin/
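
You can verify the installation before moving on; the exact version string will vary with the release you downloaded:

kind version

--- output (example)
kind v0.18.0 go1.20.2 linux/amd64
---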

Increasing ulimit and inotify limits

Note for Linux users: when using Docker (CAPD), you may need to increase the ulimit and inotify limits.

refer: https://cluster-api.sigs.k8s.io/user/troubleshooting.html#cluster-api-with-docker----too-many-open-files

refer: https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files

sudo sysctl fs.inotify.max_user_watches=1048576
sudo sysctl fs.inotify.max_user_instances=8192
sudo vi /etc/sysctl.conf
--- add
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 8192
---
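
If you prefer a non-interactive alternative to editing /etc/sysctl.conf by hand, the same settings can be persisted via a drop-in file (a sketch; the file name 99-inotify.conf is arbitrary):

cat <<EOF | sudo tee /etc/sysctl.d/99-inotify.conf
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 8192
EOF
sudo sysctl --system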

Creating a Kind cluster

Run the following command to create a kind cluster configuration file that allows the Docker provider to access the Docker daemon on the host:

cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
kind create cluster --config kind-cluster-with-extramounts.yaml
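
Confirm the cluster is up before proceeding (kind names the cluster "kind" by default, so the kubectl context is kind-kind):

kind get clusters
kubectl cluster-info --context kind-kind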

Installing the clusterctl CLI tool

Install the clusterctl binary on Linux using curl:

cd /tmp
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.4.1/clusterctl-linux-amd64 -o clusterctl
sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
clusterctl version

--- output
clusterctl version: &version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"39d87e91080088327c738c43f39e46a7f557d03b", GitTreeState:"clean", BuildDate:"2023-04-04T17:31:43Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
---

Initializing the management cluster

Now that we have installed clusterctl and satisfied all the prerequisites, let's turn the Kind cluster into a management cluster by running clusterctl init.

The command accepts a list of providers to install as input. On first execution, clusterctl init automatically adds the cluster-api core provider to the list and, if none are specified, also adds the kubeadm bootstrap and kubeadm control-plane providers.

This example uses the Docker provider.

# Enable the experimental Cluster topology feature.
export CLUSTER_TOPOLOGY=true

# Initialize the management cluster
clusterctl init --infrastructure docker

Example output:

Fetching providers
Installing cert-manager Version="v1.11.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.4.1" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.4.1" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.4.1" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v1.4.1" TargetNamespace="capd-system"

Your management cluster has been initialized successfully!

You can now create your first workload cluster by running the following:

  clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
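
Before moving on, you can confirm that the provider controllers are actually running; pod names and ages will differ in your environment:

kubectl get pods -A | grep -E 'capi-|capd-|cert-manager'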

Creating a workload cluster

This example again uses Docker; the configuration the Docker provider requires is:

# The list of service CIDR, default ["10.128.0.0/12"]
export SERVICE_CIDR=["10.96.0.0/12"]

# The list of pod CIDR, default ["192.168.0.0/16"]
export POD_CIDR=["192.168.0.0/16"]

# The service domain, default "cluster.local"
export SERVICE_DOMAIN="k8scloud.local"

# It is also possible but not recommended to disable the per-default enabled Pod Security Standard
export POD_SECURITY_STANDARD_ENABLED="false"
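
If you are unsure which variables a flavor expects, clusterctl can list them without generating anything (a quick sanity check, assuming the development flavor shipped with the Docker provider as used below):

clusterctl generate cluster capi-quickstart --flavor development --list-variables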

Generate the cluster configuration; we will name the cluster capi-quickstart:

clusterctl generate cluster capi-quickstart --flavor development \
  --kubernetes-version v1.27.0 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > capi-quickstart.yaml

Run the following command to apply the cluster manifest:

kubectl apply -f capi-quickstart.yaml

Example output:

clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
cluster.cluster.x-k8s.io/capi-quickstart created

Accessing the workload cluster

The cluster will now begin provisioning; you can check its status with:

kubectl get cluster

--- output
NAME              PHASE          AGE     VERSION
capi-quickstart   Provisioned   10s   v1.27.0
---

You can also get an "at a glance" view of the cluster and its resources by running:

clusterctl describe cluster capi-quickstart

--- output
NAME                                                            READY  SEVERITY  REASON                           SINCE  MESSAGE

Cluster/capi-quickstart                                         False  Warning   ScalingUp                        26s    Scaling up control plane to 1 replicas (actual 0)
├─ClusterInfrastructure - DockerCluster/capi-quickstart-cq94k   True                                              22s

├─ControlPlane - KubeadmControlPlane/capi-quickstart-d86xg      False  Warning   ScalingUp                        26s    Scaling up control plane to 1 replicas (actual 0)
│ └─Machine/capi-quickstart-d86xg-mg7kq                         False  Info      WaitingForBootstrapData          20s    1 of 2 completed

└─Workers

  └─MachineDeployment/capi-quickstart-md-0-dbhbw                False  Warning   WaitingForAvailableMachines      26s    Minimum availability requires 1 replicas, current 0 available
    └─Machine/capi-quickstart-md-0-dbhbw-85cd6498cx9bbwg-f5nqx  False  Info      WaitingForControlPlaneAvailable  22s    0 of 2 completed
---

To verify that the first control plane is up, run:

kubectl get kubeadmcontrolplane

--- output
NAME                    CLUSTER           INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
capi-quickstart-d86xg   capi-quickstart                                        1                  1         1             55s   v1.27.0
---
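You can also watch the individual Machine objects directly; both the control-plane machine and the worker should eventually reach the Running phase:

kubectl get machines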

The control plane will not become Ready until we install a CNI in the next step.

Once the first control plane node is up and running, we can retrieve the workload cluster's kubeconfig:

clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
# Point the kubeconfig to the exposed port of the load balancer, rather than the inaccessible container IP.
sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-quickstart-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-quickstart.kubeconfig
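
The workload cluster's API server is now reachable, but its nodes will report NotReady until a CNI is deployed in the next step. A quick check:

kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes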

Deploying a CNI solution

Here we use Calico as an example:

kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml

Example output:

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
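
To wait for Calico to finish rolling out before checking the nodes, you can watch the DaemonSet (a convenience step, not strictly required):

kubectl --kubeconfig=./capi-quickstart.kubeconfig -n kube-system rollout status daemonset/calico-node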

After a short while, our nodes should be running and in the Ready state. Let's check with kubectl get nodes:

kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes

Example output:

NAME                                               STATUS   ROLES           AGE     VERSION
capi-quickstart-d86xg-mg7kq                        Ready    control-plane   4m59s   v1.27.0
capi-quickstart-md-0-dbhbw-85cd6498cx9bbwg-f5nqx   Ready    <none>          4m9s    v1.27.0

Installing MetalLB

# Export KUBECONFIG so the following kubectl commands target the workload cluster
export KUBECONFIG=./capi-quickstart.kubeconfig
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
kubectl create secret generic \
    -n metallb-system memberlist \
    --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml
docker inspect kind | jq '.[0].IPAM.Config[0].Subnet' -r

--- output
172.18.0.0/16
---
export KUBECONFIG=./capi-quickstart.kubeconfig
kubectl apply -f - <<-EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 172.18.0.230-172.18.0.250
EOF
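
Note that MetalLB v0.13 and later replaced this ConfigMap with CRDs. If you install a newer MetalLB, the equivalent configuration is roughly the following sketch (same address range; apply it with kubectl apply -f - as above):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: my-ip-space
  namespace: metallb-system
spec:
  addresses:
  - 172.18.0.230-172.18.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: my-ip-space
  namespace: metallb-system
spec:
  ipAddressPools:
  - my-ip-space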

Deploying an nginx example

export KUBECONFIG=./capi-quickstart.kubeconfig
kubectl create deployment nginx-deploy --image=nginx
kubectl expose deployment nginx-deploy --name=nginx-lb --port=80 --target-port=80 --type=LoadBalancer
export KUBECONFIG=./capi-quickstart.kubeconfig
MetalLB_IP=$(kubectl get svc nginx-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s $MetalLB_IP | grep "Thank you"

--- output
<p><em>Thank you for using nginx.</em></p>
---
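
You can also inspect the Service directly; EXTERNAL-IP should show an address from the MetalLB pool rather than <pending>:

kubectl get svc nginx-lb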

Cleanup

Delete the workload cluster:

# Switch back to the management cluster's kubeconfig
export KUBECONFIG=~/.kube/config
kubectl delete cluster capi-quickstart

Important: to ensure the infrastructure is cleaned up properly, you must always delete the Cluster object. Deleting the entire cluster template with kubectl delete -f capi-quickstart.yaml can leave behind pending resources that have to be cleaned up manually.

Delete the management cluster:

kind delete cluster

Done!

(Reference) Contents of capi-quickstart.yaml

$ cat capi-quickstart.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: quick-start
  namespace: default
spec:
  controlPlane:
    machineInfrastructure:
      ref:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: quick-start-control-plane
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate
      name: quick-start-control-plane
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerClusterTemplate
      name: quick-start-cluster
  patches:
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/imageRepository
        valueFrom:
          variable: imageRepository
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: Sets the imageRepository used for the KubeadmControlPlane.
    enabledIf: '{{ ne .imageRepository "" }}'
    name: imageRepository
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/initConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver
        value: cgroupfs
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/joinConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver
        value: cgroupfs
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: |
      Sets the cgroupDriver to cgroupfs if a Kubernetes version < v1.24 is referenced.
      This is required because kind and the node images do not support the default
      systemd cgroupDriver for kubernetes < v1.24.
    enabledIf: '{{ semverCompare "<= v1.23" .builtin.controlPlane.version }}'
    name: cgroupDriver-controlPlane
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/joinConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver
        value: cgroupfs
      selector:
        apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
        kind: KubeadmConfigTemplate
        matchResources:
          machineDeploymentClass:
            names:
            - default-worker
    description: |
      Sets the cgroupDriver to cgroupfs if a Kubernetes version < v1.24 is referenced.
      This is required because kind and the node images do not support the default
      systemd cgroupDriver for kubernetes < v1.24.
    enabledIf: '{{ semverCompare "<= v1.23" .builtin.machineDeployment.version }}'
    name: cgroupDriver-machineDeployment
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/etcd
        valueFrom:
          template: |
            local:
              imageTag: {{ .etcdImageTag }}
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: Sets tag to use for the etcd image in the KubeadmControlPlane.
    name: etcdImageTag
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/dns
        valueFrom:
          template: |
            imageTag: {{ .coreDNSImageTag }}
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: Sets tag to use for the CoreDNS image in the KubeadmControlPlane.
    name: coreDNSImageTag
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/customImage
        valueFrom:
          template: |
            kindest/node:{{ .builtin.machineDeployment.version | replace "+" "_" }}
      selector:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        matchResources:
          machineDeploymentClass:
            names:
            - default-worker
    - jsonPatches:
      - op: add
        path: /spec/template/spec/customImage
        valueFrom:
          template: |
            kindest/node:{{ .builtin.controlPlane.version | replace "+" "_" }}
      selector:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        matchResources:
          controlPlane: true
    description: Sets the container image that is used for running dockerMachines
      for the controlPlane and default-worker machineDeployments.
    name: customImage
  - definitions:
    - jsonPatches:
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/apiServer/extraArgs
        value:
          admission-control-config-file: /etc/kubernetes/kube-apiserver-admission-pss.yaml
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/apiServer/extraVolumes
        value:
        - hostPath: /etc/kubernetes/kube-apiserver-admission-pss.yaml
          mountPath: /etc/kubernetes/kube-apiserver-admission-pss.yaml
          name: admission-pss
          pathType: File
          readOnly: true
      - op: add
        path: /spec/template/spec/kubeadmConfigSpec/files
        valueFrom:
          template: |
            - content: |
                apiVersion: apiserver.config.k8s.io/v1
                kind: AdmissionConfiguration
                plugins:
                - name: PodSecurity
                  configuration:
                    apiVersion: pod-security.admission.config.k8s.io/v1{{ if semverCompare "< v1.25" .builtin.controlPlane.version }}beta1{{ end }}
                    kind: PodSecurityConfiguration
                    defaults:
                      enforce: "{{ .podSecurityStandard.enforce }}"
                      enforce-version: "latest"
                      audit: "{{ .podSecurityStandard.audit }}"
                      audit-version: "latest"
                      warn: "{{ .podSecurityStandard.warn }}"
                      warn-version: "latest"
                    exemptions:
                      usernames: []
                      runtimeClasses: []
                      namespaces: [kube-system]
              path: /etc/kubernetes/kube-apiserver-admission-pss.yaml
      selector:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlaneTemplate
        matchResources:
          controlPlane: true
    description: Adds an admission configuration for PodSecurity to the kube-apiserver.
    enabledIf: '{{ .podSecurityStandard.enabled }}'
    name: podSecurityStandard
  variables:
  - name: imageRepository
    required: true
    schema:
      openAPIV3Schema:
        default: ""
        description: imageRepository sets the container registry to pull images from.
          If empty, nothing will be set and the default of kubeadm will be used.
        example: registry.k8s.io
        type: string
  - name: etcdImageTag
    required: true
    schema:
      openAPIV3Schema:
        default: ""
        description: etcdImageTag sets the tag for the etcd image.
        example: 3.5.3-0
        type: string
  - name: coreDNSImageTag
    required: true
    schema:
      openAPIV3Schema:
        default: ""
        description: coreDNSImageTag sets the tag for the coreDNS image.
        example: v1.8.5
        type: string
  - name: podSecurityStandard
    required: false
    schema:
      openAPIV3Schema:
        properties:
          audit:
            default: restricted
            description: audit sets the level for the audit PodSecurityConfiguration
              mode. One of privileged, baseline, restricted.
            type: string
          enabled:
            default: true
            description: enabled enables the patches to enable Pod Security Standard
              via AdmissionConfiguration.
            type: boolean
          enforce:
            default: baseline
            description: enforce sets the level for the enforce PodSecurityConfiguration
              mode. One of privileged, baseline, restricted.
            type: string
          warn:
            default: restricted
            description: warn sets the level for the warn PodSecurityConfiguration
              mode. One of privileged, baseline, restricted.
            type: string
        type: object
  workers:
    machineDeployments:
    - class: default-worker
      template:
        bootstrap:
          ref:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            name: quick-start-default-worker-bootstraptemplate
        infrastructure:
          ref:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: DockerMachineTemplate
            name: quick-start-default-worker-machinetemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerClusterTemplate
metadata:
  name: quick-start-cluster
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
metadata:
  name: quick-start-control-plane
  namespace: default
spec:
  template:
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            certSANs:
            - localhost
            - 127.0.0.1
            - 0.0.0.0
            - host.docker.internal
          controllerManager:
            extraArgs:
              enable-hostpath-provisioner: "true"
        initConfiguration:
          nodeRegistration:
            criSocket: unix:///var/run/containerd/containerd.sock
            kubeletExtraArgs:
              eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
        joinConfiguration:
          nodeRegistration:
            criSocket: unix:///var/run/containerd/containerd.sock
            kubeletExtraArgs:
              eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: quick-start-control-plane
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: /var/run/docker.sock
        hostPath: /var/run/docker.sock
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: quick-start-default-worker-machinetemplate
  namespace: default
spec:
  template:
    spec:
      extraMounts:
      - containerPath: /var/run/docker.sock
        hostPath: /var/run/docker.sock
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: quick-start-default-worker-bootstraptemplate
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          criSocket: unix:///var/run/containerd/containerd.sock
          kubeletExtraArgs:
            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 10.128.0.0/12
  topology:
    class: quick-start
    controlPlane:
      metadata: {}
      replicas: 1
    variables:
    - name: imageRepository
      value: ""
    - name: etcdImageTag
      value: ""
    - name: coreDNSImageTag
      value: ""
    - name: podSecurityStandard
      value:
        audit: restricted
        enabled: true
        enforce: baseline
        warn: restricted
    version: v1.27.0
    workers:
      machineDeployments:
      - class: default-worker
        name: md-0
        replicas: 1

References

refer: https://cluster-api.sigs.k8s.io/user/quick-start.html

refer: https://oracle.github.io/cluster-api-provider-oci/gs/gs.html

“五一”预订量创5年新高!如何制定营销活动引爆门店客流?

作为疫情3年经济复苏后&#xff0c;2023年的第一个长假&#xff0c;今年“五一”的消费需求将全面集中释放&#xff0c;带动全国各地线下实体生意全面复苏。 根据官方平台发布的数据显示&#xff0c;今年五一的旅游订单比疫情前的2019年增长了200%&#xff0c;是近5年预订量最多…