Troubleshooting notes for deploying Kubernetes with kubespray (for reference only)


Contents

    • 1. TASK [kubernetes/preinstall : Hosts | create list from inventory]
    • 2. TASK [container-engine/containerd : containerd Create registry directories]
    • 3. TASK [kubernetes/control-plane : kubeadm | Initialize first master]
    • 4. resolv.conf permissions cannot be modified
    • 5. Install packages failed
    • 6. TASK [container-engine/containerd : containerd | Write hosts.toml file]
    • 7. TASK [kubernetes/node : Modprobe nf_conntrack_ipv4]
    • 8. Errors that can be ignored
      • 8.1 TASK [network_plugin/calico : Calico | Get existing calico network pool]
      • 8.2 TASK [kubernetes-apps/ansible : Kubernetes Apps | Register coredns deployment annotation `createdby`]

  • Running ansible-playbook step by step can save a lot of troubleshooting time, for example:
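A minimal sketch of partial re-runs using standard ansible-playbook options; the tag name, task name, and host name below are only illustrative and should be taken from your own inventory and play output:

```bash
# Re-run only tasks carrying a given tag instead of the whole cluster.yml.
ansible-playbook -i inventory/local/inventory.ini --become --become-user=root cluster.yml \
  --tags containerd
# Resume the play from the task that failed last time.
ansible-playbook -i inventory/local/inventory.ini --become --become-user=root cluster.yml \
  --start-at-task="kubeadm | Initialize first master"
# Restrict the run to the host that is failing.
ansible-playbook -i inventory/local/inventory.ini --become --become-user=root cluster.yml \
  --limit kube-control-plan01
```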

1. TASK [kubernetes/preinstall : Hosts | create list from inventory]

The following error was encountered:

TASK [kubernetes/preinstall : Hosts | create list from inventory] ******************************************************************************************************************
fatal: [kube-control-plan01 -> localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'address'\n\nThe error appears to be in '/root/kubespray-offline-2.21.0/outputs/kubespray-2.21.0/roles/kubernetes/preinstall/tasks/0090-etchosts.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Hosts | create list from inventory\n  ^ here\n"}

Fix:
On line 6 of the file below, add `and 'address' in hostvars[item]['ansible_default_ipv4']` to the condition, for example:

$ vi /root/kubespray-offline-2.21.0/outputs/kubespray-2.21.0/roles/kubernetes/preinstall/tasks/0090-etchosts.yml

     1  ---
     2  - name: Hosts | create list from inventory
     3    set_fact:
     4      etc_hosts_inventory_block: |-
     5        {% for item in (groups['k8s_cluster'] + groups['etcd']|default([]) + groups['calico_rr']|default([]))|unique -%}
     6        {% if 'access_ip' in hostvars[item] or 'ip' in hostvars[item] or 'ansible_default_ipv4' in hostvars[item] and 'address' in hostvars[item]['ansible_default_ipv4'] -%}
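The error means some host in the inventory has no `ansible_default_ipv4.address` fact (typically a node without a default route). A quick way to check which host is affected, using the standard setup module and the same inventory path as above:

```bash
# Hosts whose default route is missing return an empty ansible_default_ipv4
# dict here, which is exactly what makes the original expression fail.
ansible -i inventory/local/inventory.ini all -m setup -a "filter=ansible_default_ipv4"
```

Alternatively, setting an explicit `ip=` (or `access_ip=`) for such a host in the inventory also avoids the problem.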

2. TASK [container-engine/containerd : containerd Create registry directories]

Error message:

fatal: [kube-control-plan01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'key'\n\nThe error appears to be in '/root/kubespray-offline-2.21.0/outputs/kubespray-2.21.0/roles/container-engine/containerd/tasks/main.yml': line 114, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: containerd Create registry directories\n  ^ here\n"}

Fix: in this file, change `with_items` to `with_dict`:

$ vi /root/kubespray-offline-2.21.0/outputs/kubespray-2.21.0/roles/container-engine/containerd/tasks/main.yml


   114  - name: containerd | Create registry directories
   115    file:
   116      path: "{{ containerd_cfg_dir }}/certs.d/{{ item.key }}"
   117      state: directory
   118      mode: 0755
   119      recurse: true
   120    with_dict: "{{ containerd_insecure_registries }}"
   121    when: containerd_insecure_registries is defined
   122
   123  - name: containerd | Write hosts.toml file
   124    blockinfile:
   125      path: "{{ containerd_cfg_dir }}/certs.d/{{ item.key }}/hosts.toml"
   126      owner: "root"
   127      mode: 0640
   128      create: true
   129      block: |
   130        server = "{{ item.value }}"
   131        [host."{{ item.value }}"]
   132          capabilities = ["pull", "resolve", "push"]
   133          skip_verify = true
   134    with_dict: "{{ containerd_insecure_registries }}"
   135    when: containerd_insecure_registries is defined
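`with_dict` iterates over a mapping and exposes `item.key` / `item.value`, which is what both task bodies expect. To see how `containerd_insecure_registries` is actually defined for a host (the inventory path and values below are only an example of the expected shape):

```bash
# For with_dict to work the variable must be a mapping, e.g.
#   containerd_insecure_registries:
#     "100.168.110.199:35000": "http://100.168.110.199:35000"
ansible -i inventory/local/inventory.ini kube-control-plan01 \
  -m debug -a "var=containerd_insecure_registries"
```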

3. TASK [kubernetes/control-plane : kubeadm | Initialize first master]

TASK [kubernetes/control-plane : kubeadm | Initialize first master] ****************************************************************************************************************
fatal: [kube-control-plan01]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["timeout", "-k", "300s", "300s", "/usr/local/bin/kubeadm", "init", "--config=/etc/kubernetes/kubeadm-config.yaml", "--ignore-preflight-errors=all", "--skip-phases=addon/coredns", "--upload-certs"], "delta": "0:01:56.126009", "end": "2023-04-10 21:41:40.621046", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2023-04-10 21:39:44.495037", "stderr": "W0410 21:39:44.514437  222884 utils.go:69] The recommended value for \"clusterDNS\" in \"KubeletConfiguration\" is: [10.233.0.10]; the provided value is: [169.254.25.10]\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already 

The containerd configuration caused kubelet to fail to start with: unknown service runtime.v1alpha2.ImageService

Cause:

  • https://github.com/containerd/containerd/issues/4581

Therefore, modify the containerd configuration template and remove the registry mirror related settings. The edited ansible-playbook template for config.toml looks like this:

```bash
cat roles/container-engine/containerd/templates/config.toml.j2
version = 2
root = "{{ containerd_storage_dir }}"
state = "{{ containerd_state_dir }}"
oom_score = {{ containerd_oom_score }}

[grpc]
  max_recv_message_size = {{ containerd_grpc_max_recv_message_size | default(16777216) }}
  max_send_message_size = {{ containerd_grpc_max_send_message_size | default(16777216) }}

[debug]
  level = "{{ containerd_debug_level | default('info') }}"

[metrics]
  address = "{{ containerd_metrics_address | default('') }}"
  grpc_histogram = {{ containerd_metrics_grpc_histogram | default(false) | lower }}

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "{{ pod_infra_image_repo }}:{{ pod_infra_image_tag }}"
    max_container_log_line_size = {{ containerd_max_container_log_line_size }}
    enable_unprivileged_ports = {{ containerd_enable_unprivileged_ports | default(false) | lower }}
    enable_unprivileged_icmp = {{ containerd_enable_unprivileged_icmp | default(false) | lower }}
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "{{ containerd_default_runtime | default('runc') }}"
      snapshotter = "{{ containerd_snapshotter | default('overlayfs') }}"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
{% for runtime in [containerd_runc_runtime] + containerd_additional_runtimes %}
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.{{ runtime.name }}]
          runtime_type = "{{ runtime.type }}"
          runtime_engine = "{{ runtime.engine }}"
          runtime_root = "{{ runtime.root }}"
{% if runtime.base_runtime_spec is defined %}
          base_runtime_spec = "{{ containerd_cfg_dir }}/{{ runtime.base_runtime_spec }}"
{% endif %}

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.{{ runtime.name }}.options]
{% for key, value in runtime.options.items() %}
            {{ key }} = {{ value }}
{% endfor %}
{% endfor %}
{% if kata_containers_enabled %}
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
          runtime_type = "io.containerd.kata-qemu.v2"
{% endif %}
{% if gvisor_enabled %}
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
          runtime_type = "io.containerd.runsc.v1"
{% endif %}
    [plugins."io.containerd.grpc.v1.cri".registry]
{% if containerd_insecure_registries is defined and containerd_insecure_registries|length>0 %}
      config_path = "{{ containerd_cfg_dir }}/certs.d"
{% endif %}

{% if containerd_extra_args is defined %}
{{ containerd_extra_args }}
{% endif %}

```

Then reset and redeploy:

ansible-playbook -i inventory/local/inventory.ini --become --become-user=root reset.yml
ansible-playbook -i inventory/local/inventory.ini --become --become-user=root cluster.yml 
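After redeploying, one way to confirm that containerd's CRI plugin is healthy again on a node (standard containerd / crictl commands, run on any cluster node; the socket path is containerd's default):

```bash
# containerd should be active, and its CRI services reachable by kubelet.
systemctl status containerd --no-pager
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head -n 20
# The rendered config should now delegate registry handling to certs.d.
containerd config dump | grep -n config_path
```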

4. resolv.conf permissions cannot be modified

TASK [kubernetes/preinstall : Add domain/search/nameservers/options to resolv.conf] *********************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 1] Operation not permitted
fatal: [kube-control-plan01]: FAILED! => {"changed": false, "msg": "Unable to make /root/.ansible/tmp/ansible-tmp-1681283032.2323463-3779-108831243697784/tmpCSlNgC into to /etc/resolv.conf, failed final rename from /etc/.ansible_tmpRVelRDresolv.conf: [Errno 1] Operation not permitted"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 1] Operation not permitted

The failure is caused by the immutable (`i`) attribute on /etc/resolv.conf, which prevents Ansible from replacing the file. Check and clear it:

root@c925b21ac60e:/kubespray/outputs/kubespray-2.21.0# ansible -i inventory/local/inventory.ini all -m shell -a "lsattr /etc/resolv.conf"
[WARNING]: Skipping callback plugin 'ara_default', unable to load
kube-node01 | CHANGED | rc=0 >>
----i--------e-- /etc/resolv.conf
kube-control-plan01 | CHANGED | rc=0 >>
-------------e-- /etc/resolv.conf
kube-node02 | CHANGED | rc=0 >>
----i--------e-- /etc/resolv.conf
kube-node03 | CHANGED | rc=0 >>
----i--------e-- /etc/resolv.conf
dbscale-control-plan01 | CHANGED | rc=0 >>
----i--------e-- /etc/resolv.conf
root@c925b21ac60e:/kubespray/outputs/kubespray-2.21.0# ansible -i inventory/local/inventory.ini all -m shell -a "chattr -i /etc/resolv.conf"
[WARNING]: Skipping callback plugin 'ara_default', unable to load
kube-control-plan01 | CHANGED | rc=0 >>

kube-node02 | CHANGED | rc=0 >>

kube-node03 | CHANGED | rc=0 >>

kube-node01 | CHANGED | rc=0 >>

dbscale-control-plan01 | CHANGED | rc=0 >>

root@c925b21ac60e:/kubespray/outputs/kubespray-2.21.0# ansible -i inventory/local/inventory.ini all -m shell -a "lsattr /etc/resolv.conf"
[WARNING]: Skipping callback plugin 'ara_default', unable to load
kube-node03 | CHANGED | rc=0 >>
-------------e-- /etc/resolv.conf
kube-control-plan01 | CHANGED | rc=0 >>
-------------e-- /etc/resolv.conf
kube-node01 | CHANGED | rc=0 >>
-------------e-- /etc/resolv.conf
kube-node02 | CHANGED | rc=0 >>
-------------e-- /etc/resolv.conf
dbscale-control-plan01 | CHANGED | rc=0 >>
-------------e-- /etc/resolv.conf


5. Install packages failed

TASK [kubernetes/preinstall : Install packages requirements] ********************************************************************************************************
fatal: [kube-control-plan01]: FAILED! => {"attempts": 4, "changed": false, "changes": {"installed": ["conntrack", "container-selinux", "socat", "ipvsadm"]}, "msg": "libnetfilter_queue-1.0.2-2.el7_2.x86_64 was supposed to be installed but is not!\nlibselinux-2.5-15.el7.x86_64 was supposed to be installed but is not!\n2:container-selinux-2.119.2-1.911c772.el7_8.noarch was supposed to be installed but is not!\npolicycoreutils-python-2.5-34.el7.x86_64 was supposed to be installed but is not!\nipvsadm-1.27-8.el7.x86_64 was supposed to be installed but is not!\nselinux-policy-targeted-3.13.1-268.el7_9.2.noarch was supposed to be installed but is not!\nconntrack-tools-1.4.4-7.el7.x86_64 was supposed to be installed but is not!\npolicycoreutils-2.5-34.el7.x86_64 was supposed to be installed but is not!\nlibselinux-utils-2.5-15.el7.x86_64 was supposed to be installed but is not!\nlibnetfilter_cttimeout-1.0.0-7.el7.x86_64 was supposed to be installed but is not!\nsetools-libs-3.3.8-4.el7.x86_64 was supposed to be installed but is not!\nlibsemanage-python-2.5-14.el7.x86_64 was supposed to be installed but is not!\nlibsemanage-2.5-14.el7.x86_64 was supposed to be installed but is not!\nlibselinux-python-2.5-15.el7.x86_64 was supposed to be installed but is not!\nlibsepol-2.5-10.el7.x86_64 was supposed to be installed but is not!\nselinux-policy-3.13.1-268.el7_9.2.noarch was supposed to be installed but is not!\nlibnetfilter_cthelper-1.0.0-11.el7.x86_64 was supposed to be installed but is not!\nsocat-1.7.3.2-2.el7.x86_64 was supposed to be installed but is not!\nlibsemanage-python-2.5-11.el7.x86_64 was supposed to be removed but is not!\nlibsemanage-2.5-11.el7.x86_64 was supposed to be removed but is not!\nlibselinux-python-2.5-12.el7.x86_64 was supposed to be removed but is not!\nsetools-libs-3.3.8-2.el7.x86_64 was supposed to be removed but is not!\npolicycoreutils-2.5-22.el7.x86_64 was supposed to be removed but is not!\nlibsepol-2.5-8.1.el7.x86_64 was supposed to be removed but is not!\npolicycoreutils-python-2.5-22.el7.x86_64 was supposed to be removed but is 

Solution:
Use the yum repositories shipped on the OS installation ISO. Mount the ISO and configure the repos as follows:

mkdir /root/kubespray-offline-2.21.0-0/outputs/rpms/cdrom
mount -t iso9660 /dev/cdrom /root/kubespray-offline-2.21.0-0/outputs/rpms/cdrom

Check the mount:

$ df -Th |grep cdrom
/dev/sr0            iso9660    10G   10G     0 100% /mnt/cdrom
$ ls /mnt/cdrom/
AppStream  BaseOS  EFI  images  isolinux  LICENSE  media.repo  TRANS.TBL

Make the mount persistent:

$ vim /etc/fstab
/dev/sr0 /mnt/cdrom iso9660 defaults 0 0

Configure the local yum repos by adding the following to offline.repo:

$ cat /etc/yum.repos.d/offline.repo
[offline-repo]
name=Offline repo
baseurl=http://localhost/rpms/local/
enabled=1
gpgcheck=0

[BaseOS]
name=BaseOS
baseurl=http://localhost/rpms/cdrom/BaseOS
enabled=1
gpgcheck=0

[AppStream]
name=AppStream
baseurl=http://localhost/rpms/cdrom/AppStream
enabled=1
gpgcheck=0

Yum repo configuration on the other cluster nodes:

 cat /etc/yum.repos.d/offline.repo
[offline-repo]
async = 1
baseurl = http://100.168.110.199/rpms/local
enabled = 1
gpgcheck = 0
name = Offline repo for kubespray


[BaseOS]
name=BaseOS
baseurl=http://100.168.110.199/rpms/cdrom/BaseOS
enabled=1
gpgcheck=0

[AppStream]
name=AppStream
baseurl=http://100.168.110.199/rpms/cdrom/AppStream
enabled=1
gpgcheck=0
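Before running yum, it can also help to confirm that the repo paths are actually being served over HTTP (the IP and paths follow the configuration above):

```bash
# Each baseurl should expose repodata/repomd.xml; an HTTP 200 means the nginx
# file server is exporting the mounted ISO and the local rpm repo correctly.
curl -sI http://100.168.110.199/rpms/cdrom/BaseOS/repodata/repomd.xml | head -n 1
curl -sI http://100.168.110.199/rpms/cdrom/AppStream/repodata/repomd.xml | head -n 1
curl -sI http://100.168.110.199/rpms/local/repodata/repomd.xml | head -n 1
```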

Verify:

$ yum clean all
$ yum repolist --verbose
Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, repoclosure, repodiff, repograph, repomanage, reposync, versionlock
YUM version: 4.7.0
cachedir: /var/cache/dnf
Last metadata expiration check: 1:36:06 ago on Wed 19 Apr 2023 01:59:27 PM CST.
Repo-id            : AppStream
Repo-name          : AppStream
Repo-revision      : 8.5
Repo-distro-tags      : [cpe:/o:rocky:rocky:8]:  ,  , 8, L, R, c, i, k, n, o, u, x, y
Repo-updated       : Sun 14 Nov 2021 05:25:39 PM CST
Repo-pkgs          : 6,163
Repo-available-pkgs: 5,279
Repo-size          : 8.0 G
Repo-baseurl       : http://100.168.110.199/rpms/cdrom/AppStream
Repo-expire        : 172,800 second(s) (last: Tue 18 Apr 2023 10:24:35 PM CST)
Repo-filename      : /etc/yum.repos.d/offline.repo

Repo-id            : BaseOS
Repo-name          : BaseOS
Repo-revision      : 8.5
Repo-distro-tags      : [cpe:/o:rocky:rocky:8]:  ,  , 8, L, R, c, i, k, n, o, u, x, y
Repo-updated       : Sun 14 Nov 2021 05:23:13 PM CST
Repo-pkgs          : 1,708
Repo-available-pkgs: 1,706
Repo-size          : 1.2 G
Repo-baseurl       : http://100.168.110.199/rpms/cdrom/BaseOS
Repo-expire        : 172,800 second(s) (last: Tue 18 Apr 2023 10:24:35 PM CST)
Repo-filename      : /etc/yum.repos.d/offline.repo

Repo-id            : offline-repo
Repo-name          : Offline repo for kubespray
Repo-revision      : 1681789393
Repo-updated       : Tue 18 Apr 2023 11:43:14 AM CST
Repo-pkgs          : 265
Repo-available-pkgs: 262
Repo-size          : 182 M
Repo-baseurl       : http://100.168.110.199/rpms/local
Repo-expire        : 172,800 second(s) (last: Wed 19 Apr 2023 01:59:27 PM CST)
Repo-filename      : /etc/yum.repos.d/offline.repo
Total packages: 8,136

6. TASK [container-engine/containerd : containerd | Write hosts.toml file]

TASK [container-engine/containerd : containerd | Write hosts.toml file] ***********************************************************************************************************
fatal: [kube-control-plan01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'key'\n\nThe error appears to be in '/root/kubespray-offline-2.21.0/outputs/kubespray-2.21.0/roles/container-engine/containerd/tasks/main.yml': line 123, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: containerd | Write hosts.toml file\n  ^ here\n"}

Solution:
Change `with_items` to `with_dict` (the same fix as in issue 2):

$ vi roles/container-engine/containerd/tasks/main.yml
- name: containerd | Create registry directories
  file:
    path: "{{ containerd_cfg_dir }}/certs.d/{{ item.key }}"
    state: directory
    mode: 0755
    recurse: true
  with_dict: "{{ containerd_insecure_registries }}"
  when: containerd_insecure_registries is defined

Alternatively, distribute the file manually and comment out the corresponding tasks in the playbook:

$ cat hosts.toml
server = "100.168.110.199:35000"
[host."100.168.110.199:35000"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true

Run:

ansible -i inventory/local/inventory.ini all -m copy -a "src=./hosts.toml dest=/etc/containerd/certs.d/100.168.110.199:35000/"
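To confirm the file landed where containerd expects it and that the registry is usable (the registry address follows the example above; the image name is only illustrative):

```bash
# containerd only honors this config when config_path points at certs.d and the
# per-registry file is named hosts.toml.
ansible -i inventory/local/inventory.ini all -m shell \
  -a "cat /etc/containerd/certs.d/100.168.110.199:35000/hosts.toml"
# Optional smoke test on a single node (example image name):
crictl pull 100.168.110.199:35000/library/pause:3.9
```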

7. TASK [kubernetes/node : Modprobe nf_conntrack_ipv4]

TASK [kubernetes/node : Modprobe nf_conntrack_ipv4] ********************************************************************************************************************************
fatal: [kube-control-plan01]: FAILED! => {"changed": false, "msg": "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/4.18.0-425.13.1.el8_7.x86_64\n", "name": "nf_conntrack_ipv4", "params": "", "rc": 1, "state": "present", "stderr": "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/4.18.0-425.13.1.el8_7.x86_64\n", "stderr_lines": ["modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/4.18.0-425.13.1.el8_7.x86_64"], "stdout": "", "stdout_lines": []}
...ignoring

Run manually across all nodes:

ansible -i inventory/local/inventory.ini all -m shell -a "lsmod | grep conntrack"
ansible -i inventory/local/inventory.ini all -m shell -a "modprobe ip_conntrack"
ansible -i inventory/local/inventory.ini all -m shell -a "modprobe br_netfilter"
ansible -i inventory/local/inventory.ini all -m shell -a "sysctl -p"

Note: nf_conntrack_ipv4 has been renamed to nf_conntrack since Linux kernel 4.18+, so this error can be safely ignored on newer kernels.
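To have the replacement modules loaded on every boot rather than only for the current session, the standard systemd-modules-load mechanism can be used (the file name below is arbitrary):

```bash
# Load nf_conntrack and br_netfilter automatically at boot on all nodes.
ansible -i inventory/local/inventory.ini all -m shell \
  -a "printf 'nf_conntrack\nbr_netfilter\n' > /etc/modules-load.d/k8s-conntrack.conf"
```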

8. Errors that can be ignored

8.1 TASK [network_plugin/calico : Calico | Get existing calico network pool]


TASK [network_plugin/calico : Calico | Get existing calico network pool] ***********************************************************************************************************
fatal: [kube-control-plan01]: FAILED! => {"changed": false, "cmd": ["/usr/local/bin/calicoctl.sh", "get", "ippool", "default-pool", "-o", "json"], "delta": "0:00:00.022302", "end": "2023-04-18 18:22:15.399574", "msg": "non-zero return code", "rc": 1, "start": "2023-04-18 18:22:15.377272", "stderr": "resource does not exist: IPPool(default-pool) with error: ippools.crd.projectcalico.org \"default-pool\" not found", "stderr_lines": ["resource does not exist: IPPool(default-pool) with error: ippools.crd.projectcalico.org \"default-pool\" not found"], "stdout": "null", "stdout_lines": ["null"]}
...ignoring
Tuesday 18 April 2023  06:22:15 -0400 (0:00:00.295)       0:18:10.590 *********

TASK [network_plugin/calico : Calico | Set kubespray calico network pool] **********************************************************************************************************
ok: [kube-control-plan01]
Tuesday 18 April 2023  06:22:15 -0400 (0:00:00.066)       0:18:10.656 *********
Tuesday 18 April 2023  06:22:15 -0400 (0:00:00.069)       0:18:10.726 *********

TASK [network_plugin/calico : Calico | Configure calico network pool] **************************************************************************************************************
ok: [kube-control-plan01]
Tuesday 18 April 2023  06:22:15 -0400 (0:00:00.333)       0:18:11.060 *********
Tuesday 18 April 2023  06:22:15 -0400 (0:00:00.065)       0:18:11.125 *********
Tuesday 18 April 2023  06:22:15 -0400 (0:00:00.072)       0:18:11.197 *********
Tuesday 18 April 2023  06:22:15 -0400 (0:00:00.067)       0:18:11.264 *********
Tuesday 18 April 2023  06:22:16 -0400 (0:00:00.066)       0:18:11.331 *********
Tuesday 18 April 2023  06:22:16 -0400 (0:00:00.018)       0:18:11.350 *********
Tuesday 18 April 2023  06:22:16 -0400 (0:00:00.015)       0:18:11.366 *********
Tuesday 18 April 2023  06:22:16 -0400 (0:00:00.023)       0:18:11.389 *********

TASK [network_plugin/calico : Calico | Get existing BGP Configuration] *************************************************************************************************************
fatal: [kube-control-plan01]: FAILED! => {"changed": false, "cmd": ["/usr/local/bin/calicoctl.sh", "get", "bgpconfig", "default", "-o", "json"], "delta": "0:00:00.024893", "end": "2023-04-18 18:22:16.520670", "msg": "non-zero return code", "rc": 1, "start": "2023-04-18 18:22:16.495777", "stderr": "resource does not exist: BGPConfiguration(default) with error: bgpconfigurations.crd.projectcalico.org \"default\" not found", "stderr_lines": ["resource does not exist: BGPConfiguration(default) with error: bgpconfigurations.crd.projectcalico.org \"default\" not found"], "stdout": "null", "stdout_lines": ["null"]}
...ignoring

8.2 TASK [kubernetes-apps/ansible : Kubernetes Apps | Register coredns deployment annotation `createdby`]

TASK [kubernetes-apps/ansible : Kubernetes Apps | Register coredns deployment annotation `createdby`] ******************************************************************************fatal: [kube-control-plan01]: FAILED! => {"changed": false, "cmd": ["/usr/local/bin/kubectl", "--kubeconfig", "/etc/kubernetes/admin.conf", "get", "deploy", "-n", "kube-system", "coredns", "-o", "jsonpath={ .spec.template.metadata.annotations.createdby }"], "delta": "0:00:00.036241", "end": "2023-04-18 18:23:08.117748", "msg": "non-zero return code", "rc": 1, "start": "2023-04-18 18:23:08.081507", "stderr": "Error from server (NotFound): deployments.apps \"coredns\" not found", "stderr_lines": ["Error from server (NotFound): deployments.apps \"coredns\" not found"], "stdout": "", "stdout_lines": []}
...ignoring
Tuesday 18 April 2023  06:23:07 -0400 (0:00:00.323)       0:19:03.310 *********

TASK [kubernetes-apps/ansible : Kubernetes Apps | Register coredns service annotation `createdby`] *********************************************************************************fatal: [kube-control-plan01]: FAILED! => {"changed": false, "cmd": ["/usr/local/bin/kubectl", "--kubeconfig", "/etc/kubernetes/admin.conf", "get", "svc", "-n", "kube-system", "coredns", "-o", "jsonpath={ .metadata.annotations.createdby }"], "delta": "0:00:00.038043", "end": "2023-04-18 18:23:08.443334", "msg": "non-zero return code", "rc": 1, "start": "2023-04-18 18:23:08.405291", "stderr": "Error from server (NotFound): services \"coredns\" not found", "stderr_lines": ["Error from server (NotFound): services \"coredns\" not found"], "stdout": "", "stdout_lines": []}
...ignoring

References:

  • GitHub: https://github.com/kubernetes-sigs/kubespray
  • Official site: https://kubespray.io/#/
  • A community kubespray study repo: https://github.com/wenwenxiong/book/tree/master/k8s/kubespray
