ceph-deploy bclinux aarch64 ceph 14.2.10


Use ssh-copy-id so the deploy host can log in to the other three hosts without a password.
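Copying the key to the three other nodes can be wrapped in a small loop; a minimal sketch (node names assumed from this guide) that prints the commands for review; remove the echo to actually run them:

```shell
# Dry run: print one ssh-copy-id command per remote node.
# Drop the echo to actually push ~/.ssh/id_rsa.pub to each host.
for h in ceph-1 ceph-2 ceph-3; do
  echo ssh-copy-id "root@$h"
done
```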

The disk layout of all machines is as follows; vdb is planned as the Ceph data disk.

Install ceph-deploy

 pip install ceph-deploy

Passwordless login + set the hostnames

hostnamectl --static set-hostname ceph-0    # repeat on each node with its own name, ceph-0 through ceph-3

Configure /etc/hosts

172.17.163.105 ceph-0
172.17.112.206 ceph-1
172.17.227.100 ceph-2
172.17.67.157 ceph-3
 scp /etc/hosts root@ceph-1:/etc/hosts
 scp /etc/hosts root@ceph-2:/etc/hosts
 scp /etc/hosts root@ceph-3:/etc/hosts

First install all the packages on the local (deploy) host

 rpm -ivhU liboath/liboath-*

Filter out the packages that cannot be installed (debug, k8s, mgr-rook, mgr-ssh):

 find rpmbuild/RPMS/ | grep \\.rpm | grep -v debug | grep -v k8s | grep -v mgr\-rook | grep -v mgr\-ssh | xargs -i echo "{} \\"
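The filter above can be written a little more tightly with one extended regex; this sketch is equivalent (the 2>/dev/null only silences the error when rpmbuild/RPMS/ is absent):

```shell
# Keep only .rpm files, drop debug/k8s/mgr-rook/mgr-ssh builds, and print
# each path with a trailing backslash, ready to paste into a yum command line.
find rpmbuild/RPMS/ -name '*.rpm' 2>/dev/null \
  | grep -vE 'debug|k8s|mgr-rook|mgr-ssh' \
  | xargs -I{} echo '{} \'
```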

Add the ceph user and install the packages

useradd ceph
yum install -y rpmbuild/RPMS/noarch/ceph-mgr-dashboard-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-cloud-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-grafana-dashboards-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-local-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/aarch64/librgw-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-base-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-mirror-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-test-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rados-objclass-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mds-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mgr-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd1-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libradospp-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-nbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mon-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-radosgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rados-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librgw2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-common-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-compat-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-osd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rados-14.2.10-0.oe1.bclinux.aarch64.rpm

Successful installation log (the screenshot is from a second reinstall)

Note: if the ceph user was not created in advance, the install raises a warning.

Distribute the compiled RPM packages plus the el8 liboath packages

On ceph-0:

rsync -avr -P liboath root@ceph-1:~/
rsync -avr -P liboath root@ceph-2:~/
rsync -avr -P liboath root@ceph-3:~/
rsync -avr -P rpmbuild/RPMS root@ceph-1:~/rpmbuild/
rsync -avr -P rpmbuild/RPMS root@ceph-2:~/rpmbuild/
rsync -avr -P rpmbuild/RPMS root@ceph-3:~/rpmbuild/
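The six transfers above collapse into one loop; a dry-run sketch (host names assumed from this guide) that prints each command; remove the echo to perform the copies:

```shell
# Dry run: print the two rsync commands for each remote node.
# Drop the echo to actually transfer liboath and the built RPMs.
for h in ceph-1 ceph-2 ceph-3; do
  echo rsync -avr -P liboath "root@$h:~/"
  echo rsync -avr -P rpmbuild/RPMS "root@$h:~/rpmbuild/"
done
```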

Log in to ceph-1, ceph-2, and ceph-3 in turn and run the following (this could later be wrapped in Ansible):

cd ~ 
rpm -ivhU liboath/liboath-*
useradd ceph
yum install -y rpmbuild/RPMS/noarch/ceph-mgr-dashboard-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-cloud-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-grafana-dashboards-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-local-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/aarch64/librgw-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-base-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-mirror-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-test-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rados-objclass-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mds-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mgr-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd1-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libradospp-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-nbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mon-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-radosgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rados-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librgw2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-common-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-compat-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-osd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rados-14.2.10-0.oe1.bclinux.aarch64.rpm


Installation log

Time synchronization (ntpd)

On ceph-0:

yum install ntpdate

Edit /etc/ntp.conf:

driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noepeer noquery
restrict source nomodify notrap noepeer noquery
restrict 127.0.0.1 
restrict ::1
tos maxclock 5
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
server asia.pool.ntp.org

Start ntpd:

systemctl enable --now ntpd

On ceph-1, ceph-2, and ceph-3:

 yum install -y ntpdate 

Edit /etc/ntp.conf:

driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noepeer noquery
restrict source nomodify notrap noepeer noquery
restrict 127.0.0.1 
restrict ::1
tos maxclock 5
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
server ceph-0

Start ntpd:

systemctl enable --now ntpd

Deploy the mon nodes and generate ceph.conf

cd /etc/ceph/
ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3

This errors out as follows:

[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: bclinux 21.10U3 LTS 21.10U3

Fix: edit /usr/lib/python2.7/site-packages/ceph_deploy/calamari.py and add bclinux to the supported distros. The diff:

[root@ceph-0 ceph_deploy]# diff calamari.py calamari.py.bak -Npr
*** calamari.py	2023-11-10 16:56:49.445013228 +0800
--- calamari.py.bak	2023-11-10 16:56:14.793013228 +0800
*************** def distro_is_supported(distro_name):
*** 13,19 ****
      An enforcer of supported distros that can differ from what ceph-deploy
      supports.
      """
!     supported = ['centos', 'redhat', 'ubuntu', 'debian', 'bclinux']
      if distro_name in supported:
          return True
      return False
--- 13,19 ----
      An enforcer of supported distros that can differ from what ceph-deploy
      supports.
      """
!     supported = ['centos', 'redhat', 'ubuntu', 'debian']
      if distro_name in supported:
          return True
      return False

Also edit /usr/lib/python2.7/site-packages/ceph_deploy/hosts/__init__.py, mapping bclinux to the centos handler:

[root@ceph-0 ceph_deploy]# diff -Npr hosts/__init__.py hosts/__init__.py.bak 
*** hosts/__init__.py	2023-11-10 17:06:27.585013228 +0800
--- hosts/__init__.py.bak	2023-11-10 17:05:48.697013228 +0800
*************** def _get_distro(distro, fallback=None, u
*** 101,107 ****
          'fedora': fedora,
          'suse': suse,
          'virtuozzo': centos,
-         'bclinux': centos,
          'arch': arch
          }
  
--- 101,106 ----
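For reference, the same two one-line edits can be applied non-interactively with sed; a sketch assuming the stock paths from the diffs above and GNU sed (each edit is skipped if the file is absent):

```shell
# Add 'bclinux' to the supported distro list in calamari.py.
f=/usr/lib/python2.7/site-packages/ceph_deploy/calamari.py
if [ -f "$f" ]; then
  sed -i "s/'debian']/'debian', 'bclinux']/" "$f"
fi

# Map 'bclinux' to the centos handler in hosts/__init__.py.
f=/usr/lib/python2.7/site-packages/ceph_deploy/hosts/__init__.py
if [ -f "$f" ]; then
  sed -i "s/'virtuozzo': centos,/&\n        'bclinux': centos,/" "$f"
fi
```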

ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3

ceph.conf is generated successfully this time; the log:

[root@ceph-0 ceph]# ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffb246c280>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xffffb236e9d0>
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /usr/sbin/ip link show
[ceph-0][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-0][DEBUG ] IP addresses found: [u'172.18.0.1', u'172.17.163.105']
[ceph_deploy.new][DEBUG ] Resolving host ceph-0
[ceph_deploy.new][DEBUG ] Monitor ceph-0 at 172.17.163.105
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-1][DEBUG ] connected to host: ceph-0 
[ceph-1][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-1
dhclient(1613) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
dhclient(1613) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /usr/sbin/ip link show
[ceph-1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-1][DEBUG ] IP addresses found: [u'172.17.112.206']
[ceph_deploy.new][DEBUG ] Resolving host ceph-1
[ceph_deploy.new][DEBUG ] Monitor ceph-1 at 172.17.112.206
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-2][DEBUG ] connected to host: ceph-0 
[ceph-2][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-2
dhclient(1626) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
dhclient(1626) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /usr/sbin/ip link show
[ceph-2][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-2][DEBUG ] IP addresses found: [u'172.17.227.100']
[ceph_deploy.new][DEBUG ] Resolving host ceph-2
[ceph_deploy.new][DEBUG ] Monitor ceph-2 at 172.17.227.100
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-3][DEBUG ] connected to host: ceph-0 
[ceph-3][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-3
dhclient(1634) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
dhclient(1634) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /usr/sbin/ip link show
[ceph-3][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-3][DEBUG ] IP addresses found: [u'172.17.67.157']
[ceph_deploy.new][DEBUG ] Resolving host ceph-3
[ceph_deploy.new][DEBUG ] Monitor ceph-3 at 172.17.67.157
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.17.163.105', '172.17.112.206', '172.17.227.100', '172.17.67.157']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

The auto-generated /etc/ceph/ceph.conf:

[global]
fsid = ff72b496-d036-4f1b-b2ad-55358f3c16cb
mon_initial_members = ceph-0, ceph-1, ceph-2, ceph-3
mon_host = 172.17.163.105,172.17.112.206,172.17.227.100,172.17.67.157
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Since this is only a test environment and no second network is attached, the public_network parameter is left unset for now.
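For reference, if a dedicated public network were attached, the addition to ceph.conf would be a single key under [global]; the CIDR below is hypothetical (it merely covers the node addresses above):

```
[global]
public_network = 172.17.0.0/16
```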

Deploy the monitors

cd /etc/ceph
ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3

This run fails:

[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][ERROR ] Traceback (most recent call last):
[ceph-3][ERROR ]   File "/bin/ceph", line 151, in <module>
[ceph-3][ERROR ]     from ceph_daemon import admin_socket, DaemonWatcher, Termsize
[ceph-3][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_daemon.py", line 24, in <module>
[ceph-3][ERROR ]     from prettytable import PrettyTable, HEADER
[ceph-3][ERROR ] ImportError: No module named prettytable
[ceph-3][WARNIN] monitor: mon.ceph-3, might not be running yet
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][ERROR ] Traceback (most recent call last):
[ceph-3][ERROR ]   File "/bin/ceph", line 151, in <module>
[ceph-3][ERROR ]     from ceph_daemon import admin_socket, DaemonWatcher, Termsize
[ceph-3][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_daemon.py", line 24, in <module>
[ceph-3][ERROR ]     from prettytable import PrettyTable, HEADER
[ceph-3][ERROR ] ImportError: No module named prettytable

[ceph-3][WARNIN] monitor ceph-3 does not exist in monmap
[ceph-3][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
[ceph-3][WARNIN] monitors may not be able to form quorum

Fix for "No module named prettytable": install the module.

pip install PrettyTable

Download the offline package on the deploy node,

then distribute it to the other three machines for offline installation:

rsync -avr -P preetytable-python27 root@ceph-1:~/
rsync -avr -P preetytable-python27 root@ceph-2:~/
rsync -avr -P preetytable-python27 root@ceph-3:~/
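The per-node install step is not shown above; a hedged sketch, assuming the directory name as rsynced and pip's offline options (run on each of the three nodes):

```shell
# Install prettytable from the local package directory, without hitting
# the network. Skipped when the directory is absent.
pkgdir="$HOME/preetytable-python27"
if [ -d "$pkgdir" ]; then
  pip install --no-index --find-links="$pkgdir" prettytable
fi
```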

Deploy the monitors again

cd /etc/ceph
ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3

Log of the run:

[root@ceph-0 ~]# cd /etc/ceph
[root@ceph-0 ceph]# ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff992fb320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0xffff993967d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-0 ...
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-0][DEBUG ] determining if provided host has same hostname in remote
[ceph-0][DEBUG ] get remote short hostname
[ceph-0][DEBUG ] deploying mon to ceph-0
[ceph-0][DEBUG ] get remote short hostname
[ceph-0][DEBUG ] remote hostname: ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][DEBUG ] create the mon path if it does not exist
[ceph-0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-0/done
[ceph-0][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-0][DEBUG ] create the init path if it does not exist
[ceph-0][INFO  ] Running command: systemctl enable ceph.target
[ceph-0][INFO  ] Running command: systemctl enable ceph-mon@ceph-0
[ceph-0][INFO  ] Running command: systemctl start ceph-mon@ceph-0
[ceph-0][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-0.asok mon_status
[ceph-0][DEBUG ] ********************************************************************************
[ceph-0][DEBUG ] status for monitor: mon.ceph-0
[ceph-0][DEBUG ] {
[ceph-0][DEBUG ]   "election_epoch": 8, 
[ceph-0][DEBUG ]   "extra_probe_peers": [
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }
[ceph-0][DEBUG ]   ], 
[ceph-0][DEBUG ]   "feature_map": {
[ceph-0][DEBUG ]     "mon": [
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-0][DEBUG ]         "num": 1, 
[ceph-0][DEBUG ]         "release": "luminous"
[ceph-0][DEBUG ]       }
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "features": {
[ceph-0][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-0][DEBUG ]     "quorum_mon": [
[ceph-0][DEBUG ]       "kraken", 
[ceph-0][DEBUG ]       "luminous", 
[ceph-0][DEBUG ]       "mimic", 
[ceph-0][DEBUG ]       "osdmap-prune", 
[ceph-0][DEBUG ]       "nautilus"
[ceph-0][DEBUG ]     ], 
[ceph-0][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-0][DEBUG ]     "required_mon": [
[ceph-0][DEBUG ]       "kraken", 
[ceph-0][DEBUG ]       "luminous", 
[ceph-0][DEBUG ]       "mimic", 
[ceph-0][DEBUG ]       "osdmap-prune", 
[ceph-0][DEBUG ]       "nautilus"
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "monmap": {
[ceph-0][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-0][DEBUG ]     "epoch": 1, 
[ceph-0][DEBUG ]     "features": {
[ceph-0][DEBUG ]       "optional": [], 
[ceph-0][DEBUG ]       "persistent": [
[ceph-0][DEBUG ]         "kraken", 
[ceph-0][DEBUG ]         "luminous", 
[ceph-0][DEBUG ]         "mimic", 
[ceph-0][DEBUG ]         "osdmap-prune", 
[ceph-0][DEBUG ]         "nautilus"
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-0][DEBUG ]     "min_mon_release": 14, 
[ceph-0][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-0][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-0][DEBUG ]     "mons": [
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-3", 
[ceph-0][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 0
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-1", 
[ceph-0][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 1
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-0", 
[ceph-0][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 2
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-2", 
[ceph-0][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 3
[ceph-0][DEBUG ]       }
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "name": "ceph-0", 
[ceph-0][DEBUG ]   "outside_quorum": [], 
[ceph-0][DEBUG ]   "quorum": [
[ceph-0][DEBUG ]     0, 
[ceph-0][DEBUG ]     1, 
[ceph-0][DEBUG ]     2, 
[ceph-0][DEBUG ]     3
[ceph-0][DEBUG ]   ], 
[ceph-0][DEBUG ]   "quorum_age": 917, 
[ceph-0][DEBUG ]   "rank": 2, 
[ceph-0][DEBUG ]   "state": "peon", 
[ceph-0][DEBUG ]   "sync_provider": []
[ceph-0][DEBUG ] }
[ceph-0][DEBUG ] ********************************************************************************
[ceph-0][INFO  ] monitor: mon.ceph-0 is running
[ceph-0][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-1 ...
dhclient(1613) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
dhclient(1613) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-1][DEBUG ] determining if provided host has same hostname in remote
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] deploying mon to ceph-1
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] remote hostname: ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][DEBUG ] create the mon path if it does not exist
[ceph-1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-1/done
[ceph-1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-1][DEBUG ] create the init path if it does not exist
[ceph-1][INFO  ] Running command: systemctl enable ceph.target
[ceph-1][INFO  ] Running command: systemctl enable ceph-mon@ceph-1
[ceph-1][INFO  ] Running command: systemctl start ceph-mon@ceph-1
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph-1][DEBUG ] ********************************************************************************
[ceph-1][DEBUG ] status for monitor: mon.ceph-1
[ceph-1][DEBUG ] {
[ceph-1][DEBUG ]   "election_epoch": 8, 
[ceph-1][DEBUG ]   "extra_probe_peers": [
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }
[ceph-1][DEBUG ]   ], 
[ceph-1][DEBUG ]   "feature_map": {
[ceph-1][DEBUG ]     "mon": [
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-1][DEBUG ]         "num": 1, 
[ceph-1][DEBUG ]         "release": "luminous"
[ceph-1][DEBUG ]       }
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "features": {
[ceph-1][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-1][DEBUG ]     "quorum_mon": [
[ceph-1][DEBUG ]       "kraken", 
[ceph-1][DEBUG ]       "luminous", 
[ceph-1][DEBUG ]       "mimic", 
[ceph-1][DEBUG ]       "osdmap-prune", 
[ceph-1][DEBUG ]       "nautilus"
[ceph-1][DEBUG ]     ], 
[ceph-1][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-1][DEBUG ]     "required_mon": [
[ceph-1][DEBUG ]       "kraken", 
[ceph-1][DEBUG ]       "luminous", 
[ceph-1][DEBUG ]       "mimic", 
[ceph-1][DEBUG ]       "osdmap-prune", 
[ceph-1][DEBUG ]       "nautilus"
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "monmap": {
[ceph-1][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-1][DEBUG ]     "epoch": 1, 
[ceph-1][DEBUG ]     "features": {
[ceph-1][DEBUG ]       "optional": [], 
[ceph-1][DEBUG ]       "persistent": [
[ceph-1][DEBUG ]         "kraken", 
[ceph-1][DEBUG ]         "luminous", 
[ceph-1][DEBUG ]         "mimic", 
[ceph-1][DEBUG ]         "osdmap-prune", 
[ceph-1][DEBUG ]         "nautilus"
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-1][DEBUG ]     "min_mon_release": 14, 
[ceph-1][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-1][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-1][DEBUG ]     "mons": [
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-3", 
[ceph-1][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 0
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-1", 
[ceph-1][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 1
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-0", 
[ceph-1][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 2
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-2", 
[ceph-1][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 3
[ceph-1][DEBUG ]       }
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "name": "ceph-1", 
[ceph-1][DEBUG ]   "outside_quorum": [], 
[ceph-1][DEBUG ]   "quorum": [
[ceph-1][DEBUG ]     0, 
[ceph-1][DEBUG ]     1, 
[ceph-1][DEBUG ]     2, 
[ceph-1][DEBUG ]     3
[ceph-1][DEBUG ]   ], 
[ceph-1][DEBUG ]   "quorum_age": 921, 
[ceph-1][DEBUG ]   "rank": 1, 
[ceph-1][DEBUG ]   "state": "peon", 
[ceph-1][DEBUG ]   "sync_provider": []
[ceph-1][DEBUG ] }
[ceph-1][DEBUG ] ********************************************************************************
[ceph-1][INFO  ] monitor: mon.ceph-1 is running
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-2 ...
dhclient(1626) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issues

exiting.
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-2][DEBUG ] determining if provided host has same hostname in remote
[ceph-2][DEBUG ] get remote short hostname
[ceph-2][DEBUG ] deploying mon to ceph-2
[ceph-2][DEBUG ] get remote short hostname
[ceph-2][DEBUG ] remote hostname: ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][DEBUG ] create the mon path if it does not exist
[ceph-2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-2/done
[ceph-2][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-2][DEBUG ] create the init path if it does not exist
[ceph-2][INFO  ] Running command: systemctl enable ceph.target
[ceph-2][INFO  ] Running command: systemctl enable ceph-mon@ceph-2
[ceph-2][INFO  ] Running command: systemctl start ceph-mon@ceph-2
[ceph-2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-2.asok mon_status
[ceph-2][DEBUG ] ********************************************************************************
[ceph-2][DEBUG ] status for monitor: mon.ceph-2
[ceph-2][DEBUG ] {
[ceph-2][DEBUG ]   "election_epoch": 8, 
[ceph-2][DEBUG ]   "extra_probe_peers": [
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }
[ceph-2][DEBUG ]   ], 
[ceph-2][DEBUG ]   "feature_map": {
[ceph-2][DEBUG ]     "mon": [
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-2][DEBUG ]         "num": 1, 
[ceph-2][DEBUG ]         "release": "luminous"
[ceph-2][DEBUG ]       }
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "features": {
[ceph-2][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-2][DEBUG ]     "quorum_mon": [
[ceph-2][DEBUG ]       "kraken", 
[ceph-2][DEBUG ]       "luminous", 
[ceph-2][DEBUG ]       "mimic", 
[ceph-2][DEBUG ]       "osdmap-prune", 
[ceph-2][DEBUG ]       "nautilus"
[ceph-2][DEBUG ]     ], 
[ceph-2][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-2][DEBUG ]     "required_mon": [
[ceph-2][DEBUG ]       "kraken", 
[ceph-2][DEBUG ]       "luminous", 
[ceph-2][DEBUG ]       "mimic", 
[ceph-2][DEBUG ]       "osdmap-prune", 
[ceph-2][DEBUG ]       "nautilus"
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "monmap": {
[ceph-2][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-2][DEBUG ]     "epoch": 1, 
[ceph-2][DEBUG ]     "features": {
[ceph-2][DEBUG ]       "optional": [], 
[ceph-2][DEBUG ]       "persistent": [
[ceph-2][DEBUG ]         "kraken", 
[ceph-2][DEBUG ]         "luminous", 
[ceph-2][DEBUG ]         "mimic", 
[ceph-2][DEBUG ]         "osdmap-prune", 
[ceph-2][DEBUG ]         "nautilus"
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-2][DEBUG ]     "min_mon_release": 14, 
[ceph-2][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-2][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-2][DEBUG ]     "mons": [
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-3", 
[ceph-2][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 0
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-1", 
[ceph-2][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 1
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-0", 
[ceph-2][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 2
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-2", 
[ceph-2][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 3
[ceph-2][DEBUG ]       }
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "name": "ceph-2", 
[ceph-2][DEBUG ]   "outside_quorum": [], 
[ceph-2][DEBUG ]   "quorum": [
[ceph-2][DEBUG ]     0, 
[ceph-2][DEBUG ]     1, 
[ceph-2][DEBUG ]     2, 
[ceph-2][DEBUG ]     3
[ceph-2][DEBUG ]   ], 
[ceph-2][DEBUG ]   "quorum_age": 926, 
[ceph-2][DEBUG ]   "rank": 3, 
[ceph-2][DEBUG ]   "state": "peon", 
[ceph-2][DEBUG ]   "sync_provider": []
[ceph-2][DEBUG ] }
[ceph-2][DEBUG ] ********************************************************************************
[ceph-2][INFO  ] monitor: mon.ceph-2 is running
[ceph-2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-3 ...
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-3][DEBUG ] determining if provided host has same hostname in remote
[ceph-3][DEBUG ] get remote short hostname
[ceph-3][DEBUG ] deploying mon to ceph-3
[ceph-3][DEBUG ] get remote short hostname
[ceph-3][DEBUG ] remote hostname: ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][DEBUG ] create the mon path if it does not exist
[ceph-3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-3/done
[ceph-3][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-3][DEBUG ] create the init path if it does not exist
[ceph-3][INFO  ] Running command: systemctl enable ceph.target
[ceph-3][INFO  ] Running command: systemctl enable ceph-mon@ceph-3
[ceph-3][INFO  ] Running command: systemctl start ceph-mon@ceph-3
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][DEBUG ] ********************************************************************************
[ceph-3][DEBUG ] status for monitor: mon.ceph-3
[ceph-3][DEBUG ] {
[ceph-3][DEBUG ]   "election_epoch": 8, 
[ceph-3][DEBUG ]   "extra_probe_peers": [
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }
[ceph-3][DEBUG ]   ], 
[ceph-3][DEBUG ]   "feature_map": {
[ceph-3][DEBUG ]     "mon": [
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-3][DEBUG ]         "num": 1, 
[ceph-3][DEBUG ]         "release": "luminous"
[ceph-3][DEBUG ]       }
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "features": {
[ceph-3][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-3][DEBUG ]     "quorum_mon": [
[ceph-3][DEBUG ]       "kraken", 
[ceph-3][DEBUG ]       "luminous", 
[ceph-3][DEBUG ]       "mimic", 
[ceph-3][DEBUG ]       "osdmap-prune", 
[ceph-3][DEBUG ]       "nautilus"
[ceph-3][DEBUG ]     ], 
[ceph-3][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-3][DEBUG ]     "required_mon": [
[ceph-3][DEBUG ]       "kraken", 
[ceph-3][DEBUG ]       "luminous", 
[ceph-3][DEBUG ]       "mimic", 
[ceph-3][DEBUG ]       "osdmap-prune", 
[ceph-3][DEBUG ]       "nautilus"
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "monmap": {
[ceph-3][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-3][DEBUG ]     "epoch": 1, 
[ceph-3][DEBUG ]     "features": {
[ceph-3][DEBUG ]       "optional": [], 
[ceph-3][DEBUG ]       "persistent": [
[ceph-3][DEBUG ]         "kraken", 
[ceph-3][DEBUG ]         "luminous", 
[ceph-3][DEBUG ]         "mimic", 
[ceph-3][DEBUG ]         "osdmap-prune", 
[ceph-3][DEBUG ]         "nautilus"
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-3][DEBUG ]     "min_mon_release": 14, 
[ceph-3][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-3][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-3][DEBUG ]     "mons": [
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-3", 
[ceph-3][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 0
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-1", 
[ceph-3][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 1
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-0", 
[ceph-3][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 2
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-2", 
[ceph-3][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 3
[ceph-3][DEBUG ]       }
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "name": "ceph-3", 
[ceph-3][DEBUG ]   "outside_quorum": [], 
[ceph-3][DEBUG ]   "quorum": [
[ceph-3][DEBUG ]     0, 
[ceph-3][DEBUG ]     1, 
[ceph-3][DEBUG ]     2, 
[ceph-3][DEBUG ]     3
[ceph-3][DEBUG ]   ], 
[ceph-3][DEBUG ]   "quorum_age": 931, 
[ceph-3][DEBUG ]   "rank": 0, 
[ceph-3][DEBUG ]   "state": "leader", 
[ceph-3][DEBUG ]   "sync_provider": []
[ceph-3][DEBUG ] }
[ceph-3][DEBUG ] ********************************************************************************
[ceph-3][INFO  ] monitor: mon.ceph-3 is running
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status

[errno 2] error connecting to the cluster

No key yet? Continue with the next step and observe.
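The `[errno 2] error connecting to the cluster` above is consistent with the admin keyring not being present on the node yet. A minimal check, using the standard Ceph keyring path (the same default search path that appears in the OSD logs further below):

```shell
# Report whether a keyring path is readable; "[errno 2] error connecting
# to the cluster" is typically the ceph CLI failing to find this file.
check_keyring() {
    if [ -r "$1" ]; then
        echo "keyring present: $1"
    else
        echo "keyring missing: $1"
    fi
}

check_keyring /etc/ceph/ceph.client.admin.keyring
```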

Gather the keys

ceph-deploy gatherkeys ceph-0 ceph-1 ceph-2 ceph-3

Log output:

ceph -s now shows the daemons running.
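When gatherkeys succeeds, the keyrings land in the deploy working directory. A quick sketch that flags any missing ones, assuming the default file names ceph-deploy normally writes (adjust if your layout differs):

```shell
# Keyring files "ceph-deploy gatherkeys" normally writes to the
# working directory; flag any that are missing.
expected_keyrings() {
    cat <<'EOF'
ceph.client.admin.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-rgw.keyring
EOF
}

for f in $(expected_keyrings); do
    [ -e "$f" ] && echo "found: $f" || echo "missing: $f"
done
```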

Deploy the admin nodes

ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[root@ceph-0 ceph]# ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff91add0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff91c777d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-0
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-1
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-2
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-3
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
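After the admin push, each node has /etc/ceph/ceph.conf plus the admin keyring, which is root-readable only by default. If `ceph -s` fails for non-root users, a common convenience (it does weaken key secrecy) is to relax the keyring permissions on every node; shown here in dry-run form, assuming the passwordless root SSH set up earlier:

```shell
# Print (dry-run) the per-node command that relaxes keyring permissions;
# remove the "echo" inside the loop to actually execute over SSH.
relax_keyring_perms() {
    for host in ceph-0 ceph-1 ceph-2 ceph-3; do
        echo ssh "root@$host" chmod +r /etc/ceph/ceph.client.admin.keyring
    done
}

relax_keyring_perms
```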

ceph -s

Deploy the OSDs

ceph-deploy osd create ceph-0 --data /dev/vdb
ceph-deploy osd create ceph-1 --data /dev/vdb
ceph-deploy osd create ceph-2 --data /dev/vdb
ceph-deploy osd create ceph-3 --data /dev/vdb
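The four per-host invocations above can be collapsed into one loop; shown here in dry-run form (drop the `echo` to execute for real), assuming /dev/vdb is the data disk on every host as planned earlier:

```shell
# Print (dry-run) the OSD-create command for each host; remove the
# "echo" to run ceph-deploy for real.
create_osds() {
    for host in ceph-0 ceph-1 ceph-2 ceph-3; do
        echo ceph-deploy osd create "$host" --data /dev/vdb
    done
}

create_osds
```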

Logs for ceph-0 / ceph-1 / ceph-2 / ceph-3 (ceph-0 shown; the others are analogous)

[root@ceph-0 ceph]# ceph-deploy osd create ceph-0 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-0 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9cc8cd20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-0
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff9cd1bed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][WARNIN] osd keyring does not exist yet, creating one
[ceph-0][DEBUG ] create a keyring file
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-0][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-837353d8-91ff-4418-bc8f-a655d94049d4 /dev/vdb
[ceph-0][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-0][WARNIN]  stdout: Volume group "ceph-837353d8-91ff-4418-bc8f-a655d94049d4" successfully created
[ceph-0][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f ceph-837353d8-91ff-4418-bc8f-a655d94049d4
[ceph-0][WARNIN]  stdout: Logical volume "osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f" created.
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-0][WARNIN] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
[ceph-0][WARNIN] Running command: /usr/bin/ln -s /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-0][WARNIN]  stderr: 2023-11-11 10:48:34.800 ffff843261e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-0][WARNIN] 2023-11-11 10:48:34.800 ffff843261e0 -1 AuthRegistry(0xffff7c081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-0][WARNIN]  stderr: got monmap epoch 1
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQB/605l8419HxAAhIoXMxEJCV5J6qOB8AyHrw==
[ceph-0][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-0][WARNIN] added entity osd.0 auth(key=AQB/605l8419HxAAhIoXMxEJCV5J6qOB8AyHrw==)
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-0][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c1870346-8e19-4788-b1dd-19bd75d6ec2f --setuser ceph --setgroup ceph
[ceph-0][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-0][WARNIN] Running command: /usr/bin/ln -snf /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-c1870346-8e19-4788-b1dd-19bd75d6ec2f.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-0][WARNIN] Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
[ceph-0][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-0][WARNIN] Running command: /usr/bin/systemctl start ceph-osd@0
[ceph-0][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-0][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-0][INFO  ] checking OSD status...
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-0 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-1 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-1 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff87d9ed20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff87e2ded0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][WARNIN] osd keyring does not exist yet, creating one
[ceph-1][DEBUG ] create a keyring file
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-1][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-1][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-89d26557-d392-4a46-8d3d-6904076cd4e0 /dev/vdb
[ceph-1][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-1][WARNIN]  stdout: Volume group "ceph-89d26557-d392-4a46-8d3d-6904076cd4e0" successfully created
[ceph-1][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-4aa0152e-d817-4583-817b-81ada419624a ceph-89d26557-d392-4a46-8d3d-6904076cd4e0
[ceph-1][WARNIN]  stdout: Logical volume "osd-block-4aa0152e-d817-4583-817b-81ada419624a" created.
[ceph-1][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-1][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-1][WARNIN] Running command: /bin/ln -s /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[ceph-1][WARNIN]  stderr: 2023-11-11 10:49:41.805 ffff89d6d1e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-1][WARNIN] 2023-11-11 10:49:41.805 ffff89d6d1e0 -1 AuthRegistry(0xffff84081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-1][WARNIN]  stderr: got monmap epoch 1
[ceph-1][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQDC605lWArnLhAAEYhGC+H+Jy224yAIJhL0gA==
[ceph-1][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[ceph-1][WARNIN] added entity osd.1 auth(key=AQDC605lWArnLhAAEYhGC+H+Jy224yAIJhL0gA==)
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[ceph-1][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 4aa0152e-d817-4583-817b-81ada419624a --setuser ceph --setgroup ceph
[ceph-1][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a --path /var/lib/ceph/osd/ceph-1 --no-mon-config
[ceph-1][WARNIN] Running command: /bin/ln -snf /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-1-4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-4aa0152e-d817-4583-817b-81ada419624a.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-1][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@1
[ceph-1][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-1][WARNIN] Running command: /bin/systemctl start ceph-osd@1
[ceph-1][WARNIN] --> ceph-volume lvm activate successful for osd ID: 1
[ceph-1][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-1][INFO  ] checking OSD status...
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-1 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-2 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-2 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9a808d20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-2
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff9a897ed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][WARNIN] osd keyring does not exist yet, creating one
[ceph-2][DEBUG ] create a keyring file
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-2][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-2][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f /dev/vdb
[ceph-2][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-2][WARNIN]  stdout: Volume group "ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f" successfully created
[ceph-2][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f
[ceph-2][WARNIN]  stdout: Logical volume "osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960" created.
[ceph-2][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-2][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-2][WARNIN] Running command: /bin/ln -s /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[ceph-2][WARNIN]  stderr: 2023-11-11 10:50:01.837 ffff947321e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-2][WARNIN] 2023-11-11 10:50:01.837 ffff947321e0 -1 AuthRegistry(0xffff8c081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-2][WARNIN]  stderr: got monmap epoch 1
[ceph-2][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQDW605lUIA0MhAAqOoCGrnDsVpfoIIKVtCXHg==
[ceph-2][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[ceph-2][WARNIN] added entity osd.2 auth(key=AQDW605lUIA0MhAAqOoCGrnDsVpfoIIKVtCXHg==)
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[ceph-2][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid fe7a2030-94ac-4bbb-af27-7950509b0960 --setuser ceph --setgroup ceph
[ceph-2][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[ceph-2][WARNIN] Running command: /bin/ln -snf /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-fe7a2030-94ac-4bbb-af27-7950509b0960.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-2][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[ceph-2][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-2][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[ceph-2][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[ceph-2][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-2][INFO  ] checking OSD status...
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-2 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-3 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-3 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff7f600d20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-3
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff7f68fed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][WARNIN] osd keyring does not exist yet, creating one
[ceph-3][DEBUG ] create a keyring file
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-3][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-3][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-a75dd665-280f-4901-90db-d72aea971fd7 /dev/vdb
[ceph-3][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-3][WARNIN]  stdout: Volume group "ceph-a75dd665-280f-4901-90db-d72aea971fd7" successfully created
[ceph-3][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 ceph-a75dd665-280f-4901-90db-d72aea971fd7
[ceph-3][WARNIN]  stdout: Logical volume "osd-block-223dea89-7b5f-4584-b294-bbc0457cd250" created.
[ceph-3][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-3][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-3][WARNIN] Running command: /bin/ln -s /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
[ceph-3][WARNIN]  stderr: 2023-11-11 10:50:22.197 ffffa80151e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-3][WARNIN] 2023-11-11 10:50:22.197 ffffa80151e0 -1 AuthRegistry(0xffffa0081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-3][WARNIN]  stderr: got monmap epoch 1
[ceph-3][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQDr605lrGtEDRAAvHq/3Wbxx0jH8NgtcKN/aA==
[ceph-3][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-3/keyring
[ceph-3][WARNIN] added entity osd.3 auth(key=AQDr605lrGtEDRAAvHq/3Wbxx0jH8NgtcKN/aA==)
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
[ceph-3][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 223dea89-7b5f-4584-b294-bbc0457cd250 --setuser ceph --setgroup ceph
[ceph-3][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
[ceph-3][WARNIN] Running command: /bin/ln -snf /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-3-223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-223dea89-7b5f-4584-b294-bbc0457cd250.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-3][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@3
[ceph-3][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-3][WARNIN] Running command: /bin/systemctl start ceph-osd@3
[ceph-3][WARNIN] --> ceph-volume lvm activate successful for osd ID: 3
[ceph-3][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-3][INFO  ] checking OSD status...
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-3 is now ready for osd use.

The `ceph -s` output has not changed yet; the storage capacity is still not visible at this point.
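
The OSD state can also be confirmed by parsing the same JSON that ceph-deploy queries at the end of each run above (`ceph --cluster=ceph osd stat --format=json`). A minimal sketch, assuming the Nautilus (14.x) field names `num_osds`/`num_up_osds`/`num_in_osds`; older releases wrap these counters in an `osdmap` object, so both shapes are handled:

```python
import json

def osd_health(stat_json: str) -> dict:
    """Summarize `ceph osd stat --format=json` output.

    Returns the OSD counters and whether every OSD is both up and in.
    Field names assume the Nautilus schema; releases that nest the
    counters under "osdmap" are unwrapped first.
    """
    data = json.loads(stat_json)
    stats = data.get("osdmap", data)  # unwrap if nested
    total = stats["num_osds"]
    up = stats["num_up_osds"]
    num_in = stats["num_in_osds"]
    return {
        "total": total,
        "up": up,
        "in": num_in,
        "healthy": total > 0 and up == total and num_in == total,
    }

# Sample input matching the four OSDs created above (illustrative data,
# not captured from this cluster):
sample = '{"epoch": 20, "num_osds": 4, "num_up_osds": 4, "num_in_osds": 4, "num_remapped_pgs": 0}'
print(osd_health(sample))
```

Running `ceph osd stat --format=json` on the deploy node and feeding the result to this function gives a quick up/in check without scanning the full `ceph -s` output.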

Deploy the mgr daemons

 ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3

Log output

[root@ceph-0 ceph]# ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-0', 'ceph-0'), ('ceph-1', 'ceph-1'), ('ceph-2', 'ceph-2'), ('ceph-3', 'ceph-3')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff94d07730>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0xffff94e71dd0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-0:ceph-0 ceph-1:ceph-1 ceph-2:ceph-2 ceph-3:ceph-3
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][WARNIN] mgr keyring does not exist yet, creating one
[ceph-0][DEBUG ] create a keyring file
[ceph-0][DEBUG ] create path recursively if it doesn't exist
[ceph-0][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-0 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-0/keyring
[ceph-0][INFO  ] Running command: systemctl enable ceph-mgr@ceph-0
[ceph-0][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-0.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-0][INFO  ] Running command: systemctl start ceph-mgr@ceph-0
[ceph-0][INFO  ] Running command: systemctl enable ceph.target
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][WARNIN] mgr keyring does not exist yet, creating one
[ceph-1][DEBUG ] create a keyring file
[ceph-1][DEBUG ] create path recursively if it doesn't exist
[ceph-1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-1/keyring
[ceph-1][INFO  ] Running command: systemctl enable ceph-mgr@ceph-1
[ceph-1][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-1.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-1][INFO  ] Running command: systemctl start ceph-mgr@ceph-1
[ceph-1][INFO  ] Running command: systemctl enable ceph.target
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][WARNIN] mgr keyring does not exist yet, creating one
[ceph-2][DEBUG ] create a keyring file
[ceph-2][DEBUG ] create path recursively if it doesn't exist
[ceph-2][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-2 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-2/keyring
[ceph-2][INFO  ] Running command: systemctl enable ceph-mgr@ceph-2
[ceph-2][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-2.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-2][INFO  ] Running command: systemctl start ceph-mgr@ceph-2
[ceph-2][INFO  ] Running command: systemctl enable ceph.target
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][WARNIN] mgr keyring does not exist yet, creating one
[ceph-3][DEBUG ] create a keyring file
[ceph-3][DEBUG ] create path recursively if it doesn't exist
[ceph-3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-3/keyring
[ceph-3][INFO  ] Running command: systemctl enable ceph-mgr@ceph-3
[ceph-3][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-3.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-3][INFO  ] Running command: systemctl start ceph-mgr@ceph-3
[ceph-3][INFO  ] Running command: systemctl enable ceph.target

Run ceph -s to check that the mgr daemons are up.

Note: it took about 15 minutes before the OSDs' capacity showed up in ceph -s.

Verifying Ceph block storage (RBD)

Create a storage pool (the trailing 250 250 are pg_num and pgp_num: the pool's placement-group count and the PG count used for data placement, normally kept equal):

ceph osd pool create vdbench 250 250
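A common rule of thumb for choosing pg_num is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two. A minimal sketch (the 4 OSDs and 3 replicas below are assumed example values, not taken from this cluster; substitute your own numbers):

```shell
# Rule-of-thumb PG sizing (a sketch; 4 OSDs / 3 replicas are assumed values).
osds=4
replicas=3
target=$(( osds * 100 / replicas ))  # ~133 in this example

# Round up to the next power of two.
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "pg_num = $pg"
```

By this rule the example cluster would use 256 rather than the 250 above; Ceph accepts non-power-of-two values, but power-of-two PG counts balance data more evenly.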

Tag the pool for use as RBD block storage:

ceph osd pool application enable vdbench rbd

Create a 20 GiB image (no compression configured):

rbd create image1 --size 20G --pool vdbench --image-format 2 --image-feature layering

Map the image to a Linux block device:

rbd map vdbench/image1

The resulting log output shows that the /dev/rbd0 device file has been created.
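Once mapped, the image can be exercised like any block device. A minimal smoke-test sketch (these commands need the running cluster with the image mapped; the /mnt/rbd mount point is an assumption, not from the original post):

```shell
# Confirm the kernel mapping (should list vdbench/image1 on /dev/rbd0).
rbd showmapped

# Put a filesystem on the device and mount it for a quick test.
mkfs.ext4 /dev/rbd0          # destroys any existing data on the image
mkdir -p /mnt/rbd
mount /dev/rbd0 /mnt/rbd
df -h /mnt/rbd

# Clean up when done.
umount /mnt/rbd
rbd unmap vdbench/image1
```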

References:

ceph-deploy – Deploy Ceph with minimal infrastructure — ceph-deploy 2.1.0 documentation

Deploying a specific Ceph version with ceph-deploy ("mons are allowing insecure global_id reclaim") — ggrong0213, CSDN blog

Ceph usage — enabling the dashboard and Prometheus monitoring — cyh00001, 博客园 (cnblogs.com)
