3. Ceph Mimic Deployment


  • I. Ceph Mimic deployment
    • 1. Environment planning
    • 2. Basic system preparation
      • 2.1 Disable the firewall and SELinux
      • 2.2 Make sure all hosts are time-synchronized
      • 2.3 Set up passwordless SSH between all hosts
      • 2.4 Add name resolution for all hosts
    • 3. Configure the Ceph package repository
    • 4. Install the ceph-deploy tool
    • 5. Initialize the Ceph cluster
    • 6. Install the Ceph packages on all cluster nodes
    • 7. Install ceph-common on the client
    • 8. Create the Ceph monitor component in the cluster
      • 8.1 Add the public network setting to the configuration file
      • 8.2 Create the MON service on node01
      • 8.3 Synchronize the configuration to all cluster nodes
      • 8.4 Add MON services on the other two nodes to avoid a MON single point of failure
      • 8.5 Check the Ceph cluster status
    • 9. Create the mgr service
      • 9.1 Create the mgr on node01
      • 9.2 Add additional mgr daemons
      • 9.3 Check the cluster status again
    • 10. Add OSDs
      • 10.1 Initialize the disks
      • 10.2 Add the OSD services
      • 10.3 Check the cluster status again
    • 11. Add the dashboard plugin
      • 11.1 Find the node running the active mgr
      • 11.2 Enable the dashboard plugin
      • 11.3 Create the certificate required by the dashboard
      • 11.4 Set the dashboard access address
      • 11.5 Restart the dashboard plugin
      • 11.6 Set the username and password
      • 11.7 Access the web UI

I. Ceph Mimic deployment

1. Environment planning

192.168.140.10  node01  Ceph cluster node / ceph-deploy node, data disk /dev/sdb
192.168.140.11  node02  Ceph cluster node, data disk /dev/sdb
192.168.140.12  node03  Ceph cluster node, data disk /dev/sdb
192.168.140.13  app     application (client) server

2. Basic system preparation

2.1 Disable the firewall and SELinux
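
On CentOS 7 this is typically done with the following commands on every host (a minimal sketch):

[root@node01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@node01 ~]# setenforce 0
[root@node01 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # make the change persistent across reboots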

2.2 Make sure all hosts are time-synchronized

[root@node01 ~]# ntpdate 120.25.115.20 
14 Jun 11:43:21 ntpdate[1300]: step time server 120.25.115.20 offset -86398.543174 sec

[root@node01 ~]# crontab -l
*/30 * * * * /usr/sbin/ntpdate 120.25.115.20 &> /dev/null
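
The same one-off sync and cron entry are needed on node02, node03 and app as well; a minimal sketch that pushes them from node01 over SSH (this assumes SSH access to the other hosts is already working, and note that piping to crontab - replaces any existing crontab on the target host):

[root@node01 ~]# for i in 11 12 13; do
>   ssh root@192.168.140.$i "ntpdate 120.25.115.20"
>   ssh root@192.168.140.$i "echo '*/30 * * * * /usr/sbin/ntpdate 120.25.115.20 &> /dev/null' | crontab -"
> done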

2.3 Set up passwordless SSH between all hosts

[root@node01 ~]# ssh-keygen -t rsa 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I56kRZucE7YCQAOM4RGGOuYuiBj4tgo/1CI9IXqaDEw root@node01
The key's randomart image is:
+---[RSA 2048]----+
|XB.              |
|=oo              |
|...   +          |
|+E.. + *         |
|B+ o. @ S        |
|*o* .* + .       |
|OO o. o          |
|O++              |
|oooo             |
+----[SHA256]-----+


[root@node01 ~]# mv /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys

[root@node01 ~]# scp -r /root/.ssh/ root@192.168.140.11:/root/
[root@node01 ~]# scp -r /root/.ssh/ root@192.168.140.12:/root/
[root@node01 ~]# scp -r /root/.ssh/ root@192.168.140.13:/root/
[root@node01 ~]# for i in 10 11 12 13; do ssh root@192.168.140.$i hostname; ssh root@192.168.140.$i date; done
node01
Fri Jun 14 11:49:57 CST 2024
node02
Fri Jun 14 11:49:57 CST 2024
node03
Fri Jun 14 11:49:58 CST 2024
app
Fri Jun 14 11:49:58 CST 2024
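
Copying the whole .ssh directory distributes node01's private key to every host; if only the public key should be pushed, ssh-copy-id is an alternative (a sketch):

[root@node01 ~]# for i in 11 12 13; do ssh-copy-id root@192.168.140.$i; done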

2.4 Add name resolution for all hosts

[root@node01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.140.10	node01
192.168.140.11	node02
192.168.140.12	node03
192.168.140.13	app

[root@node01 ~]# for i in 10 11 12 13
> do
> scp /etc/hosts root@192.168.140.$i:/etc/hosts
> done
hosts                                                                                               100%  244   411.0KB/s   00:00    
hosts                                                                                               100%  244    51.2KB/s   00:00    
hosts                                                                                               100%  244    58.0KB/s   00:00    
hosts                                                                                               100%  244    58.3KB/s   00:00    
[root@node01 ~]# 

3. Configure the Ceph package repository

[root@node01 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@node01 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@node01 ~]# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
priority=1
 
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
priority=1
 
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=0
priority=1
[root@node01 ~]# yum update
[root@node01 ~]# reboot

4. Install the ceph-deploy tool

[root@node01 ~]# yum install -y ceph-deploy 

5. Initialize the Ceph cluster

[root@node01 ~]# mkdir /etc/ceph
[root@node01 ~]# cd /etc/ceph
[root@node01 ceph]# 

If the initialization fails with "ImportError: No module named pkg_resources", install python2-pip first: yum install -y python2-pip

[root@node01 ceph]# ceph-deploy new node01 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f671c856ed8>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f671bfd17a0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['node01']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /usr/sbin/ip link show
[node01][INFO  ] Running command: /usr/sbin/ip addr show
[node01][DEBUG ] IP addresses found: [u'192.168.140.10']
[ceph_deploy.new][DEBUG ] Resolving host node01
[ceph_deploy.new][DEBUG ] Monitor node01 at 192.168.140.10
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.140.10']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@node01 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

ceph.conf: the cluster configuration file
ceph-deploy-ceph.log: the ceph-deploy log file
ceph.mon.keyring: the authentication keyring used by the Ceph MON component

6. Install the Ceph packages on all cluster nodes

[root@node01 ceph]# yum install -y ceph ceph-radosgw 

[root@node01 ceph]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
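
The heading says all cluster nodes, so the same packages also have to be installed on node02 and node03 (after configuring the Ceph repository from section 3 on those nodes as well); a minimal sketch run from node01:

[root@node01 ceph]# for host in node02 node03; do ssh $host "yum install -y ceph ceph-radosgw"; done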

7. Install ceph-common on the client

[root@app ~]# yum install -y ceph-common

8. Create the Ceph monitor component in the cluster

8.1 Add the public network setting to the configuration file

[root@node01 ceph]# cat ceph.conf
[global]
fsid = e2010562-0bae-4999-9247-4017f875acc8
mon_initial_members = node01
mon_host = 192.168.140.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.140.0/24

8.2 Create the MON service on node01

[root@node01 ceph]# ceph-deploy mon create-initial 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f45d3cec368>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f45d3d3c500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node01
[ceph_deploy.mon][DEBUG ] detecting platform for host node01 ...
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.9.2009 Core
[node01][DEBUG ] determining if provided host has same hostname in remote
[node01][DEBUG ] get remote short hostname
[node01][DEBUG ] deploying mon to node01
[node01][DEBUG ] get remote short hostname
[node01][DEBUG ] remote hostname: node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][DEBUG ] create the mon path if it does not exist
[node01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node01/done
[node01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node01/done
[node01][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node01.mon.keyring
[node01][DEBUG ] create the monitor keyring file
[node01][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node01 --keyring /var/lib/ceph/tmp/ceph-node01.mon.keyring --setuser 167 --setgroup 167
[node01][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node01.mon.keyring
[node01][DEBUG ] create a done file to avoid re-doing the mon deployment
[node01][DEBUG ] create the init path if it does not exist
[node01][INFO  ] Running command: systemctl enable ceph.target
[node01][INFO  ] Running command: systemctl enable ceph-mon@node01
[node01][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node01.service to /usr/lib/systemd/system/ceph-mon@.service.
[node01][INFO  ] Running command: systemctl start ceph-mon@node01
[node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node01.asok mon_status
[node01][DEBUG ] ********************************************************************************
[node01][DEBUG ] status for monitor: mon.node01
[node01][DEBUG ] {
[node01][DEBUG ]   "election_epoch": 3, 
[node01][DEBUG ]   "extra_probe_peers": [], 
[node01][DEBUG ]   "feature_map": {
[node01][DEBUG ]     "mon": [
[node01][DEBUG ]       {
[node01][DEBUG ]         "features": "0x3ffddff8ffacfffb", 
[node01][DEBUG ]         "num": 1, 
[node01][DEBUG ]         "release": "luminous"
[node01][DEBUG ]       }
[node01][DEBUG ]     ]
[node01][DEBUG ]   }, 
[node01][DEBUG ]   "features": {
[node01][DEBUG ]     "quorum_con": "4611087854031667195", 
[node01][DEBUG ]     "quorum_mon": [
[node01][DEBUG ]       "kraken", 
[node01][DEBUG ]       "luminous", 
[node01][DEBUG ]       "mimic", 
[node01][DEBUG ]       "osdmap-prune"
[node01][DEBUG ]     ], 
[node01][DEBUG ]     "required_con": "144115738102218752", 
[node01][DEBUG ]     "required_mon": [
[node01][DEBUG ]       "kraken", 
[node01][DEBUG ]       "luminous", 
[node01][DEBUG ]       "mimic", 
[node01][DEBUG ]       "osdmap-prune"
[node01][DEBUG ]     ]
[node01][DEBUG ]   }, 
[node01][DEBUG ]   "monmap": {
[node01][DEBUG ]     "created": "2024-06-14 14:33:44.291070", 
[node01][DEBUG ]     "epoch": 1, 
[node01][DEBUG ]     "features": {
[node01][DEBUG ]       "optional": [], 
[node01][DEBUG ]       "persistent": [
[node01][DEBUG ]         "kraken", 
[node01][DEBUG ]         "luminous", 
[node01][DEBUG ]         "mimic", 
[node01][DEBUG ]         "osdmap-prune"
[node01][DEBUG ]       ]
[node01][DEBUG ]     }, 
[node01][DEBUG ]     "fsid": "e2010562-0bae-4999-9247-4017f875acc8", 
[node01][DEBUG ]     "modified": "2024-06-14 14:33:44.291070", 
[node01][DEBUG ]     "mons": [
[node01][DEBUG ]       {
[node01][DEBUG ]         "addr": "192.168.140.10:6789/0", 
[node01][DEBUG ]         "name": "node01", 
[node01][DEBUG ]         "public_addr": "192.168.140.10:6789/0", 
[node01][DEBUG ]         "rank": 0
[node01][DEBUG ]       }
[node01][DEBUG ]     ]
[node01][DEBUG ]   }, 
[node01][DEBUG ]   "name": "node01", 
[node01][DEBUG ]   "outside_quorum": [], 
[node01][DEBUG ]   "quorum": [
[node01][DEBUG ]     0
[node01][DEBUG ]   ], 
[node01][DEBUG ]   "rank": 0, 
[node01][DEBUG ]   "state": "leader", 
[node01][DEBUG ]   "sync_provider": []
[node01][DEBUG ] }
[node01][DEBUG ] ********************************************************************************
[node01][INFO  ] monitor: mon.node01 is running
[node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node01.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node01.asok mon_status
[ceph_deploy.mon][INFO  ] mon.node01 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmph2e00E
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] get remote short hostname
[node01][DEBUG ] fetch remote file
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.node01.asok mon_status
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.admin
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.bootstrap-mds
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.bootstrap-mgr
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.bootstrap-osd
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmph2e00E
[root@node01 ceph]# 
[root@node01 ceph]# netstat -tunlp | grep ceph-mon
tcp        0      0 192.168.140.10:6789     0.0.0.0:*               LISTEN      8522/ceph-mon   
[root@node01 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log  rbdmap
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring
[root@node01 ceph]# ceph health
HEALTH_OK

8.3 Synchronize the configuration to all cluster nodes

[root@node01 ceph]# ceph-deploy admin node01 node02 node03 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin node01 node02 node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbc339dd7e8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['node01', 'node02', 'node03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7fbc346f3320>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node02
The authenticity of host 'node02 (192.168.140.11)' can't be established.
ECDSA key fingerprint is SHA256:zyaTdmF7ziBAGyqkd/uOp+BnislJYeB9cr5rg0Mh488.
ECDSA key fingerprint is MD5:3d:95:e4:7f:f0:ca:e6:0d:db:24:15:49:da:50:01:d4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node02' (ECDSA) to the list of known hosts.
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node03
The authenticity of host 'node03 (192.168.140.12)' can't be established.
ECDSA key fingerprint is SHA256:cMuRX+kQHIlvZ74g4XfGisp4SpqQ14GmzD8fXk/7Rc0.
ECDSA key fingerprint is MD5:9b:1f:e0:95:ac:b8:ca:43:3c:49:9a:48:4f:e3:11:a5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node03' (ECDSA) to the list of known hosts.
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Check the configuration on the other nodes:

[root@node02 ~]# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  rbdmap  tmp5ymlpm

8.4 Add MON services on the other two nodes to avoid a MON single point of failure

[root@node01 ceph]# ceph-deploy mon add node02 
[root@node01 ceph]# ceph-deploy mon add node03

8.5 Check the Ceph cluster status

[root@node01 ceph]# ceph -s
  cluster:
    id:     e2010562-0bae-4999-9247-4017f875acc8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node01,node02,node03
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs: 

9. Create the mgr service

Since the Luminous (L) release, Ceph ships the Ceph Manager Daemon, ceph-mgr for short.
This component was introduced to take load off the ceph-monitor by taking over part of its work, such as plugin management, so that the cluster can be managed more effectively.

9.1 Create the mgr on node01

[root@node01 ceph]# ceph-deploy mgr create node01 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('node01', 'node01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0a5c66ebd8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f0a5cf4f230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts node01:node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][WARNIN] mgr keyring does not exist yet, creating one
[node01][DEBUG ] create a keyring file
[node01][DEBUG ] create path recursively if it doesn't exist
[node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.node01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-node01/keyring
[node01][INFO  ] Running command: systemctl enable ceph-mgr@node01
[node01][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node01.service to /usr/lib/systemd/system/ceph-mgr@.service.
[node01][INFO  ] Running command: systemctl start ceph-mgr@node01
[node01][INFO  ] Running command: systemctl enable ceph.target
[root@node01 ceph]# 
       
[root@node01 ceph]# netstat -tunlp | grep ceph-mgr
tcp        0      0 192.168.140.10:6800     0.0.0.0:*               LISTEN      9115/ceph-mgr      
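
With the mgr running, the plugins it manages can be listed; a sketch (the exact module list depends on the release):

[root@node01 ceph]# ceph mgr module ls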

9.2 Add additional mgr daemons

[root@node01 ceph]# ceph-deploy mgr create node02
[root@node01 ceph]# ceph-deploy mgr create node03

9.3 Check the cluster status again

[root@node01 ceph]# ceph -s
  cluster:
    id:     e2010562-0bae-4999-9247-4017f875acc8
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum node01,node02,node03
    mgr: node01(active), standbys: node02, node03
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:   

10. Add OSDs

10.1 Initialize the disks

[root@node01 ceph]# ceph-deploy disk zap node01 /dev/sdb 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap node01 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbbcf959878>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : node01
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fbbcf994a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[node01][DEBUG ] zeroing last few blocks of device
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[node01][WARNIN] --> Zapping: /dev/sdb
[node01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[node01][WARNIN] Running command: /usr/bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
[node01][WARNIN]  stderr: 10+0 records in
[node01][WARNIN] 10+0 records out
[node01][WARNIN] 10485760 bytes (10 MB) copied
[node01][WARNIN]  stderr: , 0.0304543 s, 344 MB/s
[node01][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdb>
[root@node01 ceph]# ceph-deploy disk zap node02 /dev/sdb 
[root@node01 ceph]# ceph-deploy disk zap node03 /dev/sdb 

10.2 Add the OSD services

[root@node01 ceph]# ceph-deploy osd create --data /dev/sdb node01 
[root@node01 ceph]# ceph-deploy osd create --data /dev/sdb node02 
[root@node01 ceph]# ceph-deploy osd create --data /dev/sdb node03 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbf998db998>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : node03
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fbf999119b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to node03
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node03][WARNIN] osd keyring does not exist yet, creating one
[node03][DEBUG ] create a keyring file
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[node03][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[node03][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 33310a37-6e35-4f34-a8c6-543254bf0384
[node03][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-0cf70ab4-cab2-4492-9610-691efc7bc222 /dev/sdb
[node03][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[node03][WARNIN]  stdout: Volume group "ceph-0cf70ab4-cab2-4492-9610-691efc7bc222" successfully created
[node03][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-33310a37-6e35-4f34-a8c6-543254bf0384 ceph-0cf70ab4-cab2-4492-9610-691efc7bc222
[node03][WARNIN]  stdout: Logical volume "osd-block-33310a37-6e35-4f34-a8c6-543254bf0384" created.
[node03][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[node03][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[node03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-0cf70ab4-cab2-4492-9610-691efc7bc222/osd-block-33310a37-6e35-4f34-a8c6-543254bf0384
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[node03][WARNIN] Running command: /bin/ln -s /dev/ceph-0cf70ab4-cab2-4492-9610-691efc7bc222/osd-block-33310a37-6e35-4f34-a8c6-543254bf0384 /var/lib/ceph/osd/ceph-2/block
[node03][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[node03][WARNIN]  stderr: got monmap epoch 3
[node03][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQDc7Wtm/uzoEhAAJfd9GdY0QxFhzRgalw5UWg==
[node03][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[node03][WARNIN]  stdout: added entity osd.2 auth auth(auid = 18446744073709551615 key=AQDc7Wtm/uzoEhAAJfd9GdY0QxFhzRgalw5UWg== with 0 caps)
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[node03][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 33310a37-6e35-4f34-a8c6-543254bf0384 --setuser ceph --setgroup ceph
[node03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[node03][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0cf70ab4-cab2-4492-9610-691efc7bc222/osd-block-33310a37-6e35-4f34-a8c6-543254bf0384 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[node03][WARNIN] Running command: /bin/ln -snf /dev/ceph-0cf70ab4-cab2-4492-9610-691efc7bc222/osd-block-33310a37-6e35-4f34-a8c6-543254bf0384 /var/lib/ceph/osd/ceph-2/block
[node03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[node03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-33310a37-6e35-4f34-a8c6-543254bf0384
[node03][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-33310a37-6e35-4f34-a8c6-543254bf0384.service to /usr/lib/systemd/system/ceph-volume@.service.
[node03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[node03][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service to /usr/lib/systemd/system/ceph-osd@.service.
[node03][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[node03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[node03][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[node03][INFO  ] checking OSD status...
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node03 is now ready for osd use.
[root@node01 ceph]# netstat -tunlp | grep osd
tcp        0      0 192.168.140.10:6801     0.0.0.0:*               LISTEN      9705/ceph-osd       
tcp        0      0 192.168.140.10:6802     0.0.0.0:*               LISTEN      9705/ceph-osd       
tcp        0      0 192.168.140.10:6803     0.0.0.0:*               LISTEN      9705/ceph-osd       
tcp        0      0 192.168.140.10:6804     0.0.0.0:*               LISTEN      9705/ceph-osd   
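
Besides the listening ports, the new OSDs and their placement in the CRUSH tree can be checked with ceph osd tree (a sketch; output omitted here):

[root@node01 ceph]# ceph osd tree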

10.3 Check the cluster status again

[root@node01 ceph]# ceph -s
  cluster:
    id:     e2010562-0bae-4999-9247-4017f875acc8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node01,node02,node03
    mgr: node01(active), standbys: node02, node03
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs: 

Procedure for scaling the cluster out (adding an OSD node), as shown in the sketch below:
1. Prepare the basic system environment on the new node
2. Install the ceph and ceph-radosgw packages on the new node
3. Synchronize the configuration files to the new node
4. Initialize the disk and add the OSD
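
A minimal sketch of those steps, using a hypothetical new node node04 with a spare /dev/sdb (the hostname and disk are assumptions for illustration; the base system preparation from sections 2 and 3 is done on node04 first):

[root@node01 ceph]# ssh node04 "yum install -y ceph ceph-radosgw"     # install the Ceph packages on the new node
[root@node01 ceph]# ceph-deploy admin node04                          # push ceph.conf and the admin keyring
[root@node01 ceph]# ceph-deploy disk zap node04 /dev/sdb              # initialize the disk
[root@node01 ceph]# ceph-deploy osd create --data /dev/sdb node04     # add the OSD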

11. Add the dashboard plugin

11.1 Find the node running the active mgr

[root@node01 ceph]# ceph -s
  cluster:
    id:     e2010562-0bae-4999-9247-4017f875acc8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node01,node02,node03
    mgr: node01(active), standbys: node02, node03
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs: 

11.2 Enable the dashboard plugin

[root@node01 ceph]# ceph mgr module enable dashboard

11.3 Create the certificate required by the dashboard

[root@node01 ceph]# ceph dashboard create-self-signed-cert
Self-signed certificate created

[root@node01 ceph]# mkdir /etc/mgr-dashboard
[root@node01 ceph]# cd /etc/mgr-dashboard
[root@node01 mgr-dashboard]#  openssl req -new -nodes -x509 -subj "/O=IT-ceph/CN=cn" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca
Generating a 2048 bit RSA private key
...............................................................................................+++
......+++
writing new private key to 'dashboard.key'
-----
[root@node01 mgr-dashboard]# 
[root@node01 mgr-dashboard]# ls
dashboard.crt  dashboard.key
[root@node01 mgr-dashboard]# 

11.4 Set the dashboard access address

[root@node01 mgr-dashboard]# ceph config set mgr mgr/dashboard/server_addr 192.168.140.10
[root@node01 mgr-dashboard]# ceph config set mgr mgr/dashboard/server_port 8080
[root@node01 mgr-dashboard]# 

11.5 Restart the dashboard plugin

[root@node01 mgr-dashboard]# ceph mgr module disable dashboard
[root@node01 mgr-dashboard]# ceph mgr module enable dashboard

11.6 Set the username and password

[root@node01 mgr-dashboard]#  ceph dashboard set-login-credentials martin redhat
Username and password updated
[root@node01 mgr-dashboard]# 

11.7 Access the web UI

(Screenshots of the dashboard web UI omitted.)
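
The address the dashboard is actually being served on can be confirmed from the cluster itself; with the settings above it should report something like https://192.168.140.10:8080/ (a sketch):

[root@node01 mgr-dashboard]# ceph mgr services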
