Private Cloud Infrastructure and Operations (OpenStack + openEuler Edition)
Project 1: Overview of the OpenStack Cloud Computing Infrastructure Platform
Task 1.1 Installing and Deploying the Virtualization Environment
By installing the openEuler 22.09 operating system, you will become familiar with installing a virtual machine and, along the way, with how virtualized compute resources are allocated and managed.
1.1.1 Installing VMware Workstation 17 Pro
Install VMware's virtualization software, VMware Workstation 17 Pro, on your personal computer; detailed installation tutorials are readily available online.
After installation completes, a VMware Workstation Pro icon appears on the desktop. Double-click it to open the software's main window, as shown in Figure 1-6.
Figure 1-6 VMware Workstation main window
1.1.2 Installing the Virtual Machine
openEuler is chosen in place of the classic CentOS as the lab virtualization environment.
Launch VMware Workstation and create a virtual machine for the openEuler 22.09 operating system, sized at 2 CPUs, 2 GB of memory, and a 40 GB system disk. The result of creating the virtual machine is shown in Figure 1-7.
Figure 1-7 Result of creating the openEuler 22.09 virtual machine
Power on the new virtual machine to enter the Linux installation screen, as shown in Figure 1-8.
Figure 1-8 Linux installation screen
After a short wait, the installer automatically enters the language selection screen, as shown in Figure 1-9. Keep the default English option and click "Continue" to enter the installation summary screen, as shown in Figure 1-10.
Figure 1-9 Language selection screen
Figure 1-10 Installation summary screen
On the installation summary screen, click "Installation Destination" to enter the disk partitioning screen, as shown in Figure 1-11. Two options are offered: automatic partitioning (Automatic) and manual partitioning (Custom). Partitioning should follow the server's actual purpose; here we choose the automatic scheme. Click "Done" in the upper-left corner to finish.
Figure 1-11 Disk partitioning screen
On the installation summary screen, click "Time & Date" to enter the time zone settings. Select your time zone, for example "Asia/Shanghai", and enable the "24-hour" clock, as shown in Figure 1-12.
Figure 1-12 Time zone settings
On the installation summary screen, click "Root Password" to set the root user's password, and check "Use SM3 to encrypt the password" to protect it with the Chinese national standard SM3 cryptographic hash algorithm, as shown in Figure 1-13. If the password you set is too simple, you must click "Done" twice to confirm.
After completing the settings above, click "Begin Installation" on the summary screen. When installation finishes, the completion screen appears, as shown in the figure; click "Reboot System" to restart.
After a short wait, the login screen appears, as shown in the figure. Enter the root username and the password you set to log in to openEuler 22.09 as the administrator.
At this point, openEuler 22.09 has been installed successfully.
Task 1.2 Environment Preparation
This deployment is based on a three-node environment: a control node (controller), a compute node (compute), and a storage node (storage).
First, following the steps above, use VMware Workstation Pro to create three openEuler 22.09 virtual machines.
The installation below assumes the following node environment.
If your IP environment differs, adjust the corresponding configuration files to match it.
| Node       | IP              | Role         |
| ---------- | --------------- | ------------ |
| controller | 192.168.213.130 | Control node |
| compute    | 192.168.213.131 | Compute node |
| storage    | 192.168.213.132 | Storage node |
Before the formal deployment, perform the following configuration and checks on every node.
1.2.1 Configuring the yum Repositories
Edit the repository configuration file:

```
[root@controller ~]# vi /etc/yum.repos.d/openEuler.repo
[OS]
name=OS
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler

[debuginfo]
name=debuginfo
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/debuginfo/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/debuginfo/$basearch/RPM-GPG-KEY-openEuler

[source]
name=source
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/source/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/source/RPM-GPG-KEY-openEuler

[update]
name=update
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/update/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler
```
Clean the yum cache and rebuild it:

```bash
[root@controller ~]# yum clean all
[root@controller ~]# yum makecache
[root@controller ~]# yum update
```
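Optionally, confirm the configured repositories are visible before moving on (this assumes the node can reach the archive mirror above):

```bash
[root@controller ~]# dnf repolist
```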
1.2.2 Configuring Hostnames and Their Mappings
Set the hostname on each of the three nodes; controller is shown as an example:

```bash
[root@controller ~]# hostnamectl set-hostname controller
[root@controller ~]# bash
```
Edit the /etc/hosts file on every node and add the following entries:

```
[root@controller ~]# cat /etc/hosts
192.168.213.130 controller
192.168.213.131 compute
192.168.213.132 storage
```
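As a quick optional sanity check, verify that every hostname resolves and is reachable once all three VMs are up; this assumes the hosts entries above are in place on the node you run it from:

```bash
# Each host should answer two pings; failures point at /etc/hosts or network settings
for host in controller compute storage; do
    ping -c 2 "$host"
done
```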
1.2.3 Time Synchronization
A cluster requires every node's clock to agree, which is normally guaranteed by time synchronization software; here we use chrony.
Controller node
(1) Install the service

```bash
[root@controller ~]# dnf install chrony
```
(2) Edit the /etc/chrony.conf configuration file and add the following:

```
pool ntp.aliyun.com iburst
# Which IPs are allowed to synchronize time from this node
allow 192.168.213.0/24
```
(3) Restart the service

```bash
[root@controller ~]# systemctl restart chronyd
```
Other nodes
(1) Install the service

```bash
[root@storage ~]# dnf install chrony
```
(2) Edit the /etc/chrony.conf configuration file and add the content below. At the same time, comment out the `pool pool.ntp.org iburst` line so the node does not synchronize from the public internet:

```
[root@compute ~]# vi /etc/chrony.conf
# 192.168.213.130 is the controller's IP, meaning time is fetched from that machine;
# fill in your own controller node IP here
server 192.168.213.130 iburst
```
(3) Restart the service

```bash
[root@compute ~]# systemctl restart chronyd
```
(4) After configuration, check the result by running the following on each non-controller node:

```bash
[root@compute ~]# chronyc sources
```

A result similar to the following indicates the node is successfully synchronizing time from controller.
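A representative example of the output (names, offsets, and reach values will differ in your environment); the `^*` marker means chrony has selected controller as the current synchronization source:

```
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6   377    20    +12us[ +30us] +/-  41ms
```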
1.2.4 Installing the Database
The database is installed on the controller node; here we use MariaDB.
(1) Install the packages

```bash
[root@controller ~]# dnf install mysql-config mariadb mariadb-server python3-PyMySQL
```
(2) Create a new configuration file /etc/my.cnf.d/openstack.cnf with the following content:

```
[root@controller ~]# cat /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.213.130
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```
(3) Start the service

```bash
[root@controller ~]# systemctl start mariadb
```
(4) Initialize the database

```
[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.

# Since MariaDB was just initialized, simply press Enter here
Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

# Answer n here, as the prompt suggests
Switch to unix_socket authentication [Y/n] n
 ... skipping.

You already have your root account protected, so you can safely answer 'n'.

# Answer y to change the password
Change the root password? [Y/n] y
New password: 000000        # the password is set to six zeros here; choose your own if you prefer
Re-enter new password: 000000
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

# Answer y to remove the anonymous users
Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

# Answer y to disable remote root login
Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

# Answer y to remove the test database
Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

# Answer y to reload the privilege tables
Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
```
Verify: using the password you just set, check that you can log in to MariaDB.

```bash
[root@controller ~]# mysql -uroot -p
```
1.2.5 Installing the Message Queue
The message queue is installed on the controller node; here we use RabbitMQ.
(1) Install the packages

```bash
[root@controller ~]# dnf install rabbitmq-server
```
(2) Start the service

```bash
[root@controller ~]# systemctl start rabbitmq-server
```
(3) Create the openstack user. Its password (000000 below, the value the official docs call RABBIT_PASS) is what the OpenStack services use to log in to the message queue, and it must match the configuration of each service later.

```bash
[root@controller ~]# rabbitmqctl add_user openstack 000000
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
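Optionally, confirm that the user exists and has the expected permissions on the default virtual host:

```bash
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions
```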
1.2.6 Installing the Caching Service
The caching service is installed on the controller node; here we use Memcached.
(1) Install the packages

```bash
[root@controller ~]# dnf install memcached python3-memcached
```
(2) Edit the configuration file /etc/sysconfig/memcached:

```
OPTIONS="-l 127.0.0.1,::1,controller"
```
(3) Start the service

```bash
[root@controller ~]# systemctl start memcached
```
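Optionally, confirm that memcached is now listening on its default port 11211:

```bash
[root@controller ~]# ss -tlnp | grep 11211
```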
Task 1.3 Deploying the Services
1.3.1 Keystone
Keystone is OpenStack's Identity Service. It manages authentication and authorization for users, roles, projects (tenants), and domains. Keystone is one of OpenStack's core components; every other OpenStack service relies on it for user authentication and authorization, so it must be installed.
Controller node
(1) Create the keystone database and grant privileges
```
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY '000000';
```
(2) Install the packages

```bash
[root@controller ~]# dnf install openstack-keystone httpd mod_wsgi
```
(3) Configure Keystone

```
[root@controller ~]# vi /etc/keystone/keystone.conf
[database]
# Configure the database connection
connection = mysql+pymysql://keystone:000000@controller/keystone

[token]
# Configure the token provider
provider = fernet
```
(4) Synchronize the database

```bash
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
```
(5) Initialize the Fernet key repositories

```bash
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```
(6) Bootstrap the identity service

```bash
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```
(7) Configure the Apache HTTP server
Open the httpd.conf file and configure it:

```
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
# Modify the following directive; add it if it is not present
ServerName controller
```
Create a symbolic link:

```bash
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
```
(8) Start the Apache HTTP service

```bash
[root@controller ~]# systemctl enable httpd.service
[root@controller ~]# systemctl start httpd.service
[root@controller ~]# systemctl status httpd.service
```
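Once httpd is running, Keystone answers on port 5000. An optional quick check is to request the version document, which should return a JSON description of the v3 API:

```bash
[root@controller ~]# curl http://controller:5000/v3
```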
(9) Create the environment variable file

```bash
[root@controller ~]# cat << EOF >> ~/.admin-openrc
> export OS_PROJECT_DOMAIN_NAME=Default
> export OS_USER_DOMAIN_NAME=Default
> export OS_PROJECT_NAME=admin
> export OS_USERNAME=admin
> export OS_PASSWORD=000000
> export OS_AUTH_URL=http://controller:5000/v3
> export OS_IDENTITY_API_VERSION=3
> export OS_IMAGE_API_VERSION=2
> EOF
```
(10) Create domains, projects, users, and roles in turn
First install python3-openstackclient:

```bash
[root@controller ~]# dnf install python3-openstackclient
```
Source the environment variables and verify them:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
```
(11) Create the service project (the default domain was already created during keystone-manage bootstrap):

```bash
[root@controller ~]# openstack domain create --description "An Example Domain" example
[root@controller ~]# openstack project create --domain default --description "Service Project" service
```
(12) Create a non-admin project myproject, a user myuser, and a role myrole:

```bash
[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject
[root@controller ~]# openstack user create --domain default --password-prompt myuser
User Password:
Repeat User Password:
[root@controller ~]# openstack role create myrole
```
(13) Assign the role myrole to user myuser on project myproject, then verify that the assignment succeeded:

```bash
[root@controller ~]# openstack role add --project myproject --user myuser myrole
[root@controller ~]# openstack role assignment list --project myproject --user myuser
```
(14) Verify
Unset the temporary environment variables OS_AUTH_URL and OS_PASSWORD:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
```
Request a token for the admin user:

```bash
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name admin --os-username admin token issue
Password: 000000
```
Request a token for the myuser user:

```bash
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name myproject --os-username myuser token issue
Password: 000000
```
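For convenience, you can also keep a credentials file for myuser that mirrors ~/.admin-openrc. This is a sketch assuming myuser's password was set to 000000 at the prompt above:

```bash
[root@controller ~]# cat << EOF >> ~/.demo-openrc
> export OS_PROJECT_DOMAIN_NAME=Default
> export OS_USER_DOMAIN_NAME=Default
> export OS_PROJECT_NAME=myproject
> export OS_USERNAME=myuser
> export OS_PASSWORD=000000
> export OS_AUTH_URL=http://controller:5000/v3
> export OS_IDENTITY_API_VERSION=3
> export OS_IMAGE_API_VERSION=2
> EOF
```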
1.3.2 Glance
Glance is OpenStack's Image Service, responsible for managing and storing virtual machine images. It lets users upload, download, delete, and query VM images, and supports multiple image formats (such as QCOW2, RAW, and VMDK). Glance supplies the boot images for virtual machines and is a core dependency of the compute service (Nova), so it must be installed.
Controller node
(1) Create the glance database and grant privileges

```
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> exit
```
(2) Initialize the glance resource objects
Source the environment variables and verify them:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
```
When creating the user, the command line prompts for a password; enter one of your own choosing.

```
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:000000
Repeat User Password:000000
```
Add the glance user to the service project with the admin role:

```bash
[root@controller ~]# openstack role add --project service --user glance admin
```
(3) Create the glance service entity

```bash
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
```
(4) Create the glance API endpoints

```bash
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
```
(5) Install the packages

```bash
[root@controller ~]# dnf install openstack-glance
```
(6) Edit the glance configuration file

```
[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 000000

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```
(7) Synchronize the database

```bash
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
```
(8) Start the service

```bash
[root@controller ~]# systemctl enable openstack-glance-api.service
[root@controller ~]# systemctl start openstack-glance-api.service
```
(9) Verify
Source the environment variables and verify them:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
```
Download an image. For x86_64 hosts:

```bash
[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Or, for aarch64 hosts, download this image instead:

```bash
[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
```

Upload the image that matches your host architecture to the Image service (shown here with the x86_64 image):

```bash
[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --file cirros-0.4.0-x86_64-disk.img --public cirros
```
Confirm the image upload and verify its attributes:

```bash
[root@controller ~]# openstack image list
```
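Optionally, inspect the uploaded image's attributes in detail; its status should be active:

```bash
[root@controller ~]# openstack image show cirros
```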
1.3.3 Placement
Placement is a core OpenStack service responsible for tracking and allocating resources. It is an essential part of the compute service (Nova), managing compute node resource inventories (such as CPU, memory, and storage) to ensure that resources are used effectively and load is balanced.
Controller node
Before installing and configuring the Placement service, create its database, service credentials, and API endpoints.
(1) Create the database
Access the database service as the root user:

```bash
[root@controller ~]# mysql -u root -p
```
(2) Create the placement database and grant database access

```
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> exit
```
(3) Configure the user and endpoints
Source the admin credentials to gain admin CLI privileges:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
```
Create the placement user and set its password:

```
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:000000
Repeat User Password:000000
```
Add the placement user to the service project with the admin role:

```bash
[root@controller ~]# openstack role add --project service --user placement admin
```
Create the placement service entity:

```bash
[root@controller ~]# openstack service create --name placement \
  --description "Placement API" placement
```
Create the Placement API endpoints:

```bash
[root@controller ~]# openstack endpoint create --region RegionOne \
  placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne \
  placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne \
  placement admin http://controller:8778
```
(4) Install and configure the components
Install the packages:

```bash
[root@controller ~]# dnf install openstack-placement-api
```
Edit the /etc/placement/placement.conf configuration file and complete the following steps:

```bash
[root@controller ~]# vi /etc/placement/placement.conf
```
In the [placement_database] section, configure the database connection:

```
[placement_database]
connection = mysql+pymysql://placement:000000@controller/placement
```
In the [api] and [keystone_authtoken] sections, configure access to the identity service:

```
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 000000
```
Synchronize the database to populate the placement database:

```bash
[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement
```
(5) Start the service
Restart the httpd service:

```bash
[root@controller ~]# systemctl restart httpd
```
(6) Verify
Source the admin credentials to gain admin CLI privileges:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
```
Run the status check:

```bash
[root@controller ~]# placement-status upgrade check
```
Here the "Policy File JSON to YAML Migration" check reports Failure. This is because JSON-formatted policy files have been deprecated in Placement since the Wallaby release. Following the hint, you can use the oslopolicy-convert-json-to-yaml tool to convert the existing JSON policy file to YAML format:

```bash
[root@controller ~]# oslopolicy-convert-json-to-yaml --namespace placement \
  --policy-file /etc/placement/policy.json \
  --output-file /etc/placement/policy.yaml
[root@controller ~]# mv /etc/placement/policy.json{,.bak}
```
Note: in the current environment this issue can be ignored; it does not affect operation.
(7) Run commands against the placement API
Install the osc-placement plugin:

```bash
[root@controller ~]# dnf install python3-osc-placement
```
List the available resource classes and traits:

```bash
[root@controller ~]# openstack --os-placement-api-version 1.2 resource class list --sort-column name
[root@controller ~]# openstack --os-placement-api-version 1.6 trait list --sort-column name
```
1.3.4 Nova
Nova is one of OpenStack's core components, responsible for managing the life cycle of virtual machine instances: creating, scheduling, starting, stopping, rebooting, and deleting them. Nova relies on other OpenStack components (Keystone for authentication, Glance for image management, Neutron for networking, and so on) to do its work.
Perform the following operations on the controller node:
(1) Create the databases
Access the database service as the root user:

```bash
[root@controller ~]# mysql -u root -p
```
Create the nova_api, nova, and nova_cell0 databases:

```
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
```
Grant database access:

```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY '000000';
MariaDB [(none)]> exit
```
(2) Configure the user and endpoints
Source the admin credentials to gain admin CLI privileges:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
```
Create the nova user and set its password:

```
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:000000
Repeat User Password:000000
```
Add the nova user to the service project with the admin role:

```bash
[root@controller ~]# openstack role add --project service --user nova admin
```
Create the nova service entity:

```bash
[root@controller ~]# openstack service create --name nova \
  --description "OpenStack Compute" compute
```
Create the Nova API endpoints:

```bash
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
```
(3) Install and configure the components
Install the packages:

```bash
[root@controller ~]# dnf install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler
```
Edit the /etc/nova/nova.conf configuration file and complete the following steps:

```bash
[root@controller ~]# vi /etc/nova/nova.conf
```
In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue access, set my_ip to the controller node's management IP, and explicitly define log_dir:

```
[DEFAULT]
enabled_apis = osapi_compute,metadata
# 000000 is the password set earlier for the RabbitMQ openstack user
transport_url = rabbit://openstack:000000@controller:5672/
my_ip = 192.168.213.130
log_dir = /var/log/nova
```
In the [api_database] and [database] sections, configure the database connections:

```
[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova
```
In the [api] and [keystone_authtoken] sections, configure access to the identity service:

```
[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 000000
```
In the [vnc] section, enable and configure the remote console access:

```
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
```
In the [glance] section, configure the image service API address:

```
[glance]
api_servers = http://controller:9292
```
In the [oslo_concurrency] section, configure the lock path:

```
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```
In the [placement] section, configure access to the placement service:

```
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000
```
(4) Synchronize the databases
Synchronize the nova-api database:

```bash
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
```
Register the cell0 database:

```bash
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
```
Create the cell1 cell:

```bash
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
```
Synchronize the nova database:

```bash
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
```
Verify that cell0 and cell1 are registered correctly:

```bash
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
```
(5) Start the services

```bash
[root@controller ~]# systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
[root@controller ~]# systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
```
Compute node
(1) Install the packages

```bash
[root@compute ~]# dnf install openstack-nova-compute
```
(2) Edit the /etc/nova/nova.conf configuration file

```bash
[root@compute ~]# vi /etc/nova/nova.conf
```
In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue access, set my_ip to the compute node's management IP, and explicitly define compute_driver, instances_path, and log_dir:

```
[DEFAULT]
enabled_apis = osapi_compute,metadata
# 000000 is the password set earlier for the RabbitMQ openstack user
transport_url = rabbit://openstack:000000@controller:5672/
# This compute node's management IP in the environment used here
my_ip = 192.168.213.131
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
log_dir = /var/log/nova
```
In the [api] and [keystone_authtoken] sections, configure access to the identity service:

```
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 000000
```
In the [vnc] section, enable and configure the remote console access:

```
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
```
In the [glance] section, configure the image service API address:

```
[glance]
api_servers = http://controller:9292
```
In the [oslo_concurrency] section, configure the lock path:

```
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```
In the [placement] section, configure access to the placement service:

```
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000
```
(3) Check whether the compute node supports VM hardware acceleration (x86_64)
When the processor is x86_64, run the following command to check for hardware acceleration support:

```bash
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
```
If the return value is 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM. Edit the [libvirt] section of /etc/nova/nova.conf:

```
[root@compute ~]# vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu
```
If the return value is 1 or greater, hardware acceleration is supported and no extra configuration is needed.
Check whether the compute node supports VM hardware acceleration (aarch64)
When the processor is aarch64, run the following command to check for hardware acceleration support:

```bash
[root@compute ~]# virt-host-validate
```
This command is provided by libvirt, which at this point should already have been installed as a dependency of openstack-nova-compute, so it is available in the environment.
If FAIL is shown, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM.

```
QEMU: Checking if device /dev/kvm exists: FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
```
Edit the [libvirt] section of /etc/nova/nova.conf:

```
[root@compute ~]# vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu
```
If PASS is shown, hardware acceleration is supported and no extra configuration is needed.

```
QEMU: Checking if device /dev/kvm exists: PASS
```
Configure qemu (aarch64 only)
Perform this step only when the processor is aarch64.
Edit /etc/libvirt/qemu.conf:

```
[root@compute ~]# vi /etc/libvirt/qemu.conf
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
  /usr/share/AAVMF/AAVMF_VARS.fd", \
  "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
  /usr/share/edk2/aarch64/vars-template-pflash.raw"]
```
Edit /etc/qemu/firmware/edk2-aarch64.json:

```
[root@compute ~]# vi /etc/qemu/firmware/edk2-aarch64.json
{
    "description": "UEFI firmware for ARM64 virtual machines",
    "interface-types": [ "uefi" ],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "aarch64",
            "machines": [ "virt-*" ]
        }
    ],
    "features": [ ],
    "tags": [ ]
}
```
(4) Start the services and check their status

```bash
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl status openstack-nova-compute libvirtd
```
Controller node
(1) Add the compute node to the OpenStack cluster
Source the admin credentials to gain admin CLI privileges:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
```
Confirm that the nova-compute service is recognized in the database:

```bash
[root@controller ~]# openstack compute service list --service nova-compute
```
Discover the compute node and add it to the cell database:

```bash
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
```
(2) Verify
List the service components to verify that each process started and registered successfully:

```bash
[root@controller ~]# openstack compute service list
```
List the API endpoints in the identity service to verify connectivity with it:

```bash
[root@controller ~]# openstack catalog list
```
List the images in the image service to verify connectivity with it:

```bash
[root@controller ~]# openstack image list
```
Check that the cells are functioning correctly and that the other prerequisites are in place:

```bash
[root@controller ~]# nova-status upgrade check
```
1.3.5 Neutron
Neutron is OpenStack's networking service, providing network connectivity and IP address management for the OpenStack environment. It lets users create and manage virtual networks, subnets, routers, security groups, and other network resources, supplying network functionality to virtual machines (VMs).
Controller node
(1) Create the neutron database and grant privileges
```
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.011 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.002 sec)
MariaDB [(none)]> exit;
```
Set the environment variables:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
```
(2) Create the user and service

```
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:000000
Repeat User Password:000000
[root@controller ~]# openstack role add --project service --user neutron admin
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
```
Create the Neutron API endpoints:

```bash
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
```
(3) Install the packages

```bash
[root@controller ~]# dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
```
(4) Configure Neutron
Edit /etc/neutron/neutron.conf:

```
[root@controller ~]# vi /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:000000@controller/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
# 000000 is the password set earlier for the RabbitMQ openstack user
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 000000

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```
Configure ML2. The details can be adjusted to your needs; here we use a provider network with the Linux bridge mechanism.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

```
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

```
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
# ens33 is the provider network interface; use your own interface name
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
local_ip = 192.168.213.130
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
Configure the Layer 3 agent.
Edit /etc/neutron/l3_agent.ini:

```
[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
```
Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

```
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```
Configure the metadata agent.
Edit /etc/neutron/metadata_agent.ini. Replace METADATA_SECRET with a secret of your own choosing; it must match the value configured in nova.conf below:

```
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```
Configure the nova service to use neutron. Edit /etc/nova/nova.conf:

```
[root@controller ~]# vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
```
Create a symbolic link for /etc/neutron/plugin.ini:

```bash
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
(5) Synchronize the database

```bash
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```
(6) Restart the nova API service

```bash
[root@controller ~]# systemctl restart openstack-nova-api
```
(7) Start the network services

```bash
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
```
Compute node
(1) Install the packages

```bash
[root@compute ~]# dnf install openstack-neutron-linuxbridge ebtables ipset -y
```
(2) Configure Neutron
Edit /etc/neutron/neutron.conf:

```
[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
# 000000 is the password set earlier for the RabbitMQ openstack user
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

```
[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
# ens33 is the provider network interface; use your own interface name
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
# This compute node's management IP in the environment used here
local_ip = 192.168.213.131
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
Configure the nova compute service to use neutron. Edit /etc/nova/nova.conf:

```
[root@compute ~]# vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
```
Restart the nova-compute service:

```bash
[root@compute ~]# systemctl restart openstack-nova-compute.service
```
Start the Neutron Linux bridge agent service:

```bash
[root@compute ~]# systemctl enable neutron-linuxbridge-agent
[root@compute ~]# systemctl start neutron-linuxbridge-agent
[root@compute ~]# systemctl status neutron-linuxbridge-agent
```
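Back on the controller, you can optionally verify that all agents registered and report as alive; with this topology that should include the Linux bridge agents on both nodes plus the DHCP, metadata, and L3 agents:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# openstack network agent list
```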
1.3.6 Cinder
"Cinder" 是 OpenStack 项目中的一个核心组件,负责块存储(Block Storage)服务。它是 OpenStack 的存储服务模块,允许用户创建和管理持久化的块存储卷(volumes),这些卷可以附加到虚拟机(VMs)上,作为虚拟机的存储设备
Controller节点
(1)创建cinder数据库
```
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> exit
```
(2) Initialize the Keystone resource objects

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt cinder
[root@controller ~]# openstack role add --project service --user cinder admin
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```
```bash
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
```
(3) Install the packages

```bash
[root@controller ~]# dnf install openstack-cinder-api openstack-cinder-scheduler
```
(4) Edit the cinder configuration file /etc/cinder/cinder.conf

```
[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
# 000000 is the password set earlier for the RabbitMQ openstack user
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
# The controller node's management IP in the environment used here
my_ip = 192.168.213.130

[database]
connection = mysql+pymysql://cinder:000000@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = 000000

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```
(5) Synchronize the database

```bash
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
```
(6) Edit the nova configuration /etc/nova/nova.conf

```
[root@controller ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
```
(7) Start the services

```bash
[root@controller ~]# systemctl restart openstack-nova-api
[root@controller ~]# systemctl start openstack-cinder-api openstack-cinder-scheduler
[root@controller ~]# systemctl status openstack-cinder-api openstack-cinder-scheduler
```
Storage node
The storage node must be prepared in advance with at least one extra disk to serve as cinder's storage backend.
The following assumes the storage node already has one unused disk, with device name /dev/sdb.
(1) Install the packages

```bash
[root@storage ~]# dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \
  openstack-cinder-volume openstack-cinder-backup
```
(2) Configure the LVM volume group

```bash
[root@storage ~]# pvcreate /dev/sdb
[root@storage ~]# vgcreate cinder-volumes /dev/sdb
```
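The upstream Cinder install guide also recommends restricting which devices LVM scans, so that volumes created inside instances are not picked up by the host. A sketch for the devices section of /etc/lvm/lvm.conf, assuming only /dev/sdb backs cinder-volumes (adjust if your system disk uses LVM as well):

```
devices {
    # Accept /dev/sdb, reject every other device
    filter = [ "a/sdb/", "r/.*/" ]
}
```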
(3) Edit the cinder configuration /etc/cinder/cinder.conf

```
[root@storage ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
# 000000 is the password set earlier for the RabbitMQ openstack user
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.213.132
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
# The cinder user's keystone password set on the controller (CINDER_PASS in the official docs)
password = 000000

[database]
# The cinder database password granted on the controller (CINDER_DBPASS in the official docs)
connection = mysql+pymysql://cinder:000000@controller/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
```
(4) Start the services

```bash
[root@storage ~]# systemctl start openstack-cinder-volume target
[root@storage ~]# systemctl start openstack-cinder-backup
```
(5) Verify
On the controller node:

```bash
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# openstack volume service list
```
Create a volume to verify that the configuration is correct:

```bash
[root@controller ~]# openstack volume create --size 1 test-volume
[root@controller ~]# openstack volume list
```
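The volume should reach the available status within a few seconds; an optional targeted check:

```bash
[root@controller ~]# openstack volume show test-volume -c status -f value
```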
1.3.7 Horizon
Horizon is the web front end provided by OpenStack. It lets users control the OpenStack cluster from a web page with mouse operations instead of cumbersome CLI commands. Horizon is generally deployed on the controller node.
(1) Install the packages

```bash
[root@controller ~]# dnf install openstack-dashboard
```
(2) Edit the configuration file /etc/openstack-dashboard/local_settings

```
[root@controller ~]# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
```
(3) Restart the service

```bash
[root@controller ~]# systemctl restart httpd
```
At this point, the Horizon deployment is complete. Open a browser and enter http://192.168.213.130/dashboard to reach the Horizon login page.
Log in with the admin account created earlier (domain Default, user admin, password 000000) and click the "Sign In" button to enter the Dashboard.