1. Official documentation
OpenStack Installation Guide: https://docs.openstack.org/install-guide/
This installation is performed on Ubuntu 22.04 and largely follows the order of the OpenStack Installation Guide. The main steps are:
- Environment setup (done)
- OpenStack service installation
- keystone installation (done)
- glance installation (done)
- placement installation (done)
- nova installation (done)
- neutron installation (covered in this document) ◄──
- Launch an instance
Following the minimal deployment for OpenStack Yoga (Minimal deployment for Yoga), the required core services are installed in order; this document covers the neutron service.
2. Networking service overview
OpenStack Networking (also known as Neutron) is the OpenStack component responsible for networking. It allows you to create interface devices managed by other OpenStack services and attach them to networks (original: "OpenStack Networking (neutron) allows you to create and attach interface devices managed by other OpenStack services to networks."). Different plug-ins can be used to accommodate different networking equipment and software, which gives the OpenStack architecture and deployment flexibility.
Specifically:
- Creating and attaching interface devices: Neutron lets you create virtual interfaces (such as virtual NICs) and attach them to virtual machines or other virtual devices so that they can connect to virtual networks.
- Managed by other OpenStack services: in an OpenStack environment, different services may need to manage network interfaces. For example, the Compute service (Nova) may create and manage a VM's network interfaces, while Neutron handles the network configuration and routing.
- Plug-in based implementation: Neutron's design allows its functionality to be extended through plug-ins, which can be developed by third parties to support different networking hardware and software.
- Adapting to different networking equipment and software: by using different plug-ins, Neutron can fit a wide range of network environments, from traditional physical network devices to modern software-defined networking (SDN) solutions.
- Flexibility: this plug-in architecture gives the OpenStack architecture and deployment flexibility. Users can choose the networking plug-in that matches their requirements for specific features and performance.
In short, OpenStack Networking (Neutron) provides an extensible networking service that supports a variety of networking technologies and devices through plug-ins, allowing OpenStack to be deployed and run flexibly in many network environments.
OpenStack Networking (Neutron) includes the following components:
- neutron-server: accepts API requests and routes them to the appropriate OpenStack Networking plug-in for processing.
- OpenStack Networking plug-ins and agents: plug and unplug ports, create networks and subnets, and provide IP addressing. The plug-ins and agents differ depending on the vendor and technologies used in a particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and VMware NSX products.
- Common agents: the L3 (layer-3) agent, the DHCP (dynamic host IP addressing) agent, and plug-in agents.
- Messaging queue: used by most OpenStack Networking installations to route information between the neutron-server and the various agents. It also acts as a database to store the networking state of particular plug-ins.
OpenStack Networking mainly interacts with OpenStack Compute to provide networks and connectivity for its instances.
3. Networking (neutron) concepts
Neutron's scope covers the entire Virtual Network Infrastructure (VNI) plus the part where virtual networks attach to the physical network (which can be thought of as the access layer of the physical network). With Neutron you can build complete network topologies inside a project (tenant), including security services such as firewalls and VPNs.
- Networks, subnets, and routers: Neutron provides object abstractions for networks, subnets, and routers. These abstractions behave much like their physical counterparts: networks contain subnets, and routers route traffic between subnets and networks.
- External network: every Neutron setup has at least one external network. Unlike the other, software-defined networks, the external network represents a slice of the real, physically reachable network. IP addresses on the external network are accessible from outside; internal (virtual) networks communicate with the outside world through the external network.
- Internal networks: in addition to external networks, any Neutron setup has one or more internal networks. These software-defined networks connect directly to the VMs. Only VMs on the same internal network, or on subnets connected through interfaces to the same router, can reach the VMs on that network directly.
- Routers and network access: for the external network to reach the VMs, and for the VMs to reach the external network, routers are needed between the networks. A router connects to the external network (i.e., it can route traffic toward the external network) and at the same time connects to internal networks. As with a physical router, subnets attached to the same router can reach VMs on each other's subnets, and VMs can reach the external network through the router.
- Ports and IP address allocation: you can allocate IP addresses on the external network to ports on an internal network. Whenever something is connected to a subnet, that connection is called a port (original: "Whenever something is connected to a subnet, that connection is called a port."). You can associate an external-network IP address with a VM's port so that entities on the external network can reach the VM.
- Security groups: Neutron also supports security groups, which let administrators define firewall rules in groups. A VM can belong to one or more security groups, and Neutron applies the rules of those groups to the VM to block or allow ports, port ranges, or traffic types (a brief CLI sketch follows this list).
- Plug-ins: each plug-in that Neutron uses has its own concepts. While not vital to operating the VNI and the OpenStack environment, understanding these concepts can help you set up Neutron. All Neutron installations use a core plug-in and a security group plug-in (or just the No-Op security group plug-in).
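As a concrete (and purely illustrative) sketch of the port and security group abstractions, commands like the following can be run once the installation in this document is complete; demo-secgroup is an assumed name, and the network/subnet/router creation commands are previewed in section 4 below:
# openstack port list --network provider
# openstack security group create demo-secgroup
# openstack security group rule create --proto icmp demo-secgroup
# openstack security group rule create --proto tcp --dst-port 22 demo-secgroup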
4. Network topology
Two network topology options are described.
4.1 Provider networks
When a network is created, a bridge is created for layer-2 connectivity (in practice, bridges are created on demand: when an instance is created, the bridge is created as needed from the instance's information, to save host resources). The bridge is bound to a physical interface (which physical interface it binds to is set through the provider network mapping in the neutron configuration files).
VMs in the same subnet communicate over a simple layer-2 path (for example: vm1 - br1 -- br2 -- vm2).
VMs in different subnets communicate through an external router (any device with layer-3 routing capability will do); for example, vm1's gateway is configured on the external router.
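As a preview of what the later sections set up, a flat provider network like this can be created roughly as follows once installation is finished (the physical network name provider must match flat_networks and physical_interface_mappings in the ML2 configuration shown later; the 10.0.30.0/24 range, gateway, and DNS server below are placeholders for this lab, not prescribed values):
# openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
# openstack subnet create --network provider \
  --allocation-pool start=10.0.30.100,end=10.0.30.200 \
  --dns-nameserver 8.8.8.8 --gateway 10.0.30.1 \
  --subnet-range 10.0.30.0/24 provider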
The Provider networks topology is illustrated below:
4.2 Self-service networks
When a network is created, a bridge is created and a VXLAN port is added to it so that layer-2 traffic can be carried across layer 3 via VXLAN; a router is also created for layer-3 connectivity, which greatly increases the flexibility of the network design.
VMs in the same subnet communicate over a layer-2 overlay carried by VXLAN over layer 3 (for example: vm1 -- br3-vxlan -- br4-vxlan-int1 -- vm2). Here, the ens33 interface is used as the VXLAN VTEP address.
VMs in different subnets communicate at layer 3 through a created router (Router2 in the diagram). For example, vm2's gateway is configured on Router2; traffic from vm2 to the outside is first sent to the gateway (Router2), which then performs layer-3 routing and forwards it onward, for example to Router1.
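For comparison, the self-service topology would be built roughly like this. Note that it requires the Self-service networks option (VXLAN, the L3 agent, and service_plugins = router), which this document does not configure; selfservice, router, and 172.16.1.0/24 are placeholder names and ranges:
# openstack network create selfservice
# openstack subnet create --network selfservice \
  --dns-nameserver 8.8.8.8 --gateway 172.16.1.1 \
  --subnet-range 172.16.1.0/24 selfservice
# openstack router create router
# openstack router add subnet router selfservice
# openstack router set router --external-gateway provider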
The Self-service networks topology is illustrated below:
These features will be demonstrated in the test environment as the installation proceeds.
In the installation below, the Provider networks option is configured first, and then an instance is created.
5. Installing the Compute service on the controller node as well
Previously the Compute service was installed only on compute1. For later testing, the Compute service is also installed on the controller node so that instances can be created on the controller too.
1. Install the package.
root@controller:~# apt install nova-compute
2. /etc/nova/nova.conf does not need to be modified (the controller's nova.conf was already configured when the Nova control-plane services were installed).
3. Add the host to the cell database.
root@controller ~(admin/amdin)# openstack compute service list --service nova-compute
+--------------------------------------+--------------+------------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+--------------------------------------+--------------+------------+------+---------+-------+----------------------------+
| c04e53a4-fdb8-4915-9b1a-f5d195e753c4 | nova-compute | compute1 | nova | enabled | up | 2024-09-20T14:39:38.000000 |
| b3d4e71d-088a-4249-8d8f-e6d8528c698d | nova-compute | controller | nova | enabled | up | 2024-09-20T14:39:41.000000 |
+--------------------------------------+--------------+------------+------+---------+-------+----------------------------+
root@controller ~(admin/amdin)# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 8b1967df-7901-42b3-8b03-fc4e884f490d
Checking host mapping for compute host 'controller': 027eb56f-a860-41b8-afa3-91b65f1c8777
Creating host mapping for compute host 'controller': 027eb56f-a860-41b8-afa3-91b65f1c8777
Found 1 unmapped computes in cell: 8b1967df-7901-42b3-8b03-fc4e884f490d
root@controller ~(admin/amdin)# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 8b1967df-7901-42b3-8b03-fc4e884f490d
Found 0 unmapped computes in cell: 8b1967df-7901-42b3-8b03-fc4e884f490d
root@controller ~(admin/amdin)#
root@controller ~(admin/amdin)# openstack service list
+----------------------------------+-----------+-----------+
| ID | Name | Type |
+----------------------------------+-----------+-----------+
| 1b8f162ebcf848ee8bd69bc6b36a8dff | nova | compute |
| 639145725f804482a50d4740b0c79c43 | placement | placement |
| 75fe01049ec648b69e48d200971bf601 | keystone | identity |
| d6a3dadf92e542289c5ebd37e3553cdd | glance | image |
+----------------------------------+-----------+-----------+
root@controller ~(admin/amdin)# openstack compute service list
+--------------------------------------+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+--------------------------------------+----------------+------------+----------+---------+-------+----------------------------+
| b935d869-0102-45c0-8b24-e338c5606890 | nova-scheduler | controller | internal | enabled | up | 2024-09-20T14:42:08.000000 |
| e4929b42-af08-449f-b703-c0fc36c4220b | nova-conductor | controller | internal | enabled | up | 2024-09-20T14:42:04.000000 |
| c04e53a4-fdb8-4915-9b1a-f5d195e753c4 | nova-compute | compute1 | nova | enabled | up | 2024-09-20T14:42:08.000000 |
| b3d4e71d-088a-4249-8d8f-e6d8528c698d | nova-compute | controller | nova | enabled | up | 2024-09-20T14:42:01.000000 |
+--------------------------------------+----------------+------------+----------+---------+-------+----------------------------+
root@controller ~(admin/amdin)#
6. Installing the neutron service (Install and configure for Ubuntu)
6.1 Install and configure controller node
6.1.1 Prerequisites
1. Database setup
root@controller:~# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 185
Server version: 10.6.18-MariaDB-0ubuntu0.22.04.1 Ubuntu 22.04
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
-> IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.002 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
-> IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> quit
Bye
root@controller:~#
2. Keystone setup
Create the neutron user and grant it the admin role in the service project (i.e., authorization).
Create the service entity and API endpoints for the Networking service.
root@controller:~# cat admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='\u@\h \W(admin/amdin)\$ '
root@controller:~#
root@controller:~# source admin-openrc
root@controller ~(admin/amdin)# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | b0eb41c181c04fe8b4bc7ca8e3adbbfc |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
root@controller ~(admin/amdin)#
root@controller ~(admin/amdin)# openstack role add --project service --user neutron admin
root@controller ~(admin/amdin)# openstack service create --name neutron \
> --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 3df6f54ee6174d93bcabce96a06789d1 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
root@controller ~(admin/amdin)#
root@controller ~(admin/amdin)# openstack endpoint create --region RegionOne \
> network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 65b3b976624145db9e0737643e2a4d2b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3df6f54ee6174d93bcabce96a06789d1 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
root@controller ~(admin/amdin)# openstack endpoint create --region RegionOne \
> network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | d01bada60da84d28afb07f28c72fe847 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3df6f54ee6174d93bcabce96a06789d1 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
root@controller ~(admin/amdin)# openstack endpoint create --region RegionOne \
> network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c54af128bb1a47198bd1d831a5663221 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3df6f54ee6174d93bcabce96a06789d1 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
root@controller ~(admin/amdin)#
6.1.2 Networking Option 1: Provider networks
Install the networking components using the Provider networks option first.
1. Install the packages
(run on the controller)
# apt install neutron-server neutron-plugin-ml2 \
neutron-linuxbridge-agent neutron-dhcp-agent \
neutron-metadata-agent
2. vi /etc/neutron/neutron.conf
---
[database]
# connection = sqlite:var/lib/neutron/neutron.sqlite
connection = mysql+pymysql://neutron:openstack@controller/neutron
---
[DEFAULT]
core_plugin = ml2
service_plugins =
---
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
---
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = openstack
---
[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = openstack
---
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3. vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
4. vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
physical_interface_mappings = provider:ens34 maps a provider physical network name to a physical interface. Here provider is the name of the provider network and ens34 is a physical NIC on the server, so all traffic for networks tagged provider is carried over the ens34 interface.
enable_security_group = true enables the security group feature. Security groups are a virtual firewall provided by Neutron to control traffic to and from instances; once enabled, users can define rules that allow or deny specific types of traffic.
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver selects the firewall driver used to enforce the security group rules; here the iptables-based driver is used, meaning Neutron controls instance network access through iptables rules.
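Before committing to a mapping such as provider:ens34, it is worth confirming the interface name that actually exists on each node (a mismatch here is exactly the problem hit on compute1 in the troubleshooting section at the end); either of the following lists the interfaces:
# ip -br link show
# ip a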
Verify that the network bridge filters are enabled:
root@controller:~# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
root@controller:~# sysctl net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
root@controller:~#
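If sysctl reports that these keys do not exist, the br_netfilter kernel module is most likely not loaded. On Ubuntu it can usually be loaded, and made persistent across reboots, as follows (whether this step is needed depends on the kernel configuration):
# modprobe br_netfilter
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables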
5. vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq selects the DHCP driver. DHCP (Dynamic Host Configuration Protocol) automatically assigns IP addresses to devices on a network. Here it is set to Dnsmasq, a lightweight DHCP server that also provides DNS caching and forwarding; Neutron uses Dnsmasq to hand out IP configuration to the VMs on virtual networks.
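Once a DHCP-enabled network exists (created in a later step), the DHCP agent starts one dnsmasq process per network inside a qdhcp network namespace; assuming at least one such network has been created, this can be observed with:
# ip netns list
# pgrep -af dnsmasq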
6.1.3 Configure the metadata agent
root@controller:~# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = openstack
6.1.4 Configure the Compute service to use the Networking service
root@controller:~# vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
service_metadata_proxy = true
metadata_proxy_shared_secret = openstack
6.1.5 Finalize installation
root@controller:~# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron ...
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> kilo
INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225
INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151
...
INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586
INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
OK
root@controller:~#
root@controller:~# service nova-api restart
root@controller:~# service neutron-server restart
root@controller:~# service neutron-linuxbridge-agent restart
root@controller:~# service neutron-dhcp-agent restart
root@controller:~# service neutron-metadata-agent restart
root@controller:~#
6.2 Install and configure compute node
6.2.1 Install the package
root@compute1:~# apt install neutron-linuxbridge-agent
6.2.2 Configure the common component
root@compute1:~# vi /etc/neutron/neutron.conf
[database]
# connection = sqlite:var/lib/neutron/neutron.sqlite
transport_url = rabbit://openstack:openstack@controller    // later found to be misplaced here; see the troubleshooting section below.
---
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = openstack
---
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
---
6.2.3 Networking Option 1: Provider networks
root@compute1:~# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34    // later found that this should be ens35
---
[vxlan]
enable_vxlan = false
---
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
---
Verify that the network bridge filters are enabled:
root@compute1:~# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
root@compute1:~# sysctl net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
root@compute1:~#
6.2.4 Configure the Compute service to use the Networking service
root@compute1:~# vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
6.2.5 Finalize installation
root@compute1:~# service nova-compute restart
root@compute1:~# service neutron-linuxbridge-agent restart
root@compute1:~#
7. Verify operation
root@osclient:~# source admin-openrc
root@osclient ~(admin/amdin)# openstack extension list --network
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name | Alias | Description |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Address group | address-group | Support address group |
| Address scope | address-scope | Address scopes extension. |
| agent | agent | The agent management extension. |
| Agent's Resource View Synced to Placement | agent-resources-synced | Stores success/failure of last sync to Placement |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| Availability Zone | availability_zone | The availability zone extension. |
| Availability Zone Filter Extension | availability_zone_filter | Add filter parameters to AvailabilityZone resource
...
Problems and solutions
1. Checking the network agents, the Linux bridge agent on compute1 does not show up:
root@osclient ~(admin/amdin)# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 4516d406-7b90-4029-93a9-6a7fbe964bc2 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| dd50147c-5a72-4386-9073-a4431c47a3b4 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| fc147e91-1504-4a3c-8709-0665c97b4cb6 | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
root@osclient ~(admin/amdin)#
2. Checking the services, the cleanup service is not running properly:
root@compute1:~# systemctl | grep neutron
neutron-linuxbridge-agent.service loaded active running Openstack Neutron Linux Bridge Agent
neutron-linuxbridge-cleanup.service loaded activating start start OpenStack Neutron Linux bridge cleanup
root@compute1:~#
3. Checking the log shows that ens34 does not exist; after inspection, the interface should be ens35:
root@compute1:/var/log/neutron# tail neutron-linuxbridge-cleanup.log
2024-09-21 03:46:09.365 6145 INFO neutron.common.config [-] /usr/bin/neutron-linuxbridge-cleanup version 20.5.0
2024-09-21 03:46:09.366 6145 INFO neutron.cmd.linuxbridge_cleanup [-] Interface mappings: {'provider': 'ens34'}.
2024-09-21 03:46:09.366 6145 INFO neutron.cmd.linuxbridge_cleanup [-] Bridge mappings: {}.
2024-09-21 03:46:09.366 6145 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/linuxbridge_agent.ini', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpt3t6tqyi/privsep.sock']
2024-09-21 03:46:10.040 6145 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
2024-09-21 03:46:09.932 6168 INFO oslo.privsep.daemon [-] privsep daemon starting
2024-09-21 03:46:09.936 6168 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2024-09-21 03:46:09.937 6168 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
2024-09-21 03:46:09.938 6168 INFO oslo.privsep.daemon [-] privsep daemon running as pid 6168
2024-09-21 03:46:10.527 6145 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface ens34 for physical network provider does not exist. Agent terminated!
root@compute1:/var/log/neutron# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:51:16:68 brd ff:ff:ff:ff:ff:ff
altname enp2s0
inet 10.0.20.12/24 brd 10.0.20.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe51:1668/64 scope link
valid_lft forever preferred_lft forever
3: ens35: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:0c:29:51:16:72 brd ff:ff:ff:ff:ff:ff
altname enp2s3
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:db:70:49 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4. Fix the configuration file:
root@compute1:/var/log/neutron# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens35
5. Restart the services, then check again:
root@compute1:/var/log/neutron# service nova-compute restart
root@compute1:/var/log/neutron# service neutron-linuxbridge-agent restart
root@compute1:/var/log/neutron# systemctl | grep neutron
neutron-linuxbridge-agent.service loaded active running Openstack Neutron Linux Bridge Agent
neutron-linuxbridge-cleanup.service loaded active exited OpenStack Neutron Linux bridge cleanup
root@compute1:/var/log/neutron#
root@compute1:/var/log/neutron# tail neutron-linuxbridge-cleanup.log
2024-09-21 03:50:42.556 8781 INFO neutron.common.config [-] /usr/bin/neutron-linuxbridge-cleanup version 20.5.0
2024-09-21 03:50:42.556 8781 INFO neutron.cmd.linuxbridge_cleanup [-] Interface mappings: {'provider': 'ens35'}.
2024-09-21 03:50:42.556 8781 INFO neutron.cmd.linuxbridge_cleanup [-] Bridge mappings: {}.
2024-09-21 03:50:42.557 8781 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/linuxbridge_agent.ini', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmppao22jjf/privsep.sock']
2024-09-21 03:50:43.285 8781 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
2024-09-21 03:50:43.165 8802 INFO oslo.privsep.daemon [-] privsep daemon starting
2024-09-21 03:50:43.169 8802 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2024-09-21 03:50:43.171 8802 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
2024-09-21 03:50:43.171 8802 INFO oslo.privsep.daemon [-] privsep daemon running as pid 8802
2024-09-21 03:50:43.779 8781 INFO neutron.cmd.linuxbridge_cleanup [-] Linux bridge cleanup completed successfully
6. The bridge agent on compute1 still does not appear:
root@osclient ~(admin/amdin)# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 4516d406-7b90-4029-93a9-6a7fbe964bc2 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| dd50147c-5a72-4386-9073-a4431c47a3b4 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| fc147e91-1504-4a3c-8709-0665c97b4cb6 | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
root@osclient ~(admin/amdin)#
7. Check the log on compute1
root@compute1:/var/log/neutron# tail neutron-linuxbridge-agent.log
2024-09-21 03:53:30.423 8873 ERROR oslo.messaging._drivers.impl_rabbit [req-1c74e7d2-af9b-486d-b776-8e11ab0d3d5f - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 17.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:53:30.424 8873 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 17.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:53:47.453 8873 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 19.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:53:47.453 8873 ERROR oslo.messaging._drivers.impl_rabbit [req-1c74e7d2-af9b-486d-b776-8e11ab0d3d5f - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 19.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:54:06.486 8873 ERROR oslo.messaging._drivers.impl_rabbit [req-1c74e7d2-af9b-486d-b776-8e11ab0d3d5f - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 21.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:54:06.487 8873 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 21.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:54:27.522 8873 ERROR oslo.messaging._drivers.impl_rabbit [req-1c74e7d2-af9b-486d-b776-8e11ab0d3d5f - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 23.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:54:27.522 8873 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 23.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:54:50.560 8873 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 25.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2024-09-21 03:54:50.561 8873 ERROR oslo.messaging._drivers.impl_rabbit [req-1c74e7d2-af9b-486d-b776-8e11ab0d3d5f - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 25.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
root@compute1:/var/log/neutron#
From the log messages, neutron-linuxbridge-agent is trying to connect to the RabbitMQ message broker, but the connection fails with ECONNREFUSED. This usually means that RabbitMQ is not listening on the expected port, or that a network problem is preventing the connection.
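Two quick checks can narrow this down (5672 is RabbitMQ's default AMQP port; nc is assumed to be installed):
root@controller:~# ss -tlnp | grep 5672
root@compute1:~# nc -zv controller 5672
If RabbitMQ is listening on the controller and reachable from compute1, the problem is on the client side. That is the case here: with no transport_url set in [DEFAULT], oslo.messaging falls back to its default of localhost, where no RabbitMQ is running, hence the ECONNREFUSED, as confirmed in the next step.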
8. Checking the configuration reveals the error: the message queue setting was placed under [database] when it should be under [DEFAULT]:
root@compute1:/var/log/neutron# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
transport_url = rabbit://openstack:openstack@controller
9. After restarting the services, everything works normally.
root@compute1:/var/log/neutron# service nova-compute restart
root@compute1:/var/log/neutron# service neutron-linuxbridge-agent restart
root@osclient ~(admin/amdin)# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 4516d406-7b90-4029-93a9-6a7fbe964bc2 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| dd50147c-5a72-4386-9073-a4431c47a3b4 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| f05c0a19-5657-4e12-8f4c-f5ea5dfc7043 | Linux bridge agent | compute1 | None | :-) | UP | neutron-linuxbridge-agent |
| fc147e91-1504-4a3c-8709-0665c97b4cb6 | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
root@osclient ~(admin/amdin)#