1. What is udev
FROM AI:
udev is the device management framework introduced with the Linux 2.6 kernel, replacing the earlier devfs. It runs as a daemon in user space rather than in kernel space. Its main job is to dynamically create and manage the device nodes under /dev and to keep them up to date as the hardware attached to the system changes.
udev relies on the information exposed by the sysfs file system to manage device nodes. sysfs is a virtual file system that presents the actual hardware devices and buses in the system as files and directories. By listening to the uevents emitted by the kernel, udev can recognize and react to device additions, removals, and other changes.
udev's workflow is roughly: the kernel detects a new device and creates an entry for it in sysfs; the udev daemon receives the corresponding uevent and configures the device according to its rule files; the rule files define how device nodes are created, or which actions are executed, based on the device's attributes.
udev's rule files live in /etc/udev/rules.d/ and carry the .rules extension. Rule files are read in lexical (ASCII) order of their file names, and every rule that matches the new device is applied to configure it.
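A minimal, purely illustrative sketch of such a rule (the file name 99-usb-disk.rules, the symlink name, and the logger call are assumptions, not taken from any real rule set):

# /etc/udev/rules.d/99-usb-disk.rules  (hypothetical example)
# Match whole USB disks in the block subsystem, add a friendly symlink,
# and log the event via the standard logger utility.
SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_BUS}=="usb", SYMLINK+="my_usb_disk", RUN+="/usr/bin/logger udev added $devnode"

After saving the file, udevadm control --reload makes udev pick it up for subsequent events.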
All in all, udev is a key component of a Linux system: it provides a flexible way to manage device nodes, supports hot-plugged devices, and can be adapted to all kinds of device-management needs through user-defined rules.
In early udev versions the rule files were indeed kept under /lib/udev/rules.d/. On modern distributions that directory is normally reserved for system-level udev rules shipped by packages or hardware vendors to handle common hardware. User-defined rules are recommended to go into /etc/udev/rules.d/ instead, so they are not overwritten on system upgrades and the administrator keeps full control over them.
The rule files under /lib/udev/rules.d/ are loaded and applied when udev starts, but ordinary users normally do not modify them: they are system-level rules designed to make sure the bulk of the hardware is recognized and configured correctly. If you need to add or change rules for a specific hardware setup, do it under /etc/udev/rules.d/.
In short, the files under /lib/udev/rules.d/ are useful, but they serve system-level device management; custom rules belong in /etc/udev/rules.d/.
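One practical consequence of this layout: a file in /etc/udev/rules.d/ with the same name takes precedence over the file in /lib/udev/rules.d/, so a system rule can be overridden, or disabled entirely, without touching /lib. For example (illustrative):

# Mask a system rule file by shadowing it with a symlink to /dev/null
ln -s /dev/null /etc/udev/rules.d/64-md-raid-assembly.rules
udevadm control --reload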
2. udev man page
Chinese translation of the udev manual: https://www.jinbuguo.com/systemd/udev.html
3. Monitoring udev events
- udevadm
udev's management tool, udevadm, can be used to monitor all udev events in the system.
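For example, to watch both the raw kernel uevents and the events udev emits after rule processing, together with their properties:

udevadm monitor --kernel --udev --property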
After assembling a software RAID, unplug all of its member disks, reboot, and then insert a single member disk; the resulting events are shown below.
[curtis@192 ~]$ udevadm monitor --udev
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
UDEV [53.550303] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2 (usb)
UDEV [53.560891] add /module/usb_storage (module)
UDEV [53.561245] add /devices/virtual/workqueue/scsi_tmf_2 (workqueue)
UDEV [53.561664] add /bus/usb/drivers/usb-storage (drivers)
UDEV [53.563384] add /module/uas (module)
UDEV [53.563705] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0 (usb)
UDEV [53.563726] add /bus/usb/drivers/uas (drivers)
UDEV [53.564957] bind /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2 (usb)
UDEV [53.565472] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2 (scsi)
UDEV [53.566353] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/scsi_host/host2 (scsi_host)
UDEV [53.567069] bind /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0 (usb)
UDEV [54.889762] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0 (scsi)
UDEV [54.898268] add /module/sd_mod (module)
UDEV [54.898481] add /class/scsi_disk (class)
UDEV [54.899083] add /bus/scsi/drivers/sd (drivers)
UDEV [54.910075] add /devices/virtual/bdi/8:0 (bdi)
UDEV [54.921349] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0 (scsi)
UDEV [54.922444] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/scsi_device/2:0:0:0 (scsi_device)
UDEV [54.922475] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/scsi_disk/2:0:0:0 (scsi_disk)
UDEV [54.923103] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/bsg/2:0:0:0 (bsg)
UDEV [54.923138] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/scsi_generic/sg1 (scsi_generic)
UDEV [54.923757] bind /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0 (scsi)
UDEV [55.026847] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/block/sda (block)
UDEV [55.046245] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/block/sda/sda4 (block)
UDEV [55.134313] add /devices/virtual/bdi/9:127 (bdi)
UDEV [55.139500] add /devices/virtual/block/md127 (block)
UDEV [55.340275] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/block/sda/sda1 (block)
UDEV [55.381817] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/block/sda/sda2 (block)
UDEV [55.381962] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/block/sda/sda5 (block)
UDEV [55.397764] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb3/3-2/3-2:1.0/host2/target2:0:0/2:0:0:0/block/sda/sda3 (block)
[curtis@192 ~]$ cat /proc/mdstat
Personalities :
md127 : inactive sda1[1](S)
1048512 blocks
unused devices: <none>
Udev events and RAID status after inserting the other member disk:
UDEV [7618.232851] add /devices/virtual/workqueue/scsi_tmf_3 (workqueue)
UDEV [7618.249645] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1 (usb)
UDEV [7618.250829] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0 (usb)
UDEV [7618.251810] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3 (scsi)
UDEV [7618.252344] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/scsi_host/host3 (scsi_host)
UDEV [7618.253155] bind /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0 (usb)
UDEV [7618.254407] bind /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1 (usb)
UDEV [7620.112823] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0 (scsi)
UDEV [7620.113959] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0 (scsi)
UDEV [7620.114844] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/scsi_disk/3:0:0:0 (scsi_disk)
UDEV [7620.115552] bind /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0 (scsi)
UDEV [7620.116381] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/scsi_device/3:0:0:0 (scsi_device)
UDEV [7620.118258] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/scsi_generic/sg2 (scsi_generic)
UDEV [7620.118276] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/bsg/3:0:0:0 (bsg)
UDEV [7620.128576] add /devices/virtual/bdi/8:16 (bdi)
UDEV [7620.186829] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/block/sdb (block)
UDEV [7620.317835] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/block/sdb/sdb4 (block)
UDEV [7620.332122] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/block/sdb/sdb3 (block)
UDEV [7620.376626] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/block/sdb/sdb2 (block)
UDEV [7620.378321] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/block/sdb/sdb5 (block)
UDEV [7620.400099] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/block/sdb/sdb6 (block)
UDEV [7620.407797] add /module/raid1 (module)
UDEV [7620.776956] change /devices/virtual/block/md127 (block)
UDEV [7620.828950] add /devices/pci0000:00/0000:00:17.0/0000:13:00.0/usb4/4-1/4-1:1.0/host3/target3:0:0/3:0:0:0/block/sdb/sdb1 (block)
[root@192 rules.d]# cat /proc/mdstat
Personalities : [raid1]
md127 : active (auto-read-only) raid1 sdb1[0] sda1[1]
1048512 blocks [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
After both disks are inserted, the RAID array is assembled, but its state is auto-read-only. Why is that?
It is related to md-raid-assembly: judging by the assembly action, the default way RAID arrays are put together is incremental assembly.
ACTION=="add|change", IMPORT{program}="/sbin/mdadm --incremental --export $devnode --offroot $env{DEVLINKS}"
Udev rule files related to software RAID:
/lib/udev/rules.d/01-md-raid-creating.rules
/lib/udev/rules.d/63-md-raid-arrays.rules
/lib/udev/rules.d/64-md-raid-assembly.rules
/lib/udev/rules.d/65-md-incremental.rules
/lib/udev/rules.d/69-md-clustered-confirm-device.rules
/lib/udev/rules.d/80-udisks2.rules
These rule files look identical to the udev rules shipped in the source tree of the mdadm management tool.
-rw-rw-r-- 1 curtis curtis 849 Jan 6 13:25 udev-md-clustered-confirm-device.rules
-rw-rw-r-- 1 curtis curtis 2628 Jan 6 13:25 udev-md-raid-arrays.rules
-rw-rw-r-- 1 curtis curtis 1854 Jan 6 13:25 udev-md-raid-assembly.rules
-rw-rw-r-- 1 curtis curtis 321 Jan 6 13:25 udev-md-raid-creating.rules
-rw-rw-r-- 1 curtis curtis 2695 Jan 6 13:25 udev-md-raid-safe-timeouts.rules
So when do these rule files get added to /lib/udev/rules.d/? There are two possibilities, and the package-manager check below tells which one applies.
1. They ship with the system.
2. They are installed together with the mdadm management tool.
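Either way, the package manager can show which package owns a given rule file (the exact command depends on the distribution):

# Debian/Ubuntu-based systems
dpkg -S /lib/udev/rules.d/64-md-raid-assembly.rules
# RPM-based systems
rpm -qf /usr/lib/udev/rules.d/64-md-raid-assembly.rules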
Why does the RAID udev event only take effect once? Unplugging and re-inserting the same disk a second time does not re-trigger the incremental RAID assembly action.
The background process and the udev daemon service:
curtis@curtis:~$ ps aux | grep udev | grep -v grep
root 666 0.0 0.0 26288 7088 ? Ss 13:58 0:00 /lib/systemd/systemd-udevd
curtis@curtis:~$ systemctl status systemd-udevd.service
● systemd-udevd.service - Rule-based Manager for Device Events and Files
Loaded: loaded (/lib/systemd/system/systemd-udevd.service; static)
Active: active (running) since Fri 2024-04-26 13:58:34 UTC; 11min ago
TriggeredBy: ● systemd-udevd-kernel.socket
● systemd-udevd-control.socket
Docs: man:systemd-udevd.service(8)
man:udev(7)
Main PID: 666 (systemd-udevd)
Status: "Processing with 32 children at max"
Tasks: 1
Memory: 33.3M
CPU: 2.809s
CGroup: /system.slice/systemd-udevd.service
└─666 /lib/systemd/systemd-udevd
netlink: systemd-udevd receives kernel uevents over a netlink socket (this is the mechanism behind the systemd-udevd-kernel.socket unit shown above, and the same one udevadm monitor --kernel subscribes to).
How do you debug udev rules gracefully?
// Show all udev information for a disk.
udevadm info --query=all --name=/dev/sda  --> shows all of the disk's information, including the /dev/disk/by-id/xxx symlinks
These symlinks are created by the udev rule file /lib/udev/rules.d/60-persistent-storage.rules.
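A quick way to see which persistent symlinks were actually created for a device:

ls -l /dev/disk/by-id/ /dev/disk/by-path/
udevadm info --query=symlink --name=/dev/sda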
// The /lib/udev directory holds udev-related rules and helper tools.
/lib/udev/scsi_id
Change the logging verbosity of udev rule processing in /etc/udev/udev.conf --> udev_log="debug"
// Is this step mandatory??
Reload the rules: udevadm control --reload
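The log level can also be raised at runtime instead of editing udev.conf (changes to udev.conf only take effect after the daemon is restarted):

# Raise systemd-udevd's log level on the fly (newer versions also accept --log-level=debug)
udevadm control --log-priority=debug
# Or restart the daemon after editing /etc/udev/udev.conf
systemctl restart systemd-udevd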
Simulate the ADD udev rules for a device: udevadm test /block/sda
udevadm test [options] [devpath]
Simulate a udev event run for the given device, and print debug output.
-a, --action=string
The action string.
The -a option specifies the type of action to simulate; the default is add, and remove and others can be chosen as well.
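Note that udevadm test only simulates rule processing and does not execute the programs listed in RUN keys. To actually replay an event so that RUN commands really execute, udevadm trigger can be used:

# Ask the kernel to re-emit an "add" uevent for sda; the rules, including RUN, then really run
udevadm trigger --action=add --sysname-match=sda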
View udev information for a device:
root@curtis:/home/curtis# udevadm info /dev/sdd
P: /devices/pci0000:00/0000:00:10.0/host32/target32:0:3/32:0:3:0/block/sdd
N: sdd
L: 0
S: disk/by-path/pci-0000:00:10.0-scsi-0:0:3:0
E: DEVPATH=/devices/pci0000:00/0000:00:10.0/host32/target32:0:3/32:0:3:0/block/sdd
E: DEVNAME=/dev/sdd
E: DEVTYPE=disk
E: DISKSEQ=15
E: MAJOR=8
E: MINOR=48
E: SUBSYSTEM=block
E: USEC_INITIALIZED=1973170
E: SCSI_TPGS=0
E: SCSI_TYPE=disk
E: SCSI_VENDOR=VMware,
E: SCSI_VENDOR_ENC=VMware,\x20
E: SCSI_MODEL=VMware_Virtual_S
E: SCSI_MODEL_ENC=VMware\x20Virtual\x20S
E: SCSI_REVISION=1.0
E: ID_SCSI=1
E: ID_VENDOR=VMware_
E: ID_VENDOR_ENC=VMware\x2c\x20
E: ID_MODEL=VMware_Virtual_S
E: ID_MODEL_ENC=VMware\x20Virtual\x20S
E: ID_REVISION=1.0
E: ID_TYPE=disk
E: MPATH_SBIN_PATH=/sbin
E: DM_MULTIPATH_DEVICE_PATH=0
E: ID_BUS=scsi
E: ID_PATH=pci-0000:00:10.0-scsi-0:0:3:0
E: ID_PATH_TAG=pci-0000_00_10_0-scsi-0_0_3_0
E: DEVLINKS=/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:3:0
E: TAGS=:systemd:
E: CURRENT_TAGS=:systemd:
On Debian, RAID arrays are brought up through udev rules: the system starts the udev daemon service, and that service performs the corresponding actions according to the rules.
[ OK ] Started udev Kernel Device Manager.
[ 17.897547] systemd-journald[858]: Received request to flush runtime journal from PID 1
[ OK ] Started Flush Journal to Persistent Storage.
Starting Create Volatile Files and Directories...
[ 21.482139] virtio_net virtio1 enp0s4: renamed from eth0
[ 22.345006] hwclock-set (1261) used greatest stack depth: 11320 bytes left
[ OK ] Started udev Coldplug all Devices.
[ OK ] Started Create Volatile Files and Directories.
[ 24.458455] md/raid1:md0: active with 2 out of 2 mirrors
curtis@curtis-FP650:~$ ps aux | grep udev | grep -v grep
root 361 0.0 0.0 26996 6912 ? Ss 20:40 0:00 /lib/systemd/systemd-udevd
https://blog.csdn.net/u014674293/article/details/114934035
// 64-md-raid-assembly.rules
curtis@raspberrypi:/usr/lib/udev/rules.d $ cat 64-md-raid-assembly.rules
# do not edit this file, it will be overwritten on update
# Don't process any events if anaconda is running as anaconda brings up
# raid devices manually
# Environment match key: if the ANACONDA environment variable matches "?*" (i.e. it is set), jump to md_inc_end
ENV{ANACONDA}=="?*", GOTO="md_inc_end"
# assemble md arrays
# If the device's subsystem is not block, jump to md_inc_end
SUBSYSTEM!="block", GOTO="md_inc_end"
# skip non-initialized devices
ENV{SYSTEMD_READY}=="0", GOTO="md_inc_end"
# handle potential components of arrays (the ones supported by md)
ENV{ID_FS_TYPE}=="linux_raid_member", GOTO="md_inc"
# "noiswmd" on kernel command line stops mdadm from handling
# "isw" (aka IMSM - Intel RAID).
# "nodmraid" on kernel command line stops mdadm from handling
# "isw" or "ddf".
IMPORT{cmdline}="noiswmd"
IMPORT{cmdline}="nodmraid"
ENV{nodmraid}=="?*", GOTO="md_inc_end"
ENV{ID_FS_TYPE}=="ddf_raid_member", GOTO="md_inc"
ENV{noiswmd}=="?*", GOTO="md_inc_end"
ENV{ID_FS_TYPE}=="isw_raid_member", ACTION!="change", GOTO="md_inc"
GOTO="md_inc_end"
LABEL="md_inc"
# remember you can limit what gets auto/incrementally assembled by
# mdadm.conf(5)'s 'AUTO' and selectively whitelist using 'ARRAY'
ACTION=="add|change", IMPORT{program}="/sbin/mdadm --incremental --export $devnode --offroot $env{DEVLINKS}"
ACTION=="add|change", ENV{MD_STARTED}=="*unsafe*", ENV{MD_FOREIGN}=="no", ENV{SYSTEMD_WANTS}+="mdadm-last-resort@$env{MD_DEVICE}.timer"
ACTION=="remove", ENV{ID_PATH}=="?*", RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
ACTION=="remove", ENV{ID_PATH}!="?*", RUN+="/sbin/mdadm -If $name"
LABEL="md_inc_end"
- LABEL works like a label inside a C function: it is simply the jump target for GOTO.
4. Environment variables in udev rules
udev has its own environment. By running a script from a udev rule you can print the $PATH that is in effect inside udev (a sketch of such a rule and script follows the two printouts below).
udev $PATH
/usr/local/bin:/usr/bin
The system's $PATH:
/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/local/sbin
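A sketch of how such a printout can be obtained (the rule file name, script path, and output file are illustrative assumptions):

# /etc/udev/rules.d/99-dump-env.rules  (hypothetical)
ACTION=="add", SUBSYSTEM=="block", RUN+="/usr/local/bin/dump_udev_env.sh"

# /usr/local/bin/dump_udev_env.sh  (hypothetical)
#!/bin/sh
# Record the PATH (and the rest of the environment) udev hands to RUN programs
echo "udev PATH: $PATH" >> /tmp/udev_env.log
env >> /tmp/udev_env.log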
So when a shell script executed from a udev rule calls system commands, those commands are looked up in the PATH udev provides. If the binary of the command you need is not in udev's default PATH, execution fails.
This can be worked around by setting PATH in the rule that issues the RUN command:
ENV{PATH}="/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/local/sbin", RUN+="/path/to/shell/script"
If that still does not solve it, capture the error output of the shell commands inside the script:
#!/bin/sh
# Capture both stdout and stderr of the command so failures inside udev are visible afterwards
result=$(insmod /root/test.ko 2>&1)
echo "$result" >> /var/log.txt