On 7/14/25 9:32 PM, Shin'ichiro Kawasaki wrote:
The test case repeatedly creates and deletes a loop device. This
generates many udev events and makes the following test cases fail. To
avoid these unexpected test case failures, drain the udev events. For
that purpose, introduce the helper function _drain_udev_events(). When
the systemd-udevd service is running, restart it to discard the events
quickly. When the systemd-udevd service is not available, call
"udevadm settle", which takes longer to drain the events.
Link: https://github.com/linux-blktests/blktests/issues/181
Reported-by: Yi Zhang <yi.zhang@xxxxxxxxxx>
Suggested-by: Bart Van Assche <bvanassche@xxxxxxx>
Suggested-by: Daniel Wagner <dwagner@xxxxxxx>
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@xxxxxxx>
---
Changes from v1:
* Added "udevadm settle" in case systemd-udevd.service is not available
* Introduced _drain_udev_events()
 common/rc      | 9 +++++++++
 tests/loop/010 | 5 +++++
 2 files changed, 14 insertions(+)
diff --git a/common/rc b/common/rc
index 72441ab..dfc389f 100644
--- a/common/rc
+++ b/common/rc
@@ -544,6 +544,15 @@ _systemctl_stop() {
 	done
 }
 
+_drain_udev_events() {
+	if command -v systemctl &>/dev/null &&
+	   systemctl is-active --quiet systemd-udevd; then
+		systemctl restart systemd-udevd.service
+	else
+		udevadm settle --timeout=900
+	fi
+}
+
# Run the given command as NORMAL_USER
_run_user() {
su "$NORMAL_USER" -c "$1"
diff --git a/tests/loop/010 b/tests/loop/010
index 309fd8a..b1a4926 100755
--- a/tests/loop/010
+++ b/tests/loop/010
@@ -78,5 +78,10 @@ test() {
 	if _dmesg_since_test_start | grep --quiet "$grep_str"; then
 		echo "Fail"
 	fi
+
+	# The repeated loop device creations and deletions generated so many
+	# udev events. Drain the events to not influence following test cases.
+	_drain_udev_events
+
How about changing the above comment to the following to make it
clearer? "This test generates udev events faster than the rate at
which udevd can process them. Drain the udev events to prevent future
test cases from failing." Anyway:
Reviewed-by: Bart Van Assche <bvanassche@xxxxxxx>