An abnormal filesystem mount issue was found during testing:

disk_container=$(...kata-runtime...io.kubernets.docker.type=container...)
docker_id=$(...kata-runtime...io.katacontainers.disk_share=
           "{"src":"/dev/sdb","dest":"/dev/test"}"...)

${docker} stop "$disk_container"
${docker} exec "$docker_id" mount /dev/test /tmp  --> success!!

When "disk_container" is stopped, the created sda/sdb/sdc disks are already
deleted, but inside "docker_id", /dev/test can still be mounted
successfully.

The reason is that runc calls unshare, which triggers clone_mnt() and
increases the sb->s_active reference count. As long as "docker_id" does
not exit, the superblock keeps that reference. So when mounting, the old
superblock is reused in sget_fc() and the mount succeeds, even though the
actual device no longer exists.

The whole process can be simplified as follows:

mkfs.ext4 -F /dev/sdb
mount /dev/sdb /mnt
mknod /dev/test b 8 16                  # [sdb 8:16]
echo 1 > /sys/block/sdb/device/delete
mount /dev/test /mnt1                   # -> mount success

Therefore, an extra check is needed. Solve this problem by checking
disk_live() in super_s_dev_test().

Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
Link: https://lore.kernel.org/all/20250717091150.2156842-1-wozizhi@xxxxxxxxxx/
Signed-off-by: Zizhi Wo <wozizhi@xxxxxxxxxx>
---
 fs/super.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/fs/super.c b/fs/super.c
index 80418ca8e215..8030fb519eb5 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -1376,8 +1376,16 @@ static int super_s_dev_set(struct super_block *s, struct fs_context *fc)
 
 static int super_s_dev_test(struct super_block *s, struct fs_context *fc)
 {
-	return !(s->s_iflags & SB_I_RETIRED) &&
-	       s->s_dev == *(dev_t *)fc->sget_key;
+	if (s->s_iflags & SB_I_RETIRED)
+		return false;
+
+	if (s->s_dev != *(dev_t *)fc->sget_key)
+		return false;
+
+	if (s->s_bdev && !disk_live(s->s_bdev->bd_disk))
+		return false;
+
+	return true;
 }
 
 /**
-- 
2.39.2
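
For reference, the simplified steps above can be collected into a small
script. This is a minimal reproducer sketch, not part of the patch: it
assumes /dev/sdb is a disposable SCSI disk with major:minor 8:16 (adjust
from lsblk output), that /mnt and /mnt1 are free to use as mount points,
and it passes -t ext4 explicitly so mount(8) does not need to probe the
removed device. Without the patch the last mount is expected to reuse the
stale superblock and succeed; with it, super_s_dev_test() should reject
the match because disk_live() returns false and the mount fails.

#!/bin/sh
# Reproducer sketch for the stale-superblock reuse described above.
set -x

mkfs.ext4 -F /dev/sdb
mkdir -p /mnt /mnt1

# Keep the superblock referenced (in the kata case this reference comes
# from clone_mnt() via unshare; a plain mount is enough here).
mount /dev/sdb /mnt

# Second device node pointing at the same dev_t as sdb (8:16 here).
mknod /dev/test b 8 16

# Remove the underlying SCSI disk while the superblock is still active.
echo 1 > /sys/block/sdb/device/delete

# Unpatched: sget_fc() matches the old superblock and the mount succeeds.
# Patched: super_s_dev_test() sees !disk_live() and refuses the match.
mount -t ext4 /dev/test /mnt1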