All places where we call set_mnt_shared() are guaranteed to have
non-zero ->mnt_group_id - either by explicit test, or by having done
a successful invent_group_ids() covering the same mount since we'd
grabbed namespace_sem.

The opposite combination (non-zero ->mnt_group_id and
!IS_MNT_SHARED()) *is* possible - it means that we have allocated a
group id, but didn't get around to set_mnt_shared() yet; such a state
is transient - by the time we do namespace_unlock(), we must either
do set_mnt_shared() or unroll the group id allocations by
cleanup_group_ids().

Signed-off-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
---
 fs/namespace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/namespace.c b/fs/namespace.c
index ca36c4a6a143..a75438121417 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -2516,7 +2516,7 @@ static int invent_group_ids(struct mount *mnt, bool recurse)
 	struct mount *p;
 
 	for (p = mnt; p; p = recurse ? next_mnt(p, mnt) : NULL) {
-		if (!p->mnt_group_id && !IS_MNT_SHARED(p)) {
+		if (!p->mnt_group_id) {
 			int err = mnt_alloc_group_id(p);
 			if (err) {
 				cleanup_group_ids(mnt, p);
-- 
2.39.5
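
[Editor's illustration, not part of the patch: a minimal userspace
sketch of the allocate-then-commit-or-unroll protocol the commit
message describes. The function names match the kernel's, but the
struct layout, the MNT_SHARED flag value, the ->next list walk
standing in for next_mnt(), and the counter-based id allocator are
all simplified assumptions, not the kernel's actual definitions.]

	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	#define MNT_SHARED 0x1000	/* assumed flag value, for illustration */

	struct mount {
		int mnt_flags;
		int mnt_group_id;	/* 0 means "no peer group id allocated" */
		struct mount *next;	/* stand-in for the next_mnt() tree walk */
	};

	#define IS_MNT_SHARED(m) ((m)->mnt_flags & MNT_SHARED)

	static int next_group_id = 1;	/* stand-in for the kernel's IDA allocator */

	static int mnt_alloc_group_id(struct mount *m)
	{
		m->mnt_group_id = next_group_id++;
		return 0;
	}

	static void set_mnt_shared(struct mount *m)
	{
		/* the invariant: only ever called once a group id exists */
		assert(m->mnt_group_id != 0);
		m->mnt_flags |= MNT_SHARED;
	}

	static void cleanup_group_ids(struct mount *mnt, struct mount *end)
	{
		/* unroll transient allocations: id present, but never shared */
		for (struct mount *p = mnt; p != end; p = p->next)
			if (p->mnt_group_id && !IS_MNT_SHARED(p))
				p->mnt_group_id = 0;
	}

	static int invent_group_ids(struct mount *mnt)
	{
		for (struct mount *p = mnt; p; p = p->next) {
			if (!p->mnt_group_id) {	/* the simplified check from the patch */
				int err = mnt_alloc_group_id(p);
				if (err) {
					cleanup_group_ids(mnt, p);
					return err;
				}
			}
		}
		return 0;
	}

	int main(void)
	{
		struct mount b = { 0 }, a = { .next = &b };

		/* everything below would run with namespace_sem held */
		if (invent_group_ids(&a) == 0) {
			set_mnt_shared(&a);	/* success path: commit */
			set_mnt_shared(&b);
		} else {
			cleanup_group_ids(&a, NULL);	/* failure path: unroll */
		}

		/* by "namespace_unlock()" time the transient state is gone:
		 * IS_MNT_SHARED() again implies a non-zero group id */
		printf("a: shared=%d id=%d\n", !!IS_MNT_SHARED(&a), a.mnt_group_id);
		printf("b: shared=%d id=%d\n", !!IS_MNT_SHARED(&b), b.mnt_group_id);
		return 0;
	}

The point the sketch makes is why the dropped !IS_MNT_SHARED(p) test
was redundant: a mount that is already shared necessarily has a group
id, so !p->mnt_group_id alone already excludes it.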