Dear Linux RAID team,

I'm encountering an issue with an IMSM-managed RAID configuration using the latest mdadm from upstream, and I'd appreciate any insight or guidance you might have.

System setup:
- Metadata: IMSM
- RAID level: raid1
- Container: imsm0 (/dev/md127)
- Subarrays: /dev/md125 and /dev/md126
- Underlying disks: /dev/sda and /dev/sdb

Scenario:
1. I unplugged both sda and sdb (in that order).
2. When I plugged sdb back in first, followed by sda, the container and its subarrays (md125, md126) were reassembled successfully and showed up correctly in /proc/mdstat.
3. However, when I plugged sda back in first, followed by sdb, the subarrays were not reassembled successfully: /proc/mdstat shows sda missing from the subarrays.

Could this behavior be related to how mdadm prioritizes metadata from the first available disk during IMSM assembly? In this case, the disk that was unplugged first (sda) would carry stale metadata, so when it was plugged back in first it may have failed certain checks and never been re-added. Is this a known issue or expected behavior with IMSM RAID?

I'd be happy to provide logs or more detailed dumps if needed.

Thank you for your time and for all the work on Linux RAID support.

Best,
Richard
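
P.S. In case it helps, these are the commands I can run in both the working and failing scenarios to collect state dumps (device names as in my setup above; they need root and the actual hardware, so this is just the collection plan, not output):

```shell
cat /proc/mdstat                  # assembly state after replugging
mdadm --detail /dev/md127         # container details
mdadm --detail /dev/md125         # subarray details
mdadm --detail /dev/md126
mdadm --examine /dev/sda          # on-disk IMSM metadata of each member,
mdadm --examine /dev/sdb          # to compare generation/serial info
journalctl -k | grep -i 'md\|raid' # kernel md/raid messages from assembly
```

Let me know if any other dumps would be useful.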