I am not the poster; I am going to guess what was done to break this, based on having looked at a lot of different people's partitioning mistakes. I have fixed hundreds of deleted/resized partitions and partitioned partitions; a prior job had a lot of people writing up and following work instructions that they did not fully understand.

I am guessing there were proper GPT partitions sdc1 and sdd1, that they were in a RAID, and that it was working. I am further guessing that at some point another RAID array was created directly on sdc and sdd (for some reason) and the mdadm header overwrote part of the partition table. When that was done, the partition devices and the RAID array would have kept functioning until the node was rebooted and the OS could no longer find the partition table. I think the OS will ignore the backup partition table, leaving no partitions, until that is fixed.

I believe there are two paths forward:

1: Execute mdadm --stop on the current wrong array, then use a partitioning program to restore the missing primary partition table from the backup copy (see the first sketch below the quoted message).

2: If none of the tools will fix it automatically, carefully recreate the partition table, making sure the start and end sectors are the same as in the prior table, and make sure to answer 'N' if the partitioning program asks whether you want to wipe an existing signature (see the second sketch below).

On Tue, Jul 29, 2025 at 2:05 PM Marco Moock <mm@xxxxxxxxxx> wrote:
>
> Am 29.07.2025 um 17:26:56 Uhr schrieb François Patte:
>
> > Strange (for me): when I use fdisk I get this answer:
> >
> > fdisk -l /dev/sdc
> > The primary GPT table is corrupt, but the backup appears OK, so that
> > will be used.
> > Disk /dev/sdc: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
> > Disk model: TR-002 DISK00
> > Units: sectors of 1 * 512 = 512 bytes
> > Sector size (logical/physical): 512 bytes / 4096 bytes
> > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > Disklabel type: gpt
> > Disk identifier: EBCB6018-6E52-4C38-8C69-87E89D51BA75
> >
> > Device     Start         End     Sectors  Size Type
> > /dev/sdc1   2048 15628052479 15628050432  7,3T Linux RAID
> >
> > Same for /dev/sdd
> >
> > So there are partitions /dev/sdc1 and /dev/sdd1
>
>     Number   Major   Minor   RaidDevice State
>        0       8       48        0      active sync   /dev/sdd
>        2       8       32        1      active sync   /dev/sdc
>
> Your RAID accesses the disks directly and that creates a huge mess.
> Explain what is going on.
>
> --
> Gruß
> Marco
>
> Send unsolicited bulk mail to 1753802816muell@xxxxxxxxxxxxxx
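For path 1, something like the following should work. This is a minimal sketch, not tested against these disks: the array name /dev/md127 is an assumption (check /proc/mdstat for the real one), and gdisk comes from the gptfdisk package.

    # Note the real name of the array that was built on the whole disks,
    # then stop it so nothing holds the devices open.
    cat /proc/mdstat
    mdadm --stop /dev/md127        # /dev/md127 is an assumption

    # gdisk notices a damaged primary GPT; its recovery menu can
    # rebuild the main table from the intact backup copy.
    gdisk /dev/sdc
    #   r    open the recovery and transformation menu
    #   b    use the backup GPT header to rebuild the main header
    #   c    load the backup partition table (rebuilding the main one)
    #   p    print and verify: start 2048, end 15628052479, Linux RAID
    #   w    write the repaired table and exit
    # then repeat for /dev/sdd

Afterwards, fdisk -l /dev/sdc should print the table without the "primary GPT table is corrupt" warning, and sdc1/sdd1 should exist again.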
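And a sketch of path 2, on the assumption that fdisk is the tool used; the exact prompt wording varies by util-linux version, and the start/end sectors are the ones fdisk printed above from the backup table.

    fdisk /dev/sdc
    #   n              new partition
    #   1              partition number
    #   2048           first sector -- must match the old table exactly
    #   15628052479    last sector  -- must match the old table exactly
    #   N              when asked "Do you want to remove the signature?"
    #                  (answering Y here would destroy the old RAID member data)
    #   t              change the type; pick "Linux RAID" (L lists the codes)
    #   p              print and compare against the old table before writing
    #   w              write and exit
    # then the same for /dev/sdd

If I remember the sgdisk syntax right, sgdisk --new=1:2048:15628052479 --typecode=1:FD00 /dev/sdc does the same thing non-interactively, and sgdisk never offers to wipe signatures in the first place.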