Re: extreme RAID10 rebuild times reported, but rebuild's progressing ?

On Tue, Apr 1, 2025 at 12:36 AM pgnd <pgnd@xxxxxxxxxxxx> wrote:
>
> hi.
>
> > Are there any processes stuck in D state?
>
> no, there were none.
>
> the rebuild 'completed' in ~ 1hr 15mins ...
> atm, the array's up, passing all tests, and seemingly fully functional

I'm glad to hear this, so everything is working well now :)
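
As a side note, if a resync ever looks stuck again, the current rate and the
kernel's resync throttles are easy to check from userspace; a quick sketch
(the 50000 value is just an example, in KiB/s):

        # progress / ETA and current rate
        watch -n 5 cat /proc/mdstat
        cat /sys/block/md124/md/sync_speed

        # system-wide resync speed limits, in KiB/s
        cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
        echo 50000 > /proc/sys/dev/raid/speed_limit_min    # raise the floor to push a slow rebuild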

>
> > And how about `ps auxf | grep md`?
>
> ps auxf | grep md
>         root          97  0.0  0.0      0     0 ?        SN   09:10   0:00  \_ [ksmd]
>         root         107  0.0  0.0      0     0 ?        I<   09:10   0:00  \_ [kworker/R-md]
>         root         108  0.0  0.0      0     0 ?        I<   09:10   0:00  \_ [kworker/R-md_bitmap]
>         root        1049  0.0  0.0      0     0 ?        S    09:10   0:00  \_ [md124_raid10]
>         root        1052  0.0  0.0      0     0 ?        S    09:10   0:00  \_ [md123_raid10]
>         root        1677  0.0  0.0      0     0 ?        S    09:10   0:00  \_ [jbd2/md126-8]
>         root           1  0.0  0.0  24820 15536 ?        Ss   09:10   0:03 /usr/lib/systemd/systemd --switched-root --system --deserialize=49 domdadm dolvm showopts noquiet
>         root        1308  0.0  0.0  32924  8340 ?        Ss   09:10   0:00 /usr/lib/systemd/systemd-journald
>         root        1368  0.0  0.0  36620 11596 ?        Ss   09:10   0:00 /usr/lib/systemd/systemd-udevd
>         systemd+    1400  0.0  0.0  17564  9160 ?        Ss   09:10   0:00 /usr/lib/systemd/systemd-networkd
>         systemd+    2010  0.0  0.0  15932  7112 ?        Ss   09:11   0:02 /usr/lib/systemd/systemd-oomd
>         root        2029  0.0  0.0   4176  2128 ?        Ss   09:11   0:00 /sbin/mdadm --monitor --scan --syslog -f --pid-file=/run/mdadm/mdadm.pid
>         root        2055  0.0  0.0  16648  8012 ?        Ss   09:11   0:00 /usr/lib/systemd/systemd-logind
>         root        2121  0.0  0.0  21176 12288 ?        Ss   09:11   0:00 /usr/lib/systemd/systemd --user
>         root        4105  0.0  0.0 230344  2244 pts/0    S+   12:21   0:00              \_ grep --color=auto md
>         root        2247  0.0  0.0 113000  6236 ?        Ssl  09:11   0:00 /usr/sbin/automount --systemd-service --dont-check-daemon
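
That all looks normal. The mdadm --monitor instance there logs events to
syslog; in case it isn't set up already, mail (or a script) on Fail /
DegradedArray events comes from mdadm.conf. A minimal sketch, with a
placeholder address and a hypothetical handler script:

        # /etc/mdadm.conf (some distros use /etc/mdadm/mdadm.conf)
        MAILADDR root@localhost
        PROGRAM /usr/local/sbin/md-event-handler    # optional, hypothetical script

        # one-shot test: generates a TestMessage alert for every array
        mdadm --monitor --scan --test --oneshot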
>
> > Is there a filesystem on it? If so, can you still read/write data from it?
>
> yes, and yes.
>
> pvs
>    PV         VG         Fmt  Attr PSize  PFree
>    /dev/md123 VG_D1    lvm2 a--   5.45t     0
>    /dev/md124 VG_D1    lvm2 a--   7.27t     0
> vgs
>    VG       #PV #LV #SN Attr   VSize  VFree
>    VG_D1      2   1   0 wz--n- 12.72t     0
> lvs
>    LV             VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>    LV_D1          VG_D1      -wi-ao---- 12.72t
>
> cat /proc/mdstat
>         Personalities : [raid1] [raid10]
>         md123 : active (auto-read-only) raid10 sdg1[3] sdh1[4] sdj1[2] sdi1[1]
>               5860265984 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
>               bitmap: 0/44 pages [0KB], 65536KB chunk
>
>         md124 : active raid10 sdl1[0] sdm1[1] sdn1[2] sdk1[4]
>               7813770240 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
>               bitmap: 0/59 pages [0KB], 65536KB chunk
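
One small thing worth mentioning: md123 is shown as active (auto-read-only),
which only means nothing has written to it since assembly. md flips it to
read-write automatically on the first write, and you can also do it by hand:

        mdadm --readwrite /dev/md123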
>
> lsblk /dev/sdn
>         NAME                  MAJ:MIN RM  SIZE RO TYPE   MOUNTPOINTS
>         sdn                     8:208  0  3.6T  0 disk
>         └─sdn1                  8:209  0  3.6T  0 part
>           └─md124               9:124  0  7.3T  0 raid10
>             └─VG_D1-LV_D1     253:8    0 12.7T  0 lvm    /NAS/D1
>
> fdisk -l /dev/sdn
>         Disk /dev/sdn: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
>         Disk model: WDC WD40EFPX-68C
>         Units: sectors of 1 * 512 = 512 bytes
>         Sector size (logical/physical): 512 bytes / 4096 bytes
>         I/O size (minimum/optimal): 4096 bytes / 131072 bytes
>         Disklabel type: gpt
>         Disk identifier: ...
>
>         Device     Start        End    Sectors  Size Type
>         /dev/sdn1   2048 7814037134 7814035087  3.6T Linux RAID
>
> fdisk -l /dev/sdn1
>         Disk /dev/sdn1: 3.64 TiB, 4000785964544 bytes, 7814035087 sectors
>         Units: sectors of 1 * 512 = 512 bytes
>         Sector size (logical/physical): 512 bytes / 4096 bytes
>         I/O size (minimum/optimal): 4096 bytes / 131072 bytes
>
> cat /proc/mounts  | grep D1
>         /dev/mapper/VG_D1-LV_D1 /NAS/D1 ext4 rw,relatime,stripe=128 0 0
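
For what it's worth, the stripe=128 mount option is consistent with the
arrays: ext4 counts stripe in filesystem blocks, so 128 blocks x 4 KiB =
512 KiB, the same as the 512K chunk size in /proc/mdstat.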
>
>
> touch /NAS/D1/test.file
> stat /NAS/D1/test.file
>           File: /NAS/D1/test.file
>           Size: 0               Blocks: 0          IO Block: 4096   regular empty file
>         Device: 253,8   Inode: 11          Links: 1
>         Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
>         Access: 2025-03-31 12:33:48.110052013 -0400
>         Modify: 2025-03-31 12:33:48.110052013 -0400
>         Change: 2025-03-31 12:33:48.110052013 -0400
>          Birth: 2025-03-31 12:33:07.272309441 -0400
>
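If you want one more sanity check after the strange rebuild, you can ask md
to scrub each array and then read the mismatch counter (this reads every
sector, so it will take several hours on disks this size):

        echo check > /sys/block/md124/md/sync_action
        cat /proc/mdstat                         # shows the check progress
        cat /sys/block/md124/md/mismatch_cnt     # 0 after the check finishes is the good answer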

Regards
Xiao





