extreme RAID10 rebuild times reported, but rebuild's progressing?

on:

	distro
		Name: Fedora Linux 41 (Forty One)
		Version: 41
		Codename:

	mdadm -V
		mdadm - v4.3 - 2024-02-15

	rpm -qa | grep mdadm
		mdadm-4.3-4.fc41.x86_64

I have a relatively new (~1 month old) 4x4TB RAID10 array.

After a reboot, one of the drives got kicked from the array:

	dmesg
		...
		[   15.513443] sd 15:0:7:0: [sdn] Attached SCSI disk
		[   15.784537] md: kicking non-fresh sdn1 from array!
		...

	cat /proc/mdstat
		md124 : active raid10 sdm1[1] sdl1[0] sdk1[4]
		      7813770240 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U]
		      bitmap: 1/59 pages [4KB], 65536KB chunk

smartctl shows no issues; I can't yet find a reason for the kick.
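
One thing still to check is whether the superblock event counts on the members diverged, which would explain the "non-fresh" kick. A sketch of that comparison, assuming the member partitions sdk1/sdl1/sdm1/sdn1 from above:

	# compare per-member event counters and update times;
	# a lower count on sdn1 would mark it non-fresh at assembly
	mdadm --examine /dev/sd[klmn]1 | grep -E '/dev/|Events|Update Time'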

After re-adding the drive, the rebuild starts.
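
For reference, the re-add was roughly as follows (a sketch assuming the array/device names above, not a verbatim transcript):

	# re-add the kicked member; with the write-intent bitmap present,
	# mdadm may limit the resync to regions dirtied while sdn1 was out
	mdadm --manage /dev/md124 --re-add /dev/sdn1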

Recovery is progressing; after ~30 minutes, I see

	md124 : active raid10 sdm1[1] sdn1[2] sdl1[0] sdk1[4]
	      7813770240 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U]
	      [=========>...........]  recovery = 49.2% (1924016576/3906885120) finish=3918230862.4min speed=0K/sec
	      bitmap: 1/59 pages [4KB], 65536KB chunk

The values of

	finish=3918230862.4min speed=0K/sec

appear nonsensical.
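
To tell whether recovery is actually advancing, as opposed to just being mis-reported, the md sysfs counters can be polled directly; a sketch, assuming md124 as above:

	# sync_completed reports sectors done / total; sync_speed is the
	# current rate in K/sec, independent of the /proc/mdstat ETA math
	cat /sys/block/md124/md/sync_action
	cat /sys/block/md124/md/sync_completed
	cat /sys/block/md124/md/sync_speed

If sync_completed keeps increasing between polls, the rebuild itself is making progress and only the reported speed/ETA is suspect.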

Is this a bug in my mdadm config, a problem with the progress reporting, and/or an actual problem with _function_?




