Re: After disk failure not deep scrubbed pgs started to increase (ceph quincy)

Finally fixed by doubling the PG count of the data pool.
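
In case it helps anyone finding this later: essentially the following,
with our real pool name and a doubled target (on Quincy, pgp_num is
raised automatically as the split proceeds):

  ceph osd pool set <data-pool> pg_num <2x-previous-value>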

________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Monday, August 18, 2025 10:18:17 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re: Re: After disk failure not deep scrubbed pgs started to increase (ceph quincy)

I haven't had the urge to dive too deeply into this. For me it's not
critical: I know that deep scrubs build up after rebalancing/recovery,
so I either mute the warning for a couple of days or increase the
overall deep-scrub interval, depending on the agreement with the
customer(s). If you have large clusters to scrub, it doesn't really
make a difference whether it happens within a week or within a month.
Potentially, inconsistent PGs could be discovered a couple of days
later, but I've never had an issue with inconsistent PGs that a pg
repair couldn't fix.
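
If concrete commands help, this is roughly what I mean (the mute
duration and the interval are just examples, adjust to your policy):

  ceph health mute PG_NOT_DEEP_SCRUBBED 4d              # silence the warning for a few days
  ceph config set osd osd_deep_scrub_interval 2419200   # interval in seconds, here 4 weeks instead of the default 1 week
  ceph pg repair <pgid>                                 # repair a PG reported as inconsistent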

Zitat von "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:

> Scrub during recovery is enabled; recovery finished yesterday, and the
> number of not deep-scrubbed PGs is still increasing.
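>
> (Counting them via ceph health detail, e.g.:
>
>    ceph health detail | grep 'not deep-scrubbed'
>
> where each line is one overdue PG.)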
>
> ________________________________
> From: Eugen Block <eblock@xxxxxx>
> Sent: Friday, August 15, 2025 3:45:24 AM
> To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
> Subject: Re: After disk failure not deep scrubbed pgs
> started to increase (ceph quincy)
>
> Hi,
>
> by default, scrubs during recovery are disabled, you could enable it
> if you think your cluster can handle it.
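>
> For example, along the lines of:
>
>    ceph config set osd osd_scrub_during_recovery true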
>
> The PG query output you pasted shows that it’s currently backfilling,
> which is why the (deep-)scrub backlog is building up.
>
> Regards,
> Eugen
>
> https://docs.ceph.com/en/quincy/rados/configuration/osd-config-ref/#confval-osd_scrub_during_recovery
>
> Zitat von "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:
>
>> Hi,
>> After a disk failure, the number of PGs not deep scrubbed started to
>> increase (Ceph Quincy). Here is an example PG:
>> https://gist.githubusercontent.com/Badb0yBadb0y/afe9b26fbae72c79c9430585bf4e5f23/raw/7e794b7d6245cc13276afec20fd8b5a5a7fbd082/PG%2520Query
>> Normally, when this happens, I observe some stuck operations and
>> increased latency on certain OSDs (restarting the affected OSD
>> usually resolves the issue, and the not deep-scrubbed PGs get
>> scrubbed within a day). However, this time I don’t see any obvious
>> problems.
>> Could you please help review the attached not deep-scrubbed PG and
>> let me know if you notice anything unusual?
>> For context, the disk was re-added yesterday and rebalance is still
>> in progress, but the not deep-scrubbed alerts started appearing
>> before the disk was added back.
>> Thank you.
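>>
>> (By "restarting" I mean e.g. systemctl restart ceph-osd@<id> on the
>> OSD host, or, on cephadm deployments, ceph orch daemon restart
>> osd.<id>.)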
>>




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



