Re: Freeze Break request: save and then truncate the mailman bounceevent table

On Wed, Apr 09, 2025 at 01:56:04PM -0400, Stephen Smoogen wrote:
> On Wed, 9 Apr 2025 at 13:09, Kevin Fenzi via infrastructure <
> infrastructure@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> 
> > Trying to answer everyone... ;)
> >
> >
> > Well, that's... the entire table. Most of the time there are no bounce
> > events to process, or only 1-2.
> >
> > I worry that just deleting the rows but leaving the table might cause
> > postgresql to end up with bloated indexes or the like, even though the
> > table is now pretty small? But I guess a VACUUM FULL would fix that.
> >
> > I also don't understand why they are keeping these... is there some
> > historical value in knowing about a bounce from 5 years ago?
> >
> >
> So I would recommend scrubbing that table. When we were looking at the
> upgrade to mailman3, this was a table upstream listed as needing manual
> cleaning if you did the upgrade. The problem was that it killed
> performance as mailman worked its way through the ginormous table and
> removed people who had bounced too much. When we looked at this in
> 2020, there were multiple reports from other users who upgraded mailman3
> to the newest version that they lost half their membership weeks after
> the upgrade, because the bounce table had finally reached whatever usage
> count it needed and cleaned out what it considered bad accounts. It
> didn't matter if the bounces were 2 years and 6 months ago; that was
> enough to consider an address for removal. They may have fixed that part
> eventually, but I would still expect it to cause issues at that size.

Yeah, the number of unprocessed ones is very very very low... so it's
not behind, but just causing tons of I/O. ;( 
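For reference, a minimal sketch of the save-then-truncate approach from the subject line. The table name and backup path here are assumptions (mailman3's bounce event table; adjust to match the live schema before running anything):

```sql
-- Save a copy of the table before destroying anything.
-- Table name and output path are assumptions; verify against the real schema.
COPY bounceevent TO '/var/tmp/bounceevent-backup.csv' WITH (FORMAT csv, HEADER);

-- TRUNCATE reclaims the table's and its indexes' space immediately and,
-- unlike DELETE, leaves no dead tuples behind for vacuum to clean up.
TRUNCATE TABLE bounceevent;
```

One nice property: since TRUNCATE rebuilds the indexes empty, it sidesteps the index-bloat worry, so no VACUUM FULL should be needed afterwards.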

> [I am assuming here that this is the issue I marked back then as MUST BE
> DONE during the upgrade and not some other one which caused similar
> problems from the large bounce table.]

Hard to say... :) 

In any case I added the index and it's... much much happier.
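(The exact index depends on the slow query; a sketch, assuming the bounce runner repeatedly filters on an unprocessed flag — the column names here are assumptions, not the actual index that was added:)

```sql
-- Hypothetical partial index covering only unprocessed rows, which are
-- the ones the bounce runner scans for. A partial index stays tiny even
-- when the table itself is huge.
-- CONCURRENTLY avoids locking the table while the index builds.
CREATE INDEX CONCURRENTLY bounceevent_unprocessed_idx
    ON bounceevent (list_id)
    WHERE processed = false;
```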

It's no longer pegged at 100% I/O used... but it's still kinda high.

I see the autovacuum using a lot of IO still. That might be because it
was behind before, or because it's re-vacuuming the bounces table. Will
try and see...
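If it helps, postgresql exposes vacuum progress directly (the pg_stat_progress_vacuum view, available since 9.6), which shows whether it's the bounces table being re-vacuumed:

```sql
-- Which tables (auto)vacuum is chewing on right now, and how far along.
SELECT p.pid, c.relname, p.phase,
       p.heap_blks_scanned, p.heap_blks_total
FROM pg_stat_progress_vacuum p
JOIN pg_class c ON c.oid = p.relid;
```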

We may have to look at other applications for improvements.

We do have a slow query log we can look at.
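For anyone following along, that log is driven by log_min_duration_statement; the 1-second threshold below is just an example value:

```sql
-- Log any statement taking longer than 1 second (value in milliseconds).
-- Takes effect on config reload, no restart needed.
ALTER SYSTEM SET log_min_duration_statement = '1000';
SELECT pg_reload_conf();
```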

kevin
-- 
_______________________________________________
infrastructure mailing list -- infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to infrastructure-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue