Re: [nf-next 0/2] netfilter: nf_tables: make set flush more resistant to memory pressure

On Fri, Jul 25, 2025 at 02:24:04AM +0200, Florian Westphal wrote:
> Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx> wrote:
> > On Fri, Jul 04, 2025 at 02:30:16PM +0200, Florian Westphal wrote:
> > > Removal of many set elements, e.g. during set flush or ruleset
> > > deletion, can sometimes fail due to memory pressure.
> > > Reduce the likelihood of this happening and enable sleeping
> > > allocations for this.
> > 
> > I am exploring skipping the allocation of the transaction objects
> > for this case. This needs a closer look to deal with batches like:
> > 
> >  delelem + flush set + abort
> >  flush set + del set + abort
> > 
> > Special care needs to be taken to avoid restoring the state of the
> > element twice on abort.
> 
> It's possible to defer the flush until after we've reached the
> point of no return.
>
> But I was worried about delete/add from datapath, since it can
> happen in parallel.
> 
> Also, I think for:
> flush set x + delelem x y
>
> You get an error, as the flush marks the element as invalid in
> the new generation. Can we handle this with a flag
> in nft_set that disallows all delelem operations on
> the set after a flush was seen?
>
> And, is that safe from a backwards-compat point of view?
> I thought the answer was: no.
> Maybe we can turn delsetelem after flush into a no-op
> in case the element existed.  Not sure.
>
> Which then means that we either can't do it, or
> need to make sure that the "del elem x" is always
> handled before the flush-set.
> 
> For maps it becomes even more problematic as we
> would elide the deactivate step on chains.
> 
> And given walk isn't stable for rhashtable at the
> moment, I don't think we can rely on "two walks" scheme.
> 
> Right now it's fine because even if elements get inserted
> during or after the delset operation has done the walk+deactivate,
> those elements are not on the transaction list so we don't run into
> trouble on abort and always undo only what the walk placed on the
> transaction log.

I think the key is to be able to identify which elements have been
flushed by which flush command, so the abort path can just restore/undo
the state for the given elements.

Because this is also possible:

       flush set x + [...] + flush set x

where [...] can include new/delete element operations on x.

It should be possible to store a flush command id in the set element
(this increases the memory consumption of the set element, which your
series already does) to identify which flush command deleted it.
This is needed because the transaction object won't be in place, but I
think it is a fair tradeoff. The flush command id can be incremented
within the batch (the netlink sequence number cannot be used for this
purpose).
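To make the idea concrete, here is a minimal user-space sketch of the
scheme described above. All names (struct elem, set_flush, flush_abort)
are hypothetical; the real kernel implementation would hang the id off
the set element extension and walk the backend's data structure, not a
plain array.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified model: each element records which flush
 * command (if any) deactivated it, so an abort can restore exactly the
 * elements that a given flush touched, without allocating per-element
 * transaction objects. */
struct elem {
	bool active;           /* visible in the new generation? */
	unsigned int flush_id; /* 0 = not flushed; else id of the flush */
};

/* Deactivate every active element, tagging it with flush command 'id'. */
static void set_flush(struct elem *set, size_t n, unsigned int id)
{
	for (size_t i = 0; i < n; i++) {
		if (set[i].active) {
			set[i].active = false;
			set[i].flush_id = id;
		}
	}
}

/* On abort, reactivate only the elements deactivated by flush 'id',
 * so two flushes of the same set in one batch never restore an
 * element's state twice. */
static void flush_abort(struct elem *set, size_t n, unsigned int id)
{
	for (size_t i = 0; i < n; i++) {
		if (!set[i].active && set[i].flush_id == id) {
			set[i].active = true;
			set[i].flush_id = 0;
		}
	}
}
```

Aborting a "flush set x + [...] + flush set x" batch would then undo the
flushes in reverse order, and each abort step only touches elements
carrying its own id.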

> > This would allow to save the memory allocation entirely, as well as
> > speeding up the transaction handling.
> 
> Sure, it sounds tempting to pursue this.
>
> > From userspace, the idea would be to print this event:
> > 
> >         flush set inet x y
> > 
> > to skip a large burst of events when a set is flushed.
> 
> I think that's fine.
> 
> > Is this worth pursuing?
> 
> Yes, but I am not sure it is doable without
> breaking some existing behaviour.

Of course, this needs a careful look, but if the set element can be
used to annotate the information that allows us to restore the state
prior to the flush (given the transaction object is not used anymore),
then it should work. Your series is already extending the set element
size for a different purpose, so I think the extra memory should not be
an issue.