On Mon, Jul 7, 2025 at 5:32 PM Yann Ylavic <ylavic.dev@xxxxxxxxx> wrote:
>
> On Fri, Jul 4, 2025 at 3:54 PM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
> > +
> > +static void execmem_cache_free_slow(struct work_struct *work)
> > +{
> > +	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
> > +	MA_STATE(mas, busy_areas, 0, ULONG_MAX);
> > +	void *area;
> > +
> > +	guard(mutex)(&execmem_cache.mutex);
> > +
> > +	if (!execmem_cache.pending_free_cnt)
> > +		return;
> > +
> > +	mas_for_each(&mas, area, ULONG_MAX) {
> > +		if (!is_pending_free(area))
> > +			continue;
> > +
> > +		pending_free_clear(area);
>
> Probably:
> 	area = pending_free_clear(area);
> ?
> Likewise in execmem_cache_free_slow() btw.
>
> > +		if (__execmem_cache_free(&mas, area, GFP_KERNEL))
> > +			continue;
> > +
> > +		execmem_cache.pending_free_cnt--;
> > +	}
> > +
> > +	if (execmem_cache.pending_free_cnt)
> > +		schedule_delayed_work(&execmem_cache_free_work, FREE_DELAY);
> > +	else
> > +		schedule_work(&execmem_cache_clean_work);
> > +}
>
>
> Regards,
> Yann.