On 8/5/25 9:25 AM, Amery Hung wrote:
On Fri, Aug 1, 2025 at 5:58 PM Martin KaFai Lau <martin.lau@xxxxxxxxx> wrote:
On 7/29/25 11:25 AM, Amery Hung wrote:
void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now)
{
+ struct bpf_local_storage_map *storage_smap;
+ struct bpf_local_storage *local_storage = NULL;
+ bool bpf_ma, free_local_storage = false;
+ HLIST_HEAD(selem_free_list);
struct bpf_local_storage_map_bucket *b;
- struct bpf_local_storage_map *smap;
- unsigned long flags;
+ struct bpf_local_storage_map *smap = NULL;
+ unsigned long flags, b_flags;
if (likely(selem_linked_to_map_lockless(selem))) {
Can we simplify the bpf_selem_unlink() function by skipping this map_lockless
check,
smap = rcu_dereference_check(SDATA(selem)->smap, bpf_rcu_lock_held());
b = select_bucket(smap, selem);
- raw_spin_lock_irqsave(&b->lock, flags);
+ }
- /* Always unlink from map before unlinking from local_storage
- * because selem will be freed after successfully unlinked from
- * the local_storage.
- */
- bpf_selem_unlink_map_nolock(selem);
- raw_spin_unlock_irqrestore(&b->lock, flags);
+ if (likely(selem_linked_to_storage_lockless(selem))) {
only depend on this one, and then proceed to take the local_storage->lock? Then
recheck selem_linked_to_storage(selem), do bpf_selem_unlink_map(selem) first, and
then bpf_selem_unlink_storage_nolock(selem) last.
Thanks for the suggestion. I think it will simplify the function. Just
making sure I am getting you right: you mean instead of open-coding both
unlink_map and unlink_storage, only open code unlink_storage. First,
grab local_storage->lock and call bpf_selem_unlink_map(). Then, only
After grabbing the local_storage->lock, re-check selem_linked_to_storage() first
before calling bpf_selem_unlink_map().
proceed to unlink_storage if bpf_selem_unlink_map() succeeds.
No strong opinion on open coding bpf_selem_unlink_map() or not. I think they are
the same. I reuse bpf_selem_unlink_map() because I think it can be used as
is. The logic of bpf_selem_unlink() here should be very similar to
bpf_local_storage_destroy() now, except it needs to recheck
selem_linked_to_storage():
1. Grab both locks.
2. If selem_linked_to_storage() is true, selem_linked_to_map() should also
be true, since we now need to grab both locks before moving forward to unlink.
Meaning a selem will either not be unlinked at all or be unlinked from both
the local_storage and the map. Am I thinking about it correctly, or is there a hole?
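Roughly, an untested sketch of what the simplified bpf_selem_unlink() could
look like, reusing bpf_selem_unlink_map() as-is and keeping the helper
signatures from this patch:

void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now)
{
        struct bpf_local_storage_map *storage_smap;
        struct bpf_local_storage *local_storage;
        bool bpf_ma, free_local_storage = false;
        HLIST_HEAD(selem_free_list);
        unsigned long flags;

        if (unlikely(!selem_linked_to_storage_lockless(selem)))
                return;

        local_storage = rcu_dereference_check(selem->local_storage,
                                              bpf_rcu_lock_held());
        storage_smap = rcu_dereference_check(local_storage->smap,
                                             bpf_rcu_lock_held());
        bpf_ma = check_storage_bpf_ma(local_storage, storage_smap, selem);

        raw_spin_lock_irqsave(&local_storage->lock, flags);
        if (likely(selem_linked_to_storage(selem))) {
                /* Unlink from the map first. bpf_selem_unlink_map() takes
                 * the bucket lock itself, so both locks are held here and
                 * a selem is either unlinked from both the map and the
                 * local_storage or not unlinked at all.
                 */
                bpf_selem_unlink_map(selem);
                free_local_storage = bpf_selem_unlink_storage_nolock(
                        local_storage, selem, true, &selem_free_list);
        }
        raw_spin_unlock_irqrestore(&local_storage->lock, flags);

        bpf_selem_free_list(&selem_free_list, reuse_now);

        if (free_local_storage)
                bpf_local_storage_free(local_storage, storage_smap, bpf_ma,
                                       reuse_now);
}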
Then bpf_selem_unlink_map can use selem->local_storage->owner to select_bucket().
Not sure what this part means. Could you elaborate?
I meant to pass the owner pointer to select_bucket(), like
select_bucket(smap, selem->local_storage->owner). Of course, the owner pointer
should also be used on the update side, i.e. in bpf_local_storage_update().
Then it does not need to take the second bucket lock in the "if (b != old_b)"
case in bpf_local_storage_update() in patch 1.
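For example (untested), keyed on the owner pointer so that every selem of the
same owner hashes to the same bucket:

static struct bpf_local_storage_map_bucket *
select_bucket(struct bpf_local_storage_map *smap, void *owner)
{
        return &smap->buckets[hash_ptr(owner, smap->bucket_log)];
}

/* unlink side */
b = select_bucket(smap, selem->local_storage->owner);

Then in bpf_local_storage_update(), the old selem and the new selem of the
same owner always land in the same bucket, so the "if (b != old_b)" second
bucket lock is no longer needed.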
+ local_storage = rcu_dereference_check(selem->local_storage,
+ bpf_rcu_lock_held());
+ storage_smap = rcu_dereference_check(local_storage->smap,
+ bpf_rcu_lock_held());
+ bpf_ma = check_storage_bpf_ma(local_storage, storage_smap, selem);
}
- bpf_selem_unlink_storage(selem, reuse_now);
+ if (local_storage)
+ raw_spin_lock_irqsave(&local_storage->lock, flags);
+ if (smap)
+ raw_spin_lock_irqsave(&b->lock, b_flags);
+
+ /* Always unlink from map before unlinking from local_storage
+ * because selem will be freed after successfully unlinked from
+ * the local_storage.
+ */
+ if (smap)
This "if (smap)" test
+ bpf_selem_unlink_map_nolock(selem);
+ if (local_storage && likely(selem_linked_to_storage(selem)))
and this orthogonal "if (local_storage &&
likely(selem_linked_to_storage(selem)))" test.
Understood it is from the existing code that unlinking from the map and unlinking
from the local_storage can be done independently. It was there because it did not
need to grab both locks to move forward. Since it now needs both locks, I think we
can simplify things a bit as mentioned above.
+ free_local_storage = bpf_selem_unlink_storage_nolock(
+ local_storage, selem, true, &selem_free_list);
+
+ if (smap)
+ raw_spin_unlock_irqrestore(&b->lock, b_flags);
+ if (local_storage)
+ raw_spin_unlock_irqrestore(&local_storage->lock, flags);
+
+ bpf_selem_free_list(&selem_free_list, reuse_now);
+
+ if (free_local_storage)
+ bpf_local_storage_free(local_storage, storage_smap, bpf_ma, reuse_now);
}
void __bpf_local_storage_insert_cache(struct bpf_local_storage *local_storage,