On 4/1/25 5:46 PM, Ming Lei wrote:
> On Tue, Apr 01, 2025 at 05:23:56PM +0530, Nilay Shroff wrote:
>>
>>
>> On 3/29/25 7:29 AM, Ming Lei wrote:
>>> On Fri, Mar 28, 2025 at 07:37:25AM -0700, syzbot wrote:
>>>> Hello,
>>>>
>>>> syzbot found the following issue on:
>>>>
>>>> HEAD commit:    1a9239bb4253 Merge tag 'net-next-6.15' of git://git.kernel..
>>>> git tree:       upstream
>>>> console output: https://syzkaller.appspot.com/x/log.txt?x=1384b43f980000
>>>> kernel config:  https://syzkaller.appspot.com/x/.config?x=c7163a109ac459a8
>>>> dashboard link: https://syzkaller.appspot.com/bug?extid=4c7e0f9b94ad65811efb
>>>> compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
>>>> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=178cfa4c580000
>>>> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=11a8ca4c580000
>>>>
>>>> Downloadable assets:
>>>> disk image: https://storage.googleapis.com/syzbot-assets/fc7dc9f0d9a7/disk-1a9239bb.raw.xz
>>>> vmlinux: https://storage.googleapis.com/syzbot-assets/f555a3ae03d3/vmlinux-1a9239bb.xz
>>>> kernel image: https://storage.googleapis.com/syzbot-assets/55f6ea74eaf2/bzImage-1a9239bb.xz
>>>>
>>>> IMPORTANT: if you fix the issue, please add the following tag to the commit:
>>>> Reported-by: syzbot+4c7e0f9b94ad65811efb@xxxxxxxxxxxxxxxxxxxxxxxxx
>>>>
>>>
>>> ...
>>>
>>>>
>>>> If you want syzbot to run the reproducer, reply with:
>>>> #syz test: git://repo/address.git branch-or-commit-hash
>>>> If you attach or paste a git patch, syzbot will apply it before testing.
>>>
>>>
>>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>>> index ae8494d88897..d7a103dc258b 100644
>>> --- a/block/blk-mq.c
>>> +++ b/block/blk-mq.c
>>> @@ -4465,14 +4465,12 @@ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
>>>  	return NULL;
>>>  }
>>>
>>> -static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
>>> -				   struct request_queue *q)
>>> +static void __blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
>>> +				       struct request_queue *q)
>>>  {
>>>  	struct blk_mq_hw_ctx *hctx;
>>>  	unsigned long i, j;
>>>
>>> -	/* protect against switching io scheduler */
>>> -	mutex_lock(&q->elevator_lock);
>>>  	for (i = 0; i < set->nr_hw_queues; i++) {
>>>  		int old_node;
>>>  		int node = blk_mq_get_hctx_node(set, i);
>>> @@ -4505,7 +4503,19 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
>>>
>>>  	xa_for_each_start(&q->hctx_table, j, hctx, j)
>>>  		blk_mq_exit_hctx(q, set, hctx, j);
>>> -	mutex_unlock(&q->elevator_lock);
>>> +}
>>> +
>>> +static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
>>> +				   struct request_queue *q, bool lock)
>>> +{
>>> +	if (lock) {
>>> +		/* protect against switching io scheduler */
>>> +		mutex_lock(&q->elevator_lock);
>>> +		__blk_mq_realloc_hw_ctxs(set, q);
>>> +		mutex_unlock(&q->elevator_lock);
>>> +	} else {
>>> +		__blk_mq_realloc_hw_ctxs(set, q);
>>> +	}
>>>
>>>  	/* unregister cpuhp callbacks for exited hctxs */
>>>  	blk_mq_remove_hw_queues_cpuhp(q);
>>> @@ -4537,7 +4547,7 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
>>>
>>>  	xa_init(&q->hctx_table);
>>>
>>> -	blk_mq_realloc_hw_ctxs(set, q);
>>> +	blk_mq_realloc_hw_ctxs(set, q, false);
>>>  	if (!q->nr_hw_queues)
>>>  		goto err_hctxs;
>>>
>>> @@ -5033,7 +5043,7 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
>>>  fallback:
>>>  	blk_mq_update_queue_map(set);
>>>  	list_for_each_entry(q, &set->tag_list, tag_set_list) {
>>> -		blk_mq_realloc_hw_ctxs(set, q);
>>> +		blk_mq_realloc_hw_ctxs(set, q, true);
>>>
>>>  		if (q->nr_hw_queues != set->nr_hw_queues) {
>>>  			int i = prev_nr_hw_queues;
>>>
>>
>> This patch looks good to me, however after we fix this one, I found another splat.
>> I see that these new splats are side effect of commit ffa1e7ada456 ("block: Make
>> request_queue lockdep splats show up earlier").
>>
>> IMO in the block layer code (unless it's in an IO submission path or a path where we
>> have already frozen queue) we may still want to allow memory allocation with GFP_KERNEL.
>> So in that sense, for example, we may acquire ->elevator_lock followed by fs_reclaim.
>
> If any memory GFP_KERNEL allocation grabs ->elevator_lock, it is one real
> deadlock risk.
>
>> Or in another words, shouldn't it be legitimate to acquire blk layer specific lock and
>> then allocate memory using GFP_KERNEL assuming we haven't freezed queue or we're not in
>> IO submission path. But this commit ffa1e7ada456 ("block: Make request_queue lockdep
>> splats show up earlier") now showing up some false-positive splat as well, please see
>> below:
>
> It depends if we may run GFP_KERNEL allocation with ->elevator_lock.
>

Okay, so do you think we should use GFP_NOIO for any memory allocation done after we
acquire ->elevator_lock?

Thanks,
--Nilay
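
PS: to make the question concrete, here is a minimal, hypothetical sketch (not an
actual call site; alloc_under_elevator_lock() is a made-up helper) of what I have in
mind. Rather than passing GFP_NOIO to each individual allocation, the scoped
memalloc_noio_save()/memalloc_noio_restore() helpers could mark the whole locked
section, so allocations done by callees are covered as well:

#include <linux/blkdev.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>

/* Hypothetical allocation that has to run with ->elevator_lock held. */
static void *alloc_under_elevator_lock(struct request_queue *q, size_t size)
{
	unsigned int noio_flags;
	void *p;

	mutex_lock(&q->elevator_lock);

	/*
	 * Scoped NOIO: every allocation in this section (including ones done
	 * by callees) is implicitly restricted to GFP_NOIO, so reclaim cannot
	 * recurse into the I/O path while the lock is held.
	 */
	noio_flags = memalloc_noio_save();
	p = kzalloc(size, GFP_KERNEL);	/* effectively GFP_NOIO here */
	memalloc_noio_restore(noio_flags);

	mutex_unlock(&q->elevator_lock);
	return p;
}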