Hi Jens,

I just realised that I'm still seeing a warning about circular locking
involving q->elevator_lock in -rc6.  Do you know if there's a fix for
this yet?

David
---
======================================================
WARNING: possible circular locking dependency detected
6.15.0-rc6-build2+ #1293 Not tainted
------------------------------------------------------
(udev-worker)/4602 is trying to acquire lock:
ffff88810c5585c0 (&q->elevator_lock){+.+.}-{4:4}, at: elv_iosched_store+0xf9/0x210

but task is already holding lock:
ffff88810c5580a8 (&q->q_usage_counter(io)#9){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0xf/0x20

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&q->q_usage_counter(io)#9){++++}-{0:0}:
       validate_chain+0x1dc/0x280
       __lock_acquire+0x5b6/0x720
       lock_acquire.part.0+0xb4/0x1f0
       blk_alloc_queue+0x38b/0x420
       blk_mq_alloc_queue+0xd4/0x150
       scsi_alloc_sdev+0x47e/0x5e0
       scsi_probe_and_add_lun+0x15c/0x370
       __scsi_add_device+0x123/0x190
       ata_scsi_scan_host+0x94/0x1d0
       async_run_entry_fn+0x4b/0x130
       process_one_work+0x485/0x7b0
       process_scheduled_works+0x73/0x90
       worker_thread+0x1c8/0x2a0
       kthread+0x2f9/0x310
       ret_from_fork+0x24/0x40
       ret_from_fork_asm+0x1a/0x30

-> #1 (fs_reclaim){+.+.}-{0:0}:
       validate_chain+0x1dc/0x280
       __lock_acquire+0x5b6/0x720
       lock_acquire.part.0+0xb4/0x1f0
       __fs_reclaim_acquire+0x21/0x30
       fs_reclaim_acquire+0x2d/0x70
       might_alloc+0x8/0x40
       kmem_cache_alloc_noprof+0x42/0x230
       __kernfs_new_node+0xc6/0x3e0
       kernfs_new_node+0x89/0xc0
       kernfs_create_dir_ns+0x27/0xa0
       sysfs_create_dir_ns+0xf5/0x170
       kobject_add_internal+0x141/0x2c0
       kobject_add+0xfd/0x140
       elv_register_queue+0x74/0x100
       blk_register_queue+0x16a/0x240
       add_disk_fwnode+0x371/0x710
       sd_probe+0x50d/0x620
       really_probe+0x167/0x320
       __driver_probe_device+0x121/0x160
       driver_probe_device+0x4a/0xd0
       __device_attach_driver+0x99/0xd0
       bus_for_each_drv+0x104/0x140
       __device_attach_async_helper+0xdb/0x140
       async_run_entry_fn+0x4b/0x130
       process_one_work+0x485/0x7b0
       process_scheduled_works+0x73/0x90
       worker_thread+0x1c8/0x2a0
       kthread+0x2f9/0x310
       ret_from_fork+0x24/0x40
       ret_from_fork_asm+0x1a/0x30

-> #0 (&q->elevator_lock){+.+.}-{4:4}:
       check_noncircular+0x96/0xc0
       check_prev_add+0x115/0x2f0
       validate_chain+0x1dc/0x280
       __lock_acquire+0x5b6/0x720
       lock_acquire.part.0+0xb4/0x1f0
       __mutex_lock+0x16d/0x5e0
       elv_iosched_store+0xf9/0x210
       queue_attr_store+0xcb/0x1c0
       kernfs_fop_write_iter+0x194/0x210
       vfs_write+0x220/0x310
       ksys_write+0xb8/0x120
       do_syscall_64+0x9f/0x100
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

other info that might help us debug this:

Chain exists of:
  &q->elevator_lock --> fs_reclaim --> &q->q_usage_counter(io)#9

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&q->q_usage_counter(io)#9);
                               lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#9);
  lock(&q->elevator_lock);

 *** DEADLOCK ***

5 locks held by (udev-worker)/4602:
 #0: ffff88810e4a4408 (sb_writers#4){.+.+}-{0:0}, at: vfs_write+0xfc/0x310
 #1: ffff888112e2b888 (&of->mutex#2){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x125/0x210
 #2: ffff88810c8062d8 (kn->active#55){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x135/0x210
 #3: ffff88810c5580a8 (&q->q_usage_counter(io)#9){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0xf/0x20
 #4: ffff88810c5580e0 (&q->q_usage_counter(queue)){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0xf/0x20

stack backtrace:
CPU: 3 UID: 0 PID: 4602 Comm: (udev-worker) Not tainted 6.15.0-rc6-build2+ #1293 PREEMPT(voluntary)
Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x57/0x80
 print_circular_bug+0xb5/0xd0
 check_noncircular+0x96/0xc0
 ? _find_first_zero_bit+0x1e/0x50
 check_prev_add+0x115/0x2f0
 validate_chain+0x1dc/0x280
 __lock_acquire+0x5b6/0x720
 lock_acquire.part.0+0xb4/0x1f0
 ? elv_iosched_store+0xf9/0x210
 ? rcu_is_watching+0x34/0x60
 ? lock_acquire+0x88/0xf0
 __mutex_lock+0x16d/0x5e0
 ? elv_iosched_store+0xf9/0x210
 ? preempt_count_sub+0x18/0xc0
 ? elv_iosched_store+0xf9/0x210
 ? __pfx___mutex_lock+0x10/0x10
 ? blk_mq_freeze_queue_wait+0xe6/0x130
 ? lock_acquire.part.0+0xc4/0x1f0
 ? __pfx_autoremove_wake_function+0x10/0x10
 ? lock_acquire+0x88/0xf0
 ? elv_iosched_store+0xf9/0x210
 elv_iosched_store+0xf9/0x210
 ? __pfx_elv_iosched_store+0x10/0x10
 ? __pfx___mutex_trylock_common+0x10/0x10
 queue_attr_store+0xcb/0x1c0
 ? mark_lock+0x2e/0x110
 ? __pfx_queue_attr_store+0x10/0x10
 ? __lock_acquire+0x5b6/0x720
 ? lock_is_held_type+0xbe/0x110
 ? find_held_lock+0x2b/0x80
 ? sysfs_file_kobj+0x129/0x140
 ? __lock_release.isra.0+0x5a/0x150
 ? sysfs_file_kobj+0x129/0x140
 ? lock_release+0xe3/0x120
 ? __rcu_read_unlock+0x4c/0x70
 ? sysfs_file_kobj+0x133/0x140
 ? __pfx_sysfs_kf_write+0x10/0x10
 kernfs_fop_write_iter+0x194/0x210
 vfs_write+0x220/0x310
 ? __pfx_vfs_write+0x10/0x10
 ? ktime_get_coarse_real_ts64+0x19/0x70
 ? files_lookup_fd_raw+0x40/0x50
 ? __fget_light+0x5b/0x90
 ksys_write+0xb8/0x120
 ? __pfx_ksys_write+0x10/0x10
 ? syscall_trace_enter+0x10c/0x150
 do_syscall_64+0x9f/0x100
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7f033d80e044
Code: c7 00 16 00 00 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 80 3d 65 a0 10 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 55 48 89 e5 48 83 ec 20 48 89
RSP: 002b:00007ffe41add9f8 EFLAGS: 00000202 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f033d80e044
RDX: 0000000000000003 RSI: 00007ffe41addd00 RDI: 0000000000000027
RBP: 00007ffe41adda20 R08: 00007f033d90f1c8 R09: 00007ffe41addad0
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000003
R13: 00007ffe41addd00 R14: 00005626a9b65980 R15: 00007f033d90ee80
 </TASK>
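
In case it helps anyone reading the splat cold, my reading is that the
cycle reduces to the classic inversion sketched below.  This is only an
illustrative userspace sketch: plain pthread mutexes stand in for the
three lock classes (q->q_usage_counter is really the queue-freeze
refcount, not a mutex), and iosched_store()/register_queue() are made-up
stand-ins for the kernel paths named in the chain, not the real code.

    /*
     * Illustrative sketch only -- NOT the actual block-layer code.
     * Run together, the two threads can wedge exactly as lockdep
     * predicts: A holds q_usage_counter and waits for elevator_lock;
     * B holds elevator_lock (via fs_reclaim) and waits for
     * q_usage_counter.
     */
    #include <pthread.h>

    static pthread_mutex_t elevator_lock   = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t fs_reclaim      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t q_usage_counter = PTHREAD_MUTEX_INITIALIZER;

    /* Sysfs write to .../queue/scheduler: freeze the queue, then take
     * the elevator lock (the -> #0 leg of the chain). */
    static void *iosched_store(void *unused)
    {
            pthread_mutex_lock(&q_usage_counter);   /* blk_mq_freeze_queue() */
            pthread_mutex_lock(&elevator_lock);     /* elv_iosched_store() */
            pthread_mutex_unlock(&elevator_lock);
            pthread_mutex_unlock(&q_usage_counter);
            return NULL;
    }

    /* Disk registration: a GFP_KERNEL sysfs allocation made under the
     * elevator lock may enter reclaim, and reclaim-driven I/O needs the
     * queue usage counter (the -> #1 and -> #2 legs). */
    static void *register_queue(void *unused)
    {
            pthread_mutex_lock(&elevator_lock);     /* elv_register_queue() */
            pthread_mutex_lock(&fs_reclaim);        /* GFP_KERNEL allocation */
            pthread_mutex_lock(&q_usage_counter);   /* reclaim issues I/O */
            pthread_mutex_unlock(&q_usage_counter);
            pthread_mutex_unlock(&fs_reclaim);
            pthread_mutex_unlock(&elevator_lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, iosched_store, NULL);
            pthread_create(&b, NULL, register_queue, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
    }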