Hi Kun,

On Fri, 2025-05-02 at 19:28 +0000, Viacheslav Dubeyko wrote:
> On Fri, 2025-05-02 at 04:59 +0000, huk23@xxxxxxxxxxxxxx wrote:
> > Dear Maintainers,
> >
> > When using our customized Syzkaller to fuzz the latest Linux
> > kernel, the following crash (the 14th) was triggered.

I have the fix and I would like to check it. I am trying to use the C
reproducer to trigger the issue. Probably, I am doing something wrong.
I have compiled the kernel with the shared kernel config and I have
compiled the C reproducer. It has been running for several hours
already and I still cannot trigger the issue. Am I doing something
wrong? How long should I wait for the issue to reproduce? Could you
please share the correct way to reproduce the issue?

Thanks,
Slava.

> > HEAD commit: 6537cfb395f352782918d8ee7b7f10ba2cc3cbf2
> > git tree: upstream
> > Output:
> > https://github.com/pghk13/Kernel-Bug/blob/main/1220_6.13rc_KASAN/2.回归-11/14-KASAN_%20slab-out-of-bounds%20Read%20in%20hfsplus_bnode_read/14call_trace.txt
> > Kernel config:
> > https://github.com/pghk13/Kernel-Bug/blob/main/config.txt
> > C reproducer:
> > https://github.com/pghk13/Kernel-Bug/blob/main/1220_6.13rc_KASAN/2.回归-11/14-KASAN_%20slab-out-of-bounds%20Read%20in%20hfsplus_bnode_read/14repro.c
> > Syzlang reproducer:
> > https://github.com/pghk13/Kernel-Bug/blob/main/1220_6.13rc_KASAN/2.回归-11/14-KASAN_%20slab-out-of-bounds%20Read%20in%20hfsplus_bnode_read/14repro.txt
> >
> > Our reproducer mounts a constructed filesystem image. Problems can
> > arise in the hfs_bnode_read function: node->page[pagenum] causes
> > out-of-bounds reads when accessing memory beyond the range of the
> > node's allotted page array. In particular, when the hfs_bnode_dump
> > function reads through hfs_bnode_read, an offset or length that
> > exceeds the actual size of the node may be passed in.
> > In hfsplus_bnode_dump (inferred from the error call stack), when
> > traversing the records in the B-tree node, an incorrect offset
> > calculation may have been used, resulting in data being read
> > outside the allocated memory. Consider adding stricter bounds
> > checking to the hfs_bnode_read function.
> > We have reproduced this issue several times on 6.15-rc1.
> >
> > If you fix this issue, please add the following tag to the commit:
> > Reported-by: Kun Hu <huk23@xxxxxxxxxxxxxx>, Jiaji Qin
> > <jjtan24@xxxxxxxxxxxxxx>, Shuoran Bai <baishuoran@xxxxxxxxxxxx>
> >
> > ==================================================================
> > BUG: KASAN: slab-out-of-bounds in hfsplus_bnode_read+0x268/0x290
> > Read of size 8 at addr ffff8880439aefc0 by task syz-executor201/9472
> >
> > CPU: 1 UID: 0 PID: 9472 Comm: syz-executor201 Not tainted 6.15.0-rc1 #1 PREEMPT(full)
> > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
> > Call Trace:
> >  <TASK>
> >  dump_stack_lvl+0x116/0x1b0
> >  print_report+0xc1/0x630
> >  kasan_report+0x96/0xd0
> >  hfsplus_bnode_read+0x268/0x290
> >  hfsplus_bnode_dump+0x2c6/0x3a0
> >  hfsplus_brec_remove+0x3e4/0x4f0
> >  __hfsplus_delete_attr+0x28e/0x3a0
> >  hfsplus_delete_all_attrs+0x13e/0x270
> >  hfsplus_delete_cat+0x67f/0xb60
> >  hfsplus_unlink+0x1ce/0x7d0
> >  vfs_unlink+0x30e/0x9f0
> >  do_unlinkat+0x4d9/0x6a0
> >  __x64_sys_unlink+0x40/0x50
> >  do_syscall_64+0xcf/0x260
> >  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> > RIP: 0033:0x7fcd9a901f5b
> > Code: 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 57 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
> > RSP: 002b:00007ffeefb025d8 EFLAGS: 00000206 ORIG_RAX: 0000000000000057
> > RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fcd9a901f5b
> > RDX: 00007ffeefb02600 RSI: 00007ffeefb02600 RDI: 00007ffeefb02690
> > RBP: 00007ffeefb02690 R08: 0000000000000001 R09: 00007ffeefb02460
> > R10: 00000000fffffffb R11: 0000000000000206 R12: 00007ffeefb03790
> > R13: 00005555641a1bb0 R14: 00007ffeefb025f8 R15: 0000000000000001
> >  </TASK>
> >
> > Allocated by task 9472:
> >  kasan_save_stack+0x24/0x50
> >  kasan_save_track+0x14/0x30
> >  __kasan_kmalloc+0xaa/0xb0
> >  __kmalloc_noprof+0x214/0x600
> >  __hfs_bnode_create+0x105/0x750
> >  hfsplus_bnode_find+0x1e5/0xb70
> >  hfsplus_brec_find+0x2b2/0x530
> >  hfsplus_find_attr+0x12e/0x170
> >  hfsplus_delete_all_attrs+0x16f/0x270
> >  hfsplus_delete_cat+0x67f/0xb60
> >  hfsplus_rmdir+0x106/0x1b0
> >  vfs_rmdir+0x2ae/0x680
> >  do_rmdir+0x2d1/0x390
> >  __x64_sys_rmdir+0x40/0x50
> >  do_syscall_64+0xcf/0x260
> >  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> >
> > The buggy address belongs to the object at ffff8880439aef00
> >  which belongs to the cache kmalloc-192 of size 192
> > The buggy address is located 40 bytes to the right of
> >  allocated 152-byte region [ffff8880439aef00, ffff8880439aef98)
> >
> > The buggy address belongs to the physical page:
> > page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x439ae
> > anon flags: 0x4fff00000000000(node=1|zone=1|lastcpupid=0x7ff)
> > page_type: f5(slab)
> > raw: 04fff00000000000 ffff88801b4423c0 0000000000000000 dead000000000001
> > raw: 0000000000000000 0000000080100010 00000000f5000000 0000000000000000
> > page dumped because: kasan: bad access detected
> > page_owner tracks the page as allocated
> > page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 1, tgid 1 (swapper/0), ts 46334294792, free_ts 46209226995
> >  prep_new_page+0x1b0/0x1e0
> >  get_page_from_freelist+0x1649/0x30f0
> >  __alloc_frozen_pages_noprof+0x2fd/0x6d0
> >  alloc_pages_mpol+0x209/0x550
> >  new_slab+0x24b/0x340
> >  ___slab_alloc+0xf0c/0x17c0
> >  __slab_alloc.isra.0+0x56/0xb0
> >  __kmalloc_cache_noprof+0x291/0x4b0
> >  call_usermodehelper_setup+0xb2/0x360
> >  kobject_uevent_env+0xf82/0x16c0
> >  driver_bound+0x15b/0x220
> >  really_probe+0x56e/0x990
> >  __driver_probe_device+0x1df/0x450
> >  driver_probe_device+0x4c/0x1a0
> >  __device_attach_driver+0x1e4/0x2d0
> >  bus_for_each_drv+0x14b/0x1d0
> > page last free pid 1277 tgid 1277 stack trace:
> >  __free_frozen_pages+0x7cd/0x1320
> >  __put_partials+0x14c/0x170
> >  qlist_free_all+0x50/0x130
> >  kasan_quarantine_reduce+0x168/0x1c0
> >  __kasan_slab_alloc+0x67/0x90
> >  __kmalloc_cache_noprof+0x169/0x4b0
> >  usb_control_msg+0xbc/0x4a0
> >  hub_ext_port_status+0x12c/0x6b0
> >  hub_activate+0x9f6/0x1aa0
> >  process_scheduled_works+0x5de/0x1bd0
> >  worker_thread+0x5a9/0xd10
> >  kthread+0x447/0x8a0
> >  ret_from_fork+0x48/0x80
> >  ret_from_fork_asm+0x1a/0x30
> >
> > Memory state around the buggy address:
> >  ffff8880439aee80: 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> >  ffff8880439aef00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > >ffff8880439aef80: 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc
> >                                            ^
> >  ffff8880439af000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> >  ffff8880439af080: 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc
> > ==================================================================
> >
> > thanks,
> > Kun Hu