On Tue, Jul 29, 2025 at 02:17:10PM +0800, Edward Adam Davis wrote:
> syzbot reports data-race in fat32_ent_get/fat32_ent_put.
>
> CPU0(Task A)                        CPU1(Task B)
> ====                                ====
> vfs_write
>  new_sync_write
>   generic_file_write_iter
>    fat_write_begin
>     block_write_begin               vfs_statfs
>      fat_get_block                   statfs_by_dentry
>       fat_add_cluster                 fat_statfs
>        fat_ent_write                   fat_count_free_clusters
>         fat32_ent_put                    fat32_ent_get
>
> Task A's write operation on CPU0 and Task B's read operation on CPU1 occur
> simultaneously, generating a race condition.
>
> Add READ/WRITE_ONCE to solve the race condition that occurs when accessing
> FAT32 entry.

Solve it in which sense?  fat32_ent_get() and fat32_ent_put() are already
atomic wrt each other; neither this nor your previous variant changes
anything whatsoever.  And if you are talking about the results of
*multiple* fat32_ent_get() calls, with some assumptions made by
fat_count_free_clusters() that somehow get screwed by the modifications
from fat_add_cluster(), your patch does not prevent any of that (not that
you explained what those assumptions would be).

Long story short - accesses to individual entries are already atomic wrt
each other; the fact that they happen simultaneously _might_ be a symptom
of insufficient serialization, but neither version of your patch resolves
that in any way - it just prevents the tool from reporting its suspicions.
It does not give fat_count_free_clusters() a stable state of the entire
table, assuming it needs one.  It might, at that - I hadn't looked into
that code since way back.  But unless I'm missing something, the only
thing your patch does is making your (rather blunt) tool STFU.

If there is a race, explain what sequence of events leads to incorrect
behaviour and explain why your proposed change prevents that incorrect
behaviour.
Note that if that behaviour is "amount of free space reported by statfs(2)
depends upon how far the ongoing write(2) has got", it is *not* incorrect -
that's exactly what the userland has asked for.  If it's "statfs(2) gets
confused into reporting an amount of free space that wouldn't have been
accurate for any moment of time (or, worse yet, crashes, etc.)" - yes, that
would be a problem, but it could not be solved by preventing simultaneous
access to *single* entries, if it happens at all.