On Wed, 30 Jul 2025 13:46:47 -0400, Sasha Levin <sashal@xxxxxxxxxx> wrote:

> >> Some sort of a "traffic light" system:
> >>
> >> 1. Green: the subsystem is happy to receive patches from any source.
> >>
> >> 2. Yellow: "If you're unfamiliar with the subsystem and using any
> >>    tooling to generate your patches, please have a reviewed-by from a
> >>    trusted developer before sending your patch".
> >>
> >> 3. No tool-generated patches without prior maintainer approval.

That sounds like a terrible idea. I mean, maintainers should be green for
good patches and red for bad ones. It doesn't matter whether they were
aided or generated by AI or $TOOL. In the end, whoever submits a patch
must be able to properly understand, describe and debug it, and must also
be able to test it in real life before submitting.

AI can do good things, but it can also do bad things. I'd say that anyone
using it should double-check the code at least twice, looking for any
hidden bugs.

I've been doing some experiments myself: sometimes an LLM can quickly
point out something broken, do root cause analysis, complete a TODO
requirement, and even write unit tests and code. However, sometimes the
AI starts to "hallucinate"(*), pointing to things that don't exist, like
inventing fields in structures and command line arguments that don't
exist (it likely inferred the names from projects that could be using
similar patterns/goals).

(*) AI being a statistical tool, the correct term is to diverge.

> > Perhaps. Of course there's the Coccinelle scripts that fix a bunch of
> > code around the kernel that will likely be ignored in this. But this
> > may still be a good start.

This is something that maintainers don't want: yet another tool that lets
newbies, wanting their microsecond of fame by getting patches merged,
start sending stuff that wasn't tested and doesn't bring any value.

Maybe we can add a text about that.

Thanks,
Mauro