On Mon, 4 Aug 2025, Steven Rostedt wrote:

> I know we can't change the DCO, but could we add something about our policy
> is that if you submit code, you certify that you understand said code, even
> if (especially) it was produced by AI?

Yeah, I think that's *precisely* what's needed.

Legal stuff is one thing. Let's assume for now that it's handled by the LF
statement, the DCO, whatever.

But "if I need to talk to a human who has a real clue about this code
change, who is that?" absolutely (in my view) needs to be reflected in the
changelog metadata. Because the more you challenge LLMs, the more they will
hallucinate.

If for nothing else, then for accountability (not legal, but factual). An
LLM is never going to be responsible for the generated code in the
"human-to-human" sense. AI can assist, but a human needs to be the one
proxying the responsibility (if he/she decides to do so), with all the
consequences (again, not talking legal here at all).

Thanks,

-- 
Jiri Kosina
SUSE Labs