I'm thinking we should treat AI-generated code the same way we would treat subcontracted code. I've worked at companies that outsourced some software development to subcontracting firms. Generally, an on-site coordinator would submit all of the code on behalf of the (most likely underpaid) coders working elsewhere. The understanding was that the coordinator, as a representative of the subcontracting company, was taking on the responsibility (and accountability) for verifying that the content being submitted was functional, non-malicious, and not *known to be* violating anyone's copyright. If it later turned out that someone on their team had been stealing code, the person whose name was on the commit would be held responsible for that violation.

I think we can realistically only hold generative-AI submissions to roughly this same standard: we already trust our contributors to do their due diligence. They remain responsible for what the code they submit does (and will be held accountable if it turns out to be malicious or to violate copyrights or patents).

And, frankly, we have very little ability to detect whether code was AI-generated or written by a human being. If we made rules against GenAI, the practical effect would be that people would simply stop including notes telling us about it. Discouraging transparency won't improve the situation at all.