On Mon, Jul 28, 2025 at 02:13:01PM +0100, Lorenzo Stoakes wrote:
> On Mon, Jul 28, 2025 at 08:45:19AM -0400, Sasha Levin wrote:
> > > So at all times I think ensuring the human element is aware that they need
> > > to do some kind of checking/filtering is key.
> > >
> > > But that can be handled by a carefully worded policy document.
> >
> > Right. The purpose of this series is not to create a new LLM policy but
> > rather to try to enforce our existing set of policies on LLMs.
>
> I get that, but as you can see from my original reply, my concern is more
> about the non-technical consequences of this series.
>
> I retain my view that we need an explicit AI policy doc first, and ideally
> this would be tempered by input at the maintainers summit before any of
> this proceeds.
>
> I think adding anything like this before that would have unfortunate
> unintended consequences.
>
> And as a maintainer who does a fair bit of review, I'm likely to be on the
> front lines of that :)

Oh, apologies, I'm not trying to push for this to be included urgently:
if there's interest in waiting on this until after the maintainers
summit/LPC, I have no objection to that.

My point was more that I want to get this series into a "happy" state so
that we have it available whenever we come up with a policy.

I'm thinking that no matter what we land on in the end, we'll need
something like this patch series to try to enforce it on the LLM side
of things.
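
To make that concrete, one shape the enforcement side could take is a
small scripted check at patch-intake time. This is only a sketch, and
the "Co-developed-by:" trailer name and the trigger phrases below are
hypothetical placeholders, not something this series actually defines:

#!/usr/bin/env python3
# Hypothetical sketch only: flag a commit whose message suggests LLM
# involvement but carries no attribution trailer. The trailer name
# "Co-developed-by:" and the trigger phrases are assumptions made up
# purely for illustration.
import subprocess
import sys

def commit_message(rev):
    # "git log -1 --format=%B <rev>" prints the full commit message.
    return subprocess.run(["git", "log", "-1", "--format=%B", rev],
                          capture_output=True, text=True,
                          check=True).stdout

def check(rev):
    msg = commit_message(rev)
    mentions_llm = any(s in msg.lower()
                       for s in ("llm", "ai-assisted", "generated with"))
    if mentions_llm and "Co-developed-by:" not in msg:
        print("%s: mentions LLM assistance but lacks an attribution "
              "trailer" % rev)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1] if len(sys.argv) > 1 else "HEAD"))

You'd run it against a commit from inside a kernel tree (e.g. on HEAD).
The point is just that whatever wording the policy doc ends up with,
the mechanical side of it can be scripted.
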
--
Thanks,
Sasha