Em Mon, 28 Jul 2025 13:46:53 -0400 Steven Rostedt <rostedt@xxxxxxxxxxx> escreveu: > On Fri, 25 Jul 2025 13:34:32 -0700 > <dan.j.williams@xxxxxxxxx> wrote: > > > > This touches on explainability of AI. Perhaps the metadata would be > > > interesting for XAI research... not sure that's enough to be lugging > > > those tags in git history. > > > > Agree. The "who to blame" is "Author:". They signed DCO they are > > responsible for debugging what went wrong in any stage of the > > development of a patch per usual. We have a long history of debugging > > tool problems without tracking tool versions in git history. > > My point of the "who to blame" was not about the author of said code, > but if two or more developers are using the same AI agent and then some > patter of bugs appears that is only with that AI agent, then we know > that the AI agent is likely the culprit and to look for code by other > developers that used that same AI agent. > > It's a way to track down a bug in a tool that is creating code, not > about moving blame from a developer to the agent itself. I don't think you shall blame the tool, as much as you you cannot blame gcc for a badly written code. Also, the same way a kernel maintainer needs to know how to produce a good code, someone using AI also must learn how to properly use the tool. After all, at least at the current stage, AI is not intelligent. Artificial "neurons" just sums up values from its neighbors, trying to mimic what we know so far about neurons, which is not perfect. On several aspects, it is not much different than doing an stochastic analysis that would try to converge into a result. The entire process resembles the kind of systems that you could be analyzed using control theory[1], like root locus analysis. Anyone that ever played with that knows that sometimes the system is stable enough to converge to the best results, but the convergence is affected by poles and zeros: sometimes it might converge to a local minimum; sometimes it can end into a zero and diverge, producing completely bogus results. On other words, the one that posted a bad patch is the one to blame, together with the ones that reviewed it. AI is not a replacement for real intelligence. - Btw, if you want to play with that, I suggest using deepseek. Ensure that the DeepThink (R1) is enabled, as it shows how that particular model tries to find a convergence. - Out of curiosity, I asked two AIs for articles relating control theory with LLM: Bhargava, A., Witkowski, C., Looi, S.-Z., & Thomson, M. (2023). What’s the Magic Word? A Control Theory of LLM Prompting. arXiv preprint arXiv:2310.04444. URL: https://arxiv.org/abs/2310.04444 Kevian, D., Syed, U., Guo, X., Havens, A., Dullerud, G., Seiler, P., Qin, L., & Hu, B. (2024). Capabilities of Large Language Models in Control Engineering: A Benchmark Study. arXiv preprint arXiv:2404.03647. URL: https://arxiv.org/abs/2404.03647 Maher, G. (2025). LLMPC: Large Language Model Predictive Control. arXiv preprint arXiv:2501.02486. URL: https://arxiv.org/abs/2501.02486 Zahedifar, R. et al. "LLM-Agent-Controller: A Universal Multi-Agent Large Language Model System as a Control Engineer" URL: https://arxiv.org/abs/2505.19567 Zhang, Y. et al. "Unveiling LLM Mechanisms Through Neural ODEs and Control Theory" URL: https://arxiv.org/abs/2406.16985 Barfield, Woodrow (2021) "A Systems and Control Theory Approach for Law and Artificial Intelligence: Demystifying the 'Black-Box'" URL: https://www.mdpi.com/2571-8800/4/4/41 Zahedifar, R. et al. "LLM-controller: Dynamic robot control adaptation using large language models" URL: https://www.sciencedirect.com/science/article/abs/pii/S0921889024002975 Bhargava, A. "Toward a Control Theory of LLMs" (Blog Post) URL: https://aman-bhargava.com/ai/2023/12/17/towards-a-control-theory-of-LLMs.html I didn't read them (yet). Thanks, Mauro