On Wed, Aug 27, 2025 at 9:21 PM Tim via users <users@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> On Wed, 2025-08-27 at 11:22 +0100, Patrick O'Callaghan wrote:
> > So much for AI ...
> I always say it stands for Artificial Idiot. But when it comes to
> using it with a search engine for results, it might as well stand for
> "Ask an Idiot."
>
> When I search for something and an AI summary appears in the results, I
> often find it's like you've asked an eight-year-old to explain
> something to you that they don't understand. And when it comes to
> technical things, it's just doing a "they say" regurgitation of scraped
> data that often comes from non-expert sources.
Many sites are just SEO clickbait using AI-generated content, with zero
effort to ensure the information is correct. There are, however, a few
very good counterexamples. In one, investigators were looking for a way
to generate a chemical with certain properties, and AI found it in a
journal outside their own field. We may see further advances in
chemistry from AI improving access to relevant literature, but copyright
protections tend to create silos that work against that.
I asked AIs a question based on a paper that pointed out a fundamental
problem with a method published years ago. GPT-3 came up with the answer,
citing the old paper. GPT-5 explained the factor the old paper got wrong,
but you have to ask for citations. Yesterday I read in Ars Technica about
Google's cyclone track prediction project, which outdid the conventional
physics-based models for short-term (a couple of days) track predictions.
I think it is using historical data and picking examples similar to the
current data, essentially sifting through massive data sets for a previous
storm that started out with characteristics similar to the current system.
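Roughly, that "picking similar examples" idea could look like the toy
sketch below (just my guess at the flavor of the approach, not Google's
actual method; the features and data are made up):

    import numpy as np

    # Hypothetical archive: 1000 past storms, each reduced to a feature
    # vector (say initial latitude, longitude, central pressure, forward
    # speed). Random stand-in numbers, not real data.
    rng = np.random.default_rng(0)
    archive = rng.normal(size=(1000, 4))

    # The current storm, described by the same four features.
    current = rng.normal(size=4)

    # "Sifting through" the archive: pick the past storm whose starting
    # characteristics are closest (Euclidean distance) to the current one;
    # its subsequent track would then serve as the forecast analog.
    distances = np.linalg.norm(archive - current, axis=1)
    best = int(np.argmin(distances))
    print(f"closest historical analog: storm #{best}, "
          f"distance {distances[best]:.2f}")

In practice you would want better features and a smarter similarity
measure, but the basic retrieval step is that simple.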
> A while ago I was searching the internet for an answer to how to do
> something with HTML and CSS while avoiding any scripting, and AI results
> kept cropping up that were utterly wrong, every time. And it's getting
> harder and harder to research things when you keep getting crap like
> that presented.
>
> It may well be a better human language recognition algorithm than older
> speech recognition, but it really doesn't know anything or have an
> ability to sort truth from fiction. It's like dealing with conspiracy
> theorists.
>
> But people are treating it like some kind of magic spell. Say the
> right invocation, and make it do something you want, without ever
> learning how to do things for yourself. I don't see this boding well
> for future humanity.
I agree with these three points. In Linux support forums there are users
who blindly apply AI-generated "solutions" without making any effort
to understand the text that comes after "sudo". I expect the same
lack of effort occurs in repair shops that have laid off experienced
workers.
George N. White III