
How much would you pay for a calculator that was wrong 30% of the time?



Photo above - if AI hallucinations are bad, does that mean human hallucinations are good?

Big-shot investors are bailing on AI stock darlings (see the BBC link below). Yet my email inbox is full of spam telling me that Nvidia, Bitcoin, and gold are going to the moon in 2026. Netflix monthly subscription fees too, apparently. Mine seems to have doubled while I was watching Stranger Things.

What’s been missing from the “AI is great” versus “AI will crash” debate? An analysis of product quality and the theoretical size of the customer base. The article at the bottom isn’t perfect, but it is compelling: AI takes mind-blowing amounts of electricity, data, and expensive chips, yet yields diminishing returns with each successive generation. The opposite of Moore’s law about computing capability. (Google it.)

The low-hanging fruit has already been picked.

The assumption that more data and more chips will always drive exponentially better AI performance is (according to the authors) “a flaw hiding in plain sight”. Their specific example:

“If I told you that my baby weighed 9 pounds at birth, and 18 months later it had doubled in weight,” Marcus posits, “that doesn’t mean it’s going to keep doubling and become a trillion-pound baby by the time it goes to college.”

AI experts claim that as long as you keep giving their software more food (data), their baby will naturally keep growing, unlike human babies, which would die from obesity. Those experts are confident that AI can never choke itself to death on excess.
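To put rough numbers on Marcus’s analogy (my own back-of-the-envelope arithmetic, not his): heading to college at 18 means about twelve 18-month doublings, and even taking the trend that literally is ridiculous.

# My arithmetic, not Marcus's: take the doubling trend literally.
# A 9 lb newborn that doubles in weight every 18 months until age 18
# goes through 12 doublings.
birth_weight_lbs = 9
doublings = (18 * 12) // 18        # 18 years of 18-month doublings = 12
college_weight_lbs = birth_weight_lbs * 2 ** doublings
print(college_weight_lbs)          # 36864 lbs

The exact number doesn’t matter. The point is that early growth curves flatten out, and extrapolating them forever is exactly the flaw the article is describing.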

That’s not the main problem, however. The main problem is the hallucination rate. After all those yearly LLM upgrades and advances, the error rate (ChatGPT-4) is still an astonishing 28.6%. It routinely invents fake sources, cites articles that never existed, gets established dates wrong, and assumes a $hitload of facts not in evidence. Yesterday I caught Google’s AI claiming that Tuesday is New Year’s Eve (12/31/25).

Let me repeat – the wrong answer 28.6% of the time. No wonder companies are giving it away for free. You couldn’t charge money for a desktop calculator with this error rate, either.
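For anyone who wants to see how fast a 30% miss rate compounds, here’s a quick sketch (my arithmetic, assuming each answer is independently right 70% of the time, which is a simplification, not a figure from the article):

# How often is a "70%-accurate calculator" right across a whole chain of answers?
# Assumes each answer is independent -- a simplification, not a claim from the article.
per_answer_accuracy = 0.70  # i.e., wrong ~30% of the time

for n in (1, 3, 5, 10):
    all_correct = per_answer_accuracy ** n
    print(f"{n} answers in a row, all correct: {all_correct:.1%}")

# Prints roughly: 70.0%, 34.3%, 16.8%, 2.8%

String five answers together and you’re right about one time in six. That’s the calculator nobody would pay for.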

I’m just sayin’ . . .



We might finally know what will burst the AI bubble | BBC Science Focus Magazine
dancingtongue · 80-89, M
It's the old aphorism: power corrupts, and absolute power corrupts absolutely. Whoever came up with the term Artificial Intelligence was the ultimate grifter, since it implies actual intelligence when it is really just an increased ability to sort and regurgitate data faster and more conveniently. And we all know how much of the "data" on the Internet is flawed, if not outright wrong, no matter which ideological extreme you may be viewing it from.

There is no creative thought; no rational analysis. No intelligence involved.

In the early days of IT, there was a term, GIGO: Garbage In, Garbage Out. Label it Intelligence if you want, but you are still indiscriminately sorting garbage.
SusanInFlorida · 31-35, F
@dancingtongue upvoted.

AI godfathers are urgently trying to find a way to limit AI "training" to actual human-created material and dodge the mountains of stuff churned out by other AI systems.

As if human-created text were objective and factual.

Reddit signed a huge contract licensing its user-generated content to Google for AI training (link at bottom).

If Google's AI ends up racist, white supremacist, antisemitic, and fascist, we'll know how it happened.

https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/