A Super-Intelligent Electronic Mind, Confidently Inaccurate
Artificial Intelligence (AI) applications do not learn through experience and human interaction the way humans do. They learn from the vast amounts of information available on the internet: books, articles, videos, social media posts, and more.
When AI tools—specifically ChatGPT—were first launched, people thought we were witnessing a knowledge revolution and that science would take a tremendous leap forward. There was talk of a super-intelligent electronic mind capable of analyzing information faster than any human, and of knowledge taking new and important directions.
But the truth is that the other side of the story is far more troubling: generative AI has the potential to damage the internet irreversibly. How? Chat models are typically tuned to maximize user satisfaction, which can push them toward agreeing with whatever we say. The result is AI that is confidently inaccurate.
In pursuit of its goal of keeping us pleased and satisfied, it may sometimes use false information to reinforce the narrative we present to it. It might say, “No, be careful—this is wrong,” but if you insist, “Don’t say it’s wrong; give me evidence,” it will apologize and then provide fabricated evidence.
Since the internet is now full of pages and websites with content generated entirely by AI, human-written knowledge has become mixed with AI-written knowledge.
So what’s the problem? The problem is that AI produces false information, fabricates sources, and invents knowledge. As a result, instead of learning from human-generated knowledge and building on it, AI increasingly trains on AI-generated content, its own output, treating it as a source of new information.
Unfortunately, this has led to a distorted knowledge base, because the entire AI ecosystem is now plagued by knowledge problems: incorrect sources, inaccurate claims, and imprecise inferences.
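Researchers have studied this feedback loop, models training on their own synthetic output, under the name "model collapse." The toy simulation below is purely an illustrative sketch, not a model of any real AI system: it repeatedly fits a simple statistical model to samples drawn from the previous generation's fitted model, standing in for a model trained only on the previous model's output.

```python
import random
import statistics

random.seed(0)

# Generation 0: "human-written" data drawn from a known distribution
# (mean 0.0, standard deviation 1.0).
data = [random.gauss(0.0, 1.0) for _ in range(500)]
mu, sigma = statistics.mean(data), statistics.stdev(data)

history = [sigma]
for generation in range(20):
    # Each generation "trains" only on output produced by the previous one:
    # sample from the fitted model, then refit the model to those samples.
    synthetic = [random.gauss(mu, sigma) for _ in range(500)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    history.append(sigma)

print(f"true sigma: 1.000, after 20 self-trained generations: {sigma:.3f}")
```

Because each generation fits only a finite sample of the previous generation's output, the estimated spread wanders randomly away from the true value, and over enough generations it tends to degrade rather than recover; the exact drift differs from run to run, which is the point: nothing anchors the chain back to the original human data.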



