We’re watching a dangerous cycle take shape on the internet.
AI tools generate content. Some of it is accurate. A lot of it is not. That content gets published online.
Then other AI systems crawl the web, pick up those articles, and treat the information as fact.
Eventually, the false claims get repeated, repackaged, and redistributed across more content platforms and tools.
It creates a vicious cycle where fiction becomes fact, not because it is true, but because it has been published often enough to seem that way.
And it raises a serious question: what will this mean for the future of the web?
This isn’t theoretical. It’s already happening.
Patrick Hathaway recently posted on LinkedIn about a fake article comparing Screaming Frog and Sitebulb.

The piece had all the hallmarks of a well-researched comparison.
It listed pricing, global user stats, platform compatibility, and even included a ridiculous-looking chart (likely AI-generated as well).
There was just one problem. None of it was true.
Either whoever wrote it had no idea what they were talking about or, more likely, it was AI-generated from a few scraped pages and a pile of hallucinated filler.
Now that article is live. Which means it can be crawled, indexed, and used to train the next wave of AI content.
From there, it will likely be quoted in other articles, pulled into AI answers, and referenced by unsuspecting users who assume it must be true because it looks legit.
And that’s how this all spirals out of control.
Once AI starts learning from its own lies, there is no quality control. Garbage in, garbage out.
The next generation of content tools will cite false facts with confidence. And readers will have no easy way to tell what is real and what is made up.
This isn’t just a content problem. It’s a trust problem. And it’s growing fast.
And when misinformation spreads widely enough, even people who do know better begin to second-guess themselves.
AI companies are trying to ground their systems' answers in reliable sources and trust-based signals. But the internet is being flooded with AI content faster than it can be cleaned up.
And when the foundation is corrupted, everything built on top of it becomes harder to trust.
It is also making life harder for the people who actually care about accuracy.
The content creators who take time to research, write, and verify are being drowned out by an endless stream of low-effort noise.
And as they lose incentive to publish, the signal-to-noise ratio only gets worse.
What This Means for the Future of Links
We are heading into a new era where “getting it from the source” might be the only way to stay grounded.
Because if this cycle continues, trust will become the most valuable currency on the internet.
And like any scarce resource, it will not be easy to come by.
I’ve said this before, and it’s becoming even more true now: links are going to play a bigger role moving forward.
As AI-generated content becomes harder to evaluate objectively, search engines won’t be able to rely solely on what a page says.
They’ll need to look at outside trust signals.
And that means putting more weight on links from trusted, authoritative sites to help separate real expertise from noise.
If content alone can no longer be trusted at face value, then the authority of the sites that vouch for it will be one of the few signals left that search engines can trust.
In a web full of AI noise, real links might be the last word you can trust.