The launch of ChatGPT blew apart the search industry, and the last few years have seen increasing AI integration into search engine results pages.
In an attempt to keep up with the LLMs, Google launched AI Overviews and has just announced an AI Mode tab.
The expectation is that SERPs will become blended with a Large Language Model (LLM) interface, and the nature of how users search will adapt to conversations and journeys.
However, there is an issue surrounding AI hallucinations and misinformation within LLM- and Google AI Overview-generated results, and it seems to be largely ignored, not just by Google but also by the news publishers it affects.
More worrying is that users are either unaware or prepared to accept the cost of misinformation for the sake of convenience.
Barry Adams is an authority on editorial SEO and works with leading news publishers worldwide through Polemic Digital. He also co-founded the News & Editorial SEO Summit with John Shehata.
I read a LinkedIn post from Barry where he said:
“LLMs are incredibly dumb. There is nothing intelligent about LLMs. They’re advanced word predictors, and using them for any purpose that requires a basis in verifiable facts – like search queries – is fundamentally wrong.
But people don’t seem to care. Google doesn’t seem to care. And the tech industry sure as hell doesn’t care, they’re wilfully blinded by dollar signs.
I don’t feel the wider media are sufficiently reporting on the inherent inaccuracies of LLMs. Publishers are keen to say that generative AI could be an existential threat to publishing on the web, yet they fail to consistently point out GenAI’s biggest weakness.”
The post prompted me to speak to him in more detail about LLM hallucinations, their impact on publishing, and what the industry needs to understand about AI’s limitations.
You can watch the full interview with Barry on IMHO below, or continue reading the article summary.
Why Are LLMs So Bad At Citing Sources?
I asked Barry to explain why LLMs struggle with accurate source attribution and factual reliability.
Barry responded, “It’s because they don’t know anything. There’s no intelligence. I think calling them AIs is the wrong label. They’re not intelligent in any way. They’re probability machines. They don’t have any reasoning faculties as we understand it.”
He explained that LLMs operate by regurgitating answers based on training data, then attempting to rationalize their responses through grounding efforts and link citations.
Even with careful prompting to use only verified sources, these systems maintain a high probability of hallucinating references.
“They are just predictive text from your phone, on steroids, and they will just make stuff up and very confidently present it to you because that’s just what they do. That’s the entire nature of the technology,” Barry emphasized.
This confident presentation of potentially false information represents a fundamental problem with how these systems are being deployed in scenarios they’re not suited for.
Are We Creating An AI Spiral Of Misinformation?
I shared with Barry my concerns about an AI misinformation spiral where AI content increasingly references other AI content, potentially losing the source of facts and truth entirely.
Barry’s outlook was pessimistic, “I don’t think people care as much about truth as maybe we believe they should. I think people will accept information presented to them if it’s useful and if it conforms with their pre-existing beliefs.”
“People don’t really care about truth. They care about convenience.”
He argued that the last 15 years of social media have proven that people prioritize confirmation of their beliefs over factual accuracy.
LLMs facilitate this process even more than social media by providing convenient answers without requiring critical thinking or verification.
“The real threat is how AI is replacing truth with convenience,” Barry observed, noting that Google’s embrace of AI represents a clear step away from surfacing factual information toward providing what users want to hear.
Barry warned we’re entering a spiral where “entire societies will live in parallel realities and we’ll deride the other side as being fake news and just not real.”
Why Isn’t Mainstream Media Calling Out AI’s Limitations?
I asked Barry why mainstream media isn’t more vocal about AI’s weaknesses, especially given that publishers could help themselves by influencing public perception of GenAI’s limitations.
Barry identified several factors: “Google is such a powerful force in driving traffic and revenue to publishers that a lot of publishers are afraid to write too critically about Google because they feel there might be repercussions.”
He also noted that many journalists don’t genuinely understand how AI systems work. Technology journalists who understand the issues sometimes raise questions, but general reporters for major newspapers often lack the knowledge to scrutinize AI claims properly.
Barry pointed to Google’s promise that AI Overviews would send more traffic to publishers as an example: “It turns out, no, that’s the exact opposite of what’s happening, which everybody with two brain cells saw coming a mile away.”
How Do We Explain The Traffic Reduction To News Publishers?
I noted research showing that users do click on sources to verify AI outputs, and that Google doesn’t show AI Overviews on top news stories. Yet traffic to news publishers continues to decline overall.
Barry explained this involves multiple factors:
“People do click on sources. People do double-check the citations, but not to the same extent as before. ChatGPT and Gemini will give you an answer. People will click two or three links to verify.”
Previously, users conducting their own research might click 30 to 40 links and read them in detail. Now they might verify AI responses with just a few clicks.
Additionally, while news publishers are less affected by AI Overviews, they’ve lost traffic on explainer content, background stories, and analysis pieces that AI now handles directly, with minimal click-through to sources.
Barry emphasized that Google has been diminishing publisher traffic for years through algorithm updates and efforts to keep users within Google’s ecosystem longer.
“Google is the monopoly informational gateway on the web. So you can say, ‘Oh, don’t be dependent on Google,’ but you have to be where your users are and you cannot have a viable publishing business without heavily relying on Google traffic.”
What Should Publishers Do To Survive?
I asked Barry for his recommendations on optimizing for LLM inclusion and how to survive the introduction of AI-generated search results.
Barry advised publishers to accept that search traffic will diminish while focusing on building a stronger brand identity.
“I think publishers need to be more confident about what they are and specifically what they’re not.”
He highlighted the Financial Times as an exemplary model because “nobody has any doubt about what the Financial Times is and what kind of reporting they’re signing up for.”
This clarity enables strong subscription conversion because readers understand the specific value they’re receiving.
Barry emphasized the importance of developing brand power that makes users specifically seek out particular publications: “I think too many publishers try to be everything to everybody and therefore are nothing to nobody. You need to have a strong brand voice.”
He cited the Daily Mail, which succeeds through a consistent brand identity, with users specifically searching for the brand name alongside topics, such as “Meghan Markle Daily Mail” or “Prince Harry Daily Mail.”
The goal is to build direct relationships that bypass intermediaries through apps, newsletters, and direct website visits.
The Brand Identity Imperative
Barry stressed that publishers covering similar topics with interchangeable content face existential threats.
He works with publishers where “they’re all reporting the same stuff with the same screenshots and the same set photos and pretty much the same content.”
Such publications become vulnerable because readers lose nothing by substituting one source for another. Success requires developing unique value propositions that make audiences specifically seek out particular publications.
“You need to have a very strong brand identity as a publisher. And if you don’t have it, you probably won’t exist in the next five to ten years,” Barry concluded.
Barry advised news publishers to focus on brand development, subscription models, and building content ecosystems that don’t rely entirely on Google. That may mean fewer clicks, but more meaningful, higher-quality engagement.
Moving Forward
Barry’s outlook, and the reality of the changes AI is forcing on publishing, are hard truths to hear.
The industry requires honest acknowledgment of AI limitations, strategic brand building, and acceptance that easy search traffic won’t return.
Publishers have two options: continue chasing diminishing search traffic with the same content everyone else is producing, or invest in direct audience relationships that provide a sustainable foundation for quality journalism.
Thank you to Barry Adams for offering his insights and being my guest on IMHO.
Featured Image: Shelley Walsh/Search Engine Journal