Brand still wins – and AI proves it.
In 30 seconds
The AI conversation in marketing right now is dominated by slop, and a general sense that originality is harder to find than it used to be.
But buried in that conversation is an interesting (if slightly existential) counter-argument: can AI finally start separating good products from simply well-marketed ones? It’s the age-old “product quality vs marketing” debate, with new AI bells on.
It's a thought worth taking seriously. But it’s misguided – and we have data to prove it. So here’s our deep dive.
Most of the AI conversation right now is about slop. About how the tools that were supposed to liberate us from content drudgery have instead flooded us with hollow, tired output, and how it's getting harder to find original thinking and genuine voice, creativity or effort.
Worse: it’s often other marketers complaining about AI slop through the medium of – you guessed it – AI slop.
“I guess going on LinkedIn is basically asking for slop at this point”
Against that backdrop, there’s an argument that goes: if AI is now drawing more on real human conversations, couldn't it start separating genuinely good products from simply well-marketed ones? Couldn't the era of marketing your way to the top while delivering a mediocre actual product be coming to an end?
i.e. is AI at least killing off sloppy products???
It sounds interesting – and for non-marketers, it’s a welcome respite from slop chat mountain. And the thing is, there's just enough truth in it to make it plausible. But it doesn’t really hold up.
-
The “Discussions and Forums” feature – the widget of Reddit threads, Quora posts, and Facebook group discussions appearing in search results – has been around since 2022. But its scale is now in a different league.
Research across 10,000 product-focused search terms found it present in over 70% of product review results. Reddit appears in 37% of all Google SERPs and in 95% of product review queries specifically. In automotive and e-commerce, the feature fires for over 40% of queries.
Google's justification is written into its own quality guidelines. When it added “Experience” to what was then E-A-T (Expertise, Authoritativeness, Trustworthiness), making it E-E-A-T, it was making explicit that first-hand human experience carries a kind of authority that no amount of content optimisation can manufacture.
And in February 2024, Google paid $60 million a year to license Reddit's data for AI training and search improvements. This is basically the clearest sign you can get that they were restructuring around an updated theory of what makes a good answer.
-
The same logic plays out inside AI answer engines.
An index of over 680 million citations across ChatGPT, Google AI Overviews, Perplexity, Gemini, and Claude covering August 2024 to April 2026 found Reddit topping the list across every major model at roughly 40% of all citations.
In a separate analysis of over 350,000 AI citations across January and February 2026, Reddit's citation volume grew month-on-month even as total retrieval (RAG) citation volume fell. Community platforms are becoming structurally embedded in how AI systems construct answers.
More data supporting this:
Google AI Overviews cite Reddit in 21% of responses.
For Perplexity, Reddit accounted for 24% of all citations in January 2026 — more than any other single source.
UGC and community platforms collectively account for close to half of all AI citations – and the top 15 domains capture 68% of the total citation pool.
-
It’s not just forums, it’s reviews too.
Trustpilot is currently ranked as the 5th most-cited domain globally by ChatGPT. According to recent 2026 data, the platform has seen a massive surge in visibility within AI models, with click-throughs from AI-powered search tools increasing by 1,490% year-over-year.
Likewise, review platforms appear in roughly 34.5% of Google AI Overviews. In fact, review sites are appearing in AIOs more frequently than ever, mainly because Google has massively expanded the reach of AIOs, and because in nearly 40% of cases, Google cites multiple review platforms together to provide a “balanced” perspective.
Elsewhere on Google (you remember the rest of Google, right? The non-AI bit underneath?) the average first page has grown to contain 4.2 different types of SERP features (up from 1.8 in 2019), many of which integrate review snippets.
Which all goes to feed the argument: “the bots are trying to get better at working out what’s actually a good product”
Right?
LLMs are brand engines, not product evaluators
To understand why the "better product wins" theory falls short, you need to understand what these models are doing when they generate a recommendation.
Large language models don't query a database. They’re not a big Excel in the sky, translated into an answer by some infallible, omniscient typist. They generate text by drawing on patterns of association built from enormous amounts of training data. Which means when your brand appears repeatedly in specific contexts (recommended in subreddits, cited on review platforms, mentioned in editorial coverage), the model builds semantic connections between your brand and those contexts.
Frequency matters. Authority matters. Contextual diversity matters. A brand appearing simultaneously across editorial coverage, expert roundups, community discussions, and review platforms – alongside its own web content – builds the language associations that teach the LLM what to say.
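To make the "patterns of association" idea concrete, here's a deliberately crude toy sketch – not how any real model works, and the brands, terms, and scoring formula are all invented for illustration. It counts how often a brand co-occurs with category terms across a handful of mentions, then weights that by how many different source types the brand shows up in:

```python
from collections import Counter
from itertools import product

# Toy corpus of third-party mentions (invented examples).
# Each entry: (source_type, text)
corpus = [
    ("reddit",    "acme is the best budget espresso machine"),
    ("review",    "acme espresso machine reliable after two years"),
    ("editorial", "our pick for espresso this year: acme"),
    ("reddit",    "brandx espresso machine broke in a month"),
]

brand_terms = {"acme", "brandx"}
context_terms = {"espresso", "best", "reliable", "pick", "broke"}

# Count brand-context co-occurrences, and track source-type diversity.
cooccurrence = Counter()
sources = {brand: set() for brand in brand_terms}

for source_type, text in corpus:
    words = set(text.split())
    for brand, ctx in product(brand_terms & words, context_terms & words):
        cooccurrence[(brand, ctx)] += 1
    for brand in brand_terms & words:
        sources[brand].add(source_type)

# A crude "association strength": frequency weighted by source diversity.
def strength(brand, ctx):
    return cooccurrence[(brand, ctx)] * len(sources[brand])

print(strength("acme", "espresso"))    # frequent and diverse → 9
print(strength("brandx", "espresso"))  # infrequent, single source → 1
```

The brand mentioned often, across diverse source types, ends up with the stronger association – which is the (vastly simplified) intuition behind why frequency, authority, and contextual diversity all feed AI visibility.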
And various studies have been able to measure the practical consequences.
Brand authority signals — web mentions, branded search volume, third-party presence — correlate 3x more strongly with AI visibility than traditional backlink metrics.
Distributing content to third-party outlets drives a 239% median lift in AI visibility.
But not all third-party editorial is equal. A 2026 study found that syndicated press releases account for only 0.04% of AI search citations. AI ignores low-quality duplicate content, whereas true editorial coverage in news publications makes up 81% of all news-related AI citations.
LLMs don't choose brands, they repeat patterns. The brands showing up in AI answers are the ones that became the strongest patterns in the data those models trained on.
OK, but what about the product quality argument?
Getting there!
The question is: if AI – or even just Google SERPs – is pulling from authentic human conversation and reviews rather than brand-published content, isn't that conversation at least partly a record of product quality?
Up to a point. LLMs learn from criticism and complaints as well as endorsements, meaning brands with significant negative associations are more likely to be mentioned with caveats or even dropped from recommendations. When your own site claims you're the best but the man on the street says you're average, AI does appear to notice. You can't SEO your way past a genuine reputation problem.
But what is a reputation, really?
LLMs aren’t just computers producing objective outputs. And neither are we.
History lesson incoming...
In 1994, neuroscientist Antonio Damasio published Descartes' Error, drawing on years of studying patients who had suffered damage to the emotional regions of their brains. These patients could still analyse, calculate, weigh options perfectly. What they couldn't do was decide. One spent thirty minutes on whether to use a blue or black pen.
Damasio's conclusion: emotion doesn't interfere with decision-making. It is decision-making.
"We are not thinking machines that feel. We are feeling machines that think."
His somatic marker hypothesis holds that every experience gets tagged with an emotional marker before conscious reasoning even engages. We decide emotionally, then rationalise.
Which means that when people discuss brands in the places AI now treats as primary sources, they are not filing objective quality reports. They're describing how a brand made them feel. The trust built over years. The moment it let them down. The sense of being seen, or not. It’s actually emotional experience dressed up as product opinion.
So despite the vastness of its dataset, the record AI reads isn't a quality audit. It's closer to a brand ledger.
Brand compounds
If you’re reading this and are prepared to invest in your brand for the era of AI, you’re in luck – as you’re still early.
AI citation rewards long-term brand investment. Brands that already had strong digital presence when LLMs were trained now benefit from a self-reinforcing cycle: more AI visibility generates more human discussion, which creates more content, which feeds future model training. The time to get on board was, ideally, yesterday. But today’s your next best option.
Brands in the top quartile for web mentions earn ten times more AI Overview citations than those a tier below. Data also shows community-active domains are four times more likely to be cited.
It's not that product quality is irrelevant. A brand with genuinely poor reviews accumulates a signal that AI systems will surface. But the brands winning in AI discovery aren't necessarily winning because they always have objectively superior products. They're winning because they've done the hard, long work of becoming part of the cultural fabric of their category in the publications and conversations that AI systems are trained on and keep drawing from.
We don’t always choose and love the things that are objectively best for us. Because choosing isn’t objective.
And so, when you ask AI to make the choice for you, its output is compounding what was actually an emotional decision. It’s just that because it’s a computer, you think you’re getting cold, hard facts.
Where this leaves the brand vs. performance debate
For a while, the performance-first view of marketing had a persuasive argument on its side. You could track a click. You couldn't track the feeling that made someone search for your brand name three weeks after reading a piece of editorial coverage.
Well, no one puts brand in the corner anymore. AI adoption is through the roof. The behaviour is shifting, and as it does, the value of accumulated brand conversation is becoming visible in a way it never really was before.
Every piece of coverage that names you as an authority; every recommendation; every community where your brand comes up naturally... these aren’t easy spaces to influence, but each one feeds the systems that are now shaping how you and your category get discovered. And brand is the core that shapes them.
The brand thinkers who said this mattered all along – Damasio on how decisions are actually made, Byron Sharp on mental availability, the positioning school on owning a space in your audience's mind – were talking about the same underlying truth from different angles. Build associations, in enough places, with enough people, and you become the instinctive answer when the question arises.
AI hasn't overturned that. It's given us a new way to see whether we've done it.
Bottle is an integrated marketing agency that thinks about brand, SEO, social, and PR as interconnected parts of modern discovery, not separate channels. If you're thinking about how your brand holds up across the new discovery landscape, we'd love to talk.