Why AI Makes Google’s E-E-A-T Focus More Important

So it turns out Google doesn’t like “commodity content”, and rewards content that’s original and interesting in search and AI results.

Give it half a second’s thought and this was always going to be the direction Google was going to take with its AI search.

Google’s whole thing was helping us find the valuable parts of the internet.

But when something – in this case content – can be mass produced, its perceived value goes down.

If mass-produced AI content takes over the web, then more genuinely original content becomes harder to find – and (relative) scarcity or genuine quality tends to create value in a sea of mass-produced “good enough” products.

(This is why a tailored woollen suit costs so much more than one made from synthetic materials and stitched in a sweatshop – the latter may be functional, but it tends to fall apart rapidly, and can also make you look bad if you pretend you can’t tell the difference.)

Where Google’s value lies

If Google can help us find that more valuable original, insightful, *human* content, Google continues to have value for us.

This is why their focus on E-E-A-T – Experience, Expertise, Authoritativeness, and Trustworthiness – made sense in the age of search, and it makes even more sense in the age of GenAI, where awareness of the questionable trustworthiness of AI output is increasingly front of mind.

They were never going to take the arrival of GenAI lying down, and they were always going to come back to finding ways to cut through the mass of average material out there to help us find the really good stuff. That’s their whole thing.

What makes a sensible AI strategy?

It’s also notable that while they’ve been making a lot of effort to make Gemini and the rest of their AI suite substantially better over the last couple of years (after a poor start with Bard and early AI search results), Google’s most distinctive AI product – NotebookLM – focused on providing verifiable citations from clear sources, rather than just making stuff up.

Google’s strategic need from their AI efforts has been clear for years, even if they’ve had some wobbles along the way – focus on utility. Meanwhile, OpenAI’s has largely consisted of throwing features around the place to see what sticks, and rapidly ditching what doesn’t.

The launch of ChatGPT (then running on GPT-3.5) may have led Google to scramble to catch up, but they’ve not deviated from their core objective. They’re not moving fast and breaking things, but moving deliberately and adapting their core offering to fit the new environment.

It’s something quite a few other companies could learn from.

Why New AI Writing Tells Emerge and Spread

“Quietly” is quietly becoming a big GenAI copy tell, and that’s more interesting than you think.

(It may not actually be very interesting – but that’s what AI would tell you, because “more interesting than you think” is another GenAI linguistic meme it’s now nearly impossible to escape.)

The problem isn’t AI writing

This is not another rant about GenAI writing patterns. I personally hated the em-dash long before it was cool – not its use as a grammatical tool, which I use all the time, but its ugly aesthetics.

The point is that it used to take months, if not years, to notice trends in headlines and framing devices – now they’re shifting far, far more rapidly.

This started with the BuzzFeed effect, more than a decade ago – everything was suddenly clickbait or a listicle, usually with an odd number of items. The writing style even of newspapers of record shifted towards ever more chatty informality.

Suddenly every media brand sounded like a relatively smart Californian trying to sound dumber than they are.

The issue is systemic

GenAI has been trained on this stuff.

And because this kind of content was designed largely to cut through social and search algorithms via a brute force attack – combined with test, learn, repeat until false – it was produced in inordinately vast quantities, spamming the system.

And because LLMs are probabilistic, and they’re trained from the internet, this kind of annoyingly-formulated content is a core part of their training data.

Pattern recognition drives addictive behaviour

This kind of copy is designed to pique intrigue, encourage engagement and clicks, and trigger a dopamine response when the (barely mysterious) mystery of what the hell the headline is talking about is revealed – either telling you something new or making you feel smarter for having guessed the answer.

It’s designed to suck you in, and keep you coming back.

There was a lawsuit about this recently. Meta and YouTube lost, found liable for designing their platforms to suck users in and get them hooked.

GenAI is the output of a pattern recognition system. These are patterns it has recognised.

Now it’s doing its own equivalent of test, learn, double down and iterate to find new formulas that will suck in intrigue- and dopamine-hungry brains.

And so headlines written by AI – a great use case for the media – are all starting to converge into similar patterns again. Just as they did a decade ago when BuzzFeed disrupted the industry and turned almost all newspapers on the planet just that little bit dumber.

This is how language and culture have always evolved. The process just seems to be accelerating.

The AI Productivity Paradox

The ability of AI to produce paradoxes continues to fascinate me.

One recent survey found that workers lose the equivalent of 51 working days a year to technology friction – yet people who use AI effectively save 40–60 minutes a day.

The same survey found that only 9% of workers trust AI for complex, business-critical decisions, compared with 61% of executives. After the recent Wall Street Journal poll showing a similar split between senior management and staff, this is starting to look like a pattern.

And, to be honest, I can see both sides.

Why AI Often Looks Better to Executives Than Employees

For senior leaders, GenAI is often genuinely useful. If you want a high-level overview or a summary to help you orientate yourself and set direction, it can be superb.

But for the people doing the detailed work, the output frequently looks good enough only if you don’t look too closely.

Yet the closer you look, the more probabilistic problems appear: missing caveats, vague generalisations, invented facts, sentences that sound solid when skimming but mean nothing.

When details matter, getting to something usable with even the best GenAI tools can take dozens of rounds of amends and refinement. It’s not hard to see why many staff feel the technology is creating as much friction as it removes.

Why Reliable AI Needs More Structure

What’s interesting is that the newest attempts to make these systems more reliable seem to point in exactly the same direction.

The leaked Claude Code system prompt appears to work so well largely because it surrounds the model with multiple layers of contextual constraint and instruction.

Gary Marcus has argued for years that something like this – closer to his preferred “neurosymbolic” approach – is the only plausible route to reliable AI.

Meanwhile, Elin N. has proposed an alternative approach she calls “substrate engineering”: tightly controlling the language, context and structure around a model to produce much more consistent results.

In other words, the more reliable these systems become, the less they seem to work like magic and the more they seem to depend on carefully constructed contextual scaffolding.

The Catch-22 at the Heart of AI Adoption

Most workers do not yet have the time, knowledge or support to build that scaffolding for themselves.

Yet without the detailed knowledge of the people actually doing the work, the scaffolding often is not good enough.

Which may help explain why the promised productivity gains have yet to emerge.

Getting the best results from GenAI increasingly seems to require expertise in both the technology itself and the domain you are using it to help with.

The people most sceptical of these tools may therefore also be the people most needed to make them work.

Review: Landscape and Memory, by Simon Schama

4/5 stars

This is a big, strange, frequently fascinating, but oddly disjointed book. Impressionistic history, not narrative. It’s also far longer than the page count suggests – a huge, heavy book that needs two hands to hold even in paperback.

Effectively a collection of essays that combine to make up one big essay, it jumps around in places and time as it explores Western civilisation’s relationship with the landscapes in which that civilisation has developed.

Yet this is a bit of a misrepresentation, as really the focus is primarily on the 18th and 19th centuries, as the conscious awareness of landscape as a thing started to emerge. And primarily via England, France, the United States, and Germany / the Holy Roman Empire. Other countries do get a look-in, but these four dominate.

It’s at times more lyrical memoir or art criticism than cultural history, with the schema and structure and choices of what to cover making sense only to its author – making me wonder how on earth Schama managed to get this commissioned, given it came pretty early in his career, five years before he became a household name via his TV work. It feels more like the kind of self-indulgent passion project with which someone famous is rewarded to get them to produce something a bit more commercial.

But there’s still a lot here to like. For me, it’s best when it delves into myth and legend – though it doesn’t do this as much as I think is warranted, or as much as I’d have liked, given how good Schama is on myth when he does write about it:

“how much myth is good for us? And how can we measure the dosage? Should we avoid the stuff altogether for fear of contamination or dismiss it out of hand as sinister and irrational esoterica that belong only in the most unsavory margins of ‘real’ (to wit, our own) history?

“…The real problem… is whether it is possible to take myth seriously on its own terms, and to respect its coherence and complexity, without becoming morally blinded by its poetic power. This is only a variation, after all, of the habitual and insoluble dilemma of the anthropologist (or for that matter the historian, though not many of us like to own up to it): of how to reproduce ‘the other’, separated from us by space, time, or cultural customs, without either losing ourselves altogether in total immersion or else rendering the subject ‘safe’ by the usual eviscerations of Western empirical analysis.

“Of one thing at least I am certain: that not to take myth seriously in the life of an ostensibly ‘disenchanted’ culture like our own is actually to impoverish our understanding of our shared world.” (p.134)

And (much) later, concluding the thought with the closest the book has to an explanation of Schama’s aim in writing it:

“it seems to me that neither the frontiers between the wild and the cultivated, nor those that lie between the past and the present, are so easily fixed. Whether we scrambled the slopes or ramble the woods, our Western sensibilities carry a bulging backpack of myth and recollection… The sum of our pasts, generation laid over generation, like the slow mold of the seasons, forms the compost of our future. We live off it.” (p.574)

Appropriately enough this book is a rambling affair, following paths that make little sense as you wander them. But gradually the intent of the person who’s staked out those paths starts to make some kind of sense – as with an Impressionist painting, the subject of which can only be seen when you take a few steps back.

Here, the details are so dense, so varied, you’re better off with your nose close to the canvas – the parts work better on their own rather than summed into a whole.

Review: Becoming a Philosopher: Spinoza to Sartre, by Jonathan Rée

4/5 stars

An excellent companion to Rée’s superb Witcraft, his history of how philosophical ideas made their way into English (often with a considerable delay). The chapters here on Kierkegaard and Sartre neatly fill some gaps in that earlier book’s narrative, as it (mistakenly and frustratingly, in my view) ended the story largely with Wittgenstein. (Yes, Kierkegaard was earlier, but didn’t get translated into English until the early-mid 20th century.)

The introductory interview was also a nice touch, with Rée’s dislike of histories of philosophy – and especially of Bertrand Russell’s, and of Russell more broadly – an entertaining, educated rant that helped shift my perspective on what has become one of my favourite genres of book over the last few years. It turns out it’s not just me who sometimes, when reading the original works rather than someone else’s summary of them, struggles to understand and needs to re-read paragraphs repeatedly – it was very reassuring to hear that the same is true for Rée.

Philosophy is hard, basically. Intellectual biographies and histories of philosophy may make it more accessible – but the point is philosophy is all about the act of thinking, not just understanding ideas.

This feels like a particularly useful insight in the age of GenAI, when it’s easier than ever to find a summary of an idea, and to have someone (albeit a bot) explain a complex concept in simple terms. This may be a shortcut to understanding, but sometimes this can mean your understanding is only superficial – by reaching your knowledge via an intermediary, rather than working at it yourself, you’re likely to be missing nuances and details, as well as to be picking up received wisdom and interpretative assumptions from other people, rather than determining your own understanding.

Taking shortcuts via other people’s interpretations isn’t always a bad thing, by any means – but it’s worth being aware of what you may be missing by doing so. I’m probably never going to read Heidegger’s Being and Time or Sartre’s Being and Nothingness in English, let alone in the original German and French. I’ve always known I’m going to be missing something as a result – the summaries of these books that I *have* read have convinced me there are aspects of both I’d find fascinating. But Rée’s emphasis on taking the time to digest philosophical works, to ruminate on them, to make the effort to truly understand them has given me pause.

Much to think about here, in other words – not bad for what is at its core a collection of book reviews.

Beyond SEO: What Makes Content Valuable in an AI Era

A bad photo of a good slide on what makes content valuable in an AI era – rarity, clear IP control, quality, domain-specific content, and continuity – from Kevin Anderson at the inaugural Source Code event last night.

A successor to the much missed Hack/Hackers series looking at how tech and journalism can come together to do great things, it was unsurprisingly dominated by conversations about AI.

The point about what is valuable about the content we produce was also core to my old colleague Steven Wilson-Beales’ session on SEO / GEO / AEO / AIO / whatever you want to call it, and what a “zero click” web could look like in practice.

Key points:
– You need differentiation
– You need to add value
– You need to be accessible, relevant, and credible

It’s almost as if E-E-A-T is still a thing!

Also, the lesson we should all have taken from the last decade and a bit of chasing search and social algorithms is simple – diversify.

Don’t get over-reliant on any one traffic source. Don’t chase the algorithm, because the algorithm is changing faster than ever – and with AI search, will increasingly adapt its findings to every individual.

And a top tip – given AI tools have been trained on existing content, you need to take a careful look at your archives. If they don’t answer the potential needs of an AI bot in query fan out mode, they may need an update.

But the absolute key point – and this speaks to a lot of the work I’ve been doing behind the scenes lately – is that it’s no longer enough to focus your SEO / GEO efforts on optimising individual pages.

You need to see your content as part of a broader system – because the bots are no longer looking for just one page to rank at the top of a list, they’re looking for the right information to answer the query. If they can’t get it from you, they’ll get it from someone else. (Or just make it up…)

Why the Best Strategies Often Sound Too Simple

Apparently a new Cornell University study has found that workers who use / fall for corporate bullshit are worse at their jobs.

This brought back fond memories of the Bullshit Bingo tracker we used to keep to try and steer clients (and ourselves) away from jargon when working on B2B projects back in my Group SJR days…

Simple, jargon-free language is almost always the best option if you want your message to be understood – but it can be hard to get it past approvers, because the more you simplify the language, the clearer the strategic recommendations become.

For some, this clarity feels like a risk – because the best strategies tend to be very simple, once you strip them of all the linguistic fluff. This is where and why business bullshit creeps in – to make the clear seem complicated, so the person presenting seems like they’re better value for money.

Of course, what this all misses is that devising the strategy *is* the easy bit (relatively). The hard part is getting others on board to start rolling it out, and to ensure the organisation as a whole doesn’t just adopt it as a mantra, but understands and acts on it.

This is why strategic development needs to take its time – the conversations and debates that inform a strategy are the first step towards helping the broader organisation accept it.

Put lots of jargon in your explanations and you’re creating barriers to understanding and adoption.

But equally, there’s always a risk that someone will call you on it – and reveal that underneath all the convoluted wording, you’re really not saying much of substance. That’s surely a far bigger reputational risk than showing you have the insight to cut through to the heart of the matter with a clear, simple strategic recommendation.

Why Brand Momentum Is a Structural Problem, Not a Media One

Thinking of media channels as cognitive environments – shaped by context, attention and mode of consumption – is a useful perspective shift, from this piece by Faris Yakob, via WARC.

(Yakob’s table maps the attention level, purpose and typical media portfolio of different modalities.) I also like his framing of modality (how something is experienced), momentum (how it builds), and moments (how it comes into focus). But beneath that, this still feels largely like optimisation thinking – just applied to modalities and moments rather than formats and placements.

The part that matters most for brand-building is momentum, and that’s the least clearly explained. How do ideas actually build over time across different environments, teams, markets and formats? What creates momentum deliberately and consistently – the long as well as the short of it – connecting one “moment” to the next, beyond loose consistency or a set of distinctive assets?

This need for sustained momentum becomes more obvious in B2B contexts, where “moments” are harder to engineer, cycles are longer, and distinctiveness can be difficult – even risky – to pursue.

In those environments, the question is whether the organisation can produce and sustain a coherent narrative across everything it does, over time.

That isn’t really a media or creative (or modality or moment) problem – it’s structural.

It comes down to how narratives are defined, how topics are prioritised, how content is developed and reused, and how different teams interpret and apply the same underlying ideas over time, not just over campaigns or activations.

In other words, it’s about the architecture of the system that generates the communication, not just the optimisation of what gets put into it.

Without that, modality and moments are useful lenses, but they don’t explain why some brands build momentum while others just generate activity.

AEO = SEO? Or is something more needed?

I’m seeing more and more people realise that “AEO” (Answer Engine Optimisation) is just SEO in new clothes. But are GenAI outputs even something you can optimise for?

These systems don’t just read what you publish and serve up the most relevant parts – they synthesise it, blending multiple sources based on patterns they infer across a wider field of signals:

– everything you publish
– everything others publish about you
– everything they consider adjacent or comparable

They’re also not just looking at what’s being said now. They’re conflating and combining the accumulated traces of how your organisation expresses itself over time – across campaigns, content, product information and everything in between.

Repetition and consistency may help, but they won’t just pick up what you intend. They absorb whatever is most legible – including contradictions, gaps, and overlap with competitors.

If your positioning isn’t distinctive, you’ll get flattened into the category. If your communication isn’t coherent, the model will reconstruct a version of your brand from whatever patterns it can find. And when it comes to facts and details – where accuracy actually matters – these systems are still unreliable enough to pose a real risk.

This is where a focus on structured data starts to look like a promising way forward. That was my first assumption. But it’s becoming increasingly clear that this isn’t going to be enough.

The key is to remember that these systems don’t *understand* information. They generate outputs by following probabilistic sequences – patterns shaped by the data they’ve seen.

It’s a sophisticated form of word association. Structure helps, but only where it clarifies those patterns to nudge the model to follow the path you’d prefer.
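To make “sophisticated word association” concrete, here’s a deliberately toy sketch: a bigram model that learns which word tends to follow which, then samples from those counts. The corpus, function names and numbers are all invented for illustration – real LLMs use neural networks over vastly more data – but the core mechanic is the same: output follows the statistical patterns of the input, so repetition and consistency in what you publish shape what gets reproduced.

```python
import random
from collections import Counter, defaultdict

# Tiny invented corpus standing in for training data.
corpus = "the brand is trusted . the brand is original . the brand is trusted".split()

# Count which word follows which: the simplest "word association" machine.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, rng=random):
    """Pick the next word in proportion to how often it followed `word`."""
    options = follows[word]
    words, counts = zip(*options.items())
    return rng.choices(words, weights=counts)[0]

# "trusted" follows "is" twice, "original" once – repetition in the
# source tilts the probabilities, so the model leans towards "trusted".
print(follows["is"]["trusted"])  # 2
print(next_word("the"))          # always "brand" in this corpus
```

The point of the sketch: nothing here *understands* the brand – it just reproduces the associations it was fed, which is why coherent, repeated signals matter more than any single optimised page.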

Over time, what you’re really creating – deliberately or not – is a set of associations the LLM learns to treat as related. What we’d normally think of as a brand “narrative” sits inside that – not as something the model understands directly, but as a pattern of connections it learns to reproduce.

This means “AEO” should be considered less about optimising individual outputs, and more about the long-term shape of the signals you generate – across teams, markets and years.

I’ve been doing some work on this recently, trying to make that problem more tangible and diagnosable in practice. Still early, but the direction of travel feels clearer.

The brands that show up well won’t just be the ones optimising for visibility. They’ll be the ones whose overall pattern of behaviour is coherent enough that even a probabilistic system can’t easily misread what they are.

Review: Brand Thinking and Other Noble Pursuits, by Debbie Millman

2/5 stars

Brand thinking? Groupthinking more like…

As this is a book of fairly straightforward, slightly gushing interviews with various people from the world of marketing, this would today have worked much better as a podcast. In this format it feels pretty repetitive as well as dated (first published in 2011, with some of the focus on social media as if it’s new, and on Apple as if it’s a challenger brand, feeling really rather quaint).

There probably were some actively thought-provoking points made somewhere in here, but everyone blurred into one in the end, so I have no idea who said what, and nothing really stood out – except the guy who was very vocal about his dislike of Daniel Kahneman and the idea of Behavioural Economics.

Of course, these “insights” may have seemed more radical 15 years ago. And for newcomers to marketing they still might.

But it’s notable how much of what’s said here sounds fine in theory but feels very hard to turn into tangible takeaways that people trying to build brands themselves could actually use. It mostly all ends up sounding like fluff and cod psychology. You can see how marketing and branding ended up getting a bit of a bad name if this is the best they had to offer.

Then again, maybe it’s because pretty much everyone featured here is American? As Mark Ritson – today’s leading marketing advocate – keeps saying, American marketing and advertising hasn’t been particularly sophisticated for decades.

In short, useful to read if in the profession, but there’s very little surprising, practical or inspiring here. It’s mostly pretty obvious platitudes.