Give it half a second’s thought and this was always going to be the direction Google was going to take with its AI search.
Google’s whole thing was helping us find the valuable parts of the internet.
But when something – in this case content – can be mass produced, its perceived value goes down.
If mass-produced AI content takes over the web, then more genuinely original content becomes harder to find – and (relative) scarcity or genuine quality tends to create value in a sea of mass-produced “good enough” products.
(This is why a tailored woollen suit costs so much more than one made from synthetic materials and stitched in a sweatshop – the latter may be functional, but it tends to fall apart rapidly, and can also make you look bad if you try to pretend you can’t tell the difference.)
Where Google’s value lies
If Google can help us find that more valuable original, insightful, *human* content, Google continues to have value for us.
This is why their focus on E-E-A-T – Experience, Expertise, Authoritativeness, and Trustworthiness – made sense in the age of search, and it makes even more sense in the age of GenAI, where awareness of the questionable trustworthiness of AI output is increasingly front of mind.
They were never going to take the arrival of GenAI lying down, and they were always going to come back to finding ways to cut through the mass of average material out there to help us find the really good stuff. That’s their whole thing.
What makes a sensible AI strategy?
It’s also notable that while they’ve been making a lot of effort to make Gemini and the rest of their AI suite substantially better over the last couple of years (after a poor start with Bard and early AI search results), Google’s most distinctive AI product – NotebookLM – focused on providing verifiable citations from clear sources, rather than just making stuff up.
Google’s strategic need from their AI efforts has been clear for years, even if they’ve had some wobbles along the way – focus on utility. Meanwhile, OpenAI’s has largely consisted of throwing features around the place to see what sticks, and rapidly ditching what doesn’t.
ChatGPT’s launch (on GPT-3.5) may have led Google to scramble to catch up, but they’ve not deviated from their core objective. They’re not moving fast and breaking things, but moving deliberately and adapting their core offering to fit the new environment.
It’s something quite a few other companies could learn from.
“Quietly” is quietly becoming a big GenAI copy tell, and that’s more interesting than you think.
(It may not actually be very interesting – but that’s what AI would tell you, because “more interesting than you think” is another GenAI linguistic meme it’s now nearly impossible to escape.)
The problem isn’t AI writing
This is not another rant about GenAI writing patterns. I personally hated the em-dash long before it was cool – not its use as a grammatical tool, which I use all the time, but its ugly aesthetics.
The point is that it used to take months, if not years, to notice trends in headlines and framing devices – now they’re shifting far, far more rapidly.
This started with the BuzzFeed effect, more than a decade ago – everything was suddenly clickbait or a listicle, usually with an odd number of items. The writing style even of newspapers of record shifted towards ever more chatty informality.
Suddenly every media brand sounded like a relatively smart Californian trying to sound dumber than they are.
The issue is systemic
GenAI has been trained on this stuff.
And because this kind of content was designed largely to cut through social and search algorithms via a brute force attack – combined with test, learn, repeat until false – it was produced in inordinately vast quantities, spamming the system.
And because LLMs are probabilistic, and they’re trained from the internet, this kind of annoyingly-formulated content is a core part of their training data.
Pattern recognition drives addictive behaviour
This kind of copy is designed to pique intrigue, encourage engagement, and prompt a click – then trigger a dopamine response when the (barely mysterious) mystery of what the hell the headline is talking about is revealed, either telling you something new or making you feel smarter if you already guessed the answer.
It’s designed to suck you in, and keep you coming back.
There was a lawsuit about this recently. Meta and YouTube lost, found liable for designing their platforms to suck users in and get them hooked.
GenAI is the output of a pattern recognition system. These are patterns it has recognised.
Now it’s doing its own equivalent of test, learn, double down and iterate to find new formulas that will suck in intrigue- and dopamine-hungry brains.
And so headlines written by AI – a great use case for the media – are all starting to converge into similar patterns again. Just as they did a decade ago when BuzzFeed disrupted the industry and turned almost all newspapers on the planet just that little bit dumber.
—
This is how language and culture have always evolved. The process just seems to be accelerating.
Thinking of media channels as cognitive environments – shaped by context, attention and mode of consumption – is a useful perspective shift, from this piece by Faris Yakob, via WARC.
I also like Yakob’s framing of modality (how something is experienced), momentum (how it builds), and moments (how it comes into focus). But beneath that, this still feels largely like optimisation thinking – just applied to modalities and moments rather than formats and placements.
The part that matters most for brand-building is momentum, and that’s the least clearly explained. How do ideas actually build over time across different environments, teams, markets and formats? What creates momentum deliberately and consistently – the long as well as the short of it – connecting one “moment” to the next, beyond loose consistency or a set of distinctive assets?
This need for sustained momentum becomes more obvious in B2B contexts, where “moments” are harder to engineer, cycles are longer, and distinctiveness can be difficult – even risky – to pursue.
In those environments, the question is whether the organisation can produce and sustain a coherent narrative across everything it does, over time.
That isn’t really a media or creative (or modality or moment) problem – it’s structural.
It comes down to how narratives are defined, how topics are prioritised, how content is developed and reused, and how different teams interpret and apply the same underlying ideas over time, not just over campaigns or activations.
In other words, it’s about the architecture of the system that generates the communication, not just the optimisation of what gets put into it.
Without that, modality and moments are useful lenses, but they don’t explain why some brands build momentum while others just generate activity.
I’m seeing more and more people realise that “AEO” (Answer Engine Optimisation) is just SEO in new clothes. But are GenAI outputs even something you can optimise for?
These systems don’t just read what you publish and serve up the most relevant parts – they synthesise it, blending multiple sources based on patterns they infer across a wider field of signals:
– everything you publish
– everything others publish about you
– everything they consider adjacent or comparable
They’re also not just looking at what’s being said now. They’re conflating and combining the accumulated traces of how your organisation expresses itself over time – across campaigns, content, product information and everything in between.
Repetition and consistency may help, but they won’t just pick up what you intend. They absorb whatever is most legible – including contradictions, gaps, and overlap with competitors.
If your positioning isn’t distinctive, you’ll get flattened into the category. If your communication isn’t coherent, the model will reconstruct a version of your brand from whatever patterns it can find. And when it comes to facts and details – where accuracy actually matters – these systems are still unreliable enough to pose a real risk.
This is where a focus on structured data starts to look like a promising way forward. That was my first assumption. But it’s becoming increasingly clear that this isn’t going to be enough.
—
The key is to remember that these systems don’t *understand* information. They generate outputs by following probabilistic sequences – patterns shaped by the data they’ve seen.
It’s a sophisticated form of word association. Structure helps, but only where it clarifies those patterns to nudge the model to follow the path you’d prefer.
Over time, what you’re really creating – deliberately or not – is a set of associations the LLM learns to treat as related. What we’d normally think of as a brand “narrative” sits inside that – not as something the model understands directly, but as a pattern of connections it learns to reproduce.
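That “word association” point can be made concrete with a deliberately crude sketch. The following is a toy bigram model – nothing like a real LLM in scale or architecture, and the training text is invented for the example – but it shows the same basic mechanic: count which word tends to follow which, then generate by sampling those learned associations rather than by understanding anything.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model "learns" word-to-next-word
# associations from a tiny invented corpus, then generates text by
# sampling them. Real LLMs are vastly more sophisticated, but the
# underlying idea - probabilistic sequences shaped by training data -
# is the same.
corpus = (
    "the brand builds momentum the brand builds trust "
    "the model follows patterns the model follows probability"
).split()

# Count transitions: repeated followers make a follow-up more probable.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, n_words, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(n_words):
        followers = transitions.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = random.choice(followers)  # frequency-weighted sample
        output.append(word)
    return " ".join(output)

print(generate("the", 5))
```

The output is always plausible-looking given the training data, and never anything else – which is exactly why an undistinctive brand gets reconstructed from whatever patterns the corpus happens to contain.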
—
This means “AEO” should be considered less about optimising individual outputs, and more about the long-term shape of the signals you generate – across teams, markets and years.
I’ve been doing some work on this recently, trying to make that problem more tangible and diagnosable in practice. Still early, but the direction of travel feels clearer.
The brands that show up well won’t just be the ones optimising for visibility. They’ll be the ones whose overall pattern of behaviour is coherent enough that even a probabilistic system can’t easily misread what they are.
This is pretty much what I’ve been talking about for the last few years, via Joe Burns.
The problem isn’t just that the old model doesn’t work in a more complex environment – it’s that the very terminology precludes understanding and alignment, as everyone has a different idea of what the labels mean.
The key to success has always been systems thinking – but many agencies (and even more so in-house marketing teams) continue to work in silos, with nowhere near as much discussion and collaboration as is needed to come up with truly effective approaches.
As Joe Burns put it in his post on this:
“Coherence has to come from the system, not just one execution. The idea of a ‘Campaign’ only works if you can muster a critical mass of attention to carry people through it.”
Maybe it’s my “content” background speaking – because really strong content strategies need to work at multiple levels, across multiple channels and formats, and for multiple audiences with multiple needs. Without understanding the big picture *and* the details, it’s impossible to deliver content effectively across a campaign – individual assets may be solid, but the whole ends up less than the sum of its parts.
This is why I’ll continue trying to play in those overlap areas – not only do I find the diversity and clash of approaches and ideas stimulating, but I see it as the only way to work out the best way to succeed. You have to try to see the big picture to work out the best individual brush strokes.
I’m vaguely pondering starting up a newsletter/podcast/etc exploring media/marketing received wisdom and groupthink…
The Super Bowl, Davos, and ChatGPT’s announcement that it’s running ads mean media/marketing LinkedIn will be swamped with lukewarm hot takes this week.
This industry herd mentality is increasingly fascinating to me – the need to comment on the same things everyone else is talking about is rarely “thought leadership”, and is very far from the old advertising mantra “When the world zigs, zag”.
I’ve spent a decade in marketing, more than double that in publishing. In all that time I’ve rarely encountered many convincing new ideas – even during major platform shifts. And usually when I have, the evidence for “best practice” has lacked much substance – or blatantly originated in some tech company’s hype (as with the first, second, and third pivots to video, and certainly with the “everything needs to be optimised for Alexa now” fad).
It feels like we’ve now all got so used to running with the latest fad for fear of missing out or – worse! – looking out of touch, we’ve lost all sense of critical thinking, or desire to question industry norms.
But is this something in which enough people would be sufficiently interested to make it worthwhile? And will it cut through the algorithm – another idea we’ve all unthinkingly adopted?
I’ve seen this piece shared a lot, and like it. I’ve long been a fan of Systems Thinking (check my bio, it’s at the heart of my approach to everything).
But I’ve always seen Systems Thinking as more of a mental model or reminder to look beyond the immediately obvious causes and effects that could impact a strategy, rather than an exhortation to try and literally map out interactions between all the different components.
As this piece notes, if you try to map out every interaction in a complex, shifting, uncertain system, you’ll never succeed. There are too many variables, all changing. Complexity Theory – even Chaos Theory and the Heisenberg Uncertainty Principle – rapidly becomes more helpful. Only these usually aren’t of much *practical* help at all.
It’s like playing chess – you don’t bother mapping out ALL the possible moves, as that would take forever (look up the Shannon number to get a sense of how many there could be – it’s more than the number of atoms in the observable universe…), and is therefore useless.
With experience, good chess players (and good strategists) can rapidly, intuitively home in on the moves most likely to work – both now and several moves down the line.
The problem is that the same moves will rarely work twice – at least not against the same opponent. And in a complex, ever-changing system, you’ll rarely have the opportunity to make the same sequence of moves more than once anyway, as the pieces will be constantly changing position on the board. Which will also be constantly changing size and shape.
“But metaphor isn’t method.”
That’s the key line from the linked piece. Business strategy isn’t chess – because you’re not restricted to making just one move at a time, or moving specific pieces in specific ways.
The challenge is to keep as flexible as possible while still moving forwards, which is why this bit of advice – one line of many I like, especially when combined with the recommendation to design in a modular, adaptive way – is one I pushed (sadly unsuccessfully) in a previous role:
“Instead of placing one big bet, leaders need a mix of pilots, partnerships, and minority stakes, ready to scale or abandon as conditions change.”
The problem is that strategy decks – still at the heart of most businesses and almost every marketing agency – are intrinsically linear, despite trying to address nonlinear, complex systems.
This is why most strategies end up not really being strategies, but plans, or lists of tactics.
And that’s why most “strategies” fail.
Don’t focus on the *what* – focus on the *how*. Great advice from my former boss Jane O’Connell, which took me a long time to truly understand. It’s a concept that’s core to this excellent piece – and incredibly hard to explain.
The last couple of years have seen far too many people who should know better simply regurgitate press releases without applying critical thinking – yet it’s the critical thinking that’s the increasingly essential “human in the loop” part of the equation.
And as familiarity breeds contempt, this kind of blunt, sceptical take on AI is likely to become increasingly common in 2025. Anyone – any organisation – wanting to be taken seriously is going to have to confront these kinds of questions honestly and openly.
But at the same time, it’s going to be important not to swing too far the other way – beyond inquisitiveness about the bold claims of the AI providers into outright cynicism.
It’s easy to shoot things down. It’s *extremely* easy to have a knee-jerk dislike of techbro hype trains when you lived through the Dotcom Crash. It’s much harder to dispassionately assess the merits of emerging technologies when they haven’t yet fully emerged.
As ever, a journalistic mindset can help:
Who’s saying this? What are their creds? What’s in it for them? Do they have any financial stake?
What are they actually saying? Is there any substance, or is it filled with jargon and empty phrases? (It’s often surprising how little substance there is out there, given how much is being said…)
When did what they’re claiming first happen? Is this really new, or is it fresh spin on an old claim or capability? If a fresh spin, that’s not necessarily a bad thing – but why now?
Where’s the evidence to support their claims? Can it be independently verified?
How does this claim differ from existing solutions? Is it really an improvement? What’s the cost vs benefit compared to alternatives?
Finally, as ever, try and get your info from more than one source. It’s tempting to only listen to people you agree with, and *very* tempting to dismiss anything coming from sources you dislike. But that leads to an incomplete picture – and a boring, predictable take.
And at a time when GenAI can spit out passable median opinion takes in seconds, what’s the point in reading anything boring and predictable?
What’s your preferred approach for coming up with good ideas? This podcast from The Accidental Creative suggests there are four steps to true creativity:
– Preparation
– Incubation
– Illumination
– Verification
While everyone seems to focus on that Eureka moment of illumination / inspiration (I tend to get them in the middle of long walks, or while reading a totally unrelated book), and agencies often focus on the first (the mythical perfect brief), the second and fourth of these are actually the most vital.
The best creative ideas need deliberation, interrogation, to be stepped away from and ignored for a while, then returned to with fresh eyes. They need to be poked, questioned, critiqued, bounced off other people, sense-checked, confirmed as not having been done before – all that good due diligence of verification. But creativity can’t be rushed.
At least, that’s the theory.
Sometimes, a ridiculous deadline is *exactly* what we need – even if it’s one of our own making, caused by dawdling on stage two until the last possible moment, or procrastinating with other, less important tasks. I tend to do that more often than I’d care to admit.
But then, we’re all different. The truth is creativity doesn’t follow a set formula. If it did, it wouldn’t be creative. What it needs is the right mindset.
I’m not a stickler for “correct” punctuation, as a rule – except when it comes to apostrophes and the Oxford Comma. This is because punctuation, mostly, is about flow and rhythm, not meaning. Misplaced apostrophes and missing commas in lists can substantially change meaning rather than flow, so their correct placement becomes vital.
This fascinating essay on the evolution of punctuation makes clear that improving flow and clarifying meaning has long been the goal – while also exploring the long history of resistance to punctuation that over-clarifies meaning.
It’s a useful reminder that words are about interpretation as much as intention. Sometimes ambiguity lets greater meaning emerge, building stronger connections with your audience by encouraging them to think more deeply about your words. Sometimes it creates confusion.
The challenge, as ever, is getting the balance right – so focus on the needs of your audience. What will most help them understand your meaning (or meanings)? What will confuse? No one wants to have to try and parse a complex run-on sentence with multiple sub-clauses and dozens of punctuation marks. Even if they do make it through to the end without giving up, your meaning is likely to be lost.
In other words, as ever, when in doubt: Keep it simple.
To help shape my thinking, I write essays and shorter notes examining the ideas and narratives that shape media, marketing, technology and culture.
A core focus: The way context and assumptions can radically change how ideas are interpreted. Much of modern business, marketing, and media thinking is built on other people’s frameworks, models, theories, and received wisdom. This can help clarify complex problems – but as ideas travel between disciplines and organisations they are often simplified, misapplied or treated as universal truths. I’m digging into these, across the following categories – the first being a catch-all for shorter thoughts: