The ability of AI to produce paradoxes continues to fascinate me.
One recent survey found that workers lose the equivalent of 51 working days a year to technology friction – yet people who use AI effectively save 40–60 minutes a day.
The same survey found that only 9% of workers trust AI for complex, business-critical decisions, compared with 61% of executives. After the recent Wall Street Journal poll showing a similar split between senior management and staff, this is starting to look like a pattern.
And, to be honest, I can see both sides.
Why AI Often Looks Better to Executives Than Employees
For senior leaders, GenAI is often genuinely useful. If you want a high-level overview or a summary to help you orientate yourself and set direction, it can be superb.
But for the people doing the detailed work, the output frequently looks good enough only if you don’t look too closely.
Yet the closer you look, the more probabilistic problems appear: missing caveats, vague generalisations, invented facts, sentences that sound solid when skimming but mean nothing.
When details matter, getting to something usable with even the best GenAI tools can take dozens of rounds of amends and refinement. It’s not hard to see why many staff feel the technology is creating as much friction as it removes.
Why Reliable AI Needs More Structure
What’s interesting is that the newest attempts to make these systems more reliable seem to point in exactly the same direction.
The leaked Claude Code system appears to work so well largely because it surrounds the model with multiple layers of contextual constraint and instruction.
Gary Marcus has argued for years that something like this – closer to his preferred “neurosymbolic” approach – is the only plausible route to reliable AI.
Meanwhile, Elin N. has proposed an alternative approach she calls “substrate engineering”: tightly controlling the language, context and structure around a model to produce much more consistent results.
In other words, the more reliable these systems become, the less they seem to work like magic and the more they seem to depend on carefully constructed contextual scaffolding.
The Catch-22 at the Heart of AI Adoption
Most workers do not yet have the time, knowledge or support to build that scaffolding for themselves.
Yet without the detailed knowledge of the people actually doing the work, the scaffolding often is not good enough.
Which may help explain why the promised productivity gains have yet to emerge.
Getting the best results from GenAI increasingly seems to require expertise in both the technology itself and the domain you are using it to help with.
The people most sceptical of these tools may therefore also be the people most needed to make them work.
This is a big, strange, frequently fascinating, but strangely disjointed book. Impressionistic history, not narrative. It’s also far longer than the page count suggests – a huge, heavy book that needs two hands to hold even in paperback.
Effectively a collection of essays that combine to make up one big essay, it jumps around in places and time as it explores Western civilisation’s relationship with the landscapes in which that civilisation has developed.
Yet this is a bit of a misrepresentation, as really the focus is primarily on the 18th and 19th centuries, as the conscious awareness of landscape as a thing started to emerge. And primarily via England, France, the United States, and Germany / the Holy Roman Empire. Other countries do get a look-in, but these four dominate.
It’s at times more lyrical memoir or art criticism than cultural history, with the schema and structure and choices of what to cover making sense only to its author – making me wonder how on earth Schama managed to get this commissioned, given it came pretty early in his career, five years before he became a household name via his TV work. It feels more like the kind of self-indulgent passion project with which someone famous is rewarded to get them to produce something a bit more commercial.
But there’s still a lot here to like. For me, it’s best when it delves into myth and legend – though it doesn’t do this as much as I think is warranted, or as much as I’d have liked, given how good Schama is on myth when he does write about it:
“how much myth is good for us? And how can we measure the dosage? Should we avoid the stuff altogether for fear of contamination or dismiss it out of hand as sinister and irrational esoterica that belong only in the most unsavory margins of ‘real’ (to wit, our own) history?
“…The real problem… is whether it is possible to take myth seriously on its own terms, and to respect its coherence and complexity, without becoming morally blinded by its poetic power. This is only a variation, after all, of the habitual and insoluble dilemma of the anthropologist (or for that matter the historian, though not many of us like to own up to it): of how to reproduce ‘the other’, separated from us by space, time, or cultural customs, without either losing ourselves altogether in total immersion or else rendering the subject ‘safe’ by the usual eviscerations of Western empirical analysis.
“Of one thing at least I am certain: that not to take myth seriously in the life of an ostensibly ‘disenchanted’ culture like our own is actually to impoverish our understanding of our shared world.” (p.134)
And (much) later, concluding the thought with the closest the book has to an explanation of Schama’s aim in writing it:
“it seems to me that neither the frontiers between the wild and the cultivated, nor those that lie between the past and the present, are so easily fixed. Whether we scrambled the slopes or ramble the woods, our Western sensibilities carry a bulging backpack of myth and recollection… The sum of our pasts, generation laid over generation, like the slow mold of the seasons, forms the compost of our future. We live off it.” (p.574)
Appropriately enough this book is a rambling affair, following paths that make little sense as you wander them. But gradually the intent of the person who’s staked out those paths starts to make some kind of sense – as with an Impressionist painting, the subject of which can only be seen when you take a few steps back.
Here, the details are so dense, so varied, you’re better off with your nose close to the canvas – the parts work better on their own rather than summed into a whole.
An excellent companion to Rée’s superb Witcraft, his history of how philosophical ideas made their way into English (often with a considerable delay). The chapters here on Kierkegaard and Sartre neatly fill some gaps in that earlier book’s narrative, as it (mistakenly and frustratingly, in my view) ended the story largely with Wittgenstein. (Yes, Kierkegaard was earlier, but didn’t get translated into English until the early-mid 20th century.)
The introductory interview was also a nice touch, with Rée’s dislike of histories of philosophy – and especially of Bertrand Russell’s, and of Russell more broadly – an entertaining educated rant that helped shift my perspective on what has become one of my favourite genres of book over the last few years. I knew it wasn’t just me who sometimes, when reading the original works rather than someone else’s summary of them, struggles to understand and needs to re-read paragraphs repeatedly – but it was very reassuring to hear that the same is true for Rée.
Philosophy is hard, basically. Intellectual biographies and histories of philosophy may make it more accessible – but the point is philosophy is all about the act of thinking, not just understanding ideas.
This feels like a particularly useful insight in the age of GenAI, when it’s easier than ever to find a summary of an idea, and to have someone (albeit a bot) explain a complex concept in simple terms. This may be a shortcut to understanding, but it can leave that understanding superficial – by reaching your knowledge via an intermediary rather than working at it yourself, you’re likely to miss nuances and details, and to pick up received wisdom and interpretative assumptions from other people rather than forming your own understanding.
Taking shortcuts via other people’s interpretations isn’t always a bad thing, by any means – but it’s worth being aware of what you may be missing by doing so. I’m probably never going to read Heidegger’s Being and Time or Sartre’s Being and Nothingness in English, let alone in the original German and French. I’ve always known I’m going to be missing something as a result – the summaries of these books that I *have* read have convinced me there are aspects of both I’d find fascinating. But Rée’s emphasis on taking the time to digest philosophical works, to ruminate on them, to make the effort to truly understand them has given me pause.
Much to think about here, in other words – not bad for what is at its core a collection of book reviews.
Bad photo of a good slide on what makes content valuable in an AI era, from Kevin Anderson at the inaugural Source Code event last night.
A successor to the much-missed Hacks/Hackers series looking at how tech and journalism can come together to do great things, it was unsurprisingly dominated by conversations about AI.
The point about what is valuable about the content we produce was also core to my old colleague Steven Wilson-Beales’ session on SEO / GEO / AEO / AIO / whatever you want to call it, and what a “zero click” web could look like in practice.
Key points:
– You need differentiation
– You need to add value
– You need to be accessible, relevant, and credible
It’s almost as if E-E-A-T is still a thing!
Also, the lesson we should all have taken from the last decade and a bit of chasing search and social algorithms is simple – diversify.
Don’t get over-reliant on any one traffic source. Don’t chase the algorithm, because the algorithm is changing faster than ever – and with AI search, it will increasingly adapt its findings to every individual.
And a top tip – given AI tools have been trained on existing content, you need to take a careful look at your archives. If they don’t answer the potential needs of an AI bot in query fan out mode, they may need an update.
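As a minimal sketch of that archive-audit idea – where the head query, sub-queries and page mappings are all invented for illustration, and in practice you’d source sub-queries from search data, People Also Ask, or the AI tools themselves – checking coverage against a fanned-out query might look like this:

```python
# Hypothetical "query fan-out" coverage check. All names and data below
# are assumptions for illustration, not a real workflow or dataset.
head_query = "best crm for small business"
fan_out = [
    "what is a crm",
    "crm pricing for small business",
    "crm vs spreadsheet",
    "how to migrate crm data",
]

# Questions each archive page already answers (assumed mapping)
archive = {
    "/guides/crm-basics": ["what is a crm", "crm vs spreadsheet"],
    "/pricing": ["crm pricing for small business"],
}

covered = {q for answers in archive.values() for q in answers}
gaps = [q for q in fan_out if q not in covered]
print(gaps)  # sub-questions no page answers -> candidates for an update
```

Any sub-question left in `gaps` is a potential hole an AI assistant will fill from someone else’s content instead of yours.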
—
But the absolute key point – and this speaks to a lot of the work I’ve been doing behind the scenes lately – it’s no longer enough to focus your SEO / GEO efforts on optimisation of individual pages.
You need to see your content as part of a broader system – because the bots are no longer looking for just one page to rank at the top of a list, they’re looking for the right information to answer the query. If they can’t get it from you, they’ll get it from someone else. (Or just make it up…)
This brought back fond memories of the Bullshit Bingo tracker we used to keep to try and steer clients (and ourselves) away from jargon when working on B2B projects back in my Group SJR days…
Simple, jargon-free language is almost always the best option if you want your message to be understood – but it can be hard to get it past approvers, because the more you simplify the language, the clearer the strategic recommendations become.
For some, this clarity feels like a risk – because the best strategies tend to be very simple, once you strip them of all the linguistic fluff. This is where and why business bullshit creeps in – to make the clear seem complicated, so the person presenting seems like they’re better value for money.
Of course, what this all misses is that devising the strategy *is* the easy bit (relatively). The hard part is getting others on board to start rolling it out, and to ensure the organisation as a whole doesn’t just adopt it as a mantra, but understands and acts on it.
This is why strategic development needs to take its time – the conversations and debates that inform a strategy are the first step towards helping the broader organisation accept it.
Put lots of jargon in your explanations and you’re creating barriers to understanding and adoption.
But equally, there’s always a risk that someone will call you on it – and reveal that underneath all the convoluted wording, you’re really not saying much of substance. That’s surely a far bigger reputational risk than showing you have the insight to cut through to the heart of the matter with a clear, simple strategic recommendation.
Thinking of media channels as cognitive environments – shaped by context, attention and mode of consumption – is a useful perspective shift, from this piece by Faris Yakob, via WARC.
I also like Yakob’s framing of modality (how something is experienced), momentum (how it builds), and moments (how it comes into focus). But beneath that, this still feels largely like optimisation thinking – just applied to modalities and moments rather than formats and placements.
The part that matters most for brand-building is momentum, and that’s the least clearly explained. How do ideas actually build over time across different environments, teams, markets and formats? What creates momentum deliberately and consistently – the long as well as the short of it – connecting one “moment” to the next, beyond loose consistency or a set of distinctive assets?
This need for sustained momentum becomes more obvious in B2B contexts, where “moments” are harder to engineer, cycles are longer, and distinctiveness can be difficult – even risky – to pursue.
In those environments, the question is whether the organisation can produce and sustain a coherent narrative across everything it does, over time.
That isn’t really a media or creative (or modality or moment) problem – it’s structural.
It comes down to how narratives are defined, how topics are prioritised, how content is developed and reused, and how different teams interpret and apply the same underlying ideas over time, not just over campaigns or activations.
In other words, it’s about the architecture of the system that generates the communication, not just the optimisation of what gets put into it.
Without that, modality and moments are useful lenses, but they don’t explain why some brands build momentum while others just generate activity.
I’m seeing more and more people realise that “AEO” (Answer Engine Optimisation) is just SEO in new clothes. But are GenAI outputs even something you can optimise for?
These systems don’t just read what you publish and serve up the most relevant parts – they synthesise it, blending multiple sources based on patterns they infer across a wider field of signals:
– everything you publish
– everything others publish about you
– everything they consider adjacent or comparable
They’re also not just looking at what’s being said now. They’re conflating and combining the accumulated traces of how your organisation expresses itself over time – across campaigns, content, product information and everything in between.
Repetition and consistency may help, but they won’t just pick up what you intend. They absorb whatever is most legible – including contradictions, gaps, and overlap with competitors.
If your positioning isn’t distinctive, you’ll get flattened into the category. If your communication isn’t coherent, the model will reconstruct a version of your brand from whatever patterns it can find. And when it comes to facts and details – where accuracy actually matters – these systems are still unreliable enough to pose a real risk.
This is where a focus on structured data starts to look like a promising way forward. That was my first assumption. But it’s becoming increasingly clear that this isn’t going to be enough.
—
The key is to remember that these systems don’t *understand* information. They generate outputs by following probabilistic sequences – patterns shaped by the data they’ve seen.
It’s a sophisticated form of word association. Structure helps, but only where it clarifies those patterns enough to nudge the model down the path you’d prefer.
Over time, what you’re really creating – deliberately or not – is a set of associations the LLM learns to treat as related. What we’d normally think of as a brand “narrative” sits inside that – not as something the model understands directly, but as a pattern of connections it learns to reproduce.
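To make that “word association” framing concrete, here’s a toy next-word predictor – a drastic simplification of what an LLM actually does, built on an invented mini-corpus – showing how sheer repetition creates the associations the system then reproduces:

```python
from collections import defaultdict, Counter

# Toy sketch: count which word follows which in a tiny (invented) corpus,
# then continue a prompt with the likeliest follower. No understanding
# involved - the most repeated pattern simply wins.
corpus = ("the brand is trusted the brand is consistent "
          "the brand is trusted by customers").split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def continue_phrase(word, steps=3):
    out = [word]
    for _ in range(steps):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # likeliest next word
    return " ".join(out)

print(continue_phrase("the"))  # "the brand is trusted" - the dominant association
```

“is trusted” appears twice in the corpus and “is consistent” once, so the model always completes with “trusted” – the narrative you repeat most consistently becomes the pattern that gets reproduced.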
—
This means “AEO” should be considered less about optimising individual outputs, and more about the long-term shape of the signals you generate – across teams, markets and years.
I’ve been doing some work on this recently, trying to make that problem more tangible and diagnosable in practice. Still early, but the direction of travel feels clearer.
The brands that show up well won’t just be the ones optimising for visibility. They’ll be the ones whose overall pattern of behaviour is coherent enough that even a probabilistic system can’t easily misread what they are.
As this is a book of fairly straightforward, slightly gushing interviews with various people from the world of marketing, it would today have worked much better as a podcast. In this format it feels pretty repetitive as well as dated (first published in 2011, with some of the focus on social media as if it’s new and Apple as if it’s a challenger brand feeling really rather quaint).
There probably were some actively thought-provoking points made somewhere in here, but everyone blurred into one in the end, so I have no idea who said what, and nothing really stood out – except the guy who was very vocal about his dislike of Daniel Kahneman and the idea of Behavioural Economics.
Of course, these “insights” may have seemed more radical 15 years ago. And for newcomers to marketing they still might.
But it’s notable how much of what’s said here sounds fine in theory but feels very hard to turn into tangible takeaways that people trying to build brands themselves could actually use. It mostly all ends up sounding like fluff and cod psychology. You can see how marketing and branding ended up getting a bit of a bad name if this is the best they had to offer.
Then again, maybe it’s because pretty much everyone featured here is American? As Mark Ritson – today’s leading marketing advocate – keeps saying, American marketing and advertising hasn’t been particularly sophisticated for decades.
In short, useful to read if in the profession, but there’s very little surprising, practical or inspiring here. It’s mostly pretty obvious platitudes.
Most of what the “GEO” crowd are peddling now *sounds* logical with all its talk of structured data and query fan outs, and is more or less exactly what I was arguing back in late 2023 / early 2024.
I was wrong then, and they’re still wrong now. As Orange Labs founder Britney Muller puts it:
“During training, LLMs process text from across the web, but they don’t log URLs, store sources, or remember where anything came from. What’s left is a frozen statistical snapshot (Gao et al., 2023). Not an index. Not a database.

“Search engines do the crawling, indexing, and retrieval. LLMs lean on them heavily to surface real-time info (because on their own, they can’t).

“Stop optimizing for ‘AI.’ Optimize for search engines (so retrieval-based AI can cite you) + earn third-party coverage (so the model already knows you before the prompt is typed).”
That’s not to say query fan out logic (and other “GEO” tactics) doesn’t have its place in content planning – it does. But all it *really* amounts to is a fancy name for an FAQ page (with less emphasis on the “F”). That’s been a core idea in SEO for over two decades. And pretty much all the rest of the “GEO” advice is similarly reskinned old-school SEO – from keyword stuffing to linkfarm spamdexing – that Google quietly filtered out years ago.
There’s an awful lot of snake oil being flogged out there at the moment. If some of it seems to work, it’s more by accident than design.
I initially loved this – effectively a popular historiography of the (Italian, mostly) Renaissance, exploring different perspectives and opinions and how these have evolved over time – while also providing overviews of some of the key events and personalities.
This is a wildly confusing period, so this approach actually works pretty well – highlighting who focused on what and offering multiple explanations as to why. Until about halfway through I loved it, and still remain convinced that approaching history first through the lens of the historians and players who shaped it is something more popular history books should do, rather than just running with a narrative.
But… “The Renaissance”, singular? This goes totally against the author’s core argument, which is all about how there are any number of ways of looking at this period (or even defining how long a period we’re talking about). Yet despite this we get surprisingly little about the Northern Renaissance, and almost every key figure called out was based in northern Italy – despite multiple references to Erasmus as a nexus of Renaissance correspondence, we get few investigations into how or whether what was happening in Italy was influenced by or influenced what was going on elsewhere (bar the frequent French invasions and other aspects of high politics).
Equally, about halfway through I started to find the whole thing a little overwhelming as we jump from overarching thesis (there’s no one right way of interpreting any of this) to detailed biography, to philosophical aside, to running jokes. After a promising start, the structure starts to get lost, and it increasingly feels like a series of essays or blog posts loosely bound together.
The more this went on, the more I felt it could have been better if presented as essays rather than a whole – because after a while the running jokes (“Battle Pope”, “Abelarding”, references to Game of Thrones, etc etc) start to detract from rather than clarify the argument. This jokey style is one that’s been very popular the last decade or so, and can work – but in a book this long it can start to grate, even if you don’t object to it in principle, as some might.
Which is a shame, because there’s a lot of really good stuff in here. I learned a lot, and will want to go back and re-read various parts (as long as I can work out which is which, given the jokey chapter titles) to refresh my memory – and eventually start to make a little more sense of a chaotic and hard-to-understand period.
This, on the resurgence of the Rise of the Robots fears about the threat of widespread AI job losses, gets some of the way to articulating the niggling issues I have with this apocalyptic narrative:
Even if you do believe the technology has got or can get good enough to replace workers at scale, the economics simply don’t make sense.
Of course, we’ve spent the last two decades witnessing many, many things that made no economic sense yet that happened anyway thanks to a combination of complacency, willful ignorance, ideology, bloody-mindedness, and spite. Just because something makes no economic sense doesn’t mean it won’t happen.
But despite non-AI industry stocks having been hammered over the last couple of weeks, think what needs to happen to enable this AI revolution. Most developed nations had energy and clean water supply challenges even before factoring in a data centre building boom. We still have a deep reliance on rare earth metals for the hardware that the AI needs to function (the clue’s in the name).
What happens to prices when demand surges to unprecedented levels and supply struggles to keep up? And how does that change the balance sheet projections when deciding whether to replace human workers with a grandiose form of a new SaaS subscription, whose monthly costs and reliability could shift at any moment?
Remember the $7 *trillion* Sam Altman was asking for to invest in infrastructure? That’s likely to be a substantial underestimate of the amounts needed, given how much every industry upstream of the AI companies is already struggling to meet their projected needs.
History is all about perspective, and perspectives. This history of England’s most turbulent century – a period I studied to postgrad level – is a welcome attempt to offer alternative views of events via the eyes of non-English observers. As we’re somehow still referring to the central event as the English Civil War – ignoring Scotland, Ireland and Wales – this is very much needed.
The introduction promised a lot, and got me genuinely excited to see how much this focus on foreign perspectives – and foreign policy – would shift my own understanding. But while there were some new things for me here, at its heart this was all rather familiar.
Then again, I’m not really the target audience. As well as having studied the period, I also spent some time plotting out a potential novel that hinged in part on the foreign policy of James VI/I and the (limited) British involvement in the Thirty Years War.
For anyone relatively new to the period, or looking for a refresher overview, this would be really rather good. Standard accounts do tend to focus almost exclusively on England, whereas here Scotland and Ireland (not so much Wales) do get their due. But more importantly, most accounts tend to obsess about the religious angle, the disputes over tax and revenue, the disputes about the limits to the power of the monarchy, the attempts by parliament to assert itself.
All those are present here too – but so too are explorations of the European horror at the execution of Mary Queen of Scots; the Spanish side of the Spanish Armada and the Spanish Match, as well as worries about the subsequent French marriage; general concern as the civil wars broke out and further horror at England’s execution of a second monarch in sixty-odd years; the Dutch rivalry and wars and invasion.
All this is necessary to a solid understanding of the era – but all too often is skipped over or sidelined. Here, while it’s still not foregrounded as much as I’d hoped – or as much as is promised in the introduction – it’s hard to avoid the fuller appreciation that England was not operating in isolation: that other countries existed even then, and that foreign relations were far more than just theoretical, largely religious concerns.
All that said, cutting this off with the Glorious Revolution (another bad name that’s stuck) makes zero sense from a non-English perspective (even if the epilogue continues the story through to George I). Logically, the cut-off should be more like 1745 (the final Jacobite rising, in the midst of British involvement in the War of the Austrian Succession) and the solidification of the Hanoverian dynasty, or even a century later with the death of the Young Pretender / Bonnie Prince Charlie. But I guess by that point Britain was so firmly involved in European and global affairs that the emphasis on non-English opinions about the English would hardly be surprising.
So, a good overview – even if sadly not as radical an overhaul of the period’s traditional narratives as I was hoping.
Back when ChatGPT 3.5 came out, I was telling anyone who’d listen that it was going to disrupt search and publishing.
In early 2024, while at PwC, I started pitching new content formats to address this – intended to help capture whatever the GenAI equivalent of search ranking was going to be. “GEO” before this label stuck (I was calling it AIO at the time).
My thinking then was based on what seemed to be a logical, structured approach – similar to the “query fan out” advocates you’ll see in the “GEO” space today. (Basically label the hell out of your content, anticipate and answer the questions your target audience is likely to ask, as that structure should help the AI understand the context more easily, and so encourage it to pull from your page rather than someone else’s. Effectively a slightly deeper version of an old school Q&A or FAQ piece…)
But as I dug deeper it soon became clear that the challenge with LLM-based GenAI (from a model visibility perspective) wasn’t to do with clarifying the intended meaning of the information you want the model to ingest and regurgitate, as I first thought. (“These things can process unstructured data, but they’ll process *structured* data easier – so let’s structure it for them.”)
Instead it’s that these systems – despite being called Large *Language* Models – don’t actually understand language, or context. “Logic” to them is a meaningless concept; not only that, they have no concept of what a concept even is.
—
Tokens aren’t words, and don’t have meaning independently – they only appear to have meaning when combined into words.
Tokens create the illusion of being words (and of having meaning) because of the probabilistic nature of these tools when you work with them using language as the system interface. This creates an environment in which they’re operating within the rules of language, so they can produce output that makes sense – even if they don’t “understand” what they’re saying.
But URLs aren’t language, and don’t have linguistic rules or any consistency from site to site in terms of information architecture. Every site’s URL structure is similar, but different.
And as LLMs don’t really understand structure (except as recognisable, predictable patterns), this makes accurately relating URLs a significant challenge for current LLM-based GenAI tools.
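As an illustration of why, here’s a toy greedy tokenizer – the vocabulary and URL are invented for this sketch, and real subword tokenizers like BPE are learned from data rather than hand-written – showing how a URL shatters into arbitrary fragments that carry no linguistic meaning on their own:

```python
# Toy greedy tokenizer over a hand-made vocabulary (an assumption for
# illustration only). Note how the URL breaks into arbitrary fragments.
VOCAB = ["https", "://", "www", ".", "examp", "le", "com", "/",
         "art", "icle", "s", "20", "24"]

def tokenize(text):
    tokens = []
    while text:
        # greedily take the longest vocabulary entry prefixing the text;
        # fall back to a single character if nothing matches
        match = max((v for v in VOCAB if text.startswith(v)),
                    key=len, default=text[0])
        tokens.append(match)
        text = text[len(match):]
    return tokens

print(tokenize("https://www.example.com/articles/2024"))
# ['https', '://', 'www', '.', 'examp', 'le', '.', 'com', '/',
#  'art', 'icle', 's', '/', '20', '24'] - fragments, not "words"
```

A model reconstructing that address has to get every fragment right in sequence, with no linguistic rules to lean on – one wrong guess and the “URL” it emits points nowhere.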
—
This is a structural challenge, baked into the very nature of these models. Despite what many GEO “experts” are now claiming, if your goal is to generate links and traffic from GenAI results, it’s not a challenge you can easily engineer your way around from outside that system.
It may be possible to tweak model outputs to improve this and increase URL attribution accuracy, but a) it won’t remove the underlying structural constraints, and b) what would be the incentive for the GenAI companies to do this?
“What looks like higher productivity in the short run can mask silent workload creep and growing cognitive strain as employees juggle multiple AI-enabled workflows…
“Over time, overwork can impair judgment, increase the likelihood of errors, and make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity.”
As so often, it’s too early to say what the true impact of GenAI will be on the workforce – see other recent studies suggesting that productivity gains may (so far!) be overstated or marginal – but if it leads to doing more work at unsustainable rates, it would be a strange irony if the fears about job losses ultimately prove unfounded. Could GenAI end up pushing organisations to need more people, not fewer?
“You don’t know if you’re gonna get what you want on the first take or the 12th take or the 40th take”
This is GenAI’s current biggest challenge: It’s still being sold as primarily an efficiency tool – do more, faster!
In practice, as most who’ve played with it have found, it’s only faster if good enough is good enough. If you’re seeking excellence, it can help you to improve and refine what you’re doing – but not at speed.
The time / cost / quality pyramid persists, despite what we were all hoping.
What GenAI *is* allowing is for more people to try things that previously they’d never have been able to do – like code, write better, or create video or imagery.
But what this fascinating piece shows is that even genuine experts with a desire to experiment and push the boundaries can struggle to get genuinely excellent results – and that human + machine + time + iteration + patience remains (for now) the only way to get beyond good enough.
“These are nondeterministic, unpredictable systems that are now receiving inputs and context from other such systems… From a security perspective, it’s an absolute nightmare.”
The whole exercise initially struck me as a fun enough probabilistic parlour trick – similar to the entertaining “Infinite conversation” site with bots based on Werner Herzog and Slavoj Žižek from a couple of years back. There’s no true *intelligence* here, just chatbots slotting into established tropes for online forums, including creating their own memes and complaining about privacy and the mods (here, “the humans”).
So far so unsurprising – just as it’s unsurprising that some people who should know better have decided to read meaning and understanding into these interactions. (Hell, some of the stuff robot Werner Herzog came up with could also sound profound – it’s all in the voice…)
But what *is* new is the naiveté of some early adopters who’ve entrusted incredibly sensitive personal information and provided ridiculous amounts of access to AI agents whose programming is not deterministic and which are now able to interact with other agents.
The tech may be impressive – these agents are able to *do* more than I was expecting by this stage – but the potential for compound risk is insane. No sensible organisation would let a system like this anywhere near its operations until it’s possible to put far more robust constraints in place.
And so, just as with gambling, the question with GenAI systems seems increasingly to be all about personal and organisational risk tolerance.
My risk tolerance for this kind of thing is low, because the potential payoff – a bit of enhanced productivity? – is similarly low. If you’re really so time-poor that you’re willing to take this gamble, then you need to rethink your priorities.
Much like the region it’s covering, this book lacks a certain coherence – and seems to be dominated by the looming presence of Germany.
This makes sense, of course – but if a region is defined as “central” or “in the middle”, the obvious question is: the middle or centre of what, and what surrounds it? Here, Rady seems to focus far more on contrasting central Europe with western Europe than with the east (Russia is the other obvious figure looming over the region’s history, but features far less than Germany), the north, or the south.
For me, the focus on a more or less linear, more or less political history of the region made some sense – and individual chapters were great overviews – but given the fuzziness of the region’s definition, and the lack of any long political continuity for most of the countries that exist there today, it becomes even harder to keep track. When there’s no clear narrative, narrative history tends to struggle.
This is because – as Rady makes clear in the final couple of chapters – the concept of central Europe is so relatively recent.
The conclusion captures just how difficult a task the author set himself – talking about nations without states, and states without nations, all with borders that have overlapped each other at various times. This is a perceptive and useful summary – but it makes the political-history approach feel more useless than usual.
What may have been more helpful would have been a cultural history, or even a linguistic one. If this is a land of overlapping nations, how did these national identities emerge and persist given how frequently the political boundaries have shifted? That’s the book I think I was hoping for, but it’s not this one.
“While 82% of advertising executives believe Gen Z and millennial consumers feel positively about AI-generated ads, only 45% of these consumers actually feel that way”
But this is hardly a surprise. A couple of years back I described GenAI as being at every stage of the Gartner hype cycle simultaneously, and that remains true today – it’s just that more people have now passed over the peak of inflated expectations.
Meanwhile, the AI companies need to keep trying to inflate those expectations further, to keep the investment money coming in and fund the infrastructure they need to keep delivering.
But we’re at a stage now where high-level promises like those you get in an advert or keynote are hitting the law of diminishing returns. These companies are selling to an increasingly sceptical crowd – as a global society, we’re further down the funnel and looking for more proof points before we buy in.
(This is part of why I’m convinced Elon Musk knew exactly what he was doing with his Grok porn bot – the uproar was great free publicity for Grok’s ability to create photorealistic images and video… PR can be cynical…)
Given this, is an old-school Super Bowl campaign really going to make any difference? Or is this now just another brand-awareness play, given Google seems to be on the verge of demolishing OpenAI’s previous lead?
Either way, we’re definitely entering a new phase in the AI play – and the emphasis is increasingly going to need to be on proof of impact, not just proof of concept. The narrative needs to shift.
This is pretty much what I’ve been talking about for the last few years, via Joe Burns.
The problem isn’t just that the old model doesn’t work in a more complex environment – it’s that the very terminology precludes understanding and alignment, as everyone has a different idea of what the labels mean.
The key to success has always been systems thinking – but many agencies (and, even more so, in-house marketing teams) continue to work in silos, with nowhere near as much discussion and collaboration as is needed to come up with truly effective approaches.
As Joe Burns put it in his post on this:
“Coherence has to come from the system, not just one execution. The idea of a ‘Campaign’ only works if you can muster a critical mass of attention to carry people through it.”
Maybe it’s my “content” background speaking – because really strong content strategies need to work at multiple levels, across multiple channels and formats, and for multiple audiences with multiple needs. Without understanding the big picture *and* the details, it’s impossible to deliver content effectively across a campaign – individual assets may be solid, but the whole ends up less than the sum of its parts.
This is why I’ll continue trying to play in those overlap areas – not only do I find the diversity and clash of approaches and ideas stimulating, but I see it as the only way to work out the best way to succeed. You have to try to see the big picture to work out the best individual brush strokes.
To help shape my thinking, I write essays and shorter notes examining the ideas and narratives that shape media, marketing, technology and culture.
A core focus: the way context and assumptions can radically change how ideas are interpreted. Much of modern business, marketing, and media thinking is built on other people's frameworks, models, theories, and received wisdom. This can help clarify complex problems – but as ideas travel between disciplines and organisations they are often simplified, misapplied or treated as universal truths. I'm digging into these across the following categories – the first being a catch-all for shorter thoughts: