On systems thinking and why strategies fail

[Image: An AI-generated image of a school of fish being attacked by a shark – an attempt at a visual metaphor]

I’ve seen this piece shared a lot, and I like it. I’ve long been a fan of Systems Thinking (check my bio – it’s at the heart of my approach to everything).

But I’ve always seen Systems Thinking as more of a mental model or reminder to look beyond the immediately obvious causes and effects that could impact a strategy, rather than an injunction to try and literally map out interactions between all the different components.

As this piece notes, if you try to map out every interaction in a complex, shifting, uncertain system, you’ll never succeed. There are too many variables, all changing. Complexity Theory – even Chaos Theory and the Heisenberg Uncertainty Principle – rapidly becomes the more relevant frame. Only these usually aren’t of much *practical* help at all.

It’s like playing chess – you don’t bother mapping out ALL the possible moves, as that would take forever (look up the Shannon number to get a sense of how many there could be – it’s more than the number of atoms in the observable universe…), and is therefore useless.
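To put the Shannon number in context, here’s a quick back-of-the-envelope sketch. Both figures are order-of-magnitude estimates, not precise counts: Shannon’s oft-cited lower bound of roughly 10^120 possible chess games, against the usual estimate of about 10^80 atoms in the observable universe.

```python
# Rough scale comparison: Shannon's chess estimate vs atoms in the universe.
# Both numbers are order-of-magnitude estimates, not exact counts.

shannon_number = 10 ** 120   # Shannon's lower bound on chess game-tree complexity
atoms_estimate = 10 ** 80    # common estimate of atoms in the observable universe

# Python integers are arbitrary-precision, so this is exact integer arithmetic.
ratio = shannon_number // atoms_estimate
print(f"Chess games outnumber atoms by a factor of ~10^{len(str(ratio)) - 1}")
# → Chess games outnumber atoms by a factor of ~10^40
```

Even if every atom in the universe checked one game per second, you’d still be about 10^40 seconds short – which is the point: exhaustive mapping is a non-starter.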

With experience, good chess players (and good strategists) can rapidly, intuitively home in on the moves most likely to work – both now and several moves down the line.

The problem is that the same moves will rarely work twice – at least not against the same opponent. And in a complex, ever-changing system, you’ll rarely have the opportunity to make the same sequence of moves more than once anyway, as the pieces will be constantly changing position on the board. Which will also be constantly changing size and shape.

“But metaphor isn’t method.”

That’s the key line from the linked piece. Business strategy isn’t chess – because you’re not restricted to making just one move at a time, or moving specific pieces in specific ways.

The challenge is to keep as flexible as possible while still moving forwards, which is why this bit of advice – one line of many I like, especially when combined with the recommendation to design in a modular, adaptive way – is one I pushed (sadly unsuccessfully) in a previous role:

“Instead of placing one big bet, leaders need a mix of pilots, partnerships, and minority stakes, ready to scale or abandon as conditions change.”

The problem is that strategy decks – still at the heart of most businesses and almost every marketing agency – are intrinsically linear, despite trying to address nonlinear, complex systems.

This is why most strategies end up not really being strategies, but plans, or lists of tactics.

And that’s why most “strategies” fail.

Don’t focus on the *what* – focus on the *how*. Great advice from my former boss Jane O’Connell, which took me a long time to truly understand. It’s a concept that’s core to this excellent piece – and incredibly hard to explain.

Have a read – and a think.

Why are you writing?

This:

The question of what AI does to publishing has much more to do with why people are reading than how you wrote. Do they care who you are? About your voice or your story? Or are they looking for a database output?
Benedict Evans, on LinkedIn

Context is (usually) more important to the success of content than the content itself. And that context depends on the reader/viewer/listener.

It’s the classic journalistic questioning model, but about the audience, not the story:

  • Who are they?
  • What are they looking for?
  • Why are they looking for it?
  • Where are they looking for it?
  • When do they need it by?
  • How else could they get the same results?
  • Which options will best meet their needs?

Every one of these questions impacts that individual’s perceptions of what type of content will be most valuable to them, and therefore their choice of preferred format / platform for that specific moment in time. Sometimes they’ll want a snappy overview, other times a deep dive, yet other times to hear direct from or talk with an expert.

GenAI enables format flexibility, and chatbot interfaces encourage audience interaction through follow-up Q&As that can help make answers increasingly specific and relevant. This means it will have some pretty wide applications – but it still won’t be appropriate for every context / audience need state.

The real question is which audience needs can publishers – and human content creators – meet better than GenAI?

It’s easy to criticise “AI slop” – but the internet has been awash with utterly bland, characterless human-created slop for years. If GenAI forces those of us in the media to try a bit harder, then it’s all for the good.

The Tragedy of the Commons redux

The Tragedy of the Commons is coming for the internet:

Google’s AI Is Destroying Search, the Internet, and Your Brain

404 Media, 23 July 2025

The GenAI equivalent of Googlebombing (remember that?) was one of my first concerns when pondering the likely impact of GenAI search, way back when ChatGPT 3.5 came out and the prospect started looking real.

This kind of thing is, sadly, inevitable. And while Google’s got very solid experience of getting around attempts to manipulate its algorithms, it doesn’t have a great track record of releasing AI products that can distinguish facts from confabulations (remember both the Bard and the Gemini launches?).

The other inevitability is that this is also going to lead to more scammy marketing techniques. We’re going to be inundated with yet more of those snake oil salespeople popping up to promise brands results in GenAI, just as they used to in the early days of SEO – fuelled by similar tactics of vast networks of websites all interlinking to each other to create the impression of authority.

Only now, rather than using underpaid humans in content farms, they’ll be using GenAI to spit out infinite copy and infinite webpages, poisoning the GenAI well for everyone in pursuit of short-term profits.

Why We Need a More Journalistic Approach to AI

The last couple of years have seen far too many people who should know better simply regurgitate press releases without applying critical thinking – yet it’s the critical thinking that’s the increasingly essential “human in the loop” part of the equation.

And as familiarity breeds contempt, this kind of blunt, sceptical take on AI is likely to be increasingly common in 2025. Anyone – any organisation – wanting to be taken seriously is going to have to confront these kinds of questions honestly and openly.

But at the same time, it’s going to be important not to swing too far the other way – beyond inquisitiveness about the bold claims of the AI providers into outright cynicism.

It’s easy to shoot things down. It’s *extremely* easy to have a knee-jerk dislike of techbro hype trains when you lived through the Dotcom Crash. It’s much harder to dispassionately assess the merits of emerging technologies when they haven’t yet fully emerged.

As ever, a journalistic mindset can help:

  1. Who’s saying this? What are their creds? What’s in it for them? Do they have any financial stake?
  2. What are they actually saying? Is there any substance, or is it filled with jargon and empty phrases? (It’s often surprising how little substance there is out there, given how much is being said…)
  3. When did what they’re claiming first happen? Is this really new, or is it fresh spin on an old claim or capability? If a fresh spin, that’s not necessarily a bad thing – but why now?
  4. Where’s the evidence to support their claims? Can it be independently verified?
  5. How does this claim differ from existing solutions? Is it really an improvement? What’s the cost vs benefit compared to alternatives?

Finally, as ever, try and get your info from more than one source. It’s tempting to only listen to people you agree with, and *very* tempting to dismiss anything coming from sources you dislike. But that leads to an incomplete picture – and a boring, predictable take.

And at a time when GenAI can spit out passable median opinion takes in seconds, what’s the point in reading anything boring and predictable?

The Real Risk of GenAI Search Isn’t Lost Traffic – It’s Misattribution

Fascinating, if predictable, findings on ChatGPT source attribution, via TechCrunch – with significant implications for the emerging “Generative Engine Optimisation” successor to SEO that should concern anyone publishing online.

Short version – ChatGPT’s ability to provide accurate citations for the sources of its information remains extremely hit and miss, despite the rise of GenAI search:

“the fundamental issue is OpenAI’s technology is treating journalism ‘as decontextualized content’, with apparently little regard for the circumstances of its original production”

In other words, GenAI focuses on the substance, not the source. It doesn’t matter where a story / insight actually originated – only where the GenAI tool considers it most plausible to have originated.

This isn’t just a question of lost traffic due to the lack of a link – there are far more serious implications here.

For example, if you’re a corporate brand producing a big chunky piece of thought leadership based on months of research, this means you could find your work misattributed to a direct competitor if the GenAI algorithms decide a competitor is more likely to have produced something like this. Equally, someone else’s work – or opinion – may be attributed to you.

This is, of course, a potentially huge liability for any brand – especially as hostile actors could use this flaw in the way these tools work to game the system, similar to the old days of Googlebombing, and make it look like your brand has said something it hasn’t.

But it gets worse – there’s nothing* you can do about it:

“Nor does completely blocking crawlers mean publishers can save themselves from reputational damage risks by avoiding any mention of their stories in ChatGPT. The study found the bot still incorrectly attributed articles to the New York Times despite the ongoing lawsuit, for example.”

Welcome to the age of GenAI…

(* well, nothing guaranteed to work all the time, at least…)

The GenAI default style

[Image: A GenAI pixelated image of two robots talking while other robots look on]

The default writing style of GenAI is becoming ever more prevalent on LinkedIn, both in posts and comments.

This standard GenAI copy has a rhythm that, as it becomes ever more common, is increasingly easy to spot.

Sometimes it’s really very obvious we’ve got bots talking to bots – especially on those AI-generated posts where LinkedIn tries to algorithmically flatter us by pretending we’re one of a select few experts invited to respond to a question.

Top tip: If you’re using LinkedIn to build a personal / professional brand, you really need a personality – a style or tone (and preferably ideas) of your own. If you sound the same as everyone else, you fade into the background noise.

So while it may be tempting to hit the “Rewrite with AI” button, or just paste a question into your Chatbot of choice, my advice: Don’t.

Or, at least, don’t without giving it some thought.

There are lots of good reasons to use AI to help with your writing – it’s an annoyingly good editor when used carefully, and can be a superb help for people working in their second language, or with neurodiverse needs. It can be helpful to spot ways to tighten arguments, and in suggesting additional points. But like any tool, it needs a bit of practice and skill to use well.

But given that this platform is about showing off professional skills, don’t use the default – that’s like turning up to a client presentation with an unformatted PowerPoint.

Put a bit of effort in, and maybe you’ll get read and responded to by people, not just bots. And isn’t that the point of *social* media?

On the value of awards

[Image: A stock photo of a Cannes Lion award]

This from John Hegarty resonated. Unpopular opinion, but awards – especially in B2B marketing – are the ad industry equivalent of social media vanity metrics. They may get you marginally more reach (usually long after the campaign’s over), but rarely with your real target audiences.

What’s worse, the positive signals award wins send out can create feedback loops of groupthink about tactics that can actively harm your ability to deliver.

I know it’s tough to demonstrate marketing effectiveness, but award wins rarely prove much beyond that marketing people like something. So unless you’re selling to marketers, they don’t really have much value.

This means awards make perfect sense for agencies (and individuals) to enter – but for their clients? The point of marketing is to improve brand perception and make sales with your buyers, not to get a round of applause from other marketers.

Which is why, often, I find the less glamorous side of marketing is where the real business impact can be found.

How to get the most out of SEO – what we know, and what we don’t

There’s some fascinating stuff in this SEO long read, based on impressive research and analysis. Just bear in mind that, as leaked Google documents put it, “If you think you understand how [search algorithms] work, trust us: you don’t. We’re not sure that we do either.”

[Image: A diagram of SEO impact factors created by Mario Fischer]

To save you time, the main lesson is that “achieving a high ranking isn’t solely about having a great document or implementing the right SEO measures with high-quality content”. Search results shift in near realtime based on thousands of utterly opaque, interconnected assessments of obscure demand and user intent signals, so there’s only so much website managers can do.

For me, this all confirms a few core content principles:

  • Context is king, not content. You can have an amazing page full of astounding insight, but if it doesn’t clearly meet the needs of the user at that moment in time, it will go unviewed.
  • Page structure is at least as important as substance – if (human and bot) audiences can’t quickly tell that your page is interesting and relevant, they’ll bounce.
  • But don’t worry – the key to success is rarely going to be a single webpage. More important is the authority of the domain and brand.
  • This means the impact of content is at least as much about cumulative brand building as it is immediate engagement. Think of the long tail, not just the short spike – and focus your content strategy on building this long-term growth over the short-term quick hit.
  • Given so much about how this works is unknown, and so many factors are outside your control, it’s best not to over-think it. Follow all the advice SEO experts offer, and you’ll end up with something so over-engineered it’ll lose its coherence and flow. This will increase bounce rates.

So how to succeed?

Go back to basics: Focus on ensuring your content fulfils a clear audience need (ideally currently unmet by other sources), using language audiences are looking for, presented in ways audiences are likely to engage with, and with clear links to and from other relevant content to help both humans and bots understand its relevance within the broader context.

In other words, SEO may be complex when you dig into the details – but it’s really just a combination of common sense, long-term authority building, and a good bit of luck.

It’s still worth reading the whole thing, though.

The GenAI copyright wars are hotting up

[Image: A GenAI-created visual metaphor for creativity versus technology, with lawyers arguing their case]

Given the music industry’s track record of building successful cases for unauthorised sampling and even inadvertent plagiarism (aka cryptomnesia, as with the George Harrison ‘My Sweet Lord’ lawsuit back in the 70s), this will be the one to watch.

The music industry’s absolutist approach to copyright is a dangerous path to follow, however. How can you legally define the difference between “taking inspiration from” and “imitating”? What’s the difference between a GenAI tool creating music in the style of an artist, and an artist operating within a genre tradition?

*Everything* is a mashup or a reference, to a greater or lesser extent – that’s how culture works. We’re all standing on the shoulders of giants – as well as myriad lesser influences, most of which are subconscious. Hell, the saying “there’s nothing new under the sun” comes from the Book of Ecclesiastes, written well over 2,000 years ago.

Put legal restrictions on the right of anyone – human or bot – to build or riff on what’s come before, and culture risks hitting a dead end.

So while I have sympathy with artists’ concerns, the claim that GenAI could “sabotage creativity” is a nonsense in the same way claims that the printing press or photocopier could sabotage creativity are. Creativity is about the combination of ideas and influences and continual experimentation to find out what works – GenAI can help us all do this faster than ever. If anything, this should help increase creativity.

What *does* sabotage creativity is short-termist, protectionist restrictions on who’s allowed to do what – exactly like the ones these lawsuits are trying to impose.

On declining trust in AI and the hype cycle

[Image: Classic poster image for The Terminator]

“When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions.”

An interesting finding, this – especially as it transcends product and service categories – though perhaps to be expected at this stage of the GenAI hype cycle.

This kind of scepticism isn’t easy to overcome – with new technologies, acceptance and mass adoption are often a matter of time – but as the authors of the study point out, the key issue to address is the lack of trust in AI as a technology.

Some of this lack of trust is due to lack of familiarity – natural language GenAI seems intuitive, but actually takes a lot of practice to get decent results.

Some will be due to the opposite – follow the likes of Gary Marcus, and it’s hard not to get sceptical about the sustainability, benefits, and reliability of the current approach to GenAI.

The danger, though, is that this scepticism may be spreading to AI as a whole. The prominence of GenAI in the current AI discourse is leading to different types of artificial intelligence becoming conflated in the popular imagination – even though, just a few years ago, the form of machine learning we now call GenAI wouldn’t even have been classified as artificial intelligence.

Tech terms can rapidly become toxic – think “web3”, “NFT”, and “metaverse”. Could GenAI be starting to experience a similar branding problem? And could this damage perception of other kinds of AI in the process?