On systems thinking and why strategies fail

[Image: an AI-generated image of a school of fish being attacked by a shark – an attempt at a visual metaphor]

I’ve seen this piece shared a lot, and I like it. I’ve long been a fan of Systems Thinking (check my bio, it’s at the heart of my approach to everything).

But I’ve always seen Systems Thinking as more of a mental model – a reminder to look beyond the immediately obvious causes and effects that could impact a strategy – rather than an injunction to try to literally map out the interactions between all the different components.

As this piece notes, if you try to map out every interaction in a complex, shifting, uncertain system, you’ll never succeed. There are too many variables, all changing. Complexity Theory – even Chaos Theory and the Heisenberg Uncertainty Principle – rapidly becomes more helpful. Only these usually aren’t of much *practical* help at all.

It’s like playing chess – you don’t bother mapping out ALL the possible moves, as that would take forever (look up the Shannon number to get a sense of how many there could be – it’s more than the number of atoms in the observable universe…), and is therefore useless.
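To give a sense of scale, here’s a quick back-of-envelope sketch of the comparison. The figures are Shannon’s own rough 1950 approximations (about 10³ possible move pairs per turn, over a typical game of roughly 40 move pairs), and the atom count is a common order-of-magnitude estimate:

```python
# Shannon's rough estimate: ~10^3 possible move pairs per turn,
# over a typical game of ~40 move pairs.
shannon_number = (10 ** 3) ** 40  # ~10^120 possible games

# Common order-of-magnitude estimate for atoms in the observable universe.
atoms_in_observable_universe = 10 ** 80

# The game tree exceeds the atom count by ~40 orders of magnitude.
print(len(str(shannon_number)) - 1)                      # 120 (digits of magnitude)
print(shannon_number > atoms_in_observable_universe)     # True
```

Even exploring one position per atom in the universe would leave the tree essentially untouched – hence the intuition-over-enumeration point below.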

With experience, good chess players (and good strategists) can rapidly, intuitively home in on the moves most likely to work – both now and several moves down the line.

The problem is that the same moves will rarely work twice – at least not against the same opponent. And in a complex, ever-changing system, you’ll rarely have the opportunity to make the same sequence of moves more than once anyway, as the pieces will be constantly changing position on the board. Which will also be constantly changing size and shape.

“But metaphor isn’t method.”

That’s the key line from the linked piece. Business strategy isn’t chess – because you’re not restricted to making just one move at a time, or moving specific pieces in specific ways.

The challenge is to stay as flexible as possible while still moving forwards. That’s why this bit of advice – one of many lines I like, especially when combined with the recommendation to design in a modular, adaptive way – is one I pushed (sadly unsuccessfully) in a previous role:

“Instead of placing one big bet, leaders need a mix of pilots, partnerships, and minority stakes, ready to scale or abandon as conditions change.”

The problem is that strategy decks – still at the heart of most businesses and almost every marketing agency – are intrinsically linear, despite trying to address nonlinear, complex systems.

This is why most strategies end up not really being strategies, but plans, or lists of tactics.

And that’s why most “strategies” fail.

Don’t focus on the *what* – focus on the *how*. Great advice from my former boss Jane O’Connell, which took me a long time to truly understand. It’s a concept that’s core to this excellent piece – and incredibly hard to explain.

Have a read – and a think.

Why are you writing?

This:

“The question of what AI does to publishing has much more to do with why people are reading than how you wrote. Do they care who you are? About your voice or your story? Or are they looking for a database output?”

– Benedict Evans, on LinkedIn

Context is (usually) more important to the success of content than the content itself. And that context depends on the reader/viewer/listener.

It’s the classic journalistic questioning model, but about the audience, not the story:

  • Who are they?
  • What are they looking for?
  • Why are they looking for it?
  • Where are they looking for it?
  • When do they need it by?
  • How else could they get the same results?
  • Which options will best meet their needs?

Every one of these questions impacts that individual’s perceptions of what type of content will be most valuable to them, and therefore their choice of preferred format / platform for that specific moment in time. Sometimes they’ll want a snappy overview, other times a deep dive, yet other times to hear direct from or talk with an expert.

GenAI enables format flexibility, and chatbot interfaces encourage audience interaction through follow-up Q&As that can make answers increasingly specific and relevant. This means it will have some pretty wide applications – but it still won’t be appropriate for every context / audience need state.

The real question is which audience needs can publishers – and human content creators – meet better than GenAI?

It’s easy to criticise “AI slop” – but the internet has been awash with utterly bland, characterless human-created slop for years. If GenAI forces those of us in the media to try a bit harder, then it’s all to the good.