This, on the resurgence of the *Rise of the Robots* fears about the threat of widespread AI job losses, gets some of the way to articulating the niggling issues I have with this apocalyptic narrative:
Even if you do believe the technology has got or can get good enough to replace workers at scale, the economics simply don’t make sense.
Of course, we’ve spent the last two decades witnessing many, many things that made no economic sense yet that happened anyway thanks to a combination of complacency, willful ignorance, ideology, bloody-mindedness, and spite. Just because something makes no economic sense doesn’t mean it won’t happen.
But despite non-AI industry stocks having been hammered over the last couple of weeks, think what needs to happen to enable this AI revolution. Most developed nations had energy and clean water supply challenges even before factoring in a data centre building boom. We still have a deep reliance on rare earth metals for the hardware that the AI needs to function (the clue’s in the name).
What happens to prices when demand surges to unprecedented levels and supply struggles to keep up? And how does that change the balance sheet projections when deciding whether to replace human workers with a grandiose form of a new SaaS subscription, whose monthly costs and reliability could shift at any moment?
Remember the $7 *trillion* Sam Altman was asking for to invest in infrastructure? That’s likely to be a substantial underestimate of the amounts needed, given how much every industry upstream of the AI companies is already struggling to meet their projected needs.
“What looks like higher productivity in the short run can mask silent workload creep and growing cognitive strain as employees juggle multiple AI-enabled workflows…
“Over time, overwork can impair judgment, increase the likelihood of errors, and make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity.”
As so often, it’s too early to say what the true impact of GenAI will be on the workforce – see other recent studies suggesting that productivity gains may (so far!) be overstated or marginal – but if it leads to doing more work at unsustainable rates, it would be a strange irony if the fears about job losses ultimately prove unfounded. Could GenAI end up pushing organisations to need more people, not fewer?
“You don’t know if you’re gonna get what you want on the first take or the 12th take or the 40th take”
This is GenAI’s current biggest challenge: It’s still being sold as primarily an efficiency tool – do more, faster!
In practice, as most who’ve played with it have found, it’s only faster if good enough is good enough. If you’re seeking excellence, it can help you to improve and refine what you’re doing – but not at speed.
The time / cost / quality pyramid persists, despite what we were all hoping.
What GenAI *is* allowing is for more people to try things that previously they’d never have been able to do – like code, write better, or create video or imagery.
But what this fascinating piece shows is that even genuine experts with a desire to experiment and push the boundaries can struggle to get genuinely excellent results – and that human + machine + time + iteration + patience remains (for now) the only way to get beyond good enough.
“These are nondeterministic, unpredictable systems that are now receiving inputs and context from other such systems… From a security perspective, it’s an absolute nightmare.”
The whole exercise initially struck me as a fun enough probabilistic parlour trick – similar to the entertaining “Infinite conversation” site with bots based on Werner Herzog and Slavoj Žižek from a couple of years back. There’s no true *intelligence* here, just chatbots slotting into established tropes for online forums, including creating their own memes and complaining about privacy and the mods (here, “the humans”).
So far so unsurprising – just as it’s unsurprising that some people who should know better have decided to read meaning and understanding into these interactions. (Hell, some of the stuff robot Werner Herzog came up with could also sound profound – it’s all in the voice…)
But what *is* new is the naiveté of some early adopters who’ve entrusted incredibly sensitive personal information and provided ridiculous amounts of access to AI agents whose programming is not deterministic and which are now able to interact with other agents.
The tech may be impressive – these agents are able to *do* more than I was expecting by this stage – but the potential for compound risk is insane. No sensible organisation would let a system like this anywhere near its operations until it’s possible to put far more robust constraints in place.
And so, just as with gambling, the question with GenAI systems seems increasingly to be all about personal and organisational risk tolerance.
My risk tolerance for this kind of thing is low, because the potential payoff – a bit of enhanced productivity? – is similarly low. If you’re really so time-poor that you’re willing to take this gamble, then you need to rethink your priorities.
This is pretty much what I’ve been talking about for the last few years, via Joe Burns.
The problem isn’t just that the old model doesn’t work in a more complex environment – it’s that the very terminology precludes understanding and alignment, as everyone has a different idea of what the labels mean.
The key to success has always been systems thinking – but many agencies (and even more so in-house marketing teams) continue to work in silos, with nowhere near as much discussion and collaboration as is needed to come up with truly effective approaches.
As Joe Burns put it in his post on this:
“Coherence has to come from the system, not just one execution. The idea of a ‘Campaign’ only works if you can muster a critical mass of attention to carry people through it.”
Maybe it’s my “content” background speaking – because really strong content strategies need to work at multiple levels, across multiple channels and formats, and for multiple audiences with multiple needs. Without understanding the big picture *and* the details, it’s impossible to deliver content effectively across a campaign – individual assets may be solid, but the whole ends up less than the sum of its parts.
This is why I’ll continue trying to play in those overlap areas – not only do I find the diversity and clash of approaches and ideas stimulating, but I see it as the only way to work out the best way to succeed. You have to try to see the big picture to work out the best individual brush strokes.
I usually hate tips for writers – writing, to me, should be a natural thing. But having seen a lot of very bad writing, more concerned with showing off the writer’s linguistic skill or subject-matter expertise than enlightening the reader, this approach strikes me as vital to keep in mind at all times:
Writing is a modern twist on an ancient, species-wide behaviour: drawing someone else’s attention to something visible. Imagine stopping during a hike to point out a distant church to your hiking companion: look, over there, in the gap between those trees – that patch of yellow stone? Now can you see the spire? “When you write,” Pinker says, “you should pretend that you, the writer, see something in the world that’s interesting, and that you’re directing the attention of your reader to that thing.”
Perhaps this seems stupidly obvious. How else could anyone write? Yet much bad writing happens when people abandon this approach. Academics can be more concerned with showcasing their knowledge; bureaucrats can be more concerned with covering their backsides; journalists can be more concerned with breaking the news first, or making their readers angry. All interfere with “joint attention”, making writing less transparent.
This isn’t a “rule for writers”; it’s a perspective shift. It’s also an answer to an old question: should you write for yourself or for an audience? The answer is “for an audience”. But not to impress them. The idea is to help them discern something you know they’d be able to see, if only they were looking in the right place.
Notes and Essays
To help shape my thinking, I write essays and shorter notes examining the ideas and narratives that shape media, marketing, technology and culture.
A core focus: The way context and assumptions can radically change how ideas are interpreted. Much of modern business, marketing, and media thinking is built on other people’s frameworks, models, theories, and received wisdom. This can help clarify complex problems – but as ideas travel between disciplines and organisations they are often simplified, misapplied or treated as universal truths. I’m digging into these, across the following categories – the first being a catch-all for shorter thoughts: